G's Blog

Just a place to post random things. Enjoy your stay.

Adventures with BSD Episode 2 (AKA Yub(sd)ico)

So this is going to be a relatively quick post. I got my yubikey working on GhostBSD.

This was something really simple and stupid in the end (as I suspected). In the process of moving away from systemd on Linux I had to re-enable all my boot-time services. One of these was pcscd, a service for interacting with smart cards, which the yubikey is (or at least that's how it's interfaced with).

So all that was needed was to install pcsc-lite from the software station, then run:

sudo service pcscd start

And the Yubico Authenticator desktop app now finds my yubikey and is able to generate OTP codes!! YAY!

Then, to ensure the service is started at boot:

sudo rc-update add pcscd default

and voila, a working yubikey on BSD.

That's all for now

G

#bsd #tech

So long Systemd!

So I, like probably a fair chunk of you, have always felt like systemd was forced onto me. I did not ask for a new init system. Systemd in many ways is doing more than what an init system should do. In some ways that's great, but an init system should just init. I was finding myself getting used to it over the last few years (I had avoided it till then), probably partly due to the fact that it's EVERYWHERE. I was actually starting to like it even. I finally had enough and woke up.

Yesterday I read an article about the systemd devs trying to force a change on the Linux kernel because they did not want to change how systemd worked. Now it turns out that this article was about 6 years old, but it still highlighted the fact that systemd is trying to be more than what it is. The kernel is king! Everything else comes after.

So this morning I migrated from Arch Linux to Artix Linux. It was fairly smooth other than a few issues related to having root on an encrypted partition, but those were mostly my own doing in trying to go too quickly. No format/reinstall and no more systemd!

My views may be ill explained but that's it for this post.

Have a great day

G

#linux #nosystemd #tech

Music Discovery

Let me start this off by stating that all my (teenage/adult) life I've been a Rock/Punk/Metal guy (with the occasional Glen Campbell). I've been satisfied with that. Recently I discovered I may have been missing out.

This journey all started New Year's Eve, when someone I follow on Mastodon boosted a post from a musician about their recent (unreleased at the time) album. I'm not sure what it was about it. The cover? The album name? I'll never know for sure, but I told myself I would have a listen when I had time. Boy was I in for a surprise.

The album/artist that started this out is Ride Eternal by Eyeshadow FM 2600. If you follow me on Mastodon you will have seen me post about it before. From the first listen it had me hooked. It's a journey from start to finish. Do yourself a favor and check it out. I particularly like What Doesn't Kill You. I also quite enjoy Vice City Dead and Shallow Grave. Those 2 songs merge together so well they feel like one.

So with that I had discovered SynthWave/DarkSynth/RetroSynth, a genre of music I had never delved into or even knew existed. I spent the next few days sampling more of Eyeshadow FM 2600's offerings and was not disappointed.

I am continuing on this journey. Many artists to discover. So let me highlight a couple more.

The way I made these discoveries was by browsing the synthwave tag on Bandcamp and looking for interesting album cover art/names. The first one to catch my eye was The Order of Chaos by DEADLIFE. Again, based on the cover it seemed interesting, and it did not disappoint. It's dark and groovy and best enjoyed from start to finish. I'm still discovering this artist and I'll be obtaining more of his stuff in the coming days. He's also working on a new album and recently dropped a single. Sounds very promising.

The next cover that caught me was that of Liminality by DreamReaper. It's another great album filled with epic beats sure to keep you grooving no matter what you are doing. Best enjoyed in its entirety. There's still lots more by him to check out as well, which I will be doing.

Well, that's all for now. Keep in mind I'm no professional music reviewer. However, do yourself a favor and check out these artists; they're well worth the time. Also make sure you show them your support so they keep putting out great music. The world needs more of these independent artists, ones not directed by greedy music labels/industry.

Take care.

G

#music #synthwave

Quick FOSDEM update

Most of you reading this probably know of FOSDEM. For anyone that does not, it's a yearly conference focusing on open source.

I've heard of it and watched some talks from it over the years. This year so far I've only watched a couple; I plan on watching more once the archive goes up. I'll update this post with links to the replays for the ones I'm going to mention here.

Today I happened to catch FreeBSD Around the World! (recording is up), which was a very informative talk about the history of FreeBSD. It was very interesting to see where FreeBSD originated. I had an idea that it was closely related to the original Unix but did not know just how close. It was also interesting to find out that Netflix uses FreeBSD as the base for the operating system they run on all their nodes/servers. They gave a talk about it last year at FOSDEM; worth a watch as well.

Next came the talk I was most looking forward to: Regaining control of your smartphone with postmarketOS and Maemo Leste (Edit Feb 4th: recording is up). Here one of the developers of postmarketOS and one from Maemo Leste talked about the status and future of Linux on phones and the drive to get devices running mainline Linux. They talked about the PinePhone and the Librem 5 and how these 2 devices are helping to really kick-start that effort. It's still all in very early stages, but most things are working/coming along at a great pace.

I'll update this post with links to the videos once they are up. I will also probably follow up with a second post on FOSDEM once I get a chance to watch more of the presentations.

That's all for now.

G

#fosdem

Adventures with BSD Episode 1 (AKA: Hello from BSD)

So back in November I won a little HP laptop at my work Christmas party. At first I figured I would just distro-hop on it for fun. Then I decided that since I've always wanted to try out BSD, I would do so on it.

So the first order of business was to pick a distribution of BSD to try. I settled on GhostBSD as a first go, for no real reason other than it's a Canadian distribution.

Booting into the live environment and performing the install went smoothly. I went with all the defaults to have a higher chance of success.

On first boot things started looking bleak. The touchpad was not working; at that point I was not sure if the whole system had frozen or if it was just the touchpad. I stole the wireless mouse from my desktop and to my delight the cursor started moving. YAY!

Got logged in and started looking around. Really, if you did not see the system boot, or if you don't run uname -a from a terminal, you would have almost no clue it's not Linux.

So I launched a terminal and did just that:

marcg@marc /u/h/marcg> uname -a
FreeBSD marc.ghostbsd-pc.home 12.1-STABLE FreeBSD 12.1-STABLE GENERIC  amd64

I was also quite pleased to see fish as the default shell; it's what I use on Linux and I love it (maybe a post for another day).

So next I ran dmesg just to see how the output differs from Linux and I was greeted with this:

dmesg_screenshot_here

So even though the system seemed to be running just fine, I did not want that error constantly spamming the system logs.

A quick Google search turned up that the issue was because the eMMC in the laptop does not support the TRIM command, and offered a solution: add the following to /etc/sysctl.conf:

vfs.zfs.trim.enabled=0

So I did that and rebooted, but after the reboot the error still repeated. During boot I noticed the system complain about something, so I hit Ctrl+F1 to see what was up and caught something about how the above directive should be in /boot/loader.conf. I moved it to that file and rebooted again. Either things differ between FreeBSD and GhostBSD or the info on placement under FreeBSD was outdated. Either way, no more error! Yay!
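For anyone wanting to double-check after a reboot that the change actually took, the tunable can be queried directly. This is the sysctl name used above on FreeBSD 12.x; treat the exact OID as an assumption on other versions:

```shell
# Query the ZFS TRIM tunable; a value of 0 means TRIM is disabled.
sysctl vfs.zfs.trim.enabled
```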

I will keep using it for a while. Things to fix/For future posts:

  1. Get WiFi working. Not much of a laptop if I have to be plugged in. Hopefully this is doable.

  2. Fix the touchpad. Otherwise I'll have to get a new mouse, since going back and forth is annoying.

  3. Get sleep working properly. It goes to sleep fine (like when I close the laptop lid) but it does not wake up; the screen stays black.

  4. Get the yubikey working. I tried using it, and even though the software is available, something must be missing kernel side or something, as it is never detected by any of the yubi apps. Not as big a deal since I can use my phone.

That's all for now.

G

#bsd #tech

PinePhone (ARM) Build Environment Setup

So as most of you already know, I have ordered a PinePhone. I want to be able to contribute and test as much as I can, so I wanted to be able to build packages for it. I figured building directly on the device would be painfully slow, so I wanted to set something up on my desktop to do so.

I bounced around a few ideas: cross compile, a chroot to cross compile in, emulating ARM with Qemu... In the end I decided to give the Qemu option a go first, as it seemed like the easiest to set up/maintain. I wasn't completely wrong, but it was also a little more complicated than I had assumed at first.

My first idea was just to run one of the PinePhone images using qemu. Turns out that can't really be done, as qemu can't fully emulate the PinePhone. So my next attempt was to run ArchARM using qemu. This is what I will detail here.

So first step is to download the latest generic ARM package found here.

I have set up a folder to host all the files related to ArchARM. In it we will want to create an image to hold the ArchARM file system. We do this like so:

qemu-img create -f qcow2 -o preallocation=full ArchARM.img 64G

This will create a 64GB image and preallocate the space, which will improve performance.

Next we will create a filesystem in this image and mount it so we can copy the base ArchARM system onto it. To do this we need to install libguestfs. It is in the AUR and, I think, soon to be in the community repo. With that installed we can:

Create the filesystem inside the image:

virt-format --filesystem=ext4 -a ArchARM.img

Make the folder to mount it on:

sudo mkdir /mnt/virtfs

Mount the image:

sudo guestmount -m /dev/sda1 -a ArchARM.img /mnt/virtfs/

The -m option specifies the partition inside the image to mount; this is not the sda1 on your actual system.
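If you're not sure which partition name to pass to -m, libguestfs also ships a virt-filesystems tool that lists what's inside an image without mounting it (shown here against the image name used above):

```shell
# Inspect the disk image and list its filesystems/partitions (read-only).
virt-filesystems -a ArchARM.img --all --long
```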

Now we can extract the ArchARM archive to the image. This should be done as root (not using sudo):

bsdtar -xpf ArchLinuxARM-aarch64-latest.tar.gz -C /mnt/virtfs

Now we need the kernel and initrd from the image so we can boot it with qemu:

cp /mnt/virtfs/boot/Image.gz .

cp /mnt/virtfs/boot/initramfs-linux.img .

This will need to be done anytime the kernel is updated in the Virtual Machine.
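Since this copy has to be redone after every kernel update, it may be worth wrapping the mount/copy/unmount steps in a small helper. A sketch, assuming the image layout and libguestfs tools used above (the function name is mine):

```shell
# Re-copy the kernel and initrd out of the image after a kernel update
# inside the VM. Run from the folder holding the image, after shutdown.
refresh_boot_files() {
  img=${1:-ArchARM.img}
  mnt=${2:-/mnt/virtfs}
  guestmount -m /dev/sda1 -a "$img" "$mnt" || return 1
  cp "$mnt/boot/Image.gz" . || return 1
  cp "$mnt/boot/initramfs-linux.img" . || return 1
  guestunmount "$mnt"
}
```

Then a plain refresh_boot_files after shutting the VM down replaces the manual steps.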

Now we can unmount the image, and we should be able to boot our ARM virtual machine.

After some trial and error the proper command to do this is:

qemu-system-aarch64 -machine virt -cpu cortex-a53 -nographic -m 2048 -smp cores=4 -kernel /media/Storage/ArchARM/Image.gz -initrd /media/Storage/ArchARM/initramfs-linux.img -append 'root=/dev/vda1 rw quiet' -drive if=none,file=/media/Storage/ArchARM/ArchARM.img,format=qcow2,id=hd -device virtio-blk-pci,drive=hd -netdev user,id=mynet -device virtio-net-pci,netdev=mynet

This will give the VM 2GB of RAM and 4 processor cores. Adjust if your system can't provide that comfortably.

If all goes well you'll get into ArchARM

Starting version 243.162-2-arch
/dev/vda1: clean, 34582/4194304 files, 635873/16777184 blocks

Arch Linux 5.4.1-1-ARCH (ttyAMA0)

alarm login: alarm
Password: 
[alarm@alarm ~]$ uname -a
Linux alarm 5.4.1-1-ARCH #1 SMP Sat Nov 30 18:54:05 UTC 2019 aarch64 GNU/Linux
[alarm@alarm ~]$

YAY!! We now have a fully working ARM system.

The only other thing I'm doing is creating a function in fish (my shell; Bash users could create an alias) so that I can start up this VM by just typing strarm.
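For Bash users, the equivalent of my fish function would look something like this. The paths mirror the qemu command above and are specific to my machine, so treat them as placeholders:

```shell
# Bash function to launch the ArchARM VM with one short command.
# Paths are from my setup; adjust to wherever your image lives.
strarm() {
  qemu-system-aarch64 -machine virt -cpu cortex-a53 -nographic \
    -m 2048 -smp cores=4 \
    -kernel /media/Storage/ArchARM/Image.gz \
    -initrd /media/Storage/ArchARM/initramfs-linux.img \
    -append 'root=/dev/vda1 rw quiet' \
    -drive if=none,file=/media/Storage/ArchARM/ArchARM.img,format=qcow2,id=hd \
    -device virtio-blk-pci,drive=hd \
    -netdev user,id=mynet -device virtio-net-pci,netdev=mynet
}
```

Drop it in ~/.bashrc and typing strarm boots the VM.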

Hope this was helpful to some. Let me know if you have any questions or feedback.

Have a great day

G

Hello Everyone

I have decided to migrate my blog from Github Pages to self-hosting it using WriteFreely. If all goes well I'll be able to boost this on Mastodon.

I will still post sporadically when I have time and something worth sharing.

I will also be migrating my old posts at some point. Probably over the weekend.

That's all for now.

Have a great day.

G

NextCloud Migration (Again)

So this post will detail the steps I took to migrate my Nextcloud instance from my old VPS (named Zeus) to my new, bigger VPS (named Hera). When I got Zeus I was not really planning on moving my Nextcloud instance to it, but once I decided to keep Zeus I did just that.

After moving Nextcloud I also decided to host a Peertube instance. I quickly realized that with both of those on it I would outgrow the storage on Zeus, so I got the next level of VPS offered by the hosting company I went with.

And so now I have to migrate again. So let's get on with it.

I will not cover the initial setup, since I assume that if you need a migration guide you already have Nextcloud set up. Arch Linux has a good guide if you need one. Also note that I run Arch Linux, so some of these steps/paths will be specific to Arch, but they should give you an understanding of what needs to happen.

On to the migration. As I said, the first step was to perform the initial setup and installation of dependencies on Hera. Once that was done the real fun begins.

First we must place the Nextcloud instance on Zeus in maintenance mode so that no new files/changes are made while we copy things over to Hera. This is done simply by issuing this command from the root of the Nextcloud install:

sudo -u http ./occ maintenance:mode --on

Note that if, on your distro, the root of the Nextcloud install is not owned by the http user, you will need to adjust the user in the command above.

After this command you should wait around 10 minutes to ensure all clients have received the maintenance notification and have stopped syncing. In my case that's just my couple of PCs and my phone, but it's still best to allow this time. Then we stop our http server.
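What "stop our http server" looks like depends on your webserver and init system; on my Arch box it was a one-liner. nginx here is an assumption for illustration, so substitute httpd/apache or whatever you run:

```shell
# Stop the webserver so nothing touches Nextcloud while we copy files over.
sudo systemctl stop nginx
```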

The next step is to copy the nextcloud application folder from Zeus to Hera. This can be done using rsync initiated from Zeus:

rsync -Aavx /usr/share/webapps/nextcloud/ root@hera.gcfam.net:/usr/share/webapps/nextcloud

This ensures that all apps installed on your instance are also copied over.

Once this is done we copy the nextcloud config. Again using rsync and initiated from Zeus:

rsync -Aavx /etc/webapps/nextcloud/config/ root@hera.gcfam.net:/etc/webapps/nextcloud/config

Using the same command pattern we should also copy the webserver config, SSL certificate and webserver logs related to Nextcloud.

Next we create a dump of the Nextcloud database on Zeus. In my case I'm using MySQL, so the command is:

mysqldump --single-transaction -h localhost -u root -p nextcloud > nextcloud-sqlbkp_`date +"%Y%m%d"`.bak

This will create nextcloud-sqlbkp_20191111.bak (named for the current date). Then we can use rsync to copy this dump to Hera:

rsync -Aavx nextcloud-sqlbkp_20191111.bak root@hera.gcfam.net:/root

Now we restore the dump on Hera:

mysql -h localhost -u root -p nextcloud < nextcloud-sqlbkp_20191111.bak

Finally we copy the data folder from Zeus to Hera, once again with rsync. It's important that timestamps are preserved here (rsync's -t option, which -a already implies); otherwise all files would need to be re-downloaded by clients.

rsync -Aavxt /srv/CloudData/ root@hera.gcfam.net:/srv/CloudData

Obviously depending on the amount of data stored on your instance and the connection speed between the two servers this could take a while.
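The three rsync copies above can be collected into one helper so they're easy to re-run if a transfer is interrupted. A sketch using the hostname and paths from this post, which you should treat as placeholders for your own layout:

```shell
# Run all the rsync copies from Zeus to Hera in one go.
# Hostname and paths are from my setup; adjust them for yours.
sync_to_hera() {
  dest="root@hera.gcfam.net"
  rsync -Aavx /usr/share/webapps/nextcloud/ "$dest:/usr/share/webapps/nextcloud" &&
  rsync -Aavx /etc/webapps/nextcloud/config/ "$dest:/etc/webapps/nextcloud/config" &&
  rsync -Aavxt /srv/CloudData/ "$dest:/srv/CloudData"
}
```

Re-running it is safe since rsync only transfers what changed.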

Once this is completed we should be able to start the webserver on Hera.

With that started we should be able to access Nextcloud using the IP address of Hera. We do this before changing the DNS entry in case something does not work. If we get the maintenance mode warning then we should be golden.

We can now disable maintenance mode on Hera:

sudo -u http ./occ maintenance:mode --off

Now we can confirm that we are able to log in and that everything works. After that we can update the DNS entry, and our migration has been a success!!

Have a great day.

G

This post is simply to highlight how far technology has come, specifically when it comes to portable storage. Here is the first portable USB drive I ever owned, from the mid 90's.

first_drive

Notice that it holds a staggering 64MB. Back in the day I had a hard time filling that; now it can barely hold anything other than some documents.

When I found out about this new portable SSD I just had to get it...

Box_Closed

Box_Opened

Simply beautiful packaging that really makes this feel like the premium product it is. It comes with everything you need.

Full_Content

Now here is the kicker: this sleek device holds 512GB!! That's right folks, half a TB in your pocket. I can hardly believe it, and I might not if I did not have it.

For comparison here it is next to my 64MB drive

Drive_Compare

I mean, that just blows me away. Since it's an SSD it gives blazing transfer speeds over the latest USB 3.1. Even on USB 3.0 it still goes much faster than any other portable USB drive I've had.

That's all for today.

Have a great day.

G

Migrate all the things!!

As I stated in my previous post, my primary hard drive is failing on me. To be proactive I am replacing it before it fully dies.

So this will be a somewhat technical post where I will outline the steps I took to migrate to my new HDD. Some of these steps will be specific to my setup, but most could be applied to other situations. Hope you find some of this useful, or at least informational.

First, some details on the setup. I use BTRFS as the filesystem on all my drives. BTRFS provides subvolume functionality; subvolumes are similar to partitions in that they can be mounted/accessed independently from the main/root of the filesystem. My old drive was set up with the following subvolumes:

@Home - This contains my Home partition
@Storage - This contains my primary storage (holds Downloads, Wallpapers and other general data)
@HomestBU - This was used to make backups (more will be explained on how to handle backups with BTRFS subvolumes/snapshots)

This same layout will be replicated on the new drive later.

To avoid any data loss I had moved my @Home subvolume to my raid5 array (called Storage2), which typically holds my media files, and my @Storage subvolume to my external storage device (called Storage3), which normally holds my backups. Now the task is to move those subvolumes to the new hard drive. So let's get on with it.

This is the drive I am migrating stuff to

Harddrive_Image_goes_here

Lets begin the migration.

Since I have all my storage encrypted, the first step is to set up the encryption container. This is done by issuing the following command:

cryptsetup -v --type luks --cipher anubis-xts-plain64 --key-size 640 --hash whirlpool --iter-time 5000 --use-random -d /etc/homest luksFormat /dev/sdxY

Let's break down that command a bit, shall we:

-v - This simply makes the cryptsetup command output more detailed information about what it does.
--type luks - This instructs cryptsetup that we are working in luks mode instead of plain mode (see here for an explanation of the differences between plain and luks mode).
--cipher anubis-xts-plain64 - This defines the cipher to be used for the encryption operation. I always use a cipher that is not as mainstream; in this case I am using the Anubis cipher. I like this cipher for a few reasons, one of them being that it is named after an Egyptian god and the creator has stated that anyone who breaks it will be cursed.
--key-size 640 - This defines the size (in bits) of the key used for encryption. The bigger the key, the stronger the encryption. Anubis has a maximum key size of 320, but since the XTS cipher mode splits the key in 2 we specify 640 here.
--hash whirlpool - This defines what hash function (default sha256) will be used to hash the passphrase.
--iter-time 5000 - This defines how long (in milliseconds) cryptsetup will spend processing the passphrase. The default is 1 second; I make it go for 5.
--use-random - This defines the source used for the generation of random numbers used by the format process. The choices are urandom or random; I always use random as it produces a more random result.
-d /etc/homest - This defines a keyfile to use instead of prompting for a passphrase. That key file is a giant string of random characters.
luksFormat - This just says to format the device that follows.
/dev/sdxY - The device to format.

Now that the encrypted container is ready we can open it to use it:

cryptsetup open -d /etc/homest /dev/sdxY homest

Now we can format the container with a filesystem:

mkfs.btrfs /dev/mapper/homest

And mount it:

fstab entry:

/dev/mapper/homest                              /mnt/btrfsroot           btrfs           compress=zlib,space_cache=v2       0 0

mount /mnt/btrfsroot

Starting with my @Storage subvolume, we will now move it to the new drive.

First we create a read-only snapshot of the subvolume:

btrfs subvol snap -r \@Storage/ \@Storage.migrate

A snapshot is like a point-in-time picture of the data currently in that subvolume.

Now we send it over to the new drive:

btrfs send -vvv \@Storage.migrate/ | btrfs receive -vvv /mnt/btrfsroot/

Once this is complete we update the fstab entry to mount the subvolume from its new location:

/dev/mapper/homest                              /media/Storage  btrfs           compress=zlib,space_cache=v2,subvol=@Storage    0 0

Notice how this entry is similar to the previous one, the differences being the mountpoint (/media/Storage) and that we specify the subvolume to mount.

Now we do the same thing with the @Home subvolume.

First we create a read-only snapshot of the subvolume:

btrfs subvol snap -r \@Home/ \@Home.migrate

Now we send it over to the new drive:

btrfs send -vvv \@Home.migrate/ | btrfs receive -vvv /mnt/btrfsroot/

Once this is complete we update the fstab entry to mount the subvolume from its new location:

/dev/mapper/homest                              /Home  btrfs           compress=zlib,space_cache=v2,subvol=@Home    0 0

Now the subvolumes on the receiving side (now in /mnt/btrfsroot) are still read-only, which won't work. To resolve that we simply take a read-write snapshot of each subvolume:

btrfs subvol snap \@Home.migrate \@Home
btrfs subvol snap \@Storage.migrate \@Storage

And that is it, migration complete. All that is left to do is reboot and start using the new drive. Hope you found this post informational.

Have a great day.

G