The year 2022: Late stage 2021 but with new, higher prices

h/t to one of Scott Adams’ Twitter followers, who was responding to a challenge to summarize 2022 in the snarkiest way possible.

The whole thing is a psy op run by incompetents at the behest of elites, inflicted upon the aimless. It came about through sixty years of indoctrination: “Buy this shit from our advertiser; that will make you happy.”

Linode base to LVM conversion

In my last post, I whined that I couldn’t find a how-to for converting a Linode virtual machine to an LVM setup. Well, I’ve done it, so I should write it up, no?

I didn’t want the machine to have a swap partition, so there were three things to do:

  1. swapoff while logged on, inside the machine
  2. Edit /etc/fstab to delete the line for the swap drive (steps 1 and 2 are sketched as commands after this list)
  3. Outside the machine, in the Linode manager, delete the swap disk
    1. First, power the machine down
    2. Then, in the Linode virtual machine manager, switch to the Storage tab
    3. Now click on the swap disk and delete it. (WordPress has been mangling nested lists since its most recent “upgrade”, so apologies if this one renders oddly.)
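For reference, steps 1 and 2 boil down to something like this (a minimal sketch; the sed line assumes the swap entry is the only line in /etc/fstab containing the word swap, so eyeball the file before deleting anything):

swapoff -a                       # stop using all swap devices
swapon --show                    # should now print nothing
cp /etc/fstab /etc/fstab.bak     # keep a backup before editing
sed -i '/\sswap\s/d' /etc/fstab  # or just delete the swap line in your editor of choice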

The next thing to do was to shrink the existing disk. I don’t know whether I could have just used the resize option I see in the Linode storage manager; it may be that they have cloud-init wired in, and that using the resize button would also have run stuff inside the machine to make everything nice. That’s not the way I went. 🤷

In the Linode manager (at the upper level, where you can see all your virtual machines), there is a three-horizontal-dots menu button. (I don’t know what the proper name for this button is. I like the stacked-three-horizontal-lines menu buttons, because I can call one a hamburger button, and people get the idea of a bun with a patty in between. But I digress.)

I clicked on the three-horizontal-dots menu button, and chose the Rescue mode menu option. This powers down my virtual machine and attaches it as storage to a rescue mode virtual machine (running Finnix). Then in the Linode manager, I used Launch LISH Console to spawn a new web page which is the remote console into the Finnix machine. Although I’m inside the Finnix machine, /dev/sda is still my virtual machine’s main disk. It is not mounted at this time, which is good. So then I ran the command to shrink the filesystem on /dev/sda: resize2fs /dev/sda 9G

So a very real problem with me writing this up is that I don’t have a history command to verify this is what I did; that history was recorded in the Finnix virtual machine, which is destroyed after reboot. I’m pretty sure the command was resize2fs /dev/sda 9G, but I don’t actually know. When I look it up now, resize2fs is normally run against a partition inside a disk device rather than the device itself; however, Linode’s standard disks have no partition table (the filesystem sits directly on the device), which is why pointing it at /dev/sda works. Anyway, I’m pretty sure this is what I did.
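If I had to reconstruct that rescue-mode session, it probably looked something like this (a sketch, assuming the ext4 filesystem sits directly on /dev/sda the way Linode normally builds disks; resize2fs refuses to shrink a filesystem that hasn’t just been checked):

e2fsck -f /dev/sda      # forced check; resize2fs wants a freshly checked filesystem
resize2fs /dev/sda 9G   # shrink the filesystem to 9 GB, no bigger than the disk size set in the next step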

Then it was time to shrink the disk itself in the Linode manager. The next steps were:

  1. Reboot out of rescue mode (wait for everything to boot back up)
  2. Power down the virtual machine (wait for it to shut down)
  3. In the Linode manager of my virtual machine, resize the one-and-only disk to 9 GB
    • The base machine had used about 5 GB of the 25 GB allocated, so shrinking to 9 GB still leaves about 4 GB of free disk space, even prior to moving /var off to another disk.
  4. Then, I added four disks:
    • home
    • tmp
    • var
    • var/mail

Of course, when I added these disks, I had to pick the size I wanted for each.

The next part of the puzzle wasn’t obvious either: how does Linode map these newly added disks to the virtual machine? The answer is that by default, it does not.

The mapping is done over in the Configuration tab of the virtual machine manager. (Earlier documentation appears to have called this the Profile tab.) Editing my virtual machine’s configuration, I could pick each /dev/sdX device and assign it to the disk I had created for that purpose.

Okie dokie, time to power up and do the LVM stuff.
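Before doing the LVM work, a quick sanity check that the newly mapped disks actually showed up inside the machine (the device letters assume the mapping I set up in the Configuration tab):

lsblk -o NAME,SIZE,TYPE   # expect sda at 9G plus sdb, sdc, sdd, sde at the sizes picked earlier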

Create the physical volumes: pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

Create the volume groups:

vgcreate vg_mail /dev/sdb
vgcreate vg_tmp /dev/sdc
vgcreate vg_home /dev/sdd
vgcreate vg_var /dev/sde

Create the logical volumes:

lvcreate -l 100%FREE -n lv_mail vg_mail
lvcreate -l 100%FREE -n lv_tmp vg_tmp
lvcreate -l 100%FREE -n lv_home vg_home
lvcreate -l 100%FREE -n lv_var vg_var

So at this point, we have logical volumes inside volume groups (which have physical volumes assigned). LVM makes this storage available under /dev/mapper.
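A quick way to confirm everything landed where it should (these commands only report; they change nothing):

pvs              # the four physical volumes and which volume group each belongs to
vgs              # the four volume groups and their sizes
lvs              # the four logical volumes, each taking 100% of its volume group
ls /dev/mapper   # the vg_*-lv_* device nodes used in the next step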

Format the new storage:

mkfs.ext4 /dev/mapper/vg_mail-lv_mail
mkfs.ext4 /dev/mapper/vg_tmp-lv_tmp
mkfs.ext4 /dev/mapper/vg_home-lv_home
mkfs.ext4 /dev/mapper/vg_var-lv_var

Now comes the tougher part, moving the new storage into production.

The process is to take the system down to Init Level 1 (so that as little as possible is running), mount the new storage somewhere temporary, copy the files over, rename the old directory out of the way, recreate an empty directory as the mount point, and then update /etc/fstab to mount the new storage there.

Inside the running virtual machine, I gave the command init 1

Now I have to use Launch LISH Console in the Linode virtual machine manager to log into the running machine, because Init Level 1 turns off the network.

mkdir /mnt/newvar
mount /dev/mapper/vg_var-lv_var /mnt/newvar/
cp -apx /var/* /mnt/newvar
mv /var /var.old
mkdir /var

Okay, the contents of /var are now inside the LVM logical volume, the old directory is parked at /var.old, and there is a fresh, empty /var to serve as the mount point. Now to configure the system to mount the logical volume at /var.

First, use blkid to identify the universally unique identifier (UUID) assigned to the LVM volume. Perhaps blkid says your LVM volume is this:

/dev/mapper/vg_var-lv_var: UUID="epstein-didnt-kill-himself-605169120" BLOCK_SIZE="4096" TYPE="ext4"

Then, edit /etc/fstab to add a UUID entry for the mount point (note that fstab wants the UUID without quotes, and non-root filesystems conventionally get fsck pass number 2):

UUID=epstein-didnt-kill-himself-605169120 /var ext4 defaults 0 2

Do this for the other LVM volumes and then clean up. Before rebooting, you should try mount -a just to make sure there are no errors, because if things fail to mount, the reboot is going to suck, badly.
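For what it’s worth, the full set of new fstab lines ends up looking something like this (the UUIDs are placeholders for whatever blkid reports on your system; listing /var before /var/mail keeps mount -a happy, since /var/mail sits underneath /var):

UUID=<uuid-of-vg_var-lv_var>    /var       ext4  defaults  0  2
UUID=<uuid-of-vg_mail-lv_mail>  /var/mail  ext4  defaults  0  2
UUID=<uuid-of-vg_home-lv_home>  /home      ext4  defaults  0  2
UUID=<uuid-of-vg_tmp-lv_tmp>    /tmp       ext4  defaults  0  2

After that, mount -a should come back silently, and df -h /var /var/mail /home /tmp should show the logical volumes mounted where you expect.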

Cleanup was to delete /mnt/newvar and to delete /var.old (with the temporary mount points and .old directories for the other volumes handled the same way).
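One wrinkle worth calling out: the logical volume may still be mounted at /mnt/newvar as well as at /var at this point, so the temporary mount has to be released before the directory can go. A sketch for the /var case (and only remove /var.old once you are satisfied nothing is missing from the copy):

umount /mnt/newvar   # the volume is mounted at /var via fstab now; drop the temporary mount if it is still there
rmdir /mnt/newvar
rm -rf /var.old      # only after verifying the copy under /var is complete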

Kind of hating cloud servers right now

How in the world am I supposed to create LVM (Logical Volume Management) disk layouts on a cloud VM with a single big disk? Before I start piling in data, I want to put /var/mail on its own partition.

Maybe it’s just that Google is stupid, and the answer is plain as day if I could find it.

Linode is annoying, because the pages I found said (in essence) “Don’t use LVM, use our attached disks at an additional $2 per disk per month.” Well, I could add a disk and then use LVM to configure it. But that means I’m going to have the original 25 GB disk (with /boot and everything else on it) and then hardly anything else over on the new disk. What it won’t do is keep the system from going comatose if some process starts spamming a log file and fills the disk. That’s stupid. And I’d be paying $2 a month, forever, for the stupidity.

I want to install LVM so that I have the option of adding another disk later, and it would be super easy. I’ve done LVM at work for years now, and it’s great. But at work, I get to install the machine from a boot ISO, and I get to go through every step of the install. Linode creates new virtual machines from images, where the disk is pre-configured. I don’t get to say I want /home on a separate volume (for example).

Every search I’ve done about LVM has two assumptions behind it: 1) there is a newly added virgin disk, or 2) during install, choose to partition the disk the way you want.

Nothing appears to address the situation where I’ve got a 25 GB disk with 20 GB free, and I’d like to move /home and /var and /tmp to /dev/sda1, /dev/sda2, and /dev/sda3.

I need to do pvcreate, but it errors out because I don’t have a newly added virgin disk.

I doubt this problem is particular to Linode; I suspect Rackspace and Vultr have the same problem – the preconfigured image is what you get; go kick rocks if you want something else.

It is frustrating, because I cannot be the first person on the planet to have thought of this or asked this question. But if the answer is obvious, I’m not finding it with Google search.

The Helm email appliance – you were a good product

I really liked my Helm email appliance. It has done well by me.

Unfortunately, the business behind it doesn’t see its future getting better, so they are going to call it quits. I have until December 31, 2022 to build a replacement email server. This is turning out to be a larger project than I’d like.

I do appreciate that the Helm company gave me plenty of warning (I got the email more than two weeks ago). I hope the people at the company find something else to work on that brings them more success. You have my many thanks for your years of solid service.