In my last post, I whined that I couldn’t find a how-to on converting a Linode virtual machine to an LVM setup. Well, I’ve done it, so I should write this up, no?
I didn’t want the machine to have a swap partition, so there were three things to do:

- Run swapoff while logged on, inside the machine
- Edit /etc/fstab to delete the line for the swap drive
- Outside the machine, in the Linode manager, delete the disk:
  1. First I had to power the machine down
  2. Then in the Linode virtual machine manager, I had to switch to the Storage tab
  3. Now I can click on the swap drive and delete it
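The in-machine half of the steps above can be sketched as shell commands. The fstab edit is demonstrated on a sample file here, since the exact line to delete depends on your own fstab:

```shell
# Inside the VM: stop using swap, then remove its /etc/fstab entry.
swapoff -a 2>/dev/null || true   # on the real machine, run this as root

# Demonstrate the fstab edit on a sample file rather than the real /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
/dev/sda        /       ext4    defaults        0 1
/dev/sdb        none    swap    sw              0 0
EOF
sed -i '/\sswap\s/d' /tmp/fstab.sample   # delete the swap line
cat /tmp/fstab.sample
```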
The next thing to do was to shrink the existing disk. I do not know if I could have just done that. I see a resize option in the Linode storage manager. It may be that they have cloud-init wired in, and using the resize button would also have run stuff inside the machine to make everything nice. That’s not the way I went. 🤷
In the Linode manager (at the upper level, where you can see all your virtual machines), there is a three-horizontal-dots menu button. (I don’t know what is the good name for this button. I like the three horizontal lines, stacked, menu buttons because I can call it a hamburger button, and people get the idea of a bun with a patty in between. But I digress.)
I clicked on the three-horizontal-dots menu button, and chose the Rescue mode menu option. This powers down my virtual machine and attaches it as storage to a rescue-mode virtual machine (running Finnix). Then in the Linode manager, I used Launch LISH Console to spawn a new web page, which is the remote console into the Finnix machine. Although I’m inside the Finnix machine, /dev/sda is still my virtual machine’s main disk. It is not mounted at this time, which is good. So then I ran the command to shrink /dev/sda with
resize2fs /dev/sda 9G
A very real problem with writing this up is that I don’t have a shell history to verify this is what I did: the history was recorded in the Finnix virtual machine, which is destroyed after reboot. I’m pretty sure the command was resize2fs /dev/sda 9G, but I don’t actually know. When I look it up now, resize2fs operates on a filesystem rather than on a disk as such; it happens that Linode disks are usually a bare filesystem with no partition table, which would be why pointing resize2fs at the device itself works.
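One detail worth adding: resize2fs refuses to shrink a filesystem that hasn’t had a fresh fsck. The shrink can be rehearsed safely on a file-backed filesystem image standing in for /dev/sda (sizes here are arbitrary):

```shell
# Rehearse the shrink on a throwaway image instead of a real disk
truncate -s 64M /tmp/disk.img        # file-backed "disk"
mkfs.ext4 -q -F /tmp/disk.img        # bare filesystem, no partition table
e2fsck -fp /tmp/disk.img             # resize2fs requires a clean fsck first
resize2fs /tmp/disk.img 48M          # shrink, as `resize2fs /dev/sda 9G` did
```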
Then, using the Linode manager, I shrank the disk itself. The steps were:
- Reboot out of rescue mode (wait for everything to boot back up)
- Power down the virtual machine (wait for it to shut down)
- In the Linode manager of my virtual machine, resize the one-and-only disk to 9 GB
- The base machine had used about 5 GB of the 25 GB allocated. This leaves another 4 GB free disk space, even prior to moving /var off to another disk.
- Then, I added four disks: one each for mail, tmp, home, and var
Of course, when I added these disks, I had to pick the size I wanted each to be.
The next part of the puzzle wasn’t obvious either: how does Linode map these newly added disks to the virtual machine? The answer is that by default, it does not.
That’s over in the Configuration tab of the virtual machine manager. (Earlier documentation appears to have called this the Profile tab.) Editing my virtual machine’s configuration, I could pick a /dev/sdX and assign it to each disk I had created for the purpose.
Okie dokie, time to power up and do the LVM stuff.
Create the physical volumes:
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
Create the volume groups:
vgcreate vg_mail /dev/sdb
vgcreate vg_tmp /dev/sdc
vgcreate vg_home /dev/sdd
vgcreate vg_var /dev/sde
Create the logical volumes:
lvcreate vg_mail -l 100%FREE -n lv_mail
lvcreate vg_tmp -l 100%FREE -n lv_tmp
lvcreate vg_home -l 100%FREE -n lv_home
lvcreate vg_var -l 100%FREE -n lv_var
So at this point, we have logical volumes inside volume groups (which have physical devices assigned). LVM makes this storage available under /dev/mapper.
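The names under /dev/mapper are just the VG and LV names joined with a hyphen (any hyphen inside a VG or LV name gets doubled in the device-mapper name, which is one reason underscores are a comfortable naming choice). A tiny sketch of the rule:

```shell
# Device-mapper node name: <vg>-<lv>
# (a literal '-' inside either name would be doubled in the node name)
vg=vg_var
lv=lv_var
echo "/dev/mapper/${vg}-${lv}"
```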
Format the new storage:
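The post doesn’t record the exact commands here, but given the ext4 TYPE that blkid reports below, the formatting presumably looked like this. The loop just prints the commands rather than running them, since the /dev/mapper devices only exist on the real machine:

```shell
# Print the presumed formatting command for each logical volume
for lv in vg_mail-lv_mail vg_tmp-lv_tmp vg_home-lv_home vg_var-lv_var; do
    echo mkfs.ext4 "/dev/mapper/$lv"
done
```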
Now comes the tougher part, moving the new storage into production.
The process is to shut down the system to Init Level 1 (so that as little as possible is currently running), mount the new storage, copy the files over, rename the old storage out of the way, and then update the /etc/fstab to reflect the new storage mount point.
Inside the running virtual machine, I gave the command to drop to Init Level 1 (init 1, or equivalently telinit 1). Now I have to use the Linode virtual machine manager’s Launch LISH Console to get logged into the running machine, because Init Level 1 turns off the network.
mkdir /mnt/newvar
mount /dev/mapper/vg_var-lv_var /mnt/newvar/
cp -apx /var/* /mnt/newvar
mv /var /var.old
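The copy flags matter: -a preserves ownership, permissions, and symlinks (it implies -p), and -x keeps cp from crossing into other mounted filesystems. The behavior can be rehearsed on scratch directories:

```shell
# Rehearse the archive copy on scratch directories
mkdir -p /tmp/oldvar/log /tmp/newvar
echo "hello" > /tmp/oldvar/log/syslog
cp -apx /tmp/oldvar/* /tmp/newvar/
diff -r /tmp/oldvar /tmp/newvar && echo "copies match"
```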
Okay, the contents of /var are now inside the LVM logical volume. Now to configure the system to mount that logical volume at the file system mount point /var
Run blkid to identify the universally unique identifier assigned to the LVM volume. Perhaps blkid says your LVM volume is this:
/dev/mapper/vg_var-lv_var: UUID="epstein-didnt-kill-himself-605169120" BLOCK_SIZE="4096" TYPE="ext4"
Then, edit /etc/fstab to have the UUID entry for the mount point:
UUID="epstein-didnt-kill-himself-605169120" /var ext4 defaults 0 1
Do this for the other LVM volumes and then clean up. Before rebooting, you should try mount -a just to make sure there are no errors, because if there are errors mounting things, that’s going to make the reboot suck, badly.
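A cheap extra sanity check, sketched on sample data (on the real machine you’d feed it blkid’s actual output): make sure the UUID you typed into /etc/fstab really is the one blkid reported.

```shell
# Extract the UUID from a blkid-style line and confirm fstab mentions it
blkid_line='/dev/mapper/vg_var-lv_var: UUID="aaaa-1111" BLOCK_SIZE="4096" TYPE="ext4"'
printf 'UUID="aaaa-1111" /var ext4 defaults 0 1\n' > /tmp/fstab.check
uuid=$(printf '%s\n' "$blkid_line" | sed 's/.*UUID="\([^"]*\)".*/\1/')
grep -q "$uuid" /tmp/fstab.check && echo "UUID $uuid found in fstab"
```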
Cleanup was to delete /mnt/newvar and to delete /var.old (and the other LVM mount points processed the same way).