Bulk change MP3 file genre

I reset my playlists in Nextcloud. During the rescan, as it imported the files, the Music app sorted them by genre. This could be useful. But one artist’s genre was Folk, and really, I’d prefer it were Instrumental.

I tried changing them from the command line, but id3tools trashed the tags. Really, it was an encoding problem: UTF-8 versus something else. All I really know is that when Nextcloud scanned the files, it got Chinese characters instead of anything useful.

In the end, I used both kid3 and EasyTAG to solve the bulk search-and-replace problem. Why both? Because kid3 let me see what I wanted to change, but EasyTAG let me (bulk) change it.

kid3 let me change tags just fine. The problem is: only one file at a time.

The kid3 interface is rather nice, otherwise. If I hit Ctrl-A, it selects all and reads all the files and all the tags. I added Genre to the list of columns at the top, so I could then sort on that.

EasyTAG wouldn’t let me change the main page displayed columns, so that was less-than-ideal. But, it does have a Find feature, and everything I selected in the Find window remained selected in the main window.

What EasyTAG does have (which is great) is two controls in the genre field for any song: a drop-down to select the genre, and an Apply All button that applies it to everything selected. Excellent! Apply All is precisely what I wanted.

Also, it turns out that if the predefined list of genres doesn’t match what I want, I can just type in my choice. The Apply All button still works for something I typed. EasyTAG didn’t have a Flamenco genre, but I have 85 Flamenco guitar files. That I can type my own genre makes this a trivial problem.

So after doing an Apply All in EasyTAG, I’d go back to kid3 and do a reload, followed by another Ctrl-A. Then I’d sort by whatever column was useful, find a string that matched all the songs I wanted to alter, and copy it to the clipboard. Then I’d switch to EasyTAG, unselect all, go into the Find screen, paste in the identifier, and search. I’d select all in the Find window, and close it. Then I’d change the genre and hit Apply All in the EasyTAG main window.

I think I re-tagged close to 800 songs in about fifteen minutes. Woo! Now, the bulk of my music files are in eleven genres, each of which becomes a playlist without any manual playlist editing. There are 331 songs in the Instrumental genre list. I would have so hated having to make that change manually, 331 times.
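For what it’s worth, the same bulk change can also be done from the command line. Here is a minimal sketch using eyeD3, a tagger that writes ID3v2 and handles UTF-8; I have not verified it against the encoding problem that bit me with id3tools, and the path is hypothetical:

# bulk-set the genre on every MP3 in one artist's directory
eyeD3 --genre "Instrumental" /path/to/artist/*.mp3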

WordPress migration notes, part 2

One problem is that I need to install WP-CLI on the new server, and dealing with it is not easy.

The installation instructions don’t say one way or another, but WP-CLI should not be installed as root. Later, if you go to run it as root, it will bark at you that you’re doing a bad thing. Okay, nice to know.

But we do now have the problem that the user who runs WordPress (well, Apache, which runs the PHP code that is WordPress) is the www-data user. I cannot log in as the www-data user, by design (it is a good design). So, how to run this WP-CLI stuff?

sudo -u www-data wp <command>

Okay, this says to switch to user www-data (sudo = substitute user and do; the -u option specifies which user, in this case www-data) and run the wp binary with whatever command-line options you want it (wp) to run.

Cool, but the user I’m logged in as has no idea where the WordPress installation is. So now, every freaking command I type, sudo -u www-data wp <command>, also needs --path=/var/www/html/wordpress in there, too.

This sucks.

There is supposed to be a file, wp-cli.local.yml, that I can put the path into. But that file is in my directory, and the sudo command switches away from that.

This sucks, still.

The www-data user does have a home directory; but it will never be used, because the account’s shell is /usr/sbin/nologin. That is secure, but it doesn’t save me from having to type sudo -u www-data wp --path=/var/www/html/wordpress <command> every freaking time I need to do something.

Also, I am a fan of using the Page Up key to search my bash history. That works great when I type a few letters, say gre, and hit PgUp to cycle through my last few grep commands. Do I need to reassign ownership of files I’ve added to /var/www/html/wordpress/? I type chow, hit PgUp, and in a keystroke or two I’ve got chown -R www-data:www-data /var/www/html/wordpress/ ready to run. Ditto the Apache2 enable and disable site commands. There are a ton of examples where just a few keystrokes and the PgUp key are great.
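(In case the PgUp trick is unfamiliar: it comes from readline’s history-search bindings. These two lines in ~/.inputrc set it up, assuming your terminal sends the usual escape codes for Page Up and Page Down.)

# ~/.inputrc: search history for commands starting with what I've typed
"\e[5~": history-search-backward
"\e[6~": history-search-forward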

But having to type su and then hit PgUp presents me with a wall of noise before I finally find, way at the end, the command I actually want to repeat.

This sucks.

So, there is a solution. It seems kludgy, but it works, as long as you are willing to put up with its kludginess.

  1. cd to /var/www/html/wordpress/
  2. create a file, wp-cli.local.yml, in the location where WordPress is installed (where you just cd’ed to), and inside it, put:
    • path: /var/www/html/wordpress

So, as long as you are already in the “right” place, and you have this file which points to your “right place”, you don’t have to specify the “right place” on the command-line of WP-CLI.
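Put together, the whole setup is only a few lines (the paths are from my install; adjust to taste):

cd /var/www/html/wordpress
printf 'path: /var/www/html/wordpress\n' > wp-cli.local.yml
sudo -u www-data wp plugin list    # no --path needed any more

This works because sudo preserves the current directory, and WP-CLI looks for wp-cli.local.yml starting from there.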

The other option is to be in my home directory, and do everything via bash scripts. I wanted to use the command-line, but I may need to put one more level of indirection in the process to get things to work easily. Like I said: kludge.

However, since the bash script has WPPATH="/var/www/html/wordpress" in it, all that sudo -u www-data nonsense goes away. Sure, I’m running it as some random user from some random location, but (I assume) the WP-CLI people are just fine with that, because if a random hacker gets into an ssh session on my box, I’m done for anyway. Why not just assume whoever is running these commands is authorized?
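Another middle ground might be a tiny wrapper script on my PATH; call it ~/bin/wpp (a hypothetical name). It keeps the www-data user but hides the typing:

#!/bin/bash
# hypothetical wrapper: run WP-CLI as www-data against my install
WPPATH="/var/www/html/wordpress"
exec sudo -u www-data wp --path="${WPPATH}" "$@"

Then wpp plugin list does the right thing, and my PgUp history stays clean.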

This sucks quite a bit less, although it doesn’t make me warm and fuzzy about security.

WordPress migration notes

I have a production WordPress site on Amazon Lightsail that I need to migrate away from. These are notes on how to migrate over only the stuff I want to keep.

Backstory: Amazon Lightsail was very inexpensive, at under $5 per month for hosting on their smallest machine, and it did fine. Two things became problems, however:

  • Bitnami WordPress is super easy to spin up, and everything just works. But upgrading to a newer version of something (say PHP or MySQL) is a non-starter. The only way to upgrade is to spin up a new machine and migrate to it.
  • Amazon recently did a price increase. Now, I can get a Linode machine with double the RAM for only 27% more, and that will include backups.

Okay, so I need to migrate. But over the years I’ve tried different plugins, and even though many of them were uninstalled, their installation routines left crap in the database. How do I migrate to a new server but leave the crap behind? That is the topic of this post.

First, I installed WP-CLI; the installation instructions are on the WP-CLI website.

Then, on the new machine, I installed only those Plugins which I know I need.

I took a snapshot backup at this point, simply because it seems prudent.

On the new machine, I logged in with ssh and ran this:

wp --path='/var/www/html/wordpress' db query "SHOW TABLES" --skip-column-names --allow-root

This gives me a list of the tables in the new machine that I want from the old machine.

+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_termmeta           |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+

This is a pretty minimal list; the old machine has a list 362 tables long! Matomo was a particularly egregious offender here.
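To see exactly what gets left behind, the same SHOW TABLES query can be run on both machines and the lists compared; a sketch, assuming the two outputs were saved to old_tables.txt and new_tables.txt (hypothetical file names):

# tables on the old machine that the new machine does not want
comm -23 <(sort old_tables.txt) <(sort new_tables.txt)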

With this information, I can use a script written by Mike Andreasen over on the WP Bullet website to dump the databases on the old machine:

#!/bin/bash
# set WP-CLI flags
WPFLAGS="--allow-root"

# define path to the database dumps, without trailing slash
DBSTORE="/tmp"
# get the name of the database
DBNAME=$(wp config get DB_NAME ${WPFLAGS})

# the tables to keep (the SHOW TABLES list from the new machine)
TABLELIST=(wp_commentmeta wp_comments wp_links wp_options wp_postmeta wp_posts
           wp_term_relationships wp_term_taxonomy wp_termmeta wp_terms
           wp_usermeta wp_users)

# create the temporary directory for storing the dumps
mkdir -p "${DBSTORE}/${DBNAME}"

# loop through tables and export, log details to /tmp/mysqlexport-<database>.txt
for TABLE in "${TABLELIST[@]}"
do
    # export one table per .sql file
    wp db export "${DBSTORE}/${DBNAME}/${TABLE}.sql" --tables="${TABLE}" ${WPFLAGS} | tee /dev/stderr
done > /tmp/mysqlexport-${DBNAME}.txt

With this done, I scp the files from the old machine to my local machine, and then scp them up to the new machine. The next script assumes they are in the sql directory inside the wordpress folder.
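The copy itself is nothing fancy; something like this, with hypothetical host and database names:

# old server -> local machine -> new server
scp -r olduser@old-server:/tmp/mydbname .
scp mydbname/*.sql newuser@new-server:/var/www/html/wordpress/sql/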

I tried it, but I should have taken a snapshot, first. 😉

I need to search-and-replace all instances of the old domain name in the MySQL dump files, and put in the new domain name. Technically, once the actual switch happens, the new machine will be found at the old name, so this shouldn’t be necessary. But, the whole reason for migrating to a development machine is to test out this migration process. And the new machine does have a different domain name.
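For what it’s worth, WP-CLI has a search-replace command that is safer than running sed over the dump files, because it also fixes up serialized PHP data (where strings carry their byte lengths). A sketch with hypothetical domain names, run on the new machine after the import:

# dry-run first; drop --dry-run to actually rewrite the tables
wp search-replace 'old-domain.example' 'new-domain.example' --path=/var/www/html/wordpress --allow-root --dry-run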

The script to upload the MySQL dumps looks like this:

#!/bin/bash
# define WordPress path
WPPATH="/var/www/html/wordpress"

# loop through all of the dump files and import each one
for DUMP in "${WPPATH}"/sql/*.sql
do
    wp db import "${DUMP}" --allow-root --path="${WPPATH}"
done

But, until the data is cleaned up, the new WordPress website gets the dreaded white-screen-of-death.

New Debian install; ssh and sudo changes

Similar to what I wrote in New OpenSuSE Tumbleweed cannot ssh in, but this time with Debian. This has to be done from a physical console login on the machine (or, if it is a VM, from the hosting company’s console service). I’m logged in as root.

apt-get install vim

Debian is pretty bare-bones, man. This is probably good from a security and stability point of view.

cd /etc/ssh/
vim sshd_config

Find PermitRootLogin and uncomment it, and change it to yes

Find #PubkeyAuthentication yes and uncomment it.

Find #AuthorizedKeysFile     .ssh/authorized_keys .ssh/authorized_keys2 and uncomment it and remove the second file authorized_keys2

Find PasswordAuthentication no and uncomment it, and change it to yes. Note that this is temporary!

Save and exit the sshd_config file. I’m not sure which service(s) would need to be restarted here, so I issue the reboot now command and watch the machine reboot. Today’s hardware is amazingly fast, compared to what we lived with a decade ago.
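(In hindsight, only the SSH daemon needs a restart for sshd_config changes to take effect; on Debian the unit is named ssh, so this should do it without a reboot.)

systemctl restart ssh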

Now, from my remote machine, I ssh in as root. I get asked about accepting the host key, and get prompted for the password. Once I get in, I know I’m good to proceed to the next step.

ssh-copy-id root@host.domain

I get asked to put in my password again, and now my public key is installed on the server, so key-based logins work instead of password-based logins.

I log in as root again, but this time without a password. At this point, I do some customizations per How to make Ubuntu have a nice bash shell like OpenSuSE (although this is Debian). One nice thing: ~/.bashrc already had aliases ready, with ll as an alias for ls -l.

Something I don’t understand is why I cannot copy / paste from the Debian ssh session. My guess is that it has something to do with LS_OPTIONS in the bashrc file. Anyway….

I still needed to add alias ..='cd ..' though.

I log out.

I log in as a non-root user, with a password.

ssh-copy-id user@host.domain

I log in as the non-root user, without a password. Same thing: I add the customizations I like (editing with vim from inside a less of a file, the .. alias for going up one directory, and PageUp to search history). I log out.

I log in as root again. Now, I need to give my non-root user sudo rights.

adduser whatever-the-non-root-user-is sudo

Back to editing /etc/ssh/sshd_config

Find PermitRootLogin and change it to no.

Find PasswordAuthentication and change it back to no.
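Pulling the edits together, the relevant lines in sshd_config now read:

PermitRootLogin no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no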

And then I save and exit the file and reboot the box.

Now I can ssh in as the non-root user, and I cannot log in via ssh as root. Also, no one can attempt to log in with just a password. This is good.

Temporary fix for Nextcloud calendar broken sync

Nextcloud has a nice home page called the Dashboard, which shows calendar items and a to-do list. But ever since Calendar app version 4.5, it has been broken for items sourced outside of Nextcloud. In other words, if you create a calendar item on your smartphone and it syncs into Nextcloud, you can see the item on the Calendar web page, but it will be missing from the Dashboard home page. The solution is to downgrade the Calendar app to version 4.4.5.

Steps to perform:

  1. In the Nextcloud admin interface, find the Calendar app and disable it
  2. ssh into your Nextcloud instance
  3. cd /var/www/html/nextcloud/apps/
  4. mv calendar calendar-old
  5. wget -q https://github.com/nextcloud-releases/calendar/releases/download/v4.4.5/calendar-v4.4.5.tar.gz
  6. tar xvf calendar-v4.4.5.tar.gz
  7. chown -R user:group calendar
  8. In the Nextcloud admin interface, select the Disabled apps section, then enable (but do not update) the Calendar 4.4.5 app.

And now, when you go back to your Dashboard home page, your calendar will have all the items on it. 🙂

You do get to apply this fix after every update. 🙁

Technically, this post title is somewhat misleading: sync is not broken. What is broken is that items synced in from CalDAV sources apparently have something that, when present, causes the Dashboard page to skip those calendar items. It just looks like sync is broken, because you knew the items were on your calendar, but when you look at the Dashboard for today, they are missing. I suppose a better title would be: Temporary fix for Nextcloud calendar (some) items missing from Dashboard.

Ogg > MP3 (thanks, Apple) (not)

I have several CDs (Compact Discs, not Certificates of Deposit) of music that I like. When I popped them into my PC, I got several folders of files I could copy from. I chose to copy the .ogg files because I liked the idea of using an encoding format without weird licensing issues.

Apple has foiled that plan. If I try to play a playlist on an Apple device, the .ogg files get skipped because (apparently) Apple doesn’t feel like playing nice with the Open Source community. They may have more money than God, but adding another codec – that doesn’t have license issues – to their devices isn’t something they are going to spend money on.

When I work on-premises in the office, my co-workers are often noisy and annoying. I want to pop in my AirPods and play background music to drown out their inane chatter. I don’t want to carry the music files on my device, but I do have a Nextcloud server at home that can stream the audio from the Music app web page. I can log in on my iPhone and play the playlist.

But because it’s an iPhone, it auto-skips the Ogg Vorbis files. This doesn’t happen when I’m at home playing the same playlists on Linux or Windows.

So now I get to re-copy the files from the physical media to my NAS (network attached storage) which in this case is a Synology.

First, I get to delete the files with the .ogg file extension. Two steps (for example):

exiftool -p '$filename' -if '$album =~ /WOW Worship: Yellow \(disc 1\)/' *.ogg > wow_worship_ogg_file_list
This generates a file, wow_worship_ogg_file_list, which has the file names in a list.

then to delete them:

xargs -I{} rm -r "{}" < /path/wow_worship_ogg_file_list

Second, after having cleared out the disk space, I can copy from my physical CD to my NAS. That takes a while, and after it is done, the file names aren’t wonderful. Rename music files to their title to the rescue.

Except, of course, for a duplicate file name. I have an MP3 file I bought from Amazon (published by Monstercat) with the same title as one of the files from the WOW Worship CD. I would prefer to rename the Monstercat file, but really if I’m going to be running the rename music files to their title command often, I need to change the Title inside the .mp3 file. If I don’t, the next time I run it, it will attempt to rename the file to a duplicate name that is already in use.

Exiftool doesn’t write ID3 tags, apparently, although it can write tags in some other file types. I wonder if the weird license history of MP3 is at the root of the problem. Whatever: the answer was to add the id3v2 program and use it instead.

id3v2 -t 'Title by Artist' file.mp3
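Exiftool may not write ID3, but it reads it fine, so I can verify the change took:

exiftool -Title file.mp3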

Now the rename music files to their title script moves the one file to the new file name, and the other file to its simpler file name than what came off the CD.

Discord app update – hooray! :-)

In my last post I whined that it takes a while before OpenSuSE gets the updated Discord app. I checked today, and the update was there. I only have three days to catch up on: nice!

Thank you to Wojciech Kazubski for updating the app in the repository.

I think that, instead of creating a new post every time Discord breaks (does an update) and another when the new version shows up fixed, I’ll just keep a running list here.

  • 2023-12-27 Discord client update came through, and this time it matches the server version 🙂 I was down fifteen days, but I’ve been out of town for six days.
  • 2023-12-19 Discord update came through, but maybe it’s too late? Yep, still down. OpenSuSE updated to version .38-1.1 but the Discord app wants to install version .39
  • 2023-12-12 Discord stops working until an update can be applied. Shucks. Three days. I have a group of friends for whom with-alex-jones-returning-journalists-worried-theyll-no-longer-be-only-source-of-misinformation would be a fun topic; but they will have to wait.
  • 2023-12-09 Discord works again, yay! Only four days downtime.
  • 2023-12-05 Discord stops working until an update can be applied. Shucks. Four days was nice while it lasted.
  • 2023-12-01 Discord works again, yay! Only three days downtime.
  • 2023-11-28 Discord stops working until an update can be applied. Shucks. But it was a nice long run.
  • 2023-11-18 Discord works again, yay!
  • 2023-11-10 Discord stops working until an update can be applied. Shucks.

Discord App update again (sigh)

In my previous post, I explained that every time Discord publishes an app update, I’m locked out* until someone fixes it in the OpenSuSE repositories. Four days ago, it showed up in my list of updates, and hooray! I was back in Discord after eight days of being locked out.

Today, another update is published. Sigh.

*“Locked out” is a poor term: I am voluntarily opting-out. I boycott Flatpak and Snap Apps because I dislike those technologies.

OpenSuSE updates this morning: 1,671 (but Discord isn’t one of them)

Every time Discord updates their app, I don’t use Discord for many days. This is a bummer because I have a circle of friends on the Internet I’ve known for 20+ years, and Discord is where we have settled (so far). So when Discord does an update, I go through a dry spell of not being in contact with them.

The OpenSuSE folk tell me I should install the Flatpak version of Discord or the Discord Snap app. I’m not a fan of either Flatpak or Snap. Snap fouled up a machine when I installed it months ago (and it’s still broken to this day). Flatpak seems like a replacement for my RPM repositories that is no better; worse, it duplicates storage and doesn’t integrate unless I convert everything over, so it just creates more points for stuff to break. But at least the single developer doing the Flatpak doesn’t have to integrate with the distribution, so that’s not his problem.

I can use the web version of the Discord app. It has keystroke conflicts with the web browser, though, and because I use temporary containers for everything, it treats me like a new user who didn’t see that sticky post from five years ago….

So, I wait until someone asks the Discord people to update the RPM in the OpenSuSE repository. Eventually it happens. Time before last, it was eleven days.