Quarterly Inventory 2024 – Q1

Dear FutureMe,

Today would be a good day to do a quarterly inventory.

Question: How is your personal life going?

Question: How is your work life going?

Question: How is your volunteer service life going?

Personal Life

There hasn’t really been much change this quarter in my personal life.

I went to the Southern California Linux Expo (SCaLE 21X), but regret it because it was so much money. Previous SCaLE events were at the LAX Hilton, which is half the price of the Pasadena Hilton. The trade-off is that the LAX Hilton has only about five restaurants nearby, so if 400 people break for lunch, those five restaurants are absolutely swamped. If 400 people break for lunch at the convention center in Pasadena, there are probably 30 restaurants within a ten-minute walk nearby. But $400 per night for this show really isn’t worth it to me. If I had stayed three nights for the full four-day show, that would have been $1,200. ACK! For that kind of money, I could pay down my mortgage one month and retire a whole month early. Really, SCaLE is a wonderful show if you already live in Los Angeles and don’t have to spend money at the Pasadena Hilton.

Had my ten-year colonoscopy. Zero polyps found; I get to come back in five years because of my age.

I went to a Jack-In-The-Box restaurant a couple of months ago. Lunch was $20. I suspect this was my last visit to a fast food restaurant ever [1] (well, in California, at least). Sacramento decreed that fast-food workers should get, beginning today, a minimum wage of $20 per hour (as if fast-food workers would make it a career). The result is that Sacramento has completely priced these stores out of business due to inflation (unless they replace the workers with robots).

2024 New Year’s Resolution: go to the gym more often. Resolution failed: I suspended my gym membership. $60 a month is too much (yes, inflation).

One really fun thing for me is that I bought another Tiny PC, put 32 GB of RAM in it, and I am running Proxmox on it. This lets me duplicate all the steps I will go through to migrate the website (item (5) in the volunteer service list below) from Amazon to Linode. If I bungle a step, I revert the snapshot and try again. Even better, I can document on my blog how I did the migration. I did have DNS pointing to this home device, which (via pfSense) did actually route the public Internet to this little host. I’ve since turned this off, but will turn it on again when it comes time to demo the new website.

Work Life

If $44,000 dropped into my lap today, I would retire tomorrow.

I have little to do except e-discovery and email retention policy work. We had a good system where clients would work through legal counsel before opening an email investigation; but, our new(er) management wants to bend over backwards to be helpful. That is a nice sentiment, but the previous practice protected us from liability – only the people with legal training made judgement calls. Now, I have people asking me to find “inappropriate” email, as if I know what the hell that means in a legal context. Sometimes I hate my job.

I did take on printers and the print server. I built the replacement server and migrated everything over; that went really well.

The other big project is to check 5 million email that are about to be deleted: are they supposed to be deleted? There’s no way that my direct report and I can read all five million email and verify them all. So, we’re spot-checking. I probably will read about 12,000 email before we can confidently pull the trigger on the deletion process.

Volunteer Service Life

I counted up all the current service commitments I have, and it numbers sixteen at the moment.

  1. Sundays: treasurer of a weekly meeting.
  2. Sundays: Technology captain of a weekly meeting (I run the Zoom camera, speakerphone, and laptop).
  3. Second Sunday: recording audio of the second-Sunday speaker breakfast monthly meeting and posting the recording to our .org website.
  4. Tuesdays: Secretary of a weekly meeting.
  5. Second Tuesday: web servant for our little 501(c)(3) central office.
  6. Second Tuesday: liaison to our district (complement of item (10) below).
  7. Second Tuesday: president of the board of our little 501(c)(3) central office.
  8. Last Tuesday: member of a monthly technology sharing session (I presented last month). Nicely enough, this is on Zoom, and happens from 16:00–17:30 which allows me enough time to be secretary at 19:00 (item (4) above).
  9. First Wednesday: Recording secretary, monthly district meeting.
  10. First Wednesday: liaison to our little central office monthly meeting (complement of item (6) above).
  11. Every other Wednesday: co-chair of the Founder’s Day Picnic; as such, I am on the planning committee. I set up the laptop, camera, and speakerphone for Zoom participants. Created two documents, but have a third pending. The other chair has been in Europe, so as far as I can tell, I’m the only one who has done anything.
  12. Thursdays: meet with my sponsee weekly.
  13. Thursdays: treasurer of a weekly meeting. Also, supplies.
  14. Fridays: literature captain of a weekly meeting.
  15. First Saturday: member of a temporary contact committee (meets monthly), and have begun outreach to a local institution.
  16. First Wednesdays (until this weekend): stage manager for our twice-yearly dinner-and-a-speaker event.

  [1] Edit: this is almost certainly an overstatement. I still like Panda Express, and it hasn’t raised prices ridiculously, but it does qualify as a fast food restaurant.

WordPress migration notes, part 2

One problem is that I need to install WP-CLI on the new server, and dealing with it is not easy.

The installation instructions don’t say one way or another, but WP-CLI should not be installed as root. Later, if you go to run it as root, it will bark at you that you’re doing a bad thing. Okay, nice to know.

But we do now have the problem that the user who runs WordPress (well, Apache, which runs the PHP code that is WordPress) is the www-data user. I cannot log in as the www-data user, by design (it is a good design). So, how to run this WP-CLI stuff?

sudo -u www-data wp <command>

Okay, this says to switch to user www-data (sudo = substitute user and do; the -u option specifies which user, in this case www-data), and run the wp binary with any command line options you want it (wp) to run.

Cool, but the user I’m logged in as has no idea of where the WordPress installation is. So now, every freaking command I have to type, sudo -u www-data wp <command>, also needs --path=/var/www/html/wordpress in there too.

This sucks.

There is supposed to be a file, wp-cli.local.yml, that I can put the path into. But that file is in my directory, and the sudo command switches away from that.

This sucks, still.

The www-data user does have a home directory; but, it won’t ever be used because the account’s shell is /usr/sbin/nologin, which refuses every login attempt. That is secure, but it doesn’t save me from having to type sudo -u www-data wp --path=/var/www/html/wordpress <command> every freaking time I need to do something.

Also, I am a fan of using the page-up key to search my bash history. That works great when I type a few letters, say gre and hit PgUp to search through my last few grep commands. Do I need to reassign ownership of files I’ve added to /var/www/html/wordpress/ ? chow and PgUp, and in a keystroke or two, I’ve got chown -R www-data:www-data /var/www/html/wordpress/ ready to run. Ditto the Apache2 enable and disable site commands. There are a ton of examples where just a few keystrokes and the PgUp key are great.
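As far as I can tell, that PgUp search comes from a readline binding; something like these lines in /etc/inputrc or ~/.inputrc (OpenSuSE ships them enabled; on Debian they are present but commented out):

```
# bind PageUp / PageDown to prefix search through bash history
"\e[5~": history-search-backward
"\e[6~": history-search-forward
```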

But having to type su and hitting PgUp presents me with a wall of noise before I finally find, at the end, the command I actually want to repeat.

This sucks.

So, there is a solution. It seems kludgy, but it works, as long as you are willing to put up with its kludginess.

  1. cd to /var/www/html/wordpress/
  2. create a file, wp-cli.local.yml, in the location where WordPress is installed (where you just did the cd to), and inside it, put:
    • path: /var/www/html/wordpress
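Concretely, the two steps look like this (demonstrated here in a scratch directory standing in for /var/www/html/wordpress):

```shell
# Create wp-cli.local.yml in the WordPress directory; WP-CLI reads it from
# the current working directory, so wp commands run from here pick up the path.
WPDIR=$(mktemp -d)                     # stand-in for /var/www/html/wordpress
cd "${WPDIR}"
printf 'path: %s\n' "${WPDIR}" > wp-cli.local.yml
cat wp-cli.local.yml
```

With that file in place, sudo -u www-data wp <command> run from inside the directory no longer needs the --path flag, because sudo preserves the working directory.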

So, as long as you are already in the “right” place, and you have this file which points to your “right place”, you don’t have to specify the “right place” on the command-line of WP-CLI.

The other option is to be in my home directory, and do everything via bash scripts. I wanted to use the command-line, but I may need to put one more level of indirection in the process to get things to work easily. Like I said: kludge.

However, since the bash script has WPPATH="/var/www/html/wordpress" in it, all that sudo -u www-data nonsense goes away. Sure, I’m running it as some random user from some random location, but (I assume) that the WP-CLI people are just fine with that because if a random hacker gets into an ssh session on my box, I’m done for, anyway. Why not just assume whoever is running these commands is authorized?

This sucks quite a bit less, although it doesn’t make me warm and fuzzy about security.
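A third option I’ve considered (my own sketch, not from the WP-CLI docs): a small wrapper function in ~/.bashrc that bakes in both the user switch and the path, so the PgUp history stays readable too:

```shell
# Hypothetical wrapper: "wpw plugin list" expands to
# "sudo -u www-data wp --path=/var/www/html/wordpress plugin list".
wpw() {
    sudo -u www-data wp --path=/var/www/html/wordpress "$@"
}
```

Then typing wpw and hitting PgUp only cycles through WP-CLI commands, not the wall of sudo noise.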

WordPress migration notes

I have a production WordPress site on Amazon Lightsail that I need to migrate away from. These are notes on how to migrate over only the stuff I want to keep.

Backstory: Amazon Lightsail was very inexpensive, at under $5 per month for hosting on their smallest machine, and it did fine. Two things became problems, however:

  • Bitnami WordPress is super easy to spin up, and everything just works. But upgrading to a newer version of a component (say, PHP or MySQL) is a non-starter. The only way to upgrade is to spin up a new machine and do a migration to it.
  • Amazon recently did a price increase. Now, I can get a Linode machine with double the RAM for only 27% more, and that will include backups.

Okay, so I need to migrate, but over the years, I’ve tried different plugins, and even though many of them were uninstalled, the installation routine left crap in the database. How to migrate to a new server, but leave behind the crap? This will be the topic of this post.

First, I installed WP-CLI; installation instructions can be found on the WP-CLI website.

Then, on the new machine, I installed only those Plugins which I know I need.

I took a snapshot backup at this point, simply because it seems prudent.

On the new machine, I logged in with ssh and ran this:

wp --path='/var/www/html/wordpress' db query "SHOW TABLES" --skip-column-names --allow-root

This gives me a list of the tables in the new machine that I want from the old machine.

+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_termmeta           |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+

This is a pretty minimal list; the old machine has a list 362 tables long! Matomo was a particularly egregious offender here.
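Incidentally, with each machine’s SHOW TABLES output saved to a sorted file, comm will list exactly which tables to leave behind. A toy example (made-up table names, not the real 362-table list):

```shell
# comm -23 prints lines unique to the first file: with sorted table lists,
# that is everything on the old machine that the new machine doesn't want.
printf 'matomo_log_visit\nwp_posts\nwp_users\n' > old-tables.txt
printf 'wp_posts\nwp_users\n' > new-tables.txt
comm -23 old-tables.txt new-tables.txt   # -> matomo_log_visit
```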

With this information, I can use a script written by Mike Andreasen over on the WP Bullet website to dump the databases on the old machine:

# set WP-CLI flags
WPFLAGS="--allow-root"

# define path to the database dumps without trailing slash
DBSTORE="/tmp"
# get the name of the database
DBNAME=$(wp config get DB_NAME ${WPFLAGS})

# list the tables to export (abbreviated here; use the full list from SHOW TABLES above)
TABLELIST=(wp_posts wp_postmeta)

# create the temporary directory for storing the dumps
mkdir -p ${DBSTORE}/${DBNAME}

# loop through tables and export, log details to /tmp/mysqlexport-<database>.txt
for TABLE in "${TABLELIST[@]}"
do
    # export the table
    wp db export ${DBSTORE}/${DBNAME}/${TABLE}.sql --tables=${TABLE} ${WPFLAGS} | tee /dev/stderr
done > /tmp/mysqlexport-${DBNAME}.txt

With this done, I scp the files from the old machine to my local machine. Then I scp them up to the new machine. The next script assumes they are in the sql directory in the wordpress folder.

I tried it, but I should have taken a snapshot, first. 😉

I need to search-and-replace all instances of the old domain name in the MySQL dump files, and put in the new domain name. Technically, once the actual switch happens, the new machine will be found at the old name, so this shouldn’t be necessary. But, the whole reason for migrating to a development machine is to test out this migration process. And the new machine does have a different domain name.
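A sed pass over the dump files handles the plain-text occurrences; this sketch uses placeholder domains (the real ones are my old and new hostnames) and a scratch directory with one sample dump:

```shell
# Sketch: rewrite every occurrence of the old domain in the dump files
# before importing them on the new machine. Domains are placeholders.
OLD="old.example.com"
NEW="new.example.com"
SQLDIR=$(mktemp -d)                      # stand-in for wordpress/sql
printf 'INSERT INTO wp_options VALUES ("siteurl","https://%s");\n' "${OLD}" \
    > "${SQLDIR}/wp_options.sql"

for DUMP in "${SQLDIR}"/*.sql; do
    sed -i "s/${OLD}/${NEW}/g" "${DUMP}"
done
grep siteurl "${SQLDIR}/wp_options.sql"  # now points at the new domain
```

One caveat: WordPress stores serialized PHP in wp_options, where string lengths are encoded, so a raw sed can corrupt those rows when the two domains differ in length. The wp search-replace command exists precisely to handle serialized data correctly.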

The script to upload the MySQL dumps looks like this:

# define WordPress path
WPPATH="/var/www/html/wordpress"

# loop through all of the SQL dumps in the sql directory
for DUMP in ${WPPATH}/sql/*.sql;
do
    wp db import ${DUMP} --allow-root --path=${WPPATH}
done

But, until the data is cleaned up, the new WordPress website gets the dreaded white-screen-of-death.

Abandoned OpenSuSE Tumbleweed for Leap 15.6 beta: much better

In a previous post, I said how I made a huge mistake by “upgrading” to a fresh installation of OpenSuSE Tumbleweed, which came with KDE 6 and Wayland. This broke the KDE window tiling, and every interaction I had with KDE reminded me of what a huge mistake I had made. I’ve done a fresh install of Leap 15.6 beta, and everything is good, back to the way it was before.

Firefox did bark at me that my profile was newer than the previous one; I had to start it with firefox --allow-downgrade.

Also, I lost all my Firefox multi-account containers I had set. Thankfully, I had a previous containers.json file lying around.

But yes, now, everything is working excellently. Well, I haven’t tried Factorio or YouTube videos yet: but the important stuff is working.

Previously, I’d moved off Leap to Tumbleweed because tesseract-ocr was too old. It looks like Leap 15.6 beta ships a pretty recent version.

Reddit + Google partnership seems like a bad idea to me

Exclusive: Reddit in AI content licensing deal with Google

The problem is that (if you live in the USA) your and my tax dollars are spent by national security agencies polluting Reddit with content from sock-puppet accounts to promote certain agendas.

This means that, by design, Google will be training its AI on untrustworthy sources.

Nothing about this plan is wise.

I know that Google does plenty of stupid things accidentally, but this seems willfully stupid.

New OpenSuSE install – whoops, that was a mistake (no KDE tiling window manager) – HUGE mistake

OpenSuSE Tumbleweed was acting squirrelly, so I downloaded an ISO and installed the latest OS from scratch. That was a huge mistake. Now, sometimes my machine spontaneously reboots, and other times windows get blocked for keyboard input.

On the good side, getting back to a working production system was never easier: delete the HDMI sound card and sound works again, add the external repos and codecs, and YouTube works again, add tesseract-ocr and The GIMP, and I can do my web work again. Install my Epson printer, and I can print a document for an upcoming event I’m a volunteer for.

On the bad side, that brand-spanking-new install came with Wayland and KDE 6, which is so new that it doesn’t have automatic window tiling. I hate it.

Whining about a problem isn’t the same as proposing a fix, so here’s what I wish I could fix:

When a new window opens (and it is not a dialog box), re-tile everything on that screen so that everything that showed before still shows, but the new thing, too, takes half the screen. I use “focus follows mouse”, so it is infuriating that as I move my mouse toward the newly opened window, the window underneath activates focus and hides the new window behind it, because the window underneath is full screen. I wouldn’t mind so badly if my old keystrokes worked, and I could shove the full-screen window to half-screen: but that doesn’t work either. The previous behavior, which is what I want, is that the previous full-screen window would automatically resize to the other half of the screen when a new window opens.

This weekend I went to the Southern California Linux Expo, and had thought someone might be able to guide me to a solution. Nope, the KDE guy was anti-helpful, pointing me to a non-KDE solution. Checking it out, it is not what I want. I just want the old KWin tiling script to work.

Self-will got me a brand-new OS installation that frustrates me. Yay. I should have just lived with the squirrelly behavior until I heard the “all-clear” signal from the OpenSuSE forums.

New Debian install; ssh and sudo changes

Similar to what I wrote in New OpenSuSE Tumbleweed cannot ssh in, but this time with Debian. This has to be done from a physical console login on the machine (or, if it is a VM, from the hosting company’s console login desktop service). I’m logged in as root.

apt-get install vim

Debian is pretty bare-metal, man. This is probably very good from a security and stability point-of-view.

cd /etc/ssh/

vim sshd_config

Find PermitRootLogin and uncomment it, and change it to yes

Find #PubkeyAuthentication yes and uncomment it.

Find #AuthorizedKeysFile     .ssh/authorized_keys .ssh/authorized_keys2 and uncomment it and remove the second file authorized_keys2

Find PasswordAuthentication no and uncomment it, and change it to yes. Note that this is temporary!
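For what it’s worth, those four edits could also be scripted. Here is a sketch run against a scratch copy of the stock commented-out lines (I would never run this blind against the real /etc/ssh/sshd_config):

```shell
# Sketch: the same four edits via sed, demonstrated on a scratch copy.
CFG=$(mktemp)
printf '%s\n' '#PermitRootLogin prohibit-password' \
              '#PubkeyAuthentication yes' \
              '#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2' \
              '#PasswordAuthentication yes' > "${CFG}"
sed -E -i \
    -e 's|^#?PermitRootLogin.*|PermitRootLogin yes|' \
    -e 's|^#?PubkeyAuthentication.*|PubkeyAuthentication yes|' \
    -e 's|^#?AuthorizedKeysFile.*|AuthorizedKeysFile .ssh/authorized_keys|' \
    -e 's|^#?PasswordAuthentication.*|PasswordAuthentication yes|' \
    "${CFG}"
cat "${CFG}"
# on the real box, afterwards: sshd -t to check syntax, then restart the
# service (on Debian the unit is named ssh) instead of a full reboot
```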

Save and exit the sshd_config file. I’m not sure which service(s) would need to be restarted here, so I issue the reboot now command and watch the machine reboot. Today’s hardware is amazingly fast, compared to what we lived with a decade ago.

Now, from my remote machine, I ssh in as root. I get asked about accepting the host key, and get prompted for the password. Once I get in, I know I’m good to proceed to the next step.

ssh-copy-id root@host.domain

I get asked to put in my password again, and now my public key is installed on the server, so key-based logins work instead of password-based logins.

I log in as root again, but this time without a password. At this point, I do some customizations per How to make Ubuntu have a nice bash shell like OpenSuSE (although this is Debian). One nice thing is that ~/.bashrc already had an alias ready: ll as an alias for ls -l.

Something I don’t understand is why I cannot copy / paste from the Debian ssh session. My guess is that it has something to do with LS_OPTIONS in the bashrc file. Anyway….

I still needed to add alias ..='cd ..' though.

I log out.

I log in as a non-root user, with a password.

ssh-copy-id user@host.domain

I log in as the non-root user, without a password. Same thing: I add the customizations I like (editing with vim from within less, the .. alias for changing directory up one, and using PageUp to search history). I log out.

I log in as root again. Now, I need to give my non-root user sudo rights.

adduser whatever-the-non-root-user-is sudo

Back to editing /etc/ssh/sshd_config

Find PermitRootLogin and uncomment it, and change it to no

Find PasswordAuthentication yes and uncomment it and change it to no

And then I save and exit the file and reboot the box.

Now I can ssh as the non-root user, and I cannot log in via ssh as root. Also, no-one can attempt to log in with just a password. This is good.

I read your email

… is a bumper sticker a friend of mine gave me about two decades ago. I never did put it on my car because it would (rightly) freak people out. I did hang it up in my cubicle because … if you work for my employer, I may indeed read your email. You see, I’m the e-discovery guy.

Now really, I’m not going to read your email unless there is some lawsuit or public records act request that indicates your email should be included in the discovery. Even then, I’m not going to read any more than I have to, to verify that the e-discovery query I’ve created is operating properly.

Actually reading your email is a paralegal’s job, after I hand over the evidence, er, everything that matches the search query. Whether it qualifies as evidence needs to be determined by someone with legal training: not me!

I should probably mention that this is within a large organization’s email system, and all employees get training during the on-boarding process that email in our system is the property of the organization: there is no right to privacy here. We are a public sector organization, so anyone can file a public records act request for anything in our email system. Don’t do personal stuff in the corporate email!

There are two of us on the email discovery team. Lately, we’ve been working on the email retention project. We’re going to purge email older than each department’s retention period. It is crucial that we don’t purge items that need to be kept. So these last few days, I’ve been calling up people’s old email, and checking that the addresses of senders and recipients match the labels on the email. There’s about five million email to check; we will not be able to check every one. We’re spot checking.

But, in spot-checking, I really am making the bumper sticker come true. It’s generally tedious, too. If there’s an email address I don’t recognize, there might be a clue in the email thread as to which departments this email is between. So I may have to actually read the email, instead of simply scanning the addresses and labels.

This was a long-winded way of saying that a co-worker of mine sent himself an email in 2008 with a link to a web page article. What the heck: I’ll click that link.

Kudos to you techtarget.com – your link still works, fifteen years later. Impressive.

Temporary fix for Nextcloud calendar broken sync

Nextcloud has a nice home page called the Dashboard, which has calendar items and a to-do list on it. But ever since Calendar app version 4.5, it has been broken for items sourced outside of Nextcloud. In other words, if you create a calendar item on your smartphone and sync it in to Nextcloud, you can see the item on the Calendar web page, but it will be missing from the Dashboard home page. The solution is to downgrade the Calendar app to version 4.4.5.

Steps to perform:

  1. In the Nextcloud admin interface, find the Calendar app and disable it
  2. ssh into your Nextcloud instance
  3. cd /var/www/html/nextcloud/apps/
  4. mv calendar calendar-old
  5. wget -q https://github.com/nextcloud-releases/calendar/releases/download/v4.4.5/calendar-v4.4.5.tar.gz
  6. tar xvf calendar-v4.4.5.tar.gz
  7. chown -R user:group calendar
  8. In the Nextcloud admin interface, select the Disabled apps section. Then Enable (but not update) the Calendar 4.4.5 app.
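Steps 4 through 6 can be rehearsed in a scratch directory first; here a dummy tarball stands in for the GitHub download, so nothing real is touched:

```shell
# Dry-run of the mv / untar swap; in real life the tarball comes from the
# wget in step 5 and this happens in /var/www/html/nextcloud/apps/.
WORK=$(mktemp -d); cd "${WORK}"
mkdir calendar && touch calendar/version-4.4.5   # fake the 4.4.5 release tree
tar czf calendar-v4.4.5.tar.gz calendar          # stand-in for the download
rm -r calendar
mkdir calendar                                   # pretend: the installed app
mv calendar calendar-old                         # step 4
tar xf calendar-v4.4.5.tar.gz                    # step 6: fresh calendar/
ls -d calendar calendar-old                      # both directories present
```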

And now, when you go back to your Dashboard home page, your calendar will have all the items on it. 🙂

You do get to apply this fix after every update. 🙁

Technically, this post title is somewhat misleading: sync is not broken. What is broken is that items that sync in from CalDAV sources apparently have something that, when it is present, causes the Dashboard page to skip those calendar items. It just looks like sync is broken because you knew the items were on your calendar: but when you look at the Dashboard for today, they are missing. I suppose a better title would be Temporary fix for Nextcloud calendar (some) items missing from Dashboard

Papa Murphy’s website no longer works after Google block

I mentioned in a previous post how I added a filter to my browsing to block those annoying Google login pop-ups. I had successfully ordered take-and-bake pizza from Papa Murphy’s before implementing this filter. Today, I can no longer order pizza from them.

Even though I had previously placed an order, and can call up that order from my rewards profile, attempting to actually order anything takes me to a Google Maps page to identify where to pick up from. That page never finishes because of the new filter. Every attempt at adding something to my shopping cart fails because the operation cannot get past the check-in-with-google part.

Well, if I have to choose between keeping the filter in place versus ordering take-and-bake pizza, I’m keeping the filter in place. Which is a shame, because the previous pizza order turned out really well, and was reasonably priced.