WordPress Gutenberg is getting worse

A lot of this blog is made up of entries to help myself with some task. I like to copy / paste commands that I don’t want to memorize. If those commands help someone else trying to do the same task, that’s wonderful.

Copy / paste has been a bit of a chore on WordPress, however. I’ve tried three different plugins. The first one worked for a while, but then broke. It was based on WordPress shortcodes. I don’t recall if it was WordPress that upgraded and broke the plugin, or if it was the plugin that upgraded and broke. Whatever: the shortcode stopped working.

The second plugin worked at least once, but then broke after an update of some sort. It was supposed to work either by a shortcode or by text formatting. I’m pretty sure the text formatting was supposed to be “inline code”. When the plugin saw text marked up that way, it added the copy-to-clipboard function. It was pretty frustrating to go back and edit some old posts, only to have them trashed up less than a month later, without copy-to-clipboard access.

This third plugin, Copy Code To Clipboard, works well, and it is based on the /preformatted text attribute.

I don’t recall if this is the way it always was, but: it appears that this only works with whole blocks now. You can have inline code or keyboard input within a paragraph, but you cannot have /preformatted within your paragraph.

But, the /preformatted block type is just implementing the HTML tags <pre> and </pre>

So, I can edit in HTML mode and insert it that way, right?

Where WordPress has made things worse is that now, <pre> and </pre> get a forced <br></br> inserted immediately before the <pre> and immediately after the </pre>

And it doesn’t put those codes in the HTML. It just sneaks them in there and taunts me with the extra lines before and after every piece of text I want copy-to-clipboard for.

Thanks, WordPress developers: I hate it. You’ve made the world a worse place.

And another thing ….

This showed up many months ago, shortly after Gutenberg became official: Ctrl-K for creating an anchor (link) used to be great. On another web site I maintain, we have an old kludgy events calendar plugin, and it still works great there. That events calendar plugin does not use Gutenberg.

All I want for Ctrl-K is to highlight the text to form a link, hit Ctrl-K to start the anchor creation, hit Ctrl-V to paste in the URL, and hit <Enter> to finish the anchor.

Guess what no longer works in Gutenberg? Hitting <Enter> to finish the anchor.

I am always so very overjoyed when I have to finish an operation by grabbing my mouse and finding the stupid little button to click to indicate that I want to finish creating the anchor. I’m editing an anchor: there’s really not that much more that I can do here.

Like what the heck was the <Enter> key supposed to otherwise signal?

In the current Gutenberg, it is simply a no-op. Useless. A waste of a keystroke. Until I find the stupid mouse cursor and click on the stupid little submit button, the anchor is incomplete. All editing has come to a stop, until I do some freaking mouse work.

Thanks, WordPress developers: I hate it. You’ve made the world a worse place, again.

WordPress migration notes, part 2

One problem is that I need to install WP-CLI on the new server, and dealing with it is not easy.

The installation instructions don’t say one way or another, but WP-CLI should not be installed as root. Later, if you go to run it as root, it will bark at you that you’re doing a bad thing. Okay, nice to know.
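For reference, installing it boils down to a few commands; this is a sketch based on the official instructions (check wp-cli.org for the current download URL before trusting it):

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
wp --info   # sanity check, run as a regular user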

But we do now have the problem that the user who runs WordPress (well, Apache, which runs the PHP code that is WordPress) is the www-data user. I cannot log in as the www-data user, by design (it is a good design). So, how to run this WP-CLI stuff?

sudo -u www-data wp <command>

Okay, this says to switch to user www-data (sudo = switch user and do; the -u option specifies which user, in this case www-data), and run the wp binary with whatever command line options you want wp to act on.

Cool, but run from wherever I happen to be logged in, WP-CLI has no idea where the WordPress installation is. So now, every freaking command I type, sudo -u www-data wp <command>, also needs --path=/var/www/html/wordpress in there too.
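Put together, a single command ends up looking something like this (plugin list is just an example subcommand):

sudo -u www-data wp plugin list --path=/var/www/html/wordpress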

This sucks.

There is supposed to be a file, wp-cli.local.yml, that I can put the path into. But that file is in my directory, and the sudo command switches away from that.

This sucks, still.

The www-data user does have a home directory, but it won’t ever be used because the account’s shell is /usr/sbin/nologin. That is secure, but it doesn’t save me from having to type sudo -u www-data wp --path=/var/www/html/wordpress <command> every freaking time I need to do something.

Also, I am a fan of using the page-up key to search my bash history. That works great when I type a few letters, say gre, and hit PgUp to search through my last few grep commands. Do I need to reassign ownership of files I’ve added to /var/www/html/wordpress/ ? Type chow and hit PgUp, and in a keystroke or two, I’ve got chown -R www-data:www-data /var/www/html/wordpress/ ready to run. Ditto the Apache2 enable and disable site commands. There are a ton of examples where just a few keystrokes and the PgUp key are great.

But typing su and hitting PgUp presents me with a wall of noise, where I have to read to the end of each long line to find the command I actually want to repeat.

This sucks.

So, there is a solution. It seems kludgy, but it works, as long as you are willing to put up with its kludginess.

  1. cd to /var/www/html/wordpress/
  2. create a file, wp-cli.local.yml, in the location where WordPress is installed (where you just did the cd to), and inside it, put:
    • path: /var/www/html/wordpress

So, as long as you are already in the “right” place, and you have this file which points to your “right place”, you don’t have to specify the “right place” on the command-line of WP-CLI.

The other option is to be in my home directory, and do everything via bash scripts. I wanted to use the command-line, but I may need to put one more level of indirection in the process to get things to work easily. Like I said: kludge.
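A minimal sketch of such a wrapper script (the file name and the pass-through of arguments are my convention, not anything WP-CLI requires):

#!/bin/bash
# wp-here.sh -- run WP-CLI against the known WordPress install,
# passing along whatever subcommand and options were given
WPPATH="/var/www/html/wordpress"
wp --path="${WPPATH}" "$@"

So ./wp-here.sh plugin list works from wherever I happen to be.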

However, since the bash script has WPPATH="/var/www/html/wordpress" in it, all that sudo -u www-data nonsense goes away. Sure, I’m running it as some random user from some random location, but I assume the WP-CLI people are just fine with that, because if a random hacker gets into an ssh session on my box, I’m done for anyway. Why not just assume whoever is running these commands is authorized?

This sucks quite a bit less, although it doesn’t make me warm and fuzzy about security.

WordPress migration notes

I have a production WordPress site on Amazon Lightsail that I need to migrate away from. These are notes on how to migrate over only the stuff I want to keep.

Backstory: Amazon Lightsail was very inexpensive, at under $5 per month for hosting on their smallest machine, and it did fine. Two things became problems, however:

  • Bitnami WordPress is super easy to spin up, and everything just works. But upgrading to a newer version of something (say PHP or MySQL or something) is a non-starter. The only way to upgrade is to spin up a new machine and migrate to it.
  • Amazon recently did a price increase. Now, I can get a Linode machine with double the RAM for only $2 more, and that will include backups.

Okay, so I need to migrate, but over the years, I’ve tried different plugins, and even though many of them were uninstalled, the installation routine left crap in the database. How to migrate to a new server, but leave behind the crap? This will be the topic of this post.

First, I installed WP-CLI; instructions can be found here.

Then, on the new machine, I installed only those Plugins which I know I need.

I took a snapshot backup at this point, simply because it seems prudent.

On the new machine, I logged in with ssh and ran this:

wp --path='/var/www/html/wordpress' db query "SHOW TABLES" --skip-column-names --allow-root

This gives me a list of the tables in the new machine that I want from the old machine.

+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_termmeta           |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+

This is a pretty minimal list; the old machine has a list 362 tables long! Matomo was a particularly egregious offender here.

With this information, I can use a script written by Mike Andreasen over on the WP Bullet website to dump the databases on the old machine:

# set WP-CLI flags
WPFLAGS="--allow-root"

# define path to the database dumps without trailing slash
DBSTORE="/tmp"
# get the name of the database
DBNAME=$(wp config get DB_NAME ${WPFLAGS})

# tables to export (list the ones you want to keep)
TABLELIST=(wp_posts wp_postmeta)

# create the temporary directory for storing the dumps
mkdir -p ${DBSTORE}/${DBNAME}

# loop through tables and export, log details to /tmp/mysqlexport-<database>.txt
for TABLE in ${TABLELIST[@]}
do
    # export the table
    wp db export ${DBSTORE}/${DBNAME}/${TABLE}.sql --tables=${TABLE} ${WPFLAGS} | tee /dev/stderr
done > /tmp/mysqlexport-${DBNAME}.txt

With this done, I scp the files from the old machine to my local machine. Then I scp them up to the new machine. The next script assumes they are in the sql directory in the wordpress folder.
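The copying itself is plain scp; the hostnames and the dump directory name here are placeholders (the dumps land in ${DBSTORE}/${DBNAME} from the script above):

# old server -> local machine
scp bitnami@old-server:/tmp/mydatabase/*.sql .
# local machine -> new server, into the sql directory the next script expects
scp *.sql user@new-server:/var/www/html/wordpress/sql/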

I tried it, but I should have taken a snapshot, first. 😉

I need to search-and-replace all instances of the old domain name in the MySQL dump files, and put in the new domain name. Technically, once the actual switch happens, the new machine will be found at the old name, so this shouldn’t be necessary. But, the whole reason for migrating to a development machine is to test out this migration process. And the new machine does have a different domain name.
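A blunt way to do the replace on the dump files (domain names are placeholders; note that a raw text substitution like this can break PHP-serialized strings, which is exactly why tools such as wp search-replace exist):

sed -i 's/old-domain\.tld/new-domain.tld/g' /var/www/html/wordpress/sql/*.sql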

The script to upload the MySQL dumps looks like this:

# define WordPress path
WPPATH="/var/www/html/wordpress"

# loop through all of the SQL dump files
for DUMP in ${WPPATH}/sql/*.sql;
do
    wp db import ${DUMP} --allow-root --path=${WPPATH}
done

But, until the data is cleaned up, the new WordPress website gets the dreaded white-screen-of-death.

WordPress media upload in wrong folder

This one is a little weird. I had inherited a web site; I do some volunteer service, and the original web site was done in FrontPage 98. People who used the web site knew the URL to a particular file on that web site: the meetings directory.

Later, a member showed me an app that was super useful; but the best way to implement it was as a WordPress plugin. I guess I’m learning WordPress now (not that I was a fan of FrontPage 98: good riddance).

After the conversion, the members that knew the URL to the meeting directory complained that their bookmarks were broken. Fair enough, I had broken them. I got a redirector plugin, and created a 301 redirect from the known URL to the new location.

But there was a problem with the new location: my default WordPress URL scheme for the uploads folder includes putting year and month in the URL. So an upload today would be in ./uploads/2023/04/

What’s going to happen next month, when there is a new meetings directory file? It isn’t going in the April folder, I can assure you. Am I going to have to update the redirect every single month?

I’ve been doing computers for 40 years. Having to update the redirect every single month is stupid. Why can’t I just move the file to the root of the uploads directory?

Well … turns out WordPress needs to have a database entry for every file. I can move the file, but that orphans that entry in the database.

Even if I do move it, how does the old office manager update it? A regular old Media Library upload will upload the new file to a dated folder, and now we’ve got two files with the same name but different locations and URLs.

I had to find a plugin that does media file replace (in-place), but that wasn’t too hard. I use Enable Media Replace by ShortPixel. It was pretty easy to train the old office manager to follow the steps: click on the file in the Media Library, find the Replace button, and follow the directions on screen.

That was six years ago. This morning, the new office manager deleted the file. She had the presence of mind to recognize that something was wrong; but not enough to halt before doing damage. The new meeting directory file now has the wrong name and the wrong URL.

I kind of hate WordPress for the permissions trouble. What looked to be simple with the WordPress CLI (command line interface), wp media import, did not work. ./wordpress/bin/wp would only ever get Permission denied. I should probably mention that the “user” I’m logged in as is not the same user as the one who runs the web site and has access to all the files.

Here are the steps I had to take to repair the damage. I got to figure them out; hopefully you will find them useful. And if the file gets damaged again, I’ll have these instructions for a quick repair.

  1. ssh into the server and find the uploaded file. In this case it will be in ./uploads/2023/04/
  2. rename the file to the old file name.
  3. move the file from ./wordpress/htdocs/wp-content/uploads/2023/04/ to ./wordpress/htdocs/wp-content/uploads/ (steps 2 and 3 are sketched as shell commands after this list)
  4. Delete the file in the WordPress Media Library (web page). WordPress will still show you the file, because it isn’t looking at the file, it’s looking at its database entries about the file. It looks like the file is there, but it’s a phantom. Delete it.
  5. Back to the server command line prompt: change the file system permissions to be way too permissive.
  6. ./wordpress/bin/wp media import ./wordpress/htdocs/wp-content/uploads/file.pdf --path=./wordpress/htdocs/ --skip-copy
  7. change the file system permissions back. ASAP.
  8. When you look at the WordPress Media Library (web page), you will see your file again – but this time it has a non-time-stamped URL. Huzzah! Paste that link into whatever page needs to serve it up. In my case, since I moved the new file to where the old file was, the links were still good.
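For steps 2 and 3, the shell work looks roughly like this (the file names are placeholders; adjust the paths to your install):

# step 2: rename the fresh upload back to the well-known file name
mv ./wordpress/htdocs/wp-content/uploads/2023/04/april-upload.pdf ./wordpress/htdocs/wp-content/uploads/2023/04/meetings-directory.pdf
# step 3: move it up to the root of the uploads folder
mv ./wordpress/htdocs/wp-content/uploads/2023/04/meetings-directory.pdf ./wordpress/htdocs/wp-content/uploads/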

How to give way too many permissions (this is terrible):

sudo chmod -R 777 /path/to/folder/wordpress

How to fix the permissions:

find ./wordpress -type d -print0 | sudo xargs -0 chmod 755

find ./wordpress -type f -print0 | sudo xargs -0 chmod 644

The Helm migration is complete

As I mentioned before, The Helm email appliance company is calling it quits, which I understand. If the business isn’t going to make it, it is better to pull the plug than just keep letting things linger. Best of luck to them on their next adventure.

So, what did I do?

  • (there was a detour while Amazon pissed on their customers wanting to run Mail-In-A-Box) (me)
  • I provisioned the smallest Ubuntu 22.04 LTS machine that Linode has.
    • Mildly annoyed that it doesn’t really support LVM (Logical Volume Manager); they have a backup service that runs an agent inside their machines, and that agent doesn’t do LVM. Still, I know that I’m going to need to grow disks, so I had to learn how to re-partition the Linode so I could do LVM. LVM done.
  • I made a mail server on the Linode machine at a domain name I have that I don’t really use. I followed the excellent guide from Christoph Haas at workaround.org: ISPmail guide for Debian 11 “Bullseye”
  • I got RoundCube webmail working for the domain name; complete with SPF and DKIM.
  • I got Thunderbird to send and receive from the domain name.
  • Then I added Nextcloud to the same box. I wanted CalDAV and CardDAV for calendar and contacts, when I eventually hook my iPhone to it.
    • The Nextcloud documentation really needs a lot of work here. If I were retired, I would like to help them with their documentation.
    • Finally, I have the files.example.tld function of The Helm replaced, although at a different domain name.
    • Rspamd uses Redis, but so does Nextcloud. But one uses the network stack, and the other, Unix sockets. Get them both set the same way.
  • Then I added Duplicati backup. This wasn’t great, as it added a ton of overhead in the form of Mono, just for a graphical user interface.
  • I realize that I’m going to want to host my WordPress here too. I don’t want to have to wrangle four Let’s Encrypt SSL certificates, one for each domain. What about a single wildcard SSL certificate?
    • Yes, that can be done, but: my domain name registrar doesn’t support it. Linode does, though. I install the Linode DNS agent on my machine, and spin up Linode DNS servers to do the DNS work. I have to configure my domain name registrar to tell the rest of the world that Linode is where my name servers are.
    • Somewhere in there I installed the Unbound DNS resolver. Looks like I need this on my home machine, too, for Home Assistant.io [1]
  • I got to the point where I could request the domain name transfer. Turns out the people at The Helm were going through Gandi.net. Gandi.net took as long as they legally could before actually doing the transfer.
    • Gandi -> registrar, then the registrar points to Linode. Linode DNS needs to be reconfigured for SPF and DKIM. I had gotten some DNS records wrong, too.
  • Thunderbird to connect to the mail.domain.tld, and though the name hasn’t changed, everything underneath has. Thunderbird is not happy; I lose all my old mail.
    • Well, I didn’t, but it is in a new folder now, so that I’ve got an old version of my mailbox and a new version of my mailbox, and they are separate. Not ideal. Perhaps I could have done an IMAP to IMAP transfer, if I hadn’t already moved the domain name.
  • Hey, looky there: one of the volumes filled up (but everything else was unaffected). Time to grow a disk using LVM (commands sketched after this list).
  • iPhone to connect to CalDAV; phew that was not well documented and had tons of conflicting information.
  • Not really happy with Duplicati, so I remove it and Mono, and install Restic backup instead.
  • Okay, so the last thing left to do is to migrate this blog from Amazon to this new Linode machine. The transfer using NS Cloner goes well, as it usually does. But domain names need to be updated via Let’s Encrypt certbot.
    • Crud. I’m on holiday out of town with family, and have only a Windows laptop with me. Per best practice security protocols, I can only ssh in from home. Logging in via root@ is blocked, and I don’t think I can even do a ssh-copy-id without getting in first and lowering the root login barrier. The certbot to add gerisch.org to the domains list is going to have to wait.
  • Here I am, at home, and I’m done. Dovecot, Postfix, RoundCube, Nextcloud, and WordPress all on one box.
  • While I was on holiday, I took the .mp3 files on the Nextcloud, and made Nextcloud Music Player playlists for the different types of files. Then on the 16 hour drive home, my iPhone logged in to the Nextcloud web interface and played playlists.
    • It’s a bit of nirvana to me, to have a large list of songs (randomized of course) playing absolutely advertising-free because I paid for the songs in the first place.
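Growing the volume comes down to extending the logical volume and resizing the filesystem; a sketch, with placeholder volume group and logical volume names:

# add space to the logical volume and grow the filesystem in the same step
sudo lvextend --resizefs -L +5G /dev/vg_main/lv_mail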
  1. I ended up not connecting Home Assistant to their cloud ↩︎

WordPress initial install error: “Cannot select database”

The full error is

Cannot select database

The database server could be connected to (which means your username and password is okay) but the database could not be selected.

What is actually wrong is that you don’t have a file wp-config.php

From what I gather, it used to be that wget http://wordpress.org/latest.tar.gz would bring in a .tar.gz file which contained wp-config.php. That file isn’t there any more in the source.

In the old scheme, the installer would modify it with the user name, password, database table name and then proceed with the rest of the installation.

If I had to guess, I’d guess the new scheme is supposed to do cp wp-config-sample.php wp-config.php and then the installation picks up as it did before (modifying it with the user name, password, database table name); then proceeding with the rest of the installation.

Someone got the idea that instead of maintaining two wp-config files, they could maintain and ship one, and then copy it during install. This is a good idea: makes the source a tiny bit smaller, saving storage and transfer bytes. Just one thing though: do the copy, stupid, and check your results. Err out in a rather ugly mess if you didn’t get the copy right – then at least you’d hear about it mightily if you got it wrong.

The solution is to manually copy the file, edit it with the user name, password, and database table name, and then try to install again, twice.

If you simply copy wp-config-sample.php to wp-config.php and then run the install, it’s going to bark at you that wp-config.php already exists. Also, it is not going to ask you for the user name, password, and database table name. Since you already had to fuck around with the wp-config.php file, surely you already took care of the user name, password, and database table name.

So,

  1. start the install from scratch
  2. copy the file wp-config-sample.php to wp-config.php (steps 2 and 3 are sketched below the list)
  3. edit the new file, supplying database table name, user name, and password
  4. start the install from scratch again and let it bark at you that the new file already exists
  5. click the try again link.
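Steps 2 and 3 from the command line look something like this (assuming the install lives in /var/www/html/wordpress; use whatever editor you like):

cd /var/www/html/wordpress
cp wp-config-sample.php wp-config.php
nano wp-config.php   # set DB_NAME, DB_USER, and DB_PASSWORD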

Finally the “famous five minute install” is done after you spent thirty minutes in frustration finding this post and not doing what the documentation says.

Personally, I think it is low quality programming to leave this bug in the basic install process. It’s been there for months. So, what? No-one at Automattic tests the installer any more?

PHP Upgrade for Bitnami Lightsail WordPress

Turns out the way to upgrade is to spin up a new box (or two) and migrate.

Step 1) Spin up a new instance. At the moment I’m using Amazon Lightsail.

Step 2) assign a DNS entry to it. At the moment I’m using Hover. I do have the DNS entries set to a 15 minute time-to-live. Whatever IP address that Lightsail assigned is what I put into Hover.

Step 3) Set the new machine to know its new host name.

  1. Of course, the what-used-to-work is different now. The command is now sudo /opt/bitnami/bncert-tool

Step 4) Get logged in to the new instance of WordPress. BTW, the login user name has changed. It used to be bitnami; now it is user

Step 5) Update WordPress to the current version, if it’s out of date.

Step 6) Delete the plugins in the base image that won’t be migrating over. BTW, one of the plugins, TaxoPress, apparently had a different name prior to being updated and would err out instead of deleting. Do upgrade the ones I’m keeping.

Step 7) I use NS Cloner and NS Cloner Pro to migrate between servers. I like the people there; they did actually help me when I was having an error getting it to run. I was migrating a site with All-in-One Event Calendar by Time.ly and apparently that plugin just does not play nice with database records or something. I am lucky that I bought a license a long time ago; since then they have had to raise their prices. As a tool, it has been working great, but the price increase was really steep. If I did this for a living, I’d have no qualms about paying the annual license fee.

And then ….

The problem is that I just migrated gerisch.org to davidgerisch.xyz, but I really want the web site on gerisch.org

Okay, so there are two ways out of the problem here.

Alternative 1 is to go to the old gerisch.org and run sudo /opt/bitnami/bncert-tool and change it to something else, then go to davidgerisch.xyz and run sudo /opt/bitnami/bncert-tool and change it to gerisch.org AND THEN do a database search and replace to swap out davidgerisch.xyz for gerisch.org – all on the new machine. My experience with these sorts of database search-and-replaces hasn’t been wonderful. There’s also the problem of being logged in to the web site I’m changing the name of; at some point I cut off my own feet while I’m trying to stand on them (DNS-wise).
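(For what it’s worth, WP-CLI has a search-replace command for exactly this kind of swap; a dry run from inside the WordPress directory shows what it would touch before committing to anything. The domain names below are just the ones from this example.)

# dry run first; drop --dry-run to actually rewrite the database
wp search-replace 'davidgerisch.xyz' 'gerisch.org' --dry-run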

I went with alternative 2:

  1. In Lightsail, detach the static IP that gerisch.org is pointing to.
  2. In Lightsail, delete machine (old) gerisch.org
  3. In Lightsail, spin up (new) gerisch.org
  4. In Lightsail, attach the static IP for gerisch.org to this new machine.
  5. Run sudo /opt/bitnami/bncert-tool to assign the machine its new name, gerisch.org
    1. Note that with alternative 2, the Hover domain name registration hasn’t changed. The public IP is on a different box (running an out-of-the-box WordPress install), but from the DNS point of view, this is simpler – nothing has changed. DNS name gerisch.org is still pointing at the same IP address it always has.
    2. With alternative 1, I had the problem that the old box “knew” it was gerisch.org, so I had to run bncert-tool to change it to something else. If I didn’t, and I just ran bncert-tool on the new box, trying to claim gerisch.org, the Let’s Encrypt people would complain, correctly, that this domain name is currently in use on a box it can talk to right now, and that box has a different IP address. Am I trying to steal its identity?
  6. Do the top steps 4, 5, 6, and 7 again: Update WordPress, plugins, and migrate with NS Cloner Pro.
  7. Delete the running machine davidgerisch.xyz – it was only ever going to be a temporary container anyway.
  8. Change all the Hover entries to point to the same IP as gerisch.org

My site is pretty small, so the migration with NS Cloner Pro takes under five minutes. If I had more data and it was going to take longer, I’d probably figure out how to enable FTP so that NS Cloner Pro could use that.

Advertising sucks (again)

I don’t know how much money there is in tracking people and selling their online profiles / behavior patterns. My guess is that a huge amount of folly has people convinced that their folly is worth it. I hope that they are severely disappointed.

I first noticed with WordPress that Automattic (the company behind WordPress) really wants to track your every move. They run the Gravatar system, and it is something that you cannot opt out of. You as a WordPress admin were not allowed local profile pictures – you had to use Automattic’s avatars or use nothing. And now it’s gotten worse. Your web site won’t run right without reporting in to the Automattic servers.

Every visit of yours to any WordPress site will generate a “hit” of you going to that web site. It’s worse than cookies, because at least you can delete your cookies.

What I’ve noticed is that if I have uBlock Origin turned on and “Block remote fonts” turned on, then WordPress does not render the admin panel correctly. Remote fonts are a way for the web site to get your machine to “phone home” to someone else’s servers. Apparently, Dashicons have been a thing since WordPress 3.8.

Why should my web site make a call to Automattic’s servers just because you visited my web site? It does that with Gravatar (unless I try really hard to block that).

Other web sites appear broken when remote fonts are turned off.

I have a hard time believing that there is any good value to me for my web browser to retrieve on every page load an image file from a remote server just to show a button.

Bitnami phpmyadmin

Just a quick note for me to easily find and remember how to access phpMyAdmin on a Bitnami WordPress instance

From the command line on my local machine:

ssh -4 -N -L 8888:www.gerisch.org:443 -i $insertpathtopemfilehere nottheadmin@gerisch.org

And then in a browser:

https://www.gerisch.org:8888/phpmyadmin

Lastly, remember that the login name to phpmyadmin is root (not the Bitnami application password, or any other user name).

Because public Internet access to phpMyAdmin would be a Very Bad Idea, the Bitnami WordPress image is configured such that phpMyAdmin refuses to run if the requests don’t come through www.gerisch.org

This is a good idea.

But what that also means is that I need something listening on my www.gerisch.org address, that can forward the network traffic to the remote web server.

ssh -4 says use IP v4 addresses only (suppresses IP v6 errors if your machine doesn’t have that).

ssh -N says do not execute remote commands (all we’re going to be doing here is port forwarding).

ssh -L says local to remote port forwarding will be done.

8888:www.gerisch.org:443 says: the local port to listen on is 8888, and the traffic arriving there gets forwarded to www.gerisch.org on port 443 (https instead of http). Another way of thinking about this is that your web browser throws its HTTP GETs and PUTs at port 8888, since that is the port the tunnel is listening on; when the traffic is thrown across the Internet, ssh throws it at www.gerisch.org on port 443. Yet, www.gerisch.org:443 is really just a front for gerisch.org:443

ssh -i says to use a public/private key pair for logging in (instead of a password). $insertpathtopemfilehere is the variable that holds the path to the .pem file.

ssh nottheadmin@gerisch.org is the actual remote login name and server name.

COVID-19, new water heater, WordPress annoyances, Zoom meetings, oh my

Wow a lot of stuff has happened since my last post. I’m still catching up; but, I didn’t want to go too long without pointing out I’m still alive.

COVID-19: Johns Hopkins University has some computer science students who are doing data gathering and mapping that on to ArcGIS. The web page works as a status report of where we are today. Thanks to Ars Technica for the original article.

Today, Italy went over the 10,000 dead mark.

New Water Heater: I went two weeks without hot water. I am grateful this was before COVID-19, because I used my gym membership for my daily hot shower. In fact, a friend of mine, way back when, pointed out that if you ever go homeless, a gym membership is a way to stay human for around $20 per month.

And now the gyms are closed due to COVID-19. Well that hurts the homeless even more.

The whole water heater debacle deserves a post of its own, so I will do that later.

WordPress Annoyances: there are things that don’t work, and the WordPress Support Forums are a mass of dead and empty posts of people asking for help. Other forms of help don’t seem to exist, either.

I want to migrate between sites, and from single-site to multisite, but man this stuff just does not work.

Zoom Meetings: Man oh man, I wish I had listened to my stock picking guys when they said Zoom was the new hotness in video conferencing over the Internet. Zoom stock price has nearly doubled since then. And now, even I use Zoom, and I know of three people who signed up to pay a monthly subscription. By the way, Discord is pretty cool, too.

Microsoft should be ashamed of themselves that they couldn’t leverage their leadership with Skype and Teams into being the industry leader. Of course Google had a shot way back when with Hangouts, too. Google, though, has just been kind of a big failure at getting anything done since acquiring DoubleClick and abandoning the whole “Don’t be evil” motto.