Proxmox copy of WordPress virtual machine – changing the siteurl

I’ve gone into Proxmox and cloned a WordPress machine to a new machine. I configured DNS and DHCP to assign a new host name for the machine; now I need to get WordPress to understand that too.

Because WordPress stores the site URL inside the database, this means running a MySQL query.

The problem is that the old site URL (because that is what is in the new machine’s database) keeps telling Apache to serve up pages from the old machine. Everything on the new machine needs to resolve at https://tratest.example.com, but because WordPress goes to its database to find out where everything lives, as soon as a page loads it tries to go to https://aawp.example.com.

That machine is powered off in Proxmox, so obviously nothing works.

I can’t really use any tools inside WordPress to do the search-and-replace, because WordPress itself keeps redirecting to the dead machine, so I need something outside of WordPress. I generally do not install phpMyAdmin, because 1) it is extra work to configure Apache to serve up a different website just for this one function, and 2) that becomes just one more place a bored 14-year-old might try to break in. If I don’t need it, why put it out there?

So let’s try some MySQL queries from the command line.

UPDATE wp_options SET option_value = replace(option_value, 'https://aawp.example.com', 'https://tratest.example.com') WHERE option_name = 'home' OR option_name = 'siteurl';
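
Without phpMyAdmin, that query just gets fed to the mysql client on the box itself. Something like the following should do it (the database name wordpress is an assumption on my part; use whatever wp-config.php says):

mysql -u root wordpress -e "UPDATE wp_options SET option_value = replace(option_value, 'https://aawp.example.com', 'https://tratest.example.com') WHERE option_name = 'home' OR option_name = 'siteurl';"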

Nice! I did a restart of Apache, and now the new machine at the new domain name serves up the content from the cloned machine. I know that this worked because the old machine in Proxmox is still powered off.

There are also several other changes I made:

  • hostnamectl set-hostname tratest.example.com
  • edited /etc/hosts and copied the 127.0.1.1 entry to 127.0.2.1 and added the new host name, per Change host name and domain
  • edited the Apache .conf file in /etc/apache2/sites-available/ and replaced the ServerName entry

Let’s Encrypt for my internal domain

It is time to renew my wildcard SSL certificate for an internal domain I have, and here are the steps I went through to solve it. When I say internal domain I’m referring to a DNS domain that exists on the public Internet, but which wholly and only points to the IP address of my home broadband router. That router has pass-through enabled, so that essentially, my pfSense box is my presence on the Internet for everything inside my home.

I turned off HAProxy so that pfSense wouldn’t be sending the challenge traffic to the only internal server I put out there. The internal server, Nextcloud, doesn’t play nice with others; in order to keep things consistent, they want it to be an appliance where the only stuff running on the box is their code. Okay, I get that. This wouldn’t be so annoying if it wasn’t bug-riddled junk that is in a huge rush to implement new features. Can you say “AI”? But I digress.

I created a new Linode API key in case the problem was that the old API key didn’t have access. Well, the first new key had the wrong selector, and resulted in “Your OAuth token is not authorized to use this endpoint”.

The problem is that the pfSense script is trying to generate a challenge key and insert it into a web server that doesn’t exist. The pfSense web admin portal is that web server. When I turned off HAProxy, that should have opened it up. It did, but I couldn’t tell because the Linode API key was wrong.

Okay, maybe I need to log in to the pfSense box and manually use a generated challenge key? How to log in to the pfSense box? When was the last time I did that?

Here’s a convenient command:

 history | awk '{$1="";print substr($0,2)}' | grep "ssh " | grep -v history | sort | uniq

We run the output of the history command through awk to remove the line numbers, then search for "ssh " (the trailing space excludes ssh-copy-id and such), run that through sort, and then through uniq. Et voilà: I have a list of all twelve boxes I’ve logged in to within my bash history.

Sigh: pfSense isn’t one of them.

But this was a good exercise: I did get logged into pfSense, and did find the “Your OAuth token is not authorized to use this endpoint” problem.

I deleted the previous Linode v4 API certificate specifications, and it worked.

Time to turn HAProxy back on.

Okay, the short form is:

  1. Generated a new Linode API access token with Domain read/write access
    • This probably won’t be required if the access token hasn’t expired.
  2. pfSense > Services > HAProxy > Settings > disable and apply
  3. pfSense > Services > Acme > Certificates > pick certificate and Edit > delete the Domain SAN list entry > Add a new Domain SAN list entry with the new Linode API access token > Save
  4. pfSense > Services > Acme > Certificates > pick certificate and hit Renew
  5. Do the other certificate in the list
  6. pfSense > Services > HAProxy > Settings > Enable and apply
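
For what it’s worth, under the hood the pfSense Acme package is doing roughly the acme.sh DNS-01 dance against the Linode DNS API. Done by hand it would look something like the sketch below; the dns_linode_v4 hook name and the LINODE_V4_API_KEY variable are from memory, so treat them as assumptions rather than gospel.

export LINODE_V4_API_KEY="the-new-access-token"
acme.sh --issue --dns dns_linode_v4 -d example.com -d '*.example.com'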

WordPress copy to test environment

I’m a fan of Tenets of IT

Number 15 of which is “Everyone has a test environment, not everyone is lucky enough to have a separate production environment.”

Heh.

This post will be how I copied a production web site to a test environment.

Prerequisites are:

  • A virtual machine server
  • A domain name
  • A wildcard certificate for that domain name

In my case, for the virtual machine server, I bought a used Lenovo Tiny PC from Amazon, loaded it up with RAM and installed Proxmox on it.

I had bought a domain name, really for my Nextcloud instance, but I can also use it for my home lab.

I have a firewall, which can get SSL certificates from Let’s Encrypt via the ACME protocol. I went through the trouble of getting a wildcard certificate, so that any box in my domain can be SSL protected.

The basic steps

  1. Prepare the new machine
  2. Install WordPress
  3. Export WordPress “production” and import to “test”
  4. .
  5. .
  6. .
  7. Profit!

Prepare new machine

  1. Install Debian
  2. Add vim and other configurations
  3. Change host name and domain
  4. Add ssh key login
  5. Install Apache and MariaDB
  6. Install WordPress
  7. Update Apache enabled sites to include SSL
  8. Update Apache default Debian setting
  9. Install one WordPress plugin to import the export
  10. A note about ASE (Admin and Site Enhancements)

Install Debian

This was a Proxmox step, and I think I did it from a .ISO file

Add vim and other configurations

apt-get install vim
update-alternatives --config editor
vim ~/.bash_profile
# lines to add inside ~/.bash_profile:
export EDITOR=vim
[ -r $HOME/.bashrc ] && source $HOME/.bashrc
export PS1='\[\e[32;40m\]\u\[\e[37m\]@\[\e[32m\]\H:\w\[\e[30m\] \[\e[32m\]\$\[\e[30m\] \[\e[0m\]'
vim ~/.bashrc

Find the aliases I want and add them, uncomment them, etc. I always add:

alias ..='cd ..'
vim /etc/inputrc

Find # "\e[5~": history-search-backward and uncomment it
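
After uncommenting it (I uncomment the forward search on the next line as well), the pair looks like this:

"\e[5~": history-search-backward
"\e[6~": history-search-forward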

Change host name and domain

I should mention that my home firewall is also my local DNS resolver. So inside there, I have server1.example.com mapped to the IP address Proxmox gave to my new virtual machine (Proxmox got it from my DHCP server).

hostnamectl set-hostname server1.example.com
vim /etc/hosts

In here, I added an entry for 127.0.1.1 which maps the fully qualified host and domain name to the host. So for example, 127.0.1.1 server1.example.com server1

The address 127.0.1.1 is specified because Apache will try to identify the site by name (later). Everything in 127.x.x.x maps to the local machine, so they all go to the same place; but using 127.0.1.1 avoids a conflict with the 127.0.0.1 entry for localhost.
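
So the top of /etc/hosts ends up looking roughly like this:

127.0.0.1       localhost
127.0.1.1       server1.example.com server1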

Add ssh key login

ssh-copy-id root@server1.example.com

Never, in production, would I routinely log in as root. But this is a test / play environment, and I find it cumbersome to set up an alternative user and then constantly have to su - (switch user) to root. Since this is in my home lab and not visible on the public Internet, it is not much of a risk. And … before I really try anything screwy, I can take a Proxmox snapshot.

Another thing I run into is that, because I can rip and replace virtual machines easily, I tend to have to delete old entries from the ~/.ssh/known_hosts file.

ssh-keygen -R "server1"
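
The fully qualified name gets its own known_hosts entry, so that may need removing too:

ssh-keygen -R "server1.example.com"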

Install Apache and MariaDB

Essentially, I am following the instructions here at Rose Hosting

One change I make is that immediately after installing MySQL, I run the process to secure the MySQL installation:

mysql_secure_installation

Yes, I set the switch to unix_socket authentication. I have zero need for MySQL to authenticate across a network. This is overkill for a home lab, but since this is how production is going to be set, the machine in test should match it.

Install WordPress

I still follow the instructions at Rose Hosting
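
The part of those instructions worth calling out is the database setup. Roughly, it amounts to the following; the database name, user, and password here are placeholders of my own, not necessarily what the guide uses, and they end up in wp-config.php during the WordPress install:

mysql -u root
CREATE DATABASE wordpress;
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
EXIT;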

Update Apache enabled sites to include SSL

The Rose Hosting instructions don’t disable the 000-default.conf web site, so I do that:

a2dissite 000-default.conf

Now, how to make https:// work? Well, there is already a default-ssl.conf file, so all it really needs is a certificate and key, and for Apache to use SSL. The SSL certificate files mentioned there are /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key

I have exported my wildcard certificate and key to my local machine, so now I upload them to those directories, and change the file names referenced in the default-ssl.conf file.

Update the ServerName setting in default-ssl.conf to server1.example.com

Update the DocumentRoot setting in default-ssl.conf to /var/www/html/wordpress
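
After those edits, the relevant part of default-ssl.conf looks roughly like this; the certificate and key file names are whatever I called the uploaded wildcard files, so treat them as placeholders:

<VirtualHost _default_:443>
        ServerName server1.example.com
        DocumentRoot /var/www/html/wordpress
        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/wildcard.example.com.pem
        SSLCertificateKeyFile   /etc/ssl/private/wildcard.example.com.key
</VirtualHost>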

Add rewrite rules to the non-SSL site to redirect to the SSL site. In wordpress.conf add:

RewriteEngine On
RewriteCond %{REQUEST_URI} !\.well-known/acme-challenge
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]

Then check the configuration syntax:

apachectl -t

If this checks out well, that’s nice, but there is still one more thing to add:

a2enmod ssl
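
One more thing to check: Debian does not enable default-ssl.conf out of the box, so if it has not been enabled yet:

a2ensite default-ssl.conf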

Update Apache default Debian setting

This one threw me for a loop – all my redirects were going to 404 error pages.

It turns out that the default setting on Debian has an Apache configuration file with rewrites not allowed.

vim /etc/apache2/apache2.conf

Find my way to the <Directory /var/www/> section. Change the AllowOverride setting from None to All.

The /var/www directory is of course higher up in the directory tree than anything Apache is going to serve. Because it is higher, the AllowOverride None there means Apache ignores the .htaccess files below it – and the rewrite rules WordPress depends on live in .htaccess. Whoops – for WordPress this is no bueno.
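
After the change, that stanza reads:

<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>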

Finally, restart (not reload) Apache:

systemctl restart apache2

Install one WordPress plugin to import the export

For this, I am following the instructions by Ferdy Korpershoek on this YouTube video

(The YouTube front page has become politicized trash, but for technical videos, it still has good stuff one can find).

Essentially, I install the All-in-One WP Migration plugin on the production web site, do an export, and then install a fixed version of the All-in-One WP Migration plugin on the test web site.

Ferdy does clean up the new / fresh web site first, by deleting and emptying the default pages and posts. I also deleted the default plugins.

After the import is done, I need to log in again, because the database was replaced, which is where my login credentials are stored. After getting logged in, I need to save Permalinks twice.

A note about ASE (Admin and Site Enhancements)

I was almost at hooray! But, I have a small security enhancement via Admin and Site Enhancements (ASE) which threw a tiny wrench in the monkeyworks. Yes, in production, I’ve hidden the login URL to somewhere other than normal. So after the import, All-in-One WP Migration (100 GB version) provides a link to update the Permalinks, but because of ASE, that URL was not found. No biggie, I simply had to use the URL that was appropriate for the production web site.

Profit!

And now I get to bask in the glory of messing the heck out of the test (not production!) WordPress web site. Fun times. 🙂

I think I’ll take a Proxmox snapshot first.

Migrated to Manjaro on my main machine

I actually did this several weeks ago. I like it.

What I like about it is that it is a rolling release, so for example, Firefox is version 129.0 – not stuck on an older ESR (Extended Support Release). I wanted the newer Firefox for PDF editing, including graphic file insertion.

I did have to go through all the different KDE plugins I’d installed for tiling window managers, and delete them and the hidden subdirectories they had configuration files in. KDE no longer automatically tiles nicely the way the previous KWin tiling script did; but, I have a few keystrokes assigned to tile right or left, and it is not too bad.

I have not yet figured out the correct installation order for Proton + Wine + Steam + other stuff to get Windows games to install and run on Steam. That was the last thing I’d done on OpenSUSE, and it was very nice. I’m glad I’d seen that it can work – now all I have to do is spend some time experimenting or hanging out in the Manjaro forums to get it to work.

The only thing that is a little broken is that sound sometimes just quits in Firefox. If I hit the volume mute and unmute, it comes back. That is slightly annoying; but if that is the worst thing that happens to me, I’m still a pretty lucky guy.

Quarterly Inventory 2024 – Q2

Dear FutureMe,

Today would be a good day to do a quarterly inventory.

Question: How is your personal life going?

Question: How is your work life going?

Question: How is your volunteer service life going?

Personal Life

There hasn’t really been much change this quarter in my personal life.

One event that was noteworthy was that my 86-year-old mom was in the hospital for three days (well, four, if waiting in the emergency room waiting room counts as “in”), with a C-diff gut biome infection. Apparently, an antibiotic she was taking for an eye infection allowed Clostridioides difficile to run amok.

I’ve been a little sad and depressed about how much WordPress work I need to do. This blog needs to be moved to a new server, but I really don’t want to take along all the crap that the various plugins have added to the database, never to leave. That is a more complicated migration than just shipping all the junk over. “Complicated” is proving disheartening to document and plan. I have another couple of WordPress sites to migrate in the volunteer service life category, too.

I had a need to edit a PDF; I get to submit an application to our local Sheriff’s office to meet with people about to leave incarceration. The good news is that Firefox (my favorite browser) has PDF editing capabilities now. Alas, the PDF application form wants a signature and initials. The bad news is that Firefox ESR is version 115, and adding graphics to PDFs doesn’t show up until version 119. So I need to upgrade.

I tried to upgrade from OpenSuSE Leap 15.6 to OpenSuSE Tumbleweed Slowroll, but the ISO image is broken and spontaneously reboots almost immediately after booting. Now my main machine is broken, big time. I installed Debian with KDE. That worked fine, except guess what else uses Firefox ESR version 115? At least I was back up and functional, but now I need to find a rolling release with KDE.

I tried PCLinuxOS. It was quite amusing to me that they call themselves The Boomer Distribution. Technically, I’m a Boomer, although when the real Boomers were off doing drugs, sex, and rock-and-roll at Woodstock, I was 8 years old and discovering that I liked reading / wasn’t great at sports.

Anyway, I gave up on PCLinuxOS after a week. I couldn’t get Steam to work, and the PCLinuxOS forums don’t let me just sign up: I’d have to email some guy at a Google email address my preferred User ID and password. Yeah, no – I’m out.

I have installed Manjaro. I like it pretty well. I have become a part of the Oh by the way, I use Arch club.

There are still things I want to tweak, to make using it smoother. But it does run Steam, and it does run KDE with Kröhnkite. I’m happy with this.

Work Life

If $39,000 dropped into my lap today, I would retire tomorrow.

One fun thing sprang out of taking on printers and the print server. It had a problem because old printers would be deleted from the print server, but not from the client workstations that used to print to them. Some portion of the 5,000+ workstations printing to 830+ printers are still configured for printers that no longer exist – I’m guessing about 5% of them. Microsoft Windows apparently just pounds on the print server, asking if the (deleted) printer is back online yet. I’d be curious if they abuse their own print servers that way. They probably don’t have a print server, because network printing is hard.

Anyway, I get to bring in a print server log file, parse it for missing (deleted) printers, and generate a service ticket to have the desktops group visit the user and remove the missing (deleted) printer. Of course, one user can have multiple deleted printers, and I don’t want to generate six tickets for six printers for one user. I’ve been getting to keep track of these in a Maria DB database, and doing all sorts of Perl scripting to help me with this mini-project. It has been fun.

The other aspect is that I got print server reboots to work in about a minute; where before it was problematic. If there were more dead printers than Apache threads, the server would get into a deadlock, waiting for the dead printer threads to time out – but the polling cycle was faster than all of them could time out. Oof.

Volunteer Service Life

I still have too many volunteer service commitments. One dropped off on May 19. The event was successful, with 178 people attending from all over northern California. Another dropped off on June 8, when the Founder’s Day Picnic was over. We were hoping to feed 300 people, but only 83 showed up. I had been flying pretty blind on this one. We came out in the black, but only barely.

Migrated from OpenSuSE to Debian on my main machine today

This morning, I hadn’t planned on that, but….

I had a need to edit a PDF. I know that Firefox has the ability to do so; and I filled in information. But then it asked for a signature and initials – I have those in .jpeg form, but Firefox didn’t have the buttons for inserting images. Whoops: Firefox ESR version 115 doesn’t have that, because that showed up in Firefox 119.

Well, it has been a while since Tumbleweed was disappointing because of a lack of KWin tiling script support. I had downgraded from Tumbleweed to Leap 15.6. Maybe I should try Tumbleweed again.

Also, I’d been listening to the FLOSS Weekly podcast over on Hackaday, and their guest Brodie Robertson had mentioned Tumbleweed Slowroll as something new. I kind of liked the idea, so I tried a few steps.

These are listed here, at the official page:

zypper ar https://download.opensuse.org/slowroll/repo/oss/ temp
zypper in openSUSE-repos-Slowroll -openSUSE-repos-Tumbleweed
rm /etc/zypp/repos.d/*.repo # or backup
zypper dup

This got me an empty screen with a blinking cursor. Yay. Not.

I downloaded the Slowroll ISO and put it on a USB stick.

I used the BIOS to choose the USB stick to boot off of, and got the “Install” option. Sure, that can be a little drastic, but I’ve done this many times before. Mostly, it is a little annoying to find that I don’t have an application installed that I’d like to use at the moment. But because my /home is physically on a different drive, I’m safe to not lose any data (reasonable precautions taken).

I go to install Slowroll, and it reboots before starting the installation. The motherboard logo briefly shows, and then I’m back at the “Install” menu item again.

I’ve got a boot loop.

Great. Just great.

Did I mention that I dearly love systemd and journalctl (not). Back in the good old days, something would append to /var/log/messages, and I’d have a chance to figure out what went wrong. But with systemd, the journal is new every boot, and although I can successfully boot to a previous read-only snapshot, there’s nothing there from an aborted installation. Nothing at all. There’s only the current boot messages (which being from a successful snapshot tell me nothing).

Okay, maybe there’s something wrong on my boot drive. Physically disconnect the /home disk drive, boot off of a gparted ISO, and delete every partition on the boot / OS drive.

Try the Slowroll ISO USB again.

I’ve still got a boot loop.

Just great.

I’ve been building some Debian machines, as servers, so I can practice WordPress migrations. I pop in that USB stick, and Debian installs fine.

I still have to wrangle bringing in my /home disk drive and mounting it as /home, but at least it will work.

And here we are, a few hours later, on Debian with KDE. Thunderbird looks pretty different.

It was a little disconcerting that KDE > System Settings > Display and Monitor > Display Configuration > Scale works on each display independently of each other. OpenSuSE applied the scale to both screens simultaneously. I can see why it could make sense that one monitor (say a 4K monitor) might have a different scale factor than another way smaller one. But it was unexpected, so disconcerting. It can be really hard for me to read the screen when the screen is at 100% scale on both large monitors.

It is mildly amusing to me that I get to do How to make Ubuntu have a nice bash shell like OpenSuSE all over again, but for my main machine this time.

Bulk change MP3 file genre

I reset my playlists in Nextcloud. During the rescan, as it imported them, the Music app sorted them into their genre. This might be useful. But one author’s genre was Folk, and really, I’d prefer if it were Instrumental.

I tried changing them from the command line, but id3tools trashed the tags. Really, it was a problem with UTF-8 versus something else. All I really know is that when Nextcloud scanned the files, it got Chinese characters instead of anything useful.

In the end, I used kid3 and EasyTAG to solve the bulk search-and-replace problem. Why both? Because kid3 let me see what I wanted to change, but EasyTAG would let me (bulk) change them.

kid3 let me change tags just fine. The problem is: only one file at a time.

The kid3 interface is rather nice, otherwise. If I hit Ctrl-A, it selects all and reads all the files and all the tags. I had added Genre to this list of columns at the top, so then I could sort on that.

EasyTAG wouldn’t let me change the main page displayed columns, so that was less-than-ideal. But, it does have a Find feature, and everything I selected in the Find window remained selected in the main window.

What EasyTAG has (which is great) is in the genre field for any song, there are two buttons: a drop-down to select which genre, and an Apply All button for everything selected. Excellent! Apply All is precisely what I wanted.

Also, it turns out that if the predefined list of genres doesn’t match what I want, I can just type in my choice. The Apply All button still works for something I typed. EasyTAG didn’t have a Flamenco genre, but I have 85 Flamenco guitar files. That I can type my own genre makes this a trivial problem.

So after doing an Apply All in EasyTAG, I’d go back to kid3 and do a reload, followed by another Ctrl-A. Then I sorted by (whatever). I’d find something that matched all the songs I wanted to alter, and copy that to the clipboard. Then I’d switch to EasyTAG, unselect all, go into the Find screen, paste in the identifier, and search. I’d select all in the Find window, and close the Find window. Then I changed the genre and hit Apply All in the EasyTAG main window.

I think I re-tagged close to 800 songs in about fifteen minutes. Woo! Now, the bulk of my music files are in eleven genres, which becomes a playlist without the manual playlist editing. There are 331 songs in the Instrumental genre list. I would have so hated having to manually make a change 331 times.

WordPress migration notes, part 2

One problem is that I need to install WP-CLI on the new server, and dealing with it is not easy.

The installation instructions don’t say one way or another, but WP-CLI should not be installed as root. Later, if you go to run it as root, it will bark at you that you’re doing a bad thing. Okay, nice to know.

But we do now have the problem that the user who runs WordPress (well, Apache, which runs the PHP code that is WordPress) is the www-data user. I cannot log in as the www-data user, by design (it is a good design). So, how to run this WP-CLI stuff?

sudo -u www-data wp <command>

Okay, this says to run the command as the www-data user – sudo (“substitute user, do”), with the -u option specifying which user – and then run the wp binary with whatever command and options I want it (wp) to execute.

Cool, but the user I’m logged in as has no idea of where the WordPress installation is. So now, every freaking command I have to type, sudo -u www-data wp <command>, also needs --path=/var/www/html/wordpress in there too.

This sucks.

There is supposed to be a file, wp-cli.local.yml, that I can put the path into. But that file is in my directory, and the sudo command switches away from that.

This sucks, still.

The www-data user does have a home directory; but it won’t ever be used, because the account’s shell is /usr/sbin/nologin, which refuses every login. That is secure, but it doesn’t save me from having to type sudo -u www-data wp --path=/var/www/html/wordpress <command> every freaking time I need to do something.

Also, I am a fan of using the page-up key to search my bash history. That works great when I type a few letters, say gre, and hit PgUp to search through my last few grep commands. Do I need to reassign ownership of files I’ve added to /var/www/html/wordpress/? I type chow, hit PgUp, and in a keystroke or two I’ve got chown -R www-data:www-data /var/www/html/wordpress/ ready to run. Ditto the Apache2 enable and disable site commands. There are a ton of examples where just a few keystrokes and the PgUp key are great.

But having to type su and hitting PgUp presents me with a wall of noise, to finally find the command at the end that I actually want to repeat.

This sucks.

So, there is a solution. It seems kludgy, but it works, as long as you are willing to put up with its kludginess.

  1. cd to /var/www/html/wordpress/
  2. create a file, wp-cli.local.yml, in the location where WordPress is installed (where you just did the cd to), and inside it, put:
    • path: /var/www/html/wordpress

So, as long as you are already in the “right” place, and you have this file which points to your “right place”, you don’t have to specify the “right place” on the command-line of WP-CLI.
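
In practice that looks something like this (wp plugin list is just an arbitrary example command):

cd /var/www/html/wordpress
sudo -u www-data wp plugin list   # no --path needed; wp-cli.local.yml supplies it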

The other option is to be in my home directory, and do everything via bash scripts. I wanted to use the command-line, but I may need to put one more level of indirection in the process to get things to work easily. Like I said: kludge.

However, since the bash script has WPPATH="/var/www/html/wordpress" in it, all that sudo -u www-data nonsense goes away. Sure, I’m running it as some random user from some random location, but I assume the WP-CLI people are fine with that, because if a random hacker gets into an ssh session on my box, I’m done for anyway. Why not just assume whoever is running these commands is authorized?
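
A minimal sketch of the kind of script I mean follows; the file name wp-test.sh and the example command are my own invention:

#!/bin/bash
# wp-test.sh -- wrapper so the WordPress path never has to be typed again

# define WordPress path
WPPATH="/var/www/html/wordpress"

# pass whatever WP-CLI subcommand was given straight through,
# e.g.:  ./wp-test.sh plugin list
wp --path="${WPPATH}" "$@"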

This sucks quite a bit less, although it doesn’t make me warm and fuzzy about security.

WordPress migration notes

I have a production WordPress site on Amazon Lightsail that I need to migrate away from. These are notes on how to migrate over only the stuff I want to keep.

Backstory: Amazon Lightsail was very inexpensive, at under $5 per month for hosting on their smallest machine, and it did fine. Two things became problems, however:

  • Bitnami WordPress is super easy to spin up, and everything just works. But upgrading to a newer version of something (say PHP or MySQL) is a non-starter. The only way to upgrade is to spin up a new machine and migrate to it.
  • Amazon recently did a price increase. Now, I can get a Linode machine with double the RAM for only $2 more, and that will include backups.

Okay, so I need to migrate, but over the years, I’ve tried different plugins, and even though many of them were uninstalled, the installation routine left crap in the database. How to migrate to a new server, but leave behind the crap? This will be the topic of this post.

First, I installed WP-CLI; instructions can be found here.

Then, on the new machine, I installed only those Plugins which I know I need.

I took a snapshot backup at this point, simply because it seems prudent.

On the new machine, I logged in with ssh and ran this:

wp --path='/var/www/html/wordpress' db query "SHOW TABLES" --skip-column-names --allow-root

This gives me a list of the tables in the new machine that I want from the old machine.

+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_termmeta           |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+

This is a pretty minimal list; the old machine has a list 362 tables long! Matomo was a particularly egregious offender here.

With this information, I can use a script written by Mike Andreasen over on the WP Bullet website to dump the databases on the old machine:

# set WP-CLI flags
WPFLAGS="--allow-root"

# define path to the database dumps without trailing slash
DBSTORE="/tmp"
# get the name of the database
DBNAME=$(wp config get DB_NAME ${WPFLAGS})

# tables to export; extend this to the full list from SHOW TABLES above
TABLELIST=(wp_posts wp_postmeta)

# create the temporary directory for storing the dumps
mkdir -p ${DBSTORE}/${DBNAME}

# loop through tables and export, log details to /tmp/mysqlexport-<database>.txt
for TABLE in ${TABLELIST[@]}
do
    # export the table
    wp db export ${DBSTORE}/${DBNAME}/${TABLE}.sql --tables=${TABLE} ${WPFLAGS} | tee /dev/stderr
done > /tmp/mysqlexport-${DBNAME}.txt

With this done, I scp the files from the old machine to my local machine. Then I scp them up to the new machine. The next script assumes they are in the sql directory in the wordpress folder.
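
Roughly, with placeholder host names, and olddb standing in for whatever ${DBNAME} turned out to be:

# run from my local machine
scp -r oldserver:/tmp/olddb ./sql
scp -r ./sql root@newserver:/var/www/html/wordpress/sql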

I tried it, but I should have taken a snapshot, first. 😉

I need to search-and-replace all instances of the old domain name in the MySQL dump files, and put in the new domain name. Technically, once the actual switch happens, the new machine will be found at the old name, so this shouldn’t be necessary. But, the whole reason for migrating to a development machine is to test out this migration process. And the new machine does have a different domain name.
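
Something along these lines handles the dump files themselves (both domain names below are placeholders):

cd /var/www/html/wordpress/sql
sed -i 's|https://www.oldsite.example|https://test.newsite.example|g' *.sql

One caveat: a plain sed does not fix the string lengths inside serialized PHP data, so wp search-replace after the import is the safer tool for anything serialized.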

The script to upload the MySQL dumps looks like this:

# define WordPress path
WPPATH="/var/www/html/wordpress"

# loop through all of the dump files and import them
for DUMP in /var/www/html/wordpress/sql/*.sql;
do
    wp db import ${DUMP} --allow-root --path=${WPPATH}
done

But, until the data is cleaned up, the new WordPress website gets the dreaded white-screen-of-death.