New OpenSuSE installation – Facebook videos / GIFs don't play

This is just a reminder to myself: when I install a fresh OpenSuSE and Facebook videos don’t play (but YouTube does), the solution is to go into software management and add libxmp4 and MP4Tools. I think that was it. It was important to allow vendor change, and I had to click through a lot of acknowledgments for that (there’s a sketch of the command-line equivalent after the list below).

I see that I changed:

  • libxmp4
  • libwx (lots of related packages)
  • bento4
  • MP4Tools
  • ffmpeg-4
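
For reference, the command-line way to pull multimedia packages over to the Packman vendor – which is what all that vendor-change acknowledging accomplishes in the GUI – looks something like this sketch. It assumes the Packman repository is already added under the alias packman:

# Switch already-installed packages to the Packman vendor (repo alias is an assumption):
sudo zypper dup --from packman --allow-vendor-change

# Then the specific packages listed above:
sudo zypper install libxmp4 MP4Tools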

Previously, I had added the Adobe Flash stuff, and that had fixed some of the trouble, but not all of it.

New motherboard

For my birthday, I bought myself a new motherboard. The previous motherboard was speedy, but not stable. I pulled out an ASRock X370 Taichi and put in an MSI X470 Gaming Plus.

I kept everything else the same: RAM, video card, storage. So far, the MSI motherboard is performing admirably.

Three little snags I ran into:

  • Backups were a pain.
  • Sound card appeared to not work (but probably did).
  • Btrfs is not reconfiguration-friendly.

I never did get Clonezilla to work as a backup. I’d bought an external USB hard drive from Costco last year, I think. No matter how many times I tried to put a partition on it, it would error out. I think this was because the drive had an MBR (Master Boot Record) configuration on it instead of GPT (GUID Partition Table). Ultimately, I booted off of a GParted live DVD and wiped the external USB drive that way, then created an ext4 partition spanning the whole 5 terabytes. From there, I rebooted back into OpenSuSE and used rsync. Specifically:

su -
# Mount the 5 TB external USB backup drive (it showed up as /dev/sdc):
mount /dev/sdc1 /mnt
# -a archive mode, -A preserve ACLs, -X preserve extended attributes, -v verbose:
rsync -aAXv /home /mnt

That took a while. But after I got a backup of my home directory, I was free to start taking apart hardware. 🙂

But yeah, I started around 9:00 AM, and only got the good backup going by 11:30 AM. Cryptic error messages are cryptic.

The Taichi motherboard removal actually went reasonably easily.

What did delay me a little bit was that when I first installed the Noctua NF-S12A PWM system fan, I installed it 90° off; the cable from the fan was about a finger’s width too far away from the motherboard connector. Although it was super easy to remove the Noctua – it has rubber posts and grommets instead of screws (which make it super quiet) – putting it back into the case was slightly difficult. During the initial build, the fan went in first, so using needle-nose pliers to pull on the stretchy polymer posts was easy. But this time, the power supply and motherboard were already in there, and I didn’t really want to have to pull all that out for one corner of the fan mounting. Eventually I got it, but it wasn’t easy.

Boot the machine up, and things are looking pretty good. But I have this fear that sound and Linux are enemies, so I go into YaST and test the sound. The sound tests fail. Following the instructions in SDB:Audio troubleshooting, though – specifically this test:

speaker-test -c2 -l5 -twav

did produce sound! So sound is working after all. It’s just something in YaST that fails to produce the test sound. Apparently.

All I really know is that I got to the point where I disabled the second sound card (it’s built into the video card), rebooted, and decided to just try YouTube. YouTube worked. I had sound and everything. I’ll call that a win.

HOWEVER, now it’s time to bring in the files for my home directory from my backup. And I had forgotten to do some manual partition work during the initial install. I had wanted to wipe both /dev/sda and /dev/sdb so that during the initial install, hardware detection would find what is in the MSI X470, with no previous crud from the Taichi motherboard hanging around.

But I had not bothered to manually change the partitioner to make /dev/sdb the /home directory. I figured I could do that later. I figured wrong.

Under previous systems, it was pretty easy to delete /home on /dev/sda3 and then configure Linux to mount /home from /dev/sdb1 instead – classically a one-line /etc/fstab edit, sketched below.
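
For the record, that classic edit looks something like this (device name per this machine; a UUID from blkid would be more robust):

# /etc/fstab – mount the second disk as /home:
/dev/sdb1  /home  ext4  defaults  0  2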

Btrfs was having none of that. And if I thought the GParted errors about the external USB partitions were cryptic – this took obfuscation to a whole new level.

The good news is that I’d already done all the copies to /dev/sdb1 (from the external USB backup on /dev/sdc1), so that work wasn’t wasted.

And indeed, it was easier to just wipe /dev/sda and install all over again. This time, during partitioning, I specified the existing /dev/sdb1 to be mounted as /home, and Btrfs kept its grubby mitts off my home directory disk.

Finished the reinstall, deleted the second audio device, and voilà – my machine seems almost exactly like it was this morning when I woke up. I almost can’t tell I did a whole motherboard swap underneath; except, so far, no spontaneous reboots. 🙂

Some more audio books I have listened to

Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig. I love this book; but do not listen to the foreword. Some people who didn’t get the ending complained to the author about it, so Mr. Pirsig added a major spoiler to the foreword. In my opinion, the book is much more powerful as it dawns on you what is happening.

The Brothers Karamazov by Fyodor Dostoyevsky. I listened to this one because it’s a classic, and Jordan B. Peterson seemed to like it. It was Dostoyevsky’s final work, so in theory he’d gotten good at it by then. On the one hand, it was interesting pretty much the whole way through. On the other hand, some of the writing, where characters describe themselves and what they are thinking and thinking about doing and how they are and I’m going to kiss you now and you will forgive me and I am such a wretch and – oof, it was annoying to me. I can’t imagine people actually talking like that.

After On by Rob Reid. This is great. I liked the main story, the little side stories of the Amazon reviewer, the character stories, and the cheesiest hero story ever written. I also liked that it opened up for exploration some thinking about what a super A.I. could mean, and the idea that private reputation services will be super valuable in the future. Number two in the list of “most enjoyable books I’ve listened to this year.”

Anathem by Neal Stephenson. It was okay. I suppose it was better than okay; I don’t think I ever found myself bored with it. However, I don’t find that I can identify much with a friar of supertech trying to save the world while running from the Inquisition. Oh, and there were space aliens, too. One thing I did like about the story was the idea that one might get inured to how magical supertech is, if one grew up with it and it seemed ordinary. (Which reminds me of an old joke from Saturday Night Live: (Jack Handey) “We would cut down less trees, if every time we cut a tree down, it screamed; unless trees screamed most of the time and at random.”)

The Diamond Age by Neal Stephenson. I liked this book. It was quite interesting, and never boring. The writing is good in that I bought into the main character’s well-being and growth. However, there was at least one gratuitous sex scene in there (which set up an ending scene) that I severely disliked. The technology aspects were intriguing. Early on, there is some dystopia that it seems the first victim would of course have seen coming, so that rang hollow.

Hyperion by Dan Simmons. Meh. I liked the idea that each passenger on the journey would get to tell their own story. Most of the stories were interesting. The one character, though, was every bit as awful as I feared it would be. Any time a writer gets to write a self-described “poet,” your ears will be subjected to a string of epithets-as-art, as if that were a thing. Reminds me of a story a friend of mine told, who at age 4 snuck into a closet with a Fisher-Price tape recorder and recorded, in a whisper, “shit caca caca caca shit caca shit.” Four-year-old him was all super impressed with his own power to use foul words. This book has that.

Artemis by Andy Weir. I very much liked this book. I liked the main character, I liked the setting, I liked the skulking around and dangers of getting caught. This book was fun. Number three in the list of “most enjoyable books I’ve listened to this year.”

Cutting for Stone by Abraham Verghese. Man, what a powerful book. I was on an airplane, and sitting next to me was a psychiatrist; we got to talking. He told me this was his favorite book, so I listened to it. I think part of what makes it so good is that the author lived through some of the environment and transitions he writes his characters into; this makes everything seem authentic. The book is also interesting because the setting is so foreign, there isn’t anything (for me at least) to dismiss. It’s all novel to me. The one thing I didn’t like about the book was near the end, where the main character does the most despicable thing. I suppose it’s needed to move the story toward its sad end. I was not wanting to finish listening to the book after that. But I thought I ought to give him a chance to somehow redeem himself, so I kept listening.

The Calculating Stars by Mary Robinette Kowal. This book isn’t really aimed at me, so I didn’t enjoy it as much as people in its target audience would.

Snow Crash by Neal Stephenson. This has been my favorite book this year. The characters, the settings, the corporate cultures that become environments and families, the technologies; man, this book has it all: a smart villain, a robot hero, and main characters that you really do want to see succeed. Just lots of fun.

Twelve Rules for Life: An Antidote to Chaos by Jordan B. Peterson. I liked this book. Most of the rules are common sense; but Professor Peterson likes to delve a little deeper into the “why” of each rule. One of the rules, “Do not let your children do anything that makes you dislike them,” addresses probably the biggest failing of most parents. I don’t think it’s useful to point at parents today and say “You fouled up”; but if you are a new parent (or not yet a parent), it is probably super important to hear (and understand why) you should not let your children do anything that makes you dislike them. When I was growing up, there was a whole lot of propaganda masquerading as education about how children should not be made to feel bad: children that are made to feel bad now, act bad later. This completely ignored the reality that all children push the boundaries of what is acceptable because they need to know what the boundaries are. Anyway, I read several books in my formative years: Zen and the Art of Motorcycle Maintenance when I was in my early 20s, and How to Win Friends and Influence People by Dale Carnegie and Games People Play by Eric Berne in my early teens. I wish I had read Twelve Rules for Life then, too (although it wouldn’t be published for another 35 years).

How to use Lightsail snapshots to revert to a previous version

I have the new Bitnami WordPress multisite web server up and running. I’d like to make a backup of it, prior to mucking with it, so that I can revert back if needed.

Schrödinger’s Backups: The condition of any backup is unknown until a restore is attempted.

Murphy’s take on Schrödinger’s Backups: You’re fucked. The backup is dead.

Well, that is often the case when you just lost the computer, and you now need to restore from your “backups”.

Let’s see what it takes to successfully take a Lightsail snapshot and restore to it.

Technically, you spin up a new instance, move the IP address, and delete the old instance. So you will incur slightly higher charges with Lightsail, because for a little while you had two instances. Snapshots cost money, too.

Step the first: shut down your instance.

In theory, this step should not be necessary. The snapshot process should work on the running image. It probably will.

In theory, there is no difference between theory and practice. In practice, there is.

Although it is a remote chance, there is a problem of database coherency. What if, at the exact moment you take a snapshot, some database transaction is only half-posted? What if one half of the transaction is written to disk, then the snapshot happens, then the other half of the transaction gets written to disk? When you restore, the database is no longer going to be coherent.

For some databases, there is a whole subset of features and work done to ensure atomic transactions, preventing any piece of a transaction from being committed until all of it can be verified as done. That’s all nice and everything, but what’s wrong with just shutting down the server? If your server is so mission-critical that you cannot have a minute or two of downtime, you should be working on clusters of machines that can announce themselves into the cluster, announce themselves out of the cluster, and gracefully transition between states.

Power down the server, and the server is quiescent with the world.
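
In practice, that’s something like this, either from inside the instance or from a machine with the AWS CLI configured (the instance name is a placeholder):

# From an ssh session on the instance itself:
sudo shutdown -h now

# Or from the AWS CLI (placeholder instance name):
aws lightsail stop-instance --instance-name WordPress_Multisite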

Step the second: take the snapshot.

A picture being worth a thousand words, here’s thirteen thousand words:

  • Go to the snapshot manager tab and click the Create snapshot button.
  • Lightsail picked a name for you; click the Create button to launch the snapshot process.
  • This takes a minute or three.
  • Once the snapshot is complete, you get the raindrops menu button.
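
If you prefer the command line, the same snapshot can be taken with the AWS CLI. A sketch, with placeholder names:

# Placeholder instance and snapshot names; assumes the AWS CLI is configured:
aws lightsail create-instance-snapshot \
    --instance-name WordPress_Multisite \
    --instance-snapshot-name WordPress_Multisite-snapshot-1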

Step the third: the snapshot becomes the machine.

The raindrops menu has the option to create a new instance from the snapshot.

See that big orange Create Instance button? Click it!

I may be a holder of Amazon.com stock, and will see revenue rise slightly as you invoke an additional charge on your account. Click it!
Now there are two instances; one pending, and the other stopped.

Eventually, the new instance is running. But we still need to move off of the old instance.

  • The static IP address that DNS points to is connected to the server that crashed and is going away.
  • The new instance, WordPress_Multisite-2, has a random IP address assigned during creation.
  • After selecting the static IP from the list, click the green Checkmark button to assign it to the new instance.
  • We now see the new instance in the wild, at the old IP address DNS points to.
  • Delete the old instance, so as to not leave trash lying around.
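
The CLI equivalent of those console steps would be roughly this sketch (all names and the availability zone are placeholders):

# Spin up a new instance from the snapshot:
aws lightsail create-instances-from-snapshot \
    --instance-snapshot-name WordPress_Multisite-snapshot-1 \
    --instance-names WordPress_Multisite-2 \
    --availability-zone us-east-1a \
    --bundle-id nano_2_0

# Move the static IP to the new instance, then delete the old one:
aws lightsail attach-static-ip --static-ip-name MyStaticIP --instance-name WordPress_Multisite-2
aws lightsail delete-instance --instance-name WordPress_Multisite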

That’s pretty much it. The snapshot has been launched as a new instance, and is almost a verbatim copy of the old instance. Almost.

When the new instance was spun up, it got a new security certificate fingerprint.
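
In practice, that shows up as ssh complaining about a changed host key when you connect to the reused address. Assuming that’s the symptom you hit, the fix is to drop the stale entry from known_hosts:

# Remove the stale host key for the reused address (example hostname):
ssh-keygen -R www.gerisch.org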

mech-dump (part of Perl WWW::Mechanize) is incredibly stupid about its input file name

If the name of the file it is dumping does not end in:

.html

mech-dump will spit out an error that the file content is not text/html but text/plain. And of course, it immediately quits without doing anything helpful.

And then you go and look inside the file, and this is right at the top:

content="text/html"

You ask yourself, What the Hell?

It’s a terrible error message; that’s the hell of it. The error message should say “Input file name does not terminate with the string .html”

I use Linux a lot, and in Linux, files do not have to have extensions in their names. Over on the Windows side, a file name is expected to have an extension; Windows uses that extension to figure out which program should be associated with the file type. But in Linux, the file type is determined from the contents of the file itself.

This has two effects. First, files in Linux don’t need file extensions in their names. Second, you can name a file in Linux without a file extension, and the file works anyway.

So, if I’m writing a Perl script on Linux and I want to dump out something I’ve just pulled down from a web server using WWW::Mechanize, I might be inclined to name the file where I’m dumping this web form www_mechanize_web_form_dump

And this would be a mistake, because when I later run

mech-dump www_mechanize_web_form_dump

I’m going to get spit at with the message that the file does not contain HTML, it contains only plain text.
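
The workaround, presumably, is to hand mech-dump the extension it insists on:

# Rename (or symlink) the dump so its name ends in .html, then retry:
mv www_mechanize_web_form_dump www_mechanize_web_form_dump.html
mech-dump www_mechanize_web_form_dump.html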

It would have saved me a bunch of time, if the error message would have been “mech-dump does not interpret files with names that do not end in .html”

That might seem kind of a silly input file name constraint, but at least the error message wouldn’t be misleading.

Bitnami WordPress Multisite – DNS spoofing

In an earlier post, I said I hope you have pointed your domain name at your static IP address. Well, what if you don’t want to?

The point being that the DNS entry for the domain name currently points to the production WordPress site, and really, I would like to set up this multisite WordPress installation without having to change the public DNS entry.

Also, when setting up this, my personal blog, I was using No-IP DNS services. I could update the DNS entry for gerisch.org, and the DNS replicated out almost instantly. It was great. But the other web site I’m working on (the one that got me into WordPress at all) uses Network Solutions for their DNS. They take their good sweet time replicating DNS entries out to the world. I don’t really want to post an update to DNS, wait, dink around with the new site while the production site is down, decide to revert to production, post an update to DNS, and then wait again while Network Solutions gets around to pointing everyone back to the production web site.

It would just be better if the new web server machine never got away from its own self when doing lookups for the domain name it will eventually be.

So I can start the WordPress install from the IP address of the server out on the public Internet. However, WordPress, during its install, is going to do a DNS lookup and try to invoke code on the server where the DNS really does resolve. Which isn’t where I am. So I’m going to try to install a fake DNS server on the new server, and have it redirect all calls to the old domain to the new server.

Step the first: install dnsmasq

sudo apt-get install dnsmasq

Next, set up listening on the local host address:

sudo vim /etc/dnsmasq.conf

Find your way to the line #listen-address= and edit it thus:

listen-address=127.0.0.1

And save and exit

sudo vim /etc/dhcp/dhclient.conf

Find your way to #prepend domain-name-servers 127.0.0.1; and uncomment this line. Save and exit.
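
After those edits, dnsmasq presumably needs a restart to start answering on the loopback address:

# Pick up the new listen-address (assumes systemd):
sudo systemctl restart dnsmasq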

And now it gets weird.

The Bitnami / AWS Lightsail images use something called cloud-init: https://cloudinit.readthedocs.io/en/latest/topics/modules.html

So if you were going to try to edit /etc/hosts or /etc/resolv.conf, you get warned not to edit them by hand, because they will be replaced on next boot. But they sure as heck don’t tell you where to implement the edits. Just don’t do it here.

Turns out there are template files in /etc/cloud/templates that hold the magic.

cd /etc/cloud/templates
sudo cp hosts.debian.tmpl hosts.debian.tmpl.original
sudo vim hosts.debian.tmpl

Now I’m going to add a line below 127.0.0.1 localhost, mapping the domain name of the production web site to the IP address I want this machine to go to whenever it tries to resolve that name.
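
Something like this sketch, using gerisch.org as the stand-in for the production domain and assuming the loopback address is the right target:

# In /etc/cloud/templates/hosts.debian.tmpl:
127.0.0.1 localhost
# Added line (hypothetical): resolve the production domain to this machine
127.0.0.1 www.gerisch.org gerisch.org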

And indeed, if I use dig from an ssh session on the machine, dig reports back the local machine’s address, not the one out on the public Internet.
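
For example, asking the local resolver directly:

# Should print the local address, not the public one:
dig +short www.gerisch.org @127.0.0.1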

Apple Watch is kind of stupid – it cannot connect to Ford Sync

I had a need to leave my iPhone behind, but I wanted to listen to an audiobook while I drove somewhere. I downloaded the book to the Apple Watch, and then went to the car to run my errand. But the Apple Watch would fail to pair to Ford Sync.

Ford Sync was introduced to the world in 2007. The Apple Watch was introduced to the world in 2015. Apple, a company with plenty of money to do research and development, should have gotten this right.

It’s not like Ford is some small niche car company.

The Apple Watch and Ford Sync would talk with each other and show me the PIN, and ask me to verify that the PIN showed up. It did. And then the pairing would fail. This is stupid. Apple has enough market share to do quality control, and be good at this sort of thing.

But apparently at Apple, Quality is Job 3.