Stock market ideas: Surveillance technology

An idea I like is the personal body cams that law enforcement is adopting. I have some personal friends in law enforcement who have expressed dislike of them, as liability engines. But a link to a video that one of them provided showed me, a non-law-enforcement civilian, that officers put up with an amazing amount of abuse and still manage to be polite, restrained, professional, and tolerant. So I think it’s a win-win: the officer can prove honest behavior, and the member of the public knows that their bad behavior is on the record.

So one of the companies I like is the former Taser, now known as AXON Enterprise – stock ticker AAXN.

I don’t yet own this stock. It’s currently at $43, although two weeks ago it was down to $40.

But I do think they are in a growth market. Many (if not 99%) of law enforcement agencies already have a relationship with AXON / Taser, through the pistols that apply non-lethal force. Although bigger law enforcement agencies have already bought body cams, I don’t think that penetration within the smaller markets is anywhere near complete. I think that 65% of law enforcement is still in the adoption phase.

I previously had done well with Ambarella – stock ticker AMBA – getting in around $30 and selling at $70. Ambarella was more of a technology play, being a company that makes the video-processing chips used in GoPro cameras, and other cameras. My point is that there can be growth stocks in these areas. I don’t think AXON is going to sell as many body cams as GoPro sold their cameras; but they will probably sell a decent percentage of that, still.

Semiconductor technology

I have four stocks in this category; three I’m pretty happy with, one is a “meh”.

Intel is my current favorite, ticker symbol INTC. One of the pieces of advice I had gotten was “find the clear leader in an industry, buy that, and hold on”. Intel seems to me to clearly be the leader in semiconductor fabrication technology. I bought it at $24 per share, and it is currently at $47.

I do like that Intel pays a dividend.

Another thing that I liked about Intel is that they had a partnership with Micron, on a type of memory they named “Optane”. I know that everything in computer technology is about the pipeline of storage into the registers of the CPU. If we could make the CPU have enough storage, we wouldn’t need external storage, and everything would be going at the full speed of the CPU.

But that isn’t physically possible, if only because once in a while, the power goes out. The registers the CPU manipulates, and the on-board memory called L1 cache, are fast RAM on the chip; main memory is Dynamic RAM, which is dynamically refreshed with electrical power. Either way, when the power drains out, so does the data. Some sort of storage is needed that doesn’t lose its data when the power is off. Since the 1950s, the “storage” has been external to the CPU, and is orders of magnitude slower than the CPU itself.

I think the Optane idea could (potentially) flip computing on its head: the memory becomes so fast that the pipeline of storage into the registers (and back) can be made direct. Or put another way, the CPU could run at the speed of memory – which is the storage. What if the external storage was the same speed (not orders of magnitude slower) as the CPU? What if the RAM was the disk? What if every register retrieve and store were permanent?*

Now really, even Optane memory does not run at the 2.x or 3.5 GHz of a CPU. Most Dynamic RAM access is in the 1.2 GHz range. So most modern computers spend a lot of hardware design on fetching data from the comparatively slow RAM, keeping as much of it on the CPU chip as possible, and then dealing with cache misses, and branch prediction misses, and all sorts of work to keep things in sync when the whole scheme isn’t perfect.

But what if 1.2 GHz was fast enough? Could it be fast enough, if there was no difference between RAM and storage? If the RAM addresses were the storage addresses?
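The “RAM addresses are the storage addresses” idea already exists in miniature as memory-mapped files. Here is a minimal Python sketch of it, using an ordinary disk file (the file name store.bin is just for illustration); with Optane-class persistent memory, the medium itself would be byte-addressable like this:

```python
import mmap

# Back a region of "RAM" with a file on disk, so ordinary memory
# writes are also storage writes.
with open("store.bin", "wb") as f:
    f.write(b"\x00" * 4096)            # reserve 4 KB of "storage"

with open("store.bin", "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"                # a plain memory write...
    mem.flush()                        # ...made durable on disk
    mem.close()

with open("store.bin", "rb") as f:
    print(f.read(5))                   # b'hello' - survived the "power cycle"
```

The point of the sketch: once the write lands, there is no separate “save to disk” step, which is exactly the property the Optane pitch is about.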

Optane memory is essentially the next wave of solid state disk, and has capacities to match. How does the game change, when your 2 Terabyte Optane storage means that really, you have 2 TB of RAM? In six years, it will be 16 TB of RAM; eight years = 32 TB; ten years = 64 (if not 128) TB of RAM.

I expect that ten years from now, the Optane memory will have CPU electronics on the Optane chips, and the computing will be done on the memory chips. It’s a lot of work to ship bits off chip to a CPU, have the CPU alter them, and then ship those bits back, across the backplane, to end up back on the storage chips. It’s time consuming, too.

This is what I mean by turning computing on its head.

Anyway, it’s probably obvious that I’m a fan, so I like both Intel and Micron Technology – ticker symbol MU. I bought MU at about the same price as it is today. However, six months ago, it was double what it is today. I should have sold 1/2 my position then, and made a note somewhere that what was left is now free money.

*Not “forever” permanent, but from the point of view that “if the power goes out, we don’t care, because the data has already been written to storage”.

Cancer immunotherapy / Biotechnology

I currently have two stocks in these categories; one is doing well, and the other, very poorly.

The poor one is Advaxis, ticker symbol ADXS. I bought ADXS at $4.82, and today it is at $0.22. That’s right: I’ve lost 95%.

Sigh.

So, having lost so much, is there really much of a reason to liquidate at such a loss? The original idea was that Advaxis had, through research, found some evidence that they could use the immune system to fight certain cancers. I don’t think the company is just going to give up. Immunotherapy is working for other cancers. The problem is that the company could run out of cash before they have a product they can market. But if they keep plugging away, they might finally be able to publish that breakthrough. And then, they should be able to get a good price for their technology.

My other stock is CRISPR Therapeutics AG – ticker symbol CRSP. I bought CRISPR Therapeutics at $20 per share, and today it is at $31.

At one point, six months ago, it was at $70 per share. I should have sold 30% of my shares, and been on a free ride since then. 😉

But really, I still think the technology is good. So I’m not going to lament not-having-cashed-in. Long term, I think I’ll be happy that I still have all my shares.

One of the things though, with this blog, is that if I do get to the point of cashing out the purchase price of a stock, I can record that here. In the past, I had cashed out the purchase price of a stock, but then later sold everything when it continued to drop (in the short term). I couldn’t tell, from the view of my holdings that my stock broker gives me, that I was already riding on free money. So I sold. And then the price jumped way up, to more than 8x what I bought in for.

Stock market investing ideas

I figure that since this is a place where I can record longer-term ideas, and ideas I have regarding stock market investing don’t really have a good home, I can put them here.

Now really, I’m a fan of putting my notes about data near the data. So what I would really like is for the vendor of my stock market portfolio management web page to provide a small text field that I can update with a short note. But I don’t have that, so the information will have to reside elsewhere.

On to the ideas:

  • 5G cellular towers
  • Neodymium miners (or processors)
  • Cancer immunotherapy / Biotechnology
  • Semiconductor technology
  • Surveillance technology

5G cellular

I think this technology has huge growth potential. One of the trade-offs, though, is that higher frequencies are required for higher bandwidth, and higher-frequency signals fade faster: electromagnetic power drops off with the square of the distance, and the loss at a given distance gets worse as the frequency goes up. The upshot is that if the two antennas (sender and receiver) are going to hear each other at the higher frequencies, they will need to be located closer together in physical space. So today, “good coverage” has one tower around three miles from the next tower (4G cellular).

With 5G cellular, the distance between towers will be 250 meters / 820 feet / 0.15 miles. The growth in towers (“base stations”) is going to go exponential (at least during the startup phase).
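As a back-of-the-envelope check on that growth (assuming roughly square coverage cells, which is an oversimplification), the tower count scales with the square of the spacing ratio:

```python
# Shrinking tower spacing from ~3 miles (4G) to ~0.15 miles (5G)
# multiplies the number of towers needed to cover the same area
# by (old spacing / new spacing) squared.
spacing_4g = 3.0        # miles between towers today
spacing_5g = 0.15       # miles between towers for 5G
multiplier = (spacing_4g / spacing_5g) ** 2
print(int(multiplier))  # -> 400 towers where one stood before
```

Roughly 400 base stations for every one that exists today is the kind of build-out that makes the tower landlords interesting.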

Perhaps it would be smarter to buy stock in the companies that make the transceivers (chips or whole power+chips+antenna). Problem is, I don’t know who these companies are.

But I like the idea of rent: American Tower can make the initial investments, and then recoup their cost over the next nnn years. When I first heard of AMT, it was around $140 per share; today it’s at $168.

Ticketmaster is a bane to artists

Three of my friends and I wanted to go to a concert in March of 2019.  Several of the artists are favorites.  The least expensive seats were $31 per seat.

Ticketmaster wants a $30 surcharge, plus two other surcharges, for a total of $36 in surcharges.

For some strange reason, the four of us are not going to the concert.

But we customers don’t really have the power here; it’s the artists who need to boycott the locations that have onerous contracts with Ticketmaster.

Bitnami WordPress automatic start of services

First off, KVM and QEMU are wonders of technology, and I’m thankful for those projects and their magic.

Background is that I made a default install of OpenSUSE 42.3.  I also tried OpenSUSE 15 (which is newer than 42.3, but whatever).

OpenSUSE 15 really did not like the Bitnami invocation of MySQL; but, it could be that I tried the initial install as a LAMP server, running at runlevel 3.  With OpenSUSE 42.3, I tried an initial install as a KDE Desktop running at runlevel 5, plus the LAMP server pattern.  That had worked in the past, so I wasn’t going to fight “well, at least this works”.

I did get the Bitnami stack installed and running.  I even got the default URL changed from “/wordpress/” to just “/”.

Next step to accomplish, so I have a nice snapshot to revert to, is for the Bitnami stack to automatically start.  For whatever reason, searching for this information never easily comes up with results.  So I’m writing it down here.

cp installdir/ctlscript.sh /etc/init.d/bitnami-APPNAME

Edit the file /etc/init.d/bitnami-whatever-you-named-it

Add this near the top:

#
# chkconfig: 2345 80 30
# description: Bitnami services

And run this:

chkconfig --add bitnami-whatever-you-named-it

Test with an init 6, and if you can get to your web site without having to start MySQL and Apache with the ctlscript.sh, then you’re good.

Take that snapshot!

How long does a web site last?

I participated in an early social network of sorts, Slashdot.org, way back when.  I think I signed up in about 2002.  A bunch of people on there became friends.  Some of us have even met IRL (In Real Life).  One of those people went by the handle “kitten”, although I never met him IRL.

Kitten died in early 2010.

But, he had a website of his own, and I liked it primarily because of its domain name.  A favorite band of mine had a song named mirrorshades.  The fact that kitten had the domain name mirrorshades.org piqued my interest.  I wonder if he was a fan of the same band?

There wasn’t much there on the web site, but still, it was cool that someone I interacted with had his own web site.  This wasn’t just a MySpace page, this was a full blown register-a-domain-name, get-a-server, set-up-apache, write-HTML web site of his own.  Looks like the first capture of it by the Internet Wayback machine was 2007.  For whatever reason, whois is coming up empty on the domain registration record.

2007 isn’t terribly early for real, personal, web sites.  Many people tried it with running a machine in their own home, and using a Dynamic DNS service, so that even if their ISP changes their IP address, the domain name still resolves back to that web server running in the basement.  The other way to go, was that hosting services were coming on the scene around that period of time.  Perhaps kitten bought and paid for a Rackspace plan?

What happens when the web site owner dies?

Yes, it’s a little morbid.  But we are all going to die someday.  Pretending it won’t happen is willful ignorance.

Back to the point: if it’s a web server in a basement, someone is going to power it off, at some time.  Right?  Or a power outage happens (yes, there are ways around this).  Maybe mom and dad got it, and it’s still going.  But even then, they are going to die.

Kitten knew he was dying – perhaps he asked a friend to keep it running for him?  Just how long does dedication to a dead friend last?  People change, and move on in their life.  That can be a crucially good thing; so at what point does keeping this web site (that does nothing, really) still make sense?

Or, perhaps, a cloud provider was paid to keep it running?  Every minute of every day, the expiration of the contract looms ever closer.  Either way, the DNS registration eventually runs out.*

Anyway, I’ve had a tab in my list of pages I visit every day: http://mirrorshades.org/wc/index.shtml

Sunday, November 4, I got 404 Not Found

For what it is worth, since shortly after February 14, 2010, the page looked like this (kitten’s obituary announcement): https://web.archive.org/web/20170318170358/http://mirrorshades.org/wc/index.shtml

For eight years and ten months, that web server has been serving up that page.  The image at the top would rotate out per visit.  The one of the young woman, bare shouldered, was quite cool.

And this could be a temporary glitch.  Tomorrow, I might open that page, and it will resolve again.  If so, I’ll update this post.

*Domain not found is a different error than The requested URL /wc/index.shtml was not found on this server.

How Microsoft is like an abusive boyfriend

Disclaimer: this is my opinion only, and is worth every penny you paid me for it. 😉

This weekend, the organization I work for is moving 1/2 of us off of NCP (NetWare Core Protocol) file servers, to Microsoft CIFS (Common Internet File System) file servers.  I’m in the testing group.  Windows Explorer, showing me file shares, is noticeably faster.

My boyfriend has decided to stop beating me.

I no longer have that other boyfriend, Novell, so everything is nice now, right?  My Microsoft boyfriend is happy now, he no longer needs to beat sense into me every night, for keeping up that relationship with that loser, Novell.

I’ve been doing computers for a long time; what follows are five times Microsoft shipped code to put the hurt on me and my users, for having the gall to also be Novell customers.

  • IFSHLP.386 and the anti-virus demo
  • Windows 3.10 –> Windows for Workgroups 3.11 setup.exe
  • Windows 95 Get Nearest Server
  • Outlook access via MAPI of a GroupWise mailbox
  • Windows 2000 network multiplexor

This phenomenon is not new; in fact, someone came up with a clever phrase: “DOS isn’t done until Lotus won’t run”.

This refers to an incident where Microsoft shipped a new version of DOS that implemented LIM EMS (Lotus, Intel, Microsoft Expanded Memory Specification) differently.  Lotus had 1-2-3 – the spreadsheet, an early “killer app” – and Microsoft wanted all that money.  Microsoft was just shipping Windows, and because it was graphical, it was slower.  In speed tests, Lotus 1-2-3 calculated faster than Microsoft Excel inside Windows.  So Microsoft shipped a new version of DOS, with LIM EMS that implemented memory access on word boundaries, instead of byte boundaries (or something like that).  The upshot is that before, you could install DOS, install Windows, install Lotus 1-2-3, and everything ran fine.  After, with the new DOS, when you went to launch Lotus 1-2-3, Windows immediately erred out with a big ugly GPF (General Protection Fault) due to illegal memory access, and a dialog box that told you to contact your vendor (Lotus) for a version of the program that doesn’t crash Windows.

At the same time, Microsoft was running advertising in all the trade publications, with a picture of a jet fighter pilot and his crash helmet.  The subtext of the ads was “You should get your spreadsheet for Windows from the vendor who wrote Windows.  Less crashes that way” (something to that effect).

This was essentially the first big evidence that Microsoft was the jealous boyfriend who would beat his girlfriend (you) who also dated someone else (Lotus, in this case).

IFSHLP.386 and the anti-virus demo

Installable File System (IFS) Helper, for the Intel 386 architecture, background here: https://en.wikipedia.org/wiki/Installable_File_System.  This was an idea that programmers ought to get to the file system via a standardized operating system call.  Prior to IFSHLP.386, software that needed disk access hooked into Interrupt 21 – and several pieces of software would need to hook on any one PC.

As this idea grew, Novell asked Microsoft if they should get on board.  Of course, they said.  Hooking into the file system to provide storage over the network was what NCP did – its reason for existence.  You installed the Novell client software stack, you logged in, and now your PC had a drive E: where it didn’t before, and that drive E: was on the other side of the network cable.  Novell was the perfect candidate to be an installable file system.

As this idea grew, anti-virus software vendors asked Microsoft if they should get on board.  And Microsoft told them: No, Interrupt 21 will always work, you can count on it, and there is no reason to make your life more complicated by mucking around with installable- this, and dynamic- that.  Just hook into Interrupt 21 like you always have, and things will be fine.

After a while, the IFS idea grew, and shipped, and was promoted as a generally good thing.

And then, Microsoft held a press conference.

They put a virus on a PC, and ran various partners’ anti-virus on the PC.  They all found and cleaned the virus.  They put the virus on a Windows NT file server.  As soon as the PC accessed the file (going through Interrupt 21), the anti-virus software triggered and protected the PC from the virus on the network file server.  And then they put the virus on a Novell NetWare file server, and accessed the file.  The virus was not detected, because the Novell software stack on the PC had been configured to use IFSHLP.386 to get to the files on the network!  Microsoft made a huge deal of the fact that if you used Novell NetWare for your file server, you were putting your company at risk.  What a bunch of terrible programmers Novell were; they hid viruses from your anti-virus programs.

Thankfully, the computer press was aware of Microsoft’s abusive boyfriend behavior, and instead of flooding all channels of communication with “OMG! Novell! What a bunch of losers!”, they took a weekend off, and asked Novell to explain themselves.  The computer press confirmed with the anti-virus vendors the story about IFS versus Interrupt 21.  Novell agreed to re-add the Interrupt 21 support they had dropped in the presence of IFS.  So then the computer press either didn’t run the story, or ran the story of the anti-virus demo with the caveat that Novell promised to support the older access method in the future.

Windows 3.10 –> Windows for Workgroups 3.11 setup.exe

Back in the day, we used to move PCs around a lot.  Also, there was a saying “The 3 R’s of troubleshooting: Reboot, Reinstall, Reformat”.  First, reboot.  Is the problem solved? If no, reinstall whatever software package was troubling the user.  Is the problem solved? If no, reformat the hard drive and install everything from fresh again.  It was brutal, but effective.  Our methods developed around this practice, and it became somewhat easy.

Step 1: boot the machine from a floppy disk drive.  Issue the Format command in DOS to wipe the C: drive.

Step 2: install DOS from the floppy (all of DOS fit on one floppy).

Step 3: Reboot from the C: drive, and use the Novell stack from the floppy to log in to the network.

Step 4: Switch to the F: drive, and change directory to the subdirectory (folder) where Windows 3.10 was.

Step 5: Launch setup.exe, with the command line switch to tell it to install Windows to the C: drive.

A short while later, the machine had a brand new, fresh install of Windows on it; “R3” of The 3 R’s of Troubleshooting was complete.  (We always told our users to store their documents on the H: drive, on the Novell network, so no-one ever lost any files this way.)

It is worth noting that all the floppies from Windows 3.10 had been copied to the one network location, so we didn’t need to carry six additional floppies with us.  It made a lot of sense to put them on the network, log in to the network, and run the installers off the network.  It was much faster than floppies, too.  We didn’t worry about licenses, because we never bought a PC that didn’t come with Windows on it.

And then, Microsoft shipped an update to Windows; Windows for Workgroups (WfW), version 3.11.  This was “the first version of Windows, built with the network in mind”.

One of my co-workers copied all six floppies to the network, and told us of the new location.  We were going to have to keep track of which machines had Windows 3.10 versus WfW 3.11 for license reasons, but otherwise we had no qualms – until it came time to 3R a client’s computer.

Steps 1 – 4 were identical.  Step 5, however, had a landmine built into it.

Microsoft shipped WfW 3.11 setup.exe with code that reached into the install media and did something nasty.  If the install was from physical media (a hard drive or floppies) the physical drive ignored it or otherwise didn’t care.  But if setup.exe was running from a virtual (Novell network mapped) drive, it reached into the drive and crashed the entire server.

It. crashed. the. entire. server.

The server suffered an “ABEND” (abnormal end), and broadcast to everyone on the network that it had crashed, your files are lost, the end times have arrived, too bad, so sad, log back in after the file server comes back up, hope you had backups…

I crashed a server, with 80 users on it.  My co-worker crashed a server with 50 users on it.  We learned, the hard way, not to run setup.exe from the F: drive.

I, being the clever guy I am, came up with a work-around.  I renamed setup.exe to setup.not, and made a DOS batch file, setup.bat, that took setup.exe’s place.  It did five things:

  1. Copied the F: drive WfW folder contents to the C: drive
  2. Changed to the C: drive
  3. Renamed setup.not to setup.exe
  4. Deleted setup.bat
  5. Launched setup.exe

Setup.exe still did whatever nasty thing Microsoft programmed it to do; but because the install media was a physical hard drive (C: drive), the nastiness had no effect.

Back to my point of Microsoft being the abusive boyfriend, the analogy for this instance is the jealous boyfriend seeing you borrow another friend’s truck to help him move; but because the truck belonged to some other guy, your boyfriend deliberately crashes the other guy’s truck into a brick wall.

Later, I was talking with a Novell product manager, and his comment was that they were thankful to Microsoft for this sabotage.   Before, they were happy that things just worked at all.  After, they learned they needed to practice defensive programming, as if a malicious actor was trying to crash their server.

Windows 95 Get Nearest Server

Early Novell NetWare servers had an easy (if simplistic) way of helping to set stuff up: broadcasts.  The type of packet was called a SAP packet, for Service Advertising Protocol.  Note that SAP was specific to Novell IPX/SPX networking, and had nothing to do with TCP/IP (the dominant protocol today).  IPX/SPX and SAP were a Novell thing.  When a server started up, it broadcast a SAP packet: “I’m a server, if you need me, here’s my address.”  Some things, like Hewlett-Packard JetDirect print servers, would send a SAP packet every 30 seconds: “I’m a printer, if you need me, here’s my address.”

A client PC booting up on the network would broadcast a SAP Get Nearest Server packet.  This was the other side of the coin: “I’m a client, and I need to know what servers are available.  I’m asking which servers are nearest me.”  And all the NetWare boxes on the network would send a packet back “I’m a server, here’s my address.”  The network connection would be established and communication would flow.

An interesting assumption that Novell programmers made was that the file server was the fastest machine on the network.  If you have 200 PCs on a network, serving all of them isn’t going to be the job of some underpowered hand-me-down piece of junk that no-one wants any more.  It’s going to be the biggest, fastest, best box money can buy.  And “fastest” meant “answers quickest”.

The “nearest server” was the box that was a server, and replied the quickest.  If there were several servers on the network, all would reply; but the client would choose the first one with an answer as the “nearest”.
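The selection logic is simple enough to sketch in a few lines of Python (the server names and latencies here are hypothetical, just to illustrate “first reply wins”):

```python
def nearest_server(replies):
    """A toy model of SAP Get Nearest Server: every server answers the
    broadcast, and the client keeps whichever reply arrives first."""
    # replies: (latency_ms, address) pairs; min() picks the fastest.
    return min(replies)[1]

replies = [
    (12.0, "old-backup-box"),      # hand-me-down junk answers slowly
    (1.5, "big-fast-fileserver"),  # the best box money can buy
]
print(nearest_server(replies))     # -> big-fast-fileserver
```

The scheme works exactly as long as “answers fastest” really does mean “is the server you want” – which is the assumption Microsoft found a way to abuse.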

So how could Microsoft abuse the network?  By adding SAP Get Nearest Server replies to Windows 95.  Mind you, Windows 95 didn’t provide Novell NetWare network support; they just answered the Get Nearest Server broadcasts with “I’m a server, here’s my address.”

It was all lies, of course.  The client PC would then attempt to establish the network connection, but the Windows 95 box would sit there silently, non-responsive.  I imagine some programmer in Redmond, Washington, watching  the network failure play out, was grinning like the proverbial Cheshire Cat.

So of course, one of our power users had just gotten a really fast PC – faster than his file server, in fact.  Microsoft had the big shindig, announcing Windows 95.  Our power user stood in line to buy that retail copy of Windows 95 on opening day, came back to work, and upgraded his PC.

Which promptly began answering Get Nearest Server broadcasts with “I’m a server, here’s my address (whispering only to itself ‘but you can go stuff yourself’).”  All the printers on the network vanished.  All the file servers to log in to vanished.  All the servers in Remote Console vanished (how us administrators administered servers).

Kind of like the abusive boyfriend standing on your doorstep, and to everyone who shows up at your door, getting confrontational with them, to make sure they can’t reach you, nor can you reach them.  All that poor mail carrier wanted to do was deliver a freaking letter.

Outlook access via MAPI of a GroupWise mailbox

One wonderful resource we had in the GroupWise environment was a mailing list named NGW List.  Anyone could subscribe, and if you administered Novell GroupWise (NGW) you ought to have.  If you had a problem, we could tell you how we solved it, or what work-arounds there were.  At its height, it had more than 200 messages per day.  We sometimes discussed feature requests.  But a problem was that someone at Microsoft had the job of lurking on the list, looking for ideas.

So, a new version of GroupWise dropped, and Novell told us how great it was; that it was now fully MAPI compliant.

MAPI was a Microsoft standard: “Messaging Application Programming Interface”.  Developers wanted to be able to send email from their programs, and Windows might have MS Mail installed on it, or MS Outlook installed on it.  How to send email, if different mail providers are available?  The answer was MAPI, which Microsoft made, and provided as a standard.

On the list, we asked “because the new GroupWise was fully MAPI compliant, does that mean one can install Microsoft Outlook on the PC, and Outlook would access the mailbox on the GroupWise server?”  Yes, the answer was: just fine.  Outlook never knew it wasn’t talking to MS Mail or MS Exchange.  It just worked.  Life was good.

Until that Microsoft lurker reported upstairs that people were successfully using Outlook as a client to GroupWise servers.

Magically, one day not very long later, a Windows Update appeared, and things were patched.  Outlook was patched.  Outlook accessed the GroupWise mailbox via MAPI, and seeing it was GroupWise, created a new, duplicate, folder at the root level named “mailbox”.  And if you only ever used Outlook, life was grand.  Every time you logged in, Outlook would look at the mailbox, see two system folders at the root level named “mailbox”, pick the right one (the one with mail in it), and continue merrily along its way.

The GroupWise client, on the other hand, was completely unprepared for a duplicate system folder at the root level.  Like, GPF unprepared.  Like, wow that was ugly unprepared.

So although the GroupWise server was 100% MAPI compliant, GroupWise had additional features that MAPI didn’t support.  Specifically, To Do lists.  MAPI was an email protocol, and To Do items aren’t in any email spec, so MAPI doesn’t support them.  That doesn’t really matter, if all you are trying to do is email, and all Outlook needed to do was email.

But if you did want to also do To Do lists, you needed to crank up the GroupWise client.  After the Windows Update which patched Outlook (and Outlook fouled over your GroupWise mailbox),  your GroupWise mailbox was completely unusable by the GroupWise client.

Novell had to scramble, and soon found the problem.  They had to issue an emergency patch to the database maintenance routine “gwcheck”.  It now (still to this day) includes a fix “deldupfolders”.  Run a gwcheck with the deldupfolders option, and your mailbox becomes un-f*cked.  Don’t run Outlook against it again, until Novell can issue a fix to the GroupWise client that does not crash when duplicate system folders magically appear in the folder structure.

It’s like your boyfriend works at Nestle, you invite him over for dessert, and when he finds you bought Hershey chocolate syrup for the ice cream, he takes an axe to your dinner table.  Weird thing was: why did he bring an axe with him, to dessert in the first place?

Windows 2000 network multiplexor

You try to access a file on a file server.  You are doing this from Windows.  Windows handles the call to the network, and gets to do with it what it may, as it takes care of your request.  If the file you want to get is on an NCP server, your experience may be sub-optimal.

Where we experienced this the worst, was again after one of those magical Windows Updates that got applied to a ton of machines on a Patch Tuesday.  All of our users on Windows 2000, after the patch, were taking three minutes to log in.

The day before, it took only a few seconds.  What happened?

Microsoft, in typical jealous boyfriend fashion, decided to play passive-aggressive in serving up network resources.  “Where’s the server? Hang on while we find out.”

After the patch, when a PC asked for an NCP server, the user’s PC (through the Windows 2000 network multiplexor), threw the name resolution request out to all CIFS servers first, and then waited 30 seconds for any of them to respond.  After 30 seconds, since none of the CIFS servers answered the call for the NCP service, it threw the name resolution request at the NCP servers (which of course, responded instantly).  Then it went on to resolve the next server name request.

Which took another 30 seconds.

To. the. same. server.

We typically mapped E:, F:, G:, H:, I:, and Z: all to the same server (but to different subdirectories on disk).  Six drive letters, at 30 seconds of timeout each, turned every login from a few seconds before the patch to three minutes after the patch.  Our users were howling at how painful this was.  All we could tell them was “don’t reboot, or you will have to log in (and wait) again”.

Novell again had to scramble for a fix, which was to roll out a new version of the Novell Client.  It would avoid asking the Windows 2000 network multiplexor to resolve NCP server names, by keeping a cache.  Find it once, with the 30 second penalty, and you’ll never have to look it up again.
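That client-side fix can be sketched in Python (the server name, the address, and the stand-in for the 30-second timeout are all hypothetical):

```python
_cache = {}

def resolve_ncp(name, slow_lookup):
    """Resolve an NCP server name, paying the slow multiplexor lookup
    only the first time; every later request is answered from the cache."""
    if name not in _cache:
        _cache[name] = slow_lookup(name)
    return _cache[name]

lookups = []
def slow_lookup(name):
    lookups.append(name)           # stands in for the 30-second wait
    return "10.0.0.5"

# Six drive letters mapped to the same server, as in the login script:
for drive in "EFGHIZ":
    resolve_ncp("FILESRV1", slow_lookup)
print(len(lookups))                # -> 1 slow lookup, not 6
```

One 30-second penalty at first login, instead of three minutes of them, is the whole trick.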

Their long term solution, by the way, was to add CIFS support to the NCP file servers.  Sure, Microsoft could (and, from history, would) foul up anything NCP related; but if the server became a CIFS server, well, Microsoft couldn’t sabotage that without sabotaging their own.

Which brings me to today

Today, my PC will never again make an NCP call to an NCP file server.  All evidence points to it being speedy.  Very speedy.

My abusive boyfriend, Microsoft, who I now vow to be with forever and ever, is showing me his magnanimous generosity by not beating me up today.

And you know damn well that I had better be grateful.  Damned grateful.

How I moved a local development Bitnami WordPress to the root of the web server

What it took to move WordPress from /wordpress to just /

Turns out it was not as easy as I first thought.

First, let’s define the environment:

  • OpenSUSE 42.3 in a virtual machine
  • Downloaded and copied into the machine:
    • bitnami-wordpress-4.9.8-0-linux-x64-installer.run
  • KVM / QEMU with sudo virsh snapshot-create-as every step of the way.
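The snapshot-at-every-step habit is worth sketching, since it is what makes the later “strip everything back down” recovery possible.  A minimal sketch follows; the domain name wp-dev and the snapshot name are hypothetical placeholders (use your own VM’s name from virsh list --all), and the commands are echoed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Take a named snapshot before each risky step, so any bad edit
# can be rolled back with `virsh snapshot-revert`.
# "wp-dev" and the step name are hypothetical placeholders.
DOMAIN="wp-dev"
STEP="before-httpd-prefix-edit"
CMD="sudo virsh snapshot-create-as $DOMAIN $STEP"
echo "$CMD"       # echoed for safety; run the command itself on the real host

# To roll back a bad edit later:
REVERT="sudo virsh snapshot-revert $DOMAIN $STEP"
echo "$REVERT"
```

Run on the real host (without the echo), each step of the migration below gets its own named restore point.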

I should point out that during the install, it asked me where to put the web site.  I told it /opt/bitnami

So actually, the WordPress code is in /opt/bitnami/apps/wordpress

Note that this is for the files on disk; it has nothing to do with the URL scheme.

The installer does its thing, and I get a WordPress site running on the URL scheme <ip address>/wordpress/

The irritation for me is that the production web site I’m wanting to experiment against is at <ip address>/

Five changes are needed for the fix.

  • Search and replace the database
  • Edit the /opt/bitnami/apps/wordpress/conf/httpd-prefix.conf file
  • Edit the /opt/bitnami/apps/wordpress/htdocs/wp-config.php file
  • Edit the /opt/bitnami/apps/wordpress/conf/httpd-app.conf file
  • Settings –> Permalinks –> Save Changes

Search and replace the database

Before, I was using the All-In-One WP Migration plugin, because it came with the Bitnami image, and, at a WordPress meetup I went to, the people there said this was a great way of doing a development site.  And it was, for a while.

Problem is, the All-In-One WP Migration guy decided to change the rules: the latest update refuses to work on any site larger than 40MB unless you pay up.  I’ve never seen a site less than 200MB, so that’s a no-go for me.

I’ve been using UpdraftPlus for backups (for free), and decided that it wouldn’t hurt to pay them for some of their premium services, which were advertised as also being able to do migration.  Turns out that isn’t nearly as easy as it was with All-In-One WP Migration, but it can be wrestled to the ground and made to work, with a bit of effort.

Anyway: Settings –> UpdraftPlus Backup –> Advanced Tools –> Search / replace database.  Search for /wordpress and replace with /
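For the curious, what that plugin step amounts to under the hood can be sketched as a direct SQL substitution against the two core URL options.  This is a hedged sketch only: it assumes the default wp_ table prefix and a bitnami_wordpress database name (check your wp-config.php), and it only touches wp_options — the plugin’s search/replace is safer because it also handles serialized PHP data in posts and metadata:

```shell
#!/bin/sh
# Hedged sketch: the /wordpress -> / substitution for the two
# core URL options only.  Assumes the default "wp_" table prefix
# and database name "bitnami_wordpress" -- both assumptions.
SQL="UPDATE wp_options
     SET option_value = REPLACE(option_value, '/wordpress', '/')
     WHERE option_name IN ('siteurl', 'home');"
echo "$SQL"   # to actually run it: echo "$SQL" | mysql -u root -p bitnami_wordpress
```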

Note that you do not want to restart services after the search-and-replace but before the file editing below.

Edit httpd-prefix.conf

The httpd-prefix.conf file is explained here: Move WordPress to a different URL path on the same domain

The change is that the Alias setting goes from /wordpress/ to simply /
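For reference, the edited line ends up looking something like this — treat it as a sketch rather than a verbatim diff, since the exact stock contents of httpd-prefix.conf vary by Bitnami version:

```apache
# Hedged sketch of httpd-prefix.conf after the edit.
# was:  Alias /wordpress/ "/opt/bitnami/apps/wordpress/htdocs/"
# now:
Alias / "/opt/bitnami/apps/wordpress/htdocs/"
```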

Edit wp-config.php

The wp-config.php file gets edited, so that

define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/wordpress/');
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST'] . '/wordpress');

becomes

define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/');
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST'] . '/');

Edit httpd-app.conf

The httpd-app.conf file gets edited, so that

RewriteBase /wordpress/
RewriteRule . /wordpress/index.php [L]

becomes

RewriteBase /
RewriteRule . /index.php [L]

Save Permalinks

One thing I learned during this whole ordeal is that the Save Changes action writes out a fresh .htaccess file for you, which I needed before the old /wordpress/ URL prefix would go away.
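For reference, the .htaccess that WordPress writes out looks something like the standard rewrite block below (shown for a root install; treat it as a sketch, since plugins can add their own sections):

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```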

Gotchas

One thing that caused me quite a bit of trouble is that just editing the files was not enough; but I didn’t know that.  After editing the files (only), the WordPress Admin site worked fine.  But every attempt to go to any of the content resulted in a 500 Internal Server Error.

By the way, the WordPress community and debugging tools truly suck at helping one figure out what’s wrong here.  But that’s a rant for a different post.

So I thought my migrations were failing, because I couldn’t get to my content after migration.  But really, because I had a default out-of-the-box installation, I never thought to check the First Post comment or any of the other content links.  I made the changes to httpd-prefix.conf and wp-config.php, and the Admin site worked fine.  After the restore, I could still log in to the Admin site (with the password from the production site), and that worked fine too.

But my content was always broken, and I didn’t know it until I stripped everything back down to the most rudimentary snapshot I had taken before editing anything.