The one with an overbuilt solution…

Then there was the one about the overbuilt solution.

We were a “Microsoft Shop”. Windows NT4 was in full swing. Virtualization was in its infancy.

Leadership discouraged exploring “best of breed” solutions. Making things more useful by simplifying them wasn’t permitted, because the alternative was too often interpreted as “not Windows.”

Windows was “the only solution”.

One of the applications we needed was something to translate the incompatible line endings of text files submitted by a particular customer into something that Windows could read.

I knew this would be as simple as a cron job triggering a periodic unix2dos command. It’s a standard utility, trivial for a low-priority utility box. And because the files were coming from the customer’s *nix system, that command could even be injected quite harmlessly into their workflow so the conversion happened before the files were ever sent to us.
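For anyone who hasn’t had the pleasure: the whole “incompatibility” is a single byte. Unix ends each line with a bare LF, DOS/Windows expects CRLF, and all unix2dos does is rewrite one as the other. A modern copy will happily demonstrate this as a filter:

$ printf 'hello\n' | unix2dos | od -c
0000000   h   e   l   l   o  \r  \n
0000007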

“Impossible!” leadership would howl.

Rather than ensure it was done before transfer, we’d do it after. We’d also eat the cost of this particular server sitting entirely idle except for the two times per day (about half a second each) that it would have to do its assigned job. It couldn’t be tasked with any other process or job because, in those days, a server was assigned one task. That went far beyond company policy; it was ingrained in the very thinking, a cultural belief across IT.

So, we did it the enterprise way because “that’s the way it’s always been done.”

  • Select hardware, because virtualization was such a new concept (several years old by that time) that it couldn’t be trusted.
  • Buy a new server for about $3k, with suitable onboard RAID-1, dual NICs, dual power supplies, and dual CPUs. It’ll draw about 100 watts at idle. Always. It will occasionally run a bit higher than that, but it’s practically idle. All the time.
  • Wait about two months for the servers to arrive.
  • Buy a Windows license.
  • Buy a patching license.
  • Double all of it, because policy required redundancy. Servers had to be installed at least in pairs per environment.
  • Policy also required equal hardware in the Staging and Prod environments. That’s a deployment of six servers, minimum.
  • Ensure we have physical space and power capacity in the datacenter to support those six servers (because policy): the maximum possible load of 230 W each, times six, or nearly 1.4 kW.
  • And that’s beyond the unprovisioned $24k hardware cost.

Don’t forget to ensure it’s included in the security/patching list.

And update the inventory list. Because we’ll also need to dispose of it in five years’ time.

Y’know, spending all of that time making the assorted “impossible” claims really irritates people who are already doing the impossible.

“Challenge accepted.”

So we did it the more efficient way: a simple unix2dos command on an already-existing, low-priority Linux utility VM. Yep, we managed to sneak one of those in. Two, actually. It ran flawlessly for several years. And because it was low priority, if something else needed resources, it would happily step entirely out of the way and wait.

As I recall, it was literally:

unix2dos -k ${filename}

Actually, it was stuck in crontab, so it was very slightly more complex.
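Very slightly. The real schedule and paths are long forgotten, but the crontab entry looked something like this, with illustrative times and a made-up drop directory:

# minute hour dom month dow command
0 6,18 * * * unix2dos -k /data/incoming/customer/*.txt

(The -k flag just preserves each file’s original timestamp.)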

Line endings wouldn’t be an issue today, of course. Operating systems, thankfully, are graceful enough to ignore certain low-level encoding limitations.

Mostly.

Alerts = Interruption + 1

I had yet another audible alert trying to get my attention.

Because I use several virtual desktops and don’t have everything available on screen all the time, and because developers don’t have a consistent, meaningful method (the Apple Notifications concept is really useful, but isn’t well adopted), that alert could have come from anywhere.

I can’t see everything all the time, so there was no way to tell which app the alert had come from.

I spent maybe ten minutes trying to figure it out. I clicked through every open app to see whether any of them had some recently added component or feature, or an extra alerting/notification panel.

No idea what it was.

Then, another alert chimed away. Exactly the same sound. Definitely an alert. But from where?

Then it dawned on me: I’d heard that same alert maybe a month or two ago.

It was just my AirPods giving me the polite notification that a pod’s battery had 10% remaining.

So, I’m not complaining about Apple. Not at all. I’m not even complaining about getting this kind of alert: it was saying, effectively, “headphone batteries are getting low”. If anything, an improvement would be letting the AirPods post to the connected device’s Notifications panel.

What I’m really annoyed about is the proliferation of notifications in general. However trivial they might seem, they are still interruptions to your workflow.

The regular frequency of SMS/text messages? Interruptions. Paging alerts? Interruptions. Facebook/YouTube/Twitter/etc. alerts? Interruptions. Alerts about truly trivial tasks? Interruptions. An inconsequential outage of something that’s unused? Interruptions.

Oh, and if you insist on having so much involvement in all of that assorted tech that you want endless interruptions, fine.

But if you insist on also having audible-alerts that can be heard by anyone else within earshot: not fine.

Full Speed Ahead!

Since this whole ordeal began a full year ago, I haven’t had the mental or physical capacity to go for a run. I used to run twice a week.

I’m just now back from one.

I was only out for about 20 minutes and, yes, it was a bit of a run/walk. But it was more running than walking.

Provided I don’t encounter any other significant or life-threatening injuries or setbacks, I’ll see about planning and preparing for an actual, organized 5K race.

Walking, jogging, running. I don’t care which.

But I will do one.

One Read/One Write isn’t the whole story…

Not many years ago (dial-up days… dark times) we taught “Click Save!” for all of your important data.

Our current technology has evolved to the point, thankfully, where Save (or, worse, Save/Apply/OK) simply isn’t necessary anymore.

So, back in the dial-up days, our customers would all hit finals week at once: many of them, around the same time each evening, for about two or three weeks. Students would all take their assorted three-hour exams at the same time.

Hundreds of colleges and universities. Each with thousands of students, or more. All at around the same time each night during a two- or three-week window. There could easily have been 200,000 or more taking their exams.

Consider the mindset of the time: we’d spent, by that point, decades teaching, “Click Save!”

You see, they (students, faculty, administration, developers) didn’t trust the database. The same database that housed every aspect of their identity, course list, the course content itself. Everything.

Sure, it had redundant power, network, CPU, disk — everything. Any conceivable hardware failure had redundancy.

But they didn’t trust this mysterious “database” thing.

They wanted — insisted on — a “just in case” solution.

Consider the introductory statement above. It led, of course, to: “What if we give them a Save button on the page?!”

Sure, it already had a Save button, which triggered a write to the DB and a refresh of the page. But it evolved: we also had it write their exam to a flat file.

Just in case.

It’s just one read and one write, after all. It wouldn’t generate any extra load. Besides, “doing it right” would take too much work. Something that saved automatically with every click? Too much work.

Now, envision 200,000 students all clicking “Save” every 30 seconds or so during a three-hour window. That’s roughly 6,700 Save clicks per second, sustained for hours.

The DB handled it just fine. It barely broke a sweat.

Even though we then tasked it with something more than just “update the database”. When somebody clicked Save, we’d have it (as sketched in the code after this list):

  1. write to the DB, then…
  2. connect to storage
  3. check the reference table to find the right folder
  4. check that folder’s file count
  5. wait while storage reported the number of objects
  6. create a new directory if the current one had too many objects
  7. update the reference table then
  8. write a plain-text copy of that user’s exam
  9. respond to the Save request with a page refresh
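Here’s a minimal sketch of that save path, just to make the shape of it concrete. The helper names (db_write, lookup_folder, update_folder) and the 10,000-file threshold are hypothetical stand-ins, not the real system:

#!/bin/sh
# Hypothetical sketch of the "just in case" save path.
# db_write, lookup_folder, and update_folder stand in for the real
# DB and reference-table calls; the threshold is invented, too.

save_exam() {
    user="$1"
    exam="$2"

    db_write "$user" "$exam"            # 1. the write that actually mattered

    folder=$(lookup_folder "$user")     # 2-3. ask the reference table where to put it

    # 4-5. count the objects already in that folder. This re-reads the
    # entire directory on every single Save, so it slows as the folder fills.
    count=$(ls -1 "$folder" | wc -l)

    if [ "$count" -ge 10000 ]; then     # 6. roll to a new folder when "too full"
        folder="${folder}.next"
        mkdir -p "$folder"
        update_folder "$user" "$folder" # 7. note the new folder in the reference table
    fi

    cp "$exam" "$folder/"               # 8. the plain-text copy, just in case
}                                       # 9. ...and then refresh the page

Steps 4 through 6 are where the trouble hides.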

It’s just one read, one write. What could go wrong?

Now, do it all, 200,000 times. Every 30 seconds.

Oh, and as the number of files grew in that folder, the count check would take longer… and longer… and longer, just to see whether it needed to roll to a new directory. Every check had to enumerate every file already there, so each Save paid a little more than the last. In fact, it actually resulted in those storage devices dropping offline, because they were so busy reporting the directory’s object count.

So, it was reported as “one read/one write” every 30 seconds, which sounded trivial enough. It became rather less trivial when multiplied by 200,000 students doing the same thing every 30 seconds, with each operation slowing down as the file count grew.

It turned out that the helpful “fix” was entirely self-inflicted. It began with the very premise that the DB wasn’t trusted, and it was compounded and complicated by a few misunderstandings and misrepresentations of the nature of the data and how data moves around.

All because somebody didn’t trust a database and it was just “one read/one write”.

The database? It didn’t have any problems at all. Well, it did every now and again, but that’s not the point of this particular rant.

The time would have been spent far more meaningfully educating customers about the reliability of these new-fangled “compuserves”, “interwebz”, and “databasing” things.

And to make it truly just one read and one write, that Save button should have done nothing more complex than updating the database and refreshing the page. Well, that, and perhaps reducing the risk of the student’s internet connection timing out.