Not many years ago (dial-up days… dark times) we taught people to “Click Save” for all of their important data.
Thankfully, our technology has evolved to the point where Save (or, worse, Save/Apply/OK) simply isn’t necessary anymore.
So back in the dial-up days, our customers’ finals weeks all landed at once: hundreds of colleges and universities, each with thousands (or more) of students, each taking their assorted three-hour exams at around the same time each evening during a two- or three-week window. There could easily have been 200,000 or more students taking their exams simultaneously.
Consider the mindset of the time: we’d spent, by that point, decades teaching, “Click Save!”
You see, they (students, faculty, administration, developers) didn’t trust the database. The same database that housed every aspect of their identity, their course lists, the course content itself. Everything.
Sure, it had redundant power, network, CPU, disk — everything. Any conceivable hardware failure had redundancy.
But they didn’t trust this mysterious “database” thing.
They wanted — insisted on — a “just in case” solution.
Pair that distrust with the “Click Save!” mindset above and, of course, you get: “What if we give them a Save button on the page?!”
Sure, the page already had a Save button, which triggered a write to the DB, but we would also write their exam to a flat file. Just in case. It’s just one read and one write, after all. It wouldn’t generate any extra load. Besides, “doing it right” would take too much work. Having something that saved with every click? Too much work.
Now, envision having 200,000 students all clicking “Save” every 30 seconds or so all during a three hour window.
The DB handled it just fine. It barely broke a sweat.
Even though we’d tasked it with far more than a simple “update the database”. So, when somebody clicked Save, we’d have it:
- write to the DB, then…
- connect to storage
- check the reference table to find the right folder
- check that folder’s file count
- wait while storage reported the number of objects
- create a new directory if the current one had too many objects
- update the reference table, then…
- and write a plain-text copy of that user’s exam
- respond to the Save request with a page refresh
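Strung together, the “just in case” path looked something like this minimal sketch. All names, thresholds, and the in-memory reference table are hypothetical stand-ins; the original ran against a real DB and real storage APIs:

```python
import os

MAX_FILES_PER_DIR = 1000  # hypothetical rollover threshold

# Hypothetical stand-in for the real reference table.
reference_table = {"current_dir": "exports/dir_0000"}

def save_exam(user_id: str, exam_text: str) -> str:
    # 1. Write to the DB (stubbed out in this sketch).
    # db.update_exam(user_id, exam_text)

    # 2-4. "Connect" to storage, look up the current folder, count its files.
    target = reference_table["current_dir"]
    os.makedirs(target, exist_ok=True)
    count = len(os.listdir(target))  # O(n): slower as the folder fills up

    # 5-6. Roll over to a new directory when the folder is "full",
    #      then record the new folder back in the reference table.
    if count >= MAX_FILES_PER_DIR:
        seq = int(target.rsplit("_", 1)[1]) + 1
        target = f"exports/dir_{seq:04d}"
        os.makedirs(target, exist_ok=True)
        reference_table["current_dir"] = target

    # 7. Write the plain-text copy. Just in case.
    with open(os.path.join(target, f"{user_id}.txt"), "w") as f:
        f.write(exam_text)

    # 8. The real handler would now respond with a page refresh.
    return target
```

Note that every single call to this runs the `os.listdir` count, which is where the trouble below comes from.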
It’s just one read, one write. What could go wrong?
Now, do it 200,000 times. Every 30 seconds.
Oh, and as the number of files grew in that folder, it would take longer… and longer… and longer just to count them and see whether it needed to roll over to a new directory. In fact, those storage devices would actually drop offline because they were so busy reporting the directory’s object count.
So, it was reported as “one read/one write” every 30 seconds, which sounded trivial enough. It became rather less trivial when multiplied by 200,000 students doing the same every 30 seconds, with the time per check growing as the file count did.
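The back-of-the-envelope arithmetic shows how fast “trivial” collapses. The student count, save interval, and exam length come from the story; the assumption that every student saves for the full window is mine:

```python
students = 200_000
save_interval_s = 30
exam_hours = 3

# Concurrent "one read/one write" operations arriving per second.
requests_per_second = students / save_interval_s  # ~6,667/s

# Saves per student over a three-hour exam, and flat files per night,
# assuming every student saves every 30 seconds for the whole window.
saves_per_student = exam_hours * 3600 // save_interval_s   # 360
files_written_per_night = students * saves_per_student     # 72,000,000
```

And each of those ~6,667 requests per second also paid the ever-growing cost of counting the files already written.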
It turned out that the helpful “fix” was entirely self-inflicted. It began with the very premise that the DB wasn’t trusted, and was both compounded and complicated by a few misunderstandings and misrepresentations of the nature of the data and how data moves around.
All because somebody didn’t trust a database and it was just “one read/one write”.
The database? It didn’t have any problems at all. Well, it did every now and again, but that’s not the point of this particular rant.
The time would have been more meaningfully spent educating customers about the reliability of these new-fangled “compuserves”, “interwebz”, and “databasing” things.
And to make it truly just one read and one write, that Save button would have had the far more meaningful job of doing nothing more complex than simply updating the database. Well, that, and perhaps reducing the risk of the student’s internet connection timing out.
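For contrast, here is what that “one write” handler amounts to, as a minimal sketch. The dict stands in for the real, already-redundant database:

```python
exams = {}  # hypothetical stand-in for the real, redundant database

def handle_save(user_id: str, exam_text: str) -> bool:
    """The button's whole job: one write to the database.

    No flat files, no folder counting, no storage round-trips.
    """
    exams[user_id] = exam_text
    return True
```

Everything the flat-file path added on top of this was load the system never needed to carry.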