As anyone who deals with software in the IT world knows, releasing (or deploying) software is more voodoo than science.
When Microsoft puts out a new version of Windows, or even an update, it’s not like some programmer burns it to a disk and ships it out for distribution.
There are countless hoops to be jumped through, and that software passes through more hands than a 1960 penny.
Even something as simple as a ‘minor’ software release is not free from a laborious, complex process. And, as I experienced last week, it can easily go from a non-event to hours and hours of frustration.
In this case it was the server monkeys. The deployment was hideously torpedoed three times, and behind each disaster were the server monkeys…
Did I mention it was also three consecutive days?
Day one. All of the testers were getting different results. The cause? Servers that were supposed to be retired from our live server cluster had been activated. Since they weren’t supposed to be there, they didn’t have up-to-date operating systems, and hence my project’s software was giving all kinds of crazy results.
It only took 13 hours to sort out the cause.
Day two. The people in charge of the servers, the top monkeys, didn’t bother to tell anyone that we couldn’t deploy our software on time because another group was also doing a deploy and we had been put on standby. It’s not like our project was a surprise. It had been on the schedule for two months. So, we’ve got people in several time zones sitting on their thumbs for hours.
Day two, the sequel. We finally get the go-ahead, it’s 8:30 on a Friday night, and we’re all hoping this wraps up fast. The deployment completes and UAT (user acceptance testing) kicks off. Less than a minute goes by when we get the alert. There’s an error! Phones start ringing, people run from office to office, hunting dogs bray as they dash all over the place, all looking for the problem. We’re 8 hours from this going live and it’s broken. The answer? The servers used for agents in the field are down for database maintenance.
What? But… But we had a deployment.
Oh, did you? Too bad. We’re busy and can’t bring our servers up.
But, it’s been on the schedule.
Well, the server monkeys didn’t tell us. You’ll have to wait.
Wait? How long?
We’re not saying.
Saying, that’s right. We’ll be done when we’re done.
When, we’re, done.
So, FriDAY, turned into Friday night, which turned into Saturday 12:30 A-fricken-M. The field servers finally come up, and … nothing works. Hair pulling ensues along with a lot of screaming. Someone, somewhere, finally does something right and now the software is working. Finally.
So, three times we were royally screwed. Each time the server monkeys, who do this for a living, I might add, completely screwed the pooch, and not once did they offer a word of apology. Instead they carried on as if everything were going according to plan.
I could be wrong. Maybe that was their plan.