Article Originally Posted to Spiceworks after the recent storms and ensuing disasters in the western United States:
Down in southern Arizona, we get some very, very wild thunderstorms. During the monsoon, they hit suddenly and can be exceedingly violent. Usually these deluges are out in the desert, away from town, and are fun to watch but pose very little threat to homes and businesses in town. This year, that was NOT the case.
I’m the sole IT manager for a large group of automotive dealerships. Our system isn’t terribly complicated, but it relies heavily on our fiber Internet connection because the vast majority of our business applications are cloud based or hosted off site. We also use a SIP-trunked VoIP phone system, so virtually everything we do here is dependent on our ability to get out. Our system is running on a Layer-2 Metro Ethernet provided by our ISP. It’s fantastic because I have direct access to almost everything in our system, but the Achilles’ heel is our central network room — one or two key components in there run everything. The two most important pieces of equipment are of course our UTM router and our primary switch that handles our VLANs.
The worst month for monsoon storms is July, and let me assure you, this past July was the worst I’ve seen in 13 years here. We had a night storm that lasted about six hours and sat right on top of our town. The thunderclaps were so loud they shook my apartment to the point that I worried about the windows breaking.
Around 4 a.m., I woke up to my cell phone alerting me of a problem with our system. I have an outside service that pings our router at regular intervals. Sure enough, I attempted to log in to my router, and it was down.
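The article doesn’t describe how that outside monitoring service works, but the idea can be approximated with a short script. This is only a hypothetical sketch: the host, port, polling interval, and alert callback are all placeholders, and it uses a TCP reachability test instead of ICMP ping so it runs without elevated privileges.

```python
import socket
import time

def is_reachable(host, port=443, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(host, port=443, interval=60, alert=print):
    """Poll the host at a fixed interval and fire the alert callback on failure."""
    while True:
        if not is_reachable(host, port):
            # A real service would send an SMS or push notification here.
            alert(f"ALERT: {host}:{port} is not responding")
        time.sleep(interval)
```

In practice you would run a check like this from a location *outside* your own network (as the hosted service in the story does), since a probe inside the building goes down with the same power or circuit failure it is supposed to detect.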
Then I realized MY home Internet connection was down, too! Thinking it was just an area-wide outage (our ISP is very good at recovering from unforeseen outages), I took my time getting up. So, I showered and wandered into work.
By the time I got to work, the ISP had re-established Internet service, but nearly half of our company was still offline. After a quick trip to the central network room and a bit of investigation, I found that one of our primary L2+ switches had been zapped by lightning, along with a wireless backhaul that feeds one of our smaller dealerships.
Now, I’m fairly certain the surge that took out the switch is the same one that brought down the wireless backhaul. I had been careful to install an arrester block as close to the backhaul hardware as possible and even tested that it was properly grounded. But, apparently that wasn’t enough.
One of my policies — and believe me, it took a while to convince my boss this was worth doing — is to keep a backup of key pieces of equipment on hand. This includes a Layer 2+ 48-port switch and a spare UTM router. It was a big investment, but being down for one to two days is not an option. When I buy other smaller items, like the surge arrester, I tend to spec out at least two units.
My boss understands that while I may not need two units now, or possibly ever, being in a relatively small town we have to invest to have equipment on hand.
So, I quickly got the backup switch in place, changed out the wireless backhaul and the blown arrester, and we were back up and fully operational by noon! I then ordered and installed indoor, in-line surge arresters on every network line that connects to an outdoor device. I had the boss approve the purchase of another switch, and we were good.
The losses weren’t limited to the main site, either: another of our locations had a switch blown out that same night, and a backhaul at a third location was ripped off the roof by the severe winds.
I learned a lesson about “assuming it won’t happen,” and I looked like the hero because I was able to recover quickly from a significant “act of God” outage. My recommendation: talk to the powers that be about keeping redundant critical equipment on hand, and don’t assume that just because it hasn’t happened to you yet, it never will.
St. Aubin Technologies does not claim ownership of the above article.