The top three Monday morning networking headaches (and how to avoid them)

Rich Schofield | Business Development Director – Networking | Dimension Data

There’s a good reason why many networking technicians in corporate IT divisions dread Mondays (and it has nothing to do with how they’ve spent their weekends!). If the organisation doesn’t have consistent, reliable network management processes and procedures in place, Monday can quickly become more stressful and intense than any other day of the week.

We recently tested this theory at Dimension Data by speaking to our own network support professionals at our Global Service Centres around the world. We asked them what, in their view, causes the most common networking headaches for organisations on a Monday morning, and this is what they said:

  1. The tension headache: weekend configuration changes

Almost everyone who’s worked in an IT support centre knows this scenario well. You arrive at work after the weekend, only to find that the network has slowed to a crawl. After several hours of painstaking investigation, it transpires that a technician made a minor configuration change over the weekend, bypassed change control, and so never assessed what the impact might be on the rest of the infrastructure. The only cure for this familiar headache is proper change management. But what should such processes and procedures look like, and how will they prevent this common human error? (A small sketch of one such safeguard follows this list.)

  2. The migraine: incident management over weekends

Things break; it’s a fact of life. It’s how you deal with breakages that makes all the difference. But let’s face it: when things break over weekends, the problem is compounded. Failing devices are usually covered by some sort of maintenance and support contract with either a service provider or the vendor itself. But what if the broken device isn’t covered? It may be too old, or perhaps the organisation is trying to cut support costs. Surprisingly often, though, the organisation wasn’t even aware that the device existed in the first place … until it failed. The cure? A proper, up-to-date inventory and skilled, experienced employees on duty who follow standardised incident management processes (see the inventory sketch after this list). Isn’t that easier said than done, though?

  3. The rebound headache: weekend releases and deployments

You often don’t have a choice: you need to implement and deploy patches and do other general maintenance over weekends, when everyone else is relaxing. Network traffic is low then, and there’s little risk of disrupting the business at the times when it should be making money. The thing is, nobody’s there to use and test the system once the work is done, which leaves all user issues to be resolved – you guessed it – on a Monday! This headache can be avoided with proper lab testing and user acceptance testing after release and deployment (a basic smoke-test sketch follows this list). But how do you organise that without disrupting ordinary business?
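
To make the first cure a little more concrete, here’s a minimal sketch of a change-control gate in Python: a script that refuses to proceed with a configuration push unless the change is tied to an approved ticket that includes a rollback plan. The changes.json registry, its field names and the workflow are purely illustrative assumptions, not a description of any particular tool.

```python
# Illustrative change-control gate: block a configuration push unless it is
# tied to an approved ticket that includes a rollback plan.
# The changes.json registry and its fields are hypothetical.
import json
import sys

def change_is_approved(ticket_id, registry_path="changes.json"):
    """Return True only if the ticket exists, is approved, and has a rollback plan."""
    try:
        with open(registry_path) as f:
            tickets = json.load(f)
    except FileNotFoundError:
        return False  # no registry means no approval
    ticket = tickets.get(ticket_id)
    if ticket is None:
        return False
    return ticket.get("status") == "approved" and bool(ticket.get("rollback_plan"))

if __name__ == "__main__":
    ticket_id = sys.argv[1] if len(sys.argv) > 1 else ""
    if not change_is_approved(ticket_id):
        sys.exit(f"Change blocked: ticket '{ticket_id}' is missing, unapproved, "
                 "or has no rollback plan.")
    print("Change approved - proceed with the configuration push.")
```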
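
For the second headache, the value of an up-to-date inventory shows most clearly when you reconcile it against what’s actually on the network. The sketch below assumes two hypothetical CSV exports – one from the inventory (CMDB), one from a discovery scan – each with a serial column; the file and column names are placeholders.

```python
# Illustrative inventory reconciliation: flag devices seen on the network that
# have no inventory record (and therefore probably no support contract), and
# inventoried devices that the scan did not find.
import csv

def load_serials(path):
    """Read a CSV export and return the set of device serial numbers."""
    with open(path, newline="") as f:
        return {row["serial"] for row in csv.DictReader(f)}

def reconcile(cmdb_csv, discovery_csv):
    known = load_serials(cmdb_csv)       # what the inventory says we own
    seen = load_serials(discovery_csv)   # what the weekend scan actually found
    for serial in sorted(seen - known):
        print(f"UNKNOWN  {serial}: on the network, but not in the inventory")
    for serial in sorted(known - seen):
        print(f"MISSING  {serial}: in the inventory, but not discovered")

if __name__ == "__main__":
    reconcile("cmdb_export.csv", "weekend_discovery.csv")
```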
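
And for the third, part of the answer is simply not leaving the first test to Monday’s users. The sketch below runs a basic reachability check against a few key services straight after a weekend deployment; the hostnames and ports are placeholders for whatever matters in your own environment.

```python
# Illustrative post-deployment smoke test: confirm key services still answer
# on their usual ports before everyone logs off for the rest of the weekend.
import socket

CHECKS = [                      # placeholder endpoints - substitute your own
    ("intranet.example.com", 443),
    ("mail.example.com", 25),
    ("erp.example.com", 8443),
]

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    failures = [(h, p) for h, p in CHECKS if not port_open(h, p)]
    for host, port in failures:
        print(f"FAIL: {host}:{port} unreachable after deployment")
    if failures:
        raise SystemExit(1)
    print("All smoke tests passed - fewer surprises on Monday.")
```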

While the cures for these Monday morning headaches may seem obvious, they still leave some questions unanswered about how they should be implemented. There’s more sound advice for you in our latest thinking article, Monday Morning Networking Headaches. Read it carefully … and call us in the morning (or any other time of day) if your headaches persist.

Watch our Hangout as networking experts discuss the top Monday morning networking headaches and how to avoid them.