
The IT outage on Friday, July 19th, triggered by a faulty security software update, is estimated to have affected 8.5 million Windows computers globally. The cost of resolving the matter is still being calculated, not to mention the losses incurred through cancelled flights, missed appointments, disrupted utilities and so on.
Most software producers know the importance of rigorous QA testing, especially when rolling out global updates to a client base numbering in the millions. Apparently, not all of them adhere to this widely known rule. A similar incident happened in 2010, when a McAfee update sent Windows XP machines into a reboot loop that required manual repair.
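One common safeguard is a staged rollout: push an update to a small canary group first, watch for failures, and halt automatically before it reaches the wider fleet. The sketch below is purely illustrative; the cohort sizes, soak time, failure threshold and failure_rate() telemetry check are hypothetical and do not describe any particular vendor's pipeline.

# Illustrative staged-rollout gate (Python). Cohort sizes, soak time and
# the failure threshold are hypothetical, not any vendor's real values.
import time

COHORTS = [0.01, 0.05, 0.25, 1.00]   # fraction of the fleet per stage
MAX_FAILURE_RATE = 0.001             # halt if over 0.1% of updated hosts fail
SOAK_MINUTES = 60                    # wait before widening the rollout

def failure_rate(cohort_fraction: float) -> float:
    """Placeholder for real telemetry: crash reports, lost heartbeats, etc."""
    return 0.0

def staged_rollout(push_update) -> bool:
    """push_update(fraction) deploys the update to that slice of the fleet."""
    for fraction in COHORTS:
        push_update(fraction)                      # deploy to this slice only
        time.sleep(SOAK_MINUTES * 60)              # give problems time to surface
        if failure_rate(fraction) > MAX_FAILURE_RATE:
            print(f"Halting rollout at {fraction:.0%}: failure rate too high")
            return False
        print(f"Cohort at {fraction:.0%} looks healthy, widening rollout")
    return True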
So what can those who depend on reliable software do to reduce the impact of such incidents?
Microsoft Windows remains the most widespread desktop operating system, and every industry relies on it in some way, although most cloud infrastructure runs on Linux or another Unix-like system.
For that reason, where possible, critical systems should not run on Windows; they should follow cloud infrastructure best practice and run on a stable Linux or Unix build.
Outsourcing has been in the news for the wrong reasons of late, but properly implemented offsite data processing and storage consistently proves its worth when the worst happens.
In our case, over the ten years in which we have held client data at our remote, secure and highly energy-efficient data center, none of our clients has lost any of that data. Where losses have occurred at a client's own premises, our mirrored solutions have so far maintained a 100% data recovery and restoration record.
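As a purely illustrative sketch of the principle behind mirroring, the Python below compares checksums of files at a primary site against an offsite copy and reports anything that needs re-syncing; the paths are hypothetical, and production mirroring relies on dedicated replication tooling rather than a script like this.

# Minimal sketch of mirror verification: hash every file at the primary
# site and confirm the offsite copy matches. Paths are hypothetical.
import hashlib
from pathlib import Path

PRIMARY = Path("/data/primary")     # client-facing copy (hypothetical path)
MIRROR = Path("/data/mirror")       # offsite replica (hypothetical path)

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_mirror() -> list[Path]:
    """Return files that are missing or differ on the mirror."""
    problems = []
    for src in PRIMARY.rglob("*"):
        if not src.is_file():
            continue
        dst = MIRROR / src.relative_to(PRIMARY)
        if not dst.is_file() or sha256(src) != sha256(dst):
            problems.append(src)
    return problems

if __name__ == "__main__":
    bad = verify_mirror()
    print("Mirror verified" if not bad else f"{len(bad)} files need re-sync")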
Perhaps you should speak to us?