
First AWS, Now Microsoft Cloud; Who's Next?

Ashwin Krishnan
5/4/2017

So, there we have it. Within a window of a few months, the top two public cloud providers on the planet -- Amazon Web Services Inc. and Microsoft Cloud -- have had bodily seizures that have caused the rest of us (mere cells in their ecosystem) to go into crazy orbits. Enough of the drama, let's get to facts. In this age of information deluge it would not be presumptuous to assume that the reader may have forgotten the specifics, so let's recollect.

The Amazon Simple Storage Service (S3) had an outage on Tuesday, February 28. An authorized S3 team member who was using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. However, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. And the rest, as they say, is history!

Now let's turn to the Microsoft episode. On Tuesday, March 21, Outlook, Hotmail, OneDrive, Skype and Xbox Live were all significantly impacted, with trouble ranging from being unable to log in to degraded services. True to form, Microsoft's response was to downplay the impact and provide little detail (by contrast, Amazon provided a much more detailed post mortem): a subset of Azure customers may have experienced intermittent login failures while authenticating with their Microsoft accounts; engineers identified a recent deployment task as the potential root cause and rolled it back to mitigate the issue.

So, is this the death of public cloud? Nah. Far from it. And anyone who says otherwise should have their head examined. BUT, it should serve as a wake-up call to every IT, security and compliance professional across every industry. Why? Because this kind of "user error" or "deployment task snafu" can happen anywhere -- on-premises, on private cloud and on public cloud. And since every enterprise is deployed on one or more of the above, every enterprise is at risk. So enough of the fear mongering. What does someone do about it? Glad you asked.

There are really three vectors of control: scope, privileges and governance model.

Scope is really the number of "objects" -- a.k.a. the blast radius -- that each admin (or script) is authorized to work on at any given time. Using the Microsoft Cloud example (I realize I am extrapolating since they have not provided any details), this may be the number of containers a deployment task can operate on at any given time.
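To make the idea concrete, here is a minimal sketch of a scope guard in Python. All the names (`MAX_SCOPE`, `remove_servers`) and the limit of 10 are illustrative assumptions, not anything either provider actually runs -- but a guard like this, sitting in front of the S3 removal command, would have turned the fat-fingered input into a refused request rather than an outage.

```python
# Hypothetical scope guard: refuse any batch operation that would
# touch more objects than the caller's authorized radius.
MAX_SCOPE = 10  # illustrative per-operation object limit


def remove_servers(requested_ids, authorized_scope=MAX_SCOPE):
    """Remove servers, but only if the request stays inside scope."""
    if len(requested_ids) > authorized_scope:
        raise PermissionError(
            f"Scope exceeded: {len(requested_ids)} objects requested, "
            f"limit is {authorized_scope}")
    # Placeholder for the real removal logic.
    return [f"removed {sid}" for sid in requested_ids]
```

The key design point is that the limit is enforced by the tooling, not by the operator's attention to what they typed.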

Privileges means controlling what an administrator or task can do to an object. For instance, continuing with the container example from above, the privilege restriction could be that the container can be launched but not destroyed.
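A privilege check can be as simple as an action whitelist per role. Again, the role and action names below (`deploy-task`, `launch`, `destroy`) are assumptions made for illustration:

```python
# Illustrative privilege model: each role maps to the set of actions
# it is allowed to perform. A deployment task may launch containers
# but not destroy them.
PRIVILEGES = {"deploy-task": {"launch"}}


def authorize(role, action):
    """Allow the action only if the role's whitelist contains it."""
    allowed = PRIVILEGES.get(role, set())
    if action not in allowed:
        raise PermissionError(f"role '{role}' may not '{action}'")
    return True
```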

And finally, you need a governance model. This is really the implementation of best practices and a well-defined policy for enforcing the above two functions -- scope overview and control enforcement -- in a self-driven fashion. In this example, the policy could be to ensure that the number of containers an admin can operate on remains under 100 (scope) and that any increase in that number automatically requires a pre-defined approval process (control). Further sophistication can easily be built in, where the human approver could easily be a bot that checks the type of container and the load on the system and approves (or denies) the request. Bottom line -- checks and balances.
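Tying the two together, a governance policy might look like the sketch below. The 100-container limit comes from the example above; the bot's approval criteria (container type and a load threshold) are purely illustrative assumptions:

```python
# Governance sketch: requests inside the scope limit pass; anything
# larger goes through an approval step, here modeled by a bot that
# checks container type and current system load.
SCOPE_LIMIT = 100  # containers an admin may operate on without approval


def bot_approver(container_type, system_load):
    """Approve only low-risk (stateless) containers under moderate load."""
    return container_type == "stateless" and system_load < 0.7


def govern(requested, container_type, system_load):
    """Return 'allowed', 'approved' or 'denied' for a container request."""
    if requested <= SCOPE_LIMIT:
        return "allowed"
    if bot_approver(container_type, system_load):
        return "approved"
    return "denied"
```

Whether the approver is a human or a bot, the point is the same: the exception path is a deliberate, auditable decision rather than a default.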

So there you have it. The two large public clouds have suffered embarrassing outages in the past month. They will recover, get stronger and most likely have future outages as well. The question for the rest of us is what we learn from their experience and how we make the environments in our own data centers and on private and public clouds better! If we don't, we may not be lucky enough to live to fight another day.

— Ashwin Krishnan, SVP, Products & Strategy, HyTrust

akrishnan940, User Rank: Light Beer, 5/4/2017 | 12:46:54 PM
Re: AWS S3 outage and proper architecture
Thanks for the detailed comments. Yes -- if you automate a poorly defined process, you are going to crash and burn faster. And there is no 'stress testing' that either cloud providers or enterprises are voluntarily embracing to expose holes and fix them. But the first step is to know what you don't know or haven't acknowledged -- the scope, the privileges and the governance around them.
danielcawrey, User Rank: Light Sabre, 5/4/2017 | 11:36:58 AM
Re: AWS S3 outage and proper architecture
These are the large-scale issues that can afflict cloud systems. I'm sure Amazon and Microsoft are learning from the mistakes made. Let's keep in mind that this is all still really new, and everyone is learning as we go along.
mladeb, User Rank: Light Beer, 5/4/2017 | 11:18:32 AM
AWS S3 outage and proper architecture
Northern Virginia is the cheapest AWS region, and many services, including AWS's own dashboard, do not follow AWS's high-availability architecture recommendations: they use only one region, the cheapest one, even for valuable services and data.

Backup and disaster recovery are also an issue, and using multiple public cloud providers for the most valuable services and data makes sense. Especially in the case of natural disasters, public cloud providers should ensure that the rest of their regions can operate without disruption, and the outage proved that AWS can deliver on that expectation. The funny part of the story was that some of AWS's own applications were impacted due to single-region architecture, as were many other popular applications relying on the availability of only one region.

With automation scripts that take manual input there will always be the possibility of humans causing disaster, even with a well-defined scope, privileges and governance model. But the point is that, with proper architecture, the public cloud provides real benefits and can cope with both human-caused and natural disasters -- especially when multiple public clouds are used for a high-availability architecture.