I’m writing this post on my way home from the Amazon Web Services (AWS) re:Invent conference in Las Vegas, having spent a day in an analyst-only preview of all the news from the show.
As I write this, the 30,000 or so attendees at the show are blissfully unaware of just how much AWS news is coming down the wire. It seems a little strange to have left the event before it has actually started for everyone else, but 12 hours spent at 30,000 feet is a great way to ruminate over what I’ve just heard.
There will, of course, be innumerable articles detailing what AWS is rolling out: the Snowball Edge, a hardware offering designed to be situated in remote locations (think corporate data centers, or sites such as wind farms and freight ships); new compute instances that provide far more power, and far more prospects for highly specific use cases; and Amazon Athena, an interactive serverless query service that can run standard SQL queries over data stored in Amazon’s S3 storage service.
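To make Athena concrete, here is a minimal sketch of what running a query looks like from Python’s boto3 SDK. The database, table and bucket names are hypothetical placeholders, and a table is assumed to already be defined over the underlying S3 data.

```python
import time

import boto3

# Sketch: run a standard SQL query against data in S3 via Athena.
# "analytics", "web_logs" and the results bucket are placeholders.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Athena runs queries asynchronously, so poll until the query finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```

The notable design point is that there is no cluster to provision: the query runs against S3 directly and you pay per query, which is what “serverless” means here.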
Then there are the oft-rumored, and now confirmed, Amazon artificial intelligence offerings: powerful AI tools that can readily be used by developers, offering image analysis (via Amazon Rekognition), text to speech (via Amazon Polly), and speech recognition and natural language understanding (via Amazon Lex). All of these tools are obviously developed off the back of services that Amazon itself offers; in particular, Lex is a none-too-subtle reference to Amazon Alexa, the voice assistant from which it saw its genesis.
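To give a flavor of how readily usable these services are, here is a hedged boto3 sketch of the two simplest ones. The voice, bucket and image names are hypothetical; the calls shown are the standard Polly and Rekognition APIs.

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")
rekognition = boto3.client("rekognition", region_name="us-east-1")

# Amazon Polly: turn a string of text into spoken audio.
speech = polly.synthesize_speech(
    Text="Hello from re:Invent.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Amazon Rekognition: label objects and scenes in an image stored in S3.
# The bucket and object names here are placeholders.
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-images", "Name": "conference.jpg"}},
    MaxLabels=5,
)
for label in labels["Labels"]:
    print(label["Name"], label["Confidence"])
```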
There is the introduction of PostgreSQL compatibility into AWS’s Aurora database offering, and there is AWS Greengrass, which allows customers to embed AWS’s Lambda serverless computing offering into connected devices, enabling local compute, storage, messaging and networking.
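The functions Greengrass runs on-device are ordinary Lambda handlers. A minimal Python sketch follows, with an entirely hypothetical event shape, just to show the kind of local decision-making this enables when a round trip to the cloud is impractical.

```python
# A minimal Lambda-style handler of the sort Greengrass runs on-device.
# The event fields (sensor_id, temperature_c) are hypothetical.
def handler(event, context):
    # React locally to a sensor reading, even while disconnected.
    if event.get("temperature_c", 0) > 80:
        return {"action": "shutdown", "sensor": event.get("sensor_id")}
    return {"action": "ok", "sensor": event.get("sensor_id")}
```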
There is Silvermine, a personalized service health dashboard that helps AWS customers get a better handle on the operations of their particular AWS stack. There is OpsWorks, AWS’s orchestration offering, which now comes bundled with managed Chef. There is AWS X-Ray, a distributed application debugging service that allows users to debug and trace requests end to end. And there is AWS Glue, an offering that allows data warehouse users to perform their extract, transform and load (ETL) functions in the cloud.
There are virtual private servers priced at a paltry $5 per month, a move that could seriously hamper the huge growth seen by DigitalOcean. And there are the tools to bring it all together: AWS Step Functions lets users coordinate distributed applications by visually arranging components as a series of steps in a workflow. AWS Shield offers integrated DDoS protection for AWS services such as ELB, CloudFront and Route 53. Blox is a collection of open source projects for cluster and container management, offering scheduling and orchestration functions.
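Under the visual editor, a Step Functions workflow is just a JSON state machine in the Amazon States Language. Here is a minimal sketch of a two-step sequence created via boto3; the Lambda ARNs, account ID and IAM role are placeholders.

```python
import json

import boto3

# Sketch: a two-step Step Functions workflow, expressed in the
# Amazon States Language. All ARNs below are placeholders.
definition = {
    "Comment": "Run an extract step, then a load step, in sequence",
    "StartAt": "Extract",
    "States": {
        "Extract": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Next": "Load",
        },
        "Load": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions", region_name="us-east-1")
sfn.create_state_machine(
    name="etl-demo",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",
)
```

The appeal is that retries, sequencing and branching live in the state machine rather than being hand-coded into each function.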
But it is, perhaps, Snowmobile, the announcement that gathered the largest number of gasps at the analyst event, that best exemplifies why AWS is so far in front, and how it manages to stay there.
Snowmobile is a massive, container-based storage device designed to be a “Snowball on steroids.” Organizations can use Snowmobile to physically move massive quantities of data from their on-premises storage to AWS. Snowmobile is a shipping container chock full of storage and networking capability, attached to a truck with its own security detail. It drives up to a customer’s premises, connects into the customer’s storage and slurps up the data they want transferred.
It then drives away, security detail on board, and transports said data to an AWS facility. We were told that there are already around 50 Snowmobiles deployed globally across four different AWS regions, and that customers are using them today to transfer data to AWS.
When the analysts were first told about Snowmobile, most thought it was a joke. When we found out it was legitimate, there was a general dropping of jaws. But upon further reflection, Snowmobile isn’t such a hard thing for AWS to pull off. Amazon, AWS’s parent company, runs one of the biggest logistics operations in the world: every day it arranges the transportation of hundreds of thousands of packages. It owns the logistics infrastructure, from warehouses to robotic picking and packing machines, from trucks to airplanes, and on top of that, bricks-and-mortar distribution locations.
Alongside that logistics capability, AWS runs one of the largest cloud computing infrastructure footprints on earth; it knows how to move, process and deal with data safely, securely and at the very highest performance levels. Put those two functions together, computing infrastructure and logistics control, and you quickly arrive at solutions that look not unlike Snowmobile.
And that is the amazing thing about Amazon. While its two main public cloud competitors, Google and Microsoft, have their own massive infrastructure footprints, and huge experience in technology, consumer hardware devices and developer ecosystems, Amazon alone combines experience at disrupting traditional bricks-and-mortar distribution models, experience at building out global-scale computing infrastructure, and a huge appetite for controlling every step in every process it is involved with.
In a presentation by James Hamilton, AWS’s eccentric and widely respected engineer, we all heard about how AWS has built its own firmware to sit on third-party power switches. What other vendor would design so deeply that it would customize something as mundane as a power switch that is only used in the unlikely event of a power outage?
And AWS’s reason for doing so? Because there is a one-in-a-million chance that when those power switches are activated, they may, seeing a risk of short circuit, refuse to let generator power flow to the data center. AWS wasn’t prepared to suffer an outage like that; it is both prepared to risk a million-dollar generator to ensure uptime, and to obsess about the tiniest of details to ensure the very highest of reliability. This is an organization that, as Hamilton told us, deploys every day the amount of computing capacity that would have supported the entire Amazon operation back in 2005, when Amazon was already an $8.5 billion revenue company.
At the start of the analyst event, Andy Jassy, AWS’s CEO and the guy who has been running the organization since its inception, gave the analysts a rundown of, and update on, the AWS business. He waxed poetic about AWS’s recently announced partnership with VMware, one that pragmatically allows organizations to move to the cloud within the context of their current reality, which is often a VMware-virtualized one. He talked about the investments AWS is making in its partner network, and the help it is giving those partners to build their own practices. He expressed the view that it takes time to build that sort of organizational muscle memory, and that AWS realizes it needs to help these young partners grow to maturity. The fact that Jassy, a hugely busy guy at re:Invent, dedicated over an hour to talking with a group of analysts is a good indication that AWS is truly taking the enterprise seriously.
Of course, there are the odd criticisms: AWS hasn’t got an awesome track record when it comes to open source, for example. And AWS needs to be careful about competing with its own ecosystem specifically, and with the broader industry generally. A case in point is Blox, a service that clearly has Kubernetes and Docker in its sights; AWS needs to be careful not to be seen as too voracious.
But, at the end of the day, it all comes back to customers. It’s true that AWS won’t own 100% of the cloud computing market; other vendors certainly have a part to play. But the incredible positivity of AWS customers, even before they had heard any details of the announcements from the show, is an indication of just how well AWS is executing. Gartner already estimates that AWS is an order of magnitude bigger than its nearest public cloud competitor, and its rate of innovation seems just as far ahead.
Extrapolate that out over a few years and you have a vendor of incredible breadth and depth, and one which will likely continue to define modern IT for the foreseeable future.