Does your WordPress website go over a cliff in July 2018?

Secure connections, faster web sites, better Google search rankings – and well before Google throw a switch that will disadvantage many other web sites in July 2018. I describe the process to achieve this for anyone running a WordPress Multisite Network below. Or I can do this for you.

Many web sites that handle financial transactions use a secure connection; this gives a level of guarantee that you are posting your personal or credit card details directly to a genuine company. But these “HTTPS” connections don’t just protect user data; they also ensure that the user is really connecting to the right site and not an imposter one. This is important because setting up a fake version of a website users normally trust is a favourite tactic of hackers and malicious actors. HTTPS also ensures that a malicious third party can’t hijack the connection and insert malware or censor information.

Back in 2014, Google asked web site owners to make their sites use HTTPS connections all the time, and provided both a carrot and a stick as incentives. The stick: they promised that future versions of their Chrome browser would explicitly call out sites presenting insecure pages, so that users knew where to tread very carefully. The carrot: they suggested that secure sites would be favoured over insecure ones in future Google search rankings.

The final step in this process comes in July 2018:

New HTTP Treatment by Chrome from July 2018

The logistics of achieving “HTTPS” connections for many sites are far from straightforward. Like many service providers, I host a WordPress network that points individual customer domain names at a single Linux-based server. That server in turn looks to see which domain name the inbound connection request has come from, and redirects into that customer’s own subdirectory structure for the page content, formatting and images.

The main gotcha is that if I tell my server that its certified identity is “www.software-enabled.com”, an inbound request for “www.ianwaring.com” or “www.obesemanrowing.org.uk” will look like a mismatch. To the browser it appears that someone has hijacked those sites, and the user’s session will be met with some very pointed warnings suggesting a malicious attempt to subvert their traffic.

A second gotcha – even if you solve the certified identity problem – is that a lot of the content of a typical web site contains HTTP (not HTTPS) links to other pages, pictures or video stored within the same site. It would normally be a considerable (and error-prone) process to change http: to https: links on every page, not least because the pages for all the different customer sites are stored by WordPress inside a complex MySQL database.

What to do?

It took quite a bit of research, but I cracked it in the end. The process I used was:

  1. Set up each customer domain name on the free tier of the CloudFlare content delivery network. This replicates local copies of the web site’s static pages in locations around the world, each closer to the user than the web site itself.
  2. Change the customer domain name’s Name Servers to the two cited by CloudFlare in step (1). It may take several hours for this change to propagate around the Internet, but there is no harm in continuing with the remaining steps while you wait (a quick way to check propagation is sketched just after this list).
  3. Repeat (1) and (2) for each site on the hosted WordPress network.
  4. Select the WordPress “Network Admin” dashboard, and install two plug-ins; iControlWP’s “CloudFlare Flexible SSL”, and then WebAware’s “SSL Insecure Content Fixer”. The former handles the connections to the CloudFlare network (ensuring routing works without unexpected redirect loops); the latter changes http: to https: connections on the fly for references to content within each individual customer website. Network Enable both plugins. There is no need to install the separate CloudFlare WordPress plugin.
  5. Once CloudFlare’s web site shows that each domain name is verified as being managed by CloudFlare’s own name servers, with their certificates assigned (each one gets a warning or a tick against it), step through the “Crypto” screen on each in turn – switching on “Always use HTTPS” redirections.
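
Step (2) can be verified with a quick lookup once you think propagation has happened. A minimal sketch in Python, assuming the dnspython package is installed; the domain name is hypothetical:

```python
# Check whether a domain's authoritative name servers are now CloudFlare's.
# Assumes the dnspython package (pip install dnspython).
import dns.resolver

def uses_cloudflare_ns(domain: str) -> bool:
    answers = dns.resolver.resolve(domain, "NS")
    nameservers = [str(rr.target).rstrip(".").lower() for rr in answers]
    print(domain, "->", nameservers)
    return all(ns.endswith("ns.cloudflare.com") for ns in nameservers)

if __name__ == "__main__":
    print(uses_cloudflare_ns("example-customer-site.com"))   # hypothetical domain
```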

At this point, whether users access the websites using http: or https: (or omit the scheme entirely), each will come up with a padlocked, secure, often green address bar with “https:” in front of the web address of the site. Job done.

Once the HTTP redirects to HTTPS appear to be working, and all the content is being displayed correctly on pages, I go down the Crypto settings on the CloudFlare web site and enable “opportunistic encryption” and “HTTPS rewrites”.
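
To sanity-check that stage, a short script can request the plain http: version of each site, confirm it lands on https:, and scan the returned HTML for any leftover insecure references. A minimal sketch, assuming Python with the requests library; the domain names are hypothetical:

```python
# Confirm http:// requests redirect to https:// and look for any remaining
# insecure src/href references in the returned page.
import re
import requests

SITES = ["www.example-site-one.com", "www.example-site-two.org"]   # hypothetical

for host in SITES:
    response = requests.get(f"http://{host}/", timeout=10, allow_redirects=True)
    insecure_refs = re.findall(r'(?:src|href)="http://[^"]+"', response.text)
    print(f"{host}: landed on {response.url}, "
          f"{len(insecure_refs)} insecure reference(s) found")
```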

In the knowledge that Google also give faster sites better rankings in search results than slow ones, there is also a “Speed” section on the CloudFlare web site. On this, I’ve told it to compress CSS, JavaScript and HTML pages – termed “Auto Minify” – to minimise the amount of data transmitted to the user’s browser while still rendering it correctly. This, in combination with a plug-in I use to serve Google’s AMP (Accelerated Mobile Pages) versions of pages – which in turn can give 3x load speed improvements on mobile phones – means all the customer sites are really flying.

CloudFlare do have a paid offering called “Argo Smart Routing” that further speeds up delivery of web site content. Customers are cited as paying $5/month and seeing pages load in around 35% of the time they took before it was enabled. You do start paying for the amount of traffic you’re pushing out to the Internet at large, but the pricing tiers are very generous – and should only become noticeable for high traffic web sites.

So, secure connections, faster web sites, better Google search rankings – and well before Google throw the switch that will disadvantage many other web sites in July 2018. I suspect having hundreds of machines serving the content on CloudFlare’s Content Delivery Network will also make the site more resilient to distributed denial of service flood attack attempts, if any site I hosted ever got very popular. But I digress.

If you would like me to do this for you on your WordPress site(s), please get in touch here.

A small(?) task of running up a Linux server to run a Django website

Django 1.11 on Ubuntu 16.04

I’m conscious that the IT world is moving in the direction of “Serverless” code, where business logic is loaded to a service and the infrastructure underneath is abstracted away. In that way, it can be woken from dormancy and scaled up and down automatically, in line with the size of the workload being put on it. Until then, I wanted (between interim work assignments) to set up a home project to implement a business idea I had some time back.

In doing this, I’ve had a tour around a number of bleeding-edge approaches: first a single page app written in JavaScript on Amazon AWS with Cognito and DynamoDB; then Polymer Web Components, which I stopped after it looked like Apple were unlikely to support them in Safari on iOS in the short term; then Firebase on Google Cloud, which was fine until I decided I needed a relational database for my app (I have MongoDB experience from 2013, but NoSQL schemas aren’t the right fit for this application); and finally Django, which seems to be gaining popularity these days, not least because it’s based on Python and is designed for fast development of database-driven web apps.

I looked for the simplest way to run up a service on each of the main cloud vendors. After half a day of research, I elected to try Django on Digital Ocean, where a “one click install” was available – the simplest route to Django on any of the major cloud vendors. It took 30 minutes end to end to run the instance up, ready to go; that was until I realised it was running an old version of Django (1.8) on Python 2.7 — a combination not supported by the (then) soon-to-be-released Django 2.0. So, off I went, building everything from the ground up.

The main requirement was that I was developing on my Mac, but running the production version in the cloud on a Linux instance — so I had to set up both. I elected to use PostgreSQL as the database, Nginx with Gunicorn as the web server stack, Let’s Encrypt (as recommended by the EFF) for certificates, and Django 1.11 — the latest version when I set off. The local development environment uses Microsoft Visual Studio Code alongside GitHub.
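
For reference, the database part of that stack boils down to a few lines in the Django settings file. A minimal sketch, assuming the psycopg2 driver is installed; the project name and credentials below are placeholders rather than my actual configuration:

```python
# settings.py fragment: point Django 1.11 at a local PostgreSQL database.
# Names and credentials are placeholders; in production they should come
# from environment variables rather than being hard-coded.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "myproject"),
        "USER": os.environ.get("DB_USER", "myproject_user"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```

Gunicorn then serves the project with something along the lines of `gunicorn myproject.wsgi:application` (again, the project name is a placeholder), with Nginx sitting in front to terminate the Let’s Encrypt certificates and proxy requests through.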

One of the nuances of Django is that users are normally expected to log in with a username that is different from their email address. I really wanted my app to use a person’s email address as their only login username, so I had to put customisations into the Django set-up to achieve that along the way.
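
The usual way to achieve that in Django is to swap in a custom user model before running the first migration. A minimal sketch of the approach – the app name, class names and manager below are illustrative rather than my exact code:

```python
# accounts/models.py: a custom user model that logs in by email address only.
from django.contrib.auth.models import AbstractUser, BaseUserManager
from django.db import models


class EmailUserManager(BaseUserManager):
    """Manager that creates users keyed on email rather than username."""

    def create_user(self, email, password=None, **extra_fields):
        if not email:
            raise ValueError("An email address is required")
        user = self.model(email=self.normalize_email(email), **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password=None, **extra_fields):
        extra_fields.setdefault("is_staff", True)
        extra_fields.setdefault("is_superuser", True)
        return self.create_user(email, password, **extra_fields)


class User(AbstractUser):
    username = None                      # drop the separate username field
    email = models.EmailField(unique=True)

    USERNAME_FIELD = "email"             # log in with the email address
    REQUIRED_FIELDS = []                 # no extra prompts for createsuperuser

    objects = EmailUserManager()

# settings.py then needs: AUTH_USER_MODEL = "accounts.User"
```

With that in place, the built-in authentication forms pick up the email address as the login identifier.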

A further challenge is that the target devices used by customers on other sites I run are heavily weighted towards mobile phones, so I elected to follow Google’s Material Design user interface guidelines. The Django implementation is built on an excellent framework I’ve used in another project, built by four Stanford graduates — MaterializeCSS — and supplemented by a lot of custom work on template tags, forms and layout directives by Mikhail Podgurskiy in a package called django-material (see: http://forms.viewflow.io/).

The mission was to get all the above running before I could start adding my own authentication and application code. The end result is an application that will work nicely on phones, tablets or PCs, resizing automatically as needed.

It turned out to be a major piece of work just getting the basic platform up and running, so I noted all the steps I took (as I went along) just in case this helps anyone (or the future me!) looking to do the same thing. If it would help you (it’s long), just email me at [email protected]. I’ve submitted it back to Digital Ocean, but I’m happy to share the step-by-step recipe.

Alternatively, hire me to do it for you!

IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these reflect changes well past the industry move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in the use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent video showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (that runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, GitHub and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as someone to “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per-software-instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, both for economic reasons and because that’s “where the experts who manage infrastructure and its security live” at scale.
  3. The war stage of infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, SoftLayer among IBM customers of old, Heroku with Salesforce, and probably a few hosting providers).
  4. The industry is moving to scale-out, open source, NoSQL (key:value, document-orientated) databases, and to components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into GitHub to see how to make things work.
  5. There is a lot of focus on using containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes, none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code so it can run on multiple cloud providers: FOG in the Ruby ecosystem, and Cloud Foundry (branded Bluemix at IBM), which is executing particularly well in large enterprises with investments in Java code. Amazon are trying pretty hard to get their partners to use functionality only available on AWS – a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless” apps, typified by Amazon Lambda. There are actually two different entities in the mix: one where you provide code and pay per invocation against external events, the other being able to scale (or contract) a service in real time as demand flexes. Either way, you abstract all knowledge of the underlying environment away.
  8. Google, Azure and to a lesser extent AWS are packaging up API calls for various core services and machine learning facilities. For example, I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, the face bounds, and whether each is smiling or not; another call can describe what’s in the picture (a minimal sketch of such a call follows this list). There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal intranet-type apps, not ones exposed outside the organisation. This is also an antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customers’ needs against available project components: to map user needs against the axes of product life cycle and value chains, and to suss the likely movement of components (which also tells you where to apply Six Sigma and where to apply agile techniques within the same organisation). This is more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
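
On point (8), the face detection call really is only a few lines. A minimal sketch, assuming the google-cloud-vision Python client library and valid application credentials; the image file name is purely illustrative:

```python
# Minimal sketch: ask Google's Vision API where the faces are in a JPEG and
# whether each one looks like it is smiling.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("family_photo.jpg", "rb") as f:          # illustrative file name
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

for face in response.face_annotations:
    corners = [(v.x, v.y) for v in face.bounding_poly.vertices]
    print("Face bounded by", corners, "- joy likelihood:", face.joy_likelihood)
```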

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (one even mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Officially Certified: AWS Business Professional

AWS Business Professional Certification

That’s added another badge, albeit the primary reason was to understand AWS’s products and services in order to suss how to build volumes for them via resellers – just in case I get the opportunity to be asked how I’d do it. However, looking over the fence at some of the technical accreditation exams, I appear to know around half of the answers already – but I need to work through those properly and take notes before attempting them.

(One of my old party tricks used to be that I could make it past the entrance exam required for entry into technical streams at Linux related conferences – a rare thing for a senior manager running large Software Business Operations or Product Marketing teams. Being an ex programmer who occasionally fiddles under the bonnet on modern development tools is a useful thing – not least to feed an ability to be able to spot bullshit from quite a distance).

The only AWS module I had any difficulty with was the pricing. One of the things most managers value is simplicity and predictability, but a lot of the pricing of the core services has dependencies where you need to know data sizes, I/O rates or the way your demand goes through peaks and troughs in order to arrive at an approximate monthly price. While most of the case studies amply demonstrate that you do make significant savings compared to running workloads on your own in-house infrastructure, I think typical values for common use cases would be useful. For example, if I’m running a SAP installation of specific data and access dimensions, what are the typical operational running costs – without needing to insert probes all over a running example to estimate it using the provided calculator?

I’d come back from a 7am gym session fairly tired and made the mistake of stepping through the pricing slides without making copious notes. I duly did all that module again and did things properly the next time around – and passed it to complete my certification.

The Lego bricks you snap together to design an application infrastructure are simple in principle and loosely coupled, and what Amazon have built is very impressive. The only thing not provided out of the box is the sort of simple developer bundle of an EC2 instance, some S3 and MySQL-based EBS storage, plus some open source AMIs preconfigured to run WordPress, Joomla, Node.js, LAMP or similar – with a simple weekly automatic backup. That’s what Digital Ocean provide for a virtual machine instance, with specific storage and high Internet transfer-out limits, for a fixed price per month. In the case of the WordPress network on which my customers and this blog run, that’s a 2-CPU server instance, 40GB of disk space and 4TB/month of data traffic for $20/month all in. That sort of simplicity is why many startup developers have done an exit stage left from Rackspace and their ilk, and moved to Digital Ocean in their thousands; it’s predictable and good enough as an experimental sandpit.

The ceiling at AWS is much higher when the application slips into production – which is probably reason enough to put the development work there in the first place.

I have deployed an Amazon WorkSpace to complete my 12 years of nutrition data analytics work using the Windows-only Tableau Desktop Professional – in an environment where I have no Windows PCs available to me. I’ve just used it on my MacBook Air and on my iPad Mini to good effect. That will cost me just north of £21 ($35) for the month.

I think there’s a lot that can be done to accelerate adoption rates of AWS services in Enterprise IT shops, both in terms of direct engagement and with channels to market properly engaged. My real challenge is getting air time with anyone to show them how – and in the interim, getting some examples ready in case I can make it in to do so.

That said, I recommend the AWS training to anyone. There is some training made available the other side of applying to be a member of the Amazon Partner Network, but there are equally some great technical courses that anyone can take online. See http://aws.amazon.com/training/ for further details.

Help available to keep malicious users away from your good work

Picture of a Stack of Tins of Spam Meat

One thing that still routinely shocks me is the sheer quantity of malicious activity that goes on behind the scenes of any web site I’ve put up. When we were building Internet Vulnerability Testing Services at BT, around 7 new exploits or attack vectors were emerging every 24 hours. Fortunately, for those of us who use Open Source software, the protections have usually been inherent in the good design of the code, and most exploits (OpenSSL Heartbleed excepted) have had no real impact with good planning. It all starts with closing off ports, and restricting access to some key ones so they only accept connections from known fixed IP addresses; that’s the first thing I did when I first provisioned our servers in Digital Ocean Amsterdam. I’m just surprised they don’t give you a template to work from – fortunately I keep my own default rules to apply immediately.
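
By way of illustration only – this is a minimal sketch of the sort of default rules I mean, not a copy of my actual rule set – a short Python script that shells out to iptables to set a default-deny inbound policy, with SSH restricted to one fixed IP address (the address and port choices are placeholders; run as root):

```python
# Apply a default-deny inbound firewall with a few explicit exceptions.
# The IP address is a documentation-range placeholder, not a real one.
import subprocess

HOME_IP = "203.0.113.10"   # placeholder: your own fixed IP address

RULES = [
    # keep established sessions and local traffic working
    ["iptables", "-A", "INPUT", "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],
    # SSH only from a known fixed IP address
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22", "-s", HOME_IP, "-j", "ACCEPT"],
    # web traffic open to the world
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "80", "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "443", "-j", "ACCEPT"],
    # everything else inbound gets dropped by default
    ["iptables", "-P", "INPUT", "DROP"],
]

for rule in RULES:
    subprocess.run(rule, check=True)
```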

With WordPress, it’s required an investment in a number of plugins to stem the tide. Basic ones like Comment Control, which can lock down pages, posts, images and attachments from having comments added to them (by default, a spammer’s paradise). Where you do allow comments, you install the WordPress-provided Akismet, which classifies at least 99% of the spam attempts and sticks them straight in the spam folder. For my part, I choose to moderate any comment from someone I’ve not approved content from before, and am totally ruthless with any attempt at social engineering; the latter because if they post something successfully with approval a couple of times, their later comment spam with unwanted links gets onto the web site immediately until I notice and take it down. I prefer to never let them get to that stage in the first place.

I’ve been setting up a web site on our network for my daughter-in-law to allow her to blog about mental health issues for children, including ADHD, Aspergers and related conditions. For that, I installed BuddyPress to give her user community a discussion forum, and went to bed knowing I hadn’t even put her domain name up – it was just another set of deep links into my WordPress network at the time.

By the morning: 4 user registrations, 3 of them with spoof addresses. Duly removed, and the ability to register usernames turned off completely while I fix things. I’m going to install WP-FB-Connect to allow Facebook users to work on the site using their Facebook login credentials, and WangGuard to stop the “Splogger” bots. That is free for the volume of usage we expect (and the commercial dimensions of the site – namely non-profit and charitable), and appears to do a great job sharing data on who and where these attempts come from. I’ve just got to check that turning these on doesn’t throw up a login request if users touch any of the other sites in the WordPress network we run on our servers, whose user communities don’t need to log on at all.

Unfortunately, progress was rather slowed down over the weekend by a reviewer from Kenya who published a list of the 10 best add-ons for BuddyPress, #1 of which was a social network login product that could authenticate with Facebook or Twitter. Lots of “Great article, thanks” replies. In reality, it didn’t work with BuddyPress at all! I duly posted back to warn others – if indeed he lets that news of his incompetence in this instance through to his readers.

As it is, a lot of WordPress plugins (there are circa 157 of them for social site authentication alone) are of variable quality. I tend to judge them by the number of support requests that have been resolved quickly in the previous few weeks – one nice feature of the plugin listings provided. I also have formal support contracts with CyberChimps (for some of their themes) and with WPMU DEV (for some of their excellent Multisite add-ons).

That aside, we now have the network running with all the right tools and things seem to be working reliably. I’ve just added all the page hooks for Google Analytics and Bing Webmaster Tools to feed from, and all is okay at this stage. The only thing I’d like to invest in is something to watch all the various log files on the server and to give me notifications if anything awry is happening (like MySQL claiming an inability to connect to the WordPress database, or Apache spawning multiple instances and running out of memory – something I had in the early days when the Google bot was hitting specific web pages, since fixed).
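
Until I find a packaged tool for that, a crude version is only a few lines: follow the log files and raise an alert when a worrying pattern appears. A minimal sketch – the file path, patterns and notification method are illustrative only:

```python
# Follow one log file (like 'tail -f') and flag worrying lines.
# Swap print() for an email or similar notification as preferred.
import time

LOG_FILE = "/var/log/mysql/error.log"        # illustrative path
PATTERNS = ["Can't connect", "Too many connections", "Out of memory"]

def follow(path):
    """Yield lines appended to the file after we start watching."""
    with open(path, "r", errors="ignore") as f:
        f.seek(0, 2)              # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(2)     # nothing new yet; wait and try again

for line in follow(LOG_FILE):
    if any(pattern in line for pattern in PATTERNS):
        print(f"ALERT [{LOG_FILE}]: {line.strip()}")
```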

It’s just a shame that there are still so many malicious link spammers out there; they waste 30 minutes of my day, every day, just clearing their useless gunk out. But thank God that Google are now penalising these very effectively; long may that continue, and hopefully the realisation of the error of their ways will lead them to become more useful members of the worldwide community in future.

Public Clouds, Google Cloud moves and Pricing

Google Cloud Platform Logo

I went to Google’s Cloud Platform Roadshow in London today, nominally to feed my need to rationalise the range of their Cloud offerings. This was primarily for my potential future use of their infrastructure, and to learn what I could of any nuances present. Every provider has them, and I really want to do a good job of simplifying the presentation for my own sales material use – but not to oversimplify to the point of making the advice unusable.

Technically overall, very, very, very impressive.

That said, I’m still in three minds about the way the public cloud vendors price their capacity. Google have gone to great lengths – they assure us – to simplify their pricing structure against industry norms. They cited industry prices coming down by 6-8% per year, while the underlying hardware follows Moore’s Law much more closely – at 20-30% per annum lower.

With that, Google announced a whole raft of price decreases of between 35% and 85%, accompanied by simplifications that commit to:

  • No upfront payments
  • No Lock-in or Contracts
  • No Complexity

I think it’s notable that as soon as Google went public with that a few weeks back, they were promptly followed by Amazon Web Services, and more recently by Microsoft with their Azure platform. The outside picture is that they are all in a race, nip and tuck – well, all chasing the volume that is Amazon, but trying to attack from underneath, a usual industry playbook.

One graph came up showing that when a single virtual instance is fired up, it costs around 7c per hour if used for up to 25% of the month – after which the effective hourly cost straight-lines down. If the instance is up all month, it was suggested a discount of around 30% would apply, which works out at a monthly cost of circa $36.
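
Working that through, using the figures quoted in the session (so treat them as illustrative rather than a current price list):

```python
# Illustrative arithmetic only, based on the figures quoted in the session.
hourly_rate = 0.07                                  # ~7c per hour
hours_in_month = 24 * 30                            # ~720 hours
full_price = hourly_rate * hours_in_month           # ~$50 if billed flat
sustained_use_price = full_price * (1 - 0.30)       # ~30% off when up all month
print(round(full_price, 2), round(sustained_use_price, 2))   # 50.4 and 35.28
```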

Meanwhile, the Virtual Instance (aka Droplet) running Ubuntu Linux and my WordPress Network on Digital Ocean, with 30GB flash storage and a 3TB/month network bandwidth, currently comes out (with weekly backups) at a fixed $12 for me. One third the apparent Google price.

I’m not going to suggest they are in any way comparable. The Digital Ocean droplet was pretty naked when I ran it up for the first time. I had to secure it very quickly (setting up custom iptables rules to close off the common ports, and ensuring secure shell only worked from my home fixed IP address) and spend quite some time configuring WordPress and the associated email infrastructure. But now it’s up, it’s there and the monthly cost is very predictable. I update it regularly and remove comment spam daily (ably assisted by a WordPress add-in). The whole shebang certainly doesn’t have the growth potential that Google’s offerings give me out of the box, but like many developers, it’s good enough for its intended purpose.

I wonder if Google, AWS, Microsoft and folks like Rackspace buy Netcraft’s excellent monthly hosting provider switching analysis. They all appear to be ignoring Digital Ocean (and certainly don’t appear to be watching their own churn rates to the extent most subscription-based businesses watch them, like a hawk) while that company is outgrowing everyone else in the industry at the moment. They are the one place that is absorbing developers, and taking thousands of existing customers away from all the large providers. In doing so, they’ve recently landed a funding round from VC Andreessen Horowitz (aka “A16Z” in the industry) to continue to push that growth. Their key audience, that of Linux developers, being the seeds from which many valuable companies and services of tomorrow will likely emerge.

I suspect there is still plenty of time for the larger providers to learn from their simplicity – of both the pricing, and the way in which pre-configured containers of common Linux-based software stacks (WordPress, Node.js, LAMP, email stacks, etc) can be deployed quickly and inexpensively. If indeed they see Digital Ocean as a visible threat yet.

In the meantime, I’m trying to build a simple piece of work that articulates how each of the key public cloud vendors’ services is structured, from the point of view of the time-pressured, overly busy IT Manager (much as I did for the DECdirect Software catalogue way back when). I’m scheduled to have a review of AWS at the end of April to this end. A simple few spreads of comparative collateral appears to be the missing reference piece in the industry to date.

Pricing: How low can you go?

Limbo Dancer under very low pole

While I was at Demon Internet, and a good year before Amazon appeared in the UK, we used to promote a small local company called Bookpages, who were selling books online. At one point, I heard that US-based Amazon had a meeting with the Directors of the company in London, so guessed they’d enter the UK soon – but kept absolutely quiet. In the event, they jumped into the UK market by buying Bookpages, inheriting all their management team – all a complete surprise to me. Just very glad that I had kept shtum throughout.

Around a year later, I called in to see the Business Development Director in Amazon Slough for a chat about advertising to our customers. I was offered a tour after our meeting; I ended up confronted with a football-pitch-sized warehouse that looked exactly like this:

Amazon Book Warehouse
Having been used to walking around warehouses from my time in IT distribution, I asked the Business Development Director how many days of inventory were in the building. He said: 2 days. Like, wow – they’d fill and empty that warehouse around 180 times a year; the scale was absolutely intimidating.
 
We finished the tour passing the packing/shipping area, where a flood of books were being served on conveyor belts to four or so teams; all items relentlessly being sealed into cardboard packing to the incessant bass of loud beat music, and sent over the loading bay into one of the waiting 40 ton Royal Mail lorries.
 
Genius
 
I’ve been a customer of Amazon ever since, and these days hold shares in the company. At some point I’ll get the bandwidth to read The Everything Store: Jeff Bezos and the Age of Amazon, one book waiting for me on my iPad. There are several strokes of genius in their business model, one of which is their focus on living on the bottom rung of the value chain ladder – sucking all the oxygen out from potential competitors trying to attack them from underneath, which is the way most large companies get disrupted.
 
I found this great article that explains Amazon’s pricing strategy very eloquently. It’s also the first time I’ve heard that Apple rotate their stock faster than Amazon do, which is an amazing feat for a manufacturing company.
 
 

Amazon Web Services

The one surprise to me these days is the public perception of Amazon Web Services as the 800-pound industry gorilla, selling cloud computing capacity at the lowest prices, which keep ratcheting down as their scale advantages allow them to do so. The largely unknown secret is that they are being completely murdered at the low end, and with software developers, by relative newcomer Digital Ocean, who have recently received VC funding from Andreessen Horowitz (A16Z).

Future Trouble at t’Mill?

The WordPress network from which this site is served is hosted on Digital Ocean in Amsterdam – costing $12/month for a Linux virtual server, 30GB of flash storage and 3TB of network capacity per month, which includes the cost of backups and snapshots. When I talk to AWS, and indeed to Google, it doesn’t take long to be offered special deals covering the first $2,000 of my hosting cost – which suggests their pricing is way higher than what I’m already able to develop on. It is probably more sophisticated than I need right now, and I guess it’ll be some time before I need to scale to a size that becomes interesting to them.

Amazon are far from alone. While folks like Rackspace are a leading proponent of OpenStack to commoditise hosting centre infrastructure, Digital Ocean are waltzing away with thousands of their previous customers; it is almost as if Rackspace are paying no attention to Netcraft’s hosting provider switching stats – while at the same time issuing profit warnings of their own.

I wonder if Amazon will start feeling the same heat in the months ahead – and whether they are likely to address it before Digital Ocean go flying past.