IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these trends reflect changes well past the industry move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in the use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one-page site saying what the result will look like) was for 3p. My Digital Ocean server instance (which runs a network of WordPress sites), with 30GB flash storage, more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per-software-instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, both for economic reasons and because that’s where the experts who manage infrastructure and its security live, at scale.
  3. The war stage of infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, SoftLayer among IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out, open source NoSQL (key:value and document-oriented) databases, and to components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network – a minimal sketch of the sharding setup follows this list. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code so it can run on multiple cloud providers: FOG in the Ruby ecosystem, for one (a Python-flavoured sketch of the same idea follows this list). Cloud Foundry (branded Bluemix at IBM) is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon Lambda (a minimal handler sketch follows this list). There are actually two different entities in the mix: one where you provide code and pay per invocation against external events, the other where a service scales (or contracts) in real time as demand flexes. Either way, all knowledge of the underlying environment is abstracted away.
  8. Google, Azure and, to a lesser extent, AWS are packaging up API calls for various core services and machine learning facilities. For example, I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face in the picture (anchored at the top of the nose), the face bounds, and whether each is smiling or not (a sketch of that call follows this list). Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal intranet-type apps, not ones exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser vendors, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web App). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against the axes of product life cycle and value chains – and to suss the likely movement of components (which also tells you where to apply Six Sigma and where to apply agile techniques within the same organisation). All of this is more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
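
As promised in point 4, here is a minimal sketch of enabling sharding on a MongoDB cluster from Python. It assumes a mongos query router is already running with shards attached; the database, collection and key names are illustrative, not from my actual setup:

```python
# Minimal sharding sketch (assumptions: pymongo installed, a mongos
# router on localhost:27017, shards already added to the cluster).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connect to the mongos router

# Enable sharding on the database, then shard a collection on an index key;
# reads then spread across shards based on key ranges.
client.admin.command("enableSharding", "appdata")
client.admin.command("shardCollection", "appdata.posts", key={"author_id": 1})

# Writes route to the shard owning the relevant key range; replica set
# secondaries back the data up on the fly, including across a WAN.
client.appdata.posts.insert_one({"author_id": 42, "title": "Hello"})
```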
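
As a Python-flavoured illustration of the multi-cloud abstraction in point 6 (FOG itself being Ruby), Apache Libcloud exposes the same idea: one calling convention across providers. The credentials here are placeholders:

```python
# Sketch of provider abstraction with Apache Libcloud, a rough Python
# analogue of Ruby's FOG. Credentials below are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_nodes(provider, key, secret):
    """The same calling code works whichever provider is plugged in."""
    driver_cls = get_driver(provider)   # look up the provider's driver class
    driver = driver_cls(key, secret)    # authenticate against that cloud
    return driver.list_nodes()

# Swapping Provider.EC2 for another supported provider changes the backend,
# not the calling code -- which is the whole point of the abstraction.
for node in list_nodes(Provider.EC2, "ACCESS_KEY", "SECRET_KEY"):
    print(node.name, node.state)
```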
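
For point 7, a minimal sketch of the pay-per-invocation model: a Python AWS Lambda handler reacting to S3 upload events. AWS spins up (or retires) instances of this function as events arrive, with no server for you to manage; the event wiring and names are illustrative:

```python
# Minimal AWS Lambda handler sketch. Assumes the function is wired to an
# S3 bucket's upload events; you pay per invocation, and AWS scales (or
# contracts) the number of live instances as demand flexes.
import json

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        # Each record describes one uploaded object.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print("New object: s3://%s/%s" % (bucket, key))
    return {"statusCode": 200,
            "body": json.dumps({"processed": len(records)})}
```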
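
And for point 8, a sketch of the Vision API face-detection call described there, made via the public REST endpoint. The API key and file name are placeholders; each detected face comes back with its bounding polygon and a “joy” (smiling) likelihood:

```python
# Sketch of a Google Vision API FACE_DETECTION request over REST.
# API key and image file name are placeholders.
import base64
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

body = {"requests": [{
    "image": {"content": image_b64},
    "features": [{"type": "FACE_DETECTION", "maxResults": 10}],
}]}

response = requests.post(URL, json=body).json()
for face in response["responses"][0].get("faceAnnotations", []):
    # Bounding polygon of each detected face, plus a smiling likelihood.
    print(face["boundingPoly"]["vertices"], face["joyLikelihood"])
```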

There is quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Reinventing Healthcare

Comparison of US and UK healthcare costs per capita

A lot of the political effort in the UK appears to circle around a government justifying the handing off of parts of our NHS delivery assets to private enterprises, despite the ultimate model (that of the USA healthcare industry) costing significantly more per capita. Outside of politicians lining their own pockets in the future, it would be easy to conclude that few would benefit from such changes; the moves appear to be both economically farcical and firmly against the public interest. I’ve not yet heard any articulation of a view that indicates otherwise. Less well discussed are the changes that are coming, and where the NHS is uniquely positioned to pivot into the future.

There is significant work to capture the DNA of individuals, but DNA is fairly static over time. It is estimated that there are 10^9 such data points per individual, but there are many other data points – ones that change along a long timeline – that could be even more significant in helping to diagnose unwanted conditions in a timely fashion, and hence to flip the industry away from symptom-based healthcare and towards working almost exclusively on prevention.

I think I was on the right track with an interest in Microbiome testing services. The gotcha is that commercial services like uBiome, and public research like the American (and British) Gut Project, are one-shot affairs. Processing a stool, skin or other location sample takes circa 6,000 hours of CPU wall time to reconstruct the 16S rRNA gene sequences of a statistically valid population profile. This was something I thought I could get to a super fast turnaround using excess capacity (spot instances – excess compute power you can bid to consume when available) at one or more of the large cloud vendors, and then build a data asset that could use machine learning techniques to spot patterns in people who later get afflicted by an undesirable or life-threatening medical condition.
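
For what it’s worth, bidding for that excess capacity is straightforward to sketch with boto3; something along these lines would fan the reconstruction pipeline out across cheap spot workers (the AMI, bid price, instance type and count are placeholders, not figures from my plan):

```python
# Rough sketch: bid for excess EC2 capacity (spot instances) to run a
# compute-heavy pipeline cheaply. AMI ID, bid price, instance type and
# count below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.request_spot_instances(
    SpotPrice="0.10",        # maximum hourly bid in USD
    InstanceCount=20,        # number of workers to fan the job across
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder pipeline image
        "InstanceType": "c4.8xlarge",
    },
)
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```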

The primary weakness in the plan is that you can’t suss the way a train is travelling by examining a photograph taken looking down at a static railway line. You need to keep the source sample data (not just a summary) and measure at regular intervals; an incidence of salmonella can routinely knock out 30% of your Microbiome population inside 3 days before it recovers. The profile also flexes wildly based on what you eat and other physiological factors.

The other weakness is that your DNA and your Microbiome samples are not the full story. There are many other potential leading indicators that could determine your propensity to become ill that we’re not even sampling. Which of our 10^18 different data points are significant over time, and how regularly we should be sampled, remain open questions.

Experience in the USA shows that environments where regular preventative check-ups of otherwise healthy individuals take place – dentistry being the prime example – have managed to lower the cost of service delivery by 10% at a time when the rest of the health industry has seen 30-40% cost increases.

So, what are the measures that should be taken, how regularly and how can we keep the source data in a way that allows researchers to employ machine learning techniques to expose the patterns toward future ill-health? There was a good discussion this week on the A16Z Podcast on this very subject with Jeffrey Kaditz of Q Bio. If you have a spare 30 minutes, I thoroughly recommend a listen: https://soundcloud.com/a16z/health-data-feedback-loop-q-bio-kaditz.

That said, my own savings are such that I have to refocus my efforts elsewhere, back in the IT industry, and my Microbiome testing service business plan is mothballed. The technology to sample a big enough population regularly is not yet deployable in a cost-effective fashion, but it will come. When it does, the NHS will be uniquely positioned to pivot into the sampling and preventative future of healthcare unhindered.