IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these trends reflect changes well past the industry move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in the use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (that runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per software instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that’s “where the experts to manage infrastructure and its security live” at scale.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, Softlayer in IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source NoSQL (key:value, document-orientated) databases, and components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network; a minimal sketch of that setup appears after this list. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment. Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers. FOG in the Ruby ecosystem. CloudFoundry (termed BlueMix in IBM) is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon Lambda. There are actually two different entities in the mix: one where you provide code and pay per invocation against external events, the other where a service scales (or contracts) in real time as demand flexes. You abstract all knowledge of the environment away. A sketch of a minimal function handler also follows this list.
  8. Google, Azure and to a lesser extent AWS are packaging up API calls for various core services and machine learning facilities. For example, I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, the face bounds, and whether each is smiling or not. Another call can describe what’s in the picture (a sketch of such a call appears after this list). There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal intranet type apps, not ones exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against axes of product life cycle and value chains, and to suss the likely movement of components (which also tells you where to apply Six Sigma and where to apply agile techniques within the same organisation). This is all more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
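To make item 4 a little more concrete, here is a minimal sketch of that shard-plus-replica setup using the pymongo driver. The hostnames, database name and shard key are hypothetical, and a real deployment needs config servers and mongos routers stood up first; treat it as a shape, not a recipe.

```python
# Minimal sketch (pymongo): talk to a sharded, replicated MongoDB
# deployment, spreading reads across secondary replicas.
# Hostnames, database/collection names and the shard key are hypothetical.
from pymongo import MongoClient

# Connect via the mongos query routers sitting in front of the shards.
client = MongoClient(
    "mongodb://mongos1.example.net:27017,mongos2.example.net:27017/",
    readPreference="secondaryPreferred",  # let slave replicas take read load
    w="majority",                         # writes acknowledged by a majority
)

# One-off administrative step: shard the collection on an index key range.
client.admin.command("enableSharding", "telemetry")
client.admin.command(
    "shardCollection", "telemetry.readings", key={"sensor_id": 1, "ts": 1}
)

db = client["telemetry"]
db.readings.insert_one({"sensor_id": "s-42", "ts": 1483228800, "value": 21.7})
print(db.readings.count_documents({"sensor_id": "s-42"}))
```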
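For item 7, this is roughly the shape of a pay-per-invocation function in the AWS Lambda style; the event fields assume the standard S3 upload notification layout, and the processing step is left as a stub.

```python
# Sketch of a "Function as a Service" handler in the AWS Lambda style:
# no servers to manage, billing per invocation, and the platform scales
# the function out (or back to zero) as events arrive. The event fields
# below assume an S3 object-created notification.
import json

def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # ...do the real work here: thumbnail an image, index a document, etc.
    print(f"Processing s3://{bucket}/{key}")

    return {"statusCode": 200, "body": json.dumps({"processed": key})}
```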
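And for item 8, roughly what the face-detection call looks like with Google’s google-cloud-vision Python client; the filename is hypothetical and application credentials are assumed to already be configured in the environment.

```python
# Sketch: send a JPEG to Google's Vision API and print the bounding box
# and "joy" (smiling) likelihood for each face found. Assumes the
# google-cloud-vision client library is installed and credentials are
# already configured; the image filename is hypothetical.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("family_photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

for face in response.face_annotations:
    box = [(v.x, v.y) for v in face.bounding_poly.vertices]
    print(f"face at {box}, joy likelihood: {face.joy_likelihood}")
```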

There is quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, get hunches of what may be afflicting her health, and then endure a succession of “oh, that didn’t work – try this instead” visits over several weeks. I just wonder how much cost could be squeezed out of the process – and how many secondary conditions could be avoided – if the root causes were much easier to identify reliably. I then wonder if there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi enabled, low end Bluetooth sensor hubs aka the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake) every day since June 2002. A precursor to the future where devices can keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self service health. My Apple Watch has a year’s worth of heart rate data. But what signals would be far more compelling for identifying the root causes of a wider variety of (lack of) health conditions, if only they were available?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence the complete Genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide component – A/T or C/G combinations) runs to 3GB of base pairs. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases, hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. It is notable that there are Amazon Web Services and Google employees participating in this effort.

However, I wonder if we’re missing a big and potentially just as important data asset; that of the profile of bacteria that everyone is dependent on. We are each home to approx. 10 trillion human cells among the 100 trillion microbial cells in and on our own bodies; you are 90% not you.

While our human DNA is 99.9% identical to that of any person next to us, the profiles of our MicroBiomes are typically only 10% similar; our age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever changing set of weather conditions that either influence our health – or are leading indicators of something being wrong – or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently from decisions upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays rake thin; the composition of microbes in their gut is a significant factor.

Meanwhile, the costs of DNA sequencing and compute power have dropped to the point where an analysis of our microbe ecosystems that cost around $100M a decade ago now costs some $100. It should continue on that downward path to a level where regular personal sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed back discoveries to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on Human DNA Genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of fewer than 4,000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct wholesale analyses – to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at the University of California, San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at King’s College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing and then compute batch process.

In parallel, the VC funded company uBiome provides sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other four area samples we’ve submitted), though to the best of my knowledge they are not currently sharing the captured data. That said, the analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over and under represented compared to all samples they’ve received to date), but they do not currently communicate any health related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than 83% of folks they’ve sampled to date, which is an analogue for being more healthy than most; those bacteria that are over represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella over a three-day period (the green in the top of this picture; the x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population is displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this will give an indication of the relative species population growth rates from otherwise static data.
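As a rough sketch of that growth-rate inference: actively dividing bacteria carry more copies of the DNA near the replication origin than near the terminus, so the origin-to-terminus coverage ratio is a proxy for how fast each species is multiplying. The numbers below are made up purely for illustration.

```python
# Rough sketch of inferring relative growth rates from a single sample
# via origin:terminus read coverage ratios. All numbers are made up.
coverage = {
    # species: (reads mapped near replication origin, reads near terminus)
    "Bacteroides fragilis":  (1800, 900),
    "Escherichia coli":      (1200, 1000),
    "Faecalibacterium sp.":  (950, 940),
}

for species, (origin, terminus) in coverage.items():
    ratio = origin / terminus
    label = "growing fast" if ratio > 1.5 else "roughly steady"
    print(f"{species}: origin/terminus = {ratio:.2f} ({label})")
```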

I am not aware of anyone offering a faster turnaround service, nor one that can map several successive, time-gapped samples, let alone one that can convey the health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of the bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals in isolating indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed to provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome issue to be completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end to end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to amplify/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are as happy as Larry in the nitrogen of the gut, but perish on exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “Chime”) pattern matching code take advantage of cloud spot instances, including Hadoop or Spark, to speed the turnaround time from sequencing reads to the resulting species type:volume value pairs? (A minimal sketch of the spot instance side of this follows this list.)
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?
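On the cloud side of question 4, bidding for spare capacity is straightforward; a minimal sketch with boto3 follows, where the AMI id (assumed to have the QIIME/Spark pipeline baked in), key pair name and bid price are all hypothetical placeholders.

```python
# Minimal sketch: bid for spare EC2 capacity ("spot instances") on which
# a QIIME/Spark batch pipeline could run. The AMI id, key pair and bid
# price are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.request_spot_instances(
    SpotPrice="0.10",        # maximum hourly bid in USD
    InstanceCount=16,        # a small worker fleet
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical image with the pipeline installed
        "InstanceType": "c4.2xlarge",
        "KeyName": "microbiome-batch",
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```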

Next Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousands of puzzles into one pile and then having to reconstruct them all – and count the number of each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis to end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
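Back-of-envelope sums, using the ~6,000 CPU-hour estimate above and an assumed spot fleet (instance size and hourly price are illustrative only), suggest the raw compute itself is not the hard part:

```python
# Back-of-envelope sums for the batch analysis described above. The
# 6,000 CPU-hour figure comes from the text; the fleet size, vCPU count
# and spot price are assumptions for illustration.
cpu_hours_needed = 6_000
vcpus_per_instance = 8        # e.g. a c4.2xlarge-class machine
instances = 16
assumed_spot_price = 0.10     # USD per instance-hour (assumption)

wall_clock_hours = cpu_hours_needed / (vcpus_per_instance * instances)
cost = wall_clock_hours * instances * assumed_spot_price

print(f"~{wall_clock_hours:.0f} hours wall clock across {instances} instances")
print(f"~${cost:.0f} of spot capacity at ${assumed_spot_price}/instance-hour")
```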

Nanopore sequencing is another route, which has much longer read lengths but is much more error prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits its utility for analysing bacteria samples in our use case. It is much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence for the time being, we’re reliant on trying to make an industrial scale, lab based batch process turn around data as fast as we are able – while having a network accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is working out how to fund the build of the service, to map its likely cost profile as technology/process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing shortcuts. Dogs have been trained to be amazingly accurate at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket sized spectrometer can assess the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we get to only eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is the common terminology between MicroBiome conditions and terms I once heard used in Chinese Herbal Medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago): the concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions), then the consumption or application of specific plant matter that puts the body’s bacterial population back to its operating norms.

Lingzhi Mushroom

Wild mushroom “Lingzhi” in China: cultivated in the far east, found to reduce Obesity

We’ve started to discover that some of the plants and herbs used in Chinese Medicine do have effects on your bacterial population that relate to the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

The Moving Target that is Enterprise IT infrastructures

Docker Logo

There has been a flurry of recent Open Source Enterprise announcements, one relating to Docker – which allows Linux containers, containing all their needed components, to be built, distributed and then run atop Linux based servers. With this came the inference that Virtualisation was likely to get relegated to legacy application loads. Docker appears to have support right across the board – at least for Linux workloads – covering all the major public cloud vendors. I’m still unsure where that leaves the other niche that is Windows apps.

The next announcement was that of Apache Mesos, which is software originally built by ex-Google Twitter engineers – largely to replicate the Google Borg software used to fire up multi-server workloads across Google’s internal infrastructure. This is used to good effect to manage Twitter’s internal infrastructure and to consign their “Fail Whale” to much rarer appearances. At the same time, Google open sourced a version of their software – I’ve not yet made out if it’s derived from the 10+ year old Borg or the more recent Omega project – to do likewise, albeit at smaller scale than Google achieve in-house. The one thing that bugs me is that I can never remember its name (I’m off trying to find a reference to it again – and now I return 15 minutes later!).

“Google announced Kubernetes, a lean yet powerful open-source container manager that deploys containers into a fleet of machines, provides health management and replication capabilities, and makes it easy for containers to connect to one another and the outside world. (For the curious, Kubernetes (koo-ber-nay’-tace) is Greek for “helmsman” of a ship)”.

That took some finding. Koo-ber-nay-tace. Not exactly memorable.

However, it looks like it’ll be a while before these packaging, deployment and associated management technologies get ingrained in Enterprise IT workloads. A lot of legacy systems out there are simply not architected to run on scale-out infrastructures yet, and it’s a source of wonder what the major Enterprise software vendors are running in their own labs. If indeed they have an appetite to disrupt themselves before others attempt to.

I still cringe with how one ERP system I used to use had the cost collection mechanisms running as a background batch process, and the margins of the running business went all over the place like a skidding car as orders were loaded. Particularly at end of quarter customer spend spikes, where the complexity of relational table joins had a replicated mirror copy of the transaction system consistently running 20-25 minutes behind the live system. I should probably cringe even more given there’s no obvious attempt by startups to fundamentally redesign an ERP system from the ground up using modern techniques. At least yet.

Startups appear to be much more heavily focussed on much lighter mobile based applications – of which there are a million different bets chasing VC money. Moving Enterprise IT workloads into much more cost effective (but loosely coupled) public cloud based infrastructure – and that take full advantage of its economics – is likely to take a little longer. I sometimes agonise over what change(s) would precipitate that transition – and whether that’s a monolith app, or a network of simple ones daisy chained together.

I think we need a 2014 networked version of Silicon Office or Hypercard to trigger some progress. Certainly their abject simplicity is no more, and we’re consigned to the lower level, piecemeal building bricks – like JavaScript – which is what life was like in assembler before high level languages liberated us. Some way to go.

What if Quality Journalism isn’t?

Read all about it

Carrying on with the same theme as yesterday’s post – the fact that content is becoming disaggregated from a web site’s home page – I read an excellent blog post today: What if Quality Journalism isn’t? In it, the author looks at the seemingly divergent claims from the New York Times, who claim:

  • They are “winning” at Journalism
  • Readership is falling, both on web and mobile platforms
  • Therefore they need to pursue strategies to grow their audience

The author asks “If its product is ‘the world’s best journalism’, why does it have a problem growing its audience?”. You can’t be the world’s best and fail at the same time. Indeed. He then goes into a deeper analysis.

I like the analogue of the supermarket of intent (Amazon) versus the supermarket of interest (social) versus Niche. The central issue is how to curate articles of interest to a specific subscriber, without filling their delivery with superfluous (to the reader) content. This is where newspapers (in the author’s case) typically contain 70% or more content that is wasted on a typical specific reader.

One comment under the article suggests one approach: an open source aggregation model for the municipal bond market on Twitter via #muniland – journos from 20+ pubs, think tanks, governments, law firms and market commentators hash their stories and all share.

Deep linking to useful, pertinent and interesting content is probably a big potential area if alternative approaches can crack it. Until then, I’m having to rely on RSS feeds of known authors I respect, or from common watering holes, or on the occasional flash of brilliance that crosses my Twitter stream at times I’m watching it.

Just need to update Aaron Swartz’s code to spot water-cooler conversations on Twitter among specific people or sources I respect. That would probably do most of the leg work to enlighten me more productively, and without subjecting myself to pages of search engine discovery.

Explaining Distributed Data Consistency to IT novices? Well, …

Greek Shepherd

It’s all Greek to me. Bruce Stidston cited a post on Google+ where Yonatan Zunger, Chief Architect of Google+, tried to explain Data Consistency by way of Greeks enacting laws onto statute books on disparate islands. Very long post here. It highlights the challenges of maintaining data consistency when pieces of your data are distributed over many locations, and the logistics of trying to keep them all in sync – in a way that should be understandable to the lay – albeit patient – reader.

The treatise missed out the concept of two-phase commit, which is a way of doing handshakes between two identical copies of a database to ensure a transaction gets played successfully on both the master and the replica sited elsewhere on a network. So, if you get some sort of failure mid-transaction, both sides get returned to a consistent state without anything going down the cracks. Important if that data is, for example, monetary balance transfers between bank accounts.
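A toy illustration of the idea – not any particular database’s implementation – where a coordinator only commits if every copy votes yes in the prepare phase:

```python
# Toy two-phase commit: the coordinator asks every replica to "prepare"
# (vote), and only if all vote yes does it tell them to commit; any no
# vote aborts both sides, so neither copy is left half-updated.
# Real databases add write-ahead logging, timeouts and recovery.
class Replica:
    def __init__(self, name):
        self.name, self.staged, self.committed = name, None, []

    def prepare(self, txn):
        self.staged = txn      # stage the change without applying it
        return True            # vote yes (a real replica might vote no)

    def commit(self):
        self.committed.append(self.staged)
        self.staged = None

    def abort(self):
        self.staged = None

def two_phase_commit(replicas, txn):
    votes = [r.prepare(txn) for r in replicas]   # phase 1: prepare/vote
    if all(votes):
        for r in replicas:                       # phase 2: commit everywhere
            r.commit()
        return "committed"
    for r in replicas:
        r.abort()
    return "aborted"

master, mirror = Replica("master"), Replica("replica")
print(two_phase_commit([master, mirror], {"from": "A", "to": "B", "amount": 100}))
```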

The thing that impressed me most – and which I’d largely taken for granted – is how MongoDB (the most popular Open Source NoSQL database in the world) can handle virtually all the use cases cited in the article out of the box, with no add-ons. You can specify “happy go lucky”, majority or all replicas consistent before confirming write completion (sketched below). And if a definitive “Tyrant” fails, there’s an automatic vote among the surviving instances for which secondary copy becomes the new primary (and on rejoining, the changes are journaled back to consistency). And those instances can be distributed in different locations on the internet.
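For reference, those consistency dials map onto driver options along these lines (pymongo again, against a hypothetical three-member replica set):

```python
# Sketch of MongoDB's consistency dials (pymongo), against a hypothetical
# three-member replica set: w=1 is the "happy go lucky" setting,
# w="majority" waits for most members, w=3 waits for every member here.
from pymongo import MongoClient, WriteConcern

client = MongoClient(
    "mongodb://db1.example.net,db2.example.net,db3.example.net/?replicaSet=rs0"
)
db = client["ledger"]

fire_and_forget = db.get_collection("log",      write_concern=WriteConcern(w=1))
majority        = db.get_collection("balances", write_concern=WriteConcern(w="majority"))
all_members     = db.get_collection("audit",    write_concern=WriteConcern(w=3))

majority.insert_one({"account": "A-123", "delta": -100})
```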

Bruce contended that Google may not like its blocking mechanics (which will slow down access while data is written) for retaining consistency on its own search database. However, I think Google will be very read heavy, and it won’t usually be a disaster if changes are journaled into new Google search results a little after the fact. There is no money to go between the cracks in their case; any changes just appear the next time you run the same search – one very big moving target.

Ensuring money doesn’t go down the cracks is what Blockchains design out (majority votes, with further update attempts declined once that’s achieved). That’s why it can take up to 10 minutes for a Bitcoin transaction to get verified. I wrote introductory pieces about Bitcoin and potential Blockchain applications some time back if those are of interest.
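A stripped-down illustration of the chaining part of that design (the majority-vote consensus machinery is deliberately left out):

```python
# Stripped-down illustration of why rewriting history gets declined:
# each block commits to the hash of the previous one, so changing an old
# transaction breaks every later link. Consensus/proof-of-work omitted.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain, prev = [], "0" * 64
for txn in [{"pay": "Alice", "amount": 5}, {"pay": "Bob", "amount": 7}]:
    block = {"prev": prev, "txn": txn}
    chain.append(block)
    prev = block_hash(block)

def valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

print(valid(chain))               # True
chain[0]["txn"]["amount"] = 500   # attempt to rewrite an old transaction
print(valid(chain))               # False - the later links no longer match
```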

So, I’m sure there must be a more pithy summary someone could draw, but it would add blockchains to the discussion, and probably relate some of the artistry behind hashes and Git/Github to managing large, multiuser, multi-location code, data and writing projects. However, that’s for the IT guys. They should know this stuff, and know what to apply in any given business context.

Footnote: I’ve related MongoDB as that is the one NoSQL database I have accreditations in, having completed two excellent online courses with them (while I’m typically a senior manager, I like to dip into new technologies to understand their capabilities – and to act as a bullshit repellent!). Details of said courses here. The same functionality may well be available with other NoSQL databases.

Starting with the end in mind: IT Management Heat vs Light

A very good place to start

One source of constant bemusement to me is the habit of intelligent people to pee in the industry market research bathwater, and then to pay handsomely to drink a hybrid mix of the result collected across their peers.

This was perhaps betrayed by an early experience of one research company coming in to present to the management of the vendor I was working at, where in the rehearsal we found their conjecture that sales of specific machine sizes had badly dipped in the preceding quarter. Except they hadn’t; we’d had the biggest growth in sales of the highlighted machines in our history in that timeframe. When I mentioned my concern, the appropriate slides were corrected in short order, and no doubt the receiving audience was impressed with the skill of an analysis that built a forecast starting with an amazingly accurate, perceptive (and otherwise publicly unreported) recent history.

I’ve been doubly nervous ever since – always relating back to the old “Deep Throat” hint given in “All the President’s Men”: in every case, “follow the money”.

Earlier today, I was having some banter on one of the boards of “The Motley Fool” which referenced the ways certain institutions were imposing measures on staff – well away from any useful business purpose that positively supported better results for their customers. Well, except for providing sound bites to politicians. I can sense that in Education, in some elements of Health provision, and rather fundamentally in the Police service. I even did a drains-up some time ago that reflected on the way UK Police are measured, and tried to trace the rationale back to source – which was a senior politician imploring them to reduce crime; blog post here. The subtlety of this was rather lost; the only control placed in their hands was that of compiling the associated statistics, and of aligning their behaviours on the ground to support that data collection, rather than going back to core principles of why they were there, and what their customers wanted of them.

Jeff Bezos (CEO of Amazon) has the right idea; everything they do aligns with the ultimate end customer, and everything else works back from there. Competition is something to be conscious of, but only to the extent of understanding how you can serve your own customers better. Something that’s also the central model that W. Edwards Deming used to help transform Japanese Industry, and in being disciplined to methodically improve “the system” without unnecessary distractions. Distractions which are extremely apparent to anyone who’s been subjected to his “Red Beads” experiment. But the central task is always “To start with the end in mind”.

With that, I saw a post by Simon Wardley today where Gartner released the results of a survey on “Top 10 Challenges for I&O Leaders”, which I guess is some analogue of what used to be referred to as “CIOs”. Most of it felt to me like a herd mentality – and divorced from the sort of issues I’d have expected to be present. In fact, a complete reenactment of the sort of dialogue Simon had mentioned before.

Simon then cited the first 5 things he thought they should be focussed on (around Corrective Action), leaving the remaining “Positive Action” points to be mapped based on what appeared upon that foundation – on the assumption that those actions would likely be unique to each organisation performing the initial framing exercise.

Simon’s excellent blog post is: My list vs Gartner, shortly followed by On Capabilities. I think it’s a great read. My only regret is that, while I understand his model (I think!), I’ve not had to work on the final piece between his final strategic map (for any business I’m active in) and articulating a pithy, prioritised list of actions based on the diagram created. And I wish he’d get the bandwidth to turn his Wardley Maps into a book.

Until then, I recommend his Bits & Pieces Blog; it’s a quality read that deserves good prominence on every IT Manager’s (and IT vendor’s!) RSS feed.

Sometimes a picture is “How on earth did you do that”?

IBM3270ALLIN1

People often remember a startling or surprising first impression. Riverdance when they first appeared during the voting interval at Eurovision 1994. 16-year old Everton substitute Wayne Rooney being put on the pitch against a long-unbeaten Arsenal side, and scoring. A young David Beckham doing likewise against Wimbledon from the halfway line. Or Doug Flutie, quarterback for Boston College, throwing the winning touchdown against Miami from an incredible distance with no time left on the clock. There is even a road near Boston called “Flutie Pass”, named in memory of that sensational Hail Mary throw.

There are always lots of pressures on IT Managers and their staff, with tightening budgets, constrained resources and a precious shortage of time. We used to have a task to try and minimise the friction these folks had in buying Enterprise IT products and services from us or our reseller channels. A salesperson or vendor was normally the last person they wanted to have a dependency on for basic, routine “stuff”, especially for items they should be able to work out for themselves. At least if given the right information in lucid form, concise and free of surprises – immediately available at their fingertips.

The picture was one of the ones we put in the DECdirect Software Catalogue. It shows an IBM 3278 terminal, hooked up to an IBM Mainframe, with Digital’s VAX based ALL-IN-1 Office Automation Suite running on it. At the time, this was a startling revelation; the usual method for joining an IBM system to a DEC one at the time was to make the DEC machine look like a remotely connected IBM 2780 card reader. The two double page spreads following that picture showed how to piece this, and other forms of connections to IBM mainframes, together.

The DECdirect Software catalogue had an aim of being able to spit out all the configuration rules, needed part numbers and matching purchase prices with a minimal, simple and concise read. Our target for our channel salesforce(s) was to enable them to extract a correct part number and price for any of our 550 products – across between 20-48 different pricing tiers each – within their normal attention span. Which we assumed was 30 seconds. Given appropriate focus, Predictability, Consistency and the removal of potential surprises can be designed in.

In the event, that business (for which I was the first employee, working alongside 8 shared telesellers and 2 tech support staff) went from 0 to $100m in 18 months, with over 90% of the order volume coming in directly from customers, correctly priced at source. That got me a 2-level promotion, and I ended up running the UK Software Products Business – 16 staff and the country software P&L – as a result.

One of my colleagues in DEC Finland did a similar document for hardware options, entitled “Golden Eggs“. Everything in one place, with all the connections on the back of each system nicely documented, and any constraints right in front of you. A work of great beauty, and still maintained to this day for a wide range of other systems and options. The nearest I’ve seen more recently are the sample architecture diagrams published by Amazon Web Services – though the basics for IT Managers seeing AWS (or other public cloud vendors’ offerings) for the first time are not yet apparent to me.

Things in the Enterprise IT world are still unnecessarily complicated, and the ability to stand in the end user’s shoes for a limited time bears real fruit. I’ve repeated that in several places before and since then with pretty spectacular results; it’s typically only a handful of things you need to do well in order to liberate end users, and to make resellers and other supply channels insanely productive. All focus is then directed at keeping customers happy and their objectives delivered on time, and more often than not, under budget.

One of my friends (who works at senior level in Central Government) lamented to me today that “The (traditional vendor) big players are all trying to convince the world of their cloudy goodness, unfortunately using their existing big contract corporate teams who could not sell life to a dying man”.

I’m sure some of the Public Cloud vendors would be more than capable of arming people like him appropriately. I’d love to help a market leading one do it.

Footnote: I did a previous post on what Vendors, Distributors and Resellers want here.

Officially Certified: AWS Business Professional

AWS Business Professional Certification

That’s added another badge, albeit the primary reason was to understand AWS’s products and services in order to suss how to build volumes via resellers for them – just in case I get the opportunity to be asked how I’d do it. However, looking over the fence at some of the technical accreditation exams, I appear to know around half of the answers there already – but need to do those properly and take notes before attempting them.

(One of my old party tricks used to be that I could make it past the entrance exam required for entry into technical streams at Linux related conferences – a rare thing for a senior manager running large Software Business Operations or Product Marketing teams. Being an ex programmer who occasionally fiddles under the bonnet on modern development tools is a useful thing – not least to feed an ability to be able to spot bullshit from quite a distance).

The only AWS module I had any difficulty with was the pricing. One of the things most managers value is simplicity and predictability, but a lot of the pricing of core services has dependencies where you need to know data sizes, I/O rates or the way your demand goes through peaks and troughs in order to arrive at an approximate monthly price. While most of the case studies amply demonstrate that you do make significant savings compared to running workloads on your own in-house infrastructure, I guess typical values for common use cases may be useful. For example, if I’m running a SAP installation of specific data and access dimensions, what are the typical operational running costs – without needing to insert probes all over a running example to estimate them using the provided calculator?

I’d come back from a 7am gym session fairly tired and made the mistake of stepping through the pricing slides without making copious notes. I duly did all that module again and did things properly the next time around – and passed it to complete my certification.

The lego bricks you snap together to design an application infrastructure are simple in principle, loosely connected, and what Amazon have built is very impressive. The only thing not provided out of the box is the sort of simple developer bundle of an EC2 instance, some S3, a MySQL database on EBS, plus some open source AMIs preconfigured to run WordPress, Joomla, Node.js, LAMP or similar – with a simple weekly automatic backup. That’s what Digital Ocean provide for a virtual machine instance, with specific storage and high Internet Transfer Out limits for a fixed price/month. In the case of the WordPress network on which my customers and this blog run, that’s a 2-CPU server instance, 40GB of disk space and 4TB/month data traffic for $20/month all in. That sort of simplicity is why many startup developers have done an exit stage left from Rackspace and their ilk, and moved to Digital Ocean in their thousands; it’s predictable and good enough as an experimental sandpit.

The ceiling at AWS is much higher when the application slips into production – which is probably reason enough to put the development work there in the first place.

I have deployed an Amazon Workspace to complete my 12 years of Nutrition Data Analytics work using the Windows-only Tableau Desktop Professional – in an environment where I have no Windows PCs available to me. I’ve just used it on my MacBook Air and on my iPad Mini to good effect. That will cost me just north of £21 ($35) for the month.

I think there’s a lot that can be done to accelerate adoption rates of AWS services in Enterprise IT shops, both in terms of direct engagement and with channels to market properly engaged. My real challenge is getting air time with anyone to show them how – and in the interim, getting some examples ready in case I can make it in to do so.

That said, I recommend the AWS training to anyone. There is some training made available the other side of applying to be a member of the Amazon Partner Network, but there are equally some great technical courses that anyone can take online. See http://aws.amazon.com/training/ for further details.

SaaS Valuations, and the death of Rubber Price Books and Golf Courses

Software Services Road Signs

Questions appear to be being asked in VC circles about sky-high Software-as-a-Service company valuations – including one suggestion I’ve seen that valuations should be based on customer acquisition cost (something I think is insane – acquisition costs are far higher than I’d ever feel comfortable with at the moment). One lead article is this one from Andreessen Horowitz (A16Z) – which followed similar content to that presented on their podcast last week.

There are a couple of observations here. One is that they have the ‘normal’ Enterprise software business model misrepresented. If a new license costs $1,000, then subsequent years’ maintenance is typically in the range of 20-23% of the license cost; the average life of a licensed product is reckoned to be around 5 years. My own analogue for a business ticking along nicely was to have license revenue from new licenses and support revenue from maintenance (i.e. 20% of license cost, for 5 years) roughly balanced. Traditionally, all the profit is in the support revenue; most large scale enterprise software vendors, in my experience, assume that the license cost (less the first year’s maintenance revenue) represents the cost of sales. That’s why CA, IBM and Oracle salespeople drive around in nice cars.
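The sums behind that “roughly balanced” observation, using the figures above:

```python
# Worked sums for the traditional licence model described above:
# a $1,000 licence, maintenance at ~20% of licence cost per year,
# and an average product life of five years (figures from the text).
licence_price = 1_000
maintenance_rate = 0.20
product_life_years = 5

maintenance_total = licence_price * maintenance_rate * product_life_years
print(f"Licence revenue:     ${licence_price}")
print(f"Maintenance revenue: ${maintenance_total:.0f} over {product_life_years} years")
# The two streams come out level, and - as noted above - the profit
# traditionally sits almost entirely in the maintenance stream.
```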

You will also find vendors routinely increasing maintenance costs by the retail price index as well.

The other characteristic, for SaaS companies with a “money in this financial year” mindset, is how important it is to garner as many sales as is humanly possible at the start of a year; a sale made in month 1 will give you 12 months of income in the current financial year, whereas the same sale in month 12 will put only 1 month’s revenue in the current fiscal year. That said, you can normally see the benefits scheduled to arrive by looking at the deferred revenue on the balance sheet.
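The month-of-sale effect is easy to see with a toy 12-month subscription recognised monthly:

```python
# Toy illustration of the timing effect described above: a $1,200 annual
# subscription recognised monthly. The earlier in the fiscal year it is
# sold, the more lands in this year's revenue; the remainder shows up
# as deferred revenue.
annual_contract_value = 1_200
monthly = annual_contract_value / 12

for sale_month in (1, 6, 12):
    months_recognised = 12 - sale_month + 1
    recognised = monthly * months_recognised
    deferred = annual_contract_value - recognised
    print(f"sold in month {sale_month:2d}: ${recognised:5.0f} recognised this year, "
          f"${deferred:5.0f} deferred")
```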

Done correctly, the cost of sales of a SaaS vendor should be markedly less than that of a licensed software vendor. Largely due to an ability to run free trials (at virtually zero marginal cost) and to allow customers to design in a SaaS product as part of a feasibility study – and to provision immediately if it suits the business need. The same is true of open source software; you don’t pay until you need support turned on for a production application.

There is also a single minded focus on minimising churn. I know when I was running the Individual Customer Unit at Demon (responsible for all Consumer and SME connectivity sales), I donated £68,000 of the marketing budget one month to pay for software that measured the performance of the connectivity customers experienced – from their end. Hence statistics on all connectivity issues were fed back the next time a successful connection was made, and as an aggregate over several tens of thousands of customers, we could isolate and remove root causes – and hence improve the customer experience. There really is no point wasting marketing spend on a service that doesn’t do a great job for its existing users, long before you even consider chasing recruitment of new ones.

The cost per customer acquired was £30 each, or £20 nett of churn, for customers who were spending £120/year for our service.

The more interesting development is if someone can finally break the assumption that to sell Enterprise software, you need to waste so much on customer acquisition costs. That’s a rubber price book and golf course game, and I think the future trend to use of Public Cloud Services – when costs will go over a cliff and way down from today’s levels – will be its death. Instead, much greater focus on customer satisfaction at all times, which is really what it should have been all along.

Having been doing my AWS Accreditations today, I have plenty of ideas to simplify things out to fire up adoption in Enterprise clients. Big potential there.

Customer, Customer, Customer…

Jeff Bezos Quote

I’ve been internalising some of the Leadership principles that Amazon expect to see in every employee, as documented here. All of these explain a lot about Amazon’s worldview, but even the very first one is quite unique in the IT industry. It probably carries a lesson that most other IT vendors should be more fixated on than I usually experience.

In times when I looked after two Enterprise Systems vendors, it was a never ending source of amusement that no marketing plan would be considered complete without at least one quarterly “competitive attack” campaign. Typically, HP, IBM and Sun (as was, Oracle these days) would expect to fund at least one campaign that aimed squarely into the base of customers of the other vendors (and the reseller channels that served them), typically pushing superior speeds and feeds. Usually selling their own proprietary, margin rich servers and storage to their own base, while tossing low margin x86 servers running Linux to try and unseat proprietary products of the other vendors. I don’t recall a single one working, nor one customer that switched as a result of these efforts.

One thing that DEC used to do well was, when a member of staff from a competitor moved to a job inside the company, to make it a capital offence for anyone to try and seek inside knowledge from that person. The corporate edict was to rely on publicly available data only, and typically to sell on your own strengths. The final piece being to ensure you satisfied your existing customers before ever trying to chase new ones.

Microsoft’s “Scroogled” campaigns (run while Steve Ballmer was in charge) were a symptom of losing that focus. I met Bill Gates in 1983, and he was a walking encyclopedia of what worked well – and not so well – in competitive PC software products of the day. He could keep going for 20 minutes describing the decision making process of going for a two-button mouse for Windows, and the various traps other vendors had with one or three button equivalents. At the time, it followed through into Microsoft’s internal sales briefing material – which sold on their own strengths, and acknowledged competitors with a very balanced commentary. In time, things loosened up and tripping up competitors became a part of their playbook, something I felt a degree of sadness to see develop.

Amazon are much more specific. Start with the customer and work back from there.

Unfortunately, I still see server vendor announcements piling into technologies like “OpenStack” and “Software Defined Networking” where the word “differentiation” features heavily in the announcement text. This suggests to me that the focus is on competitive vendor positioning, trying to justify the margins required to sustain their current business model, and detached from a specific focus on how a customer’s needs (and their business environment) are likely to evolve into the future.

With that, I suspect organisations with a laser like focus on the end customer, and who realise which parts of the stack are commoditising (and follow that to its ultimate conclusion), are much more likely to be present when the cost to serve steps off the clifftop and heads down. The real battle is on the higher order entities running on the commodity base.

I saw an announcement from Arrow ECS in Computer Reseller News this week that suggested a downturn in their Datacentre Server and Storage Product orders in the previous quarter. I wonder if this is the first sign of the switching into gear of the inevitable downward pricing trend across the IT industry, and especially for its current brand systems and storage vendors.

IT Hardware Vendors clinging onto “Private” and “Hybrid” cloud strategies are, I suspect, making a final attempt to hold onto their existing business models and margins while the world migrates to commodity, public equivalents (see my previous post about “Enterprise IT and the Hall of Marbles“).

I also suspect that, given their relentless focus on end customer needs and working back from there, Amazon Web Services will still be the market leader as that new landscape unfolds. They certainly show little sign of slowing down.