Self improvement (or the continued delusions of Ian Waring, 2020 edition)

My Christmas holiday reading ranged from MindF*ck by Christopher Wylie to a Satya Nadella recommendation: Mindset by Dr Carol S. Dweck. Wylie’s book is A1, and all the lessons of December 12th (the latest UK General Election) are there. The antidotes are much more wide-ranging.

The latter was one of those books where I think the pieces I need to remember will fit on less than a sheet of A4.

Most of the book contrasts the “Fixed Mindset” with the “Growth Mindset”. Look under the surface, however, and the main points are:

  • Blame is for losers
  • Prima donnas (“anyone’s fault but mine”) are an example of this, and not good for team cohesion
  • People with a growth mindset appear to be relentlessly curious and ask questions to understand patterns, not just learn by rote memory
  • A repeat of the old Arnold Palmer golf adage: “the more I practice, the luckier I get”
  • Be humble
  • “A managers pick A employees, B managers pick C employees”.
  • It has also struck me in the past that the best managers don’t issue edicts; instead, they ask lots of questions – and trust the skills of their employees

I recall one 23-year-old VP at British Telecom; whenever confronted with a new service, his immediate question was “What is the business model?” – and he was straight under the surface to understand how things worked.

The other was a personal experience in May 1983, when none other than Bill Gates visited Digital Equipment (where I worked) to show us a new system Microsoft were building called “Windows”. There were 14 of us sitting around a conference table waiting for the senior folks to escape a board meeting – and Gates sitting there with a Compaq+ PC and a two-button mouse. Curiosity got the better of me, so I said: “Tell me. The Apple Lisa has a one-button mouse, and Visi-On (a windowing system being developed by the company behind VisiCalc) has three. I notice your mouse has two buttons. Why?”

He duly went off for a good few minutes describing how all the competitor products interacted with different third party applications, and the problems each resulted in. Very deep, really thoroughly thought through. The senior folks duly arrived, and frustrated him with a lack of commitment to using MS-DOS on our PCs, Windows or not.

I was told afterwards that he told the salesguy who brought him in: “There was only one guy in the room who knew what he was talking about. Hire him”. I interviewed with Scott Oki (VP International at Microsoft at the time) and David Fraser (UK MD), but elected not to take the role. Many years later, I took Paul Maritz (then CEO of VMware, previously VP of Windows at Microsoft) to see my CEO, Mike Norris of Computacenter; while waiting for our slot, I asked him where Scott Oki was these days. Answer: he owns several golf courses on the West Coast of the USA 🙂

The main thing I always reflect on is in situations when I’m interviewing job applicants. There’s always that moment at the end when the candidate is asked “Do you have any other questions for me/us?”. While there are a few attitude qualification questions – plus some evidence that they set their own performance standards – that final question is almost always the most important qualifier of all. Demonstrate humility and curiosity, and you’re the one I want to work with, and improve together.

A small(?) task of running up a Linux server to run a Django website

Django 1.11 on Ubuntu 16.04

I’m conscious that the IT world is moving in the direction of “Serverless” code, where business logic is loaded to a service and the infrastructure underneath is abstracted away. In that way, it can be woken from dormancy and scaled up and down automatically, in line with the size of the workload being put on it. Until then, I wanted (between interim work assignments) to set up a home project to implement a business idea I had some time back.

In doing this, I’ve had a tour around a number of bleeding-edge attempts. First as a single-page app written in JavaScript on Amazon AWS with Cognito and DynamoDB. Then onto Polymer Web Components, which I stopped after it looked like Apple were unlikely to support them in Safari on iOS in the short term. Then onto Firebase on Google Cloud, which was fine until I decided I needed a relational database for my app (I have experience with MongoDB from 2013, but NoSQL schemas aren’t the right fit for my app). And then to Django, which seems to be gaining popularity these days, not least as it’s based on Python and is designed for fast development of database-driven web apps.

I looked for the simplest way to run up a service on all the main cloud vendors. After half a day of research, I elected to try Django on Digital Ocean, where a “one click install” was available. This looked the simplest way to install Django on any of the major cloud vendors. It took 30 minutes end to end to run the instance up, ready to go; that was until I realised it was running an old version of Django (1.8), and used Python 2.7 – which is not supported by the (then) soon-to-be-released 2.0 version of Django. So, off I went trying to build everything from the ground up.

The main requirement was that I would be developing on my Mac, but running the production version in the cloud on a Linux instance – so I had to set up both. I elected to use PostgreSQL as the database, Nginx with Gunicorn as the web server stack, Let’s Encrypt (as recommended by the EFF) for certificates, and Django 1.11 – the latest version when I set off. The local development environment uses Microsoft Visual Studio Code alongside GitHub.

One of the nuances of Django is that users are normally expected to log in with a username different from their email address. I really wanted my app to use a person’s email address as their only login username, so I had to put customisations into the Django set-up to achieve that along the way.
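For anyone heading down the same path, a minimal sketch of the standard Django mechanism involved is below. The class, manager and app label names are mine for illustration (not lifted from my actual codebase), but AbstractBaseUser and USERNAME_FIELD are the documented extension points:

```python
# Minimal sketch of email-as-username in Django; names are illustrative.
from django.contrib.auth.models import (
    AbstractBaseUser, BaseUserManager, PermissionsMixin,
)
from django.db import models

class EmailUserManager(BaseUserManager):
    def create_user(self, email, password=None, **extra):
        if not email:
            raise ValueError("An email address is required")
        user = self.model(email=self.normalize_email(email), **extra)
        user.set_password(password)
        user.save(using=self._db)
        return user

class EmailUser(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(unique=True)       # the only login identifier
    is_active = models.BooleanField(default=True)
    is_staff = models.BooleanField(default=False)

    objects = EmailUserManager()
    USERNAME_FIELD = "email"   # log in with email; no separate username

# settings.py then points at it, e.g.:
# AUTH_USER_MODEL = "accounts.EmailUser"
```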

A further challenge is that, on other sites I run, the devices customers use are heavily weighted toward mobile phones, so I elected to follow Google’s Material user interface guidelines. The Django implementation is built on an excellent framework I’ve used in another project, as built by four Stanford graduates – MaterializeCSS – and supplemented by a lot of custom work on template tags, forms and layout directives by Mikhail Podgurskiy in a package called django-material (see: http://forms.viewflow.io/).

The mission was to get all the above running before I could start adding my own authentication and application code. The end result is an application that will work nicely on phones, tablets or PCs, resizing automatically as needed.

It turned out to be a major piece of work just getting the basic platform up and running, so I noted all the steps I took (as I went along) just in case this helps anyone (or the future me!) looking to do the same thing. If it would help you (it’s long), just email me at [email protected]. I’ve submitted it back to Digital Ocean, and am happy to share the step-by-step recipe.

Alternatively, hire me to do it for you!

WTF – Tim O’Reilly – Lightbulbs On!

What's the Future - Tim O'Reilly

Best read of the year, not just for high technology, but for a reasoned meaning behind political events over the last two years in both the UK and the USA. I can relate it straight back to some of the prescient statements made by Jeff Bezos about Amazon’s “Day 1” disciplines: the best defence against an organisation’s path to oblivion being:

  1. customer obsession
  2. a skeptical view of proxies
  3. the eager adoption of external trends, and
  4. high-velocity decision making

Things go off course when interests divide in a zero-sum way between different customer groups that you serve, and where proxies indicating “success” diverge from a clearly defined “desired outcome”.

The normal path is to start with your “customer” and build an analogue of what indicates “success” for them in what you do: a clear understanding of the desired outcome. Then come the measures to track progress toward that goal, the path you follow to get there (adjusting as you go), and a frequent review that the steps still serve the intended objective.

Fake news on social media, finance industry meltdowns, and unfettered slavery to “the market” and to “shareholder value” have all been central to recent political events in both the UK and the USA. Politicians of all colours were complicit in letting proxies for “success” strip a fair balance of wealth and future prospects away from the vast majority of the customers they were elected to serve. In the face of that, the electorate in the UK bit back – as they did for Trump in the US too.

Part 3 of the book, entitled “A World Ruled by Algorithms” – pages 153-252 – is brilliant writing on our current state and injustices. Part 4 (pages 255-350) entitled “It’s up to us” maps a path to brighter times for us and our descendants.

Tim says:

The barriers to fresh thinking are even higher in politics than in business. The Overton Window, a term introduced by Joseph P. Overton of the Mackinac Center for Public Policy, says that an idea’s political viability falls within a window framing a range of policies considered politically acceptable in the current climate of public opinion. There are ideas that a politician simply cannot recommend without being considered too extreme to gain or keep public office.

In the 2016 US presidential election, Donald Trump didn’t just push the Overton Window far to the right, he shattered it, making statement after statement that would have been disqualifying for any previous candidate. Fortunately, once the window has come unstuck, it is possible to move it in radically new directions.

He then says that when such things happen, as they did at the time of the Great Depression, the scene is set to do radical things to change course for the ultimate greater good. So, things may well get better the other side of Trump’s outrageous pandering to the excesses of the right, and indeed after we see the result of our electorate’s division over Brexit played out in the next 18 months.

One final thing that struck me was how one political “hot potato” issue involving Uber in Taiwan – with divided, extreme opinions split 50/50 – nevertheless got reconciled to everyone’s satisfaction in the end. This was done using a technique called Principal Component Analysis (PCA) and a piece of software called “Pol.is”, which allows folks to publish assertions, vote, and see how the filter bubbles evolve through many iterations over a four-week period. “I think Passenger Liability Insurance should be mandatory for riders on UberX private vehicles” (heavily split votes, 33% at both ends of the spectrum) evolved to 95% agreeing with “The Government should leverage this opportunity to challenge the taxi industry to improve their management and quality control system, so that drivers and riders would enjoy the same quality service as Uber”. The licensing authority in Taipei duly followed up for the citizens and all sides of that industry.
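For the curious, here is a toy sketch of the kind of dimensionality reduction involved; Pol.is works on a far larger assertions-by-voters matrix, and the vote data below is invented purely for illustration (assumes numpy and scikit-learn):

```python
# Toy sketch of reducing an assertions-by-voters matrix with PCA,
# as Pol.is does at far larger scale; the votes here are invented.
import numpy as np
from sklearn.decomposition import PCA

# rows = voters, columns = assertions; 1 = agree, -1 = disagree, 0 = pass
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  0, -1],
])

# project voters onto the two strongest opinion axes
coords = PCA(n_components=2).fit_transform(votes)
for voter, (x, y) in enumerate(coords):
    print(f"voter {voter}: ({x:+.2f}, {y:+.2f})")  # clusters betray the camps
```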

I wonder what the Brexit “demand on parliament” would have looked like if we’d followed that process, and if indeed any of our politicians could have encapsulated the benefits to us all on either side of that question. I suspect we’d have a much clearer picture than we do right now.

In summary, a superb book. Highly recommended.

Your DNA – a Self Testing 101

23andMe testing kit

Your DNA is a string of base pairs that encapsulate your “build” instructions, as inherited from your birth parents. While copies of it are packed tightly into every cell in (and being shed from) your body, it is of considerable size; a machine representation of it runs to some 2.6GB.

The total entity – the human genome – is a string of C-G and A-T base pairs. The exact “reference” structure, given the way in which strands are structured and subsections decoded, was first successfully concluded in 2003. Its accuracy has improved steadily as more DNA samples have been analysed down the years since.

A sequencing machine will typically read short lengths of DNA chopped up into pieces (in a random pile, like separate pieces of a jigsaw), and by comparison against a known reference genome, gradually piece together which bit fits where; there are known ‘start’ and ‘end’ segment patterns along the way. To add a bit of complexity, the chopped read may get scanned backwards, so it takes a lot of compute effort to piece a DNA sample back into what it would look like if we were able to read it uninterrupted from beginning to end.
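As a toy illustration of the matching step (nothing like a production aligner, which must also tolerate read errors and repeats), here is the core idea of placing a short read against a reference, checking the reverse complement for backwards scans; the sequences are invented:

```python
# Toy illustration (not a real aligner): place short reads on a reference
# by exact match, checking the reverse complement too.
reference = "ACGTTAGCCGATTACAGGT"

def revcomp(seq):
    """Reverse complement, for reads scanned backwards."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def place_read(read, ref=reference):
    for candidate in (read, revcomp(read)):
        pos = ref.find(candidate)
        if pos != -1:
            return pos, candidate   # where the read fits, and in which orientation
    return None

print(place_read("TAGCC"))   # forward match at position 4
print(place_read("GGCTA"))   # matches via its reverse complement
```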

At the time of writing (July 2017), we’re up to version 38 of the reference human genome. 23andMe currently use version 37 for their data to surface inherited medical traits. Most of the DNA sampling industry traces family history reliably using version 36, and hence most exports to common DNA databases automatically “downgrade” to that version for best consistency.

DNA Structure

DNA has 46 sections (known as chromosomes); 23 of them come from your birth father, 23 from your birth mother. While all humans have over 99% commonality, the less-than-1% difference makes every one of us (identical twins aside) statistically unique.

The cost to sample your own DNA – or that of a relative – these days is in the range of £79-£149. The primary one for inherited medical traits is 23andMe. The biggest volume for family tree use is AncestryDNA. That said, other vendors such as Family Tree DNA (FTDNA) and MyHeritage also offer low cost testing kits.

The AncestryDNA database has some 4 million DNA samples to match against, 23andMe over 1 million. The one annoyance is that you can’t export your own data from one of these two and then insert it into the other for matching purposes (neither has import capabilities). However, all the major vendors do allow exports, so you can upload your data from AncestryDNA or 23andMe into FTDNA, MyHeritage and the industry-leading cross-platform GEDmatch DNA database very simply.

Exports create a ZIP file. With FTDNA, MyHeritage and GEDmatch, you request an import, and these prompt for the name of that ZIP file itself; you have no need to break it open first at all.

On receipt of the testing kit, register the code on the provided sample bottle on the vendor’s website. Avoid eating or drinking for 30 minutes, spit into the provided tube up to the level mark, seal it, put it back in the box, seal that and pop it in a postbox. Results will follow in your account on their website in 2-4 weeks.

Family Tree matching

Once you receive your results, Ancestry and 23andMe will give you details of any suggested family matches on their own databases. The primary warning here is that matches will be against your birth mother and whoever made her pregnant; given the historical unavailability of effective birth control and the secrecy of adoption processes, this has been known to surface unexpected home truths. Relatives trace up and down the family tree from those two reference points. A quick gander at self-help forums on social media can be entertaining, or a litany of horror stories – alongside others of raw delight. Take care, and be ready for the unexpected:

My first social media experience was seeing someone confirm a doctor as her birth father. Her introductory note to him said that he may remember her Mum, as she used to be his nursing assistant.

Another was to a man who, once identified, admitted to knowing her birth mother in his earlier years – but said it couldn’t be him “as he’d never make love with someone that ugly”.

Outside of those, there are fairly frequent outright denials questioning the science behind DNA testing, none of which stand up to any factual scrutiny. But among the stories, there are also tales of delight on all sides when long lost, separated or adopted kids locate, and successfully reconnect with, one or both birth parents and their families.

Loading into other databases, such as GEDmatch

In order to escape the confines of vendor-specific DNA databases, you can export data from almost any of the common DNA databases and reload the resulting ZIP file into GEDmatch. Once imported, there’s quite a range of analysis tools sitting behind a fairly clunky user interface.

The key discovery tool is the “one to many” listing, which compares your DNA against everyone else’s in the GEDmatch database – and lists partial matches in order of closeness to your own data. It does this using a unit of measure called “centiMorgans”, abbreviated “cM”. Segments that show long exact matches are totted up, giving the total proportion of DNA you share. If you matched yourself or an identical twin, you’d match a total of circa 6800cM. Half your DNA comes from each birth parent, so they’d show as circa 3400cM. From your grandparents, half again. As your family tree extends both upwards and sideways (to uncles, aunts, cousins, their kids, etc.), the numbers dilute by half at each step; you’ll likely be in the thousands of potential matches 4 or 5 steps away from your own data:

If you want to surface birth parent, child, sibling, half sibling, uncle, aunt, niece, nephew, grandparent and grandchild relationships reliably, then only matches of greater than 1300cM are likely to have statistical significance. Anything lower than that is an increasingly difficult struggle to fill out a family tree, usually pursued by asking other family members to get their DNA tested; it is fairly common for GEDmatch to give you details (including email addresses) of your 1,000-2,000 closest matches, sorted in descending closeness order for you.
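Applying the halving rule above, a rough sketch of how shared cM totals band into likely relationships follows; real matching services use empirically derived ranges with considerable overlap, so treat these thresholds as illustrative only:

```python
# Rough sketch of the halving rule described above; real services use
# empirically derived, overlapping ranges, so these bands are illustrative.
def likely_relationship(shared_cm):
    if shared_cm > 5000:
        return "self or identical twin (~6800 cM)"
    if shared_cm > 2400:
        return "parent/child or full sibling (~3400 cM)"
    if shared_cm > 1300:
        return "grandparent, half sibling, uncle/aunt (~1700 cM)"
    return "more distant relative; tree research (and more tests) needed"

print(likely_relationship(3350))   # parent/child or full sibling
print(likely_relationship(850))    # too distant to call reliably
```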

As one example from GEDmatch, the highlighted line shows a match against one of the subject’s parents (their screen name and email address cropped off this picture):

GEDmatch parent

There are more advanced techniques using a chromosome browser to pinpoint whether a match comes down a male line or not (to help understand which side of the family tree a match is more likely to reside on), but these are currently outside my own knowledge (and current personal need).

Future – take care

One of the central tenets of the insurance industry is to scale societal costs equitably across a large base of folks who may, at random, have to take benefits from the funding pool – and specifically not to prejudice anyone whose DNA may give indications of inherited risks or pre-conditions that might otherwise jeopardise their inclusion in cost-effective health insurance or medical coverage.

Current UK law specifically makes it illegal for any commercial company or healthcare enterprise to solicit data, including DNA samples, where such provision may prejudice the financial cost, or service provision, to the owner of that data. Hence, please exercise due care with your DNA data, and with any entity that can associate that data with you as a uniquely identifiable individual. Wherever possible, only have that data stored in locations in which local laws, and the organisations holding your data, carry due weight or agreed safe harbour provisions.

Country/Federal Law Enforcement DNA records

The largest DNA databases in many countries are held, and administered, for police and criminal justice use: a combination of crime scene samples, DNA of known convicted individuals, and samples to help locate missing people. The big issue at the time of writing is that there’s no ability to volunteer a submission for matching against missing person or police-held samples, even though those data sets are fairly huge.

Access to such data assets is jealously guarded, and there is no current capability to volunteer your own readings for potential matches to be exposed to a case officer; intervention is at the discretion of the police, and they usually run their own custom sampling process and lab work. Personally, I think that’s a great shame, particularly for individuals searching for a missing relative and seeking to help enquiries should their data identify a match at some stage.

I’d personally gladly volunteer if there were appropriate safeguards to keep my physical identity well away from any third party organisation; the match would only be brought to the attention of a case officer, with any feedback to interested relatives left to their professional discretion.

I’d propose that any matches over 1300 cM (centiMorgans) get fed back to both parties where possible, or at least allow cases to get closed. That would surface birth parent, child, sibling, half sibling, uncle, aunt, niece, nephew, grandparent and grandchild relationships reliably.

At the moment, police typically won’t take volunteer samples unless a missing person is vulnerable – unfortunately, not yet for tracing purposes.

Come join in – £99 is all you need to start

Whether for knowledge of inherited medical traits, or to help round out your family trees, now is a cost-effective time to get involved. Ancestry currently add £20 postage to their £79 testing kit, hence £99 total. 23andMe do ancestry matching, ethnicity and medical analyses too for £149 or so all in. However, Superdrug are currently selling their remaining stock of 23andMe testing kits (bought when the US dollar rate was better than it now is) for £99. So – quick, before stock runs out!

Either will permit you to load the raw data, once analysed, onto FTDNA, MyHeritage and GEDmatch too.

Never a better time to join in.

The Next Explosion – the Eyes have it

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

One of the early lessons you pick up looking at product lifecycles is that some people hold out on buying any new technology product or service longer than anyone else. You make it past the techies, the visionaries, the early majority, the late majority and finally meet the laggards at the very right of the diagram (PDF version here). The normal way of selling at that end of the bell curve is to embed your product in something else; the person who swore they’d never buy a microprocessor unknowingly has one inside the controls of their microwave, or 50-100 ticking away in their car.

In 2016, Google started releasing access to its Vision API. They had been routinely using their own neural networks for several years; one typical application was taking the video footage from their Google Maps Street View cars and correlating house numbers from the footage onto GPS locations within each street. They even started to train their own models to pick out objects in photographs, and to be able to annotate a picture with a description of its contents – without any human interaction. They have also begun an effort to do likewise describing the story contained in hundreds of thousands of YouTube videos.

One example was to ask it to differentiate muffins and dogs:

This it does with aplomb, usually with much better-than-human performance. So, what’s next?
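As a flavour of what calling it looks like, here is a minimal sketch using the google-cloud-vision Python client; the file name is illustrative, credentials set-up is assumed, and the exact import path has varied between library versions:

```python
# Minimal sketch: ask the Google Vision API to label a photo.
# Assumes `pip install google-cloud-vision` and API credentials in place.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("muffin_or_dog.jpg", "rb") as f:  # illustrative file name
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")  # e.g. "Dog: 0.98"
```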

One notable time in natural history was the explosion in the number of species on earth that occurred in the Cambrian period, some 534 million years ago. This was the time when it appears life forms first developed useful eyes, which led to an arms race between predators and prey. Eyes everywhere, and brains very sensitive to signals that come that way; if something or someone looks like they’re staring at you, sirens in your consciousness will be at full volume.

Once a neural network is taught (you show it 1000s of images and tell it which contain what; it then works out a model to fit), the resulting learning can be loaded down into a small device. It usually then needs no further training or connection to a bigger computer or cloud service. It can just sit there and report back what it sees, when it sees it; the target of the message can be a person or a computer program anywhere else.

While Google have been doing the heavy lifting on building the learning models in the cloud, Apple have slipped in with their own Core ML model format, a sort of PDF for the results of machine learning, then using the Graphics Processing Units on their iPhone and iPad devices to run those models on the user’s device. They also have their ARKit libraries (as in “Augmented Reality”) to sense surfaces and boundaries live on the embedded camera – and to superimpose objects in the field of view.
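To give a feel for that workflow, here is a minimal sketch (assuming the coremltools Python package) of converting a toy scikit-learn model into Apple’s .mlmodel format for on-device use; the model, feature and file names are all illustrative:

```python
# Minimal sketch (assumes the coremltools package): convert a tiny
# scikit-learn model into Apple's .mlmodel format for on-device use.
from sklearn.linear_model import LinearRegression
import coremltools

# train a toy model: predict y from two features
X = [[0.0, 1.0], [1.0, 1.0], [2.0, 3.0], [3.0, 5.0]]
y = [1.0, 2.0, 5.0, 8.0]
model = LinearRegression().fit(X, y)

# convert and save; the resulting file can be dropped into an Xcode project
mlmodel = coremltools.converters.sklearn.convert(model, ["x1", "x2"], "y")
mlmodel.save("ToyRegressor.mlmodel")
```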

With iOS 11 coming in the autumn, any handwritten notes get automatically OCR’d, indexed and added to local search. When a document on your desk is photographed from an angle, it can automatically flatten it to look like a hi-res scan of the original – which you can then annotate. There are probably many features like this that will be in place by the time the new iPhone models arrive in September/October.

However, this is just the tip of the iceberg. When I drive out of the car park in the local shopping centre here, the barrier automatically raises, given the person with the ticket issued to my car number plate has already paid. And I guess we’re going to see a Cambrian explosion as inexpensive “eyes” get embedded in everything around us, in our service.

With that, here’s one example of what Amazon are experimenting with in their “Amazon Go” shop in Seattle. Every visitor a shoplifter: https://youtu.be/NrmMk1Myrxc

Lots more to follow.

PS: as a footnote, an example of drawing a ruler on a real object – just 3 weeks after ARKit got released. Next: personalised shoe and clothes measurements, and mail order supply to size: http://www.madewitharkit.com/post/162250399073/another-ar-measurement-app-demo-this-time

Danger, Will Robinson, Danger

One thing that bemused the hell out of me – as a software guy visiting prospective PC dealers in 1983 – was our account manager for the north of the UK. On arrival at a new prospective reseller, he would take out a tape measure and measure the distance between the nearest Director’s parking slot and their front door. He’d then repeat the exercise for the nearest Visitor’s parking spot and the front door. And then walk in for the meeting to discuss their application to resell our range of personal computers.

If the Director’s slot was closer to the door than the Visitor’s, the meeting was a very short one. The positioning betrayed senior management’s attitude to customers, which in countless cases I saw (eventually) translate into that company’s success, or otherwise. A brilliant and simple leading indicator.

One of the other red flags when companies become successful is their own HQ building becoming ostentatious. I always wonder if the leaders can manage to retain their focus on their customers at the same time as building these things. Like Apple in a magazine today:

Apple HQ

And then Salesforce, with the now tallest building in San Francisco:

Salesforce Tower

I do sincerely hope the focus on customers remains in place, and that none of the customers are upset with where each company is channelling its profits. I also remember a telco equipment salesperson turning up at his largest customer in his new Ferrari, and their reaction of disgust, which unhinged a long-term relationship; he should have left it at home and driven in using something more routine.

Modesty and frugality are usually a better leading indicator of delivering good value to folks buying from you. As are all the little things that demonstrate that the success of the customer is your primary motivation.

IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most reflect changes well past the industry move from CapEx-led investments to OpEx subscriptions of several years past, and indeed the wholesale growth in use of Open Source software across the industry over the last 10 years. Your own mileage, or that of your organisation, may vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (that runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per-software-instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that’s “where the experts to manage infrastructure and its security live” at scale.
  3. The war stage of infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, SoftLayer in IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source, NoSQL (key:value, document-orientated) databases, and components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network (a minimal read-scaling sketch follows this list). For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using containers as a delivery mechanism for scale-out infrastructure, plus management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers: Fog in the Ruby ecosystem, and Cloud Foundry (branded Bluemix at IBM), which is executing particularly well in large enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless” apps, typified by Amazon Lambda (see the minimal handler sketch after this list). There are actually two different entities in the mix: one to provide code and pay per invocation against external events, the other to scale (or contract) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and, to a lesser extent, AWS are packaging up API calls for various core services and machine learning facilities. E.g., I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal intranet-type apps, not ones exposed outside the organisation. This is also an antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components; to map user needs against axes of product lifecycle and value chains – and to suss the likely movement of components (which also tells you where to apply six sigma and where agile techniques within the same organisation). But it’s more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
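As flagged in point 4, here is a minimal sketch of spreading MongoDB reads across replicas, assuming the pymongo driver; the hostnames, replica set name, database and collection are all illustrative:

```python
# Minimal sketch (assumes pymongo): connect to a replicated MongoDB
# deployment and push reads onto secondaries to spread the load.
from pymongo import MongoClient, ReadPreference

# hostnames and replica set name here are illustrative
client = MongoClient(
    "mongodb://node1.example.com,node2.example.com,node3.example.com",
    replicaSet="rs0",
)

# reads prefer secondary replicas; writes still go to the primary
db = client.get_database(
    "myapp", read_preference=ReadPreference.SECONDARY_PREFERRED
)
for doc in db.orders.find({"status": "open"}):
    print(doc["_id"])
```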
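And as flagged in point 7, a minimal sketch of a pay-per-invocation AWS Lambda handler in Python; the event shape is illustrative, but the (event, context) entry-point signature is the standard one:

```python
# Minimal sketch of a "pay per invocation" AWS Lambda function in Python;
# Lambda invokes handler() once per event - no server to manage or scale.
import json

def handler(event, context):
    name = event.get("name", "world")  # illustrative event shape
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```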

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, get hunches of what may be afflicting her health, and this lead to a succession of “oh, that didn’t work – try this instead” visits over several weeks. I just wonder how much cost could be squeezed out of the process – and how many secondary conditions could be avoided – if the root causes were much easier to identify reliably. I then wonder if there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi-enabled, low-end Bluetooth sensor hubs such as the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake), every day since June 2002. A precursor to the future where devices can keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self-service health. My Apple Watch has a year’s worth of heart rate data. But what signals would be far more compelling for identifying the root causes of a wider variety of (lack of) health conditions if they were available?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence the complete genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide component – A/T or C/G combinations) runs to 3GB of base pairs. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases, hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. It’s notable that there are Amazon Web Services and Google employees participating in this effort.

However, I wonder if we’re missing a big and potentially just as important data asset: the profile of bacteria that every one of us is dependent on. We are each home to approx. 10 trillion human cells among the 100 trillion microbial cells in and on our own bodies; you are 90% not you.

While our human DNA is 99.9% identical to any person next to us, the profile of our MicroBiome is typically only 10% similar; our age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever-changing set of weather conditions that either influence our health, or are leading indicators of something being wrong, or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently from decisions upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays skinny; the composition of microbes in their gut is a significant factor.

Meanwhile, the costs of DNA sequencing and compute power have dropped to a level where analysis of our microbe ecosystems has gone from some $100M a decade ago to around $100 today. It should continue on that downward path to a level where regular personal sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed back discoveries to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on human DNA genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of fewer than 4000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct wholesale analyses – to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at the University of California, San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at King’s College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing and compute batch process.

In parallel, the VC-funded company uBiome provides sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other 4 area samples we’ve submitted), though they are currently not sharing the captured data, to the best of my knowledge. That said, their analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over- and under-represented compared to all samples they’ve received to date), but does not currently communicate any health-related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than that of 83% of folks they’ve sampled to date, which is an analogue for being healthier than most; those bacteria that are over-represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella (the green in the top of this picture; the x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population are displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this gives an indication of relative species population growth rates from otherwise static data.

I am not aware of anyone offering a faster turnaround service, nor one that can map several successive, time-gapped samples, let alone one that can convey the health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of the bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals in isolating indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome issue to be completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end-to-end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to enhance/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are happy as Larry in the nitrogen of the gut, but perish with exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “chime”) pattern matching code take advantage of cloud spot instances, including Hadoop or Spark, to speed the turnaround time from sequencing reads to the resulting species type:volume value pairs?
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?

New Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousand puzzles into one pile and then having to reconstruct them all – and count the number of each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis and end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
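To make that final step concrete, here is a toy sketch of collapsing already-classified 16S reads into type:quantity value pairs; real pipelines such as QIIME first classify millions of reads against reference databases, and the species names below are just examples:

```python
# Toy sketch of the final tallying step: collapse classified 16S reads
# into species:count value pairs. Real pipelines classify millions of
# reads against reference databases first; these names are examples.
from collections import Counter

# pretend each read has already been classified to a species name
classified_reads = [
    "Bacteroides fragilis", "Escherichia coli", "Bacteroides fragilis",
    "Lactobacillus acidophilus", "Bacteroides fragilis", "Escherichia coli",
]

counts = Counter(classified_reads)
total = sum(counts.values())
for species, n in counts.most_common():
    print(f"{species}: {n} reads ({100 * n / total:.0f}%)")
```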

Nanopore sequencing is another route, which has much longer read lengths but is much more error-prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits its utility for analysing bacteria samples in our use case. It is much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence, for the time being, we’re reliant on trying to make an industrial-scale, lab-based batch process turn around data as fast as we are able – while having a network-accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is working out how to fund the build of the service, to map its likely cost profile as technology and process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing short cuts. Dogs have been trained to be amazingly prescient at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket-sized spectrometer can assess the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we only get to eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is some of the common terminology between MicroBiome conditions and terms I’ve heard used in Chinese herbal medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago): the concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions), then consumption or application of specific plant matter that puts the body’s bacterial population back to operating norms.

Lingzhi Mushroom

Wild “Lingzhi” mushroom in China: cultivated in the Far East, found to reduce obesity

We’ve started to discover that some of the plants and herbs used in Chinese medicine do have symbiotic effects on your bacterial population for the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

Crossing the Chasm on One Page of A4 … and Wardley Maps

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

The core essence of most management books I read can be boiled down to a sheet of A4. There have also been a few big mistakes along the way, such as what were considered at the time to be seminal works, like Tom Peters’ “In Search of Excellence” – a work that, in retrospect, can be summarised as “even the most successful companies possess DNA that also breeds the seeds of their own destruction”.

I have much simpler business dynamics mapped out that I can explain to fast-track employees – and demonstrate – inside an hour; there are usually four graphs that, once drawn, will betray the dynamics (or points of failure) afflicting any business. A very useful lesson I learnt from Microsoft when I used to distribute their software. But I digress.

Among my many business books, I thought the insights in Geoffrey Moore’s book “Crossing the Chasm” were brilliant – and useful for helping grow some of the product businesses I’ve run. The only gotcha is that I found myself continually cross-referencing different parts of the book when trying to build a go-to-market plan for DEC Alpha AXP servers (my first use of his work) back in the mid-1990s – the time I worked for one of DEC’s distributors.

So, suitably bored while my wife was watching J.R. Ewing being mischievous in the first UK run of “Dallas” on TV, I sat on the living room floor and penned this one-page summary of the book’s major points. Just click it to download the PDF with my compliments. Or watch the author himself describe the model in under 14 minutes at an O’Reilly Strata Conference here. Or alternatively, go buy the latest edition of his book: Crossing the Chasm

My PA (when I ran Marketing Services at Demon Internet) redrew my hand-drawn sheet of A4 into the Microsoft Publisher document that produced the one-page PDF I’ve referred to ever since. If you want a copy of the source file, please let me know – drop a request to: [email protected].

That said, I’ve been far more inspired by the recent work of Simon Wardley. He effectively breaks a service into its individual components and positions each on a 2D map: the x-axis dictates the stage of the component’s evolution as it moves through a Chasm-style lifecycle, while the y-axis symbolises the value chain from raw materials to end-user experience. You then place all the individual components, and their linkages as part of an end-to-end service, on the result. Having seen the landscape in this map form, you can then assess how each component evolves/moves from custom build to commodity status over time. Even the newest components evolve from chaotic genesis (where standards are not defined and/or features are incomplete) to becoming well-understood utilities in time.

The result highlights which service components need agile, fast-iterating discovery and which are becoming industrialised, six-sigma commodities. And once you see your map, you can focus teams and their measures on the important changes needed without breeding any contradictory or conflict-ridden behaviours. You end up with a well-understood map and – once you overlay competitive offerings – can also assess the positions of other organisations that you may be competing with.

The only gotcha in all of this is that Simon hasn’t written the book yet. However, I notice he’s just provided a summary of his work on his Bits n Pieces blog yesterday. See: Wardley Maps – set of useful Posts. That will keep anyone out of mischief for a very long time, but the end result is a well-articulated, compelling strategy and the basis for a well-thought-out go-to-market plan.

In the meantime, the basics of what is and isn’t working, and sussing out the important things to focus on, are core skills I can bring to bear for any software, channel-based or internet-related business. I’m also technically literate enough to drag the supporting data out of IT systems for you where needed. Whether your business is an internet-based startup or an established B2C or B2B enterprise-focussed IT business, I’d be delighted to assist.

Apple Watch: My first 48 hours

To relate my first impressions of my Apple Watch (folks keep asking): I bought the stainless steel one with a Classic Black strap.

The experience in the Apple Store was a bit too focussed on changing the clock face design; the experience of using it – accepting the default face to start with, and using it for real – is (so far) much more pleasant. But take it off the charger, put it on, and you get:

Apple Watch PIN Challenge

Tap in your pin, then the watch face is there:

Apple Watch Clock Face

There’s actually a small (virtual) red/blue LED just above the “60” atop the clock – red if a notification has come in, turning into a blue padlock if you still need to enter your PIN, but otherwise what you see here. London Time, 9 degrees centigrade, 26th day of the current month, and my next calendar appointment underneath.

For notifications it feels are deserving of my attention, it not only lights the LED (which I only get to see if I flick my wrist up to look at the watch face), but also goes tap-tap-tap on my wrist. This can optionally sound a small warning too, but that’s something I switched off pretty early on. The taptic hint is nice, quiet and quite subtle.

Most of the set-up for apps and settings is done on the Apple iPhone you have paired with the watch. Apps reside on the phone, and those you already have that can talk to your watch are listed there. You can then select which ones you want to appear on the watch’s application screen, and a subset you want to have as “glances” for faster access. The structure looks something like this:

Screenshots: the notifications view and clock face, plus glances for heart rate, local weather, an Amazon stock quote and Dark Sky

Hence, swipe down from the top and you see the notification stream; swipe back up and you’re back to the clock face. Swipe up from the bottom and you get the last “glance” you looked at. In my case, I was every now and then seeing how my (long-term buy-and-hold) shares in Amazon were doing after they announced the size of their Web Services division. The currently selected glance stays in place for the next time I swipe up, unless I leave the screen having moved along that row.

If I swipe from left to right, or right to left, I step over different “glances”. These behave like swiping between icon screens on an iPhone or iPad; if you want more detail, you can tap one to invoke the matching application. I have around 12 of these in place at the moment. Once done, swipe back up, and you’re back to the clock face again. After around 6 seconds, the screen blacks out – until the next time you swing the watch face back into view, at which point it lights up again. Works well.

You’ll see it’s monitoring my heart rate, and measuring my movement. But in the meantime, if I want to call or message someone, I can hit the small button on the side and get a list of 12 commonly called friends:

Apple Watch Friends

Move the crown around, tap the picture, and I can call or iMessage them directly – text or voice clip. Yes, directly on the watch, even if my iPhone is upstairs or atop the cookery books in the kitchen; it has a microphone and a speaker, and works from anywhere over local WiFi. I can even see who is phoning me and take their calls on the watch.

If I need to message anyone else, I can press the crown button in and summon Siri; the accuracy of Siri is remarkable now. One of my sons sent an iMessage to me when I was sitting outside the Pharmacy in Boots, and I gave a full sentence reply (verbally) then told it to send – 100% accurately despite me largely whispering into the watch on my wrist. Must have looked strange.

There are applications on the watch, but these are probably a less used edge case; in my case, the view on my watch looks just like the layout I’ve set in the iPhone Watch app:

Apple Watch Applications

So, I can jump in to invoke apps that aren’t set as glances. My only surprise so far was finding that Facebook haven’t yet released their Watch or Messenger apps, though Instagram (which they also own) is there already. Anyhow – tap tap on my wrist to tell me Paula Radcliffe had just completed her last London Marathon:

BBC News: Paula Radcliffe

and a bit later:

Everton 3 Man Utd 0

Oh dear, what a shame, how sad (smirk – Aston Villa fan typing). But if there’s a flurry of notifications, and you just want to clear the lot off in one fell swoop, just hard press the screen and…

Clear All Notifications

Tap the X and zap, all gone.

There are a myriad of useful apps; I have Dark Sky (which gives you a hyper-local forecast of any impending rain), Citymapper (which helps direct you around London on all the different forms of transport available), Uber, and several others. They are there in the application icons, but also enabled from the Watch app on my phone (Apps, then the subset selected as Glances):

Ian’s Watch Apps / Ian’s Watch Glances

With that, tap tap on my wrist:

Apple Watch Stand Up!

Hmmm – I’ve been sitting for too long, so time to get off my arse. It will also assess my exercise in the day and give me some targets to achieve – which it’ll then display for later admiration. Or disgust.

There is more to come. I can already call an Uber taxi directly from the watch. The BBC News glance rotates the top few stories if selected. Folks in the USA can already use it to pay at any NFC terminal with a single click (if the watch comes off your wrist, it senses this and will insist on a PIN afterwards). Twitter gives notifications and has a glance that reports the top trending hashtag when viewed.

So far, the battery is only getting from 100% down to 30% in regular use from 6:00am in the morning until 11:30pm at night, so looking good. Boy, those Amazon shares are going up; that’ll pay for my watch many times over:

Watch on Arm

Overall, I’m impressed so far and very happy with it; I’m sure it’s the start of a world where software steps submerge into a world of simple notifications and responses to same. And I’m sure Jane (my wife) will want one soon. I just have to wean her off her desire for the £10,000+ gold one to match her gold-coloured MacBook.