Simple Mistakes – User Experience 101 failure

Things have been very busy at work recently, but I surfaced on Saturday to take my wife into Reading to collect goods she’d ordered from Boots earlier in the week. Just replenishing a range of items so she didn’t start running out from Monday. She’d been advised to pick everything up anytime after midday from their town centre store.

You’ve probably guessed it – no sign of the order. She was just pointed at the customer service phone number on her copy of the order sheet, and asked to call them. Which she duly did outside, only to find that hundreds of customers who’d paid for their orders that week with Mastercard were in the same position. So she asked if she could cancel the order and at least buy the same things in the store while she was in town. Answer: the operator wasn’t sure if she was even allowed to cancel, but would ask and email back. That email never arrived; the only one that did, a day later, confirmed her items had now been dispatched to the store – but still with no indication of an arrival date, and another 15 mile round trip needed.

I’m reminded of two things learnt from both Amazon and in producing strategy maps using the Wardley Mapping technique. The common thing when you’re involved in any product, project or business process design is that you start with the customer and optimise for their most delightful experience – then work back from there. And you only start trying to be unique if it’s directly visible to that user experience in some tangible way. Both facets together are still an incredibly important gap that I see folks miss all the time (I see that in projects at work, but that’s another story).

During the week, it was announced that Amazon had purchased PillPack – a small New England company – and the news sent the shares of all the big pharmacy chains in the USA tumbling (in market cap terms, around $13 billion in a day). So, what do they do? Simple:

If you have a regular prescription, they put all the tablets you’re supposed to consume in a time/date labelled packet. These are printed and filled in a roll, output in chronological order – then loaded into a dispenser you receive in the mail (overnight if needed urgently). Simple! And then there’s a set of services where they maintain your repeat needs with your doctor directly, so all the grunt work in ensuring you get your meds is done for you. They even allow you to set your holiday location if you’re away, and ship there if needed, to ensure you never have an unwanted gap.

Compare that to the run around most folks are exposed to with regular prescriptions and in understanding what to take and when. Instead you have a friendly, subscription based business serving your needs. (For some reason, Wall Street currently obsess about subscription based businesses – they value their stock not on Price to Earnings ratios but on Price to Sales Revenue multiples instead – and Amazon are in the thick of that too).

Personal experience here with Amazon (we’re Prime members) is that if there is any problem with an order and we ask to cancel, it just happens and the money is immediately reimbursed. You can see why all those retail pharmacy shares took a hit on Amazon’s PillPack buyout announcement; the end user experience is about to get radically better, and this is probably the first of a number of Amazon initiatives in the health industry that will follow a path of relentless, customer obsessed focus.

Amazon already have a joint venture with JPMorgan Chase and Berkshire Hathaway to work out how to provide cost effective health benefits to their combined employee populations – something they’ll no doubt open up outside the three companies in time. That’s when life for CVS, Walgreens, Target and so forth (plus Boots in the UK) will get very interesting. Bring it on!

IT Trends into 2018 – or the continued delusions of Ian Waring

William Tell the Penguin

I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they listed 13 items!). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three of them, you have a large base of employees in a country without a single payer healthcare system, mired in business model inefficiencies. Getting an operationally efficient pilot running at reasonable scale with internal users in the JV companies, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions on them, it’s difficult to envision a corrective action that Amazon aren’t already applying to themselves. This program is real fox in the hen house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring with wearable sensors is the icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those “great technology, what can we use this for?” situations. Conferencing – even voice – yes. Shared Kanban boards (Trello), shared Basecamp views, communal use of GitHub: all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; Six Sigma, doggedly industrialising a process, sits over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march to interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place, and only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user finds themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably one appearing to be of a similar age.

The one striking thing I’ve found was talking to a vendor who built a machine learning model that went through IT helpdesk tickets, instant message and email interaction histories, nominally to distill the natural language corpus into a prioritised list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was improved FAQ sheets. Cue Ian thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!

IoT (Internet of Things). The example provided tied together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories – and then cited security concerns, and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big number analyses from vendors, but little from an application perspective. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets and smart watches. Security devices are a smaller base; simple embedded devices smaller still. I’ll be more excited when sensors get meaningful vision capabilities (listed separately below). Until then, I’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel, or in flour in a mixing bowl, or when it’s the quickest way to enact a desired action – usually away from a keyboard and out of earshot of anyone else. It does liberate my two youngest grandchildren, who are still learning to read and write. Those apart, it’s just another UI used occasionally – albeit I’m still in awe of folks that dictate their book writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch; in the meantime, 3D printing. Not my area, but it feels to me like low volume local production of customised parts, and I’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to the point of use. My dentist 3D prints parts of teeth while patients wait, but otherwise I’ve not had any exposure that I could translate as a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that related the time in prehistoric times when the number of different life form species on earth vastly accelerated; this was the Cambrian Period, when life forms first developed eyes. A combination of cheap camera hardware components, and excellent machine learning Vision APIs, should be transformative. Especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.

In the Open Source software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io’s Broca NLU product, which can build a suggested intent:action service catalogue from natural language analysis of support tickets, CRM data, instant message and support email content.
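The core idea – mapping messy free text onto a small catalogue of intent:action pairs – can be sketched in a few lines. This toy version uses a hand-written keyword lookup purely for illustration; a product like Broca would learn its categories from the corpus itself, and the intents and keywords below are entirely made up:

```python
from collections import Counter

# Hypothetical intent keywords; a real NLU product would learn these
# from the ticket corpus rather than use a hand-written lookup.
INTENT_KEYWORDS = {
    "reset_password": ["password", "reset", "locked out"],
    "vpn_access": ["vpn", "remote access"],
    "printer_issue": ["printer", "toner", "paper jam"],
}

def classify_ticket(text):
    """Return the intent whose keywords best match the ticket text."""
    text = text.lower()
    scores = {
        intent: sum(1 for kw in kws if kw in text)
        for intent, kws in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def build_catalogue(tickets):
    """Count intents across a corpus - the frequent ones are FAQ candidates."""
    return Counter(classify_ticket(t) for t in tickets)

tickets = [
    "I forgot my password and am locked out",
    "Cannot connect to the VPN from home",
    "Password reset link never arrived",
]
print(build_catalogue(tickets))
```

Even this crude version shows why the vendor’s first win was better FAQ sheets: the intent frequencies fall straight out of the counting.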

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley was in a grave, he’d be spinning in it (but thank god he’s not).

Microservices are about making your organisation’s data and processes available to applications – internally facing, externally facing or both – via web interfaces. You typically work with Apigee (now owned by Google) or 3scale (owned by Red Hat) to produce a well documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort out licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
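As a minimal sketch of the idea – internal data exposed through a small, well defined web interface – here is a stdlib-only Python WSGI app. The product data and the /products endpoint are invented for illustration; a real deployment would sit behind an API gateway handling the documentation, discovery and security concerns mentioned above:

```python
import json

# Hypothetical internal data this microservice exposes; in reality this
# would sit in front of a database or an internal system of record.
PRODUCTS = {"sku-1": {"name": "Widget", "stock": 42}}

def app(environ, start_response):
    """A minimal WSGI application exposing /products/<sku> as JSON."""
    path = environ.get("PATH_INFO", "")
    if path.startswith("/products/"):
        item = PRODUCTS.get(path.rsplit("/", 1)[-1])
        if item is not None:
            start_response("200 OK", [("Content-Type", "application/json")])
            return [json.dumps(item).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']

# Serve with any WSGI server, e.g.:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```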

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, and the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently on both your own server hardware, and in different cloud vendors infrastructures.

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. Same with the “DevOps” teams that are currently employed in many organisations, to rapidly iterate and deploy custom code very regularly by mingling Developer and Operations staff.

With Serverless, the theory is that you write your code once, and it gets fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the database services used alongside them) are different enough that applications built on one are limited to that cloud provider only.
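The shape of such a function is simple enough to sketch. This follows the handler(event, context) convention AWS Lambda uses for Python; the event fields and exchange rates are made up for illustration:

```python
import json

def handler(event, context):
    """AWS Lambda-style entry point: the platform invokes this per request,
    scaling instances up and down with demand - no server to manage.
    The 'currency'/'amount' event fields and rates are illustrative."""
    amount = float(event.get("amount", 0))
    rate = {"GBP": 0.79, "EUR": 0.92}.get(event.get("currency", ""))
    if rate is None:
        return {"statusCode": 400, "body": json.dumps({"error": "unknown currency"})}
    return {"statusCode": 200, "body": json.dumps({"converted": round(amount * rate, 2)})}

# Locally you can just call it; in the cloud the trigger (HTTP gateway,
# queue, timer) supplies the event.
print(handler({"currency": "GBP", "amount": 120}, None))
```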

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.
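The tamper-evidence at the heart of it can be sketched in a few lines of Python. This toy ledger chains blocks by hash only – it deliberately omits the proof-of-work and distributed consensus that actually solve the Byzantine Generals problem, and the transaction fields are invented:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash -
    that link is what makes the chain tamper-evident."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev": prev, "transactions": transactions})
    chain.append(block)

def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the links."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"],
                                        "transactions": block["transactions"]}):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                           # True
chain[0]["transactions"][0]["amount"] = 500    # tamper with history
print(verify(chain))                           # False
```

In Bitcoin proper, rewriting history would additionally require redoing the proof-of-work faster than the rest of the network combined – that is where the “no single trusted entity” property comes from.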

Blockchain is quite a poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; there is even the use of virtual coin offerings (ICOs) as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and I’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. A LIDAR unit (the spinning laser sensor atop self driving cars) typically generates several GB of data per mile travelled – far more than could reliably be shipped to a remote cloud server in real time. So the models of how a car should drive itself are built in the cloud, but downloaded and executed in the car, without a high speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared later when adjacent to a fast home or work network.

Very fast chips are a thing; the CPU in my Apple Watch is faster than the room sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks have really hit it out of the park on every iteration they’ve shipped to date). Development environments for powerful embedded systems are something I’ve not seen much of so far, though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish attention on the brands they represent in very specific target audiences. Advertisers are the paying customers. Users are the Product. All the incentives and business models align to these characteristics.

Political operators, local as well as foreign, have fundamentally subverted the model. Controversial – and most often incorrect and/or salacious – stories get wide distribution before any truth emerges. Fake accounts and automated bots further pervert the engagement indicators that drive increased distribution (it’s noticeable that one video segment of one Donald Trump speech got two orders of magnitude more “likes” than the number of people who actually played the video at all). Above all, messages that appeal to different filter bubbles drive action in some cases, and antipathy in others, to directly undermine voting patterns.

This is probably the biggest challenge facing large social networks, at the same time as politicians (themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to make it so, and sideline these people and their poison. Work to do.

Does your WordPress website go over a cliff in July 2018?

Secure connections, faster web sites, better Google search rankings – and well before Google throw a switch that will disadvantage many other web sites in July 2018. I describe the process to achieve this for anyone running a WordPress Multisite Network below. Or I can do this for you.

Many web sites that handle financial transactions use a secure connection; this gives a level of guarantee that you are posting your personal or credit card details directly to a genuine company. But these “HTTPS” connections don’t just protect user data; they also ensure that the user is really connecting to the right site and not an imposter, which matters because setting up a fake version of a website users normally trust is a favourite tactic of hackers and malicious actors. HTTPS also ensures that a malicious third party can’t hijack the connection to insert malware or censor information.

Back in 2014, Google asked web site owners to make their sites use HTTPS connections all the time, and provided both a carrot and a stick as incentives. The stick: they promised that future versions of their Chrome browser would explicitly call out sites presenting insecure pages, so that users knew where to tread very carefully. The carrot: they suggested they would favour secure sites over insecure ones in future Google search rankings.

The final step in this process comes in July 2018:

New HTTP Treatment by Chrome from July 2018

The logistics of achieving “HTTPS” connections for many sites are far from straightforward. Like many service providers, I host a WordPress network that points individual customer domain names at a single Linux based server. That server looks to see which domain name the inbound connection request came from, and redirects onto that customer’s own subdirectory structure for the page content, formatting and images.

The main gotcha is that if I tell my server that its certified identity is “www.software-enabled.com”, an inbound request for “www.ianwaring.com” or “www.obesemanrowing.org.uk” will get very confused. It will look like someone has hijacked those sites, and the user’s browser session will throw up some very pointed warnings suggesting a malicious traffic subversion attempt.

A second gotcha – even if you solve the certified identity problem – is that a lot of the content of a typical web site contains HTTP (not HTTPS) links to other pages, pictures or video stored within the same site. It would normally be a considerable (and error prone) process to change http: to https: links on all pages, not least as the pages themselves for all the different customer sites are stored by WordPress inside a complex MySQL database.

What to do?

It took quite a bit of research, but I cracked it in the end. The process I used was:

  1. Set up each customer domain name on the free tier of the CloudFlare content delivery network. This replicates local copies of the web sites static pages in locations around the world, each closer to the user than the web site itself.
  2. Change the customer domain name’s Name Servers to the two cited by CloudFlare in step (1). It may take several hours for this change to propagate around the Internet, but no harm continuing these steps.
  3. Repeat (1) and (2) for each site on the hosted WordPress network.
  4. Select the WordPress “Network Admin” dashboard, and install two plug-ins; iControlWP’s “CloudFlare Flexible SSL”, and then WebAware’s “SSL Insecure Content Fixer”. The former handles the connections to the CloudFlare network (ensuring routing works without unexpected redirect loops); the latter changes http: to https: connections on the fly for references to content within each individual customer website. Network Enable both plugins. There is no need to install the separate CloudFlare WordPress plugin.
  5. Once CloudFlare’s web site shows each domain name verified as being managed by CloudFlare’s own name servers, with their own certificates assigned (each will show either a warning or a tick), step through the “Crypto” screen on each one in turn – switching on “Always use HTTPS” redirections.
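The on-the-fly rewriting that the “SSL Insecure Content Fixer” plugin performs in step 4 can be illustrated with a sketch (not the plugin’s actual code): rewrite http: references for domains we control ourselves, and leave external links alone, since we can’t guarantee third parties support HTTPS.

```python
import re

# Domains we serve ourselves; external links are left untouched.
OUR_DOMAINS = {"www.ianwaring.com", "www.software-enabled.com"}

def fix_insecure_links(html):
    """Rewrite http:// links to https:// for domains we control."""
    def repl(match):
        domain = match.group(1)
        if domain in OUR_DOMAINS:
            return "https://" + domain
        return match.group(0)  # external link: leave as-is
    return re.sub(r"http://([A-Za-z0-9.-]+)", repl, html)

page = '<img src="http://www.ianwaring.com/pic.png"> <a href="http://example.org/x">'
print(fix_insecure_links(page))
```

The real plugin does this against content as WordPress renders it, which is why nobody has to trawl through the MySQL database rewriting stored pages by hand.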

At this point, whether users access the websites using http: or https: (or don’t mention either), each will come up with a padlocked, secure, often greened address bar with “https:” in front of the web address of the site. Job done.

Once the HTTP redirects to HTTPS appear to be working, and all the content is being displayed correctly on pages, I go down the Crypto settings on the CloudFlare web site and enable “opportunistic encryption” and “HTTPS rewrites”.

In the knowledge that Google also rank faster sites above slow ones in search results, there is a “Speed” section on the CloudFlare web site too. On this, I’ve told it to compress CSS, JavaScript and HTML pages – termed “Auto Minify” – to minimise the amount of data transmitted to the user’s browser while still rendering correctly. This, in combination with a plug-in that adds Google’s AMP (Accelerated Mobile Pages) format – which in turn can give 3x load speed improvements on mobile phones – means all the customer sites are really flying.
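For a feel of what “Auto Minify” does, here is a deliberately crude sketch – real minifiers are far more careful (around &lt;pre&gt; blocks, inline JavaScript and so on), so this is just the principle:

```python
import re

def minify_html(html):
    """A crude sketch of minification: strip comments and collapse
    runs of whitespace. Real minifiers handle many more edge cases."""
    html = re.sub(r"<!--.*?-->", "", html, flags=re.S)   # drop HTML comments
    html = re.sub(r">\s+<", "><", html)                  # whitespace between tags
    return re.sub(r"\s{2,}", " ", html).strip()          # collapse remaining runs

page = """
<html>  <!-- a comment -->
  <body>
    <p>Hello   world</p>
  </body>
</html>
"""
print(minify_html(page))
```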

CloudFlare do have a paid offering called “Argo Smart Routing” that further speeds up delivery of web site content. Folks are reported to be paying $5/month and seeing page loads in 35% of the time they took before it was enabled. You do start paying for the amount of traffic you’re releasing into the Internet at large, but the pricing tiers are very generous – the cost should only become noticeable for high traffic web sites.

So, secure connections, faster web sites, better Google search rankings – and well before Google throw the switch that will disadvantage many other web sites in July 2018. I suspect having hundreds of machines serving the content on CloudFlare’s Content Delivery Network will also make the site more resilient to distributed denial of service flood attack attempts, if any site I hosted ever got very popular. But I digress.

If you would like me to do this for you on your WordPress site(s), please get in touch here.

A small(?) task of running up a Linux server to run a Django website

Django 1.11 on Ubuntu 16.04

I’m conscious that the IT world is moving in the direction of “Serverless” code, where business logic is loaded to a service and the infrastructure underneath abstracted away. In that way, it can be woken up from dormant and scaled up and down automatically, in line with the size of the workload being put on it. Until then, I wanted (between interim work assignments) to set up a home project to implement a business idea I had some time back.

In doing this, I’ve had a tour around a number of bleeding edge attempts. First as a single page app written in JavaScript on Amazon AWS with Cognito and DynamoDB. Then onto Polymer Web Components, which I stopped after it looked like Apple were unlikely to support it in Safari on iOS in the short term. Then onto Firebase on Google Cloud, which was fine until I decided I needed a relational database for my app (I have experience of MongoDB from 2013, but NoSQL schemas aren’t the right fit for this one). And then to Django, which seems to be gaining popularity these days, not least as it’s based on Python and is designed for fast development of database driven web apps.

I looked for the simplest way to run up a service on each of the main cloud vendors. After half a day of research, I elected to try Django on Digital Ocean, where a “one click install” was available – the simplest route on any of the major cloud vendors. It took 30 minutes end to end to run the instance up, ready to go; that was until I realised it was running an old version of Django (1.8) on Python 2.7 – and Python 2.7 is not supported by the (then) soon to be released Django 2.0. So, off I went, building everything from the ground up.

The main requirement was that I develop on my Mac, with the production version running on a Linux instance in the cloud – so I had to set up both. I elected to use PostgreSQL as the database, Nginx with Gunicorn as the web server stack, Let’s Encrypt (as recommended by the EFF) for certificates, and Django 1.11 – the latest version when I set off. The local development environment uses Microsoft Visual Studio Code alongside GitHub.

One of the nuances of Django is that users are normally expected to log in with a username that is different from their email address. I really wanted my app to use a person’s email address as their only login username, so I had to add customisations to the Django set-up to achieve that along the way.
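For anyone attempting the same, the shape of that customisation (using Django 1.11 era APIs) looks roughly like this – a custom user model keyed on email, wired in via AUTH_USER_MODEL. It’s a configuration sketch rather than a complete implementation; the app label and field choices are illustrative:

```python
# models.py - custom user keyed on email (sketch, Django 1.11 era APIs)
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager
from django.db import models

class UserManager(BaseUserManager):
    def create_user(self, email, password=None):
        user = self.model(email=self.normalize_email(email))
        user.set_password(password)
        user.save(using=self._db)
        return user

class User(AbstractBaseUser):
    email = models.EmailField(unique=True)
    is_active = models.BooleanField(default=True)

    objects = UserManager()
    USERNAME_FIELD = "email"   # log in with email; no separate username
    REQUIRED_FIELDS = []

# settings.py
# AUTH_USER_MODEL = "myapp.User"   # 'myapp' is a placeholder app label
```

The important part is doing this before the first database migration; retrofitting a custom user model onto an existing Django schema is much more painful.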

A further consideration is that, on other sites I run, the devices customers use are heavily weighted towards mobile phones, so I elected to follow Google’s Material Design user interface guidelines. The implementation is built on an excellent framework I’ve used in another project, built by four Stanford graduates – MaterializeCSS – and supplemented by a lot of custom work on template tags, forms and layout directives by Mikhail Podgurskiy in a package called django-material (see: http://forms.viewflow.io/).

The mission was to get all the above running before I could start adding my own authentication and application code. The end result is an application that will work nicely on phones, tablets or PCs, resizing automatically as needed.

It turned out to be a major piece of work just getting the basic platform up and running, so I noted all the steps I took (as I went along) just in case this helps anyone (or the future me!) looking to do the same thing. If it would help you (it’s long), just email me at [email protected]. I’ve submitted it back to Digital Ocean, but happy to share the step by step recipe.

Alternatively, hire me to do it for you!

WTF – Tim O’Reilly – Lightbulbs On!

What's the Future - Tim O'Reilly

Best read of the year, not just for high technology, but for a reasoned meaning behind political events over the last two years, both in the UK and the USA. I can relate it straight back to some of the prescient statements made by Jeff Bezos about Amazon “Day 1” disciplines: the best defence against an organisation’s path to oblivion being:

  1. customer obsession
  2. a skeptical view of proxies
  3. the eager adoption of external trends, and
  4. high-velocity decision making

Things go off course when interests divide in a zero-sum way between different customer groups that you serve, and where proxies indicating “success” diverge from a clearly defined “desired outcome”.

The normal path is to start with your “customer” and give an analogue of what indicates “success” for them in what you do; a clear understanding of the desired outcome. Then the measures to track progress toward that goal, the path you follow to get there (adjusting as you go), and a frequent review that steps still serve the intended objective. 

Fake News on Social Media, Finance Industry Meltdowns, unfettered slavery to “the market” and to “shareholder value” have all been central to recent political events in both the UK and the USA. Politicians of all colours were complicit in letting proxies for “success” dissociate fair balance of both wealth and future prospects from a vast majority of the customers they were elected to serve. In the face of that, the electorate in the UK bit back – as they did for Trump in the US too.

Part 3 of the book, entitled “A World Ruled by Algorithms” – pages 153-252 – is brilliant writing on our current state and injustices. Part 4 (pages 255-350) entitled “It’s up to us” maps a path to brighter times for us and our descendants.

Tim says:

The barriers to fresh thinking are even higher in politics than in business. The Overton Window, a term introduced by Joseph P. Overton of the Mackinac Center for Public Policy, says that an idea’s political viability falls within a window framing the range of policies considered politically acceptable in the current climate of public opinion. There are ideas that a politician simply cannot recommend without being considered too extreme to gain or keep public office.

In the 2016 US presidential election, Donald Trump didn’t just push the Overton Window far to the right, he shattered it, making statement after statement that would have been disqualifying for any previous candidate. Fortunately, once the window has come unstuck, it is possible to move it in radically new directions.

He then says that when such things happen, as they did at the time of the Great Depression, the scene is set to do radical things to change course for the ultimate greater good. So, things may well get better the other side of Trump’s outrageous pandering to the excesses of the right, and indeed after we see the result of our electorate’s division over BRexit played out in the next 18 months.

One final thing that struck me was how one political “hot potato” issue involving Uber in Taiwan attracted sharply divided and extreme opinions, split 50/50 – but nevertheless got reconciled to everyone’s satisfaction in the end. This used a technique called Principal Component Analysis (PCA) and a piece of software called “Pol.is”, which lets folks publish assertions, vote on them, and watch how the filter bubbles evolve through many iterations over a 4 week period. “I think Passenger Liability Insurance should be mandatory for riders on UberX private vehicles” (heavily split votes, 33% at both ends of the spectrum) evolved to 95% agreement with “The Government should leverage this opportunity to challenge the taxi industry to improve their management and quality control system, so that drivers and riders would enjoy the same quality service as Uber”. The licensing authority in Taipei duly followed up for the citizens and all sides of that industry.
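To illustrate the PCA step on Pol.is-style data: project each participant’s agree/disagree/pass votes onto the main axes of disagreement, so like-minded clusters become visible. The vote matrix below is invented, and this uses a plain SVD rather than whatever Pol.is actually runs:

```python
import numpy as np

# Hypothetical vote matrix: rows are participants, columns are statements,
# entries are agree(+1) / disagree(-1) / pass(0) - the shape of data a
# Pol.is-style consultation collects.
votes = np.array([
    [ 1,  1, -1],
    [ 1,  1, -1],
    [-1, -1,  1],
    [-1, -1,  1],
    [ 1, -1,  0],
], dtype=float)

# PCA via SVD of the mean-centred matrix: the first component is the
# axis of greatest disagreement, used to surface opinion clusters.
centred = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
coords = U[:, :2] * S[:2]          # each participant projected to 2-D
print(np.round(coords, 2))
```

Participants who vote identically land on the same point, and the two opposing camps end up on opposite sides of the first axis – which is exactly the visual Pol.is shows back to participants as the consultation iterates.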

I wonder what the BRexit “demand on parliament” would have looked like if we’d followed that process, and if indeed any of our politicians could have encapsulated the benefits to us all on either side of that question. I suspect we’d have a much clearer picture than we do right now.

In summary, a superb book. Highly recommended.

Your DNA – a Self Testing 101

23andMe testing kit

Your DNA is a long string of base pairs that encapsulates your “build” instructions, as inherited from your birth parents. Copies of it are packed tightly into every cell of your body (and into the cells your body sheds), and it is of considerable size; a machine representation of it runs to some 2.6GB – more than half the capacity of a single-layer DVD.

The total entity – the human genome – is a string of C-G and A-T base pairs. The exact “reference” structure, given the way in which strands are structured and subsections decoded, was first successfully concluded in 2003. Its accuracy has improved steadily as more DNA samples have been analysed down the years since.

A sequencing machine will typically read short lengths of DNA chopped up into pieces (in a random pile, like separate pieces of a jigsaw) and, by comparison against a known reference genome, gradually piece together which bit fits where; there are known ‘start’ and ‘end’ segment patterns along the way. To add a bit of complexity, a chopped read may get scanned backwards, so a lot of compute effort goes into piecing a DNA sample together into what it would look like if we were able to read it uninterrupted from beginning to end.
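The piecing-together step can be illustrated with a toy example: take a short read, try to find it in a reference sequence, and if that fails try its reverse complement (the “scanned backwards” case). Real aligners work against 3 billion bases using clever indexing rather than this naive scan, and the reference and reads below are made up, but the principle is the same:

```python
# Toy read placement: try the read forwards, then as its reverse complement.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def place_read(reference, read):
    """Return (position, strand) of the read in the reference, or None."""
    pos = reference.find(read)
    if pos != -1:
        return pos, "+"            # read matched as-is
    pos = reference.find(reverse_complement(read))
    if pos != -1:
        return pos, "-"            # read was scanned backwards
    return None

reference = "ACGTTAGCCGATCGATTACA"
print(place_read(reference, "GCCGAT"))   # a forward match
print(place_read(reference, "CGGCTA"))   # only matches once reverse-complemented
```

Repeat that for millions of overlapping reads, reconcile the overlaps, and you have reassembled the uninterrupted sequence.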

At the time of writing (July 2017), we’re up to version 38 of the reference human genome. 23andMe currently use version 37 for their data to surface inherited medical traits. Most of the DNA sampling industry traces family history reliably using version 36, and hence most exports to common DNA databases automatically “downgrade” to that version for best consistency.

DNA Structure

DNA is organised into 46 sections (known as chromosomes); 23 of them come from your birth father, 23 from your birth mother. While all humans share over 99% commonality, the remaining differences make every one of us (identical twins aside) statistically unique.

The cost to sample your own DNA – or that of a relative – is these days in the range of £79-£149. The primary one looking for inherited medical traits is 23andMe. The biggest volume for family tree use is AncestryDNA. That said, there are other vendors such as Family Tree DNA (FTDNA) and MyHeritage that also offer low cost testing kits.

The Ancestry DNA database has some 4 million DNA samples to match against, 23andMe over 1 million. The one annoyance is that you can’t export your own data from one of these two and insert it into the other for matching purposes (neither has import capabilities). However, all the major vendors do allow exports, so you can upload your data from AncestryDNA or 23andMe into FTDNA, MyHeritage and the industry-leading cross-platform GEDmatch DNA database very simply.

Exports create a ZIP file. With FTDNA, MyHeritage and GEDmatch, you request an import and are prompted for the name of that ZIP file itself; there is no need to break it open first.

On receipt of the testing kit, register the code on the provided sample bottle on the vendor’s website. Avoid eating or drinking for 30 minutes, spit into the provided tube up to the level mark, seal it, put it back in the box, seal that and pop it in a postbox. Results follow in your account on their website in 2-4 weeks.

Family Tree matching

Once you receive your results, Ancestry and 23andMe will give you details of any suggested family matches on their own databases. The primary warning here is that matches will be against your birth mother and whoever made her pregnant; given the historical unavailability of effective birth control and the secrecy of adoption processes, this has been known to surface unexpected home truths. Relatives trace up and down the family tree from those two reference points. A quick gander at self-help forums on social media can be entertaining, or a litany of horror stories – alongside others of raw delight. Take care, and be ready for the unexpected:

My first social media experience was seeing someone confirm a doctor as her birth father. Her introductory note to him said that he may remember her Mum, as she used to be his nursing assistant.

Another was to a man who, once identified, admitted to knowing her birth mother in his earlier years – but said it couldn’t be him “as he’d never make love with someone that ugly”.

Outside of those, there are fairly frequent outright denials questioning the fallibility of the science behind DNA testing, none of which stand up to factual scrutiny. But there are also stories of delight all round when long lost, separated or adopted kids locate, and successfully reconnect with, one or both birth parents and their families.

Loading into other databases, such as GEDmatch

In order to escape the confines of vendor specific DNA databases, you can export data from almost any of the common DNA databases and reload the resulting ZIP file into GEDmatch. Once imported, there’s quite a range of analysis tools sitting behind a fairly clunky user interface.

The key discovery tool is the “one to many” listing, which compares your DNA against everyone else’s in the GEDmatch database – and lists partial matches in order of closeness to your own data. It does this using a unit of measure called “centiMorgans”, abbreviated “cM”. Segments that show long exact matches are totted up, giving the total proportion of DNA you share. If you matched yourself or an identical twin, you’d share a total of circa 6800cM. Half your DNA comes from each birth parent, so they’d show as circa 3400cM. From your grandparents, half again. As your family tree extends both upwards and sideways (to uncles, aunts, cousins, their kids, etc), the numbers dilute by half with each step; you’ll likely find thousands of potential matches 4 or 5 steps away from your own data.

If you want to surface birth parent, child, sibling, half-sibling, uncle, aunt, niece, nephew, grandparent and grandchild relationships reliably, then only matches of greater than 1300cM are likely to have statistical significance. Anything lower than that makes for an increasingly difficult struggle to fill out a family tree, usually pursued by asking other family members to get their DNA tested; it is fairly common for GEDmatch to give you details (including email addresses) of your 1-2,000 closest matches, sorted in descending closeness order for you.
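The halving arithmetic above is simple enough to tabulate. A rough sketch (the ~6800cM self-match total is the figure GEDmatch reports; real matches scatter widely around these expected averages because recombination is random, so treat the output as a rough guide only):

```python
# Expected shared centiMorgans, halving with each generational step away
# from a self/identical-twin match. Real matches vary around these averages.
TOTAL_CM = 6800

def expected_cm(steps):
    """Expected shared cM for a relative `steps` halvings away."""
    return TOTAL_CM / (2 ** steps)

relations = [
    "self / identical twin",
    "birth parent or child",
    "grandparent, grandchild, aunt/uncle, half-sibling",
    "first cousin, great-grandparent",
]
for steps, relation in enumerate(relations):
    print(f"{relation}: ~{expected_cm(steps):.0f} cM")
```

The 1300cM threshold quoted above sits comfortably between the ~1700cM second step and the ~850cM third, which is why it separates the close relationships so cleanly.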

As one example from GEDmatch, the highlighted line shows a match against one of the subjects parents (their screen name and email address cropped off this picture):

GEDmatch parent

There are more advanced techniques to use a Chromosome browser to pinpoint whether a match comes down a male line or not (to help understand which side of the tree relationships a match is more likely to reside on), but these are currently outside my own knowledge (and current personal need).

Future – take care

One of the central tenets of the insurance industry is to spread societal costs equitably across a large base of folks who may, at random, have to draw benefits from the funding pool – and specifically not to prejudice anyone whose DNA may give indications of inherited risks or pre-conditions that might otherwise jeopardise their inclusion in cost-effective health insurance or medical coverage.

Current UK law specifically makes it illegal for any commercial company or healthcare enterprise to solicit data, including DNA samples, where such provision may prejudice the financial cost, or service provision, to the owner of that data. Hence, please exercise due care with your DNA data, and with any entity that can associate that data with you as a uniquely identifiable individual. Wherever possible, only have that data stored in locations in which local laws, and the organisations holding your data, carry due weight or agreed safe harbour provisions.

Country/Federal Law Enforcement DNA records

The largest DNA databases in many countries are held, and administered, for police and criminal justice use: a combination of crime scene samples, DNA of known convicted individuals, and samples used to help locate missing people. The big issue at the time of writing is that there’s no ability to volunteer a submission for matching against missing person or police-held samples, even though those data sets are fairly huge.

Access to such data assets is jealously guarded, and there is no current capability to volunteer your own readings for potential matches to be exposed to a case officer; intervention is at the discretion of the police, and they usually run their own custom sampling process and lab work. Personally, I think that’s a great shame, particularly for individuals searching for a missing relative and wanting their data to help identify a match at some stage.

I’d personally gladly volunteer if there were appropriate safeguards to keep my physical identity well away from any third party organisation; matches would only be brought to the attention of a case officer, with any feedback to interested relatives left to their professional discretion.

I’d propose that any matches over 1300cM get fed back to both parties where possible, or at least allow cases to get closed. That would surface birth parent, child, sibling, half-sibling, uncle, aunt, niece, nephew, grandparent and grandchild relationships reliably.

At the moment, police typically won’t take volunteer samples unless a missing person is vulnerable – and unfortunately not yet for tracing purposes.

Come join in – £99 is all you need to start

Whether for medical traits knowledge, or to help round out your family trees, now is a good time to get involved cost effectively. Ancestry currently add £20 postage to their £79 testing kit, hence £99 total. 23andMe do ancestry matching, ethnicity and medical analyses too for £149 or so all-in. However, Superdrug are currently selling their remaining stock of 23andMe testing kits (bought when the US dollar rate was better than it is now) for £99. So – quick, before stock runs out!

Either will permit you to load the raw data, once analysed, into FTDNA, MyHeritage and GEDmatch too.

Never a better time to join in.

The Next Explosion – the Eyes have it

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

One of the early lessons you pick up looking at product lifecycles is that some people hold out buying any new technology product or service longer than anyone else. You make it past the techies, the visionaries, the early majority, the late majority and finally meet the laggards at the very right of the diagram (PDF version here). The normal way of selling at that end of the bell curve is to embed your product in something else; the person who swore they’d never buy a microprocessor unknowingly has one inside the controls of their microwave, or 50-100 ticking away in their car.

In 2016, Google started releasing access to its Vision API. They had been routinely using their own neural networks for several years; one typical application took the video footage from Google Maps Streetview cars and correlated house numbers from that footage with GPS locations in each street. They went on to train models to pick out objects in photographs, and to annotate a picture with a description of its contents – without any human interaction. They have also begun an effort to do likewise, describing the stories contained in hundreds of thousands of YouTube videos.

One example was to ask it to differentiate muffins and dogs:

This it does with aplomb, usually with much better than human performance. So, what’s next?

One notable time in natural history was the explosion in the number of species on earth that occurred in the Cambrian period, some 534 million years ago. This appears to be when life forms first developed useful eyes, which led to an arms race between predators and prey. Eyes are now everywhere, and brains very sensitive to signals that arrive that way; if something or someone looks like they’re staring at you, sirens in your consciousness go to full volume.

Once a neural network is taught (you show it thousands of images and tell it which contain what, then it works out a model to fit), the resulting learning can be loaded down into a small device. It usually then needs no further training, nor any connection to a bigger computer or cloud service. It can just sit there and report back what it sees, when it sees it; the target of the message can be a person or a computer program anywhere else.
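That “train big in the cloud, run small on the device” split is easy to illustrate: the weights below stand in for a model trained elsewhere, and the device only ever runs the cheap forward pass. The numbers are hand-picked for illustration (a trivial “is this 2-pixel input bright?” detector), not a real trained model, which would ship as a Core ML or similar file:

```python
import numpy as np

# Weights as they might arrive on-device after training elsewhere.
# These particular values are hand-built purely for illustration.
W1 = np.array([[1.0, -1.0], [1.0, -1.0]])   # hidden layer weights
b1 = np.array([-0.5, 0.5])                   # hidden layer biases
W2 = np.array([1.0, -1.0])                   # output layer weights
b2 = 0.0

def predict(pixels):
    """Forward pass only: no training code ships to the device."""
    hidden = np.maximum(0, pixels @ W1 + b1)  # ReLU activation
    score = hidden @ W2 + b2
    return score > 0                           # True = "bright"

print(predict(np.array([0.9, 0.8])))   # a bright input
print(predict(np.array([0.1, 0.0])))   # a dark input
```

A few matrix multiplies per inference is exactly the workload a phone GPU chews through effortlessly, which is why the trained “eyes” can live in the barrier at a car park rather than in a datacentre.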

While Google have been doing the heavy lifting of building learning models in the cloud, Apple have slipped in with their own Core ML model format – a sort of PDF for trained machine learning models – and use the Graphics Processing Units on their iPhone and iPad devices to run the resulting models on the user’s device. They also have their ARKit libraries (as in “Augmented Reality”) to sense surfaces and boundaries live through the embedded camera – and to superimpose objects in the field of view.

With iOS 11 coming in the autumn, any handwritten notes get automatically OCR’d, indexed and added to local search. When a document on your desk is photographed at an angle, it can automatically be flattened to look like a hi-res scan of the original – which you can then annotate. There are probably many features like this which will be in place by the time the new iPhone models arrive in September/October.

However, that’s just the tip of the iceberg. When I drive out of the car park in the local shopping centre here, the barrier raises automatically because the person holding the ticket issued to my car’s number plate has already paid. And I guess we’re going to see a Cambrian explosion as inexpensive “eyes” get embedded in everything around us, in our service.

With that, one example of what Amazon are experimenting with in their “Amazon Go” shop in Seattle. Every visitor a shoplifter: https://youtu.be/NrmMk1Myrxc

Lots more to follow.

PS: as a footnote, an example of drawing a ruler on a real object, just three weeks after ARKit was released. Next: personalised shoe and clothes measurements, and mail-order supply to size: http://www.madewitharkit.com/post/162250399073/another-ar-measurement-app-demo-this-time

Danger, Will Robinson, Danger

One thing that bemused the hell out of me – as a software guy visiting prospective PC dealers in 1983 – was our account manager for the North of the UK. On arrival at a new prospective reseller, he would take out a tape measure and measure the distance between the nearest Director’s Car Parking Slot and the front door. He’d then repeat the exercise for the nearest Visitor’s Car Parking Spot and the front door. And then walk in for the meeting to discuss their application to resell our range of personal computers.

If the Director’s slot was closer to the door than the Visitor’s slot, the meeting was a very short one. The positioning betrayed the senior management’s attitude to customers, which in countless cases I saw in other regions (eventually) translate into that company’s success – or otherwise. A brilliant and simple leading indicator.

One of the other red flags as companies became successful was their own HQ building becoming ostentatious. I always wonder whether the leaders can retain their focus on their customers while building these things. Like Apple in a magazine today:

Apple HQ

And then Salesforce, with the now tallest building in San Francisco:

Salesforce Tower

I do sincerely hope the focus on customers remains in place, and that none of the customers are upset by where each company is channelling its profits. I also remember a telco equipment salesperson turning up at his largest customer’s site in his new Ferrari, and their reaction of disgust, which unhinged a long-term relationship; he should have left it at home and driven in using something more routine.

Modesty and frugality are usually better leading indicators of delivering good value to folks buying from you. As are all the little things that demonstrate that the success of the customer is your primary motivation.

IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most items reflect changes well beyond the industry move from CapEx-led investments to OpEx subscriptions of several years past, and indeed the wholesale growth in the use of Open Source software across the industry over the last 10 years. Your Mileage, or that of your Organisation, May Vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent video showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one-page site saying what the result will look like) was for 3p. My Digital Ocean server instance (running a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per software instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that’s “where the experts to manage infrastructure and its security live” at scale.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, Softlayer in IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source, NoSQL (key:value, document-orientated) databases, and components folks can wire together. Having been brought up on MySQL, I found it surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single-page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers. FOG in the Ruby ecosystem. CloudFoundry (termed BlueMix in IBM) is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in traditional lock-in strategy (to avoid their services becoming a price led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless” apps, typified by Amazon Lambda. There are actually two different entities in the mix: one where you provide code and pay per invocation against external events, the other the ability to scale (or contract) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and, to a lesser extent, AWS are packaging up API calls for various core services and machine learning facilities. E.g. I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number from this image of an invoice”. There is an excellent 35-minute discussion of the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal intranet-type apps, not ones exposed outside the organisation. This is also an antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner takes all” app stores, with a typical user using just 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web App). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is mapping their customer needs against available project components: mapping user needs against axes of product lifecycle and value chains – and sussing the likely movement of components (which also tells you where to apply Six Sigma and where agile techniques within the same organisation). More eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA

There is quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays of security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Reinventing Healthcare

Comparison of US and UK healthcare costs per capita

A lot of the political effort in the UK appears to circle around a government justifying and handing off parts of our NHS delivery assets to private enterprises, despite the ultimate model (that of the USA healthcare industry) costing significantly more per capita. Outside of politicians lining their own pockets in the future, it would be easy to conclude that few would benefit by such changes; such moves appear to be both economically farcical and firmly against the public interest. I’ve not yet heard any articulation of a view that indicates otherwise. But less well discussed are the changes that are coming, and where the NHS is uniquely positioned to pivot into the future.

There is significant work to capture the DNA of individuals, but DNA is fairly static over time. It is estimated that there are 10^9 data points per individual, but there are many other data points – which change over a long timeline – that could be even more significant in helping to diagnose unwanted conditions in a timely fashion; the aim is to flip the industry to work almost exclusively on preventative rather than symptom-based healthcare.

I think I was on the right track with an interest in Microbiome testing services. The gotcha is that commercial services like uBiome, and public research like the American (and British) Gut Project, are one-shot affairs. Processing a stool, skin or other location sample takes circa 6,000 hours of CPU wall time to reconstruct the 16S rRNA gene sequences of a statistically valid population profile – something I thought I could get to a super-fast turnaround using excess capacity (spot instances – spare compute power you can bid to consume when available) at one or more of the large cloud vendors, and then build a data asset that could use machine learning techniques to spot patterns in people who later get afflicted by an undesirable or life-threatening medical condition.

The primary weakness in the plan is that you can’t suss the way a train is travelling by examining a photograph taken looking down at a static railway line. You need to keep the source sample data (not just a summary) and measure at regular intervals; an incidence of salmonella can routinely knock out 30% of your Microbiome population inside 3 days before it recovers. The profile also flexes wildly based on what you eat and other physiological factors.

The other weakness is that your DNA and your Microbiome samples are not the full story. There are many other potential leading indicators of your propensity to become ill that we’re not even sampling. Which of our 10^18 different data points are significant over time, and how regularly we should be sampled, remain open questions.

Experience in the USA is that environments where regular preventative checkups of otherwise healthy individuals take place – dentistry – have managed to lower the cost of service delivery by 10%, at a time when the rest of the health industry has seen 30-40% cost increases.

So, what are the measures that should be taken, how regularly and how can we keep the source data in a way that allows researchers to employ machine learning techniques to expose the patterns toward future ill-health? There was a good discussion this week on the A16Z Podcast on this very subject with Jeffrey Kaditz of Q Bio. If you have a spare 30 minutes, I thoroughly recommend a listen: https://soundcloud.com/a16z/health-data-feedback-loop-q-bio-kaditz.

That said, my own savings are such that I have to refocus my efforts elsewhere, back in the IT industry, and my Microbiome testing service business plan is mothballed. The technology to regularly sample a big enough population is not yet deployable in a cost-effective fashion, but it will come. When it does, the NHS will be uniquely positioned to pivot, unhindered, into the sampling and preventative future of healthcare.