Loonshots

Loonshots by Safi Bahcall

I’m reading too many books on innovation in large organisations that could be drastically shortened and still convey useful lessons. This book covers what appears to be a simple lesson, oft repeated.

The lesson is that large businesses often let large (current) business lines dominate the allocation of resources. This is often to the detriment of smaller, more radical approaches that could serve customers in new ways and carry much larger future growth potential. And if that weren’t enough, those large and powerful business lines act as political antibodies to any potential threat to their internal franchise.

So you have to have leadership able to keep the two separate, with the emergent groups generously funded and kept away from any internal political interference. Let the emergent group put a pirate flag on a mast atop a separate building, and not let on what they are doing – even to the rest of the organisation.

To many, that small, emergent, growth-potential team is a typical startup. Probably the best book explaining those behaviours, and how Venture Capital places bets towards breeding disruptive growth, is “Zero to One” by Peter Thiel. I feel I learnt more about driving disruptive businesses from that book than from almost any other I’ve read.

Another approach is described in “Zone to Win” by Geoffrey Moore. That centres on formalising the division of funding streams to products or operating divisions to support their expected size or operating mechanics in future time horizons – if indeed these are predictable.

Loonshots is largely about leaving “wild duck” ideas alone and cites examples where doing this (and overcoming several large setbacks on the way) has led to companies disrupting previous industry leaders. So, largely familiar stories with the usual examples.

However, for a large organisation, one example of simple genius really sticks out in my mind: Honda. The core Agile-based R&D sits in one operating company, and the six-sigma manufacturing line in another. That is probably the ultimate and most effective piece of operational and management simplicity out there.

Quality Journalism – UK Oxymoron?

I’m writing this the day that John McCain died in the USA – and the most compelling eulogy came from Barack Obama. It’s a rare day right now when people can disagree fervently with each other’s views, but still hold each other in the greatest respect.

In reading “The Secret Barrister”, you come away with a data-filled summary of the comparatively poor, and still worsening, state of Westminster politics. Of successive abuses of a system of justice by politicians of all colours, prioritising “PR” on everything, masking poor financial choices with sound bites, while quietly robbing us all blind of values we hold dear. And I’m sure Chris Grayling will receive few Christmas cards from members of the judiciary, based on their experience of him documented in this book’s pages.

Politics is only half the story here. I often wonder where quality journalism disappeared to. There are good pockets in the London Review of Books, and in the work on the Panama Papers by the ICIJ – but where else is the catalogue of abuses systematically documented in a data-based, consumable way? Where is the media with the same bite as “World in Action” back in the day? It appears completely AWOL.

One of the really curious things about Westminster is that MPs are required to align to the terms of “The Code of Conduct for Members of Parliament”. If you go down to item 6, it reads: “Members have a general duty to act in the interests of the nation as a whole; and a special duty to their constituents”. Now, tell me how the whip system squares with that. On the face of it, it is profoundly against the very code in which our democracy is enshrined.

There appears to be no published data source on the number of votes taken, and whether they were “free” votes or directed by one-, two- or three-line instructions from each party’s whips’ office. Fundamentally, how many votes were allowed to rest on the conscience of MPs exercised freely, and to what extent were MPs compelled, like sheep through an abattoir, into the voting lobbies?

My gut suggests our current government is probably inflicting more divisive whips, more often, than any UK government in our history, not least as the future interests of our country appear to be driven by a very small proportion of the representatives there. The bare complexion of this should be easily apparent from the numbers and some simple comparative graphs – so, who’s keeping count?

Democracy this isn’t. And the lack of quality journalism in the UK is heavily complicit in its disappearance.

Simple Mistakes – User Experience 101 failure

Things have been very busy at work recently, but I surfaced on Saturday to take my wife into Reading to collect goods she’d ordered from Boots earlier in the week. Just replenishing a range of items so she didn’t start running out from Monday. She’d been advised to pick everything up anytime after midday from their town centre store.

You’ve probably guessed it – no sign of the order. She was just pointed at the customer service phone number on her copy of the order sheet, and asked to call them. Which she duly did outside, only to find that hundreds of customers who’d paid for their orders during the week with Mastercard were in the same position. So she asked if she could cancel the order and at least buy the same things in the store while she was in town. Answer: the operator wasn’t sure if she was even allowed to cancel, but would ask and email back. That email never arrived; the only one that did, a day later, confirmed her items had now been dispatched to the store – but with no indication of an arrival date, or of when the next 15-mile round trip would be needed.

I’m reminded of two things learnt from Amazon and from producing strategy maps using the Wardley Mapping technique. The common thread, when you’re involved in any product, project or business process design, is that you start with the customer and optimise for their most delightful experience – then work back from there. And you only start trying to be unique if it’s directly visible to that user experience in some tangible way. Both facets together remain an incredibly important gap that I see folks miss all the time (I see that in projects at work, but that’s another story).

During the week, it was announced that Amazon had purchased PillPack – a small New England company – and it sent the shares of all the big pharmacy chains in the USA tumbling (in market cap terms, around $13 billion in a day). So, what do they do? Simple:

If you have a regular prescription, they put all the tablets you’re supposed to consume into time/date-labelled packets. These are printed and filled on a roll, output in chronological order, then loaded into a dispenser you receive in the mail (overnight if needed urgently). Simple! And then a set of services where they maintain your repeat needs with your doctor directly, so all the grunt work in ensuring you get your meds is done for you. They even allow you to set your holiday location if you’re away, and will ship there if needed, to ensure you never have an unwanted gap.

Compare that to the run-around most folks are exposed to with regular prescriptions, and in understanding what to take and when. Instead you have a friendly, subscription-based business serving your needs. (For some reason, Wall Street currently obsesses about subscription-based businesses – they value their stock not on price-to-earnings ratios but on price-to-sales multiples instead – and Amazon are in the thick of that too.)

Personal experience here with Amazon (we’re Prime members) is that if there is any problem with an order and we ask to cancel, it just happens and the money is immediately reimbursed. You can see why all those retail pharmacy shares took a hit with the PillPack buyout announcement by Amazon; the end user experience is about to get radically better, and this is probably the first of a number of Amazon initiatives in the health industry that will follow a path of relentless, customer-obsessed focus.

Amazon already have a joint venture with JPMorgan Chase and Berkshire Hathaway to work out how to provide cost-effective health benefits to their combined employee populations. Something they’ll no doubt open up outside the three companies too in time. That’s when life for CVS, Walgreens, Target and so forth (plus Boots in the UK) will get very interesting. Bring it on!

IT Trends into 2018 – or the continued delusions of Ian Waring

William Tell the Penguin

I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they listed 13 items!). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three you have a large base of employees in a country without a single-payer healthcare system, mired in business model inefficiencies. Getting an operationally efficient pilot of reasonable scale running with internal users in the JV companies, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent-seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions on them, it’s difficult to envision a corrective action that Amazon aren’t already applying to themselves. This program is real fox-in-the-hen-house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring, via wearable sensors, is the next piece of icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those situations you can quickly distill down to “great technology, what can we use this for?”. Conferencing – even voice – yes. Shared KanBan flows (Trello), shared BaseCamp views, communal use of GitHub, all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; six sigma, doggedly industrialising a process, over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march to interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place, and only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user does find themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably appearing to be of a similar age.

The one striking thing I’ve found was talking to a vendor who built a machine learning model that went through IT Helpdesk tickets, instant message and email interaction histories, nominally to prioritise the natural language corpus into a list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was improvements to FAQ sheets in the first instance. Ian thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!
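For what it’s worth, the mechanics the vendor described are not exotic. A minimal sketch, assuming scikit-learn and a small hand-labelled sample of tickets (the ticket texts and intent labels below are invented purely for illustration):

```python
# A sketch of the kind of pipeline described above, assuming scikit-learn and a
# small hand-labelled sample of helpdesk tickets. Ticket texts and intent labels
# here are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "I can't log in to my email",
    "Please reset my password",
    "The printer on floor 2 is jammed",
    "VPN keeps disconnecting every few minutes",
]
intents = ["account_access", "password_reset", "printer_fault", "network_issue"]

# Turn the free text into TF-IDF features and fit a simple classifier over them.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tickets, intents)

# New tickets can then be routed to the most likely intent (and hence action).
print(model.predict(["my password has expired"]))
```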

IoT (Internet of Things). The example provided was tying together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories. It then cited security concerns and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big-number analyses from vendors, but little from application perspectives. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets or smart watches. Security devices are a smaller base; simple embedded devices smaller still. I think I’ll be more excited when sensors get meaningful vision capabilities (listed separately below). Until then, I’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel, or in flour in a mixing bowl, or when it’s the quickest way to enact a desired action – usually away from a keyboard and out of earshot of anyone else. It does liberate my two youngest grandchildren, who are still learning to read and write. Those apart, it’s just another UI used occasionally – albeit I’m still in awe of folks who dictate their book-writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch. In the meantime, 3D printing. Not my area, but it feels to me like low-volume local production of customised parts, and I’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to the point of use. My dentist 3D prints parts of teeth while patients wait, but otherwise I’ve not had any exposure that I could translate into a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that recalled the point in prehistory when the number of different life-form species on earth vastly accelerated; this was the Cambrian Period, when life forms first developed eyes. A combination of cheap camera hardware components and excellent machine learning Vision APIs should be transformative. Especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not-present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.

In the Open Source software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io‘s Broca NLU product, which can build a suggested intent:action Service Catalogue from Natural Language analysis of support tickets, CRM data, instant message and support email content.

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley was in a grave, he’d be spinning in it (but thank god he’s not).

Microservices is about making your organisation’s data and processes available to applications that can be internally facing, externally facing or both, using web interfaces. You typically work with Apigee (now owned by Google) or 3scale (owned by Red Hat) to produce a well-documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort out licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
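To make that concrete, here’s a minimal sketch of the sort of thing that sits behind such an API – a single service exposing one internal capability over HTTP. Flask is used purely for illustration; the endpoint, data and field names are hypothetical stand-ins:

```python
# A minimal sketch of a microservice exposing one internal capability over HTTP.
# Flask is used purely for illustration; the endpoint, data and field names are
# hypothetical stand-ins for whatever your organisation wants to expose.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for an internal data source.
CUSTOMERS = {"42": {"name": "Jane Example", "status": "active"}}

@app.route("/api/v1/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(customer)

if __name__ == "__main__":
    app.run(port=8000)
```

API gateways like Apigee or 3scale then sit in front of endpoints like this one, handling documentation, discovery, keys and rate limits.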

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, and the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently on both your own server hardware, and in different cloud vendors infrastructures.

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. Same with the “DevOps” teams that are currently employed in many organisations, to rapidly iterate and deploy custom code very regularly by mingling Developer and Operations staff.

With Serverless, the theory is that you should be able to write code once, and for it to be fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the database services used alongside them) are different enough to leave applications based on one limited to that cloud provider only.
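To give a flavour, a minimal sketch of a function in the AWS Lambda style – the handler name and signature follow the standard Lambda convention, while the event shape and business logic are invented:

```python
# A sketch of a function in the AWS Lambda style: no server to manage, the platform
# fires it up on demand and scales it. The handler name and signature follow the
# standard Lambda convention; the event shape and business logic are invented.
import json

def lambda_handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```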

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.
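The core mechanism is simpler than it sounds. A toy sketch (mine, not any production implementation) of a hash-chained ledger, leaving out all of the distributed consensus machinery that real blockchains add on top:

```python
# A toy sketch of the core idea: each block carries the hash of the block before it,
# so tampering with any earlier block breaks every link that follows. Real blockchains
# add distributed consensus (proof of work etc.) on top of this.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    previous_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "transactions": transactions, "prev": previous_hash})

def is_valid(chain):
    # Every block must reference the hash of the block immediately before it.
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
add_block(ledger, ["Alice pays Bob 5"])
add_block(ledger, ["Bob pays Carol 2"])
print(is_valid(ledger))   # True
ledger[0]["transactions"] = ["Alice pays Bob 500"]
print(is_valid(ledger))   # False - the tamper is detected
```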

Blockchain is quite a poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; there is even the use of virtual coins (ICOs) to act as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and I’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. A device like LIDAR (the spinning sensor atop self-driving cars) typically generates several GB of data per mile travelled, and there is insufficient reliable bandwidth to delegate all the compute to a remote cloud server. So the models of how a car should drive itself are built in the cloud, but downloaded and executed in the car, without a high-speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared back later when adjacent to a fast home or work network.

Very fast chips are a thing; the CPU in my Apple Watch is faster than a room-sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks have hit it out of the park on every iteration they’ve shipped to date). Development environments for powerful, embedded systems are something I’ve not seen so far though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish attention on the brands they represent in very specific target audiences. Advertisers are the paying customers. Users are the Product. All the incentives and business models align to these characteristics.

Political operators, local as well as foreign actors, have fundamentally subverted the model. Controversial and most often incorrect and/or salacious stories get wide distribution before any truth emerges. Fake accounts and automated bots pervert the engagement indicators that drive increased distribution (noticeably, one video segment of one Donald Trump speech got two orders of magnitude more “likes” than the number of people who actually played the video at all). Above all, messages tuned to different filter bubbles drive action in some cases, and antipathy in others, to directly undermine voting patterns.

This is probably the biggest challenge facing large social networks, at the same time as politicians (though themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to make the world a better place, and to sideline these other people and their poison. Work to do.

Does your WordPress website go over a cliff in July 2018?

Secure connections, faster web sites, better Google search rankings – and well before Google throw a switch that will disadvantage many other web sites in July 2018. I describe the process to achieve this for anyone running a WordPress Multisite Network below. Or I can do this for you.

Many web sites that handle financial transactions use a secure connection; this gives a level of guarantee that you are posting your personal or credit card details directly to a genuine company. But these “HTTPS” connections don’t just protect user data, but also ensure that the user is really connecting to the right site and not an imposter one. This is important because setting up a fake version of a website users normally trust is a favourite tactic of hackers and malicious actors. HTTPS also ensures that a malicious third party can’t hijack the connection and insert malware or censor information.

Back in 2014, Google asked web site owners if they could make their sites use HTTPS connections all the time, and provided both a carrot and a stick as incentives. The stick: they promised that future versions of their Chrome browser would explicitly call out sites that were presenting insecure pages, so that users knew where to tread very carefully. The carrot: they suggested that they would positively discriminate in favour of secure sites over insecure ones in future Google searches.

The final step in this process comes in July 2018:

New HTTP Treatment by Chrome from July 2018

The logistics of achieving “HTTPS” connections for many sites are far from straightforward. Like many service providers, I host a WordPress network that aims individual customer domain names at a single Linux-based server. That in turn looks to see which domain name the inbound connection request has come from, and redirects onto that customer’s own subdirectory structure for the page content, formatting and images.

The main gotcha is that if I tell my server that its certified identity is “www.software-enabled.com”, an inbound request for “www.ianwaring.com”, or “www.obesemanrowing.org.uk”, will get very confused. It will look like someone has hijacked those sites, and the user’s browser session will throw up some very pointed warnings suggesting a malicious traffic subversion attempt.

A second gotcha – even if you solve the certified identity problem – is that a lot of the content of a typical web site contains HTTP (not HTTPS) links to other pages, pictures or video stored within the same site. It would normally be a considerable (and error prone) process to change http: to https: links on all pages, not least as the pages themselves for all the different customer sites are stored by WordPress inside a complex MySQL database.

What to do?

It took quite a bit of research, but I cracked it in the end. The process I used was:

  1. Set up each customer domain name on the free tier of the CloudFlare content delivery network. This replicates local copies of the web site’s static pages in locations around the world, each closer to the user than the web site itself.
  2. Change the customer domain name’s Name Servers to the two cited by CloudFlare in step (1). It may take several hours for this change to propagate around the Internet, but no harm continuing these steps.
  3. Repeat (1) and (2) for each site on the hosted WordPress network.
  4. Select the WordPress “Network Admin” dashboard, and install two plug-ins; iControlWP’s “CloudFlare Flexible SSL”, and then WebAware’s “SSL Insecure Content Fixer”. The former handles the connections to the CloudFlare network (ensuring routing works without unexpected redirect loops); the latter changes http: to https: connections on the fly for references to content within each individual customer website. Network Enable both plugins. There is no need to install the separate CloudFlare WordPress plugin.
  5. Once CloudFlare’s web site shows all the domain names as verified – i.e. being managed by CloudFlare’s own name servers, with their own certificates assigned (each will show either a warning or a tick) – step through the “Crypto” screen on each one in turn, switching on “Always use HTTPS” redirections.

At this point, whether users access the websites using http: or https: (or don’t mention either), each will come up with a padlocked, secure, often greened address bar with “https:” in front of the web address of the site. Job done.
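A quick way to sanity-check the result is to confirm that plain http: requests now land on an https: address. A sketch using Python’s requests library (the domain names are the ones mentioned above):

```python
# A quick sanity check, sketched with the 'requests' library, that plain http:
# requests to each customer domain now end up on an https: address.
import requests

sites = ["http://www.ianwaring.com", "http://www.software-enabled.com"]

for site in sites:
    response = requests.get(site, allow_redirects=True, timeout=10)
    print(f"{site} -> {response.url} ({response.status_code})")
    assert response.url.startswith("https://"), f"{site} did not redirect to HTTPS"
```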

Once the HTTP redirects to HTTPS appear to be working, and all the content is being displayed correctly on pages, I go down the Crypto settings on the CloudFlare web site and enable “opportunistic encryption” and “HTTPS rewrites”.

In the knowledge that Google also give faster sites better rankings in search results than slow ones, there is also a “Speed” section on the CloudFlare web site. On this, I’ve told it to compress CSS, JavaScript and HTML pages – termed “Auto Minify” – to minimise the amount of data transmitted to the user’s browser while still rendering it correctly. With this, in combination with my use of a plug-in for Google’s AMP (Accelerated Mobile Pages) shortcuts – which in turn can give 3x load speed improvements on mobile phones – all the customer sites are really flying.

CloudFlare do have a paid offering called “Argo Smart Routing” that further speeds up delivery of web site content. Folks are reported to be paying $5/month and seeing page loads complete in 35% of the time they took before it was enabled. You do start paying for the amount of traffic you’re releasing onto the Internet at large, but the pricing tiers are very generous – and should only become noticeable for high-traffic web sites.

So, secure connections, faster web sites, better Google search rankings – and well before Google throw the switch that will disadvantage many other web sites in July 2018. I suspect having hundreds of machines serving the content on CloudFlare’s Content Delivery Network will also make the site more resilient to distributed denial of service flood attack attempts, if any site I hosted ever got very popular. But I digress.

If you would like me to do this for you on your WordPress site(s), please get in touch here.

A small(?) task of running up a Linux server to run a Django website

Django 1.11 on Ubuntu 16.04

I’m conscious that the IT world is moving in the direction of “Serverless” code, where business logic is loaded to a service and the infrastructure underneath abstracted away. In that way, it can be woken up from dormant and scaled up and down automatically, in line with the size of the workload being put on it. Until then, I wanted (between interim work assignments) to set up a home project to implement a business idea I had some time back.

In doing this, I’ve had a tour around a number of bleeding-edge attempts. First as a single page app written in JavaScript on Amazon AWS with Cognito and DynamoDB. Then onto Polymer Web Components, which I stopped after it looked like Apple were unlikely to support them in Safari on iOS in the short term. Then onto Firebase on Google Cloud, which was fine until I realised I needed a relational DB for my app (I have experience of MongoDB from 2013, but NoSQL schemas aren’t the right fit for my app). And then to Django, which seems to be gaining popularity these days, not least as it’s based on Python and is designed for fast development of database-driven web apps.

I looked for the simplest way to run up a service on each of the main cloud vendors. After half a day of research, I elected to try Django on Digital Ocean, where a “one click install” was available. This looked the simplest way to install Django on any of the major cloud vendors. It took 30 minutes end to end to run the instance up, ready to go; that was until I realised it was running an old version of Django (1.8), and used Python 2.7 — which is not supported by the (then) soon to be released 2.0 version of Django. So, off I went trying to build everything from the ground up.

The main requirement was that I was developing on my Mac, but the production version would run in the cloud on a Linux instance — so I had to set up both. I elected to use PostgreSQL as the database, Nginx with Gunicorn as the web server stack, Let’s Encrypt (as recommended by the EFF) for certificates, and Django 1.11 — the latest version when I set off. The local development environment uses Microsoft Visual Studio Code alongside GitHub.

One of the nuances of Django is that users are normally expected to log in with a username that is different from their email address. I really wanted my app to use a person’s email address as their only login username, so I had to put customisations into the Django set-up to achieve that along the way.
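For anyone curious, the customisation is roughly the standard Django pattern below – a sketch rather than a copy of my actual code, with the app label and class names invented:

```python
# Roughly the shape of the customisation involved: a User model keyed on an email
# address instead of a username. A sketch of the standard Django pattern rather
# than my actual code; the app label and class names are invented.
from django.contrib.auth.models import AbstractUser, BaseUserManager
from django.db import models

class EmailUserManager(BaseUserManager):
    def create_user(self, email, password=None, **extra_fields):
        if not email:
            raise ValueError("An email address is required")
        user = self.model(email=self.normalize_email(email), **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password=None, **extra_fields):
        extra_fields.setdefault("is_staff", True)
        extra_fields.setdefault("is_superuser", True)
        return self.create_user(email, password, **extra_fields)

class User(AbstractUser):
    username = None                        # drop the separate username field entirely
    email = models.EmailField(unique=True)

    USERNAME_FIELD = "email"               # log in with the email address
    REQUIRED_FIELDS = []                   # email + password are all that's needed

    objects = EmailUserManager()

# settings.py then points at it:
#   AUTH_USER_MODEL = "accounts.User"
```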

A further challenge is that the target devices used by customers on other sites I run are heavily weighted towards mobile phones, so I elected to use Google’s Material Design user interface guidelines. The Django implementation is built on an excellent framework I’ve used in another project, built by four Stanford graduates — MaterializeCSS — and supplemented by a lot of custom work on template tags, forms and layout directives by Mikhail Podgurskiy in a package called django-material (see: http://forms.viewflow.io/).

The mission was to get all the above running before I could start adding my own authentication and application code. The end result is an application that will work nicely on phones, tablets or PCs, resizing automatically as needed.

It turned out to be a major piece of work just getting the basic platform up and running, so I noted all the steps I took (as I went along), just in case this helps anyone (or the future me!) looking to do the same thing. If it would help you (it’s long), just email me at [email protected]. I’ve submitted it back to Digital Ocean, but am happy to share the step-by-step recipe.

Alternatively, hire me to do it for you!

WTF – Tim O’Reilly – Lightbulbs On!

What's the Future - Tim O'Reilly

Best Read of the Year, not just for high technology, but for a reasoned meaning behind political events over the last two years, both in the UK and the USA. I can relate it straight back to some of the prescient statements made by Jeff Bezos about Amazon’s “Day 1” disciplines: the best defence against an organisation’s path to oblivion being:

  1. customer obsession
  2. a skeptical view of proxies
  3. the eager adoption of external trends, and
  4. high-velocity decision making

Things go off course when interests divide in a zero-sum way between different customer groups that you serve, and where proxies indicating “success” diverge from a clearly defined “desired outcome”.

The normal path is to start with your “customer” and establish what indicates “success” for them in what you do: a clear understanding of the desired outcome. Then the measures to track progress toward that goal, the path you follow to get there (adjusting as you go), and a frequent review that the steps still serve the intended objective.

Fake News on Social Media, Finance Industry Meltdowns, unfettered slavery to “the market” and to “shareholder value” have all been central to recent political events in both the UK and the USA. Politicians of all colours were complicit in letting proxies for “success” dissociate fair balance of both wealth and future prospects from a vast majority of the customers they were elected to serve. In the face of that, the electorate in the UK bit back – as they did for Trump in the US too.

Part 3 of the book, entitled “A World Ruled by Algorithms” – pages 153-252 – is brilliant writing on our current state and injustices. Part 4 (pages 255-350) entitled “It’s up to us” maps a path to brighter times for us and our descendants.

Tim says:

The barriers to fresh thinking are even higher in politics than in business. The Overton Window, a term introduced by Joseph P. Overton of the Mackinac Center for Public Policy, says that an idea’s political viability falls within a window framing the range of policies considered politically acceptable in the current climate of public opinion. There are ideas that a politician simply cannot recommend without being considered too extreme to gain or keep public office.

In the 2016 US presidential election, Donald Trump didn’t just push the Overton Window far to the right, he shattered it, making statement after statement that would have been disqualifying for any previous candidate. Fortunately, once the window has come unstuck, it is possible to move it in radically new directions.

He then says that when such things happen, as they did at the time of the Great Depression, the scene is set to do radical things to change course for the ultimate greater good. So, things may well get better on the other side of Trump’s outrageous pandering to the excesses of the right, and indeed after we see the result of our electorate’s division over Brexit played out over the next 18 months.

One final thing that struck me was how one political “hot potato” issue involving Uber in Taiwan attracted very divided and extreme opinions, split 50/50 – but nevertheless got reconciled to everyone’s satisfaction in the end. This used a technique called Principal Component Analysis (PCA) and a piece of software called “Pol.is”. This allows folks to publish assertions, vote on them, and see how the filter bubbles evolve through many iterations over a four-week period. “I think Passenger Liability Insurance should be mandatory for riders on UberX private vehicles” (heavily split votes, 33% at both ends of the spectrum) evolved to 95% agreeing with “The Government should leverage this opportunity to challenge the taxi industry to improve their management and quality control system, so that drivers and riders would enjoy the same quality service as Uber”. The licensing authority in Taipei duly followed up for the citizens and all sides of that industry.
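For the technically curious, the PCA step itself is simple. A toy sketch (with an invented vote matrix, using scikit-learn) of projecting participants’ agree/disagree/pass votes onto the two main axes of disagreement – the clusters that emerge are the opinion groups Pol.is visualises:

```python
# A toy sketch of the PCA step behind a Pol.is-style analysis: rows are participants,
# columns are statements, values are agree (+1), disagree (-1) or pass (0).
# The vote matrix is invented purely to show the mechanics.
import numpy as np
from sklearn.decomposition import PCA

votes = np.array([
    [+1, +1, -1,  0],
    [+1, +1, -1, -1],
    [-1, -1, +1, +1],
    [-1,  0, +1, +1],
    [+1, -1,  0, +1],
])

# Project each participant onto the two main axes of disagreement; the clusters of
# points that emerge correspond to the opinion groups ("filter bubbles") shown.
coords = PCA(n_components=2).fit_transform(votes)
print(np.round(coords, 2))
```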

I wonder what the Brexit “demand on parliament” would have looked like if we’d followed that process, and whether indeed any of our politicians could have encapsulated the benefits to us all on either side of that question. I suspect we’d have a much clearer picture than we do right now.

In summary, a superb book. Highly recommended.

Your DNA – a Self Testing 101

23andMe testing kit

Your DNA is a string of base pairs that encapsulate your “build” instructions, as inherited from your birth parents. While copies of it are packed tightly into every cell in your body (and into the cells it constantly sheds), it is of considerable size; a machine representation of it is some 2.6GB in length – the size of a Blu-ray DVD.

The total entity – the human genome – is a string of C-G and A-T base pairs. The exact “reference” structure, given the way in which strands are structured and subsections decoded, was first successfully concluded in 2003. Its accuracy has improved regularly since, as more DNA samples have been analysed down the years.

A sequencing machine will typically read short lengths of DNA chopped up into pieces (a random pile, like separate pieces of a jigsaw), and by comparison against a known reference genome gradually piece together which bit fits where; there are known ‘start’ and ‘end’ segment patterns along the way. To add a bit of complexity, a chopped read may get scanned backwards, so it takes a lot of compute effort to piece a DNA sample back into what it would look like if we were able to read it uninterrupted from beginning to end.
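A toy sketch of that matching step – placing a short read against a reference string, trying both the read as-is and its reverse complement (sequences invented; real aligners cope with read errors, repeats and vastly more data than this):

```python
# A toy sketch of the matching step: place a short read against a reference string,
# trying both the read as-is and its reverse complement. Sequences are invented;
# real aligners cope with read errors, repeats and vastly more data than this.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(read):
    return read.translate(COMPLEMENT)[::-1]

def locate(read, reference):
    for candidate, orientation in ((read, "forward"), (reverse_complement(read), "reverse")):
        position = reference.find(candidate)
        if position != -1:
            return position, orientation
    return None

reference = "GATTACAGGATTTCCGATCGGA"
print(locate("TTTCC", reference))   # (10, 'forward')
print(locate("AATCC", reference))   # (7, 'reverse') - matched via its reverse complement
```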

At the time of writing (July 2017), we’re up to version 38 of the reference human genome. 23andMe currently use version 37 for their data to surface inherited medical traits. Most of the DNA sampling industry trace family history reliably using version 36, and hence most exports to work with common DNA databases automatically “downgrade” to that version for best consistency.

DNA Structure

DNA has 46 sections (known as chromosomes); 23 of them come from your birth father, 23 from your birth mother. While all humans share over 99% commonality, the differences make every one of us (identical twins apart) statistically unique.

The cost to sample your own DNA – or that of a relative – is these days in the range of £79-£149. The primary one looking for inherited medical traits is 23andMe. The biggest volume for family tree use is AncestryDNA. That said, there are other vendors such as Family Tree DNA (FTDNA) and MyHeritage that also offer low cost testing kits.

The AncestryDNA database has some 4 million DNA samples to match against, 23andMe over 1 million. The one annoyance is that you can’t export your own data from either of these two and then insert it into the other for matching purposes (neither has import capabilities). However, all the major vendors do allow exports, so you can upload your data from AncestryDNA or 23andMe into FTDNA, MyHeritage and the industry-leading cross-platform GEDmatch DNA database very simply.

Exports create a ZIP file. With FTDNA, MyHeritage and GEDmatch, you request an import, and these prompt for the name of that ZIP file itself; you have no need to break it open first at all.

On receipt of the testing kit, register the code on the provided sample bottle on their website. Just avoid eating/drinking for 30 minutes, spit into the provided tube up to the level mark, seal, put back in the box, seal it and pop it in a postbox. Results will follow in your account on their website in 2-4 weeks.

Family Tree matching

Once you receive your results, Ancestry and 23andMe will give you details of any suggested family matches on their own databases. The primary warning here is that matches will be against your birth mother and whoever made her pregnant; given the historical unavailability of effective birth control and the secrecy of adoption processes, this has been known to surface unexpected home truths. Relatives then trace up and down the family tree from those two reference points. A quick gander at self-help forums on social media can be entertaining, or a litany of horror stories – alongside others of raw delight. Take care, and be ready for the unexpected:

My first social media experience was seeing someone confirm a doctor as her birth father. Her introductory note to him said that he may remember her Mum, as she used to be his nursing assistant.

Another was to a man who, once identified, admitted to knowing her birth mother in his earlier years – but said it couldn’t be him “as he’d never make love with someone that ugly”.

Outside of those, there are fairly frequent outright denials questioning the reliability of the science behind DNA testing, none of which stand up to any factual scrutiny. But there are also stories of delight from all parties when long lost, separated or adopted kids locate, and successfully reconnect with, one or both birth parents and their families.

Loading into other databases, such as GEDmatch

In order to escape the confines of vendor specific DNA databases, you can export data from almost any of the common DNA databases and reload the resulting ZIP file into GEDmatch. Once imported, there’s quite a range of analysis tools sitting behind a fairly clunky user interface.

The key discovery tool is the “one to many” listing, which compares your DNA against everyone else’s in the GEDmatch database – and lists partial matches in order of closeness to your own data. It does this using a unit of measure called “centiMorgans”, abbreviated “cM”. Segments that show long exact matches are totted up, giving the total proportion of DNA you share. If you matched yourself or an identical twin, you’d match a total of circa 6800cM. Half your DNA comes from each birth parent, so they’d show as circa 3400cM. From your grandparents, half again. As your family tree extends both upwards and sideways (to uncles, aunts, cousins, their kids, etc), the numbers dilute by roughly half at each step; you’ll likely be into the thousands of potential matches 4 or 5 steps away from your own data.

If you want to surface birth parent, child, sibling, half sibling, uncle, aunt, niece, nephew, grandparent and grandchild relationships reliably, then only matches of greater than 1300cM are likely to have statistical significance. Anything lower than that is an increasingly difficult struggle to fill out a family tree, usually pursued by asking other family members to get their DNA tested; it is fairly common for GEDmatch to give you details (including email addresses) of your 1-2,000 closest matches, sorted in descending order of closeness for you.
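The halving is easy to see with a back-of-envelope calculation (illustrative averages only – real matches vary widely around these figures):

```python
# A back-of-envelope sketch of the halving described above: starting from ~6800 cM
# for an exact match, the expected shared total roughly halves with each step away
# through the tree. Real matches vary widely around these illustrative averages.
SELF_MATCH_CM = 6800

relationships = {
    "parent / child": 1,
    "grandparent / aunt / uncle / half sibling": 2,
    "first cousin": 3,
    "second cousin": 5,
}

for label, halvings in relationships.items():
    expected = SELF_MATCH_CM / (2 ** halvings)
    print(f"{label:>42}: ~{expected:.0f} cM")
```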

As one example from GEDmatch, the highlighted line shows a match against one of the subject’s parents (their screen name and email address cropped off this picture):

GEDmatch parent

There are more advanced techniques to use a Chromosome browser to pinpoint whether a match comes down a male line or not (to help understand which side of the tree relationships a match is more likely to reside on), but these are currently outside my own knowledge (and current personal need).

Future – take care

One of the central tenets of the insurance industry is to scale societal costs equitably across a large base of folks who may, at random, have to take benefits from the funding pool. To specifically not prejudice anyone whose DNA may give indications of inherited risks or pre-conditions that may otherwise jeopardise their inclusion in cost effective health insurance or medical coverage.

Current UK law specifically makes it illegal for any commercial company or healthcare enterprise to solicit data, including DNA samples, where such provision may prejudice the financial cost, or service provision, to the owner of that data. Hence, please exercise due care with your DNA data, and with any entity that can associate that data with you as a uniquely identifiable individual. Wherever possible, only have that data stored in locations in which local laws, and the organisations holding your data, carry due weight or agreed safe harbour provisions.

Country/Federal Law Enforcement DNA records.

The largest DNA databases in many countries are held, and administered, for police and criminal justice use. A combination of crime scene samples, DNA of known convicted individuals, as well as samples to help locate missing people. The big issue at the time of writing is that there’s no ability to volunteer any submission for matching against missing person or police held samples, even though those data sets are fairly huge.

Access to such data assets is jealously guarded, and there is no current capability to volunteer your own readings for potential matches to be exposed to any case officer; intervention is at the discretion of the police, and they usually run their own custom sampling process and lab work. Personally, I think that’s a great shame, particularly for individuals searching for a missing relative and seeking to help enquiries should their data identify a match at some stage.

I’d personally gladly volunteer if there were appropriate safeguards to keep my physical identity well away from any third party organisation; only to bring the match to the attention of a case officer, and to leave any feedback to interested relatives only at their professional discretion.

I’d propose that any matches over 1300 cM (CentiMorgans) get fed back to both parties where possible, or at least allow cases to get closed. That would surface birth parent, child, sibling, half sibling, uncle, aunt, niece, nephew, grandparent and grandchild relationships reliably.

At the moment, police typically won’t take volunteer samples unless a missing person is vulnerable. Unfortunately not yet for tracing purposes.

Come join in – £99 is all you need to start

Whether for medical traits knowledge, or to help round out your family trees, now is a good time to get involved cost effectively. Ancestry currently add £20 postage to their £79 testing kit, hence £99 total. 23andMe do ancestry matching, Ethnicity and medical analyses too for £149 or so all in. However, Superdrug are currently selling their remaining stock of 23andMe testing kits (bought when the US dollar rate was better than it now is) for £99. So – quick, before stock runs out!

Either will permit you to load the raw data, once analysed, onto FTDNA, MyHeritage and GEDmatch when done too.

Never a better time to join in.

The Next Explosion – the Eyes have it

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

One of the early lessons you pick up looking at product lifecycles is that some people hold out on buying any new technology product or service longer than everyone else. You make it past the techies, the visionaries, the early majority, the late majority and finally meet the laggards at the very right of the diagram (PDF version here). The normal way of selling at that end of the bell curve is to embed your product in something else; the person who swore they’d never buy a microprocessor unknowingly has one inside the controls of their microwave, or 50-100 ticking away in their car.

In 2016, Google started releasing access to its Vision API. They had been routinely using their own neural networks for several years; one typical application was taking the video footage from their Google Maps Street View cars, and correlating house numbers from the video footage onto GPS locations within each street. They also trained models to pick out objects in photographs, and to annotate a picture with a description of its contents – without any human interaction. They have begun an effort to do likewise in describing the story contained in hundreds of thousands of YouTube videos.

One example was to ask it to differentiate muffins and dogs:
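By way of illustration, a sketch of asking the Vision API to label a photo, assuming the google-cloud-vision Python client library (the filename is invented, and exact class names have shifted a little between library versions):

```python
# A sketch of asking the Vision API to label an image, assuming the
# google-cloud-vision Python client library and credentials set up via
# GOOGLE_APPLICATION_CREDENTIALS. The filename is invented; exact class names
# have shifted a little between library versions.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("muffin-or-chihuahua.jpg", "rb") as image_file:
    image = vision.Image(content=image_file.read())

# Ask for the most likely labels describing the picture's contents.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```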

This it does with aplomb, usually with much better than human performance. So, what’s next?

One notable time in natural history was the explosion in the number of species on earth that occurred in the Cambrian period, some 534 million years ago. This was the time when life forms appear to have first developed useful eyes, which led to an arms race between predators and prey. Eyes everywhere, and brains very sensitive to signals that come that way; if something or someone looks like they’re staring at you, sirens in your consciousness will be at full volume.

Once a neural network is taught (you show it 1000s of images, and tell it which contain what, then it works out a model to fit), the resulting learning can be loaded down into a small device. It usually then needs no further training or connection to a bigger computer nor cloud service. It can just sit there, and report back what it sees, when it sees it; the target of the message can be a person or a computer program anywhere else.

While Google have been doing the heavy lifting on building the learning models in the cloud, Apple have slipped in with their own Core ML model format – a sort of PDF for machine learning models. They then use the Graphics Processing Units on their iPhone and iPad devices to run the resulting models on the user’s device. They also have their ARKit libraries (as in “Augmented Reality”) to sense surfaces and boundaries live through the embedded camera – and to superimpose objects in the field of view.

With iOS 11 coming in the autumn, any handwritten notes get automatically OCR’d and indexed – and added to local search. When a document on your desk is photographed from an angle, it can automatically flatten it to look like a hi-res scan of the original – which you can then annotate. There are probably many similar features that will be in place by the time the new iPhone models arrive in September/October.

However, that’s the tip of the iceberg. When I drive out of the car park in the local shopping centre here, the barrier automatically rises, given that the person holding the ticket issued against my car number plate has already paid. And I guess we’re going to see a Cambrian explosion as inexpensive “eyes” get embedded in everything around us, in our service.

With that, one example of what Amazon are experimenting with in their “Amazon Go” shop in Seattle. Every visitor a shoplifter: https://youtu.be/NrmMk1Myrxc

Lots more to follow.

PS: as a footnote, an example of drawing a ruler on a real object. This is three weeks after ARKit was released. Next: personalised shoe and clothes measurements, and mail order supply to size: http://www.madewitharkit.com/post/162250399073/another-ar-measurement-app-demo-this-time

Danger, Will Robinson, Danger

One thing that bemused the hell out of me – as a software guy visiting prospective PC dealers in 1983 – was our account manager for the North of the UK. On arrival at a new prospective reseller, he would take a tape measure out and measure the distance between the nearest Director’s car parking slot and their front door. He’d then repeat the exercise for the nearest visitor’s car parking spot and the front door. And then walk in for the meeting to discuss their application to resell our range of personal computers.

If the Director’s slot was closer to the door than the visitor’s slot, the meeting was a very short one. The positioning betrayed senior management’s attitude to customers, which in countless cases I saw in other regions (eventually) translate into that company’s success (or otherwise). A brilliant and simple leading indicator.

One of the other red flags when companies became successful was when their own HQ building became ostentatious. I always wonder if the leaders can manage to retain their focus on their customers at the same time as building these things. Like Apple in a magazine today:

Apple HQ

And then Salesforce, with the now tallest building in San Francisco:

Salesforce Tower

I do sincerely hope the focus on customers remains in place, and that no customers are upset by where each company is channelling its profits. I also remember a telco equipment salesperson turning up at his largest customer in his new Ferrari, and their reaction of disgust that unhinged the long-term relationship; he should have left it at home and driven in using something more routine.

Modesty and Frugality are usually a better leading indicator of delivering good value to folks buying from you. As are all the little things that demonstrate that the success of the customer is your primary motivation.