Quality Journalism – UK Oxymoron?

I’m writing this on the day that John McCain died in the USA – and the most compelling eulogy came from Barack Obama. It’s rare right now for people to disagree fervently with each other’s views yet still hold each other in the greatest respect.

In reading “The Secret Barrister”, you come away with a data-filled summary of the continued poor state of Westminster politics: of successive abuses of the justice system by politicians of all colours, who prioritise “PR” on everything to mask poor financial choices with sound bites, while quietly robbing us all blind of values we hold dear. And I’m sure Chris Grayling will receive few Christmas cards from members of the judiciary, based on their experience of him documented in this book’s pages.

Politics is only half the story here. I often wonder where quality journalism disappeared to. There are good pockets in the London Review of Books, and in the work on the Panama Papers by the ICIJ – but where else is the catalogue of abuses systematically documented in a data-based, consumable way? Where is the media with the same bite as “World in Action” back in the day? It appears completely AWOL.

One of the really curious things about Westminster is that MPs are required to abide by “The Code of Conduct for Members of Parliament”. If you go down to item 6, it reads: “Members have a general duty to act in the interests of the nation as a whole; and a special duty to their constituents”. Now, tell me how the whip system squares with that. On the face of it, it runs profoundly against the very code in which our democracy is enshrined.

There appears to be no published data source on the number of votes taken, and whether they were “free” votes or subject to one-, two- or three-line instructions from each party’s whips’ office. Fundamentally, how many votes were allowed to rest on the conscience of MPs exercised freely, and to what extent were they herded like sheep through the division lobbies?

My gut suggests our current government is probably imposing divisive whips more often than any UK government in our history, not least as the future interests of our country appear to be driven by a very small proportion of the representatives there. The bare complexion of this should be easily apparent from the numbers and some simple comparative graphs – so, who’s keeping count?
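
For illustration, here’s a minimal sketch of the counting exercise I have in mind, assuming a hypothetical CSV export of Commons division records with a whip_status column; as far as I know, no such consolidated public dataset exists – which is rather the point.

```python
# A minimal sketch of "who's keeping count?", assuming a hypothetical CSV of
# Commons division records with columns: date, whip_status
# (free, one_line, two_line, three_line). The file and its schema are my invention.
import pandas as pd
import matplotlib.pyplot as plt

divisions = pd.read_csv("divisions.csv", parse_dates=["date"])

# Count free versus whipped votes, year by year, for a simple comparative graph.
by_year = (
    divisions
    .assign(year=divisions["date"].dt.year)
    .groupby(["year", "whip_status"])
    .size()
    .unstack(fill_value=0)
)
print(by_year)
by_year.plot(kind="bar", stacked=True, title="Free vs whipped divisions per year")
plt.show()
```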

Democracy this isn’t. And the lack of quality journalism in the UK is heavily complicit in its disappearance.

WTF – Tim O’Reilly – Lightbulbs On!

What's the Future - Tim O'Reilly

Best read of the year, not just on high technology, but for a reasoned explanation of the political events of the last two years, both in the UK and the USA. I can relate it straight back to some of the prescient statements made by Jeff Bezos about Amazon’s “Day 1” disciplines: the best defence against an organisation’s path to oblivion being:

  1. customer obsession
  2. a skeptical view of proxies
  3. the eager adoption of external trends, and
  4. high-velocity decision making

Things go off course when interests divide in a zero-sum way between the different customer groups that you serve, and when proxies indicating “success” diverge from a clearly defined “desired outcome”.

The normal path is to start with your “customer” and define what “success” looks like for them in what you do: a clear understanding of the desired outcome. Then come the measures to track progress toward that goal, the path you follow to get there (adjusting as you go), and a frequent review that each step still serves the intended objective.

Fake news on social media, finance industry meltdowns, and unfettered slavery to “the market” and to “shareholder value” have all been central to recent political events in both the UK and the USA. Politicians of all colours were complicit in letting proxies for “success” divorce a fair share of both wealth and future prospects from the vast majority of the customers they were elected to serve. In the face of that, the electorate in the UK bit back – as they did for Trump in the US too.

Part 3 of the book, entitled “A World Ruled by Algorithms” – pages 153-252 – is brilliant writing on our current state and injustices. Part 4 (pages 255-350) entitled “It’s up to us” maps a path to brighter times for us and our descendants.

Tim says:

The barriers to fresh thinking are even higher in politics than in business. The Overton Window, a term introduced by Joseph P. Overton of the Mackinac Center for Public Policy, says that an idea’s political viability falls within a window framing the range of policies considered politically acceptable in the current climate of public opinion. There are ideas that a politician simply cannot recommend without being considered too extreme to gain or keep public office.

In the 2016 US presidential election, Donald Trump didn’t just push the Overton Window far to the right, he shattered it, making statement after statement that would have been disqualifying for any previous candidate. Fortunately, once the window has come unstuck, it is possible to move it in radically new directions.

He then says that when such things happen, as they did at the time of the Great Depression, the scene is set to do radical things to change course for the ultimate greater good. So things may well get better on the other side of Trump’s outrageous pandering to the excesses of the right, and indeed after we see the result of our electorate’s division over Brexit played out over the next 18 months.

One final thing that struck me was how one political “hot potato” issue involving Uber in Taiwan attracted very divided and extreme opinions, split 50/50 – but nevertheless got reconciled to everyone’s satisfaction in the end. This was done using a technique called Principal Component Analysis (PCA) and a piece of software called “Pol.is”, which lets folks publish assertions, vote on them, and see how the filter bubbles evolve through many iterations over a four-week period. “I think Passenger Liability Insurance should be mandatory for riders on UberX private vehicles” (heavily split votes, 33% at both ends of the spectrum) evolved to 95% agreeing with “The Government should leverage this opportunity to challenge the taxi industry to improve their management and quality control system, so that drivers and riders would enjoy the same quality service as Uber”. The licensing authority in Taipei duly followed up for the citizens and all sides of that industry.
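
Out of curiosity, here’s a minimal sketch of that style of analysis, assuming a hypothetical votes matrix (one row per participant, one column per statement; +1 agree, -1 disagree, 0 pass). Pol.is’s actual pipeline will differ in detail.

```python
# A minimal sketch of Pol.is-style opinion clustering on a hypothetical votes
# matrix: one row per participant, one column per statement,
# entries +1 (agree), -1 (disagree), 0 (pass). Not Pol.is's own implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(500, 40))  # 500 participants, 40 statements

# Project participants onto the two main axes of disagreement, then group them
# to visualise the "filter bubbles".
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Statements that score highly in *every* cluster are the consensus candidates.
for cluster in np.unique(labels):
    support = (votes[labels == cluster] == 1).mean(axis=0)
    top = support.argsort()[-3:][::-1]
    print(f"cluster {cluster}: statements with highest agreement {top}")
```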

I wonder what the Brexit “demand on parliament” would have looked like if we’d followed that process, and whether any of our politicians could have encapsulated the benefits to us all on either side of that question. I suspect we’d have a much clearer picture than we do right now.

In summary, a superb book. Highly recommended.

Danger, Will Robinson, Danger

One thing that bemused the hell out of me – as a software guy visiting prospective PC dealers in 1983 – was our account manager for the north of the UK. On arrival at a new prospective reseller, he would take out a tape measure and measure the distance between the nearest director’s car parking slot and their front door. He’d then repeat the exercise for the nearest visitor’s car parking spot and the front door. And then walk in for the meeting to discuss their application to resell our range of personal computers.

If the director’s slot was closer to the door than the visitor’s slot, the meeting was a very short one. The positioning betrayed senior management’s attitude to customers, which in countless cases I saw in other regions (eventually) translate into that company’s success, or otherwise. A brilliant and simple leading indicator.

One of the other red flags, once companies became successful, was their own HQ building becoming ostentatious. I always wonder whether the leaders can manage to retain their focus on their customers at the same time as building these things. Like Apple, in a magazine today:

Apple HQ

And then Salesforce, with the now tallest building in San Francisco:

Salesforce Tower

I do sincerely hope the focus on customers remains in place, and that none of those customers are upset by where each company is channelling its profits. I also remember a telco equipment salesperson turning up at his largest customer in his new Ferrari, and the reaction of disgust that unhinged their long-term relationship; he should have left it at home and driven in using something more routine.

Modesty and Frugality are usually a better leading indicator of delivering good value to folks buying from you. As are all the little things that demonstrate that the success of the customer is your primary motivation.

IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these reflect changes well beyond the industry’s move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in the use of open source software across the industry over the last 10 years. Your own mileage, or that of your organisation, may vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one-page site saying what the result will look like) was for 3p. My Digital Ocean server instance (which runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per-software-instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that’s “where the experts to manage infrastructure and its security live” at scale.
  3. The war stage of infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, SoftLayer among IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out, open source NoSQL (key:value and document-orientated) databases, and to components folks can wire together (a minimal sharding sketch follows this list). Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single-page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes, none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers: FOG in the Ruby ecosystem, for example. Cloud Foundry (termed Bluemix at IBM) is executing particularly well in large enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “serverless apps”, typified by Amazon Lambda (a minimal handler sketch follows this list). There are actually two different entities in the mix: one lets you provide code and pay per invocation against external events; the other scales (or contracts) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and, to a lesser extent, AWS are packaging up API calls for various core services and machine learning facilities (a face-detection sketch follows this list). For example, I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, the face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to answer “does this picture show a cookie?” or “extract the invoice number off this image of an invoice”. There is an excellent 35-minute discussion of the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far, only for internal intranet-type apps, not ones exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against axes of product life cycle and value chains, and to suss the likely movement of components (which also tells you where to apply Six Sigma and where to apply agile techniques within the same organisation). This is more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
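
On item 4, here’s a minimal sketch of spreading load by key range, assuming pymongo and an already-running sharded cluster; the hostname, database and shard key are hypothetical.

```python
# A minimal sketch for item 4, assuming pymongo and an existing sharded MongoDB
# cluster fronted by a mongos router; names and shard key are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.com:27017")

# Index the intended shard key, enable sharding on the database, then shard the
# collection on that key so reads and writes spread across shards by key range.
# Each shard is normally a replica set, giving on-the-fly replica backups.
client.telemetry.samples.create_index([("sensor_id", 1), ("timestamp", 1)])
client.admin.command("enableSharding", "telemetry")
client.admin.command(
    "shardCollection",
    "telemetry.samples",
    key={"sensor_id": 1, "timestamp": 1},
)
```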
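
On item 7, a minimal sketch of the “pay per invocation against external events” model, using AWS Lambda’s Python handler convention; the event fields are hypothetical.

```python
# A minimal sketch for item 7: a function invoked once per event under AWS
# Lambda's Python handler convention. The event fields here are hypothetical.
import json

def lambda_handler(event, context):
    # No servers to provision: the platform runs this per triggering event
    # (an API call, a queue message, a file landing in S3) and bills per invocation.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```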
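
On item 8, a minimal sketch of the face-detection call, assuming the google-cloud-vision Python client library with credentials already configured; the filename is hypothetical.

```python
# A minimal sketch for item 8, assuming the google-cloud-vision client library
# and application credentials already set up; "photo.jpg" is hypothetical.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Each detected face carries a bounding polygon and likelihoods for expressions
# such as joy - i.e. "is this face smiling?".
response = client.face_detection(image=image)
for face in response.face_annotations:
    print(face.bounding_poly, face.joy_likelihood)
```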

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays of security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Reinventing Healthcare

Comparison of US and UK healthcare costs per capita

A lot of the political effort in the UK appears to circle around a government justifying handing off parts of our NHS delivery assets to private enterprises, despite the ultimate model (that of the USA healthcare industry) costing significantly more per capita. Outside of politicians lining their own pockets in the future, it would be easy to conclude that few would benefit from such changes; the moves appear to be both economically farcical and firmly against the public interest. I’ve not yet heard any articulation of a view that indicates otherwise. But less well discussed are the changes that are coming, and where the NHS is uniquely positioned to pivot into the future.

There is significant work to capture the DNA of individuals, but genomes are fairly static over time. It is estimated that there are 10^9 data points per individual, but there are many other data points – which change along a long timeline – that could be even more significant in helping to diagnose unwanted conditions in a timely fashion, and so flip the industry almost exclusively toward preventative, and away from symptom-based, healthcare.

I think I was on the right track with an interest in microbiome testing services. The gotcha is that commercial services like uBiome, and public research like the American (and British) Gut Project, are one-shot affairs. A stool, skin or other location sample takes circa 6,000 hours of CPU wall time to reconstruct the 16S rRNA gene sequences of a statistically valid population profile. I thought I could get that to a super-fast turnaround using excess capacity (spot instances – spare compute power you can bid to consume when available) at one or more of the large cloud vendors, and then build a data asset that could use machine learning techniques to spot patterns in people who later get afflicted by an undesirable or life-threatening medical condition.
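
For what it’s worth, here’s a minimal sketch of bidding for that excess capacity, assuming boto3 and a machine image pre-loaded with the 16S pipeline; the AMI ID, instance type and price cap are all hypothetical.

```python
# A minimal sketch of bidding for spare EC2 capacity with boto3. The AMI,
# instance type and price cap are hypothetical; long, restartable batch jobs
# like 16S rRNA reconstruction suit spot capacity well.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.request_spot_instances(
    SpotPrice="0.25",            # maximum price per instance-hour, in USD
    InstanceCount=4,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # image with the analysis pipeline baked in
        "InstanceType": "c5.4xlarge",
    },
)
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```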

The primary weakness in the plan is that you can’t suss the way a train is travelling by examining a photograph taken looking down at a static railway line. You need to keep the source sample data (not just a summary) and measure at regular intervals; an incidence of salmonella can routinely knock out 30% of your Microbiome population inside 3 days before it recovers. The profile also flexes wildly based on what you eat and other physiological factors.

The other weakness is that your DNA and your microbiome samples are not the full story. There are many other potential leading indicators of your propensity to become ill that we’re not even sampling. Which of our 10^18 different data points are significant over time, and how regularly we should be sampled, remain open questions.

Experience in the USA is that the one environment where regular preventative check-ups of otherwise healthy individuals take place – dentistry – has managed to lower the cost of service delivery by 10% at a time when the rest of the health industry has seen 30-40% cost increases.

So, what are the measures that should be taken, how regularly and how can we keep the source data in a way that allows researchers to employ machine learning techniques to expose the patterns toward future ill-health? There was a good discussion this week on the A16Z Podcast on this very subject with Jeffrey Kaditz of Q Bio. If you have a spare 30 minutes, I thoroughly recommend a listen: https://soundcloud.com/a16z/health-data-feedback-loop-q-bio-kaditz.

That said, my own savings are such that I have to refocus my efforts back in the IT industry, and my microbiome testing service business plan is mothballed. The technology to sample a big enough population regularly is not yet deployable in a cost-effective fashion, but it will come. When it does, the NHS will be uniquely positioned to pivot into the sampling and preventative future of healthcare unhindered.

Politicians and the NHS: the missing question

 

The inevitable electioneering has begun, with all the political soundbites simplified into headline spend on the NHS. That is probably the most gross injustice of all.

This is an industry lined up for the most fundamental seeds of change: genomics, microbiomes, ubiquitous connected sensors, and a growing realisation that the human body is already the most sophisticated of survival machines. There is also the realisation that weight and overeating are a root cause of downstream problems, with a food industry getting a free ride to pump unsuitable chemicals into the food chain without suffering financial consequences for the damage caused – especially at the “low cost” end of the dietary spectrum.

Politicians, pharma and food lobbyists are not our friends. In the final analysis, we’re all being handed a disservice because those leading us are not asking the fundamental question about health service delivery, and working back from there.

That question is: “What business are we in?”.

As a starter for 10, I recommend this excellent post on Medium: here.

Why did Digital Equipment Corporation Fail?

Digital Equipment Corp Logo

I answered that question on Quora, but thought I’d put a copy of my (long) answer here. Just one ex-employee’s perception, from prisoner badge #50734!

I managed the UK Software Products Group in my last two years at DEC, and had a 17-year term there (1976-1993). There were a wide variety of components that contributed to the final state, though failing to understand industry trends across the company was not among them. The below is a personal view, and I’m happy for any ex-colleagues to chip in with their own perspectives.

The original growth from 1957 through to January 1983 was based on discrete, industry-based product lines. In combination, they placed demand on central engineering, manufacturing and sales to support their own business objectives, and generally ran the show to dominate their own industry segments. For example: Laboratory Data Products, Graphic Arts (newspapers!), Commercial OEM, Tech OEM, MDC (Manufacturing, Distribution and Control), ESG (Engineering Systems Group), Medical Systems and so forth.

The finance function ran as a separate reporting entity with management controls that went top to bottom with little ability for product lines to unduly behave in any way except for the overall corporate good.

The whole lot started to develop into a chaotic internal economy, albeit one that meshed together well and covered any cracks. The product lines were removed in January 1983, followed by the first ever stall in a hitherto unblemished stock market performance. The company became much more centrally planned, built around a one company, one strategy, one architecture focus (the cynics in the company paraphrased this as one company, one egg, one basket). All focus went on getting VAX widely deployed, given its then-unique ability to run exactly the same binaries from board products all the way up to high-end, close to mainframe-class processors.

The most senior leadership started to go past its sell-by date in the late 1980s. While the semiconductor teams were, as expected, pumping out impressive $300 VAX silicon, elements of the leadership became fixated on the date on which DEC would finally overtake IBM in size. They made some poor technology investment calls in trying to extend VAX into IBM 3090-scale territory, seemingly oblivious to being nibbled from underneath at the low end.

Areas of the company were frustrated at the high-end focus and the inability of the Executive to give clear guidance on the next-generation processor requirements. They kept being flip-flopped between 32-bit and 64-bit designs, and brought out MIPS-based workstations in a vain attempt to at least stay competitive performance-wise until the new architecture was ready to ship.

Bob Palmer came to prominence by stopping the flip-flopping and having the semiconductor team ship a 64-bit design that outperformed every other chip in the industry by 50-100% (which remained the case for nearly 10 years). He then got put in charge of worldwide manufacturing, increasing productivity by 4x in 2 years.

The company needed to increase its volume radically or reduce headcount to align competitively with the market as it went horizontally orientated (previously, IBM, DEC and the BUNCH – Burroughs, Univac, NCR, CDC and Honeywell were all vertically orientated).

Ken Olsen got deposed by the board around 1992, and they took the rather unusual step of reaching out to Bob Palmer, who was then a direct report of SVP Jack Smith, to lead the company.

While there was some early promise, the company focussed on a small number of areas (PCs, Components & Peripherals, Systems, Consumer Process Manufacturing, Discrete Manufacturing and Defence, and I think Health). The operating practice was that any leaders who missed their targets two quarters in a row were fired, and the salesforce was given commission targets for the first time.

The whole thing degenerated from there, such that the losses made in Palmer’s reign exceeded the retained profits made under Olsen from 1957 to 1982. He sued Intel for patent infringement, which ended with Intel settling – a settlement that included the purchase of the semiconductor operations in Hudson. He likewise sued Microsoft, which ended with Microsoft lending DEC money to get its field force trained up on Microsoft products (impressive ju-jitsu on their part). Then he sold the whole company to Compaq.

Some of the books about DEC include comments by C. Gordon Bell (the technical god of DEC’s great years) which will not endear him to a place on Bob Palmer’s Christmas card list (but they are words many of us would agree with).

There was also a spoof Harvard Business Review article, written in 1989 or so by George Van Treeck (a widely respected employee on the Marketing Notes conference maintained on the company network), which outlined the death of Digital. Brilliant writing (I still have a copy in my files here), and he guessed the stages with impressive accuracy way back then. His words are probably a better summary than this, but until then, I trust this will give one person’s perception.

It is still, without doubt, the finest company I’ve ever had the privilege to work for.

Rest in peace, Ken Olsen. You did a great job, and the world is much better for your life’s work.

Starting with the end in mind: IT Management Heat vs Light

A very good place to start

One source of constant bemusement to me is the habit of intelligent people to pee in the industry market research bathwater, and then to pay handsomely to drink a hybrid mix of the result collected across their peers.

Perhaps this was betrayed by an early experience of one research company coming in to present to the management of the vendor I was working at, and finding, during the rehearsal, their conjecture that sales of specific machine sizes had badly dipped in the preceding quarter. Except they hadn’t; we’d had the biggest growth in sales of the highlighted machines in our history in that timeframe. When I mentioned my concern, the appropriate slides were corrected in short order, and no doubt the receiving audience was impressed with the skill of an analysis that built a forecast on an amazingly accurate, perceptive (and otherwise publicly unreported) recent history.

I’ve been doubly nervous ever since – always relating back to the old “Deep Throat” hint given in “All the President’s Men”: in every case, “follow the money”.

Earlier today, I was having some banter on one of the boards of “The Motley Fool” which referenced the ways certain institutions were imposing measures on staff – well away from any useful business use that positively supported better results for their customers. Well, except for providing sound bites to politicians. I can sense that in education, in some elements of health provision, and rather fundamentally in the police service. I even did a drains-up some time ago that reflected on the way UK police are measured, and tried to trace the rationale back to its source – which was a senior politician imploring them to reduce crime; blog post here. The subtlety of this was rather lost; the only control placed in their hands was that of compiling the associated statistics, and making their behaviours on the ground align with that data collection, rather than going back to the core principles of why they were there and what their customers wanted of them.

Jeff Bezos (CEO of Amazon) has the right idea; everything they do aligns with the ultimate end customer, and everything else works back from there. Competition is something to be conscious of, but only to the extent of understanding how you can serve your own customers better. Something that’s also the central model that W. Edwards Deming used to help transform Japanese Industry, and in being disciplined to methodically improve “the system” without unnecessary distractions. Distractions which are extremely apparent to anyone who’s been subjected to his “Red Beads” experiment. But the central task is always “To start with the end in mind”.

With that, I saw a post by Simon Wardley today where Gartner had released the results of a survey on the “Top 10 Challenges for I&O Leaders”, which I guess is some analogue of what used to be referred to as “CIOs”. Most of it felt to me like a herd mentality – and divorced from the sort of issues I’d have expected to be present. In fact, a complete re-enactment of the sort of dialogue Simon had mentioned before.

Simon then cited the first five things he thought they should be focussed on (around corrective action), leaving the remaining “positive action” points to be mapped out upon that foundation – on the assumption that those actions would likely be unique to each organisation performing the initial framing exercise.

Simon’s excellent blog post is: My list vs Gartner, shortly followed by On Capabilities. I think it’s a great read. My only regret is that, while I understand his model (I think!), I’ve not yet worked through the final piece between his strategic map (for any business I’m active in) and articulating a pithy, prioritised list of actions based on the diagram created. And I wish he’d get the bandwidth to turn his Wardley Maps into a book.

Until then, I recommend his Bits & Pieces blog; it’s a quality read that deserves good prominence in every IT manager’s (and IT vendor’s!) RSS feed.

CloudKit – now that’s how to do a secure Database for users

Data Breach Hand Brick Wall Computer

One of the big controversies here relates to the appetite of the current UK government to release personal data with only the most basic understanding of what constitutes personally identifiable information. The lessons are there in history, but I fear that, without knowing the context of the infamous AOL data leak, we are destined to repeat it. With it goes personal information that we typically hold close to our chests, and which may otherwise cause personal, social or (in the final analysis) financial prejudice.

When plans were first announced to release NHS records to third parties, and in the absence of what I thought were appropriate controls, I sought (with a heavy heart) to opt out of sharing my medical history with any third party – and instructed my GP accordingly. I’d gladly share everything with satisfactory controls in place (medical research is really important and should be encouraged), but I felt that insufficient care was being exercised. That said, we’re more than happy for my wife’s Genome to be stored in the USA by 23andMe – a company that demonstrably satisfied our privacy concerns.

It therefore came as quite a shock to find that a report, highlighting which third parties had already been granted access to health data with government-mandated approval, ran to a total of 459 data releases to 160 organisations (last time I looked, that was 47 pages of PDF). See this and the associated PDFs on that page. Given the level of controls, I felt this was outrageous. Likewise the plans to release HMRC-related personal financial data, again with soothing words from ministers who, given the NHS data implications, appear to have no empathy for the gross injustices likely to result from their actions.

The simple fact is that what constitutes individually identifiable information needs to be framed not only by which data fields are shared with a third party, but also by the resulting application of that data by the processing party – not least if there is any suggestion that the data is to be combined with other data sources, which could in turn triangulate back to make seemingly “anonymous” records traceable to a specific individual. Which is precisely what happened in the AOL data leak example cited.

With that, and on a somewhat unrelated technical/programmer-orientated journey, I set out to learn how Apple had architected its new CloudKit API, announced this last week. This articulates the way in which applications running on your iPhone handset, iPad or Mac have a trusted way of accessing personal data stored (and synchronised between all of a user’s Apple devices) “in the Cloud”.

The central identifier that Apple associate with you, as a customer, is your Apple ID – typically an email address. In the Cloud, they give you access to two databases on their cloud infrastructure: one public, the other private. However, the second you try to create or access a table in either, the API accepts your iCloud identity and spits back a hash unique to the combination of your identity and the application on the iPhone asking to process that data. Different application, different hash. And everyone’s data is in there, so the design immediately prevents any triangulation of disparate data that could trace back to uniquely identify a single user.
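
To illustrate why that per-application hash blocks triangulation, here’s a conceptual sketch (my own illustration, not Apple’s implementation): the same user identity yields a different but stable identifier for each requesting application, so records can’t be joined across apps.

```python
# A conceptual sketch - my illustration, not Apple's implementation - of why a
# per-application hash of the user identity prevents cross-app triangulation.
import hashlib
import hmac

SERVER_SECRET = b"held-server-side-only"   # hypothetical; never leaves the cloud service

def scoped_user_id(icloud_id: str, app_bundle_id: str) -> str:
    """Stable identifier unique to (user, app): useless for joining records
    held by different applications back to one individual."""
    message = f"{icloud_id}:{app_bundle_id}".encode()
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()

print(scoped_user_id("jane@example.com", "com.example.health"))
print(scoped_user_id("jane@example.com", "com.example.calendar"))  # same user, different hash
```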

Apple take this one stage further, in that any application asking for any personally identifiable data (like an email address, age, postcode, etc.) from any table has to have access to that information specifically approved by the handset’s owner; no explicit permission (on a per-application basis), no data.

The data maintained by Apple – personal information, health data (with HealthKit), details of the home automation kit in your house (with HomeKit), and not least your credit card data stored to buy music, books and apps – makes full use of this security model. And they’ve dogfooded it so that third-party application providers use exactly the same model, and the same back-end infrastructure. Which is also very, very inexpensive (data volumes go into petabytes before you spend much money).

There are still some nuances I need to work out. I’m used to SQL databases and to some NoSQL database structures (I’m MongoDB certified), but it’s not clear, based on looking at the way the database works, which engine is being used behind the scenes. It appears to be a key:value store with some garbage collection mechanics that look like a hybrid file system. It also has the capability to store “subscriptions”, so if specific criteria appear in the data store, specific messages can be dispatched to the user’s devices over the network automatically. Hence things like new diary appointments in a calendar can be synced across a user’s iPhone, iPad and Mac transparently, without the need for each to waste battery power polling the large database on the server, waiting for events that are likely to arrive infrequently.

The final piece of the puzzle I’ve not worked out yet is, if you already have a large database (say of the calories, carbs, protein, fat and weights of thousands of foods in a nutrition database), how you’d get that loaded into an instance of the public database in Apple’s Cloud. Other than writing custom loading code, of course!

That apart, I’m really impressed with how Apple have designed the datastore to ensure the security of users’ personal data, and to ensure an inability to triangulate data between information stored by different applications. And that if any personally identifiable data is requested by an application, the user of the handset has to specifically authorise its disclosure for that application only. And without the app being able to sense whether the data is present at all ahead of that release permission (so, for example, if a health app wants to gain access to your blood sampling data, it doesn’t know whether that data is even present before permission is given – so the app can’t draw inferences about your probably having diabetes, which would be possible if it could deduce that you were recording glucose readings at all).

In summary, impressive design and a model that deserves our total respect. The more difficult job will be to get the same mindset in the folks looking to release our most personal data that we shared privately with our public sector servants. They owe us nothing less.

Email: is 20% getting through really a success?

Baseball Throw

Over the weekend, I sent an email out to a lot of my contacts on LinkedIn. Because of the number of folks I’m connected to, I elected to subscribe to Mailchimp, the email distribution service recommended by most of the experts I engage with in the WordPress community. I might be sad, but it’s been fascinating to watch the stats roll in after sending that email.

In terms of proportion of all my emails successfully delivered, that looks fine:

Emails Delivered to LinkedIn Contacts

However, 2 days after the email was sent, readership of my email message – with the subject line including the recipient’s Christian name, to avoid one of the main traps that spam gets caught in – is:

Emails Seen and UnOpened

Eh, pardon? Only 47.4% of the emails I sent out were read at all? At first blush, that sounds really low to amateur me. I would have expected some of that for folks on annual leave, but still not as low as less than half of all messages sent out. In terms of device types used to read the email:

Desktop vs Mobile Email Receipt

which I guess isn’t surprising, given the big volume of readers who looked at the email in the first hour after it was sent (at around 9:00pm on Saturday night). There was another smaller peak between 7am and 8am on Sunday morning, and then fairly level tides with small surges around working-day arrival, lunch and departure times. In terms of devices used:

Devices used to read Email

However, Mailchimp insert a health warning, saying that iOS devices handshake the email comms reliably, whereas other services are a lot more fickle – so the number of Apple devices may tend to be over-reported. That said, it reinforces the point I made in a post a few days ago about the importance of keeping your email subject line down to 35 characters – to ensure it’s fully displayed on an iPhone.

All in, I was still shocked by the apparent number of emails successfully delivered but not opened at all. Thinking it was bad, I checked and found that Mailchimp reckon the average open rate for voluntarily opted-in folks in Computers and Electronics (which is my main industry) is 17.8%, with click-throughs to provided content around the 1.9% mark. My email click-through rate is running at 2.9%. So my email achieved over 2.5x the industry norm for readership and roughly 50% above normal click-through rates – though these are predominantly people I’ve enjoyed working with in the past, and who voluntarily connected to me down the years.

So, sending an email looks to be as bad at getting through as expecting to see tweets from a specific person in your Twitter stream. I know some of my SMS traffic to my wife goes awry occasionally, and I’m told Snapchat is one of the few messaging services that routinely gives you an indication that your message did get through and was viewed.

Getting guaranteed attention for a communication is hence a much longer journey than I expected, probably (like newspaper ads of old) relying on repeat “opportunities to see”. But don’t panic – I’m not sending the email again to that list; it was a one-time exercise.

This is probably a dose of the obvious to most people, but the proportion of emails lost in action – when I always thought it a reliable distribution mechanism – remains a big learning for me.