IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these trends reflect changes well past the industry move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. if anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (that runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per software instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, both for economic reasons and because that’s where the experts who manage infrastructure and its security at scale live.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, Softlayer among IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source, NoSQL (key:value, document-orientated) databases, and components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code so it can run on multiple cloud providers: FOG in the Ruby ecosystem, and CloudFoundry (termed BlueMix in IBM), which is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon Lambda. There are actually two different entities in the mix: one lets you provide code and pay per invocation against external events; the other scales (or contracts) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and to a lesser extent AWS are packaging up API calls for various core services and machine learning facilities. E.g. I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) on the picture, the face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal intranet-type apps, not ones exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against axes of product life cycle and value chains, and to suss the likely movement of components (which also tells you where to apply Six Sigma and where to apply Agile techniques within the same organisation). It’s all more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
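To make point 4 above more concrete, here is a minimal sketch of the admin commands that turn on range-based sharding in a MongoDB cluster. The database, collection and shard key names (shop, orders, customerId) are illustrative only, and a live cluster would need a running mongos router:

```python
# Sketch of enabling range-based sharding on a MongoDB collection.
# Names ("shop", "orders", "customerId") are invented for illustration.

def sharding_commands(db_name, coll_name, shard_key_field):
    """Build the admin command documents that enable sharding."""
    return [
        # Step 1: mark the database as shardable.
        ("enableSharding", {"enableSharding": db_name}),
        # Step 2: spread the collection across shards on key ranges;
        # 1 = ascending range-based shard key (spreads the read load).
        ("shardCollection", {
            "shardCollection": f"{db_name}.{coll_name}",
            "key": {shard_key_field: 1},
        }),
    ]

if __name__ == "__main__":
    # Against a live cluster you would run each document via pymongo:
    #   from pymongo import MongoClient
    #   admin = MongoClient("mongodb://localhost:27017").admin
    #   for _, doc in sharding_commands("shop", "orders", "customerId"):
    #       admin.command(doc)
    for name, doc in sharding_commands("shop", "orders", "customerId"):
        print(name, doc)
```

Replica sets for the on-the-fly backups mentioned above are configured separately, per shard.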
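The pay-per-invocation model in point 7 boils down to writing a single handler function that the platform calls for each external event. A minimal Lambda-style sketch in Python follows; the event shape here is an invented example, not a real AWS event:

```python
# Minimal sketch of a pay-per-invocation ("serverless") function in the
# style of AWS Lambda: the platform invokes handler(event, context) once
# per external event, and you pay per invocation. The event fields used
# here are illustrative assumptions, not a real AWS event shape.

import json

def handler(event, context=None):
    """React to one event; no server to provision or manage."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Locally you can simulate the platform by invoking it directly:
print(handler({"name": "Ian"}))
```

The second entity mentioned above (real-time scale out and in) is handled by the platform itself: it runs as many concurrent copies of the handler as the event rate demands.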
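And for point 8, a sketch of the JSON payload that the Vision API's images:annotate REST endpoint (https://vision.googleapis.com/v1/images:annotate) expects for face detection. Building the payload needs no credentials, though actually POSTing it would:

```python
# Sketch of the request body for Google Vision API face detection via
# the images:annotate REST endpoint. The two-byte stand-in for a JPEG
# is illustrative; a real call needs genuine image bytes and an API key.

import base64
import json

def face_detection_request(jpeg_bytes, max_faces=10):
    """Build the images:annotate payload asking for FACE_DETECTION."""
    return {
        "requests": [{
            # Image content is sent base64-encoded inside the JSON.
            "image": {"content": base64.b64encode(jpeg_bytes).decode("ascii")},
            "features": [{"type": "FACE_DETECTION", "maxResults": max_faces}],
        }]
    }

payload = face_detection_request(b"\xff\xd8")  # JPEG magic bytes only
print(json.dumps(payload)[:80])
```

The response carries, per face, a bounding polygon, landmark positions (including the nose tip) and likelihood fields such as joyLikelihood – the "smiling or not" signal mentioned above.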

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, getting hunches of what may be afflicting her health, and this leading to a succession of “oh, that didn’t work – try this instead” visits for several weeks. I just wonder how much cost could be squeezed out of the process – and lack of secondary conditions occurring – if the root causes were much easier to identify reliably. I then wonder if there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi enabled, low end Bluetooth sensor hubs aka the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake) every day since June 2002 – a precursor to the future where devices can keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self service health. My Apple Watch has a year’s worth of heart rate data. But what other signals, if they were available, would make root cause identification possible for a far wider variety of health conditions?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence complete Genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide – the A/T or C/G base pair combinations) runs to 3GB. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases – hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. It’s notable that there are Amazon Web Services and Google employees participating in this effort.
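As a back-of-envelope check on that 3GB figure – and a hint at why researchers pack genomes more tightly in practice – one byte per base over roughly 3 billion base pairs gives 3GB, while a 2-bit encoding (A/C/G/T each fit in 2 bits) needs a quarter of that:

```python
# Storage arithmetic for one human genome: ~3 billion base pairs.
# One byte per nucleotide vs a packed 2-bit-per-nucleotide encoding.

BASE_PAIRS = 3_000_000_000

one_byte_per_base = BASE_PAIRS               # bytes at 1 byte/base
two_bits_per_base = BASE_PAIRS * 2 // 8      # bytes at 2 bits/base

print(f"1 byte/base : {one_byte_per_base / 1e9:.2f} GB")   # 3.00 GB
print(f"2 bits/base : {two_bits_per_base / 1e9:.2f} GB")   # 0.75 GB
```

Either way, multiplied by hundreds of thousands of individuals, the corpus is far too large to keep shipping around – which is exactly why the GA4GH distributed query API exists.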

However, I wonder if we’re missing a big and potentially just as important data asset; that of the profile of bacteria that everyone is dependent on. We are each home to approx. 10 trillion human cells among the 100 trillion microbial cells in and on our own bodies; you are 90% not you.

While our human DNA is 99.9% identical to that of any person next to us, the profiles of our MicroBiomes are typically only 10% similar; age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever changing set of weather conditions that either influence our health, or are leading indicators of something being wrong, or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently from decisions upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays rake thin; the composition of microbes in their gut is a significant factor.

Meanwhile, costs of DNA sequencing and compute power have dropped to the point where an analysis of our microbe ecosystems that cost some $100M a decade ago costs around $100 today. It should continue on that downward path to a level where regular personal sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed discoveries back to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on Human DNA Genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of fewer than 4,000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct wholesale analyses – needed to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at the University of California, San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at King’s College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing then compute batch process.

In parallel, VC funded company uBiome provide the sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other 4 area samples we’ve submitted), though to the best of my knowledge they are not currently sharing the captured data. That said, the analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over and under represented compared to all samples they’ve received to date), but they do not currently communicate any health related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than 83% of folks they’ve sampled to date, which is an analogue for being more healthy than most; those bacteria that are over represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella over a three day period (the green in the top of this picture; x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population are displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this will give an indication of the relative species population growth rates from otherwise static data.

I am not aware of anyone offering a faster turnaround service, nor one that can map several successively time gapped samples, let alone one that can convey health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals in isolating the indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed to provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome issue to be completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end to end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to enhance/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are happy as Larry in the nitrogen of the gut, but perish with exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “Chime”) pattern matching code take advantage of cloud spot instances, including Hadoop or Spark, to speed the turnaround time from sequencing reads to the resulting species type:volume value pairs?
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?

New Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousand puzzles into one pile and then having to reconstruct them – and count the number of each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis to end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
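The final type:quantity step of that pipeline is, at heart, a giant distributed word count: once each short read has been matched to a known 16S reference, you just reduce per-read species assignments to species:count pairs. A single-machine sketch in plain Python (species names invented; at HiSeq scale the same reduce would run on Spark or Hadoop spot instances):

```python
# Reduce step sketched in miniature: per-read species classifications
# in, species:count value pairs out. Species names are invented for
# illustration; real pipelines classify billions of reads this way.

from collections import Counter

def species_counts(classified_reads):
    """Reduce per-read classifications to type:quantity value pairs."""
    return Counter(classified_reads)

reads = ["Bacteroides", "Faecalibacterium", "Bacteroides",
         "Akkermansia", "Bacteroides"]
for species, count in species_counts(reads).most_common():
    print(species, count)
```

The expensive part is not this reduce but the upstream matching of each read against the reference database – which is why spot instance capacity is attractive: the work is embarrassingly parallel and tolerant of interruption.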

Nanopore sequencing is another route, which has much longer read lengths but is much more error prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits their utility for analysing bacteria samples in our use case. Much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence for the time being, we’re reliant on trying to make an industrial scale, lab based batch process turn around data as fast we are able – but having a network accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is working out how to fund the build of the service, to map its likely cost profile as technology/process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing short cuts. Dogs have been trained to be amazingly accurate at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket sized spectrometer can assess the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we get to only eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is some of the common terminology between MicroBiome conditions and terms I once heard used in Chinese Herbal Medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago): the concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions), then consumption or application of specific plant matter that puts the body’s bacterial population back to operating norms.

Lingzhi Mushroom

Wild mushroom “Lingzhi” in China: cultivated in the far east, found to reduce Obesity

We’ve started to discover that some of the plants and herbs used in Chinese Medicine do have symbiotic effects on your bacterial population, for the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

Crossing the Chasm on One Page of A4 … and Wardley Maps

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

The core essence of most management books I read can be boiled down to a single sheet of A4. There have also been a few big mistakes along the way – what were considered at the time to be seminal works, like Tom Peters’ “In Search of Excellence”, that in retrospect could be summarised as “even the most successful companies possess DNA that also breeds the seeds of their own destruction”.

I have much simpler business dynamics mapped out that I can explain to fast track employees — and demonstrate — inside an hour; there are usually four graphs that, once drawn, will betray the dynamics (or points of failure) afflicting any business. A very useful lesson I learnt from Microsoft when I used to distribute their software. But I digress.

Among my many business books, I thought the insights in Geoffrey Moore’s book “Crossing the Chasm” were brilliant – and useful for helping grow some of the product businesses I’ve run. The only gotcha is that I found myself constantly cross referencing different parts of the book when trying to build a go-to-market plan for DEC Alpha AXP Servers (my first use of his work) back in the mid-1990s – the time I worked for one of DEC’s Distributors.

So, suitably bored when my wife was watching J.R. Ewing being mischievous in the first UK run of “Dallas” on TV, I sat on the living room floor and penned this one page summary of the book’s major points. Just click it to download the PDF with my compliments. Or watch the author himself describe the model in under 14 minutes at an O’Reilly Strata Conference here. Or alternatively, go buy the latest edition of his book: Crossing the Chasm

My PA (when I ran Marketing Services at Demon Internet) redrew my hand-drawn sheet of A4 into the Microsoft Publisher document that output the one page PDF, and that I’ve referred to ever since. If you want a copy of the source file, please let me know – drop a request to: ian.waring@software-enabled.com.

That said, I’ve been far more inspired by the recent work of Simon Wardley. He effectively breaks a service into its individual components and positions each on a 2D map; the x-axis dictates the stage of the component’s evolution as it moves through a Chasm-style lifecycle, while the y-axis symbolises the value chain, from raw materials to end user experience. You then place all the individual components, and their linkages as part of an end-to-end service, on the result. Having seen the landscape in this map form, you can then assess how each component evolves/moves from custom build to commodity status over time. Even the newest components evolve from chaotic genesis (where standards are not defined and/or features are incomplete) to becoming well understood utilities in time.

The result highlights which service components need Agile, fast iterating discovery and which are becoming industrialised, six-sigma commodities. And once you see your map, you can focus teams and their measures on the important changes needed without breeding any contradictory or conflict-ridden behaviours. You end up with a well understood map and – once you overlay competitive offerings – can also assess the positions of other organisations that you may be competing with.
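A toy rendition of that point can make it concrete: give each component an evolution score (0 = chaotic genesis, 1 = commodity); components early in evolution want Agile, fast-iterating discovery, while late ones want Six Sigma industrialisation. The components, scores and the 0.5 threshold below are invented purely for illustration:

```python
# Toy Wardley-style triage: map each service component's evolution
# stage (0 = genesis, 1 = commodity) to the management method it needs.
# Component names, scores and the 0.5 threshold are illustrative only.

components = {
    "custom recommendation engine": 0.15,
    "web front end":                0.55,
    "payment processing":           0.80,
    "compute infrastructure":       0.95,
}

def method_for(evolution, threshold=0.5):
    """Early-evolution components need discovery; late ones, rigour."""
    return "Agile" if evolution < threshold else "Six Sigma"

for name, evo in sorted(components.items(), key=lambda kv: kv[1]):
    print(f"{name:30s} evolution={evo:.2f} -> {method_for(evo)}")
```

In a real map the y-axis (value chain position) and the linkages between components matter just as much; this sketch only captures the evolution axis and the resulting split of team methods.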

The only gotcha in all of this is that Simon hasn’t written the book yet. However, I notice he’s just provided a summary of his work on his Bits n Pieces blog yesterday – see: Wardley Maps – set of useful Posts. That will keep anyone out of mischief for a very long time, but the end result is a well articulated, compelling strategy and the basis for a well thought out go to market plan.

In the meantime, the basics of what is and isn’t working, and sussing out the important things to focus on, are core skills I can bring to bear for any software, channel-based or internet related business. I’m also technically literate enough to drag the supporting data out of IT systems for you where needed. Whether your business is an Internet-based startup or an established B2C or B2B Enterprise focussed IT business, I’d be delighted to assist.

Apple Watch: My first 48 hours

To relate my first impressions of my Apple Watch (folks keep asking): I bought the Stainless Steel one with a Classic Black Strap.

The experience in the Apple Store was a bit too focussed on changing the clock face design; the experience of using it for real – accepting the default face to start with – is (so far) much more pleasant. But take it off the charger, put it on, and you get:

Apple Watch PIN Challenge

Tap in your pin, then the watch face is there:

Apple Watch Clock Face

There’s actually a small (virtual) red/blue LED just above the “60” atop the clock – red if a notification has come in, turning into a blue padlock if you still need to enter your PIN, but otherwise what you see here. London Time, 9 degrees centigrade, 26th day of the current month, and my next calendar appointment underneath.

For notifications it feels are deserving of my attention, it not only lights the LED (which I only get to see if I flick my wrist up to see the watch face), but also goes tap-tap-tap on my wrist. This optionally also sounds a small warning, but that’s something I switched off pretty early on. The taptic hint is nice, quiet and quite subtle.

Most of the set-up for apps and settings is done on the Apple iPhone you have paired up to the watch. Apps reside on the phone, and ones you already have that can talk to your watch are listed already. You can then select which ones you want to appear on the watch’s application screen, and a subset you want to have as “glances” for faster access. The structure looks something like this:

Apple Watch No Notifications Apple Watch Clock Face

Apple Watch Heart Rate Apple Watch Local Weather Amazon Stock Quote Apple Watch Dark Sky

 

Hence, swipe down from the top, you see the notification stream, swipe back up, you’re back to the clock face. Swipe up from the bottom, you get the last “glance” you looked at. In my case, I was every now and then seeing how my (long term buy and hold) shares in Amazon were doing after they announced the size of their Web Services division. The currently selected glance stays in place for next time I swipe up unless I leave the screen having moved along that row.

If I swipe from left to right, or right to left, I step over different “glances”. These behave like swiping between icon screens on an iPhone or iPad; if you want more detail, you can click on them to invoke the matching application. I have around 12 of these in place at the moment. Once done, swipe back up, and back to the clock face again. After around 6 seconds, the screen blacks out – until the next time you swing the watch face back into view, at which point it lights up again. Works well.

You’ll see it’s monitoring my heart rate, and measuring my movement. But in the meantime, if I want to call or message someone, I can hit the small button on the side and get a list of 12 commonly called friends:

Apple Watch Friends

Move the crown around, click the picture, and I can call or iMessage them directly. Text or voice clip. Yes, directly on the watch, even if my iPhone is upstairs or atop the cookery books in the kitchen; it has a microphone and a speaker, and works from anywhere over local WiFi. I can even see who is phoning me and take their calls on the watch.

If I need to message anyone else, I can press the crown button in and summon Siri; the accuracy of Siri is remarkable now. One of my sons sent an iMessage to me when I was sitting outside the Pharmacy in Boots, and I gave a full sentence reply (verbally) then told it to send – 100% accurately despite me largely whispering into the watch on my wrist. Must have looked strange.

There are applications on the watch, but these are probably a less used edge case; in my case, the view on my watch looks just like the layout I’ve given in the iPhone Watch app:

Apple Watch Applications

So, I can jump in to invoke apps that aren’t set as glances. My only surprise so far was finding that Facebook haven’t yet released their Watch or Messenger apps, though Instagram (which they also own) is there already. Then, tap tap on my wrist to tell me Paula Radcliffe had just completed her last London Marathon:

BBC News Paula Radcliffe

and a bit later:

Everton 3 Man Utd 0

Oh dear, what a shame, how sad (smirk – Aston Villa fan typing). But if there’s a flurry of notifications, and you just want to clear the lot off in one fell swoop, just hard press the screen and…

Clear All Notifications

Tap the X and zap, all gone.

There are a myriad of useful apps; I have Dark Sky (which gives you a hyper local forecast of any impending rain), City Mapper (which helps direct you around London on all the different forms of transport available), Uber, and several others. They are there in the application icons, but also enabled from the Watch app on my phone (Apps, then the subset selected as Glances):

Ians Watch Apps Ians Watch Glances

With that, tap tap on my wrist:

Apple Watch Stand Up!

Hmmm – I’ve been sitting for too long, so time to get off my arse. It will also assess my exercise in the day and give me some targets to achieve – which it’ll then display for later admiration. Or disgust.

There is more to come. I can already call a Uber taxi directly from the watch. The BBC News Glance rotates the few top stories if selected. Folks in the USA can already use it to pay at any NFC cash terminal with a single click (if the watch comes off your wrist, it senses this and will insist on a PIN then). Twitter gives notifications and has a glance that reports the top trend hashtag when viewed.

So far, the battery is only getting from 100% down to 30% in regular use from 6:00am in the morning until 11:30pm at night, so looking good. Boy, those Amazon shares are going up; that’ll pay for my watch many times over:

Watch on Arm

Overall, I’m impressed so far and very happy with it – I’m sure it’s the start of a world where software recedes into a world of simple notifications and responses to same. And I’m sure Jane (my wife) will want one soon. Just have to wean her off her desire for the £10,000+ gold one to match her gold coloured MacBook.

Apple Watch: it’s Disney’s MagicBand, for the world outside the theme park

  

A 500 word article that rings true to me. It’ll also be the central hub for all the health sensors that will spring to prominence in the coming months. I’ll put my order in next week. In the meantime, read some wise words here

Apple Watch: what makes it special


Based on what I’ve seen discussed – and alleged – ahead of Monday’s announcement, the following are the differences people will see with this device.

  1. Notifications. Inbound message or piece of useful context? It will let you know by tapping gently on your arm. Early users are already reporting how their phone – which until now got reviewed whenever a notification arrived – now stays in their pocket most of the time.
  2. Glances. Google Now on Android puts useful contextual information on “cards”. Hence when you pass a bus stop, up pops the associated next bus timetable. Walk close to an airport checkin desk, up pops your boarding pass. Apple guidelines say that a useful app should communicate its raison d’être within 10 seconds – hence a ‘glance’.
  3. Siri. The watch puts a Bluetooth microphone on your wrist, and Apple APIs can feed speech into text based forms straight away. And you may have noticed that iMessage already allows you to send a short burst of audio to a chosen person or group. Dick Tracy’s watch comes to life.
  4. Brevity. Just like Twitter, but even more focussed. There isn’t the screen real estate to hold superfluous information, so developers need to agonise on what is needed and useful, and to zone out unnecessary context. That should give back more time to the wearer.
  5. Car Keys. House Keys. Password Device. There's one device, and probably an app, for each of those functions. And it can probably start bleating if someone tries to walk off with your mobile handset.
  6. Stand up! There are already quotes from Apple CEO Tim Cook saying that sitting down for excessively long periods of time is "the new cancer". To that end, you can set the device to nag you into moving if you appear not to be doing so regularly enough.
  7. Accuracy. It knows where you are (with your phone) and can set the time. The iPhone adjusts after a long flight based on the identification of the first cell tower it gets a mobile signal from on landing. And day to day, it’ll keep your clock always accurate.
  8. Payments. Watch to card reader, click, paid. We’ll need the roll out of Apple Pay this side of the Atlantic to realise this piece.

It is likely to evolve into a standalone Bluetooth hub for all the sensors around and on you – and that's where its impact will be felt most in time.

With the above in mind, I think the Apple Watch will be another big success story. The main question is how they'll price the expen$ive one when its technology will evolve by leaps and bounds every couple of years. I just wonder if a subscription to possessing a Rolex-priced watch is a business model being considered.

We’ll know this time tomorrow. And my wife has already taken a shine to the expensive model, based purely on its looks with a red leather strap. Better start saving… And in the meantime, a few sample screenshots to pore over:

Hooked, health markets but the mind is wandering… to pooh and data privacy

Hooked by Nir Eyal

One of the things I learnt many years ago was that there are four fundamentals to increasing profits in any business. You sell:

  • More Products (or Services)
  • to More People
  • More Often
  • At higher unit profit (which is higher price, lower cost, or both)

and with that, four simple Tableau graphs against a timeline could expose the business fundamentals explaining good growth, or the core reason for declining revenue. It could also expose early warning signs, where a small number of large transactions hid an evolving surprise – like the volume of buying customers trending relentlessly down, while the revenue numbers appeared to be flying okay.
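As a sketch of the idea (the data model and field names here are entirely my own, hypothetical ones), rolling a list of transactions up into the four levers per period is only a few lines of Python:

```python
from collections import defaultdict

def four_levers(transactions):
    """Roll up (period, customer, units, revenue, cost) rows into the
    four levers per period: products sold, buying customers, purchase
    frequency, and unit profit."""
    agg = defaultdict(lambda: {"units": 0, "orders": 0, "revenue": 0.0,
                               "cost": 0.0, "buyers": set()})
    for period, customer, units, revenue, cost in transactions:
        row = agg[period]
        row["units"] += units          # more products
        row["orders"] += 1
        row["revenue"] += revenue
        row["cost"] += cost
        row["buyers"].add(customer)    # more people
    return {
        period: {
            "units": row["units"],
            "buyers": len(row["buyers"]),
            # more often:
            "orders_per_buyer": row["orders"] / len(row["buyers"]),
            # at higher unit profit:
            "profit_per_unit": (row["revenue"] - row["cost"]) / row["units"],
        }
        for period, row in agg.items()
    }
```

Plot each of the four outputs against the timeline and the surprise described above – buyer counts trending relentlessly down while revenue appears fine – becomes visible at a glance.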

Another dimension is that a Brand equates to trust, and that consistency and predictability of the product or service plays a big part to retain that trust.

Later on, a more controversial view was that there are two fundamental business models for any business: that of a healer or a dealer. One sells an effective one-shot fix to a customer need, while the other survives by engineering a customer's dependency to keep them returning.

With that, I sometimes agonise over what the future of health services delivery is. On the one hand, politicians' verbal jousts over funding, and attempts to punt services over to private enterprise – in several cases to providers following the economic rent (dealer) model found in the American market, whose business model, at face value, needs a per capita expense that no sane person would want to replicate given the efficiency we have already. On the other hand, a realisation that the market is subject to radical disruption, through a combination of:

  • An ever better informed, educated customer base
  • A realisation that just being overweight is a root cause of many adverse trends
  • Genomics
  • Microbiome Analysis
  • The upcoming ubiquity of sensors that can monitor all our vitals

With that, i’ve started to read “Hooked” by Nir Eyal, which is all about the psychology of engineering habit forming products (and services). The thing in the back of my mind is how to encourage the owner (like me) of a smart watch, fitness device or glucose monitor to fundamentally remove my need to enter my food intake every day – a habit i’ve maintained for 12.5 years so far.

The primary challenge is that, for most people, there is little newsworthy data that comes out of this exercise most of the time, and the habit is difficult to reinforce without useful news or actionable data. Some of the current gadget vendors try to drive use through step-count league tables you can run with family and friends (I've done this with relatives in West London, Southampton, Tucson Arizona and Melbourne Australia; that challenge finished after a week and has yet to be repeated).

My mind started to wander back to the challenge of disrupting the health market, and how a watch could form a part. Could its sensors measure my fat, protein and carb intake (which is the end result of my food diary data collection, along with weekly weight measures)? Could I build a service that would be a data asset to help disrupt health service delivery? How do I suss Microbiome changes – which normally requires analysis of a stool sample?

With that, I start to think I'm analysing this the wrong way around. I remember an analysis some time back where a researcher assessed the extent of drug (mis)use in specific neighbourhoods by monitoring the make-up of chemical flows in networks of sewers. So, rather than put sensors on people's wrists (and only see a subset of data), is there a place for technology in sewer pipes instead? If Microbiomes and the genetic makeup of our output survive relatively intact, then sampling at strategic points of the distribution network would give us a pretty good dataset. Not least as DNA sequencing could allow the original owner (source) of output to be connected back to any pearls of wisdom that could be analysed or inferred from their contributions, even if the drop-off points happened at home, work or elsewhere.

Hmmm. Water companies and Big Data.

Think i’ll park that and get on with the book.

iOS devices, PreSchool Kids and lessons from Africa

Ruby Jane Waring

This is Ruby, our two and a half year old granddaughter and owner of her own iPad Mini (she is also probably the youngest Apple shareholder out there, as part of her Junior ISA). She was fairly adept with her parents' iPhones and iPads around the house months before she was two, albeit curious as to why there was no "Skip Ad" option on the TV at home (try as she did).

Her staple diet is YouTube (primarily Peppa Pig, Ben & Holly's Magic Kingdom, and more recently Thomas the Tank Engine and Alphablocks). This weekend, there was a section on BBC Click that showed some primary school kids in Malawi, each armed with iPads and green headphones, engrossed in maths exercises. The focus then moved to a primary school in Nottingham using the same application built for the kids in Malawi, translated into English, with the kids there similarly (and silently) engrossed.

I found the associated apps (search for author “onebillion” and you should see five of them) and installed each on her iPad Mini:

  • Count to 10
  • Count to 20
  • Maths, age 3-5
  • Maths, age 4-6
  • 2, 5 and 10 (multiplication)

The icons look like this, red to the left of the diagonal and with a white tick mark, orange background on the rest; the Malawi versions have more green in them in place of orange.

Countto10icon

We put her onto the English version of “Count to 10”, tapped in her name, then handed it over to her.

Instructions Count to 10

She tapped on the rabbit waving to her, and off she went. Add frogs to the island (one tap for each):

Count to 10 Add Frogs

Then told to tap one to remove it, then click the arrow:

Leave one frog on Island

Ding! Instant feedback that seemed to please her. She smiled, gave us a thumbs up, then clicked the arrow for the next exercise:

Add birds to the wire

which was to add three birds to the wire. Press the arrow, ding! Smile and thumbs up, and she just kept doing exercise after exercise off her own bat.

A bit later on, the exercise was telling her to put a certain number of objects in each box – with the number to place specified as a number above the box. Unprompted, she was getting all those correct. Even when a box had ‘0’ above it, and she duly left that box empty. And then the next exercise, when she was asked to count the number of trees, and drag one of the numbers “0”, “1”, “2”, “3” or “4” to a box before pressing the arrow. Much to our surprise (more like chins on the floor), she was correctly associating each digit with the number of objects. Unprompted.

I had to email her Mum at that stage to ask if she’d been taught to recognise numbers already by the character shapes. Her Mum blamed it on her Cbeebies consumption alone.

When we returned her home after her weekend stay, the first thing she insisted on showing both her Mother and her Father was how good she was at this game. Fired it up herself, and showed them both independently.

So, kudos to the authors of this app – not only teaching kids in Malawi, but very appealing to kids here too. Having been one of the contributors to its Kickstarter funding, I just wonder how long it will be before she starts building programs in ScratchJr (though that's aimed at budding programmers aged 5-7). It's there on her iPad already when she wants to try it – and she has her Scratch literate (and Minecraft guru) 10 year old brother on hand to assist if needed.

I think buying her her own iPad Mini (largely because when she stayed weekends, I never got my own one back) was a great investment. I hope it continues to provide an outlet for her wonder of the world around her in the years ahead.

 

Yo! Minimalist Notifications, API and the Internet of Things

Yo Logo

Thought it was a joke, but 4 hours of coding resulting in $1m of VC funding, at an estimated $10M company valuation, raised quite a few eyebrows. The Yo! project team have now released their API, and with it some possibilities – over and above the initial ability to just say "Yo!" to a friend. At the time he provided some of the funds, John Borthwick of Betaworks said that there is a future in delivering binary status updates, or even commands to objects to throw an on/off switch remotely (blog post here). The first green shoots are now appearing.

The main enhancement is the ability to carry a payload with the Yo!, such as a URL. Hence your Yo!, when received, can be used to invoke an application or web page with a bookmark already put in place. That facilitates a notification, which is effectively guaranteed to have arrived, to say “look at this”. Probably extensible to all sorts of other tasks.

The other big change is the provision of an API, which allows anyone to create a Yo! list of people to notify against a defined name. So, in theory, I could create a virtual user called "IANWARING-SIMPLICITY-SELLS" and publicise that to my blog audience. If any user wants to subscribe, they just send a "Yo!" to that user, and bingo, they are subscribed and it is listed (as another contact) on their phone handset. If I then release a new blog post, I can use a couple of lines of code to send the notification to the whole subscriber base, carrying the URL of the new post; one key press to view. If anyone wants to unsubscribe, they just drop the username on their handset, and the subscriber list updates.
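A minimal sketch of that broadcast step, here in Python rather than Javascript or PHP. It assumes Yo's API shape at the time of writing (a `yoall` endpoint taking an `api_token` and an optional `link`); the endpoint, parameter names and the token below should all be treated as assumptions rather than gospel:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Assumed broadcast endpoint from Yo's published API; verify before use.
YO_ALL_ENDPOINT = "https://api.justyo.co/yoall/"

def build_broadcast(api_token, link=None):
    """Build the POST request that Yo!'s every subscriber of the account
    owning api_token, optionally carrying a URL payload."""
    fields = {"api_token": api_token}
    if link:
        fields["link"] = link  # e.g. the URL of the new blog post
    return Request(YO_ALL_ENDPOINT, data=urlencode(fields).encode("ascii"))

def broadcast(api_token, link=None):
    # Actually fire the notification (needs a real token and network access).
    return urlopen(build_broadcast(api_token, link))
```

So announcing a new post to every subscriber really is one call – something like `broadcast("HYPOTHETICAL-TOKEN", "https://example.com/new-post")`.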

Other applications described include:

  • Getting a Yo! when a FedEx package is on its way
  • Getting a Yo! when your favourite sports team scores – "Yo us at ASTONVILLA and we'll Yo you when we score a goal!"
  • Getting a Yo! when someone famous you follow tweets or posts to Instagram
  • Breaking News from a trusted source
  • Tell me when this product comes into stock at my local retailer
  • To see if there are rental bicycles available near to you (it can Yo! you back)
  • You receive a payment on PayPal
  • To be told when it starts raining in a specific town
  • Your stock positions go up or down by a specific percentage
  • Tell me when my wife arrives safely at work, or our kids at their travel destination

but I guess there are other “Internet of Things” applications to switch on home lights, open garage doors, switch on (or turn off) the oven. Or to Yo! you if your front door has opened unexpectedly (carrying a link to the picture of who’s there?). Simple one click subscriptions. So, an extra way to operate Apple HomeKit (which today controls home appliance networks only through Siri voice control).

Early users are showing simple RESTful URLs and HTTP GET/POSTs triggering events via the Yo! API. I've also seen someone say that it will work with CoAP (Constrained Application Protocol), a lightweight protocol stack suitable for use within simple electronic devices.

Hence, notifications that are implemented easily and over which you have total control. Something Apple appear to be anal about, particularly in a future world where you'll be walking past low energy Bluetooth beacons in retail settings every few yards. Your appetite to be handed notifications will degrade quickly with volume if there are virtual attention beggars every few paces. Apple have been locking down access for their iBeacon licensees to limit the chance of this happening.

With the Yo! API comes the first of many notification services (alongside Google Now and Apple's own notification services), and a simple one at that. One that can be mixed with IFTTT (If This Then That), a simple web based logic and task automation service. And which may well be accessible directly from embedded electronics around us.

The one remaining puzzle is how the authors will be able to monetise their work (their main asset is an idea of the type and frequency of notifications you welcome receiving, and that you seek). Still a bit short of Google’s core business (which historically was to monetise purchase intentions) at this stage in Yo!’s development. So, suggestions in the case of Yo! most welcome.

 

Microbiomes and a glimpse to doctors becoming a small niche

Microbiomes, Gut and Spot the Salmonella

When I get up in the morning, I normally follow a path on my iPad through email, Facebook, LinkedIn, Twitter, Google+, Feedly (for my RSS feeds) and Downcast (to update my Podcasts for later listening). This morning, Kevin Kelly served up a comment on Google+ that piqued my interest, and that led to a long voyage of discovery – much to my wife's disgust as I quoted gory details about digestive systems while she was trying to eat her breakfast. He said:

There are 2 reasons this great Quantified Self experiment is so great. One, it shows how important your microbial ecosystem is. Two, it shows how significant DAILY genome sequencing will be.

He then gave a pointer to an article about Microbiomes here.

The Diet Journey

I’ve largely built models based on innocent attempts to lose weight, dating back to late 2000 when I tried the Atkins diet. That largely stalled after 3 weeks and one stone loss. Then fairly liberated in 2002 by a regime at my local gym, when I got introduced (as part of a six week program) to the website of Weight Loss Resources. This got me in the habit of recording my food intake and exercise very precisely, which translated branded foods and weights into daily intake of carbs, protein and fat. That gave me my calorie consumption and nutritional balance, and kept track alongside weekly weight readings. I’ve kept that data flowing now for over 12 years, which continues to this day.

Things i’ve learnt along the way are:

  • Weight loss is heavily dependent on me consuming fewer calories than my Basal Metabolic Rate (BMR), and at the same time keeping energy derived from carbs, protein and fat at a specific balance (50% from carbs, 20% protein, 30% fat)
  • 1g of protein is circa 4.0 Kcals, 1g of carbs around 3.75 Kcals, and fat around 9.0 Kcals.
  • Muscle weighs roughly twice as much as fat for the same volume
  • There is a current fixation at gyms with upping your muscle content at first, nominally to increase your energy burn rate (even at rest)
  • The digestive system is largely first in, first out; protein is largely processed in acidic conditions, and carbs later down the path in alkaline equivalents. Fat is used as part of both processes.
  • There are a wide variety of symbiotic (the opposite of parasitic!) organisms that assist the digestive process from beginning to end
  • Weight loss is both heat and exhaust. Probably other forms of radiation too, given we are all like a light bulb in the infrared spectrum (I always wonder how the SAS manage to deploy small teams in foreign territory and remain, for the most part, undetected)
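The calorie arithmetic in the list above is simple enough to sketch in a few lines of Python (the Kcal-per-gram figures are the ones quoted above; the 50/20/30 split is the target balance I mention):

```python
# Energy content per gram of each macronutrient (figures as quoted above).
KCAL_PER_GRAM = {"carbs": 3.75, "protein": 4.0, "fat": 9.0}

def day_summary(carbs_g, protein_g, fat_g):
    """Total Kcals for a day's intake, plus the percentage energy split
    by macro for comparison against a 50/20/30 carbs/protein/fat target."""
    kcals = {
        "carbs": carbs_g * KCAL_PER_GRAM["carbs"],
        "protein": protein_g * KCAL_PER_GRAM["protein"],
        "fat": fat_g * KCAL_PER_GRAM["fat"],
    }
    total = sum(kcals.values())
    split = {k: round(100 * v / total, 1) for k, v in kcals.items()}
    return total, split
```

Compare the total against your BMR and the split against the target, and you have the daily readout that 12 years of food diary entries boil down to.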

I’ve always harboured a suspicion that taking antibiotics have an indiscriminate bombing effect on the population of microbiomes there to assist you. Likewise the effect of what used to be my habit of drinking (very acidic) Diet Coke. But never seen anyone classify the variety and numbers of Microbiomes, and to track this over time.

The two subjects of the article had the laboratory resources to examine samples of their own saliva and stools, and to map things over time. Fascinating to see what happened when one of them suffered Salmonella (the green in the above picture), and the other got "Delhi Belly" during a trip abroad.

The links around the article led to other articles in National Geographic, including one where the author reported much wider analysis of the Microbiomes found in 60 different people's belly buttons (here) – he had a zoo of 58 different ones in his own. And then to another article where the existence of certain microbiome mutations in the bloodstream was an excellent leading indicator of the presence of cancerous tumours in the individual (here).

Further dips into various Wikipedia articles cited examples of microbiome populations showing up in people suffering from various debilitating illnesses such as ME, Fibromyalgia and Lyme disease, in some instances driving imbalances that cause depression. Separately, that what you eat often has quite an effect in altering the relative sizes of parts of the Microbiome population in short order.

There was another article that suggested new research was going to study the Microbiome Zoo present in people’s armpits, but I thought that an appropriate time to do an exit stage left on my reading. Ugh.

Brain starts to wander again

Later on, I reflected for a while on how I could apply some skills I've got to build up data resources – at least should suitable sensors be able to measure, sample and sequence microbiomes systematically every day. I have the mobile phone programming, NoSQL database deployment and analytics skills. But what if we had sensors that everyone could have on them 24/7 that could track the microbiome zoo that is you (internally – and I guess externally too)? Load the data resources centrally, and I suspect the Wardley Map of what is currently the NHS would change fundamentally.

I also suspect that age-old Chinese medicine will demonstrate its positive effects on further analysis. It was about the only thing that solved my wife's psoriasis on her hands and feet; she was told about the need to balance yin/yang and remove heat to put things back to normal, which was achieved by consumption of various herbs and vegetation. It would have been fascinating to see how the profile of her microbiomes changed during that process.

Sensors

I guess the missing piece is the ability to have sensors that can help both identify and count types of microbiomes on a continuous basis. It looks like a laboratory job at the moment; I wonder if there are other characteristics or conditions that could short-cut the process. Health apps about to appear from Apple and Google initiatives tend to be effective at monitoring steps and heart rate. There looks to be provision for sensing blood glucose levels non-invasively by shining infrared light on certain parts of the skin (the inner elbow is a favourite); meanwhile Google have patented contact lenses that can measure glucose levels in the blood vessels of the wearer's eyes.

The local gym has a Boditrax machine that fires an electrical signal up one foot and senses the signal received back in the other, from which it can report body water, muscle and fat content. Not yet small enough for a mobile phone. And Withings produce scales that can report weight back to the user's handset over Bluetooth (I sometimes wonder if the jarring of the body as you tread could let a handset's sensors deduce approximate weight, but that's for another day).

So, the mission is to see if anyone can produce sensors (or an edible, communicating pill) that can effectively work, in concert with someone's phone and the interwebs, to reliably count and identify biome mixes and store these for future analysis, research or notification purposes. Current research appears to be in monitoring biome populations in:

  1. Oral Cavity
  2. Nasal
  3. Gastrointestinal Organs
  4. Vaginal
  5. Skin

each with their own challenges in providing a representative sample surface sufficient to give regular, consistent and accurate readings – if indeed we can miniaturise or simplify the lab process reliably. The real progress will come when we can do this and large populations can be sampled – and cross referenced with any medical conditions that become apparent in the data provider(s). Skin and the large intestine appear to have the most interesting microbiome profiles to look at.

Long term future

The end result – if done thoroughly – is that the skills and error rates of GP-provided treatment would become largely relegated, just as farm work was in the 19th century (which went from 98% of the population working the land to less than 2% within 100 years).

With that, I think Kevin Kelly is 100% correct in his assessment – that the article shows how significant DAILY genome sequencing will be. So, what do we need to do to automate the process, and make the fruits of its discoveries available to everyone 24/7?

Footnote: there look to be many people attempting to automate subsets of the DNA/RNA identification process. One example highlighted by MIT Technology Review today being this.