IT Trends into 2018 – or the continued delusions of Ian Waring

William Tell the Penguin

I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they listed 13 items!). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three you have a large base of employees in a country without a single payer healthcare system, mired in business model inefficiencies. Getting an operationally efficient pilot running at reasonable scale with internal users in the JV companies, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions on them, it’s difficult to envision a corrective action that Amazon aren’t already applying to themselves. This program is real fox in the hen house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring, via wearable sensors, is the next piece of icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those situations you can quickly distill down to “great technology, but what can we use it for?”. Conferencing – even voice – yes. Shared Kanban flows (Trello), shared Basecamp views, communal use of GitHub, all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; six sigma, doggedly industrialising a process, sits over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march to interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place, and only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user finds themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably appearing to be of a similar age.

The one striking thing I’ve found was talking to a vendor who built a machine learning model that went through IT Helpdesk tickets, instant message and email interaction histories, nominally to prioritise the natural language corpus into a list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was improving FAQ sheets in the first instance. Cue Ian thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!

IoT (Internet of Things). The example provided was tying together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories. And then citing security concerns and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big number analyses from vendors, but little from application perspectives. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets or smart watches. Security devices are a smaller base; embedded simple devices smaller still. I think I’m more excited when sensors get meaningful vision capabilities (listed separately below). Until then, I’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel, or in flour in a mixing bowl, or when it’s the quickest way to enact a desired action – usually away from a keyboard and out of earshot of anyone else. It does liberate my two youngest grandchildren, who are learning to read and write. Those apart, it’s just another UI used occasionally – albeit I’m still in awe of folks that dictate their book writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch. In the meantime, 3D printing. Not my area, but it feels to me like low volume, local production of customised parts, and I’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to the point of use. My dentist 3D prints parts of teeth while patients wait, but otherwise I’ve not had any exposure that I could translate as a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that related how, back in the Cambrian Period, the number of different life form species on earth vastly accelerated once life forms first developed eyes. A combination of cheap camera hardware components, and excellent machine learning Vision APIs, should be transformative. Especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not-present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.
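
To make the “cheap camera plus Vision API” point concrete, here is a minimal sketch of asking a cloud Vision API to describe what’s in a picture. It assumes Google’s google-cloud-vision Python client purely as an example (exact class names vary between library versions), and the filename is made up; other vendors’ vision services follow a similar request/label pattern.

```python
# Hedged sketch: label the contents of an image using a cloud Vision API.
# Assumes the google-cloud-vision client library and valid credentials;
# class names differ slightly between library versions.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("forecourt_camera_frame.jpg", "rb") as f:   # illustrative filename
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")   # e.g. "Vehicle: 0.97"
```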

In the Open Source Software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io’s Broca NLU product, which can build a suggested intent:action Service Catalogue from Natural Language analysis of support tickets, CRM data, instant message and support email content.

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley was in a grave, he’d be spinning in it (but thank god he’s not).

Microservices is about making your organisation’s data and processes available to applications – internally facing, externally facing or both – using web interfaces. You typically work with Apigee (now owned by Google) or 3Scale (owned by Red Hat) to produce a well documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
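
As an illustration of the pattern (not a statement of how Apigee or 3Scale work internally), a microservice is ultimately a small web endpoint exposing one slice of your organisation’s data. A minimal sketch using Flask, with the endpoint path and data entirely made up:

```python
# A minimal sketch of exposing an internal capability as a microservice.
# Flask and the /orders endpoint are illustrative assumptions; in practice the
# service would sit behind an API gateway (Apigee, 3Scale) that handles
# documentation, discovery, authentication and rate limiting.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for an internal system of record.
ORDERS = {"1001": {"status": "shipped", "value_gbp": 42.50}}

@app.route("/api/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        abort(404)
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8080)
```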

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, and the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently both on your own server hardware and in different cloud vendors’ infrastructures.

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. Same with the “DevOps” teams that are currently employed in many organisations, to rapidly iterate and deploy custom code very regularly by mingling Developer and Operations staff.

With Serverless, the theory is that you should be able to write code once, and for it to be fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the database services used alongside them) are different enough to make applications based on one limited to that cloud provider only.
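
A minimal sketch of what “just write the code” looks like in practice, using the AWS Lambda Python handler convention; Google Cloud Functions and Azure Functions expect different entry points and event shapes, which is precisely the portability issue mentioned above.

```python
# A minimal AWS Lambda handler sketch (Python runtime). The handler signature
# is AWS-specific; other providers' function services expect different entry
# points, which is the lock-in problem described in the text.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload, e.g. an API Gateway request.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```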

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.
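
A toy sketch of the chaining idea only – it ignores the distribution and proof-of-work consensus that real systems such as Bitcoin add – but it shows why tampering with history is immediately detectable: each block commits to the hash of its predecessor.

```python
# Toy sketch of the chaining idea behind a blockchain: each block commits to
# the previous block's hash, so altering any historic transaction breaks every
# later hash. Real systems add distribution and consensus (e.g. proof of work).
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))                                # True
ledger[0]["transactions"][0]["amount"] = 500         # tamper with history
print(verify(ledger))                                # False
```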

Blockchain is quite a poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; there is even use of virtual coins (ICOs) to act as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and I’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. Devices like LIDAR (the spinning sensor atop self-driving cars) typically generate several GB of data per mile travelled, and there is insufficient reliable bandwidth to delegate all the compute to a remote cloud server. So there are models of how a car should drive itself that are built in the cloud, but downloaded and executed in the car without a high speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared back later when adjacent to a fast home or work network.

Very fast chips are a thing; the CPU in my Apple Watch is faster than a room-sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks really hit it out of the park on every iteration they’ve shipped to date). Development environments for powerful, embedded systems are something I’ve not seen so far though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish attention on the brands they represent in very specific target audiences. Advertisers are the paying customers. Users are the Product. All the incentives and business models align to these characteristics.

Political operators, both local as well as foreign actors, have fundamentally subverted the model. Controversial – and most often incorrect and/or salacious – stories get wide distribution before any truth emerges. Fake accounts and automated bots further pervert the engagement indicators that drive increased distribution (it is noticeable that one video segment of one Donald Trump speech got two orders of magnitude more “likes” than the number of people that actually played the video at all). Above all, messages that appeal to different filter bubbles drive action in some cases, and antipathy in others, to directly undermine voting patterns.

This is probably the biggest challenge facing large social networks, at the same time as politicians (themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to make the world a better place, and to sideline these other people and their poison. Work to do.

Reinventing Healthcare

Comparison of US and UK healthcare costs per capita

A lot of the political effort in the UK appears to circle around a government justifying and handing off parts of our NHS delivery assets to private enterprises, despite the ultimate model (that of the USA healthcare industry) costing significantly more per capita. Outside of politicians lining their own pockets in the future, it would be easy to conclude that few would benefit from such changes; such moves appear to be both economically farcical and firmly against the public interest. I’ve not yet heard any articulation of a view that indicates otherwise. But less well discussed are the changes that are coming, and where the NHS is uniquely positioned to pivot into the future.

There is significant work to capture the DNA of individuals, but this is fairly static over time. It is estimated that there are 10^9 data points per individual, but there are many other data points – which change over a long timeline – that could be even more significant in helping to diagnose unwanted conditions in a timely fashion. The goal is to flip the industry to work almost exclusively on preventative, rather than symptom based, healthcare.

I think I was on the right track with an interest in Microbiome testing services. The gotcha is that commercial services like uBiome, and public research like the American (and British) Gut Project, are one-shot affairs. A stool, skin or other location sample takes circa 6,000 CPU hours to reconstruct the 16S rRNA gene sequences of a statistically valid population profile. Something I thought I could get to a super fast turnaround using excess capacity (spot instances – excess compute power you can bid to consume when available) at one or more of the large cloud vendors. And then to build a data asset that could use machine learning techniques to spot patterns in people who later get afflicted by an undesirable or life threatening medical condition.
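
A back-of-envelope sketch of why spot instances looked attractive: only the 6,000 CPU hour figure comes from the estimate above; the fleet size, core counts and spot price are illustrative assumptions.

```python
# Back-of-envelope: how long 6,000 CPU hours of 16S reconstruction takes if
# spread across spot instances. The 6,000 figure is from the text; instance
# size, fleet size and spot price are illustrative assumptions only.
CPU_HOURS_PER_SAMPLE = 6000
CORES_PER_INSTANCE = 16                 # assumption: a mid-size spot instance
INSTANCES = 50                          # assumption: fleet size we can bid for
SPOT_PRICE_PER_INSTANCE_HOUR = 0.15     # assumption, USD

wall_clock_hours = CPU_HOURS_PER_SAMPLE / (CORES_PER_INSTANCE * INSTANCES)
cost = (CPU_HOURS_PER_SAMPLE / CORES_PER_INSTANCE) * SPOT_PRICE_PER_INSTANCE_HOUR

print(f"Wall clock: {wall_clock_hours:.1f} hours")   # 7.5 hours
print(f"Spot cost per sample: ${cost:.2f}")          # $56.25
```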

The primary weakness in the plan is that you can’t suss the way a train is travelling by examining a photograph taken looking down at a static railway line. You need to keep the source sample data (not just a summary) and measure at regular intervals; an incidence of salmonella can routinely knock out 30% of your Microbiome population inside 3 days before it recovers. The profile also flexes wildly based on what you eat and other physiological factors.

The other weakness is that your DNA and your Microbiome samples are not the full story. There are many other potential leading indicators that could determine your propensity to become ill that we’re not even sampling. Which of our 10^18 different data points are significant over time, and how regularly we should be sampled, remain open questions.

Experience in the USA is that environments where regular preventative checkups of otherwise healthy individuals take place – dentistry being the example – have managed to lower the cost of service delivery by 10% at a time when the rest of the health industry has seen 30-40% cost increases.

So, what are the measures that should be taken, how regularly and how can we keep the source data in a way that allows researchers to employ machine learning techniques to expose the patterns toward future ill-health? There was a good discussion this week on the A16Z Podcast on this very subject with Jeffrey Kaditz of Q Bio. If you have a spare 30 minutes, I thoroughly recommend a listen: https://soundcloud.com/a16z/health-data-feedback-loop-q-bio-kaditz.

That said, my own savings are such that I have to refocus my own efforts elsewhere back in the IT industry, and my MicroBiome testing service Business Plan is mothballed. The technology to sample a big enough population regularly is not yet deployable in a cost effective fashion, but will come. When it does, the NHS will be uniquely positioned to pivot into the sampling and preventative future of healthcare unhindered.

Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, getting hunches of what may be afflicting her health, and this leading to a succession of “oh, that didn’t work – try this instead” visits over several weeks. I just wonder how much cost could be squeezed out of the process – and how many secondary conditions could be avoided – if the root causes were much easier to identify reliably. I then wonder if there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi enabled, low end Bluetooth sensor hubs aka the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake) every day since June 2002. A precursor to the future where devices can keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self service health. My Apple Watch has a year’s worth of heart rate data. But what signals would be far more compelling for identifying the root causes of (lack of) health across a wider variety of conditions, if they were available?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence the complete Genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide component – A/T or C/G combinations) runs to 3GB of base pairs. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases, hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. Notable that there are Amazon Web Services and Google employees participating in this effort.
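
Some rough arithmetic on why a central “just run SQL over it” approach doesn’t fly, using the 3GB-per-person figure above and an assumed cohort size:

```python
# Rough arithmetic on why whole-genome cohorts resist a simple central SQL
# approach: ~3 GB per person (one byte per base pair, from the text) multiplied
# by a hypothetical cohort of 500,000 donors.
BYTES_PER_GENOME = 3_000_000_000     # ~3 GB, one byte per base pair
COHORT = 500_000                     # assumption: illustrative cohort size

total_petabytes = BYTES_PER_GENOME * COHORT / 1e15
print(f"Raw genomes alone: ~{total_petabytes:.1f} PB")   # ~1.5 PB
```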

However, I wonder if we’re missing a big and potentially just as important data asset; that of the profile of bacteria that everyone is dependent on. We are each home to approx. 10 trillion human cells among the 100 trillion microbial cells in and on our own bodies; you are 90% not you.

While our human DNA is 99.9% identical to any person next to us, the profiles of our MicroBiomes are typically only 10% similar; our age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever changing set of weather conditions that either influence our health – or are leading indicators of something being wrong – or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – our skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently from decisions upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays rake thin; the composition of microbes in their gut is a significant factor.

Meanwhile, the costs of DNA sequencing and compute power have dropped to a level where analysis of our microbe ecosystems has fallen from some $100M a decade ago to around $100 today. It should continue on that downward path to a level where personal, regular sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed back discoveries to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on Human DNA Genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of fewer than 4,000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct wholesale analyses – to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at UC San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at University College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing then compute batch process.

In parallel, VC funded company uBiome provide the sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other 4 area samples we’ve submitted), though they are currently not sharing the captured data to the best of my knowledge. That said, the analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over and under represented compared to all samples they’ve received to date), but they do not currently communicate any health related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than 83% of folks they’ve sampled to date, which is an analogue for being more healthy than most; those bacteria that are over represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella over a three day period (the green in the top of this picture; x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population are displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this will give an indication of the relative species population growth rates from otherwise static data.
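
For the curious, a toy sketch of that growth-rate inference: actively dividing bacteria carry more copies of the DNA near the origin of replication than near the terminus, so the ratio of read counts at the two ends hints at relative growth. The numbers below are made up for illustration; this is the general idea, not anyone’s production pipeline.

```python
# Toy sketch of the growth-rate proxy mentioned above: in actively dividing
# bacteria there are more copies of DNA near the origin of replication than
# near the terminus, so the coverage ratio hints at relative growth rate.
# The read counts below are made-up illustrative numbers.
def growth_index(origin_reads, terminus_reads):
    """Ratio > 1 suggests the species was actively replicating when sampled."""
    return origin_reads / max(terminus_reads, 1)

samples = {
    "E. coli":     (5200, 1800),   # assumed read counts near origin / terminus
    "B. fragilis": (2100, 1900),
}
for species, (ori, ter) in samples.items():
    print(species, round(growth_index(ori, ter), 2))
```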

I am not aware of anyone offering a faster turnaround service, nor one that can map several successively time gapped samples, let alone one that can convey health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals isolating indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed to provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome issue to be completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end to end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to enhance/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are happy as Larry in the Nitrogen of the gut, but perish with exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “Chime”) pattern matching code take advantage of cloud spot instances, including Hadoop or Spark, to speed the turnaround time from Sequencing reads to the resulting species type:volume value pairs?
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?

New Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousand puzzles into one pile and then having to reconstruct them – and count the number of each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis to end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
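
Making those two numbers explicit gives a feel for the per-core throughput the batch pipeline would need to sustain:

```python
# Making the paragraph's numbers explicit: ~6e9 reads per HiSeq run and
# ~6,000 CPU hours of analysis imply the per-core throughput below.
READS_PER_RUN = 6_000_000_000    # from the text: up to 6 x 10^9 sequences
CPU_HOURS = 6_000                # from the text: estimated analysis cost

reads_per_cpu_second = READS_PER_RUN / (CPU_HOURS * 3600)
print(f"~{reads_per_cpu_second:.0f} reads processed per CPU-second")  # ~278
```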

Nanopore sequencing is another route, which has much longer read lengths but is much more error prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits their utility for analysing bacteria samples in our use case. Much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence for the time being, we’re reliant on trying to make an industrial scale, lab based batch process turn around data as fast as we are able – but having a network accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is in working out how to fund the build of the service, to map its likely cost profile as technology/process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing short cuts. Dogs have been trained to be amazingly accurate at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket sized spectrometer can assess the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we get to only eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is some of the common terminology between MicroBiome conditions and terms I’ve once heard used in Chinese Herbal Medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago): the concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions), then consumption or application of specific plant matter that puts the body’s bacterial population back to operating norms.

Lingzhi Mushroom

Wild mushroom “Lingzhi” in China: cultivated in the far east, found to reduce Obesity

We’ve started to discover that some of the plants and herbs used in Chinese Medicine do have effects on your bacterial population relevant to the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

Apple Watch: what makes it special


Based on what I’ve seen discussed – and alleged – ahead of Monday’s announcement, the following are the differences people will see with this device.

  1. Notifications. Inbound message or piece of useful context? It will let you know by tapping gently on your arm. Early users are already reporting on how their phone – which until now got pulled out whenever a notification arrived – now stays in their pocket most of the time.
  2. Glances. Google Now on Android puts useful contextual information on “cards”. Hence when you pass a bus stop, up pops the associated next bus timetable. Walk close to an airport checkin desk, up pops your boarding pass. Apple guidelines say that a useful app should communicate its raison d’être within 10 seconds – hence a ‘glance’.
  3. Siri. The watch puts a Bluetooth microphone on your wrist, and Apple APIs can feed speech into text based forms straight away. And you may have noticed that iMessage already allows you to send a short burst of audio to a chosen person or group. Dick Tracy’s watch comes to life.
  4. Brevity. Just like Twitter, but even more focussed. There isn’t the screen real estate to hold superfluous information, so developers need to agonise on what is needed and useful, and to zone out unnecessary context. That should give back more time to the wearer.
  5. Car Keys. House Keys. Password Device. There’s one device and probably an app for each of those functions. And it can probably start bleating if someone tries to walk off with your mobile handset.
  6. Stand up! There are already quotes from Apple CEO Tim Cook saying that sitting down for excessively long periods of time is “the new cancer”. To that effect, you can set the device to nag you into moving if you appear to not be doing so regularly enough.
  7. Accuracy. It knows where you are (with your phone) and can set the time. The iPhone adjusts after a long flight based on the identification of the first cell tower it gets a mobile signal from on landing. And day to day, it’ll keep your clock always accurate.
  8. Payments. Watch to card reader, click, paid. We’ll need the roll out of Apple Pay this side of the Atlantic to realise this piece.

It is likely to evolve into a standalone Bluetooth hub of all the sensors around and on you – and that’s where its impact will be felt most over time.

With the above in mind, I think the Apple Watch will be another big success story. The main question is how they’ll price the expen$ive one when its technology will evolve by leaps and bounds every couple of years. I just wonder if a subscription model for possessing a Rolex-priced watch is a possible business model being considered.

We’ll know this time tomorrow. And my wife has already taken a shine to the expensive model, based purely on its looks with a red leather strap. Better start saving… And in the meantime, a few sample screenshots to pore over:

Another lucid flurry of Apple thinking it through – unlike everyone else

Apple Watch Home Screen

This happens every time Apple announce a new product category. Audience reaction, and the press, rush off to praise or condemn the new product without standing back and joining the dots. The Kevin Lynch presentation at the Keynote also didn’t have a precursor of a short video on-ramp to help people understand the full impact of what they were being told. With that, the full impact is a little hidden. It’s a lot more than having Facebook, Twitter, Email and notifications on your wrist when you have your phone handset in your pocket.

There were a lot of folks focussing on its looks and comparisons to the likely future of the Swiss watch industry. For me, the most balanced summary of the luxury aesthetics from someone who’s immersed in that industry can be found at: http://www.hodinkee.com/blog/hodinkee-apple-watch-review

Having re-watched the keynote, and seen all the lame Android Wear, Samsung, LG and Moto 360 comparisons, there are three examples that explode almost all of the “meh” reactions in my view. The story is hidden by what’s on that S1 circuit board inside the watch, and the limited number of admissions of what it can already do. Three scenarios:

1. Returning home at the end of a working day (a lot of people do this).

The first thing I do after I come indoors is to place my mobile phone on top of the cookery books in our kitchen. Then for the next few hours I’m usually elsewhere in the house or in the garden. Asking around, that behaviour is typical. Not least as it happens in the office too, where if I’m in a meeting, I’d normally leave my handset on silent on my desk.

With every Android or Tizen Smart Watch I know, the watch loses the connection as soon as I go out of Bluetooth range – around 6-10 meters away from the handset. That smart watch is a timepiece from that point on.

Now, who forgot to notice that the Apple Watch has got b/g WiFi integrated on its S1 module? Or that it can not only tell me of an incoming call, but allow me to answer it, listen and talk – and indeed hand control back to my phone handset when I return to its proximity?

2. Sensors

There are a plethora of Low Energy Bluetooth sensors around – and being introduced with great regularity – for virtually every bodily function you can think of. Besides putting your own fitness tracking sensors on at home, there are probably many more that can be used in a hospital setting. With that, a person could be quite a walking network of sensors and wander to different wards or labs during their day, or indeed even be released to recuperate at home.

Apple already has some sensors (heart rate, and probably some more capabilities to be announced in time, using the infrared related ones on the skin side of the Apple Watch), but the watch can act as a hub to any collection of external bluetooth sensors at the same time. Or to smart pills you can swallow. Low Energy Bluetooth is already there on the Apple Watch. That, in combination with the processing power, storage and b/g WiFi, makes the watch a complete device hub, virtually out of the box.

If your iPhone is on the same WiFi, everything syncs up with the Health app there and the iCloud based database already – which you can (at your option) permit an external third party to have access to. Now, tell me about the equivalent on any other device or service you can think of.

3. Paying for things.

The iPhone 5S, 6 and 6 Plus all have integrated finger print scanners. Apple have put some functionality into iOS 8 where, if you’re within Bluetooth range (6-10 meters) of your handset, you can authenticate (with your fingerprint) the fact your watch is already on your wrist. If the sensors on the back have any suspicion that the watch has left your wrist, it immediately invalidates the authentication.

So, walk up to a contactless till, see the payment amount appear on the watch display, one press of the watch pays the bill. Done. Now try to do that with any other device you know.

Developers, developers, developers.

There are probably a million other applications that developers will think of, once folks realise there is a full UNIX computer on that SoC (System on a Chip). With WiFi. With Bluetooth. With a Taptic feedback mechanism that feels like someone is tapping your wrist (not loudly vibrating across the table, or flashing LED lights at you). With a GPU driving a high quality, touch sensitive display. Able to not only act as a remote control for your iTunes music collection on another device, but to play it locally when untethered too (you can always add bluetooth earbuds to keep your listening private). I suspect some of the capabilities Apple have shown (like the ability to stream your heartbeat to another Apple Watch user) will evolve into potential remote health visit applications that can work Internet wide.

Meanwhile, the tech press and the discussion boards are full of people lamenting the fact that there is no GPS sensor in the watch itself (like every other Smart Watch I should add – GPS location sensing is something that eats battery power for breakfast; better to rely on what’s in the phone handset, or to wear a dedicated bluetooth GPS band on the other wrist if you really need it).

Don’t be distracted; with the electronics already in the device, the Apple Watch is truly only the beginning. We’re now waiting for the full details of the WatchKit APIs to unleash that ecosystem with full force.

Yo! Minimalist Notifications, API and the Internet of Things

Yo Logo

Thought it was a joke, but having 4 hours of code resulting in $1m of VC funding, at an estimated $10M company valuation, raised quite a few eyebrows. The Yo! project team have now released their API, and with it some possibilities – over and above the initial ability to just say “Yo!” to a friend. At the time he provided some of the funds, John Borthwick of Betaworks said that there is a future of delivering binary status updates, or even commands to objects to throw an on/off switch remotely (blog post here). The first green shoots are now appearing.

The main enhancement is the ability to carry a payload with the Yo!, such as a URL. Hence your Yo!, when received, can be used to invoke an application or web page with a bookmark already put in place. That facilitates a notification, which is effectively guaranteed to have arrived, to say “look at this”. Probably extensible to all sorts of other tasks.

The other big change is the provision of an API, which allows anyone to create a Yo! list of people to notify against a defined name. So, in theory, I could create a virtual user called “IANWARING-SIMPLICITY-SELLS”, and to publicise that to my blog audience. If any user wants to subscribe, they just send a “Yo!” to that user, and bingo, they are subscribed and it is listed (as another contact) on their phone handset. If I then release a new blog post, I can use a couple of lines of Javascript or PHP to send the notification to the whole subscriber base, carrying the URL of the new post; one key press to view. If anyone wants to unsubscribe, they just drop the username on their handset, and the subscriber list updates.
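
For illustration, the “couple of lines” might look something like the Python below. The endpoint URL and parameter names are assumptions based on recollection of the early Yo! developer documentation and may have changed; the token and link are placeholders.

```python
# Sketch of pushing a new blog post to all Yo subscribers of an account.
# The endpoint URL and parameter names are assumptions from the early Yo
# developer docs and may differ; the token and link are placeholders.
import requests

YO_API_TOKEN = "your-api-token-here"            # placeholder
NEW_POST_URL = "https://example.com/new-post"   # placeholder

response = requests.post(
    "https://api.justyo.co/yoall/",             # assumed endpoint
    data={"api_token": YO_API_TOKEN, "link": NEW_POST_URL},
)
print(response.status_code)
```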

Other applications described include:

  • Getting a Yo! when a FedEx package is on its way
  • Getting a Yo! when your favourite sports team scores – “Yo us at ASTONVILLA and we’ll Yo when we score a goal!”
  • Getting a Yo! when someone famous you follow tweets or posts to Instagram
  • Breaking News from a trusted source
  • Tell me when this product comes into stock at my local retailer
  • To see if there are rental bicycles available near to you (it can Yo! you back)
  • You receive a payment on PayPal
  • To be told when it starts raining in a specific town
  • Your stocks positions go up or down by a specific percentage
  • Tell me when my wife arrives safely at work, or our kids at their travel destination

but I guess there are other “Internet of Things” applications to switch on home lights, open garage doors, switch on (or turn off) the oven. Or to Yo! you if your front door has opened unexpectedly (carrying a link to the picture of who’s there?). Simple one click subscriptions. So, an extra way to operate Apple HomeKit (which today controls home appliance networks only through Siri voice control).

Early users are showing simple Restful URLs and http GET/POSTs to trigger events to the Yo! API. I’ve also seen someone say that it will work with CoAP (Constrained Application Protocol), a lightweight protocol stack suitable for use within simple electronic devices.

Hence, notifications that are implemented easily and over which you have total control. Something Apple appear to be anal about, particularly in a future world where you’ll be walking past low energy bluetooth beacons in retail settings every few yards. Your appetite to be handed notifications will degrade quickly with volumes if there are virtual attention beggars every few paces. Apple have been locking down access to their iBeacon licensees to limit the chance of this happening.

With the Yo! API, we have the first of many notification services (alongside Google Now, and Apple’s own notification services), and a simple one at that. One that can be mixed with IFTTT (if this, then that), a simple web based logic and task action system also produced by Betaworks. And which may well be accessible directly from embedded electronics around us.

The one remaining puzzle is how the authors will be able to monetise their work (their main asset is an idea of the type and frequency of notifications you welcome receiving, and that you seek). Still a bit short of Google’s core business (which historically was to monetise purchase intentions) at this stage in Yo!’s development. So, suggestions in the case of Yo! most welcome.

 

How that iPhone handset knows where I am

Treasure Island Map

I’ve done a little bit of research to see how an Apple iPhone tracks my location – at least when I’ll be running iOS 8 later this autumn. It looks like it picks clues up from lots of places as you go:

  1. The signal from your local cell tower. If you switch your iPhone on after a flight, that’s probably the first thing it sees. This is what the handset uses to set your timezone and adjust your clock immediately.
  2. WiFi signals. As with Google, there is a location database accessed that translates WiFi router MAC addresses into an approximate geographic location where they’ve been sensed before. At least for the static ones.
  3. The Global Positioning System sensors, that work with both the US and Russian GPS satellite networks. If you can stand in a field and see the horizon all around you, then your phone should have up to 14 satellites visible. Operationally, if it can see three of them, you can get your x and y co-ordinates to within a meter or two. If it can see four or more, then you get x, y and z co-ordinates – enough to give your elevation above sea level as well.
  4. Magnetometer and Gyroscope. The iPhone has an electronic compass and some form of gyroscope inside, so the system software can sense the direction, orientation (in 3D space) and movement. So, when you move from outdoors to an indoor location (like a shopping centre or building), the iPhone can remember the last known accurate GPS fix, and deduce (based on direction and speed as you move since that last sampling) your current position – see the sketch after this list.
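
As referenced in item 4, here is a toy sketch of that dead-reckoning step: start from the last good GPS fix and advance the position using the compass heading and estimated walking speed. The co-ordinates and numbers are made up, and real implementations also fold in error estimates.

```python
# Toy dead-reckoning sketch of the indoor fallback described in item 4:
# start from the last good GPS fix and advance the position using heading
# (from the compass) and speed (from the motion sensors). Numbers are made up.
import math

def dead_reckon(lat, lon, heading_deg, speed_mps, seconds):
    """Advance a lat/lon position along a compass heading (flat-earth approx)."""
    distance_m = speed_mps * seconds
    d_north = distance_m * math.cos(math.radians(heading_deg))
    d_east = distance_m * math.sin(math.radians(heading_deg))
    lat += d_north / 111_320                                  # metres per degree of latitude
    lon += d_east / (111_320 * math.cos(math.radians(lat)))   # shrinks with latitude
    return lat, lon

# Last GPS fix outside a shopping centre, then 30 seconds walking north-east.
print(dead_reckon(53.4839, -2.2446, heading_deg=45, speed_mps=1.4, seconds=30))
```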

The system software on iOS 8 just returns your location and an indication of error scale based on all of the above. For some reason, the indoor positioning with the gyroscope is of high resolution for your x and y position, but returns the z position as a floor number only (0 being the ground floor, -1 one down from there, 1 and upwards for the levels above).

In doing all the above, if it senses you’ve moved indoors, then it shuts down the GPS sensor – as it is relatively power hungry and saves the battery at a time when the sensor would be unusable anyway.

Beacons

There are a number of applications where it would be nice to sense your proximity to a specific location indoors, and to do something clever in an application. For example, when you turn up in front of a Starbucks outlet, for Apple Passbook to put your loyalty/payment card onto the lock screen for immediate access; same with a Virgin Atlantic check-in desk, where Passbook could bring up your Boarding Pass in the same way.

One of the ways of doing this is to deploy low energy bluetooth beacons. These normally broadcast a licensee specific identifier (a 128-bit UUID, such as one assigned to “Starbucks”), plus a pair of smaller 16-bit major and minor numbers that have meaning to that licensee only. These may reference a specific outlet on their own applications database, or indicate a department location in a department store. It is up to the company deploying the Low Energy Bluetooth Beacons to encode this for their own iPhone applications (and to reflect the positions of the beacons in their app if they redesign their store or location layouts).

Your iPhone can sense beacons around it to four levels:

  1. I can’t hear a beacon
  2. I can sense one, but i’m not close to it yet
  3. I can sense one, and i’m within 3 meters (10 feet) of it right now
  4. I can sense one, and my iPhone is immediately adjacent to the beacon

Case (4) being for things like cash register applications. (2) and (3) are probably good enough for your store specific application to get fired up when you’re approaching.

There are some practical limitations, as low energy bluetooth uses the same 2.4GHz spectrum that WiFi does, and hence suffers the same restrictions. That frequency agitates water (like a microwave oven does), which suits short range, indoor applications; things like rain, moisture in walls and indeed human beings standing in the signal path tend to arrest the signal strength quite dramatically.

The iPhone 5S itself has an inbuilt Low Energy Bluetooth Beacon, but in line with the way Apple protect your privacy, it is not enabled by default. Until it is explicitly switched on by the user (who is always given an ability to decline the location sharing when any app requests this), hardware in store cannot track you personally.

Apple appear to have restricted licensees to using iBeacons for their own applications only, so only users of Apple iOS devices can benefit. There is an alternative “Open Beacon” effort in place, designed to enable applications that run across multiple vendor devices (see here for further details).

The Smart Watch Future

With the recent announcement and availability of various Android watches from Samsung, LG and Motorola, it’s notable that they all appear to have the compass and gyroscope but no current implementation of a GPS (I’ve got to guess for reasons of limited battery power and the sensor’s power appetite). Hence I expect that any direction sensing Smartwatch applications will need to talk to an application on the mobile phone handset in the user’s pocket – over low energy bluetooth. Once established, the app on the watch will know the device’s orientation in 3D space and the direction it is headed; probably enough to keep pointing you towards a destination correctly as you walk along.

The only thing we don’t yet know is whether Apple’s own rumoured iWatch will break the mould, or, like its Android equivalents, act as a peripheral to the network hub that is the user’s phone handset. We should know that later on this year.

In the meantime, it’s good to see that Apple’s model is to protect the user’s privacy unless they explicitly allow a vendor app to track their location, which they can agree to or decline at any time. I suspect a lot of vendors would like to track you, but Apple have picked a very “it’s up to the iPhone user and no-one else” approach – for each and every application, one by one.

Footnote: Having thought about it, I think I missed two things.

One is that I recall reading somewhere that if the handset battery is running low, the handset will bleat its current location to the cloud. Hence if you dropped your handset and it was lost in vegetation somewhere, it would at least log its last known geographic location for the “Find my iPhone” service to be able to pinpoint it as best it could.

Two is that there is a visit history stored in the phone, so your iPhone’s travels (locations, timestamps, length of time stationary) are logged as a series of move vectors between stops. These are GPS type locations, and not mapped to any physical location name or store identifier (or even position in stores!). The user has got to give specific permission for this data to be exposed to a requesting app. Besides use for remembering distances for expenses, I can think of few user-centric applications where you would want to know precisely where you’ve travelled in the last few days. Maybe a bit better as a version of the “secret” app available for MacBooks, where if you mark your device on a cloud service as having been stolen, you can get specific feedback on its movements since.

The one thing that often bugs me is people putting out calls on Facebook to help find their stolen or mislaid phones. Every iPhone should have “Find my iPhone” enabled (which is offered at iOS install customisation time) or the equivalent for Android (Android Device Manager) activated likewise. These devices should be difficult to steal.

Apple iWatch: Watch, Fashion, Sensors or all three?

iWatch Concept Guess

Late last year there was an excellent 60 minute episode of the Cubed.fm Podcast by Benedict Evans and Ben Bajarin, with guest Bill Geiser, CEO of Metawatch. Bill had been working on Smart watches for over 20 years, starting with wearables to measure his swimming activity, then spending over 8 years running Fossil’s Watch Technology Division, before buying out that division to start Metawatch. He has also consulted for Sony in the design and manufacture of their Smart watches, for Microsoft SPOT technology and for Palm on their watch efforts. The Podcast is a really fascinating background on the history and likely future directions of this (widely believed to be) nascent industry: listen here.

Following that podcast, I’ve always listened carefully to the ebbs and flows of likely smart watch releases from Google, and from Apple (largely to see how they’ve built further than the great work by Pebble). Apple duly started registering the iWatch trademark in several countries (nominally in classes 9 and 14, representative of jewelry, precious metal and watch devices). There was a flurry of patent applications from Apple in January 2014 covering Liquid Metal and Sapphire materials, which included references to potential wrist-based devices.

There have also been a steady stream of rumours that an Apple watch product would likely include sensors that could pair with health related applications (over low energy bluetooth) to the users iPhone.

Apple duly recruited Angela Ahrendts, previously CEO of Burberry, to head up Apple’s Retail Operations. Shortly followed by Nike Fuelband Consultant Jay Blahnik and several Medical technology hires. Nike (where Apple CEO Tim Cook is a Director) laid off its Fuelband hardware team, citing a future focus on software only. And just this weekend, it was announced that Apple had recruited the Tag Heuer Watches VP of Sales (here).

That article on the Verge had a video of an interview from CNBC with Jean-Claude Biver, who is Head of Watch brands for LVMH – including Louis Vuitton, Hennessey and TAG Heuer. The bizarre thing (to me) he mentioned was that his employee who’d just left for a contract at Apple was not going to a direct competitor, and that he wished him well. He also cited a “Made in Switzerland” marketing asset as being something Apple could then leverage. I sincerely think he’s not naive, as Apple may well impact his market quite significantly if there were a significant product overlap. I sort of suspect that his reaction was that of someone partnering Apple in the near future, not of someone waiting for an inbound tidal wave from a foreign competitor.

Google, at their I/O Developers Conference last week, duly announced Android Wear, among which was support for Smart Watches from Samsung, LG and Motorola. Besides normal time and date use, these include the ability to receive the excellent “Google Now” notifications from the user’s phone handset, plus process email. The core hope is that application developers will start to write their own applications to use this new set of hardware devices.

Two thoughts come to mind.

A couple of weeks back, my wife needed a new battery in one of her Swatch watches. With that, we visited the Swatch Shop outside the Arndale Centre in Manchester. While her battery was being replaced, I looked at all the displays, and indeed at least three range catalogues. Beautiful fashionable devices that convey status and personal expression. Jane duly decided to buy another Swatch that matched an evening outfit likely to be worn to an upcoming family Wedding Anniversary. A watch battery replacement turned into an £85 new sale!

Thought #1 is that the Samsung and LG watches are, not to put too fine a point on it, far from fashion items (I nearly said “ugly”). The Samsung is available in around 5 variations, which map to the same base unit shape and different colour wrist bands; LG likewise. The Moto 360 is better looking (bulky and circular). That said, an offer like this is typically suicide in a fashion/status industry. Bill Geiser related that “one size fits all” is a dangerous strategy; suppliers typically build a common “watch movement” platform, but wrap this in an assortment of enclosures to appeal to a broad audience.

My brain sort of locks on to a possibility, given a complete absence of conventional watch manufacturers involved with Google’s work, to wonder if Apple are OEM’ing (or licensing) a “watch guts” platform for watch manufacturers to use in their own enclosures.

Thought #2 relates to sensors. There are often cited assumptions that Apple’s iWatch will provide a series of sensors to feed user activity and vital signs into their iPhone based Health application. On that assumption, I’ve been noting the sort of sensors required to feed the measures maintained “out of the box” by their iPhone Health app, and agonising over whether these would fit on a single wrist based device.

The main one that has been bugging me – and which would solve a need for millions of users – is that of measuring glucose levels in the bloodstream of people with Diabetes. This is usually collected today with invasive blood sampling; I suspect little demand for a watch that vampire bites the user’s wrist. I found today that there are devices that can measure blood glucose levels by shining infrared light at a skin surface, using near-infrared absorption spectroscopy. One such article here.

The main gotcha is that the primary areas where such readings are best taken are the eardrum or the inside of the arm at the elbow joint. Neither is an ideal position for a watch, but both are well within reach of earbuds or a separate sensor. Either could communicate with the Health app wired directly to an iPhone or over a Bluetooth Low Energy connection.
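
To make that hand-off concrete, here is a minimal sketch in Swift of what a companion app for such a separate sensor might do once it has a reading in hand, using the public HealthKit API to store a blood glucose sample where the Health app can see it. The function name and the notion of a paired external sensor are my own illustration, not anything Apple have announced:

    import HealthKit

    let healthStore = HKHealthStore()

    // Hypothetical hand-off: a reading received from an external BLE glucose
    // sensor gets written into HealthKit so the Health app can chart it.
    func saveGlucoseReading(milligramsPerDecilitre value: Double) {
        // Blood glucose is one of the quantity types HealthKit tracks out of the box.
        guard let glucoseType = HKQuantityType.quantityType(forIdentifier: .bloodGlucose) else { return }

        // Express mg/dL using HealthKit's unit arithmetic.
        let unit = HKUnit.gramUnit(with: .milli).unitDivided(by: HKUnit.literUnit(with: .deci))
        let quantity = HKQuantity(unit: unit, doubleValue: value)
        let now = Date()
        let sample = HKQuantitySample(type: glucoseType, quantity: quantity, start: now, end: now)

        // Ask permission to write, then store the sample.
        healthStore.requestAuthorization(toShare: [glucoseType], read: nil) { granted, _ in
            guard granted else { return }
            healthStore.save(sample) { success, error in
                print(success ? "Glucose sample saved" : "Save failed: \(String(describing: error))")
            }
        }
    }

The same pattern would apply to blood pressure or heart rate samples; only the type identifier and the units change.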

Blood pressure may also need such an external sensor. There are, of course, plenty of sensors that may find their way into a watch-style form factor, and indeed there are Apple patents that discuss some typical ones they could sense from a wrist-attached device. That said, you’re working against limited real estate for the device’s electronics, its display and indeed the size of battery needed to power its operation.

In summary, I wonder aloud if Apple are providing an OEM watch movement for use by conventional watch suppliers, and whether the health sensor needs are better served by a raft of third-party Bluetooth Low Energy devices than by an iWatch itself.

About the only sure thing is that when Apple do finally announce their iWatch, my wife will expect me to be early in the queue to buy hers. And I won’t disappoint her. Until then, iWatch rumours are updated here.

The Internet of Things withers – while HealthKit ratchets along

FDA Approved Logo

I sometimes shudder at the estimates, as once outlined by executives at Cisco, which reckon the market for the “Internet of Things” – communicating sensors embedded everywhere – would likely be a $19 trillion one. A market is normally people willing to invest to make money, save money, improve convenience or reduce waste. Or a mix. I then look at various analysts’ reports where they size both the future – and the current – market. I really can’t work out how they arrive at today’s estimated monetary amounts, let alone make the leap of faith to the stellar future revenue numbers. Just like IBM with their alleged “Cloud” volumes, it’s difficult to make out which current products are stuffed inside the claimed totals.

One of my son’s friends is a Sales Director for a distributor of sensors. There appear to be good use cases in utility networks, such as monitoring water or gas flow to estimate where leaks are appearing and how much is being lost. That is apparently already well served, as are industrial applications based on pneumatics, fluid flow and hook-ups to SCADA equipment. Add a bit of RFID so stock movements can be automatically checked through the distribution process. Outside of these, there are the three usual consumer areas: cars, health and home equipment control – the very three areas that both Apple and Google appear to be focussed on.

To which you can probably add Bluetooth Low Energy beacons, which allow a phone handset to know its precise location even where GPS co-ordinates are not available (inside shopping centres, for example). If you’re in an open field with sight of the horizon in all directions, a dozen or so GPS satellites should be “visible”; if your handset can lock onto three of them, it can suss your x and y co-ordinates to within a few metres, and with a fourth it can also calculate z – i.e. geographic position plus height above sea level. If it can see fewer than that, it needs another clue. Hence the quiet, almost stealth rollout where vendors are installing these BLE beacons and can trade the mapping from each beacon’s identifier to its exact location.
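
As a rough illustration of how a handset picks up that extra clue, here is a short Swift sketch using Apple’s Core Location beacon ranging. The UUID and region name are made-up placeholders; the major/minor numbers it reports are exactly the identifiers a vendor can map back to a physical spot:

    import CoreLocation

    // Minimal sketch: range beacons broadcasting one (placeholder) vendor UUID
    // and report the nearest one. Turning major/minor numbers into "aisle 3 of
    // the Arndale Centre" is the lookup the beacon vendors trade on.
    class BeaconScanner: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()
        private let region = CLBeaconRegion(
            proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
            identifier: "example-shopping-centre")

        func start() {
            manager.delegate = self
            manager.requestWhenInUseAuthorization()
            manager.startRangingBeacons(in: region)
        }

        func locationManager(_ manager: CLLocationManager,
                             didRangeBeacons beacons: [CLBeacon],
                             in region: CLBeaconRegion) {
            // Beacons arrive sorted by estimated proximity.
            if let nearest = beacons.first {
                print("Nearest beacon: major \(nearest.major), minor \(nearest.minor), roughly \(nearest.accuracy) m away")
            }
        }
    }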

In Apple’s case, Passbook loyalty cards and boarding passes already get triggered with an icon on the iOS 8 lock screen when you’re adjacent to a Starbucks outlet or a Virgin Atlantic check-in desk; one press, and your payment card or boarding pass is there for you. I dare say the same functionality is appearing in Google Now on Android; it can already suss when I get out of my car and start to walk, and keeps a note of my parking location – so I can ask it to navigate me back precisely. It’s also started to tell me what web sites people look at when they’re in the same restaurant I’m sitting in (normally the web site or menu of the restaurant itself).

We’re in a lull between Apple’s Worldwide Developer Conference and next week’s equivalent Google I/O developer event, where Google’s version of Health and HomeKit may well appear. Maybe further developments to link your car’s Engine Control Unit to the Internet as well (currently better engaged by Phil Windley’s FUSE project). Apple appear to have stuck to connecting an iPhone to a car’s audio system only, where the car’s electronics use BlackBerry’s QNX embedded operating system; Android implementations from Google are more ambitious, but (given long car model cycle times) likely to take longer to hit volume deployments. Unless we get an unexpected announcement at Google I/O next week.

My one surprise is that my previous blog post on Apple’s HomeKit got an order of magnitude more readers than my two posts on the Health app and the HealthKit API (posts here and here). I’d never expected that using your iPhone as a universal, voice-controlled home lock/light/door remote would be so interesting to people. I also hear that Nest (now a Google subsidiary) are about to formally announce shipment of their 500,000th thermostat. Not sure about their smoke alarm volumes to date though.

That apart, I noticed today that the US Food and Drug Administration had, in March, issued some clarifications on which types of mobile connected apps and devices would not warrant regulatory classification as medical devices in the USA. They were:

  1. Mobile apps for providers that help track or manage patient immunizations by assessing the need for immunization, consent form, and immunization lot number

  2. Mobile apps that provide drug-drug interactions and relevant safety information (side effects, drug interactions, active ingredient) as a report based on demographic data (age, gender), clinical information (current diagnosis), and current medications

  3. Mobile apps that enable, during an encounter, a health care provider to access their patient’s personal health record (health information) that is either hosted on a web-based or other platform

So it looks like Apple’s Health application and their HealthKit API have already skipped past the need for regulatory approvals there. The only thing I’ve not managed to suss is how they would measure blood pressure and glucose levels on a wearable device without being invasive. I’ve seen someone mention that a hi-res camera is normally sufficient to detect pulse rates by watching image changes in a picture of a patient’s wrist. I’ve also seen an inference that suitably equipped glasses can suss basic blood composition by looking at what is visible in the iris of an eye. But if Apple’s iWatch – as commonly rumoured – can detect glucose levels for diabetes patients, I’m still agonising over how they’d do it, short of eating or attaching another (probably disposable) Bluetooth Low Energy sensor for the phone handset to collect data from.

It looks like it’ll be Q4 before we all know the story. All I know right now is that if Apple produce an iWatch, and indeed return the iPhone design to being more rounded like the 3GS was, my wife will expect me to be in the queue on release day to buy them both for her.