IT Trends into 2018 – or the continued delusions of Ian Waring

William Tell the Penguin

I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they listed 13 items!). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three of them you have a large base of employees in a country without a single payer healthcare system, mired in business model inefficiencies. Getting an operationally efficient pilot running at reasonable scale with internal users in the JV companies, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions against them, it’s difficult to envision a corrective action that Amazon aren’t already doing to themselves. This program is real fox in the hen house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring, using wearable sensors, is the next piece of icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those situations you can quickly distil down to “great technology, what can we use this for?”. Conferencing – even voice – yes. Shared Kanban flows (Trello), shared Basecamp views, communal use of GitHub, all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; six sigma, doggedly industrialising a process, sits over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march to interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place, and only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user finds themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably appearing to be of a similar age.

The one striking thing I’ve found was talking to a vendor who built a machine learning model that went through IT Helpdesk tickets, instant message and email interaction histories, nominally to prioritise the natural language corpus into a list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was in improving FAQ sheets in the first instance. Cue Ian thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!

IoT (Internet of Things). The example provided was tying together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories. It then cited security concerns and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big number analyses from vendors, but little from application perspectives. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets or smart watches. Security devices are a smaller base; simple embedded devices smaller still. I think I’m more excited when sensors get meaningful vision capabilities (listed separately below). Until then, I’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel, in flour in a mixing bowl, or when it’s the quickest way to enact a desired action – usually away from a keyboard and out of earshot of anyone else. It does liberate my two youngest grandchildren, who are still learning to read and write. Those apart, it’s just another UI used occasionally – albeit I’m still in awe of folks that dictate their book writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch; in the meantime, 3D printing. Not my area, but it feels to me like low volume, local production of customised parts, and I’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to the point of use. My dentist 3D prints parts of teeth while patients wait, but otherwise I’ve not had any exposure that I could translate as a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that related the period in prehistory when the number of different life form species on earth vastly accelerated; this was the Cambrian Period, when life forms first developed eyes. A combination of cheap camera hardware components and excellent machine learning Vision APIs should be transformative. Especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not-present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.
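As a concrete illustration (my own sketch, not something from the CIO piece), asking a cloud Vision API to describe a photo can be as little as the following – assuming a recent google-cloud-vision Python client and credentials are already set up, and with the filename purely illustrative:

```python
# Minimal sketch: label the contents of a local image with Google's Vision API.
# Assumes the google-cloud-vision client library (2.x) and application
# credentials are configured; the image filename is a placeholder.
from google.cloud import vision

def describe_image(path):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each label carries a description ("dog", "muffin") and a confidence score.
    return [(label.description, label.score) for label in response.label_annotations]

if __name__ == "__main__":
    for description, score in describe_image("shop_counter.jpg"):
        print(f"{description}: {score:.2f}")
```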

In the Open Source Software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io’s Broca NLU product, which can build a suggested intent:action Service Catalogue from Natural Language analysis of support tickets, CRM data, instant message and support email content.
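To give a flavour of the general technique – and emphatically not parlo.io’s own implementation – here is a hedged sketch that clusters a made-up ticket corpus into candidate intents with scikit-learn; a human would then map each cluster to an action or an FAQ entry:

```python
# Hedged sketch (not Broca): group support tickets into candidate "intents" by
# clustering TF-IDF vectors; the top terms of each cluster suggest a label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = [
    "I forgot my password and can't log in",
    "Please reset my password",
    "The VPN keeps disconnecting from home",
    "VPN drops every ten minutes on home broadband",
    "How do I claim expenses for travel?",
    "Expense claim form won't submit",
]

vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(tickets)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
terms = vectoriser.get_feature_names_out()

for cluster in range(3):
    # Rank terms by their weight in the cluster centroid to label the intent.
    top = kmeans.cluster_centers_[cluster].argsort()[::-1][:3]
    print(f"intent {cluster}: {[terms[i] for i in top]}")
```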

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley were in a grave, he’d be spinning in it (but thank god he’s not).

Microservices are about making your organisation’s data and processes available to applications that can be internally facing, externally facing or both, using web interfaces. You typically work with Apigee (now owned by Google) or 3Scale (owned by Red Hat) to produce a well documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort out licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
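The service sitting behind such a gateway can start as small as this hedged Flask sketch (endpoint, fields and data are all illustrative); the gateway product then layers documentation, discovery, security and billing on top:

```python
# Minimal sketch of an internal service you might expose via an API gateway
# such as Apigee or 3Scale; the endpoint and data below are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real microservice this would come from the organisation's own data store.
ORDERS = {"1001": {"status": "dispatched", "eta": "2018-02-14"}}

@app.route("/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="order not found"), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8080)
```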

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, with the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently on both your own server hardware and in different cloud vendors’ infrastructures.

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. Same with the “DevOps” teams that are currently employed in many organisations, to rapidly iterate and deploy custom code very regularly by mingling Developer and Operations staff.

With Serverless, the theory is that you should be able to write code once, and for it to be fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the database services used alongside them) are different enough to make applications based on one limited to that cloud provider only.
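To show how little there is to manage, a minimal AWS Lambda handler in Python looks roughly like this; the event fields are illustrative, and the trigger-specific payload shape is exactly the sort of provider lock-in mentioned above:

```python
# Minimal sketch of an AWS Lambda function: no server to manage, the platform
# fires it up per request and scales the number of instances with demand.
import json

def lambda_handler(event, context):
    # "event" carries the request payload (e.g. from API Gateway); its shape
    # varies by trigger and by cloud provider.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```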

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.
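The “one version of the truth” property comes from each block committing to the hash of its predecessor. A toy sketch of just that chaining idea follows – deliberately leaving out the distribution and consensus machinery (proof of work and friends) that makes it trustworthy at scale:

```python
# Toy ledger sketch: each block commits to the hash of the previous one, so
# altering any historical transaction breaks every later link in the chain.
import hashlib, json, time

def block_hash(block):
    # Hash everything except the stored hash itself.
    content = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash):
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["alice pays bob 5"], chain[-1]["hash"]))
chain.append(make_block(["bob pays carol 2"], chain[-1]["hash"]))

# Tamper with history: verification of every later link now fails.
chain[1]["transactions"] = ["alice pays mallory 500"]
valid = all(cur["previous_hash"] == block_hash(prev)
            for prev, cur in zip(chain, chain[1:]))
print("chain valid:", valid)   # False after the tampering above
```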

Blockchain is quite the poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; even the use of virtual coins (ICOs) to act as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and I’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. A device like LIDAR (the spinning sensor atop self-driving cars) typically produces several GB of data per mile travelled, and there is insufficient reliable bandwidth to delegate all the compute to a remote cloud server. So there are models of how a car should drive itself that are built in the cloud, but downloaded and executed in the car without a high speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared back later when adjacent to a fast home or work network.
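Some rough arithmetic (every figure below is an assumption for illustration, not a measurement) shows why that raw feed has to be processed at the edge rather than streamed to the cloud:

```python
# Back-of-envelope check on why the raw sensor feed stays at the edge.
gb_per_mile = 3            # assumed LIDAR/sensor data volume per mile
speed_mph = 60             # assumed motorway cruising speed
lte_uplink_mbit_s = 20     # optimistic sustained mobile uplink

gb_per_hour = gb_per_mile * speed_mph
required_mbit_s = gb_per_hour * 8 * 1000 / 3600   # GB/hour -> Mbit/s

print(f"data generated: {gb_per_hour} GB/hour")
print(f"sustained uplink needed: {required_mbit_s:.0f} Mbit/s "
      f"vs ~{lte_uplink_mbit_s} Mbit/s available")
```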

Very fast chips are a thing; the CPU in my Apple Watch is faster than a room-sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks really hit it out of the park on every iteration they’ve shipped to date). Development environments for powerful, embedded systems are something I’ve not seen so far though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish the attention of very specific target audiences on the brands they represent. Advertisers are the paying customers. Users are the product. All the incentives and business models align to these characteristics.

Political operators, both local and foreign actors, have fundamentally subverted the model. Controversial and most often incorrect and/or salacious stories get wide distribution before any truth emerges. Fake accounts and automated bots further pervert the engagement measures that drive increased distribution (it was noticeable that one video segment of a Donald Trump speech got two orders of magnitude more “likes” than the number of people who actually played the video at all). Above all, messages that appeal to different filter bubbles drive action in some cases, and antipathy in others, to directly undermine voting patterns.

This is probably the biggest challenge facing large social networks, at the same time that politicians (themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to make the world a better place, and to sideline these other people and their poison. Work to do.

The Next Explosion – the Eyes have it

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

One of the early lessons you pick up looking at product lifecycles is that some people hold out buying any new technology product or service longer than anyone else. You make it past the techies, the visionaries, the early majority, the late majority and finally meet the laggards at the very right of the diagram (PDF version here). The normal way of selling at that end of the bell curve is to embed your product in something else; the person who swore they’d never buy a microprocessor unknowingly has one inside the controls on their microwave, or 50-100 ticking away in their car.

In 2016, Google started releasing access to its Vision API. They had been routinely using their own neural networks for several years; one typical application was taking the video footage from their Google Maps Street View cars, and correlating house numbers from the video footage onto GPS locations within each street. They went on to train their own models to pick out objects in photographs, and to be able to annotate a picture with a description of its contents – without any human interaction. They have also begun an effort to do likewise, describing the story contained in hundreds of thousands of YouTube videos.

One example was to ask it to differentiate muffins and dogs:

This it does with aplomb, usually with much better than human performance. So, what’s next?

One notable time in natural history was the explosion in the number of species on earth that occurred in the Cambrian period, some 540 million years ago. This was the time when it appears life forms first developed useful eyes, which led to an arms race between predators and prey. Eyes everywhere, and brains very sensitive to signals that come that way; if something or someone looks like they’re staring at you, sirens in your consciousness will be at full volume.

Once a neural network is taught (you show it thousands of images, and tell it which contain what, then it works out a model to fit), the resulting learning can be loaded onto a small device. It usually then needs no further training or connection to a bigger computer or cloud service. It can just sit there, and report back what it sees, when it sees it; the target of the message can be a person or a computer program anywhere else.
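One concrete toolchain for that “teach in the cloud, run on the device” flow – my example, using TensorFlow and its Lite converter rather than anything named above – looks roughly like this:

```python
# Sketch: train (or here, just define) a tiny Keras image classifier in the
# cloud, then convert it to a TensorFlow Lite file that can be shipped to a
# phone or embedded board and run offline with no further cloud connection.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g. muffin vs dog
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, epochs=...)   # the cloud-side step

converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("classifier.tflite", "wb") as f:
    f.write(converter.convert())
# The .tflite file is the "resulting learning" that gets loaded onto the device.
```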

While Google have been doing the heavy lifting on building the learning models in the cloud, Apple have slipped in with their own Core ML model format, a sort of PDF for the resulting machine learning models, and then use the Graphics Processing Units on their iPhone and iPad devices to run the resulting models on the user’s device. They also have their ARKit libraries (as in “Augmented Reality”) to sense surfaces and boundaries live through the embedded camera – and to superimpose objects in the field of view.

With iOS 11 coming in the autumn, any handwritten notes get automatically OCR’d, indexed and added to local search. When a document on your desk is photographed from an angle, it can automatically flatten it to look like a hi-res scan of the original – which you can then annotate. There are probably many similar features which will be in place by the time the new iPhone models arrive in September/October.

However, this is the tip of the iceberg. When I drive out of the car park in the local shopping centre here, the barrier raises automatically, given that the person holding the ticket issued against my car number plate has already paid. And I guess we’re going to see a Cambrian explosion as inexpensive “eyes” get embedded in everything around us, in service to us.

With that, one example of what Amazon are experimenting with in their “Amazon Go” shop in Seattle. Every visitor a shoplifter: https://youtu.be/NrmMk1Myrxc

Lots more to follow.

PS: as a footnote, an example of drawing a ruler on a real object. This is 3 weeks after ARKit got released. Next: personalised shoe and clothes measurements, and mail order supply to size: http://www.madewitharkit.com/post/162250399073/another-ar-measurement-app-demo-this-time

Danger, Will Robinson, Danger

One thing that bemused the hell out of me – as a Software guy visiting prospective PC dealers in 1983 – was our account manager for the North of the UK. On arrival at a new prospective reseller, he would take a tape measure out, and measure the distance between the nearest Director’s Car Parking Slot and their front door. He’d then repeat the exercise for the nearest Visitor’s Car Parking Spot and the front door. And then walk in for the meeting to discuss their application to resell our range of Personal Computers.

If the Director’s slot was closer to the door than the Visitor’s slot, the meeting was a very short one. The positioning betrayed the senior management’s attitude to customers, which in countless cases I saw in other regions (eventually) translate into that company’s success (or otherwise). A brilliant and simple leading indicator.

One of the other red flags when companies became successful was when their own HQ building became ostentatious. I always wonder if the leaders can manage to retain their focus on their customers at the same time as building these things. Like Apple in a magazine today:

Apple HQ

And then Salesforce, with the now tallest building in San Francisco:

Salesforce Tower

I do sincerely hope the focus on customers remains in place, and that none of the customers are upset with where each company is channelling its profits. I also remember a Telco equipment salesperson turning up at his largest customer in his new Ferrari, and their reaction of disgust that unhinged the long term relationship; he should have left it at home and driven in using something more routine.

Modesty and Frugality are usually a better leading indicator of delivering good value to folks buying from you. As are all the little things that demonstrate that the success of the customer is your primary motivation.

Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, getting hunches of what may be afflicting her health, and this leading to a succession of “oh, that didn’t work – try this instead” visits over several weeks. I just wonder how much cost could be squeezed out of the process – and how many secondary conditions avoided – if the root causes were much easier to identify reliably. I then wonder if there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi enabled, low end Bluetooth sensor hubs aka the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake), every day since June 2002. A precursor to the future where devices can keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self service health. My Apple Watch has a year’s worth of heart rate data. But what other signals, if they were available, would be far more compelling for identifying the root causes of ill health in a wider variety of cases?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence complete genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide component – A/T or C/G combinations) runs to 3GB of base pairs. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases, hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. Notable that there are Amazon Web Services and Google employees participating in this effort.
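In the spirit of that distributed-query approach – and stressing that the endpoint URLs and parameter names below are illustrative assumptions in the style of the GA4GH Beacon interface, not a copy of any real spec – a federated variant lookup might look something like this:

```python
# Hedged sketch of the federated-query idea: rather than copying 3GB genomes
# into one database, ask each co-operating organisation whether it holds
# samples matching a variant. All endpoints and coordinates are invented.
import requests

BEACONS = [
    "https://beacon.example-hospital.org/query",       # hypothetical endpoints
    "https://beacon.example-institute.ac.uk/query",
]

query = {
    "assemblyId": "GRCh38",
    "referenceName": "13",        # chromosome 13, home of BRCA2
    "start": 32315474,            # illustrative coordinate only
    "referenceBases": "G",
    "alternateBases": "A",
}

for url in BEACONS:
    response = requests.get(url, params=query, timeout=10)
    print(url, "->", response.json().get("exists"))
```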

However, I wonder if we’re missing a big and potentially just as important data asset; that of the profile of bacteria that everyone is dependent on. We are each home to approx. 10 trillion human cells among the 100 trillion microbial cells in and on our own bodies; you are 90% not you.

While our human DNA is 99.9% identical to that of any person next to us, the profiles of our MicroBiomes are typically only 10% similar; our age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever changing set of weather conditions that either influences our health – or is a leading indicator of something being wrong – or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – our skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently of decisions upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays thin; the composition of microbes in their gut is a significant factor.

Meanwhile, the costs of DNA sequencing and compute power have dropped to the point where analysing our microbe ecosystems has gone from some $100M a decade ago to around $100 today. It should continue on that downward path to a level where regular personal sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed back discoveries to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on human DNA genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of fewer than 4,000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct the wholesale analyses needed to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at the University of California San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at King’s College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing then compute batch process.

In parallel, the VC funded company uBiome provides the sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other 4 area samples we’ve submitted), though to the best of my knowledge they are currently not sharing the captured data. That said, the analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over and under represented compared to all samples they’ve received to date), but they do not currently communicate any health related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than 83% of folks they’ve sampled to date, which is an analogue for being more healthy than most; those bacteria that are over represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella over a three day period (the green in the top of this picture; the x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population are displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this will give an indication of the relative species population growth rates from otherwise static data.

I am not aware of anyone offering a faster turnaround service, nor one that can map several successively time gapped samples, let alone one that can convey health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals in isolating the indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed to provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome require completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end to end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to enhance/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are happy as Larry in the nitrogen of the gut, but perish with exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “Chime”) pattern matching code take advantage of cloud spot instances, inc Hadoop or Spark, to speed the turnaround time from Sequencing reads to the resulting species type:volume value pairs?
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?

Next Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousand puzzles into one pile and then having to reconstruct them – and count the number of each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis to end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
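To put that compute estimate into rough money and elapsed-time terms (the spot price and instance size below are assumptions for illustration, not quotes):

```python
# Rough costing of the "six thousand CPU hours" estimate on cloud spot capacity.
cpu_hours = 6_000
spot_price_per_vcpu_hour = 0.01   # assumed $/vCPU-hour for spot/preemptible capacity
vcpus_per_instance = 16           # assumed instance size
instances = 10                    # assumed degree of parallelism

cost = cpu_hours * spot_price_per_vcpu_hour
wall_clock_hours = cpu_hours / (vcpus_per_instance * instances)

print(f"approximate compute cost: ${cost:,.0f}")
print(f"wall-clock time with {instances} x {vcpus_per_instance}-vCPU instances: "
      f"{wall_clock_hours:.1f} hours")
```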

Nanopore sequencing is another route, which has much longer read lengths but is much more error prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits its utility for analysing bacteria samples in our use case. It is much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence for the time being, we’re reliant on trying to make an industrial scale, lab based batch process turn around data as fast as we are able – but having a network accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is in working out how to fund the build of the service, to map its likely cost profile as technology/process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing short cuts. Dogs have been trained to be amazingly prescient at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket sized spectrometer can assess the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we get to only eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is some of the common terminology between MicroBiome conditions and terms I once heard used in Chinese Herbal Medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago). The concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions). Then the consumption or application of specific plant matter that puts the body’s bacterial population back to operating norms.

Lingzhi Mushroom

Wild mushroom “Lingzhi” in China: cultivated in the far east, found to reduce Obesity

We’ve started to discover that some of the plants and herbs used in Chinese Medicine do have symbiotic effects on your bacterial population for the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

Mobile Phone User Interfaces and Chinese Genius

Most of my interactions with the online world use my iPhone 6S Plus, Apple Watch, iPad Pro or MacBook – but with one eye on the next big things from the US West Coast. The current Venture Capital fads are Conversational Bots, Virtual Reality and Augmented Reality. I bought a Google Cardboard kit for my grandson to have a first glimpse of VR on his iPhone 5C, though I spent most of the time trying to work out why his handset was too full to install any of the Cardboard demo apps; 8GB, 2 apps, 20 songs and a storage list that only added up to 5GB of use. Hence having to borrow his Dad’s iPhone 6 while we tried to sort out what was eating up 3GB. Very impressive nonetheless.


The one device I’m waiting to buy is an Amazon Echo (currently USA only). It’s a speaker with six directional microphones, an Internet connection and some voice control smarts; these are extendable by use of an application programming interface and database residing in their US East Datacentre. Out of the box, you can ask its nom de plume “Alexa” to play a music single, album or wish list. To read back an audio book from where you last left off. To add an item to a shopping or to-do list. To ask about local outside weather over the next 24 hours. And so on.

Its real beauty is that you can define your own voice keywords into what Amazon term a “Skill”, and provide your own plumbing to your own applications using what Amazon term their “Alexa Skills Kit”, aka “ASK”. There is already one UK bank that has prototyped a Skill for the device to enquire about their users’ bank balances, primarily as an assist to the visually impaired. There are more in the USA to control home lighting and heating by voice (and I guess it is very simple to give commands to change TV channels or to record for later viewing). The only missing bit is that of identity; the person speaking can be anyone in proximity to the device, or indeed any device emitting sound in the room; a radio presenter saying “Alexa – turn the heating up to full power” would not be appreciated by most listeners.
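To show how thin the plumbing for a Skill can be, here is a hedged sketch of a back-end handler; the intent name and response text are invented, while the JSON shape follows the Alexa Skills Kit interface:

```python
# Minimal sketch of the back-end for a custom Alexa Skill: a Lambda-style
# handler that receives the parsed intent from the Echo and returns speech.
def lambda_handler(event, context):
    request = event.get("request", {})

    if request.get("type") == "IntentRequest" and \
       request.get("intent", {}).get("name") == "GetBalanceIntent":
        speech = "Your current balance is one hundred and twenty pounds."
    else:
        speech = "Welcome. You can ask me for your balance."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```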

For further details on Amazon Echo and Alexa, see this post.

However, the mind wanders over to my mobile phone, and the disjointed experience it exposes to me when I’m trying to accomplish various tasks end to end. Data is stored in application silos. Enterprise apps quite often stop at a Citrix client turning your pocket supercomputer into a dumb (but secured) Windows terminal, where the UI turns into the normal Enterprise app silo soup to navigate.

Some simple client-side workflows can be managed by software like IFTTT – aka “IF This, Then That” – so I can get a new photo automatically posted to Facebook or Instagram, or notifications issued to me when an external event occurs. But nothing that integrates a complete buying experience. The current fad for conversational bots still falls well short; imagine the workflow of asking Alexa to order some flowers, as there are no visual cues to help that discussion and buying experience along.

For that, we’d really need to follow one of the Jeff Bezos edicts – wiping the slate clean, imagining the best experience from a user perspective and working back. But the lessons have already been learnt in China, where desktop apps weren’t a step on the evolution of mobile deployments in society. An article that runs deep on this – and what folks can achieve within WeChat in China – is impressive. See: http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html

I wonder if Android or iOS – with the appropriate enterprise APIs – could move our experience on mobile handsets to a similar next level of compelling personal servant. I hope the Advanced Development teams at both Apple and Google – or a startup – are already prototyping such a revolutionary, notifications-baked-in, mobile user interface.

My Apple Watch: one year on

Apple Watch Clock Face

I have worn my Apple Watch every day for a year now. I still think Apple over complicated the way they sell them, and didn’t focus on the core use cases that make it an asset to its users. For me, the stand outs are as follows:

  1. It tells the time, super accurately. The proof point is lining up several Apple watches next to each other; the second hands are always in perfect sync.
  2. You can see the time in the dark, just looking at your wrist. Stupidly simple, I know.
  3. When a notification comes in, you get tapped on the wrist. Feeling the device do that to you for the first time is one of the things that gives someone their first “oh” moment; when I demo the watch to anyone, putting it on their wrist and invoking the heart Taptic pulse is the key feature to show. Outside of message notifications, having the “Dark Sky” app tell me it’s going to rain outside in 5 or 10 minutes is really helpful.
  4. I pay for stuff using my Nationwide Debit or Credit cards, without having to take them out of my wallet. Outside my regular haunts, this still surprises people to this day; I fear that the take up is not as common as I expected, but useful nonetheless.
  5. I know it’s monitoring some aspects of my health (heart rate, exercise) and keeps egging me on, even though it’s not yet linked into the diet and food intake site I use daily (www.weightlossresources.co.uk).
  6. Being able to see who’s calling my phone and the ability to bat back a “busy, will call you back” reply without having to drag out my phone.

The demo in an Apple Store doesn’t major on any of the above; they tend to focus on aesthetics and on how you can choose different watch faces, an activity most people only ever do once. On two occasions while waiting in the queue to be served at an Apple Store, someone has noticed I’m wearing an Apple Watch and has asked what it’s like day to day; I put mine on their wrist and quickly step through the above. Both went from curiosity to buying one as soon as the Apple rep freed up.

I rarely if ever expose the application honeycomb. Simplicity sells, and I’m sure Apple’s own usage stats would show that to them. I also find Siri unusable every time I try to use it on the watch.

As for the future, what would make it even more compelling for me? Personally:

  • In the final analysis, an Apple Watch is a Low Energy Bluetooth enabled WiFi hub with a clock face on it. Having more folks connect other health sensors – in addition to the ones on the device itself – to health/diet related apps will align nicely with future “self service” health management trends.
  • Being able to tone down the ‘chatty-ness’ of notifications. I’m conscious that repeatedly glancing at my watch when in a meeting is a big no-no, and having more control over what gets to tap my wrist and what just floats by in a stream for when I look voluntarily would be an asset.
  • When driving (which is when I’m not wearing glasses), knowing who or what is notifying me in a much bigger font would help. Or the ability to easily throw a text-to-voice rendering of the from and subject lines to my phone or car speakers, with optional read back of the message body.
  • Siri. I just wish it worked to a usable standard. I hope there are a few Amazon Echos sitting in the dev labs in Cupertino and that someone there is replicating its functionality in a wrist form factor.

So, the future is an eclectic mix between a low energy Bluetooth WiFi hub for health apps, a super accurate watch, selective notification advisor and an Amazon Echo for your wrist – with integrated card payments. Then getting the result easily integrated into health apps and application workflows – which I hope is what their upcoming WorldWide Developers Conference will cover.

Apple Watch: My first 48 hours

Folks keep asking, so here are my first impressions of my Apple Watch. I bought the Stainless Steel one with a Classic Black Strap.

The experience in the Apple Store was a bit too focussed on changing the clock face design; the experience of accepting the default face to start with, and using it for real, is (so far) much more pleasant. But take it off the charger, put it on, and you get:

Apple Watch PIN Challenge

Tap in your pin, then the watch face is there:

Apple Watch Clock Face

There’s actually a small (virtual) red/blue LED just above the “60” atop the clock – red if a notification has come in, turning into a blue padlock if you still need to enter your PIN, but otherwise what you see here. London Time, 9 degrees centigrade, 26th day of the current month, and my next calendar appointment underneath.

For notifications it feels are deserving of my attention, it not only lights the LED (which I only get to see if I flick my wrist up to look at the watch face), but it also goes tap-tap-tap on my wrist. This can optionally also sound a small warning, but that’s something I switched off pretty early on. The taptic hint is nice, quiet and quite subtle.

Most of the set-up for apps and settings is done on the Apple iPhone you have paired up to the watch. Apps reside on the phone, and ones you already have that can talk to your watch are listed automatically. You can then select which ones you want to appear on the watch’s application screen, and a subset you want to have as “glances” for faster access. The structure looks something like this:

Apple Watch No Notifications / Apple Watch Clock Face

Apple Watch Heart Rate / Apple Watch Local Weather / Amazon Stock Quote / Apple Watch Dark Sky

Hence, swipe down from the top, you see the notification stream, swipe back up, you’re back to the clock face. Swipe up from the bottom, you get the last “glance” you looked at. In my case, I was every now and then seeing how my (long term buy and hold) shares in Amazon were doing after they announced the size of their Web Services division. The currently selected glance stays in place for next time I swipe up unless I leave the screen having moved along that row.

If I swipe from left to right, or right to left, I step over different “glances”. These behave like swiping between icon screens on an iPhone or iPad; if you want more detail, you can click on them to invoke the matching application. I have around 12 of these in place at the moment. Once done, swipe back up, and back to the clock face again. After around 6 seconds, the screen blacks out – until the next time you swing the watch face back into view, at which point it lights up again. Works well.

You’ll see it’s monitoring my heart rate, and measuring my movement. But in the meantime, if I want to call or message someone, I can hit the small button on the side and get a list of 12 commonly called friends:

Apple Watch Friends

Move the crown around, click the picture, and I can call or iMessage them directly. Text or voice clip. Yes, directly on the watch, even if my iPhone is upstairs or atop the cookery books in the kitchen; it has a microphone and a speaker, and works from anywhere over local WiFi. I can even see who is phoning me and take their calls on the watch.

If I need to message anyone else, I can press the crown button in and summon Siri; the accuracy of Siri is remarkable now. One of my sons sent an iMessage to me when I was sitting outside the Pharmacy in Boots, and I gave a full sentence reply (verbally) then told it to send – 100% accurately despite me largely whispering into the watch on my wrist. Must have looked strange.

There are applications on the watch but these are probably a less used edge case; in my case, the view on my watch looks just like the layout I’ve given in the iPhone Watch app:

Apple Watch Applications

So, I can jump in to invoke apps that aren’t set as glances. My only surprise so far was finding that Facebook haven’t yet released their Watch or Messenger apps, though Instagram (which they also own) is there already. Eh, tap tap on my wrist to tell me Paula Radcliffe had just completed her last London Marathon:

BBC News: Paula Radcliffe

and a bit later:

Everton 3 Man Utd 0

Oh dear, what a shame, how sad (smirk – Aston Villa fan typing). But if there’s a flurry of notifications, and you just want to clear the lot off in one fell swoop, just hard press the screen and…

Clear All Notifications

Tap the X and zap, all gone.

There are a myriad of useful apps; I have Dark Sky (which gives you a hyper local forecast of any impending rain), City Mapper (helps direct you around London on all different forms of Transport available), Uber, and several others. They are there in the application icons, but also enabled from the Watch app on my phone (Apps, then the subset selected as Glances):

Ian’s Watch Apps / Ian’s Watch Glances

With that, tap tap on my wrist:

Apple Watch Stand Up!

Hmmm – I’ve been sitting for too long, so time to get off my arse. It will also assess my exercise in the day and give me some targets to achieve – which it’ll then display for later admiration. Or disgust.

There is more to come. I can already call a Uber taxi directly from the watch. The BBC News Glance rotates the few top stories if selected. Folks in the USA can already use it to pay at any NFC cash terminal with a single click (if the watch comes off your wrist, it senses this and will insist on a PIN then). Twitter gives notifications and has a glance that reports the top trend hashtag when viewed.

So far, the battery is only getting from 100% down to 30% in regular use from 6:00am in the morning until 11:30pm at night, so looking good. Boy, those Amazon shares are going up; that’ll pay for my watch many times over:

Watch on Arm

Overall, impressed so far, very happy with it, and I’m sure it’s the start of a world where software steps submerge into a world of simple notifications and responses to same. And I’m sure Jane (my wife) will want one soon. Just have to wean her off her desire for the £10,000+ gold one to match her gold coloured MacBook.

Apple Watch: what makes it special


Based on what I’ve seen discussed – and alleged – ahead of Monday’s announcement, the following are the differences people will see with this device.

  1. Notifications. Inbound message or piece of useful context? It will let you know by tapping gently on your arm. Early users are already reporting that their phone – which until now got pulled out for review whenever a notification arrived – now stays in their pocket most of the time.
  2. Glances. Google Now on Android puts useful contextual information on “cards”. Hence when you pass a bus stop, up pops the associated next bus timetable. Walk close to an airport check-in desk, up pops your boarding pass. Apple guidelines say that a useful app should communicate its raison d’être within 10 seconds – hence a ‘glance’.
  3. Siri. The watch puts a Bluetooth microphone on your wrist, and Apple APIs can feed speech into text based forms straight away. And you may have noticed that iMessage already allows you to send a short burst of audio to a chosen person or group. Dick Tracy’s watch comes to life.
  4. Brevity. Just like Twitter, but even more focussed. There isn’t the screen real estate to hold superfluous information, so developers need to agonise on what is needed and useful, and to zone out unnecessary context. That should give back more time to the wearer.
  5. Car Keys. House Keys. Password Device. There’s one device and probably an app for each of those functions. And it can probably start bleating if someone tries to walk off with your mobile handset.
  6. Stand up! There’s already quotes from Apple CEO Tim Cook saying that sitting down for excessively long periods of time is “the new cancer”. To that effect, you can set the device to nag you into moving if you appear to not be doing so regularly enough.
  7. Accuracy. It knows where you are (with your phone) and can set the time. The iPhone adjusts after a long flight based on the identification of the first cell tower it gets a mobile signal from on landing. And day to day, it’ll keep your clock always accurate.
  8. Payments. Watch to card reader, click, paid. We’ll need the roll out of Apple Pay this side of the Atlantic to realise this piece.

It is likely to evolve into a standalone Bluetooth hub of all the sensors around and on you – and that’s where its impact will be felt most in time.

With the above in mind, I think the Apple Watch will be another big success story. The main question is how they’ll price the expen$ive one when its technology will evolve by leaps and bounds every couple of years. I just wonder if a subscription to possessing a Rolex-priced watch is a possible business model being considered.

We’ll know this time tomorrow. And my wife has already taken a shine to the expensive model, based purely on its looks with a red leather strap. Better start saving… And in the meantime, a few sample screenshots to pore over: