Mobile Phone User Interfaces and Chinese Genius

Most of my interactions with the online world use my iPhone 6S Plus, Apple Watch, iPad Pro or MacBook – but with one eye on the next big things from the US West Coast. The current Venture Capital fads are Conversational Bots, Virtual Reality and Augmented Reality. I bought a Google Cardboard kit for my grandson to have a first glimpse of VR on his iPhone 5C, though I spent most of the time trying to work out why his handset was too full to install any of the Cardboard demo apps; an 8GB handset with 2 apps, 20 songs, and a storage list that only added up to 5GB of use. Hence we had to borrow his Dad’s iPhone 6 while we tried to sort out what was eating up the missing 3GB. Very impressive nonetheless.


The one device I’m waiting to buy is an Amazon Echo (currently USA only). It’s a speaker with six directional microphones, an Internet connection and some voice control smarts; these are extendable by use of an application programming interface and database residing in Amazon’s US East data centre. Out of the box, you can address it by its nom de plume “Alexa” and ask it to play a music single, album or wish list. To read back an audio book from where you last left off. To add an item to a shopping or to-do list. To ask about local outside weather over the next 24 hours. And so on.

Its real beauty is that you can define your own voice keywords into what Amazon term a “Skill”, and provide your own plumbing to your own applications using what Amazon term their “Alexa Skills Kit”, aka “ASK”. There is already one UK bank that has prototyped a Skill letting its users enquire about their bank balance, primarily as an assist to the visually impaired. There are more in the USA that control home lighting and heating by voice (and I guess it would be very simple to give commands to change TV channels or to record for later viewing). The only missing bit is that of identity; the person speaking can be anyone in proximity to the device, or indeed any device emitting sound in the room; a radio presenter saying “Alexa – turn the heating up to full power” would not be appreciated by most listeners.
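By way of illustration, a Skill ultimately boils down to a handler that receives a JSON request describing the user’s intent and returns the speech to play back. Here’s a minimal Python sketch of that request/response shape; the “GetBalanceIntent” name and the reply text are made-up placeholders, not any real bank’s Skill.

```python
# Minimal sketch of an Alexa Skill handler. The Alexa Skills Kit
# delivers each utterance as a JSON request and expects a JSON
# response carrying the speech to play back. Intent names and reply
# text here are illustrative placeholders.

def handle_alexa_request(event):
    """Route an incoming Alexa request to a spoken response."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        text = "Welcome. Ask me for your balance."
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "GetBalanceIntent":
            text = "Your balance is one hundred pounds."  # placeholder figure
        else:
            text = "Sorry, I didn't understand that."
    else:
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

sample = {"request": {"type": "IntentRequest",
                      "intent": {"name": "GetBalanceIntent"}}}
print(handle_alexa_request(sample)["response"]["outputSpeech"]["text"])
```

The real kit adds session handling, slots and account linking on top, but the core loop really is this small.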

For further details on Amazon Echo and Alexa, see this post.

However, the mind wanders over to my mobile phone, and the disjointed experience it exposes when I’m trying to accomplish various tasks end to end. Data is stored in application silos. Enterprise apps quite often stop at a Citrix client, turning your pocket supercomputer into a dumb (but secured) Windows terminal, where the UI becomes the usual Enterprise app silo soup to navigate.

Some simple client-side workflows can be managed by software like IFTTT – aka “IF This, Then That” – so I can get a new photo automatically posted to Facebook or Instagram, or notifications issued to me when an external event occurs. But nothing that integrates a complete buying experience. The current fad for conversational bots still falls well short; imagine the workflow of asking Alexa to order some flowers, with no visual cues to help that discussion and buying experience along.
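That trigger-to-action pattern IFTTT provides can be sketched in a few lines of Python; the service names and actions below are placeholders for illustration, not IFTTT’s real API.

```python
# Toy "IF This, Then That" rule: one trigger event fans out to a list
# of configured actions. The actions here are stand-ins for the real
# "post to Facebook" / "notify me" recipes.

def post_to_facebook(url):
    return f"posted {url} to Facebook"

def notify_me(url):
    return f"notification: new photo at {url}"

def on_new_photo(photo_url, actions):
    """Fire each configured action when the trigger event occurs."""
    return [action(photo_url) for action in actions]

results = on_new_photo("https://example.com/p/1.jpg",
                       [post_to_facebook, notify_me])
print(results)
```

The limitation the paragraph above points at is visible even in the sketch: each rule is a single trigger and a single hop, with no state carried across steps – nowhere near a full buying workflow.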

For that, we’d really need to follow one of Jeff Bezos’s edicts: wipe the slate clean, imagine the best experience from a user perspective, and work backwards. But the lessons have already been learnt in China, where desktop apps weren’t a step on the evolutionary path of mobile deployments in society. An article that runs deep on this – and what folks can achieve within WeChat in China – is impressive. See: http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html

I wonder if Android or iOS – with the appropriate enterprise APIs – could move our experience on mobile handsets to a similar next level of compelling personal servant. I hope the Advanced Development teams at both Apple and Google – or a startup – are already prototyping such a revolutionary, notifications-baked-in, mobile user interface.

Apple Watch: what makes it special


Based on what I’ve seen discussed – and alleged – ahead of Monday’s announcement, the following are the differences people will see with this device.

  1. Notifications. Inbound message or piece of useful context? It will let you know by tapping gently on your arm. Early users are already reporting that their phone – which until now came out whenever a notification arrived – now stays in their pocket most of the time.
  2. Glances. Google Now on Android puts useful contextual information on “cards”. Hence when you pass a bus stop, up pops the associated next bus timetable. Walk close to an airport check-in desk, up pops your boarding pass. Apple guidelines say that a useful app should communicate its raison d’être within 10 seconds – hence a ‘glance’.
  3. Siri. The watch puts a Bluetooth microphone on your wrist, and Apple APIs can feed speech into text-based forms straight away. And you may have noticed that iMessage already allows you to send a short burst of audio to a chosen person or group. Dick Tracy’s watch comes to life.
  4. Brevity. Just like Twitter, but even more focussed. There isn’t the screen real estate to hold superfluous information, so developers need to agonise on what is needed and useful, and to zone out unnecessary context. That should give back more time to the wearer.
  5. Car Keys. House Keys. Password Device. There’s one device – and probably an app – for each of those functions. And it can probably start bleating if someone tries to walk off with your mobile handset.
  6. Stand up! There are already quotes from Apple CEO Tim Cook saying that sitting down for excessively long periods of time is “the new cancer”. To that effect, you can set the device to nag you into moving if you appear not to be doing so regularly enough.
  7. Accuracy. It knows where you are (with your phone) and can set the time. The iPhone adjusts after a long flight based on the identification of the first cell tower it gets a mobile signal from on landing. And day to day, it’ll keep your clock always accurate.
  8. Payments. Watch to card reader, click, paid. We’ll need the roll out of Apple Pay this side of the Atlantic to realise this piece.

It is likely to evolve into a standalone Bluetooth hub for all the sensors around and on you – and that’s where its impact will compound over time.

With the above in mind, I think the Apple Watch will be another big success story. The main question is how they’ll price the expen$ive one when its technology will evolve by leaps and bounds every couple of years. I just wonder if a subscription to possessing a Rolex-priced watch is a business model being considered.

We’ll know this time tomorrow. And my wife has already taken a shine to the expensive model, based purely on its looks with a red leather strap. Better start saving… And in the meantime, a few sample screenshots to pore over:

Yo! Minimalist Notifications, API and the Internet of Things

I thought it was a joke, but 4 hours of code resulting in $1m of VC funding, at an estimated $10M company valuation, raised quite a few eyebrows. The Yo! project team have now released their API, and with it some possibilities – over and above the initial ability to just say “Yo!” to a friend. At the time he provided some of the funds, John Borthwick of Betaworks said that there is a future in delivering binary status updates, or even commands to objects to throw an on/off switch remotely (blog post here). The first green shoots are now appearing.

The main enhancement is the ability to carry a payload with the Yo!, such as a URL. Hence your Yo!, when received, can be used to invoke an application or web page with a bookmark already put in place. That facilitates a notification, which is effectively guaranteed to have arrived, to say “look at this”. Probably extensible to all sorts of other tasks.

The other big change is the provision of an API, which allows anyone to create a Yo! list of people to notify against a defined name. So, in theory, I could create a virtual user called “IANWARING-SIMPLICITY-SELLS”, and publicise that to my blog audience. If any user wants to subscribe, they just send a “Yo!” to that user, and bingo, they are subscribed and it is listed (as another contact) on their phone handset. If I then release a new blog post, I can use a couple of lines of JavaScript or PHP to send the notification to the whole subscriber base, carrying the URL of the new post; one key press to view. If anyone wants to unsubscribe, they just drop the username on their handset, and the subscriber list updates.
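Those “couple of lines” might look something like this Python sketch. The endpoint and parameter names (“api_token”, “link”) follow Yo’s historically documented REST interface, but treat them as assumptions and check the current docs before relying on them.

```python
# Sketch of broadcasting a blog-post notification to all Yo!
# subscribers of an account. Endpoint and parameter names follow
# Yo's historically documented API and should be verified.
import json
import urllib.request

YO_ALL_URL = "https://api.justyo.co/yoall/"

def build_yo_all(api_token, link):
    """Build the payload the /yoall/ endpoint expects."""
    return {"api_token": api_token, "link": link}

def send_yo_all(api_token, link):
    """Yo every subscriber of the account, carrying a URL payload."""
    data = json.dumps(build_yo_all(api_token, link)).encode("utf-8")
    req = urllib.request.Request(
        YO_ALL_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # network call
        return resp.status

# Build (but don't send) a sample broadcast:
payload = build_yo_all("MY-API-TOKEN", "https://example.com/new-post")
print(payload)
```

One POST, and every subscriber gets a Yo! with the new post a key press away.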

Other applications described include:

  • Getting a Yo! when a FedEx package is on its way
  • Getting a Yo! when your favourite sports team scores – “Yo us at ASTONVILLA and we’ll Yo when we score a goal!”
  • Getting a Yo! when someone famous you follow tweets or posts to Instagram
  • Breaking News from a trusted source
  • Tell me when this product comes into stock at my local retailer
  • To see if there are rental bicycles available near to you (it can Yo! you back)
  • You receive a payment on PayPal
  • To be told when it starts raining in a specific town
  • Your stocks positions go up or down by a specific percentage
  • Tell me when my wife arrives safely at work, or our kids at their travel destination

but I guess there are other “Internet of Things” applications to switch on home lights, open garage doors, switch on (or turn off) the oven. Or to Yo! you if your front door has opened unexpectedly (carrying a link to the picture of who’s there?). Simple one click subscriptions. So, an extra way to operate Apple HomeKit (which today controls home appliance networks only through Siri voice control).

Early users are showing simple RESTful URLs and HTTP GET/POSTs triggering events via the Yo! API. I’ve also seen someone say that it will work with CoAP (Constrained Application Protocol), a lightweight protocol stack suitable for use within simple electronic devices.

Hence, notifications that are implemented easily and over which you have total control. Something Apple appear to be anal about, particularly in a future world where you’ll be walking past low energy Bluetooth beacons in retail settings every few yards. Your appetite to be handed notifications will degrade quickly with volume if there are virtual attention beggars every few paces. Apple have been locking down access to their iBeacon licensees to limit the chance of this happening.

So, the Yo! API is the first of many notification services (alongside Google Now and Apple’s own notification services), and a simple one at that. One that can be mixed with IFTTT (if this, then that), a simple web-based logic and task-action system also backed by Betaworks. And one which may well be accessible directly from the embedded electronics around us.

The one remaining puzzle is how the authors will be able to monetise their work (their main asset is a picture of the type and frequency of notifications you welcome receiving, and that you seek). Still a bit short of Google’s core business (which historically was to monetise purchase intentions) at this stage in Yo!’s development. So, suggestions in the case of Yo! most welcome.

 

The madness that is Hodor and Yo. Or is it?

One constant source of bemusement – well, really horror – is the inefficiency of social media in delivering a message to its intended recipients. In any company setting, saying “I didn’t receive your message” is the management equivalent of the “dog ate my homework” excuse at school; the occurrence is considered very rare and the excuse a poor attempt to seek forgiveness.

Sending bulk (but personalised) email to a long list of people who know you is just the start. Routinely, 30% of what you send will fall short of its destination; no matter how many campaigns I’ve seen, none get higher than 70% delivery to the intended recipients. In practice, the number actually read by the recipient normally tops out at 20-30% of the number sent. Spam filters are often over-zealous too. With practice, you find out that sending email to arrive in the recipient’s in-tray at 3:00pm on a Thursday afternoon local time is 7x more likely to be read than the same one sent at 6:00am on a Sunday morning. And that mentioning the recipient’s name, an indication of what it’s about and what they’ll see when the email is opened – all hooked together in the subject line – vastly improves open rates. But most people are still facing 70-80% wastage rates. I’ve done some work on this, but that experience is reserved for my consulting clients!

So, thank god for Facebook. Except that a status update routinely gets seen by only 16% of your friends on average (the range is 2%-47% depending on all sorts of factors, but 16% is the average). There are two ways to improve this. One is to make your own list that others can subscribe to; if they remember to access that list by name, they’ll see everything. But few remember to do this. The other is to pay Facebook for delivery, where you can push your update (or invite to an interest list, aka ‘likes’) to a defined set of demographics in specific geographic areas. But there are few guarantees that you’ll get >50% viewership even then.

So, thank god for Twitter. Except the chance of some of your followers actually seeing your tweets drops into the sub-1% range; the norm is that you’ll need to be watching your stream as the update is posted. So you’re down to using something like TweetDeck to follow individual people in their own column, or a specific hashtag in another. You very quickly run out of screen real estate to see everything you actually want to see. This is a particular frustration to me, as I quite often find myself in the middle of a tweetstorm (where a notable person, like @pmarca – Marc Andreessen – will routinely run off 8-12 numbered tweets); the end result is like listening to a group of experts discussing interesting things around a virtual water cooler, and that is fascinating to be part of. The main gotcha is that I get to see his stuff early on a Saturday morning in the UK only because he tweets before folks on the west coast of the USA head to bed – otherwise I’d never catch it.

Some of the modern messaging apps (like Snapchat) at least tell you when that picture has been received and read by the recipient(s) you sent it to – and duly deleted on sight. But we’re well short of an application where you can reliably follow Twitter-scale dialogues for the people you really want to follow. Twitter themselves just appear happy to keep suggesting all sorts of people for me to follow, seemingly oblivious that routine acceptance would do little other than further pollute my stream with useless trash.

Parking all this, I saw one company produce a spoof Android custom keyboard, where the only key provided just says “Hodor”. Or if you press it down for longer, it gives you “Hodor” in bold. You can probably imagine the content of the reviews of it on the Google Play Store (mainly long missives that just keep repeating the word).

Then the next madness. Someone writing an application that just lists your friends names, and if you press their name, it just sends through a message to them saying “Yo!”.

Yo! Screenshot

Just like the Facebook Pokes of old. A team of three programmers wrote it in a couple of days, and it’s already been downloaded many thousands of times from the Apple App Store. It did sound to me like a modern variation of the Budweiser “Whassup” habit of a few years back, so I largely shook my head and carried on with other work.

The disbelief set in when I found out that this app had been subject to a $1.5 million VC funding round, valuing the company (this is their only “significant” app) at $10m. Then I found out one of the lead investors was none other than the very respected John Borthwick (who runs Betaworks, an application studio housed in the old Meatpacking District of New York).

His thing seems to be that this application ushers in a new world, where we quite often want to throw a yes/go-ahead/binary notification reliably to another entity. That may be a person (to say I’ve left work, or I’ve arrived at the restaurant, etc) or indeed a device (say ‘Yo’ to the coffee maker as you approach work, or to turn on the TV). So, there may indeed be some logic in the upcoming world of the “Internet of Things”, hyped to death as it may be.

John’s announcement of his funding can be found here. The challenge will no doubt be to see whether this investment is as prescient as many of his other ones (IFTTT, Bit.ly, Dots, Digg Deeper, etc) have been to date. In the meantime, back to coding my own app – which is slightly more ambitious than that now famous one.

The Internet of Things withers – while HealthKit ratchets along

FDA Approved Logo

I sometimes shudder at the estimates, as once outlined by executives at Cisco, that reckon the market for the “Internet of Things” – communicating sensors embedded everywhere – would likely be a $19 trillion market. A market is normally people willing to invest to make money, save money, improve convenience or reduce waste. Or a mix. I then look at various analyst reports that size both the future – and the current – market. I really can’t work out how they arrive at today’s estimated monetary amounts, let alone make the leap of faith into the future stellar revenue numbers. Just like IBM with their alleged ‘Cloud’ volumes, it’s difficult to make out which current products are stuffed inside the alleged totals.

One of my son’s friends is a Sales Director for a distributor of sensors. There appear to be good use cases in utility networks, such as monitoring water or gas flow to estimate where leaks are appearing and the scale of the losses. This is apparently already well served. As are industrial applications, based on pneumatics, fluid flow and hook-ups to SCADA equipment. A bit of RFID so stock movements can be automatically tracked through the distribution process. Outside of these, there are the three usual consumer areas: cars, health and home equipment control – the very three areas that both Apple and Google appear to be focussed on.

To which you can probably add Low Power Bluetooth Beacons, which allow a phone handset to know its precise location, even where GPS co-ordinates are not available (inside shopping centres, for example). If you’re in an open field with sight of the horizon in all directions, circa 10-14 GPS satellites should be “visible”; if your handset sees three of them, it can suss your x and y co-ordinates to a metre or so. If it sees four satellites, that’s normally enough to calculate your x, y and z co-ordinates – ie: geographic location and height above sea level. If it can see fewer, it needs another clue. Hence a quiet rollout where vendors are offering these LEB beacons and can supply the translation from their individual identifiers to their exact locations.
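Conceptually, that identifier-to-location translation is just a lookup: the handset hears a beacon’s identifier tuple and asks a registry what it maps to. The identifiers and place names below are invented for illustration.

```python
# Sketch of beacon-to-location translation: an iBeacon advertises a
# (UUID, major, minor) tuple; the venue registers what each tuple
# means. All values here are made up.

BEACON_REGISTRY = {
    ("f7826da6-4fa2", 1, 101): "Shopping centre, Level 1, North entrance",
    ("f7826da6-4fa2", 1, 102): "Shopping centre, Level 1, Food court",
}

def locate(uuid, major, minor):
    """Translate a heard beacon identifier into a registered place."""
    return BEACON_REGISTRY.get((uuid, major, minor), "unknown beacon")

print(locate("f7826da6-4fa2", 1, 102))
```

The commercial value sits entirely in that registry, which is why vendors guard the identifier-to-location mapping.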

In Apple’s case, Passbook loyalty cards and boarding passes already get triggered with an icon on the iOS 8 lock screen when you’re adjacent to a Starbucks outlet or Virgin Atlantic check-in desk; one icon press, and your payment card or boarding pass is there for you. I dare say the same functionality is appearing in Google Now on Android; it can already suss when I get out of my car and start to walk, and keeps a note of my parking location – so I can ask it to navigate me back precisely. It’s also started to tell me what web sites people look at when they are in the same restaurant that I’m sitting in (normally the web site or menu of the restaurant itself).

We’re in a lull between Apple’s Worldwide Developer Conference and next week’s equivalent Google I/O developer event, where Google’s versions of Health and HomeKit may well appear. Maybe further developments to link your car’s Engine Control Unit to the Internet as well (currently better engaged by Phil Windley’s FUSE project). Apple appear to have done a stick and twist on connecting an iPhone to a car’s audio system only, where the car’s electronics use BlackBerry’s QNX embedded operating system; Android implementations from Google are more ambitious but (given long car model cycle times) likely to take longer to hit volume deployments. Unless we get an unexpected announcement at Google I/O next week.

My one surprise is that my previous blog post on Apple’s HomeKit got an order of magnitude more readers than my two posts on the Health app and the HealthKit API (posts here and here). I’d never expected that using your iPhone as a universal, voice-controlled home lock/light/door remote would be so interesting to people. I also hear that Nest (now a Google subsidiary) are about to formally announce shipment of their 500,000th room temperature controller. Not sure about their smoke alarm volumes to date though.

That apart, I noticed today that the US Food and Drug Administration had, in March, issued some clarifications on which types of mobile connected devices would not warrant regulatory classification as a medical device in the USA. They were:

  1. Mobile apps for providers that help track or manage patient immunizations by assessing the need for immunization, consent form, and immunization lot number

  2. Mobile apps that provide drug-drug interactions and relevant safety information (side effects, drug interactions, active ingredient) as a report based on demographic data (age, gender), clinical information (current diagnosis), and current medications

  3. Mobile apps that enable, during an encounter, a health care provider to access their patient’s personal health record (health information) that is either hosted on a web-based or other platform

So, it looks like Apple’s Health application and their HealthKit API have already skipped past the need for regulatory approvals there. The only thing I’ve not managed to suss is how they’d measure blood pressure and glucose levels on a wearable device without being invasive. I’ve seen someone mention that a hi-res camera is normally sufficient to detect pulse rates by seeing image changes in a picture of a patient’s wrist. I’ve also seen an inference that suitably equipped glasses can suss basic blood composition by looking at what is exposed visibly in the iris of an eye. But if Apple’s iWatch – as commonly rumoured – can detect glucose levels for diabetes patients, I’m still agonising over how they’d do it. Short of eating or attaching another (probably disposable) Low Energy Bluetooth sensor for the phone handset to collect data from.

That looks like it’ll be Q4 before we all know the story. All I know right now is that if Apple produce an iWatch – and indeed return the iPhone design to being more rounded, like the 3GS was – my wife will expect me to be in the queue on release date to buy them both for her.

A first look at Apple HomeKit

Apple HomeKit Logo

Today’s viewing of video from Apple’s Worldwide Developers Conference concerned HomeKit, which is the integration platform to control household appliances from your iPhone. Apple have defined a common set of Accessory Profiles, which are configured into a Home > Zone > Room hierarchy (you can define several ‘home’ locations, but one of them is normally selected as the primary one). Native devices include:

  • Garage Door Openers (with associated lighting)
  • Lights
  • Door locks
  • Thermostats
  • IP (Internet Protocol) Cameras
  • Switches

Currently, there is a myriad of different per-vendor standards to control home automation products, but Apple are providing functionality to enable hardware (or software) bridges between disparate protocols and their own. Once a bridge has been discovered, the iPhone sees all the devices sitting on the other side of the bridge as if they were directly connected to the iPhone and using the Apple-provided interface protocols.

Every device type has a set of characteristics, such as:

  • Power State
  • Lock State
  • Target State
  • Brightness
  • Model Number
  • Current Temperature
  • etc

When devices are first defined, each has a compulsory “identify me” action. Hence if you’re sitting on the floor, trying to work out which of twelve identical-looking lightbulbs in the room to give an appropriate name, the “identify me” action on the iPhone pick list will result in the matching bulb blinking twice; for a security camera, blinking a colour LED, and so forth.

Each device, its room name, zone (like “upstairs”, “back garden”) and home name, plus the common characteristic actions, are encoded and enacted using Siri – Apple’s voice control on the iPhone. “Switch on all downstairs lights”, “Set the room temperature to 20 degrees C” and so forth are spoken into your iPhone handset. That is the default user interface for the whole home automation setup. The HomeKit resident database is in turn also available for use by vendor-specific products via the HomeKit API, should a custom application be desirable.
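To make the hierarchy concrete, here is a plain-Python model – emphatically not Apple’s actual HomeKit API, which is exposed in Objective-C/Swift – of homes, zones, rooms, accessories with characteristics, and the compulsory “identify” action:

```python
# Illustrative model of the Home > Zone > Room hierarchy and
# per-accessory characteristics described above. Not Apple's API;
# just the shape of the data it manages.

class Accessory:
    def __init__(self, name, characteristics):
        self.name = name
        # e.g. {"power_state": False, "brightness": 0}
        self.characteristics = dict(characteristics)

    def identify(self):
        # Real hardware would blink the bulb twice or flash a camera LED.
        return f"{self.name}: blink twice"

class Room:
    def __init__(self, name):
        self.name = name
        self.accessories = []

class Home:
    def __init__(self, name):
        self.name = name
        self.zones = {}  # zone name -> list of Room

    def add_room(self, zone, room):
        self.zones.setdefault(zone, []).append(room)

    def set_all(self, characteristic, value):
        """Rough analogue of 'switch on all downstairs lights'."""
        for rooms in self.zones.values():
            for room in rooms:
                for acc in room.accessories:
                    if characteristic in acc.characteristics:
                        acc.characteristics[characteristic] = value

home = Home("Primary")
lounge = Room("Lounge")
lounge.accessories.append(Accessory("Ceiling light", {"power_state": False}))
home.add_room("downstairs", lounge)
home.set_all("power_state", True)
print(lounge.accessories[0].characteristics)
```

Siri’s job is essentially to parse the spoken command into one of these characteristic writes against the named home, zone or room.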

There are of course extensive security controls to frustrate “man in the middle” attacks, or any other attempt to subvert the security of your device connections. For developers, Apple provide a software simulator so that you can test your software against a wide range of device types, even before the hardware is made available to you.

Most of the supporting detail needed to build compliant devices is found in the MFi (Made for iPhone/iPad) guidelines, which are only available the other side of a license agreement with Apple here. The full WWDC presentation on HomeKit (just under an hour long) is called “Introduction to HomeKit” and is present in the list of video sessions from WWDC here.

Overall, very impressive. That’s the home stuff largely queued up, just awaiting news of a bridge I think. Knowing how simple the voice setup is for a programmer on Android Jelly Bean (voice enabling an app is circa 20 lines of JavaScript), I’m sure a Google equivalent is eminently possible; if Google haven’t done their own API, then a bridge to Apple’s ecosystem (if the licensing allows it) should not be a major endeavour.

So, the only missing thing was talk of iBeacon support. However, that is a different use case. There are already pilots that sense the presence of a low energy Bluetooth beacon, and bring specific applications onto the lock screen. Examples include the Starbucks payment card app coming forward to make itself immediately available when you’re close to a Starbucks counter, or the Virgin Atlantic app making your boarding card available when you approach the check-in desk at an airport. Both are features of Apple’s Passbook loyalty card app – which is already used by hundreds of retailers, supermarkets and airlines.

The one thing about iBeacon is that you can enable your iPhone 5S to be a low energy beacon in its own right. You have full control over this, and your presence is not made available to anything but applications on your own iPhone handset – over which, in the final analysis, you have total control. One use case already is pairing your Pebble smartwatch with your iPhone 5S handset, so that if your phone leaves your immediate location by more than a specified short distance (say, 2 metres), you’re told immediately and aggressively.

So, lots to look forward to in the Autumn. Quite a measured approach compared to the “Internet of Things” which other vendors are hyping with impunity (and quoting staggering revenue numbers that I find difficult to map onto any reality – starting with the huge current market size folks seem to suggest already exists).

My next piece of homework will be to look at CloudKit, now that Apple are dogfooding its use in their own products while releasing it to third-party developers. Hopefully a good sign that Apple are now providing cloud services that match the resilience of competitive offerings for the first time – even if they are specific to Apple’s own platforms. But that’s all the other side of finishing my company’s end-of-year tax return prep work first!