IT Trends into 2018 – or the continued delusions of Ian Waring


I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they listed 13 items!). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three you have a large base of employees in a country without a single payer healthcare system, mired in business model inefficiencies. Getting an operationally efficient pilot of reasonable scale running with internal users in the JV companies, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions on them, it’s difficult to envision a corrective action that Amazon aren’t already applying to themselves. This program is real fox in the hen house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring, using wearable sensors, is the next piece of icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those situations you can quickly distill down to “great technology, what can we use this for?”. Conferencing – even voice – yes. Shared Kanban flows (Trello), shared BaseCamp views, communal use of GitHub, all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; six sigma, doggedly industrialising a process, over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march to interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place, and only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user finds themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably appearing to be of a similar age.

The one striking thing i’ve found was talking to a vendor who built a machine learning model that went through IT Helpdesk tickets, instant message and email interaction histories, nominally to prioritise the natural language corpus into a list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was improving FAQ sheets in the first instance. Ian thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!

IoT (Internet of Things). The sample provided was tying together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories. It then cites security concerns and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big number analyses from vendors, but little from application perspectives. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets or smart watches. Security devices are a smaller base; embedded simple devices smaller still. I think i’m more excited when sensors get meaningful vision capabilities (listed separately below). Until then, i’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel, in flour in a mixing bowl, or the quickest way to enact a desired action – usually away from a keyboard and out of earshot to anyone else. It does liberate my two youngest grandchildren who are learning to read and write. Those apart, it’s just another UI used occasionally – albeit i’m still in awe of folks that dictate their book writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch. In the meantime, 3D printing. Not my area, but it feels to me like low volume local production of customised parts, and i’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to end use. My dentist 3D prints parts of teeth while patients wait, but otherwise i’ve not had any exposure that I could translate as a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that recalled the point in prehistory when the number of different life form species on earth vastly accelerated; this was the Cambrian Period, when life forms first developed eyes. A combination of cheap camera hardware components, and excellent machine learning Vision APIs, should be transformative. Especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not-present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.
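To make “describe what’s in a picture” concrete, here’s a minimal sketch of calling one such machine learning Vision API – Google’s images:annotate REST endpoint with label detection. The API key and image URL are placeholders, and the exact response shape is worth checking against the current documentation:

```ts
// A hedged sketch: ask a cloud Vision API to label what's in a picture.
// Assumes Node.js 18+ (built-in fetch) and a valid Cloud Vision API key.
const API_KEY = "YOUR_API_KEY"; // placeholder - supply your own key

async function describeImage(imageUri: string): Promise<string[]> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [{
          image: { source: { imageUri } },
          features: [{ type: "LABEL_DETECTION", maxResults: 5 }],
        }],
      }),
    }
  );
  const data = await res.json();
  // Pull out the label descriptions, e.g. ["car", "number plate", ...]
  return (data.responses?.[0]?.labelAnnotations ?? [])
    .map((l: { description: string }) => l.description);
}

describeImage("https://example.com/photo.jpg").then(console.log);
```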

In the Open Source Software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io’s Broca NLU product, which can build a suggested intent:action Service Catalogue from Natural Language analysis of support tickets, CRM data, instant message and support email content.

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley were in a grave, he’d be spinning in it (but thank god he’s not).

Microservices is about making your organisation’s data and processes available to applications that can be internally facing, externally facing or both, using web interfaces. You typically work with Apigee (now owned by Google) or 3Scale (owned by Red Hat) to produce a well documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
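As a minimal sketch of what “exposing a service over a web interface” means in practice – the orders service and its data below are hypothetical, standing in for what an API management layer like Apigee or 3Scale would then front with documentation, discovery and security:

```ts
// A minimal microservice sketch: expose internal data over a versioned,
// documented HTTP interface. Assumes Node.js with Express installed
// (npm install express); the "orders" service is purely illustrative.
import express from "express";

const app = express();

// Stand-in for an internal system of record.
const orders = [{ id: 1, status: "dispatched" }];

// One narrowly-scoped, discoverable endpoint per capability.
app.get("/api/v1/orders/:id", (req, res) => {
  const order = orders.find((o) => o.id === Number(req.params.id));
  order ? res.json(order) : res.status(404).json({ error: "not found" });
});

app.listen(3000, () => console.log("orders API listening on :3000"));
```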

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, with the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently on both your own server hardware and in different cloud vendors’ infrastructures.

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. The same goes for the “DevOps” teams currently employed in many organisations to iterate and deploy custom code rapidly and regularly, by mingling Developer and Operations staff.

With Serverless, the theory is that you should be able to write code once, and have it fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the database services used with them) are different enough to make applications based on one limited to that cloud provider only.
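To show how little of the plumbing you write yourself, here’s a minimal sketch of a serverless function in the AWS Lambda Node.js style – the event fields are assumptions for illustration; the platform handles firing it up and scaling it:

```ts
// A minimal serverless function sketch, following Lambda's Node.js handler
// convention: export a handler, let the platform invoke and scale it.
// The event shape here is a hypothetical HTTP-style payload.
export const handler = async (event: { name?: string }) => {
  const name = event.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```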

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.

Blockchain is quite a poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; even the use of virtual coins (ICOs) to act as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and i’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. Devices like LIDAR (the spinning sensor atop self-driving cars) typically consume several GB of data per mile travelled, where there is insufficient reliable bandwidth to delegate all the compute to a remote cloud server. So there are models of how a car should drive itself that are built in the cloud, but downloaded and executed in the car without a high speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared back later when adjacent to a fast home or work network.

Very fast chips are a thing; the CPU in my Apple Watch is faster than a room-sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks have really hit it out of the park on every iteration they’ve shipped to date). Development environments for powerful, embedded systems are something i’ve not seen so far though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish attention on the brands they represent in very specific target audiences. Advertisers are the paying customers. Users are the Product. All the incentives and business models align to these characteristics.

Political operators, both local as well as foreign actors, have fundamentally subverted the model. Controversial and most often incorrect and/or salacious stories get wide distribution before any truth emerges. Fake accounts and automated bots further pervert the engagement indicators that drive increased distribution (notably, one video segment of one Donald Trump speech got two orders of magnitude more “likes” than the number of people who actually played the video at all). Above all, messages that appeal to different filter bubbles drive action in some cases, and antipathy in others, to directly undermine voting patterns.

This is probably the biggest challenge facing large social networks, at the same time that politicians (themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to make the world a better place, and to sideline these other people and their poison. Work to do.

Mobile Phone User Interfaces and Chinese Genius

Most of my interactions with the online world use my iPhone 6S Plus, Apple Watch, iPad Pro or MacBook – but with one eye on next big things from the US West Coast. The current Venture Capital fads are Conversational Bots, Virtual Reality and Augmented Reality. I bought a Google Cardboard kit for my grandson to have a first glimpse of VR on his iPhone 5C, though spent most of the time trying to work out why his handset was too full to install any of the Cardboard demo apps; 8GB, 2 apps, 20 songs and a storage list that only added up to 5GB of use. Hence having to borrow his Dad’s iPhone 6 while we tried to sort out what was eating up 3GB. Very impressive nonetheless.


The one device I’m waiting to buy is an Amazon Echo (currently USA only). It’s a speaker with six directional microphones, an Internet connection and some voice control smarts; these are extendable by use of an application programming interface and database residing in their US East Datacentre. Out of the box, you can ask its nom de plume “Alexa” to play a music single, album or wish list. To read back an audio book from where you last left off. To add an item to a shopping or to-do list. To ask about local outside weather over the next 24 hours. And so on.

Its real beauty is that you can define your own voice keywords into what Amazon term a “Skill”, and provide your own plumbing to your own applications using what Amazon term their “Alexa Skills Kit”, aka “ASK”. There is already one UK bank that has prototyped a Skill for the device to enquire about a user’s bank balance, primarily as an assist to the visually impaired. There are more in the USA that control home lighting and heating by voice command (and I guess it is very simple to give commands to change TV channels or to record for later viewing). The only missing bit is that of identity; the person speaking can be anyone in proximity to the device, or indeed any device emitting sound in the room; a radio presenter saying “Alexa – turn the heating up to full power” would not be appreciated by most listeners.
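To give a flavour of that plumbing, here is a hedged sketch of the server side of a Skill: Alexa POSTs a JSON request naming the intent it matched, and your endpoint replies with JSON for it to speak aloud. The “BankBalanceIntent” name and the hardcoded balance are hypothetical; the envelope mirrors the general Alexa Skills Kit request/response shape, so check the current ASK documentation before relying on it:

```ts
// A sketch of a Skill endpoint handler. The intent name and balance are
// hypothetical; the envelope mirrors the general ASK request/response format.
interface AlexaRequest {
  request: { type: string; intent?: { name: string } };
}

function speak(text: string) {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text },
      shouldEndSession: true,
    },
  };
}

function handleAlexaRequest(req: AlexaRequest) {
  if (req.request.intent?.name === "BankBalanceIntent") {
    // In a real Skill, this would call the bank's own (secured) API.
    return speak("Your balance is one hundred and twenty pounds.");
  }
  return speak("Sorry, I didn't catch that.");
}
```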

For further details on Amazon Echo and Alexa, see this post.

However, the mind wanders over to my mobile phone, and the disjointed experience it exposes to me when I’m trying to accomplish various tasks end to end. Data is stored in application silos. Enterprise apps quite often stop at a Citrix client turning your pocket supercomputer into a dumb (but secured) Windows terminal, where the UI turns into normal Enterprise app silo soup to navigate.

Some simple client-side workflows can be managed by software like IFTTT – aka “IF This, Then That” – so I can get a new photo automatically posted to Facebook or Instagram, or notifications issued to me when an external event occurs. But nothing that integrates a complete buying experience. The current fad for conversational bots still falls well short; imagine the workflow of asking Alexa to order some flowers, with no visual cues to help that discussion and buying experience along.

For that, we’d really need to follow one of the Jeff Bezos edicts – wiping the slate clean, imagining the best experience from a user perspective, and working back. But the lessons have already been learnt in China, where desktop apps weren’t a step on the evolution path of mobile deployments in society. An article that runs deep on this – and what folks can achieve within WeChat in China – is impressive. See: http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html

I wonder if Android or iOS – with the appropriate enterprise APIs – could move our experience on mobile handsets to a similar next level of compelling personal servant. I hope the Advanced Development teams at both Apple and Google – or a startup – are already prototyping such a revolutionary, notifications-baked-in, mobile user interface.

New Mobile Phone or Tablet? Do this now:

If you have an iPhone or iPad, install “Find My iPhone”. If you have an Android phone or tablet, install “Android Device Manager”. Both are free of charge, and will prevent you looking like a dunce on social media if your device gets lost or stolen. Instead, you can get your phone’s (or tablet’s) current location on a map – from any Internet connection.

If your device does go missing, just log in to iCloud or Android Device Manager on the web, and voila – it will draw its location on a map – and allow various options (like putting a message on the screen, turning it into a remote speaker that the volume control can’t mute, or wiping the device).

Phone lost in undergrowth and the battery about to die? Android phones will routinely bleat their location to the cloud before all power is lost, so ADM can still remember where you should look.

So, how does a modern smartphone work out where you are? For the engineering marvel that is the Apple iPhone, it sort of works like this:

  1. If you’re in the middle of an open field with the horizon visible in all directions, your handset will be able to pick up signals from up to 14 Global Positioning System (GPS) Satellites. If it sees only 2 of them (with the remainder obscured by buildings, structures or your car roof, etc), it can work out your x and y co-ordinates to within 3 meters – worldwide. If it can see at least 3 of the 14 satellites, then it can work out your elevation above sea level too.
  2. Your phone will typically be communicating its presence to a local cell tower. Your handset knows the approximate location of these, albeit in distances measured in kilometers or miles. Its primary use is to suss which worldwide time zone you are in; that’s why your iPhone sets itself to the correct local time when you switch on your handset at an airport after your flight lands.
  3. Your phone will sense the presence of WiFi routers and reference a database that associates each router’s unique Ethernet address with the location where it is consistently found (by other handsets, or by previous data collection when building online street view maps). Such signals normally have a 100–200 metre range. This range is constrained because WiFi usually uses the 2.4GHz band, which is the frequency at which a microwave oven agitates and heats water; the fact that the signal suffers badly in rain is why it was primarily intended for use inside buildings.

A combination of the above are sensed and combined to drill down to your phone’s timezone; its location as being in a mobile phone cell area (a few hundred yards in densely populated areas, or miles in large rural areas or open countryside); to being close to a specific WiFi router; or (all else being well) your exact GPS location to within 10 feet or so.
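As a toy sketch of that “combine and drill down” idea – each source reports a position with an estimated error radius, and the handset prefers whichever usable source is currently most precise (the figures below are illustrative):

```ts
// A toy sketch of combining positioning sources: each reports an estimated
// error radius, and we prefer the most precise one currently available.
interface SourceFix {
  source: "cell" | "wifi" | "gps";
  lat: number;
  lon: number;
  errorRadiusM: number; // estimated error, in metres
}

function bestFix(fixes: SourceFix[]): SourceFix | undefined {
  return fixes.reduce<SourceFix | undefined>(
    (best, f) => (!best || f.errorRadiusM < best.errorRadiusM ? f : best),
    undefined
  );
}

// e.g. cell tower ≈ kilometres, WiFi ≈ 100-200m, GPS ≈ a few metres:
const fix = bestFix([
  { source: "cell", lat: 51.45, lon: -1.0, errorRadiusM: 2000 },
  { source: "wifi", lat: 51.452, lon: -0.998, errorRadiusM: 150 },
  { source: "gps", lat: 51.4521, lon: -0.9983, errorRadiusM: 3 },
]);
console.log(fix?.source); // "gps"
```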

A couple of extra capabilities feature on the latest iPhone and Android handsets to extend location coverage to areas in large buildings and shopping centres, where the ability of a handset to see any GPS satellites is severely constrained or absent altogether.

  • One is Low Energy Bluetooth Beacons. Your phone can sense the presence of nearby beacons (or, at your option, be one itself); these normally carry a two-part numeric identifier, one half associated with a particular retail organisation, the other unique to each beacon unit (it is up to the organisation to map each beacon to a location and associated attributes – like “this is the Perfume Department Retail Sale Counter on Floor 2 of the Reading Department Store”). An application can tell whether it can sense the signal at all, whether you’re within 10′ of the beacon, or whether the handset is immediately adjacent to the beacon (eg: handset being held against a till).

You’ll notice that there is no central database of bluetooth beacon locations and associated positions and attributes. The handset manufacturers are relatively paranoid about a handset user being spammed incessantly as they walk past a street of retail outlets; hence, you must typically opt into the app of a specific retailer to get notifications at all, and you can switch them off if they abuse your trust.

  • Another feature of most modern smartphone handsets is the presence of miniature gyroscopes, accelerometers and magnetic sensors in every device. Hence the ability to know how the phone is positioned, in both magnetic compass direction and its orientation in 3D space, at all times. It can also sense your speed from the force and direction of your movements. Hence even in an area or building with no GPS signal, your handset can fairly accurately suss your position from the last moment it had a quality location fix on you, augmented by the directions and speeds you’ve followed since.

Typically, apps don’t lock onto your positioning full time; users know how their phone batteries tend to drain much faster when their handsets are used with all sensors running full time, in a typical app like Google Maps (in Navigation mode) or Waze. Instead, handsets tend to fill a location history, so a user can retrieve their own historical movements or places they’ve recently visited. I don’t know of any app that uses this data, but I do know that in Apple’s case, you’d have to give specific permission to an app to use such data with your blessing (or it would get no access to it at all). So, mainly for future potential use.

As for other location apps – Apple Passbook is already throwing my Starbucks card onto my iPhone’s lock screen when I’m close to a Starbucks location, and likewise my boarding card at a Virgin Atlantic check-in desk. I also have another app (Glympse) that messages my current map location, speed and eta (continuously updated) to any person I choose to share that journey with – normally my wife when on the train home, or my boss if affected by travel delays. But i’m sure there is more to come.

In the meantime, I hope people just install “Find my iPhone” or “Android Device Manager” on any handset they buy or use. Both make life less complicated if your phone or tablet ever goes missing, and you don’t get to look like a dunce for not taking the precautions up front that any rational person should take.

Yo! Minimalist Notifications, API and the Internet of Things

Thought it was a joke, but 4 hours of code resulting in $1m of VC funding, at an estimated $10M company valuation, raised quite a few eyebrows. The Yo! project team have now released their API, and with it some possibilities – over and above the initial ability to just say “Yo!” to a friend. At the time he provided some of the funds, John Borthwick of Betaworks said that there is a future of delivering binary status updates, or even commands to objects to throw an on/off switch remotely (blog post here). The first green shoots are now appearing.

The main enhancement is the ability to carry a payload with the Yo!, such as a URL. Hence your Yo!, when received, can be used to invoke an application or web page with a bookmark already put in place. That facilitates a notification, which is effectively guaranteed to have arrived, to say “look at this”. Probably extensible to all sorts of other tasks.

The other big change is the provision of an API, which allows anyone to create a Yo! list of people to notify against a defined name. So, in theory, I could create a virtual user called “IANWARING-SIMPLICITY-SELLS”, and publicise that to my blog audience. If any user wants to subscribe, they just send a “Yo!” to that user, and bingo, they are subscribed and it is listed (as another contact) on their phone handset. If I then release a new blog post, I can use a couple of lines of Javascript or PHP to send the notification to the whole subscriber base, carrying the URL of the new post; one key press to view. If anyone wants to unsubscribe, they just drop the username on their handset, and the subscriber list updates.
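Those “couple of lines” look something like the sketch below – a single POST to notify all subscribers of a username, carrying the new post’s URL. The endpoint and parameter names follow the Yo! API as it was documented at the time, so treat them as assumptions to verify:

```ts
// A minimal sketch: Yo! all subscribers of an account, with a link payload.
// Assumes Node.js 18+ (built-in fetch) and the account's API token.
async function yoAll(apiToken: string, postUrl: string): Promise<void> {
  await fetch("https://api.justyo.co/yoall/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_token: apiToken, link: postUrl }),
  });
}

// e.g. on publishing a new blog post:
// yoAll("MY-YO-API-TOKEN", "https://www.example.com/new-post");
```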

Other applications described include:

  • Getting a Yo! when a FedEx package is on its way
  • Getting a Yo! when your favourite sports team scores – “Yo us at ASTONVILLA and we’ll Yo when we score a goal!”
  • Getting a Yo! when someone famous you follow tweets or posts to Instagram
  • Breaking News from a trusted source
  • Tell me when this product comes into stock at my local retailer
  • To see if there are rental bicycles available near to you (it can Yo! you back)
  • You receive a payment on PayPal
  • To be told when it starts raining in a specific town
  • Your stocks positions go up or down by a specific percentage
  • Tell me when my wife arrives safely at work, or our kids at their travel destination

but I guess there are other “Internet of Things” applications to switch on home lights, open garage doors, or switch on (or turn off) the oven. Or to Yo! you if your front door has opened unexpectedly (carrying a link to a picture of who’s there?). Simple one-click subscriptions. So, an extra way to operate Apple HomeKit (which today controls home appliance networks only through Siri voice control).

Early users are showing simple Restful URLs and HTTP GET/POSTs to trigger events to the Yo! API. I’ve also seen someone say that it will work with CoAP (Constrained Application Protocol), a lightweight protocol stack suitable for use within simple electronic devices.

Hence, notifications that are implemented easily and over which you have total control. Something Apple appear to be anal about, particularly in a future world where you’ll be walking past low energy bluetooth beacons in retail settings every few yards. Your appetite to be handed notifications will degrade quickly with volumes if there are virtual attention beggars every few paces. Apple have been locking down access to their iBeacon licensees to limit the chance of this happening.

The Yo! API is the first of many notification services (alongside Google Now, and Apple’s own notification services), and a simple one at that. One that can be mixed with IFTTT (“if this, then that”), a simple web based logic and task action system. And one which may well be accessible directly from embedded electronics around us.

The one remaining puzzle is how the authors will be able to monetise their work (their main asset is an idea of the type and frequency of notifications you welcome receiving, and that you seek). Still a bit short of Google’s core business (which historically was to monetise purchase intentions) at this stage in Yo!’s development. So, suggestions in the case of Yo! most welcome.


How that iPhone handset knows where I am

I’ve done a little bit of research to see how an Apple iPhone tracks my location – at least when i’ll be running iOS 8 later this autumn. It looks like it picks clues up from lots of places as you go:

  1. The signal from your local cell tower. If you switch your iPhone on after a flight, that’s probably the first thing it sees. This is what the handset uses to set your timezone and adjust your clock immediately.
  2. WiFi signals. As with Google, there is a location database accessed that translates WiFi router MAC addresses into an approximate geographic location where they’ve been sensed before. At least for the static ones.
  3. The Global Positioning System sensors, which work with both the US GPS and Russian GLONASS satellite networks. If you can stand in a field and see the horizon all around you, then your phone should have up to 14 satellites visible. Operationally, if it can see 2, you can get your x and y co-ordinates to within a meter or two. If it can see 3, then you get x, y and z co-ordinates – enough to give your elevation above sea level as well.
  4. Magnetometer and Gyroscope. The iPhone has an electronic compass and some form of gyroscope inside, so the system software can sense direction, orientation (in 3D space) and movement. So, when you move from outdoors to an indoor location (like a shopping centre or building), the iPhone can remember the last known accurate GPS fix, and deduce (based on direction and speed as you move since that last sampling) your current position – a toy sketch of this dead reckoning follows this list.
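Here is that dead-reckoning idea from item 4 as a toy sketch: starting from the last good fix, advance the position using the heading and speed the motion sensors report. It uses a flat-earth metres-per-degree approximation, purely for illustration:

```ts
// Toy dead reckoning: project the last known fix forward using heading,
// speed and elapsed time. Good enough to illustrate the idea; real
// implementations fuse many noisy sensor readings.
interface Fix { lat: number; lon: number; }

function deadReckon(last: Fix, headingDeg: number, speedMs: number, dtSec: number): Fix {
  const distance = speedMs * dtSec;                // metres travelled
  const rad = (headingDeg * Math.PI) / 180;
  const dNorth = distance * Math.cos(rad);
  const dEast = distance * Math.sin(rad);
  const mPerDegLat = 111_320;                      // ~metres per degree of latitude
  const mPerDegLon = mPerDegLat * Math.cos((last.lat * Math.PI) / 180);
  return {
    lat: last.lat + dNorth / mPerDegLat,
    lon: last.lon + dEast / mPerDegLon,
  };
}

// e.g. walking north-east at 1.4 m/s for 30 seconds from a known fix:
console.log(deadReckon({ lat: 51.4521, lon: -0.9983 }, 45, 1.4, 30));
```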

The system software on iOS 8 just returns your location, plus an indication of error scale, based on all of the above. For some reason, the indoor positioning with the gyroscope is of high resolution for your x and y position, but returns the z position as a floor number only (0 being the ground floor, -1 one down from there, 1 and upwards the levels above).

In doing all the above, if it senses you’ve moved indoors, then it shuts down the GPS sensor – as it is relatively power hungry and saves the battery at a time when the sensor would be unusable anyway.

Beacons

There are a number of applications where it would be nice to sense your proximity to a specific location indoors, and to do something clever in an application. For example, when you turn up in front of a Starbucks outlet, for Apple Passbook to put your loyalty/payment card onto the lock screen for immediate access; same with a Virgin Atlantic check-in desk, where Passbook could bring up your Boarding Pass in the same way.

One of the ways of doing this is to deploy low energy bluetooth beacons. These normally have two numbers associated with them: the first 64 bits are a licensee-specific number (such as “Starbucks”), the second 64-bit number an identifier specific to that licensee only. This may be a specific outlet on their own applications database, or an indicator of a department location in a department store. It is up to the company deploying the Low Energy Bluetooth Beacons to encode this for their own iPhone applications (and to reflect the positions of the beacons in their app if they redesign their store or location layouts).

Your iPhone can sense beacons around it to four levels:

  1. I can’t hear a beacon
  2. I can sense one, but i’m not close to it yet
  3. I can sense one, and i’m within 3 meters (10 feet) of it right now
  4. I can sense one, and my iPhone is immediately adjacent to the beacon

Case (4) is for things like cash register applications; (2) and (3) are probably good enough for your store-specific application to get fired up as you approach.
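As a sketch of how an app might react to those four ranging levels – the proximity names mirror what iOS reports for a beacon, while the actions are hypothetical:

```ts
// React to beacon ranging levels. Proximity names mirror the four cases
// above (can't hear / in range / within ~3m / immediately adjacent);
// the app actions are hypothetical.
type Proximity = "unknown" | "far" | "near" | "immediate";

function onBeaconRanged(proximity: Proximity): void {
  switch (proximity) {
    case "unknown":   break;                          // case 1: can't hear a beacon
    case "far":       prefetchStoreContent(); break;  // case 2: sensed, not close yet
    case "near":      showWelcomeOffer(); break;      // case 3: within ~3 metres
    case "immediate": presentPaymentCard(); break;    // case 4: at the till
  }
}

// Hypothetical app actions:
function prefetchStoreContent() { console.log("prefetching store content"); }
function showWelcomeOffer() { console.log("showing welcome offer"); }
function presentPaymentCard() { console.log("presenting payment card"); }
```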

There are some practical limitations, as low energy bluetooth uses the same 2.4GHz spectrum that WiFi does, and hence suffers the same restrictions. That frequency agitates water (like a microwave oven), hence the reason it was picked for indoor applications; things like rain, moisture in walls and indeed human beings standing in the signal path tend to arrest the signal strength quite dramatically.

The iPhone 5S itself has an inbuilt Low Energy Bluetooth Beacon, but in line with the way Apple protect your privacy, it is not enabled by default. Until it is explicitly switched on by the user (who is always given an ability to decline the location sharing when any app requests this), hardware in store cannot track you personally.

Apple appear to have restricted licensees to using iBeacons for their own applications only, so only users of Apple iOS devices can benefit. There is an alternative “Open Beacon” effort in place, designed to enable applications that run across multiple vendor devices (see here for further details).

The Smart Watch Future

With the recent announcement and availability of various Android watches from Samsung, LG and Motorola, it’s notable that they all appear to have a compass and gyroscope, but no current implementation of GPS (i’ve got to guess for reasons of limited battery power and the sensor’s power appetite). Hence I expect that any direction sensing Smartwatch applications will need to talk to an application on the mobile phone handset in the user’s pocket – over low energy bluetooth. Once established, the app on the watch will know the device’s orientation in 3D space and the direction it is headed; probably enough to keep pointing you towards a destination correctly as you walk along.

The only thing we don’t yet know is whether Apple’s own rumoured iWatch will break the mould, or, like its Android equivalents, act as a peripheral to the network hub that is the user’s phone handset. We should know that later on this year.

In the meantime, it’s good to see that Apple’s model is to protect the user’s privacy unless they explicitly allow a vendor app to track their location, which they can agree to or decline at any time. I suspect a lot of vendors would like to track you, but Apple have picked a very “it’s up to the iPhone user and no-one else” approach – for each and every application, one by one.

Footnote: Having thought about it, I think I missed two things.

One is that I recall reading somewhere that if the handset battery is running low, the handset will bleat its current location to the cloud. Hence if you dropped your handset and it was lost in vegetation somewhere, it would at least log its last known geographic location for the “Find my iPhone” service to be able to pinpoint it as best it could.

Two is that there is a visit history stored in the phone, so your iPhone’s travels (locations, timestamps, length of time stationary) are logged as a series of move vectors between stops. These are GPS-type locations, not mapped to any physical location name or store identifier (or even position in stores!). The user has to give specific permission for this data to be exposed to a requesting app. Besides use for remembering distances for expenses, I can think of few user-centric applications where you would want to know precisely where you’ve travelled in the last few days. Maybe a bit better as a version of the “secret” app available for MacBooks, where if you mark your device on a cloud service as having been stolen, you can get specific feedback on its movements since.

The one thing that often bugs me is people putting out calls on Facebook to help find their stolen or mislaid phones. Every iPhone should have “Find my iPhone” enabled (which is offered at iOS install customisation time) or the equivalent for Android (Android Device Manager) activated likewise. These devices should be difficult to steal.

Corning Glass, Android, Amazon then – surprise!


One piece of uncharted territory in the mobile phone and tablet industry relates to how much Gorilla Glass (used for touch screens) Corning manufactures, compared to an estimate of how many devices are physically shipped. Corning routinely publish the total area of glass produced, from which analysts attempt to triangulate against the relative sizes, and volumes, of the products that employ the technology.

The biggest estimated gap appears to relate to glass used to power “media tablets” in China. These tend to run the Open Source version of Android (aka “AOSP” – Android Open Source Project), don’t use any of the Google Play services (hence never need to authenticate with Google), and are assumed to be personal TVs that feed content from WiFi. Or suitable capacity SD memory cards traded (illicitly?) in some Chinese markets, preloaded with films or video from other sources.

The existence of these low cost WiFi personal TVs would explain why Apple, with a seemingly sub 15% unit market share, still drive a vastly disproportionate amount of web and e-commerce traffic that operators experience. However, such tablets – Kindle Fire being the most prominent exception – are fairly rare outside China and India.

There are rumours that Amazon are about to release a mobile phone – I don’t even think they’ve said phone themselves – but on their announcement invite video, folks are rocking their heads from side to side looking at a handheld device. All the bets are on showing items in 3D, as demonstrated by this (now Google) employee – who conjured the effect using a Nintendo Wii remote and matching sensor bar. A fascinating (less than 5 minutes) demonstration of what was possible some months back here: Head Tracking for Desktop VR Displays

Of course, by the time you read this, Amazon will have likely blown your head away with a ready to ship (soon) device, and some compelling content or applications. As an Amazon Prime customer, i’m looking forward to it. Not least having a 3D display without the need for special glasses!

Footnote: the Amazon Fire Phone was announced, with two of its features described (in 80 seconds) in this BBC video. This neglected to mention that the WiFi can wind up to full dual channel 802.11ac speeds (as fast as 300Mb/s), and that it already supports the UK LTE and HSPA+ bands out of the box. You can also throw video to your TV using Miracast (as present in a lot of modern TVs already, and in many set-top boxes). At the moment, like the Fire TV set top box, it has been announced for the US only.

I must admit, I did tap the US off-contract price into Google: 649 usd in gbp – and it comes out at £381.52 + VAT = £458 or so. As in the USA, that’s a 32GB phone for the price of a 16GB iPhone 5S. Then I told myself off for doing this, as the USA cellular market is a strange beast (most business is in $80/month contracts including handset subsidies – where the handset cost is $200 up front). Everything about the hardware is great, and the sources of initial moans from the tech community – US pricing, being tied to AT&T for contract sales, no sign of a rumoured bundled carrier data contract, etc – are things that Amazon could iterate on at blinding speed, both in the USA and elsewhere.
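For what it’s worth, the sums quoted check out at the exchange rate implied by the figures (the rate itself is an assumption recovered from the post):

```ts
// Verify the back-of-envelope pricing: USD price, converted to GBP at the
// implied rate, then UK VAT at 20% added.
const usd = 649;
const impliedRate = 381.52 / 649;   // ≈ 0.588 GBP per USD at the time
const exVat = usd * impliedRate;    // ≈ £381.52
const incVat = exVat * 1.2;         // ≈ £457.82, i.e. "£458 or so"
console.log(exVat.toFixed(2), incVat.toFixed(2));
```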

It is a shopaholic’s dream phone – it can look up any of millions of items visually, or by listening to music or TV shows – and order them for you (with delivery on a bundled Amazon Prime service) in a very, very slick fashion. About the only thing it can’t do yet is value antiques. Or can it?

Uber in London: The Streisand Effect keeps on giving


With the same overall theme as yesterday, if you’re looking at your future, step one is to look at what your customers would value, then to work back to the service components to deliver it.

I’ve followed Uber since I first discovered them in San Francisco, and it looks a simple model – to the user. You want to go from where you are to another local destination. You typically see where the closest driver is to you on your smartphone. You ask your handset for a price to go to a specific destination. It tells you. If you accept, the car is ordered and comes to pick you up. When you get dropped off, your credit card is charged, and both you and the taxi driver get the opportunity to rate each other. Job done.

Behind that facade is a model of supply and demand. Taxi drivers that can clock on and off at will. At times of high demand and dwindling available ride capacity, prices are ramped up (to “surge” pricing) to encourage more drivers onto the road. Drivers and customers with voluminous bad ratings removed. Drivers paid well enough to make more money than those in most taxi firms ($80-90,000/year in New York), or the freedom to work part time – even down to a level where your reward is to pay for your car for a few hours per week of work, and have free use of it at other times.

The service is simple and compelling enough that i’d have thought taxi firms would have cottoned onto how the service works, and replicated it before Uber ever appeared on these shores. But, with five years wasted, Uber have appeared – and taxi drivers all over Europe decided to run the most effective advertising campaign for an upstart competitor in their history. A one-day 850% subscriber growth; that really takes some doing, even if you were on the same side.

I’m just surprised that whoever called the go-slows all over Europe didn’t take the time out to study what we in the tech industry know as “The Streisand Effect” – Wikipedia reference here. BBC Radio 2 even ran a segment on Uber at lunchtime today, followed by every TV News Bulletin i’ve heard since. I downloaded the app as a result of hearing it on that lunchtime slot, as I guess many others did too (albeit no coverage in my area 50 miles West of London – yet). Given the five years of missed prep time, I think they’ve now lost – or find themselves in fast follower mode to incorporate similar technology into their service before they have a mass exodus to Uber (of customers, then drivers).

London cabbies do know all the practical rat runs that SatNav systems are still learning, but even that is a matter of time now. I suspect appealing for regulation will, at best, only delay the inevitable.

The safest option – given users love the simplicity and lack of surprises in the service – is to get busy quickly. Plenty of mobile phone app prototyping help available on the very patch that London Black Cab drivers serve.

Apple buying Beats; one idea everyone appears to be missing


There’s been a lot of commentary on blogs and podcasts following the apparent strong rumour that Apple are paying over $3 Billion to buy the Beats by Dr Dre headphone business and associated music streaming service. Most of it very bemused as to why Apple would want to do this. Thinking about it, I have my own theory, though i’d be first to admit I may be way out.

In trying to deduce a theory, a few characteristics of the position Apple find themselves in today:

  • Worldwide, they have circa 70% of all handset makers’ profits.
  • In every market they enter, they displace the previous market leading high end Android competitors, and relentlessly ratchet up their market share (currently 20% in most established geographies)
  • They are parked in the premium, highest price volume segment everywhere they serve
  • In developing markets, a lot of their initial adoption comes from users buying previous model second hand or refurbished handsets.
  • The latest 5c model was parked a bit too close to the 5S, making it a decoy price in both contract and prepay markets. Colour did not drive appeal to a younger demographic as was originally expected.
  • Carriers (with the exception of Japan) tend to sell a handset on a cost recovery basis, either upfront (for PAYG) or as part of a 2 year term (Contract)
  • Users change their handsets about once every two years
  • There is a burgeoning market for the collection, disposal and/or resale of old iPhones
  • Historically, the strongest competitor has been Samsung. However, upstarts like Xiaomi are taking share from Samsung in China, and showing signs of doing that elsewhere as they sell into more territories. Xiaomi’s target demographic is 20-30 year old, first-time purchasers who have left the parental nest; high quality product, thin margins, supplemented by useful, high quality, paid online services
  • Smartphone growth has started to stall, where the growing segments are either at the bottom (feature phone replacement or first step onto the ladder) or in the midrange (circa $300)

So, if I was Apple, what would I do in order to preserve the current high end volumes and profit margins, but dip down into growth segments? I think my strategy would be:

  1. In the car markets of the USA, Toyota sell Lexus at the premium end of the market, and Scion to the young, first time buyer demographic. Mindful there are also Honda/Acura and Nissan/Infiniti with similar volume/premium brands. Beats becomes Apple’s brand for the Xiaomi (20-30 year old) demographic; past that, many will hop onto the Apple brand as they age (or become wealthier).
  2. Apple formalise the bundling of a replacement handset and associated online services into a perpetual $15-ish monthly subscription. A new replacement requires return of the old handset, which Apple can continue to use in emergent markets; by doing so, they garner more wallet share. Telco services become relatively unbundled commodities.

I think that would give them high growth, more people – in their hundreds of millions – entering the Apple ecosystem, and without affecting the current iPhone business dynamics at all.

So, what do you think? It’ll be interesting to see how this pans out in the coming months.

Fixed! Tableau on my Mac using Amazon WorkSpaces


I found out today that we may need to wait another month for Tableau Desktop Professional for the Mac to be released, and i’ve been eager to finish off my statistical analysis project. I’ve collected 12 years’ worth of daily food intake courtesy of WeightLossResources, which splits out to calories, carbs, protein, fat and exercise calories – and is tabulated against weekly weight readings.

Google Fusion Tables – in which I did a short online course – can do most things except calculate and draw a straight line, or exponential equivalent, through a scatter plot. This is meat and drink to Tableau, which unfortunately (for Mac, Chromebook and iPad user me) runs only on Microsoft Windows.

I got a notification this morning that Amazon Web Services – as promised at their AWS Summit 2014 in London last week – had released Amazon WorkSpaces hosted within Europe. This provisions quite a meaty PC for you, which you can operate through provided client software on your local PC, Mac, Android tablet or iPad. There is also a free add-on to sync the content of a local Windows or Mac directory with the virtual storage on the hosted PC, so you can hook in access to files on your local device if needed. There are more advanced options for corporate users, including Active Directory support and the ability to use that to sideload apps for a user community – though that is way in advance of what i’m doing here.

There are a number of options, from the “Basic” single CPU, 3.75GB memory, 50GB disk PC up to one with 2 CPUs, 7GB of memory, 100GB of disk and the complete Microsoft Office Professional Suite on board. More here. Prices from $35 to $75/PC per month.

I thought i’d have a crack at provisioning one for the month, giving me 2 weeks to play with a trial copy of Tableau Desktop Professional (i’ve not used it since V7, and the current release is 8.1). Within 20 minutes of requesting it off my AWS console, I received an email saying it had been provisioned and was ready to go. So…

WorkSpaces Set Up


You tell it what you want, and it goes away for 20 minutes provisioning your request (I managed to accidentally do this for a US region, but deleted that and selected Ireland instead – it provisioned just the one in the Ireland datacentre). Once done, it sent me an email with a URL and a registration code for my PC (it will do this for each user if you provision several at once):

AWS WorkSpaces Registration


I tapped in the registration code from the email received, it did the initial piece of the client end of the configuration, then asked me to login:

AWS Workspaces Login


Once i’d done that, it then invited me to install the client software, which I did for Mac OS/X locally, and emailed the links for Android and iOS to my email address to pick up on those devices. For what it’s worth, the Android version said my Nexus 5 wasn’t a supported device (I guess it needs a tablet), but the iOS version installed fine on my iPad Mini.

AWS Workspaces Client Setup


And in I went. A Windows PC. Surprisingly nippy, and I felt no real difference between this and what I remember of a local Windows 7 laptop I used to have at Computacenter some 18 months ago now:

AWS Workspaces Microsoft Windows


The main need then was to drop a few files onto the hard disk, but I had to revisit the Amazon WorkSpaces web site and download the Sync package for Mac OS/X. Once installed on my Mac, it asked me for my PC’s registration code again (it wouldn’t accept it copy/pasted in on that one screen, so I had to carefully re-enter a short string), asked which local Mac directory I wanted to use to sync with the hosted PC, and off it went. It syncs just like Dropbox, and took a few minutes to populate with quite a few files I had sitting there already. Once up, I used the provided Firefox to download Tableau Desktop Professional, and the Excel driver I needed (as I don’t have Microsoft Office on my basic version here) and – voila. Tableau running fine on AWS WorkSpaces, on my MacBook Air:

Tableau Desktop Professional Running


Very snappy too, and i’m now back at home with my favourite Analytics software of all time – on my Mac, and directly on my iPad Mini also. The latter with impressive keyboard and mouse support, just a two finger gesture (not that one) away at all times.

So, I now have the tools to complete the statistical analysis storyboard of my 12 years of nutrition and weight data – and to set specific calorie and carb content to hit my 2lbs/week downward goal again (i’ve been tracking at only half that rate in the last 6 months).

In the meantime, i’ve been really impressed with Amazon WorkSpaces. Fast, simple and inexpensive – and probably of wide applicability to lots of Enterprise customers I know. A Windows PC that I can dispose of again as soon as i’ve finished with it, for a grand sum of less than £21 for my month’s use. Tremendous!

ScratchJr – programming for kids 5-7 – Fully Funded: yay!

ScratchJr UI

I’m absolutely delighted to report that ScratchJr – a tablet based system that teaches 5-7 year old kids how to program – duly hit its $80,000 funding goal just after bids closed on Kickstarter. With that, we have a version for the Apple iPad and a version for Android Tablets this year, and work is now underway to produce the associated teaching curriculum aids and materials.

Just waiting to get news of the ScratchJr t-shirt I get in exchange for my $45 contribution (which went via Amazon Payments as soon as the end date and successful funding level had been reached). I’ll order one in a size that should fit our 2-year old Granddaughter (and iPad Mini user) Ruby.

Full text of the announcement from the Project Lead Mitchel Resnick here.

If you haven’t seen it, I thoroughly recommend watching the video there. It’s an absolute delight to see kids so young speaking so authoritatively about the projects they have created on this platform at such a young age. The next step is to get Primary School teachers in the UK engaged with this; running something like the Education work we executed at Demon Internet (which got free and useful materials into over 95% of UK Secondary Schools for a cost of £50,000, plus £10,000 for associated competition prizes) would be fantastic, though mindful that there are many more primary schools than secondary ones here.

A three-year lease for a tablet – including support, insurance and warranty – costs parents or their schools circa £10 per month over that term for an iPad Mini class device. Whether or not kids end up programming, it nevertheless gives them all sorts of other logic/sequencing skills applicable to a wide number of career options later in their lives.

ScratchJr in Use by Pupil

The older sibling product Scratch, the excellent Sugarlabs work (also being implemented on tablets) and Raspberry Pi also have a solid place, albeit slightly higher up the age range.

So, a gift well worth giving in my humble opinion. And kudos to the ScratchJr team for giving us a platform to fire up the imagination of kids from an even earlier age than before.