IT Trends into 2018 – or the continued delusions of Ian Waring

William Tell the Penguin

I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they listed 13 items!). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three you have a large base of employees in a country without a single payer healthcare system, mired with business model inefficiencies. Getting an operationally efficient pilot with reasonable scale using internal users in the JV companies running, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions on them, it’s difficult to envision a corrective action that Amazon aren’t already doing to themselves. This program is real fox in the hen house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring, via wearable sensors, is the next piece of icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those situations you can quickly distill down to “great technology, what can we use this for?”. Conferencing – even voice – yes. Shared Kanban flows (Trello), shared Basecamp views, communal use of GitHub, all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; six sigma, doggedly industrialising a process, sits over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march to interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place, and only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user finds themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably one appearing to be of a similar age.

The one striking thing I’ve found was talking to a vendor who built a machine learning model that went through IT Helpdesk tickets, instant message and email interaction histories, nominally to prioritise the natural language corpus into a list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was in improving FAQ sheets in the first instance. Ian thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!
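The vendor’s mining exercise was far more sophisticated than this, but the general shape – grouping a natural language corpus into intent buckets and ranking them to see what an FAQ sheet should cover – can be sketched with a naive keyword matcher. The intent names and keywords here are invented purely for illustration:

```python
from collections import Counter

# Illustrative intent keywords -- in a real system these would be
# learned from the ticket corpus, not hand-coded.
INTENT_KEYWORDS = {
    "reset_password": {"password", "reset", "locked"},
    "vpn_access": {"vpn", "remote", "tunnel"},
    "printer_issue": {"printer", "print", "toner"},
}

def classify(ticket: str) -> str:
    """Assign a support ticket to the intent with the most keyword hits."""
    words = set(ticket.lower().split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

def top_intents(tickets):
    """Rank intents by frequency -- the raw material for an FAQ sheet."""
    return Counter(classify(t) for t in tickets).most_common()
```

Running `top_intents` over a ticket archive surfaces the handful of intents that dominate the queue – which is exactly why the first payoff tends to be better FAQ sheets rather than a BOT.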

IoT (Internet of Things). The example provided was tying together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories. It then cited security concerns and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big number analyses from vendors, but little from application perspectives. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets or smart watches. Security devices are a smaller base; simple embedded devices smaller still. I think I’ll be more excited when sensors get meaningful vision capabilities (listed separately below). Until then, I’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel, or in flour in a mixing bowl, or when it’s the quickest way to enact a desired action – usually away from a keyboard and out of earshot of anyone else. It does liberate my two youngest grandchildren, who are still learning to read and write. Those apart, it’s just another UI used occasionally – albeit I’m still in awe of folks that dictate their book writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch. In the meantime, 3D printing. Not my area, but it feels to me like low volume local production of customised parts, and I’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to the point of end use. My dentist 3D prints parts of teeth while patients wait, but otherwise I’ve not had any exposure that I could translate as a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that recalled the point in prehistory when the number of different life form species on Earth vastly accelerated: the Cambrian Period, when life forms first developed eyes. A combination of cheap camera hardware components and excellent machine learning Vision APIs should be transformative, especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not-present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.

In the Open Source software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io’s Broca NLU product, which can build a suggested intent:action Service Catalogue from Natural Language analysis of support tickets, CRM data, instant message and support email content.

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley were in a grave, he’d be spinning in it (but thank god he’s not).

Microservices is about making your organisation’s data and processes available to applications that can be internally facing, externally facing or both, using web interfaces. You typically work with Apigee (now owned by Google) or 3scale (owned by Red Hat) to produce a well documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
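As a rough illustration of the pattern (not of any particular gateway product), here is a minimal service exposing one documented, versioned resource over HTTP using only Python’s standard library. The endpoint path and customer data are invented for the sketch; in practice the API gateway handles the discovery, security and licensing layers on top:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative internal data -- in practice this sits behind a database
# and an API gateway such as Apigee or 3scale.
CUSTOMERS = {"1001": {"name": "Acme Ltd", "status": "active"}}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One documented, versioned resource path: /api/v1/customers/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 4 and parts[:3] == ["api", "v1", "customers"]:
            record = CUSTOMERS.get(parts[3])
            if record:
                body = json.dumps(record).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        # Keep the sketch quiet; real services log properly.
        pass

def serve(port=0):
    """Bind to an ephemeral port by default; returns the server object."""
    return HTTPServer(("127.0.0.1", port), ApiHandler)
```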

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, and the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently both on your own server hardware and in different cloud vendors’ infrastructures.
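By way of illustration, a minimal (hypothetical) Dockerfile shows the packaging idea: everything the application needs travels with the image, so the same artefact runs unchanged on your own hardware or on any cloud provider’s container service. File names and the port are invented for the sketch:

```dockerfile
# Illustrative only: package a small Python application so it deploys
# identically everywhere a container runtime exists.
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
# No host-specific configuration baked in; it is supplied at run time.
ENV PORT=8080
EXPOSE 8080
CMD ["python", "app.py"]
```

An orchestrator (Kubernetes being the best known) then scales the number of running instances of this image up and down to match load.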

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. Same with the “DevOps” teams that are currently employed in many organisations, to rapidly iterate and deploy custom code very regularly by mingling Developer and Operations staff.

With Serverless, the theory is that you should be able to write code once, and have it fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the database services used alongside them) are different enough that applications built on one are limited to that cloud provider only.
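The programming model is simple enough to sketch. The handler signature below follows the AWS Lambda convention for Python; the event shape assumes an HTTP trigger via an API gateway and is illustrative only – it is exactly this kind of event and response format that differs between the cloud providers:

```python
import json

def handler(event, context):
    """Lambda-style entry point: no server to manage; the platform
    starts, scales and stops instances of this function on demand."""
    # The 'event' shape depends on the trigger; here we assume an
    # HTTP request carrying a JSON body (illustrative).
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The code never binds a port or manages a process; that, and the per-invocation billing, is what “Serverless” buys you – at the cost of tying the event format to one provider.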

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.
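The ledger-integrity part can be illustrated in a few lines: each block includes the hash of its predecessor, so tampering with any historical entry breaks every subsequent link. This toy sketch deliberately omits the distributed consensus machinery that actually defeats the Byzantine Generals Problem:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block whose hash covers the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev_hash": prev, "transactions": transactions})
    chain.append(block)

def verify(chain: list) -> bool:
    """Any tampering with an earlier block breaks every later hash link."""
    prev = "0" * 64
    for block in chain:
        expected = block_hash({"prev_hash": prev, "transactions": block["transactions"]})
        if block["hash"] != expected or block["prev_hash"] != prev:
            return False
        prev = block["hash"]
    return True
```

In the real thing, many independent servers each hold the chain and agree on which valid extension wins, which is what removes the need for a single trusted entity.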

Blockchain is quite the poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; there is even use of virtual coins (ICOs) as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and I’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. A device like LIDAR (the spinning camera atop self-driving cars) typically generates several GB of data per mile travelled, and there is insufficient reliable bandwidth to delegate all the compute to a remote cloud server. So the models of how a car should drive itself are built in the cloud, but downloaded and executed in the car, without a high speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared back later when adjacent to a fast home or work network.

Very fast chips are a thing; the CPU in my Apple Watch is faster than a room-sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks have really hit it out of the park with every iteration they’ve shipped to date). Development environments for powerful, embedded systems are something I’ve not seen so far though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish attention on the brands they represent in very specific target audiences. Advertisers are the paying customers. Users are the Product. All the incentives and business models align to these characteristics.

Political operators, local as well as foreign actors, have fundamentally subverted the model. Controversial, and most often incorrect and/or salacious, stories get wide distribution before any truth emerges. Fake accounts and automated bots further pervert the engagement indicators that drive increased distribution (it was noticeable that one video segment of one Donald Trump speech got two orders of magnitude more “likes” than the number of people who actually played the video at all). Above all, messages that appeal to different filter bubbles drive action in some cases, and antipathy in others, to directly undermine voting patterns.

This is probably the biggest challenge facing large social networks, at the same time that politicians (themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to sideline these people and their poison. Work to do.

Crossing the Chasm on One Page of A4 … and Wardley Maps

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

The core essence of most management books I read can be boiled down to fit on a sheet of A4. There have also been a few big mistakes along the way, such as what were considered at the time to be seminal works, like Tom Peters’ “In Search of Excellence” — which in retrospect can be summarised as “even the most successful companies possess DNA that also breeds the seeds of their own destruction”.

I have much simpler business dynamics mapped out that I can explain to fast track employees — and demonstrate — inside an hour; there are usually four graphs that, once drawn, will betray the dynamics (or points of failure) afflicting any business. A very useful lesson I learnt from Microsoft when I used to distribute their software. But I digress.

Among my many business books, I thought the insights in Geoffrey Moore’s book “Crossing the Chasm” were brilliant — and useful for helping grow some of the product businesses I’ve run. The only gotcha is that I kept cross-referencing different parts of the book when trying to build a go-to-market plan for DEC Alpha AXP Servers (my first use of his work) back in the mid-1990s — the time I worked for one of DEC’s distributors.

So, suitably bored when my wife was watching J.R. Ewing being mischievous in the first UK run of “Dallas” on TV, I sat on the living room floor and penned this one page summary of the book’s major points. Just click it to download the PDF with my compliments. Or watch the author himself describe the model in under 14 minutes at an O’Reilly Strata Conference here. Or alternatively, go buy the latest edition of his book: Crossing the Chasm

My PA (when I ran Marketing Services at Demon Internet) redrew my hand-drawn sheet of A4 into the Microsoft Publisher document that output the one page PDF, and that I’ve referred to ever since. If you want a copy of the source file, please let me know — drop a request to: [email protected]led.com.

That said, I’ve been far more inspired by the recent work of Simon Wardley. He effectively breaks a service into its individual components and positions each on a 2D map; the x-axis marks the stage of a component’s evolution as it moves through a Chasm-style lifecycle, while the y-axis symbolises the value chain from raw materials to end user experience. You then place all the individual components, and their linkages as part of an end-to-end service, on the result. Having seen the landscape in this map form, you can then assess how each component evolves/moves from custom build to commodity status over time. Even the newest components evolve from chaotic genesis (where standards are not defined and/or features are incomplete) into well understood utilities.
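For the more programmatically minded, the essence of a map can be sketched as components carrying two coordinates plus a rule of thumb for how to treat each one. Components, positions and thresholds below are purely illustrative, not anything from Wardley’s own material:

```python
# Toy Wardley Map data: x = evolution (0 = genesis .. 1 = commodity),
# y = value chain (0 = raw materials .. 1 = visible to the end user).
COMPONENTS = {
    "customer web app": {"evolution": 0.35, "value_chain": 0.95},
    "recommendation model": {"evolution": 0.15, "value_chain": 0.60},
    "compute": {"evolution": 0.90, "value_chain": 0.20},
    "power": {"evolution": 0.98, "value_chain": 0.05},
}

def treatment(component: str) -> str:
    """Left of the map wants agile discovery; the right wants
    six-sigma industrialisation. Thresholds are illustrative."""
    x = COMPONENTS[component]["evolution"]
    if x < 0.25:
        return "agile / in-house genesis"
    if x < 0.70:
        return "product: buy or rent"
    return "commodity: consume as utility"
```

Plotting those coordinates and drawing the linkages between components gives you the map; the `treatment` rule is the point of the exercise – matching team style and measures to where each component sits.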

The result highlights which service components need Agile, fast iterating discovery and which are becoming industrialised, six-sigma commodities. And once you see your map, you can focus teams and their measures on the important changes needed without breeding any contradictory or conflict-ridden behaviours. You end up with a well understood map and – once you overlay competitive offerings – can also assess the positions of other organisations that you may be competing with.

The only gotcha in all of this approach is that Simon hasn’t written the book yet. However, I notice he’s just provided a summary of his work on his Bits n Pieces Blog yesterday. See: Wardley Maps – set of useful Posts. That will keep anyone out of mischief for a very long time, but the end result is a well articulated, compelling strategy and the basis for a well thought out, go to market plan.

In the meantime, the basics on what is and isn’t working, and sussing out the important things to focus on, are core skills I can bring to bear for any software, channel-based or internet related business. I’m also technically literate enough to drag the supporting data out of IT systems for you where needed. Whether your business is an Internet-based startup or an established B2C or B2B Enterprise focussed IT business, I’d be delighted to assist.

Mobile Phone User Interfaces and Chinese Genius

Most of my interactions with the online world use my iPhone 6S Plus, Apple Watch, iPad Pro or MacBook – but with one eye on the next big things from the US West Coast. The current Venture Capital fads are Conversational Bots, Virtual Reality and Augmented Reality. I bought a Google Cardboard kit for my grandson to have a first glimpse of VR on his iPhone 5C, though I spent most of the time trying to work out why his handset was too full to install any of the Cardboard demo apps; 8GB of capacity, 2 apps, 20 songs, and a storage list that only added up to 5GB of use. Hence having to borrow his Dad’s iPhone 6 while we tried to sort out what was eating up 3GB. Very impressive nonetheless.


The one device I’m waiting to buy is an Amazon Echo (currently USA only). It’s a speaker with six directional microphones, an Internet connection and some voice control smarts; these are extendable by use of an application programming interface and database residing in their US East Datacentre. Out of the box, you can ask its nom de plume “Alexa” to play a music single, album or wish list. To read back an audio book from where you last left off. To add an item to a shopping or to-do list. To ask about local outside weather over the next 24 hours. And so on.

Its real beauty is that you can define your own voice keywords into what Amazon term a “Skill”, and provide your own plumbing to your own applications using what Amazon term their “Alexa Skills Kit”, aka “ASK”. There is already one UK bank that has prototyped a Skill for the device to let users enquire about their bank balance, primarily as an assist to the visually impaired. There are more in the USA that control home lighting and heating by voice (and I guess it would be very simple to give commands to change TV channels or to record for later viewing). The only missing bit is identity; the person speaking can be anyone in proximity to the device, or indeed any device emitting sound in the room; a radio presenter saying “Alexa – turn the heating up to full power” would not be appreciated by most listeners.
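The real Alexa Skills Kit request and response JSON is richer than this, but the intent-dispatch pattern at the heart of a Skill can be sketched as follows. The intent names (including the bank balance one) and the field names are simplified illustrations, not the actual ASK schema:

```python
def handle_request(request: dict) -> dict:
    """Simplified Alexa-style intent dispatch. The real ASK JSON is
    richer; field and intent names here are illustrative only."""
    intent = request.get("intent", {}).get("name")
    if intent == "BankBalanceIntent":  # hypothetical banking skill intent
        speech = "Your current balance is one hundred and twenty pounds."
    elif intent == "HeatingIntent":    # hypothetical home-control intent
        level = request["intent"].get("slots", {}).get("level", "medium")
        speech = f"Setting the heating to {level}."
    else:
        speech = "Sorry, I don't know how to help with that."
    return {"outputSpeech": {"type": "PlainText", "text": speech}}
```

Notice there is nothing in the request identifying *who* spoke – which is exactly the identity gap described above.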

For further details on Amazon Echo and Alexa, see this post.

However, the mind wanders over to my mobile phone, and the disjointed experience it exposes to me when I’m trying to accomplish various tasks end to end. Data is stored in application silos. Enterprise apps quite often stop at a Citrix client, turning your pocket supercomputer into a dumb (but secured) Windows terminal, where the UI turns into the normal Enterprise app silo soup to navigate.

Some simple client-side workflows can be managed by software like IFTTT – aka “If This, Then That” – so I can get a new photo automatically posted to Facebook or Instagram, or notifications issued to me when an external event occurs. But nothing that integrates a complete buying experience. The current fad for conversational bots still falls well short; imagine the workflow of asking Alexa to order some flowers, where there are no visual cues to help that discussion and buying experience along.

For that, we’d really need to follow one of Jeff Bezos’ edicts – wipe the slate clean, imagine the best experience from a user perspective, and work back from there. But the lessons have already been learnt in China, where desktop apps weren’t a step on the evolution of mobile deployments in society. An article that runs deep on this – and on what folks can achieve within WeChat in China – is impressive. See: http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html

I wonder if Android or iOS – with the appropriate enterprise APIs – could move our experience on mobile handsets to a similar next level of compelling personal servant. I hope the Advanced Development teams at both Apple and Google – or a startup – are already prototyping such a revolutionary, notifications baked in, mobile user interface.

Help available to keep malicious users away from your good work

Picture of a Stack of Tins of Spam Meat

One thing that still routinely shocks me is the sheer quantity of malicious activity that goes on behind the scenes of any web site I’ve put up. When we were building Internet Vulnerability Testing Services at BT, around 7 new exploits or attack vectors were emerging every 24 hours. Fortunately, for those of us who use Open Source software, the protections have usually been inherent in the good design of the code, and most (OpenSSL Heartbleed excepted) have had no real impact with good planning. All starting with closing off ports, and restricting access to some key ones to only known fixed IP addresses (that’s the first thing I did when I first provisioned our servers in Digital Ocean Amsterdam – just surprised they don’t give a template for you to work from – fortunately I keep my own default rules to apply immediately).

With WordPress, it’s required an investment in a number of plugins to stem the tide. Basic ones like Comment Control, which can lock down pages, posts, images and attachments from having comments added to them (by default, a spammers’ paradise). Where you do allow comments, you install the WordPress-provided Akismet, which at least classifies 99% of the SPAM attempts and sticks them in the spam folder straight away. For me, I choose to moderate any comment from someone I’ve not approved content from before, and am totally ruthless with any attempt at social engineering; the latter because if they post something successfully with approval a couple of times, their later comment spam with unwanted links gets onto the web site immediately until I notice and take it down. I prefer to never let them get to that stage in the first place.

I’ve been setting up a web site in our network for my daughter-in-law to allow her to blog about Mental Health issues for children, including ADHD, Aspergers and related afflictions. For that, I installed BuddyPress to give her user community a discussion forum, and went to bed knowing I hadn’t even put her domain name up – it was just another set of deep links into my WordPress network at the time.

By the morning: 4 user registrations, 3 of them with spoof addresses. Duly removed, and the ability to register usernames then turned off completely while I fix things. I’m going to install WP-FB-Connect to allow Facebook users to work on the site based on their Facebook login credentials, and to install WangGuard to stop the “Splogger” bots. That is free for us at the volume of usage we expect (and given the commercial dimensions of the site – namely non-profit and charitable), and appears to do a great job sharing data on who and where these attempts come from. Just got to check that turning these on doesn’t throw up a request to login if users touch any of the other sites in the WordPress network we run on our servers, whose user communities don’t need to log on at any time, at all.

Unfortunately, progress was rather slowed down over the weekend by a reviewer from Kenya who published a list of the 10 best add-ins to BuddyPress, #1 of which was a Social Network login product that could authenticate with Facebook or Twitter. Lots of “Great Article, thanks” replies. In reality, it didn’t work with BuddyPress at all! Duly posted back to warn others, if indeed he lets that news reach his readers.

As it is, a lot of WordPress Plugins (there are circa 157 of them to do social site authentication alone) are of variable quality. I tend to judge them by the number of support requests received that have been resolved quickly in the previous few weeks – one nice feature of the plugin listings provided. I also have formal support contracts in with Cyberchimps (for some of their themes) and with WPMU Dev (for some of their excellent Multisite add-ons).

That aside, we now have the network running with all the right tools and things seem to be working reliably. I’ve just added all the page hooks for Google Analytics and Bing Web Tools to feed from, and all is okay at this stage. The only thing I’d like to invest in is something to watch all the various log files on the server and to give me notifications if anything awry is happening (like MySQL claiming an inability to connect to the WordPress database, or Apache spawning multiple instances and running out of memory – something I had in the early days when the Google bot was touching specific web pages, since fixed).

Just a shame that there are still so many malicious link spammers out there; they waste 30 minutes of my day, every day, just clearing their useless gunk out. But thank god that Google are now penalising these very effectively; long may that continue, and hopefully the realisation of the error of their ways will lead to their becoming more useful members of the worldwide community going forward.

So, how do Policing Statistics work?

Metropolitan Police Sign

I know I posted a previous note on the curious measures being handed down to police forces to “reduce crime”. While the police may be able to influence it slightly, in the final analysis they only have direct control over one part of the value chain – that of producing the related statistics (I really don’t think they commit all the crimes on which they are measured!). The much longer post was this: http://www.ianwaring.com/2014/04/05/police-metrics-and-the-missing-comedy-of-the-red-beads/

I’ve just had one of my occasional visits back to “Plumpergeddon” – not recommended in work environments for reasons that will become apparent later – which documents the ebbs and flows of the legal process following a mugging and theft (of a MacBook and a wallet containing a debit card) in London in November 2011. It is, to put it mildly, a shocking story.

The victim of the crime – and owner of the MacBook – had installed a piece of software on his machine that – once he’d enabled a tick box on an associated web site – started to “phone home” at regular intervals, taking pictures of the person using the computer and shots of what was on the screen at the same time, both tagged with its exact geographic location. He ended up with over 6,000 pictures, including some which showed sale of goods on eBay that matched purchases made with his stolen card.

I’m not sure exactly how the flow of incidents get rolled up into the crime statistics that the Met publish, but having done a quick trawl through the Plumpergeddon Blog, starting at the first post here and (warning: ever more NSFW as the story unfolds, given what the user started paying for and viewing!) moving up to the current status 29 pages later, the count looks like:

  • 1 count of mugging
  • 1 theft of a MacBook Pro Personal Computer, plus Wallet containing Company Debit Card
  • 2 counts of obtaining money (from a cashpoint with a stolen card) by deception
  • 9 counts of obtaining goods (using a stolen debit card, using a PIN) by deception
  • 2 counts of obtaining goods (using a stolen debit card, signing for them) by deception
  • 11 counts of demonstrably selling stolen goods

So, I make that 26 individual crime incidents.

The automated data collection started phoning home within 4 weeks of the theft (it took one shot of the user, a screenshot, and reported location and connection details every 10 minutes of active use). He ended up assembling circa 6,000 pieces of evidence (including screenshots of the person using his MacBook, and screenshots documenting the disposal of the goods purchased with the stolen card using three separate accounts on eBay). All preserved with details of the physical location of the MacBook and the details of the WiFi network it was connected to.

Many ebbs and flows along the way, but the long and short of it was that the case was formally dropped “for lack of evidence”. This was followed by a brief piece of interest when some media activity started picking up, but it then sort of ebbed away again. In May 2013, news came back as “The case file is back with the officer, and the case is closed pending further leads.”

Four weeks ago, the update said:

I Am No Longer the Victim. Apparently. I was told last night in a police station by a Detective Constable that because the £7,000 I was defrauded of was returned by my bank after 3-4 weeks, and the laptop was replaced by my insurance company after 4 months, I am no longer considered the victim for either of those crimes. I was told that my bank and insurance company are now the victims.

I assume this must mean that when a victim of an assault receives compensation, the attackers subsequently go free? Any UK based lawyers, police or other legal types care to shed some light on this obscure logic?

Cynical little me suspects i’m being told this because the police don’t want to pursue charges over those crimes, even though (as most readers will know and as I said in my previous post) I’ve done practically all the legwork for them.

I must admit to being completely appalled by a case like this. Given the amount of evidence submitted, it should have solved a string of fraudulent transactions and the matching/associated sale of stolen goods, which could have incremented the Metropolitan Police “crimes solved” counter like a jackpot machine. 26 crimes solved, with all the evidence-collecting legwork already done for them.

So, where does this case sit in the Metropolitan Police statistics? Does it count as all 26 incidents “solved” because the insurance company have paid out and the debit card company have reversed the fraudulent transactions? And above all, is the Home Secretary really satisfied that she’s seeing appropriate action under her “reducing crime” objective here?

The guy is still free and on the streets without any intervention since the day the crimes were committed. Free to become the sort of one-man crime wave that Bill Bratton managed to systematically get off the streets in New York during his first tenure as Police Chief there (I recall from his book The Turnaround that 70 individuals in custody completely changed the complexion of life in that City). Big effect when you can systematically follow up to root causes, as he did then.

However, back in London, I wonder how this string of events is mapped onto the crime statistics being widely published and cited. Any ideas?

Great Technology. Where’s the Business Value?

Exponential Growth Bar Graph

It's a familiar story. Some impressive technical development comes along, and the IT industry adopts what politicians would call a "narrative" to try to push its adoption – and profit. Two that are in the early stages right now are "Wearables" and the "Internet of Things". I'm already seeing some outlandish market size estimates for both, and wondering how these map back to useful applications that people will pay for.

"Internet of Things" is predicated on an assumption that, with low-cost sensors and internet-connected microcomputers embedded in the world around us, the volume of data thrown onto the Internet will create a ready market for large gobs of hardware, software and services. One approach to rationalising this is to spot where inefficiencies exist in a value chain, and to see where technology can help remove them.

One of my son's friends runs a company that has been distributing sensors of all sorts for over 10 years. Thinking there might be an opportunity to build a business on top of a network of these things, I asked him what sort of applications his products were put to. It largely comes down to monitoring flows in various utilities and environmental assets (water, gas, rivers, traffic) or in industrial process manufacturing. Add some applications of low-power Bluetooth beacons, and you have some human traffic monitoring in retail environments. I quickly ran out of ideas for potential inefficiencies that these sensors (a) can address and (b) aren't already being routinely exploited to remove. One example is in water networks, where measuring fluid flows across a pipe network can quickly isolate leaks, markedly improving supply efficiency. But there are already companies in place that do exactly that, and they have the requisite relationships. No apparent gap there.
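The leak-isolation idea above boils down to simple conservation: what flows into a metered pipe segment should roughly equal what flows out, and a persistent shortfall points at a leak. A minimal sketch, with an illustrative 5% threshold (not an industry figure):

```python
def leak_suspected(inflow_lps, outflows_lps, tolerance=0.05):
    """Flag a pipe segment for inspection when metered inflow exceeds
    the sum of metered outflows by more than `tolerance`, expressed as
    a fraction of inflow. Flows are in litres per second.

    The 5% default tolerance is an illustrative guess to absorb meter
    error, not a real water-industry figure.
    """
    if inflow_lps <= 0:
        return False
    unaccounted = inflow_lps - sum(outflows_lps)
    return unaccounted / inflow_lps > tolerance
```

In practice the established players compare these balances continuously across many segments to triangulate where in the network the loss sits.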

One post on Gigaom this week showed some interesting new flexible electronic materials. The gotcha with most such materials is the need for batteries, the presence of which restricts the number of potential applications. One set of switches from Swiss company Algra can emit a 2.4GHz radio signal over a range of 6-10 metres using only the energy from someone depressing a button; the main extra innovations are that the result is very thin and (unlike its predecessors) has an extremely long mechanical lifetime. No outside power source required. So, just glue your door bells or light switches where you need them, and voila – done forever.

The other material that caught my eye was a flexible image sensor from ISORG (using licensed Plastic Logic technology). They have produced a material that you can layer onto the surface of a display, and which can read the surface of any object placed against it. No camera needed, and with minimal thickness and weight – something impossible with a standard CMOS imaging sensor, which needs a minimum distance to focus on the object above it. So you could effectively have a built-in scanner on the surface of your tablet, not only for monochrome pictures, but even for fingerprints and objects in close proximity – enabling contactless gesture control. Hmmm – smart scanning shelves in retail and logistics – now that may give users some vastly improved efficiencies along the way.

The source article is at: http://gigaom.com/2014/04/07/how-thin-flexible-electronics-will-revolutionize-everything-from-user-interfaces-to-packaging/

A whole field is opening up around collecting data from the On-Board Diagnostics (OBD) bus that exists in virtually every modern car, though I've yet to explore it in any depth. I've just noticed a trickle of news articles about Phil Windley's FUSE project on Kickstarter (here) and some emerging work by Google in the same vein (with the Open Automotive Alliance). Albeit, like TV manufacturers, vehicle manufacturers have regulatory challenges and/or slow replacement cycles stopping them moving at the same pace as the computer and electronics industries.
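For a flavour of what projects like FUSE read off the bus: the standard OBD-II PIDs (SAE J1979) return a few raw bytes per parameter, each with a published decoding formula. Two of the best-known ones, as a sketch:

```python
def decode_rpm(a, b):
    """Decode engine RPM from a Mode 01, PID 0x0C response.

    The standard OBD-II formula is ((A * 256) + B) / 4, where A and B
    are the two data bytes returned on the bus.
    """
    return ((a * 256) + b) / 4


def decode_speed(a):
    """Mode 01, PID 0x0D: vehicle speed is a single byte, directly in km/h."""
    return a
```

Actually getting those bytes requires a dongle on the car's diagnostic port; the decoding itself is this trivial, which is why a cottage industry of apps has sprung up around it.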

Outside of that, I'm also seeing a procession of potential wearables, from glasses, to watches, to health sensors and clip-on cameras.

Glasses and smart watches in general are another, much longer story (I'll try to do that justice tomorrow), but these are severely limited by the need for battery power in a confined space to do much more than their main application – the simple display of time and pertinent notifications.

Health sensors are pretty well established already. I have a FitBit One on me at all times, bar when I'm sleeping. However, its main use these days is to map the number of steps I take into an estimated distance walked daily, which I tap pro-rata into Weight Loss Resources (I know a walk to our nearest paper shop and back is circa 10,000 steps – and 60 minutes at moderate speed – enough to give a good estimate of calories expended). I found the calorie count questionable, and the link to MyFitnessPal a source of great frustration for my wife; it routinely swallows her calorie intake and rations out the extra calories earnt (for potential increased food consumption) very randomly over 1-3 days. We've never been able to rationalise its behaviour, so we largely ignore that now.
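The pro-rata arithmetic I do by hand is simple enough to sketch. Both constants below are illustrative guesses of mine, not FitBit's or Weight Loss Resources' actual models:

```python
def estimate_walk(steps, stride_m=0.78, kcal_per_10k_steps=400):
    """Pro-rata estimate of distance and calories from a step count.

    stride_m is an assumed average stride length in metres, and
    kcal_per_10k_steps an assumed calorie burn for a moderate-pace
    10,000-step walk - both placeholder figures for illustration.
    Returns (distance_km, kcal).
    """
    distance_km = steps * stride_m / 1000
    kcal = steps / 10000 * kcal_per_10k_steps
    return distance_km, kcal
```

Linear scaling like this is crude (it ignores pace, gradient and body weight), which is presumably part of why the gadget vendors' more opaque models disagree with it.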

There's lots of industry speculation that Apple's upcoming iWatch will provide health-related sensors, and send readings into a Passbook-like health monitoring application on a user's iPhone handset. One such report here. That would probably help my wife, who always appears to suffer a level of anxiety whenever her blood pressure is taken – which worsens her readings (see what happens after 22 days of getting used to taking daily readings – things settle down):

Jane Waring Blood Pressure Readings

I dare say if the reading were always on, she'd soon forget its existence and the readings would reflect reality. In the meantime, there are also suggestions that the same health monitoring application will be able to take readings from other vendors' sensors, and that Apple is trying to build an ecosystem of personal health devices that can interface to its iPhone-based "hub" – and potentially from there onto Internet-based health services. We can but wait until Apple is ready to admit it (or not!) at upcoming product announcement events this year.

The main other wearables today are cameras. I’ve seen some statistics on the effect of Police Officers wearing these in the USA:

US Police Officer with Camera

One of my youngest son's friends is a serving police officer here, and tells us that wearing cameras in his police force is encouraged but optional. That said, he says most officers are big fans of using them. When nominally off, the cameras maintain a rolling 30-second video buffer, so the moment one is switched on, it already holds a record of what happened in the preceding 30 seconds. Similarly, when switched off, it continues filming for a further 30 seconds before returning to its looping state.
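That pre/post-roll behaviour is a classic ring buffer: while "off", the camera keeps overwriting the oldest frames, and a recorded clip is stitched from that buffer plus everything until 30 seconds after switch-off. A toy model of the mechanism, with an assumed frame rate:

```python
from collections import deque

FPS = 25                 # assumed frame rate for illustration
PRE_ROLL = 30 * FPS      # 30 seconds of frames kept while 'off'
POST_ROLL = 30 * FPS     # 30 seconds of frames captured after switch-off


class BodyCam:
    """Toy model of the body camera's pre/post-roll recording."""

    def __init__(self):
        self._buffer = deque(maxlen=PRE_ROLL)  # rolling buffer; oldest frames drop off
        self._clip = None                      # active recording, if any
        self._post_frames_left = 0

    def feed(self, frame):
        """Feed one frame; returns the finished clip once post-roll completes."""
        if self._clip is not None:
            self._clip.append(frame)
            if self._post_frames_left:
                self._post_frames_left -= 1
                if self._post_frames_left == 0:
                    clip, self._clip = self._clip, None
                    return clip            # clip is complete, resume looping
        else:
            self._buffer.append(frame)     # 'off': just loop over the last 30s
        return None

    def switch_on(self):
        self._clip = list(self._buffer)    # the clip starts with the pre-roll
        self._buffer.clear()

    def switch_off(self):
        self._post_frames_left = POST_ROLL  # keep filming for 30s more
```

The `deque(maxlen=...)` does the looping for free: appending to a full deque silently discards the oldest entry, which is exactly the "moving 30-second buffer" described.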

Perhaps surprisingly, he says that wearing one inclines him to use less force in his interactions – even though, if you saw the footage, you'd be amazed at his self-restraint. In the USA, police report that when the people they're engaging know they're being filmed/recorded, they are far more inclined to behave themselves and not to try to spin "he said this, I said that" yarns.

There are all sorts of privacy implications if everyone starts wearing such devices, and they are getting ever smaller. Muvi cameras are one example, able to record 70-90 minutes of hi-res video from a 55mm-tall, clip-attached enclosure. Someone was recently prosecuted in Seattle for leaving one of these lens-up on a path between buildings frequented by female employees at his company campus (and no, I didn't see any footage – just news of his arrest!).

We're moving away from what we thought was going to be a Big Brother world – to one where the use of such cameras is "democratised" across the whole population.

Muvi Camcorder

I don't think anyone has really comprehended the full effect of this upcoming ubiquity, but I suspect one norm will be an expectation that the presence of a working camera is indicated vividly. I wonder how long it will take for that to become the new normal – and whether there are other business efficiencies that their use – and that of other "Internet of Things" sensors in general – can lay before us all.

That said, industry estimates for "Internet of Things" revenues as they stand today, combined with the lack of perceived "must have" applications, feel hopelessly optimistic to me.

NFC and its route to, eh, oblivion?

Square iPad POS Terminal

I see that credit card companies have started deploying Near Field Communications (NFC) technology to the world (aka contactless payments), and speculation is running on which mobile handset vendors will bundle the technology. Am I the only person who thinks it's a neat solution to a problem that doesn't exist – outside of the few geeks and credit card issuers who think it's neat?

I think Square (the mobile phone credit card payment processor, and designer of the iPad till shown above) have got it completely right, and that NFC takes everyone up a blind alley. Square started by looking at the usual buying experience in a retail setting, and worked back from there to remove all the friction. It works something like this.

When a regular customer is within a short distance from the shop, their picture and name appears on the till. If they walk in, one press shows their regular order items and any special upsells. They can be greeted by name, asked if they want their usual and whether they’d like to try the offer of the day. You can then offer to let them pay using their normal debit or credit card, process the payment and email the receipt. You have already authenticated them, so all good to go. A nice retail experience.

With NFC, all the action happens past the point where you can do the basics of good service, and the upsell opportunity is gone. You just take the payment – and oops, the customer still needs to authenticate that they are the owner of the phone (otherwise a thief with a stolen handset could run past every till in the nearest department store). So you have to enter a password or PIN. What's the extra advantage over using a card, without the retailer having to cough up for expen$ive readers?

I can’t think of one. Square seem to have the right idea. And to me, NFC looks like a white elephant. Have I missed something?