IT Trends into 2018 – or the continued delusions of Ian Waring

William Tell the Penguin

I’m conflicted. CIO Magazine published a list of “12 technologies that will disrupt business in 2018”, which promptly received Twitter accolades from folks I greatly respect: Leading Edge Forum, DXC Technology and indeed Simon Wardley. Having looked at it, I thought it had more than its fair share of muddled thinking (and they actually listed 13 items). Am I alone in this? Original here. Taking the list items in turn:

Smart Health Tech (as evidenced by the joint venture involving Amazon, Berkshire Hathaway and JP Morgan Chase). I think this is big, but not for the “corporate wellness programs using remote patient monitoring” reason cited. That is a small part of it.

Between the three you have a large base of employees in a country without a single payer healthcare system, mired in business model inefficiencies. Getting an operationally efficient pilot running at reasonable scale with internal users in the JV companies, and then letting outsiders (even competitors) use the result, is meat and drink to Amazon. Not least as they always start with the ultimate consumer (not rent seeking insurance or pharma suppliers), and work back from there.

It’s always telling that if anyone were to try anti-trust actions on them, it’s difficult to envision a corrective action that Amazon aren’t already applying to themselves. This program is real fox in the hen house territory; that’s why, on announcement of the joint venture, leading insurance and pharmaceutical shares took quite a bath. The opportunity to use remote patient monitoring, using wearable sensors, is the next piece of icing on top of the likely efficient base, but very secondary at the start.

Video, video conferencing and VR. Their description cites the magic word “Agile” and appears to focus on using video to connect geographically dispersed software development teams. To me, this feels like one of those situations you can quickly distill down to “great technology, what can we use this for?”. Conferencing – even voice – yes. Shared Kanban flows (Trello), shared Basecamp views, communal use of GitHub, all yes. Agile? That’s really where you’re doing fast iterations of custom code alongside the end user, way over to the left of a Wardley Map; Six Sigma, doggedly industrialising a process, sits over to the right. Video or VR is a strange bedfellow in the environment described.

Chatbots. If you survey vendors, and separately survey the likely target users of the technology, you get wildly different appetites. Vendors see a relentless march towards interactions being dominated by BOT interfaces. Consumers, given a choice, always prefer not having to interact in the first place and, only where the need exists, to engage with a human. Interacting with a BOT is something largely avoided unless it is the only way to get immediate (or out of hours) assistance.

Where the user finds themselves in front of a ChatBot UI, they tend to prefer an analogue of a human talking to them, preferably appearing to be of a similar age.

The one striking thing I’ve found was talking to a vendor who built a machine learning model that went through IT Helpdesk tickets, instant message and email interaction histories, nominally to prioritise the natural language corpus into a list of intent:action pairs for use by their ChatBot developers. They found that the primary output from the exercise was in improving FAQ sheets in the first instance. Which left me thinking “is this technology chasing a use case?” again. Maybe you have a different perspective!

IoT (Internet of Things). The example provided was tying together devices, sensors and other assets to drive reductions in equipment downtime, process waste and energy consumption in “early adopter” smart factories – and then citing security concerns, and the need to work with IT teams in these environments to alleviate such risks.

I see lots of big number analyses from vendors, but little from application perspectives. It’s really a story of networked sensors relaying information back to a data repository, and building insights, actions or notifications on the resulting data corpus. Right now, the primary sensor networks in the wild are the location data and history stored on mobile phone handsets or smart watches. Security devices are a smaller base; embedded simple devices smaller still. I think I’m more excited when sensors get meaningful vision capabilities (listed separately below). Until then, I’m content to let my Apple Watch keep tabs on my heart rate, and to feed that daily into a research project looking at strokes.

Voice Control and Virtual Assistants. Alexa: set an alarm for 6:45am tomorrow. Play Lucy in the Sky with Diamonds. What’s the weather like in Southampton right now? OK Google: What is $120 in UK pounds? Siri: send a message to Jane; my eta is 7:30pm. See you in a bit. Send.

It’s primarily a convenience thing when my hands are on a steering wheel or in flour in a mixing bowl, or when it’s the quickest way to enact a desired action – usually away from a keyboard and out of earshot of anyone else. It does liberate my two youngest grandchildren, who are still learning to read and write. Those apart, it’s just another UI used occasionally – albeit I’m still in awe of folks who dictate their book writing efforts into Siri as they go about their day. I find it difficult to label this capability as disruptive (to what?).

Immersive Experiences (AR/VR/Mixed Reality). A short list of potential use cases once you get past technology searching for an application (cart before horse city). Jane trying out lipstick and hair colours. Showing the kids a shark swimming around a room, or what colour Tesla to put in our driveway. Measuring rooms and seeing what furniture would look like in situ if purchased. Is it Groundhog Day for Second Life, is there a battery of disruptive applications, or is it me struggling for examples? Not sure.

Smart Manufacturing. Described as transformative tech to watch; in the meantime, 3D printing. Not my area, but it feels to me like low volume, local production of customised parts, and I’m not sure how big that industry is, or how much stock can be released by putting instant manufacture close to end use. My dentist 3D prints parts of teeth while patients wait, but otherwise I’ve not had any exposure that I could translate as a disruptive application.

Computer Vision. Yes! A big one. I’m reminded of a Google presentation that recalled the point in prehistory when the number of different life form species on Earth vastly accelerated: the Cambrian Period, when life forms first developed eyes. A combination of cheap camera hardware components and excellent machine learning Vision APIs should be transformative, especially when data can be collected, extracted, summarised and distributed as needed. Everything from number plate, barcode or presence/not-present counters, through to the ability to describe what’s in a picture, or to transcribe the words recited in a video.

In the Open Source Software world, we reckon bugs are shallow as the source listing gets exposed to many eyes. When eyes get ubiquitous, there is probably going to be little that happens that we collectively don’t know about. The disruption is then at the door of privacy legislation and practice.

Artificial Intelligence for Services. The whole shebang in the article relates back to BOTs. I personally think it’s more nuanced; it’s being able to process “dirty” or mixed media data sources in aggregate, and to use the resulting analysis to both prioritise and improve individual business processes. Things like www.parlo.io’s Broca NLU product, which can build a suggested intent:action Service Catalogue from Natural Language analysis of support tickets, CRM data, instant message and support email content.

I’m sure there are other applications that can make use of data collected to help deliver better, more efficient or timely services to customers. BOTs, I fear, are only part of the story – with benefits accruing more to the service supplier than to the customer exposed to them. Your own mileage may vary.

Containers and Microservices. The whole section is a Minestrone Soup of Acronyms and total bollocks. If Simon Wardley was in a grave, he’d be spinning in it (but thank god he’s not).

Microservices are about making your organisation’s data and processes available to applications – internally facing, externally facing or both – using web interfaces. You typically work with Apigee (now owned by Google) or 3Scale (owned by Red Hat) to produce a well documented, discoverable, accessible and secure Application Programming Interface to the services you wish to expose. Sort out licensing and cost mechanisms, and away you go. This is a useful, disruptive trend.
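To make that concrete, here is a minimal sketch (in Python, using Flask) of exposing one internal lookup as a small web API; the /api/v1/orders endpoint and the data behind it are invented for illustration, and in practice a gateway such as Apigee or 3Scale would sit in front to handle keys, quotas and documentation:

    # Minimal microservice sketch: expose one internal lookup as an HTTP API.
    # The endpoint and the data behind it are invented for illustration only.
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    # Stand-in for an internal system of record (a database or ERP query).
    ORDERS = {
        "1001": {"customer": "Acme Ltd", "status": "shipped"},
        "1002": {"customer": "Globex", "status": "processing"},
    }

    @app.route("/api/v1/orders/<order_id>", methods=["GET"])
    def get_order(order_id):
        """Return one order as JSON, or a 404 if it is unknown."""
        order = ORDERS.get(order_id)
        if order is None:
            abort(404)
        return jsonify(order)

    if __name__ == "__main__":
        app.run(port=8080)

Point a gateway at something of that shape, publish the documentation, and other teams (or outside partners) can build against it without ever touching the underlying system.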

Containers are a standardised way of packaging applications so that they can be delivered and deployed consistently, with the number of instances orchestrated to handle variations in load. A side effect is that they are one way of getting applications running consistently on both your own server hardware and in different cloud vendors’ infrastructures.

There is a view in several circles that containers are an “interim” technology, and that the service they provide will get abstracted away out of sight once “Serverless” technologies come to the fore. Same with the “DevOps” teams that are currently employed in many organisations, to rapidly iterate and deploy custom code very regularly by mingling Developer and Operations staff.

With Serverless, the theory is that you should be able to write code once, and for it to be fired up, then scaled up or down based on demand, automatically for you. At the moment, services like Amazon AWS Lambda, Google Cloud Functions and Microsoft Azure Functions (plus the point database services used with them) are different enough that applications based on one are limited to that cloud provider only.
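As a rough illustration of the shape of this, a minimal AWS Lambda style function in Python is sketched below; the event field is invented for the example, but the handler(event, context) entry point is the standard Lambda convention, and the equivalents on Google Cloud Functions or Azure Functions differ enough that the code would need rework to move between providers:

    import json

    def handler(event, context):
        """Entry point that AWS Lambda invokes; 'event' carries the trigger data.
        The 'name' field used here is an invented example parameter."""
        name = event.get("name", "world")
        # The platform starts, scales and bills this code per invocation;
        # there is no server for the author to provision or patch.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }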

Serverless is the Disruptive Technology here. Containers are where the puck is, not where the industry is headed.

Blockchain. The technology that first appeared under Bitcoin is the Blockchain. A public ledger, distributed over many different servers worldwide, that doesn’t require a single trusted entity to guarantee the integrity (aka “one version of the truth”) of the data. It manages to ensure that transactions move reliably, and avoids the “Byzantine Generals Problem” – where malicious behaviour by actors in the system could otherwise corrupt its working.

Blockchain is quite a poster child for all sorts of applications (as a holder and distributor of value), and the focus of a lot of venture capital and commercial projects. Ethereum is one such open source, distributed platform for smart contracts. There are many others; there is even the use of virtual coins (ICOs) to act as a substitute for venture capital funding.

While it has the potential to disrupt, no app has yet broken through to mainstream use, and I’m conscious that some vendors have started to patent swathes of features around blockchain applications. I fear it will be a slow boil for a long time yet.

Cloud to Edge Computing. Another rather gobbledygook set of words. I think they really mean that there are applications that require good compute power at the network edge. Devices like LIDAR (the spinning sensor atop self-driving cars) typically consume several GB of data per mile travelled, where there is insufficient reliable bandwidth to delegate all the compute to a remote cloud server. So there are models of how a car should drive itself that are built in the cloud, but downloaded and executed in the car, without a high speed network connection needing to be in place while it’s driving. Basic event data (accident ahead, speed, any notable news) may be fed back as it goes, with more voluminous data shared back later when adjacent to a fast home or work network.

Very fast chips are a thing; the CPU in my Apple Watch is faster than a room-sized VAX-11/780 computer I used earlier in my career. The ARM processors in my iPhone and iPad Pro are 64-bit powerhouses (Apple’s semiconductor folks really hit it out of the park with every iteration they’ve shipped to date). Development environments for powerful, embedded systems are something I’ve not seen so far, though.

Digital Ethics. This is a real elephant in the room. Social networks have been built to fulfil the holy grail of advertisers, which is to lavish attention on the brands they represent in very specific target audiences. Advertisers are the paying customers. Users are the Product. All the incentives and business models align to these characteristics.

Political operators, both local as well as foreign actors, have fundamentally subverted the model. Controversial and most often incorrect and/or salacious stories get wide distribution before any truth emerges. Fake accounts and automated bots further pervert the engagement indicators that drive increased distribution (it was noticeable that one video segment of a Donald Trump speech got two orders of magnitude more “likes” than the number of people who actually played the video at all). Above all, messages that appeal to different filter bubbles drive action in some cases, and antipathy in others, directly undermining voting patterns.

This is probably the biggest challenge facing large social networks, at the same time that politicians (though themselves the root cause of much of the questionable behaviour, alongside their friends in other media) start throwing regulatory threats into the mix.

Many politicians are far too adept at blaming societal ills on anyone but themselves, and in many cases on defenceless outsiders. A practice repeated with alarming regularity around the world, appealing to isolationist bigotry.

The world will be a better place when we work together to make the world a better place, and to sideline these other people and their poison. Work to do.

Does your WordPress website go over a cliff in July 2018?

Secure connections, faster web sites, better Google search rankings – and well before Google throw a switch that will disadvantage many other web sites in July 2018. I describe the process to achieve this for anyone running a WordPress Multisite Network below. Or I can do this for you.

Many web sites that handle financial transactions use a secure connection; this gives a level of guarantee that you are posting your personal or credit card details directly to a genuine company. These “HTTPS” connections don’t just protect user data; they also ensure that the user is really connecting to the right site and not an imposter one. This is important because setting up a fake version of a website users normally trust is a favourite tactic of hackers and malicious actors. HTTPS also ensures that a malicious third party can’t hijack the connection and insert malware or censor information.

Back in 2014, Google asked web site owners if they could make their sites use HTTPS connections all the time, and provided both a carrot and a stick as incentives. On the stick side, they promised that future versions of their Chrome browser would explicitly call out sites that were presenting insecure pages, so that users knew where to tread very carefully. On the carrot side, they suggested that they would positively discriminate in favour of secure sites over insecure ones in future Google searches.

The final step in this process comes in July 2018:

New HTTP Treatment by Chrome from July 2018

The logistics of achieving “HTTPS” connections for many sites are far from straightforward. Like many service providers, I host a WordPress network that aims individual customer domain names at a single Linux-based server. That in turn looks at which domain name the inbound connection request has come from, and redirects to that customer’s own subdirectory structure for the page content, formatting and images.

The main gotcha is that if I tell my server that its certified identity is “www.software-enabled.com”, an inbound request from “www.ianwaring.com”, or “www.obesemanrowing.org.uk”, will get very confused. It will look like someone has hijacked the sites, and the user’s browser session will gain some very pointed warnings suggesting a malicious traffic subversion attempt.

A second gotcha – even if you solve the certified identity problem – is that a lot of the content of a typical web site contains HTTP (not HTTPS) links to other pages, pictures or video stored within the same site. It would normally be a considerable (and error prone) process to change http: to https: links on all pages, not least as the pages themselves for all the different customer sites are stored by WordPress inside a complex MySQL database.

What to do?

It took quite a bit of research, but I cracked it in the end. The process I used was:

  1. Set up each customer domain name on the free tier of the CloudFlare content delivery network. This replicates local copies of the web sites static pages in locations around the world, each closer to the user than the web site itself.
  2. Change the customer domain name’s Name Servers to the two cited by CloudFlare in step (1). It may take several hours for this change to propagate around the Internet, but there is no harm in continuing with these steps.
  3. Repeat (1) and (2) for each site on the hosted WordPress network.
  4. Select the WordPress “Network Admin” dashboard, and install two plug-ins: iControlWP’s “CloudFlare Flexible SSL”, and then WebAware’s “SSL Insecure Content Fixer”. The former handles the connections to the CloudFlare network (ensuring routing works without unexpected redirect loops); the latter changes http: to https: connections on the fly for references to content within each individual customer website. Network Enable both plugins. There is no need to install the separate CloudFlare WordPress plugin.
  5. Once CloudFlare’s web site shows each domain name as verified as being managed by CloudFlare’s own name servers, with their own certificates assigned (each will show either a warning or a tick), step through the “Crypto” screen on each one in turn, switching on “Always use HTTPS” redirections (a small API sketch of this last step follows the list).
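That final toggle can also be flipped programmatically rather than via the dashboard. A minimal sketch in Python is below; it assumes CloudFlare’s v4 API and its “always_use_https” zone setting, and the zone ID and credentials are placeholders – check CloudFlare’s current API documentation before relying on the exact endpoint or field names:

    # Sketch: switch on "Always use HTTPS" for one zone via the CloudFlare v4 API.
    # The zone ID, email and API key below are placeholders.
    import requests

    CF_API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your-zone-id-here"
    HEADERS = {
        "X-Auth-Email": "you@example.com",
        "X-Auth-Key": "your-api-key-here",
        "Content-Type": "application/json",
    }

    resp = requests.patch(
        f"{CF_API}/zones/{ZONE_ID}/settings/always_use_https",
        headers=HEADERS,
        json={"value": "on"},
    )
    resp.raise_for_status()
    print(resp.json())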

At this point, whether users access the websites using http: or https: (or don’t mention either), each will come up with a padlocked, secure, often greened address bar with “https:” in front of the web address of the site. Job done.

Once the HTTP redirects to HTTPS appear to be working, and all the content is being displayed correctly on pages, I go down the Crypto settings on the CloudFlare web site and enable “opportunistic encryption” and “HTTPS rewrites”.

In the knowledge that Google also give faster sites better rankings in search results than slow ones, there is also a “Speed” section on the CloudFlare web site. On this, I’ve told it to compress CSS, JavaScript and HTML pages – termed “Auto Minify” – to minimise the amount of data transmitted to the user’s browser while still rendering correctly. This, in combination with my use of a plug-in for Google’s AMP (Accelerated Mobile Pages) shortcuts – which in turn can give 3x load speed improvements on mobile phones – means all the customer sites are really flying.

CloudFlare do have a paid offering called “Argo Smart Routing” that further speeds up delivery of web site content. Folks are reported to be paying $5/month and seeing page loads in 35% of the time they took before it was enabled. You do start paying for the amount of traffic you’re releasing into the Internet at large, but the pricing tiers are very generous – and should only be noticeable for high traffic web sites.

So, secure connections, faster web sites, better Google search rankings – and well before Google throw the switch that will disadvantage many other web sites in July 2018. I suspect having hundreds of machines serving the content on CloudFlare’s Content Delivery Network will also make the site more resilient to distributed denial of service flood attack attempts, if any site I hosted ever got very popular. But I digress.

If you would like me to do this for you on your WordPress site(s), please get in touch here.

IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these reflect changes that go well beyond the industry move from CapEx-led investments to OpEx subscriptions of several years past, and indeed the wholesale growth in use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (which runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per software instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that is “where the experts to manage infrastructure and its security live” at scale.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, Softlayer in IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source, NoSQL (key:value document orientated) databases, and components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes, none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers: FOG in the Ruby ecosystem; CloudFoundry (termed Bluemix in IBM), which is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon Lambda. There are actually two different entities in the mix: one where you provide code and pay per invocation against external events, the other able to scale (or contract) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and to a lesser extent AWS are packaging up API calls for various core services and machine learning facilities. Eg: I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, the face bounds, and whether each is smiling or not (a minimal sketch of such a call follows this list). Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far, only for internal intranet type apps, not ones exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components – to map user needs against axes of product life cycle and value chains, and to suss the likely movement of components (which also tells you where to apply Six Sigma and where to apply agile techniques within the same organisation). It’s all more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
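For item 8 above, a minimal sketch of the Vision API face detection call looks something like the following in Python; it assumes the google-cloud-vision client library and credentials are already set up, the file name is a placeholder, and the exact field names should be checked against Google’s current documentation:

    # Face detection on a JPEG via Google's Vision API (sketch).
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("family_photo.jpg", "rb") as f:   # placeholder file name
        image = vision.Image(content=f.read())

    response = client.face_detection(image=image)

    for face in response.face_annotations:
        # Bounding box vertices plus a "joy" likelihood (is the face smiling?).
        box = [(v.x, v.y) for v in face.bounding_poly.vertices]
        print("Face at", box, "joy:", face.joy_likelihood)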

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Mobile Phone User Interfaces and Chinese Genius

Most of my interactions with the online world use my iPhone 6S Plus, Apple Watch, iPad Pro or MacBook – but with one eye on the next big things from the US West Coast, the current Venture Capital fads being Conversational Bots, Virtual Reality and Augmented Reality. I bought a Google Cardboard kit for my grandson to have a first glimpse of VR on his iPhone 5C, though I spent most of the time trying to work out why his handset was too full to install any of the Cardboard demo apps; 8GB, 2 apps, 20 songs and a storage list that only added up to 5GB of use. Hence having to borrow his Dad’s iPhone 6 while we tried to sort out what was eating up 3GB. Very impressive nonetheless.


The one device I’m waiting to buy is an Amazon Echo (currently USA only). It’s a speaker with six directional microphones, an Internet connection and some voice control smarts; these are extendable by use of an application programming interface and database residing in their US East Datacentre. Out of the box, you can ask its nom de plume “Alexa” to play a music single, album or wish list. To read back an audio book from where you last left off. To add an item to a shopping or to-do list. To ask about local outside weather over the next 24 hours. And so on.

Its real beauty is that you can define your own voice keywords into what Amazon term a “Skill”, and provide your own plumbing to your own applications using what Amazon term their “Alexa Skill Kit”, aka “ASK” (a minimal sketch of the request/response shape is below). There is already one UK Bank that has prototyped a Skill for the device to let their users enquire about their bank balance, primarily as an assist to the visually impaired. There are more in the USA to control home lighting and heating by voice commands (and I guess it would be very simple to give commands to change TV channels or to record for later viewing). The only missing bit is that of identity; the person speaking can be anyone in proximity to the device, or indeed any device emitting sound in the room; a radio presenter saying “Alexa – turn the heating up to full power” would not be appreciated by most listeners.
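For a feel of what a Skill’s plumbing looks like, here is a minimal sketch of a backend written as an AWS Lambda handler in Python. The skill and the “BankBalanceIntent” intent name are invented for illustration; the request/response JSON shape follows Alexa Skills Kit conventions of the time, so check Amazon’s current ASK documentation before relying on the exact field names:

    def lambda_handler(event, context):
        """Handle an Alexa Skills Kit request and return a spoken response."""
        request = event.get("request", {})

        if request.get("type") == "IntentRequest" and \
           request.get("intent", {}).get("name") == "BankBalanceIntent":
            speech = "Your current balance is one hundred and twenty pounds."
        else:
            speech = "Welcome. You can ask me for your balance."

        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }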

For further details on Amazon Echo and Alexa, see this post.

However, the mind wanders over to my mobile phone, and the disjointed experience it exposes to me when I’m trying to accomplish various tasks end to end. Data is stored in application silos. Enterprise apps quite often stop at a Citrix client turning your pocket supercomputer into a dumb (but secured) Windows terminal, where the UI turns into normal Enterprise app silo soup to go navigate.

Some simple client-side workflows can be managed by software like IFTTT – aka “IF This, Then That” – so I can get a new photo automatically posted to Facebook or Instagram, or notifications issued to me when an external event occurs. But nothing that integrates a complete buying experience. The current fad for conversational bots still falls well short; imagine the workflow of asking Alexa to order some flowers, as there are no visual cues to help that discussion and buying experience along.

For that, we’d really need to follow one of the Jeff Bezos edicts – wipe the slate clean, imagine the best experience from a user perspective, and work back from there. But the lessons have already been learnt in China, where desktop apps weren’t a step on the path to mobile deployments in society. An article that runs deep on this – and what folks can achieve within WeChat in China – is impressive. See: http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html

I wonder if Android or iOS – with the appropriate enterprise APIs – could move our experience on mobile handsets to a similar next level of compelling personal servant. I hope the Advanced Development teams at both Apple and Google – or a startup – are already prototyping such a revolutionary, notifications baked in, mobile user interface.

Apple Watch: My first 48 hours

To relate my first impressions of my Apple Watch (folks keep asking): I bought the Stainless Steel one with a Classic Black Strap.

The experience in the Apple Store was a bit too focussed on changing the clock face design; the experience of accepting the default face to start with, and using it for real, is (so far) much more pleasant. But take it off the charger, put it on, and you get:

Apple Watch PIN Challenge

Tap in your pin, then the watch face is there:

Apple Watch Clock Face

There’s actually a small (virtual) red/blue LED just above the “60” atop the clock – red if a notification has come in, turning into a blue padlock if you still need to enter your PIN, but otherwise what you see here. London Time, 9 degrees centigrade, 26th day of the current month, and my next calendar appointment underneath.

For notifications it feels are deserving of my attention, it not only lights the LED (which I only get to see if I flick my wrist up to look at the watch face), but also goes tap-tap-tap on my wrist. This optionally also sounds a small warning, but that’s something I switched off pretty early on. The taptic hint is nice, quiet and quite subtle.

Most of the set-up for apps and settings is done on the Apple iPhone you have paired up to the watch. Apps reside on the phone, and ones you already have that can talk to your watch are listed there. You can then select which ones you want to appear on the watch’s application screen, and a subset you want to have as “glances” for faster access. The structure looks something like this:

Apple Watch No Notifications | Apple Watch Clock Face

Apple Watch Heart Rate | Apple Watch Local Weather | Amazon Stock Quote | Apple Watch Dark Sky

 

Hence, swipe down from the top, you see the notification stream, swipe back up, you’re back to the clock face. Swipe up from the bottom, you get the last “glance” you looked at. In my case, I was every now and then seeing how my (long term buy and hold) shares in Amazon were doing after they announced the size of their Web Services division. The currently selected glance stays in place for next time I swipe up unless I leave the screen having moved along that row.

If I swipe from left to right, or right to left, I step over different “glances”. These behave like swiping between icon screens on an iPhone or iPad; if you want more detail, you can click on them to invoke the matching application. I have around 12 of these in place at the moment. Once done, swipe back up, and back to the clock face again. After around 6 seconds, the screen blacks out – until the next time you swing the watch face back into view, at which point it lights up again. Works well.

You’ll see it’s monitoring my heart rate, and measuring my movement. But in the meantime, if I want to call or message someone, I can hit the small button on the side and get a list of 12 commonly called friends:

Apple Watch Friends

Move the crown around, click the picture, and I can call or iMessage them directly. Text or voice clip. Yes, directly on the watch, even if my iPhone is upstairs or atop the cookery books in the kitchen; it has a microphone and a speaker, and works from anywhere over local WiFi. I can even see who is phoning me and take their calls on the watch.

If I need to message anyone else, I can press the crown button in and summon Siri; the accuracy of Siri is remarkable now. One of my sons sent an iMessage to me when I was sitting outside the Pharmacy in Boots, and I gave a full sentence reply (verbally) then told it to send – 100% accurately despite me largely whispering into the watch on my wrist. Must have looked strange.

There are applications on the watch, but these are probably a less used edge case; in my case, the view on my watch looks just like the layout I’ve given in the iPhone Watch app:

Apple Watch Applications

So, I can jump in to invoke apps that aren’t set as glances. My only surprise so far was finding that Facebook haven’t yet released their Watch or Messenger apps, though Instagram (which they also own) is there already. Then, tap tap on my wrist to tell me Paula Radcliffe had just completed her last London Marathon:

BBC News Paula Radcliffe

and a bit later:

Everton 3 Man Utd 0

Oh dear, what a shame, how sad (smirk – Aston Villa fan typing). But if there’s a flurry of notifications, and you just want to clear the lot off in one fell swoop, just hard press the screen and…

Clear All Notifications

Tap the X and zap, all gone.

There are a myriad of useful apps; I have Dark Sky (which gives you a hyper local forecast of any impending rain), Citymapper (which helps direct you around London on all the different forms of transport available), Uber, and several others. They are there in the application icons, but also enabled from the Watch app on my phone (Apps, then the subset selected as Glances):

Ians Watch Apps Ians Watch Glances

With that, tap tap on my wrist:

Apple Watch Stand Up!

Hmmm – I’ve been sitting for too long, so time to get off my arse. It will also assess my exercise in the day and give me some targets to achieve – which it’ll then display for later admiration. Or disgust.

There is more to come. I can already call an Uber taxi directly from the watch. The BBC News glance rotates the few top stories if selected. Folks in the USA can already use it to pay at any NFC cash terminal with a single click (if the watch comes off your wrist, it senses this and will insist on a PIN afterwards). Twitter gives notifications and has a glance that reports the top trending hashtag when viewed.

So far, the battery is only getting from 100% down to 30% in regular use from 6:00am in the morning until 11:30pm at night, so looking good. Boy, those Amazon shares are going up; that’ll pay for my watch many times over:

Watch on Arm

Overall, impressed so far, very happy with it, and I’m sure it’s the start of a world where software steps submerge into a world of simple notifications and responses to same. And I’m sure Jane (my wife) will want one soon. Just have to wean her off her desire for the £10,000+ gold one to match her gold coloured MacBook.

Hooked, health markets but the mind is wandering… to pooh and data privacy

Hooked by Nir Eyal

One of the things I learnt many years ago was that there were four fundamental basics to increasing profits in any business. You sell:

  • More Products (or Services)
  • to More People
  • More Often
  • At higher unit profit (which is higher price, lower cost, or both)

and with that, four simple Tableau graphs against a timeline could expose the business fundamentals explaining good growth, or the core reason for declining revenue. They could also expose early warning signs, where a small number of large transactions hid an evolving surprise – like the volume of buying customers trending relentlessly down, while the revenue numbers appeared to be flying okay.

Another dimension is that a Brand equates to trust, and that consistency and predictability of the product or service plays a big part to retain that trust.

Later on, a more controversial view was that there were two fundamental business models for any business: that of a healer or a dealer. One sells an effective one-shot fix to a customer need, while the other survives by engineering a customer’s dependency to keep them returning.

With that, I sometimes agonise over what the future of health services delivery is. On the one hand, politicians’ verbal jousts over funding, and attempts to punt services over to private enterprise – in several cases to providers following the economic rent (dealer) model found in the American market, which, at face value, has a business model needing per capita expense that no sane person would want to replicate compared to the efficiency we have already. On the other hand, a realisation that the market is subject to radical disruption, through a combination of:

  • An ever better informed, educated customer base
  • A realisation that just being overweight is a root cause of many adverse trends
  • Genomics
  • Microbiome Analysis
  • The upcoming ubiquity of sensors that can monitor all our vitals

With that, I’ve started to read “Hooked” by Nir Eyal, which is all about the psychology of engineering habit forming products (and services). The thing in the back of my mind is how to encourage the owner (like me) of a smart watch, fitness device or glucose monitor to fundamentally remove my need to enter my food intake every day – a habit I’ve maintained for 12.5 years so far.

The primary challenge is that, for most people, there is little newsworthy data that comes out of this exercise most of the time. The habit would be difficult to reinforce without useful news or actionable data. Some of the current gadget vendors try to drive usage with step-count competition league tables you can share with family and friends (I’ve done this with relatives in West London, Southampton, Tucson Arizona and Melbourne Australia; that challenge finished after a week and has yet to be repeated).

My mind started to wander back to the challenge of disrupting the health market, and how a watch could form a part. Could its sensors measure my fat, protein and carb intake (which is the end result of my food diary data collection, along with weekly weight measures)? Could I build a service that would be a data asset to help disrupt health service delivery? How do I suss Microbiome changes – which normally require analysis of stool samples?

With that, I start to think I’m analysing this the wrong way around. I remember an analysis some time back where a researcher assessed the extent of drug (mis)use in specific neighbourhoods by monitoring the make-up of chemical flows in networks of sewers. So, rather than put sensors on people’s wrists (and only see a subset of data), is there a place for technology in sewer pipes instead? If Microbiomes and the genetic makeup of our output survive relatively intact, then sampling at strategic points of the distribution network would give us a pretty good dataset. Not least as DNA sequencing could allow the original owner (source) of output to connect back to any pearls of wisdom that could be analysed or inferred from their contributions, even if the drop-off points happened at home, work or elsewhere.

Hmmm. Water companies and Big Data.

Think i’ll park that and get on with the book.

New Mobile Phone or Tablet? Do this now:

Find My iPhone – Real Map

If you have an iPhone or iPad, install “Find My iPhone”. If you have an Android phone or tablet, install “Android Device Manager”. Both are free of charge, and will prevent you looking like a dunce on social media if your device gets lost or stolen. Instead, you can get your phone’s (or tablet’s) current location like that above – from any Internet connection.

If you do, just log in to iCloud or Android Device Manager on the web, and voila – it will draw its location on a map – and allow various options (like putting a message on the screen, turning it into a remote speaker that the volume control can’t mute, or wiping the device).

Phone lost in undergrowth and the battery about to die? Android phones will routinely bleat their location to the cloud before all power is lost, so ADM can still remember where you should look.

So, how does a modern smartphone work out where you are? For the engineering marvel that is the Apple iPhone, it sort of works like this:

  1. If you’re in the middle of an open field with the horizon visible in all directions, your handset will be able to pick up signals from up to 14 Global Positioning System (GPS) satellites. If it sees at least 3 of them (with the remainder obscured by buildings, structures or your car roof, etc), it can work out your x and y co-ordinates to within around 3 meters – worldwide. If it can see at least 4 of the 14 satellites, then it can work out your elevation above sea level too.
  2. Your phone will typically be communicating its presence to a local cell tower. Your handset knows the approximate location of these, albeit in distances measured in kilometers or miles. Its primary use is to suss which worldwide time zone you are in; that’s why your iPhone sets itself to the correct local time when you switch on your handset at an airport after your flight lands.
  3. Your phone will sense the presence of WiFi routers and reference a database that associates each router’s unique Ethernet address with the location where it is consistently found (by other handsets, or by previous data collection when building online street view maps). Such signals are normally within a 100-200 meter range. This range is constrained because WiFi usually uses the 2.4GHz band, which is the frequency at which a microwave oven agitates and heats water; the fact the signal suffers badly in rain is why it was primarily intended for use inside buildings.

A combination of the above is sensed and combined to drill down to your phone’s time zone; its location within a mobile phone cell area (which can be a few hundred yards across in densely populated areas, or miles in large rural areas or open countryside); to being close to a specific WiFi router; or (all else being well) your exact GPS location to within 10 feet or so.

A couple of extra capabilities feature on the latest iPhone and Android handsets to extend location coverage to areas in large internal buildings and shopping centres, where the ability for a handset to see any GPS satellites is severely constrained or absent altogether.

  • One is Low Energy Bluetooth Beacons. Your phone can sense the presence of nearby beacons (or, at your option, be one itself); these normally carry an identifier, one half associated with a particular retail organisation and the other half unique to each beacon unit (it is up to the organisation to map each beacon’s location and associated attributes – like “this is the Perfume Department Retail Sale Counter on Floor 2 of the Reading Department Store”). An application can tell whether it can sense the signal at all, if you’re within 10′ of the beacon, or if the handset is immediately adjacent to the beacon (eg: handset being held against a till).

You’ll notice that there is no central database of bluetooth beacon locations and associated positions and attributes. The handset manufacturers are relatively paranoid that they don’t want a handset user being spammed incessantly as they walk past a street of retail outlets; hence, you must typically opt into the app of specific retailers to get notifications at all, and to be able to switch them off if they abuse your trust.

  • Another feature of most modern smartphone handsets is the presence of miniature gyroscopes, accelerometers and magnetic sensors in every device. Hence the ability to know how the phone is positioned, in both magnetic compass direction and its orientation in 3D space, at all times. It can also sense your speed from the force and direction of your movements. So even in an area or building with no GPS signal, your handset can fairly accurately suss your position from the last moment it had a quality location fix on you, augmented by the directions and speeds you’ve followed since. An example of the history recorded around a typical shopping centre can look like this:

Typically, apps don’t lock onto your positioning full time; users will know how their phone batteries tend to drain much faster when their handsets are used with all sensors running full time in a typical app like Google Maps (in Navigation mode) or Waze. Instead, they tend to fill a location history, so a user can retrieve their own historical movement history or places they’ve recently visited. I don’t know of any app that uses this data, but I know in Apple’s case you’d have to give specific permission to an app to use such data with your blessing (or it would get no access to it at all). So, mainly for future potential use.

As for other location apps – Apple Passbook is already throwing my Starbucks card onto my iPhone’s lock screen when I’m close to a Starbucks location, and likewise my boarding card at a Virgin Atlantic check-in desk. I also have another app (Glympse) that messages my current map location, speed and ETA (continuously updated) to any person I choose to share that journey with – normally my wife when on the train home, or my boss if affected by travel delays. But I am sure there is more to come.

In the meantime, I hope people just install “Find my iPhone” or “Android Device Manager” on any phone handset they buy or use. They both make life less complicated if your phone or tablet ever goes missing. And you don’t get to look like a dunce for not taking the precautions up front that any rational thinking person should take.

Grain Brain: modern science kills several fundamental diet myths

Having tracked my own daily food consumption (down to carbs, protein, fat levels, plus nett calories) and weekly weight since June 2002, I probably have an excessive fascination with trying to work out which diets work. All in an effort to spot the root causes of my weight ebbs and flows. I think I’ve sort of worked it out (for me here) and have started making significant progress recently simply by eating far fewer calories than my own Basal Metabolic Rate.

Alongside this has been my curiosity about Microbiomes that outnumber our own cells in our bodies by 10:1, and wondering what damage Antibiotics wreak on them (and their otherwise symbiotic benefits to our own health) – my previous blog post here. I have also been agonising over what my optimum maintenance regime should be when I hit my target weight levels. Above all, thinking a lot about the sort of sensors everyone could employ to improve their own health as mobile based data collection technology radically improves.

I don’t know how I zeroed in on the book “Grain Brain”, but it’s been quite a revelation to me, and it largely boots both the claims and motivations of newspapers, the pharmaceutical industry and many vogue diets well into touch. This is backed up by voluminous, cited research conducted over the last 30 years.

A full summary would be very much “too long, didn’t read” territory. That said, the main points are:

  1. Little dietary fat, and less than 20% of the cholesterol you consume, makes it into your own storage mechanisms; most cholesterol is manufactured by your liver
  2. There is no scientific basis to support the need for low cholesterol foods; the allegation that it has an effect in blocking arteries is over 30 years old and statistically questionable. In fact, brain function (and defence against Alzheimer’s and other related conditions) directly benefits from high cholesterol and high fat diets.
  3. The chief source of body fat is from consumption of Carbohydrates, not fat at all. So called low fat diets often substitute carbs and sugars, which further exacerbate the very weight problems that consumers try to correct.
  4. Gluten as found in cereals is a poison. Whereas some plants openly encourage consumption of their seeds by animals to facilitate distribution of their payload, wheat gluten is the other sort of material – designed specifically to discourage consumption. There is a material effect on the body functions that help distribute nutrition to the brain.
  5. Excessive consumption of carbs, and the resulting effect on weight, is a leading cause of type 2 diabetes. It also has an oxidising effect on cholesterol in the body, reducing its ability to carry nutrients to the brain (which is, for what it’s worth, 80% fat).
  6. Ketosis (the body being in a state where it is actively converting stored fats into energy) is a human norm. The human body is designed to be able to manage periods of binge then bust systematically. Hence the occasional fasting regimes of many religions carry useful health benefits.
  7. The human genome takes 60-70,000 years to evolve to manage changes in diet, whereas human consumption has seen an abrupt change from heavy fat and protein diets to a diet majoring on cereal and carbs in only the last 10,000 years. Our relatively recent diet changes have put our bodies under siege.

The sum effect is guidance to err on the side of much greater fat/protein content, and fewer carbs, in the diet – even if it means avoiding the Cereals Aisle at the supermarket at all costs. And for optimum health, to try to derive energy from a diet that is circa 80% fat and protein, 20% carbs (my own historical norm is 50-55% carbs). Alcohol is generally a no-no, albeit a glass of red wine at night does apparently help.

Note that the energy derived from each is different; 1g of protein typically provides 4 kcals, 1g of fat 9 kcals, and 1g of carbs 3.75 kcals. Hence there is some arithmetic involved to calculate the “energy derived” mix from your eating (fortunately, the www.weightlossresources.co.uk web site does this automatically for you, converting your food intake detail into a nice pie chart as you go). A small worked example follows.
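As a worked example of that arithmetic (with invented intake figures for one day), in Python:

    # Worked example of the "energy derived" mix, using invented daily intake.
    KCAL_PER_GRAM = {"protein": 4.0, "fat": 9.0, "carbs": 3.75}

    intake_grams = {"protein": 120, "fat": 90, "carbs": 180}   # example day

    energy = {k: grams * KCAL_PER_GRAM[k] for k, grams in intake_grams.items()}
    total = sum(energy.values())

    for nutrient, kcals in energy.items():
        print(f"{nutrient}: {kcals:.0f} kcal ({100 * kcals / total:.0f}% of energy)")
    print(f"total: {total:.0f} kcal")

On those example numbers, roughly 65% of the energy comes from fat and protein and 35% from carbs – some way short of the 80/20 split the book recommends.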

There is a lot more detail in the book relating to how various bodily functions work, and what measures are leading indicators of health or potential issues. That’s useful for my sensor thinking – and to see whether widespread regular collection of data would become a useful source for spotting health issues before they become troublesome.

One striking impression i’m left with is how much diet appears to have a direct effect on our health (or lack thereof), and to wonder aloud if changes to the overall carbs/protein/fat mix we consume would fix many of the problems addressed by the NHS and by Pharmaceutical Industries at source. Type 2 Diabetes and ever more common brain ailments in old age appear to be directly attributable to what we consume down the years, and our resulting weight. Overall, a much bigger subject, and expands into a philosophical discussion of whether financial considerations drive healer (fix the root cause) or dealer (encourage a dependency) behaviours.

For me personally, the only effect is what my diet will look like in 2015 after I get to my target weight and get onto maintenance. Most likely all Bread and Cereals out, Carb/Cake treats heavily restricted, Protein and Fat in.

I think this is a great book. Bon Appétit.

Footnote: I’m also reminded that the only thing that cured my wife’s psoriasis on her hands and feet for a considerable time were some fluids to consume prescribed by a Chinese herbal doctor, and other material applied to the skin surface. He cited excess heat and the need for yin/yang balance, and prescribed material to attempt to correct things. Before you go off labelling me as a crackpot, this was the only thing that cured her after years of being prescribed steroid creams by her doctor; a nurse at her then doctor’s surgery suggested she try going to him, on condition of her anonymity, as she thought she’d lose her job if the doctors knew – but she said he had been able to arrest the condition in many people she knew who had tried.

I suspect that the change in diet and/or setting conditions right for symbiotic microbiomes in her skin (or killing off the effect of temporarily parasitic ones) helped. Another collection of theories to add to the mix if technology progresses to monitor key statistics over millions of subjects with different genetic or physiological characteristics. Then we’ll have a better understanding, without relying on unfounded claims of those with vested interests.

 

iPhone 6 Plus – Initial Impressions

iPhone 6 Plus

Having measured my Nexus 5 (in its protective case), it was just over 3″ wide – nominally exactly the same width as the new iPhone 6 Plus. Hence I was happy, despite the 6 Plus being 1/2″ taller, that it would fit in my pocket. With that in mind, and with its impressive looking specs, I pre-ordered a Space Grey 64GB iPhone 6 Plus the minute I could do a quick hop, skip and jump through their online store (unlike previous releases, the website was up before the iOS Apple Store app on my iPad). Promised to be delivered on launch day (September 19th), it was delivered that day at 8:50am. With that, the last week’s journey began.

Overall, delighted with the device. It is very snappy, the fingerprint reader works fast and reliably, and I duly installed my complete set of apps on it. Just a few issues with iOS 8.0.0 as shipped:

  1. When out and about, it will occasionally lose all WiFi connectivity. This only happened about once per day on the 6 Plus, and a reboot of the handset (which is now fast) fixed it.
  2. On my iPad Mini, I got the occasional keyboard freeze (normally mid typing when a notification popped up). Reboot fixed that.
  3. After upgrading my iPad Mini, icons representing Safari web pages (as saved with “Save to Home Screen”) got replaced with a blank target graphic: Missing Icon from Save to HomeScreen
  4. Finally, a few apps hadn’t been updated by the time I received the iPhone 6 Plus. One was Instagram Hyperlapse, where the screen progressively lightened as video footage was taken, and on saving, it claimed it wasn’t able to stabilise the footage. HealthKit apps also needed a fix before supporting apps were going to be put back on the App Store.

Having installed iOS 8.0.2 today (one week after the initial release of 8.0.0), issues (1), (2) and (4) above have been fixed, (3) is still outstanding, and most of my apps have updated. Just waiting for things like the Fitbit app to support HealthKit now.

The unfortunate thing is the usual carping in the media about bending iPhones (something that can happen even to an iPhone 5S if you put all your weight on it) – which only affected 9 (single digit!) out of the 10 million handsets shipped on the launch weekend. And all the usual worthless iOS vs Android verbal and written tennis.

Personally, I’m absolutely delighted with my iPhone 6 Plus. Everything is working well, and the speed and graphics quality are simply astounding. The battery lasts 2 days of use, and the camera quality is wonderful. With that, I’ll leave you with the apps I use:

  1. Home Screen (I have an Extras folder there containing apps I rarely or never expect to use: Compass, Tips, Voice Memos, Contacts, Find iPhone, Find Friends, Podcasts, Game Center, GarageBand and iTunesU).
  2. Work and Comms Apps
  3. Social Networks
  4. Money and Shopping
  5. Television and Video
  6. Travel
  7. Reading Material

Click to see larger images. Enjoy.

Ian Waring iPhone 6+ HomeScreen

Ian Waring iPhone 6+ Work & Comms Page

Ian Waring iPhone 6+ Social Page

Ian Waring iPhone 6+ Money & Shopping

Ian Waring iPhone 6+ TV and Video Page

Ian Waring iPhone 6+ Travel Page

Ian Waring iPhone 6+ Reading Page

Yo! Minimalist Notifications, API and the Internet of Things

Yo Logo

I thought it was a joke, but having 4 hours of code result in $1m of VC funding, at an estimated $10M company valuation, raised quite a few eyebrows. The Yo! project team have now released their API, and with it some possibilities – over and above the initial ability to just say “Yo!” to a friend. At the time he provided some of the funds, John Borthwick of Betaworks said that there is a future in delivering binary status updates, or even commands to objects to throw an on/off switch remotely (blog post here). The first green shoots are now appearing.

The main enhancement is the ability to carry a payload with the Yo!, such as a URL. Hence your Yo!, when received, can be used to invoke an application or web page with a bookmark already put in place. That facilitates a notification, which is effectively guaranteed to have arrived, to say “look at this”. Probably extensible to all sorts of other tasks.

The other big change is the provision of an API, which allows anyone to create a Yo! list of people to notify against a defined name. So, in theory, I could create a virtual user called “IANWARING-SIMPLICITY-SELLS”, and publicise that to my blog audience. If any user wants to subscribe, they just send a “Yo!” to that user, and bingo, they are subscribed and it is listed (as another contact) on their phone handset. If I then release a new blog post, I can use a couple of lines of JavaScript or PHP to send the notification to the whole subscriber base, carrying the URL of the new post; one key press to view (a minimal sketch is below). If anyone wants to unsubscribe, they just drop the username on their handset, and the subscriber list updates.
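As a sketch of what those “couple of lines” might look like (in Python rather than JavaScript or PHP), the call below notifies every subscriber of a Yo! username about a new post; the endpoint and parameter names follow Yo’s API as documented at the time, and the token and URL are placeholders – check the current developer docs before use:

    # Notify all subscribers of a Yo! username, carrying a link to the new post.
    import requests

    YO_API_TOKEN = "your-yo-api-token"                      # placeholder
    NEW_POST_URL = "https://www.ianwaring.com/latest-post"  # placeholder

    resp = requests.post(
        "https://api.justyo.co/yoall/",
        data={"api_token": YO_API_TOKEN, "link": NEW_POST_URL},
    )
    resp.raise_for_status()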

Other applications described include:

  • Getting a Yo! when a FedEx package is on its way
  • Getting a Yo! when your favourite sports team scores – “Yo us at ASTONVILLA and we’ll Yo when we score a goal!”
  • Getting a Yo! when someone famous you follow tweets or posts to Instagram
  • Breaking News from a trusted source
  • Tell me when this product comes into stock at my local retailer
  • To see if there are rental bicycles available near to you (it can Yo! you back)
  • You receive a payment on PayPal
  • To be told when it starts raining in a specific town
  • Your stock positions go up or down by a specific percentage
  • Tell me when my wife arrives safely at work, or our kids at their travel destination

but I guess there are other “Internet of Things” applications to switch on home lights, open garage doors, switch on (or turn off) the oven. Or to Yo! you if your front door has opened unexpectedly (carrying a link to the picture of who’s there?). Simple one click subscriptions. So, an extra way to operate Apple HomeKit (which today controls home appliance networks only through Siri voice control).

Early users are showing simple RESTful URLs and HTTP GET/POSTs to trigger events via the Yo! API. I’ve also seen someone say that it will work with CoAP (the Constrained Application Protocol), a lightweight protocol stack suitable for use within simple electronic devices.

Hence, notifications that are implemented easily and over which you have total control. Something Apple appear to be anal about, particularly in a future world where you’ll be walking past low energy bluetooth beacons in retail settings every few yards. Your appetite to be handed notifications will degrade quickly with volumes if there are virtual attention beggars every few paces. Apple have been locking down access to their iBeacon licensees to limit the chance of this happening.

With the Yo! API, we have the first of many notification services (alongside Google Now, and Apple’s own notification services), and a simple one at that. One that can be mixed with IFTTT (If This, Then That), a simple web based logic and task action system. And one which may well be accessible directly from embedded electronics around us.

The one remaining puzzle is how the authors will be able to monetise their work (their main asset is an idea of the type and frequency of notifications you welcome receiving, and that you seek). Still a bit short of Google’s core business (which historically was to monetise purchase intentions) at this stage in Yo!’s development. So, suggestions in the case of Yo! most welcome.