Kibo: Teaching Robotics to kids?

Kibo Robotic Kit - Kickstarter

I’m punch drunk from the number of initiatives to support teaching programming to young kids, so my priority is to see ScratchJr make it into UK schools – if indeed the teachers think it would be a positive influence, firing up the imagination of their classes of 5-7 year old prospective programmers on their iPads.

That said, another US initiative has gone live on Kickstarter, this time for Kibo – a robot that kids program with a sequence of command bricks. No compute hardware needed with this – it’s all in the box.

The full details (and funding page) can be found here. They’re already halfway to their target. What do you think?

CloudKit – now that’s how to do a secure Database for users

Data Breach Hand Brick Wall Computer

One of the big controversies here relates to the appetite of the current UK government to release personal data with only the most basic understanding of what constitutes personally identifiable information. The lessons are there in history, but I fear that, without knowing the context of the infamous AOL data leak, we are destined to repeat it. With it goes personal information that we typically hold close to our chests, and whose exposure may cause personal, social or (in the final analysis) financial prejudice.

When plans were first announced to release NHS records to third parties, and in the absence of what I thought were appropriate controls, I sought (with a heavy heart) to opt out of sharing my medical history with any third party – and instructed my GP accordingly. I’d gladly share everything with satisfactory controls in place (medical research is really important and should be encouraged), but I felt that insufficient care was being exercised. That said, we’re more than happy for my wife’s Genome to be stored in the USA by 23andMe – a company that demonstrably satisfied our privacy concerns.

It therefore came as quite a shock to find that a report, highlighting which third parties had already been granted access to health data with Government-mandated approval, ran to a total of 459 data releases to 160 organisations (last time I looked, that was 47 pages of PDF). See this and the associated PDFs on that page. Given the level of controls, I felt this was outrageous. Likewise the plans to release HMRC-related personal financial data, again with soothing words from ministers who, given the NHS data implications, appear to have no empathy for the gross injustices likely to result from their actions.

The simple fact is that what constitutes individually identifiable information needs to be framed not only by which data fields are shared with a third party, but also by how the processing party applies that data – not least if there is any suggestion that the data is to be combined with other data sources, which could in turn triangulate back to make seemingly “anonymous” records traceable to a specific individual. Which is precisely what happened in the AOL data leak example cited.

With that, and on a somewhat unrelated technical/programmer-orientated journey, I set out to learn how Apple had architected its new CloudKit API, announced this past week. This articulates the way in which applications running on your iPhone handset, iPad or Mac have a trusted way of accessing personal data stored (and synchronised between all of a user’s Apple devices) “in the Cloud”.

The central identifier that Apple associate with you, as a customer, is your Apple ID – typically an email address. In the Cloud, they give you access to two databases on their cloud infrastructure: one public, the other private. However, the second you try to create or access a table in either, the API accepts your iCloud identity and spits back a hash unique to your identity and to the application on the iPhone asking to process that data. Different application, different hash. And everyone’s data is in there, so it is immediately impossible to triangulate disparate data in a way that traces back to uniquely identify a single user.
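
To illustrate the property being described – a conceptual sketch in Python, not Apple’s actual implementation (the server secret and derivation scheme here are my own invention) – a stable, per-application opaque user ID could be derived like this:

```python
import hmac
import hashlib

# Hypothetical server-side secret; CloudKit's real derivation is not public.
SERVER_SECRET = b"hypothetical-server-side-secret"

def opaque_user_id(icloud_identity: str, app_container: str) -> str:
    """Derive a stable hash for a user that differs per application,
    so records cannot be joined across apps to identify one person."""
    message = f"{icloud_identity}:{app_container}".encode()
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()
```

The same identity asked for by the same app always yields the same ID (so the app can keep working with its own records), while a different app gets an unrelated ID for the very same user.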

Apple take this one stage further, in that any application that asks for any personally identifiable data (like an email address, age, postcode, etc.) from any table has to have access to that information specifically approved by the handset’s end user; no explicit permission (on a per-application basis), no data.

The data maintained by Apple – besides holding personal information, health data (with HealthKit), details of home automation kit in your house (with HomeKit), and not least your credit card data stored to buy Music, Books and Apps – makes full use of this security model. And they’ve dogfooded it, so that third-party application providers use exactly the same model and the same back-end infrastructure. Which is also very, very inexpensive (data volumes go into petabytes before you spend much money).

There are still some nuances I need to work out. I’m used to SQL databases and to some NoSQL database structures (I’m MongoDB certified), but it’s not clear, based on looking at the way the database works, which engine is being used behind the scenes. It appears to be a key:value store with some garbage collection mechanics that look like a hybrid file system. It also has the capability to store “subscriptions”, so if specific criteria appear in the data store, specific messages can be dispatched to the user’s devices over the network automatically. Hence things like new diary appointments in a calendar can be synced across a user’s iPhone, iPad and Mac transparently, without the need for each to waste battery power polling the large database on the server for events that are likely to arrive infrequently.
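
The subscription idea can be sketched as follows – an illustrative Python model of my own, not the real CloudKit engine: clients register a predicate once, and the server fans out a notification whenever a matching record is saved, so devices never poll.

```python
class SubscriptionStore:
    """Toy model: devices subscribe with a predicate; saves push to them."""
    def __init__(self):
        self.subscriptions = []   # (device, predicate) pairs
        self.outbox = []          # notifications queued for delivery

    def subscribe(self, device: str, predicate):
        self.subscriptions.append((device, predicate))

    def save_record(self, record: dict):
        # On every save, notify matching subscribers instead of
        # waiting for them to ask.
        for device, predicate in self.subscriptions:
            if predicate(record):
                self.outbox.append((device, record))

store = SubscriptionStore()
store.subscribe("janes-iphone", lambda r: r.get("type") == "appointment")
store.subscribe("janes-ipad", lambda r: r.get("type") == "appointment")
store.save_record({"type": "appointment", "title": "Dentist"})
```

Both subscribed devices receive the new appointment as soon as it is saved, which is the battery-friendly inversion of polling described above.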

The final piece of the puzzle I’ve not worked out yet is, if you have a large database already (say of the calories, carbs, protein, fat and weights of thousands of foods in a nutrition database), how you’d get that loaded into an instance of the public database in Apple’s Cloud. Other than writing custom loading code, of course!

That apart, I’m really impressed by how Apple have designed the datastore to ensure the security of users’ personal data, and to ensure an inability to triangulate data between information stored by different applications. If any personally identifiable data is requested by an application, the user of the handset has to specifically authorise its disclosure for that application only. And the app cannot even sense whether the data is present at all ahead of that release permission (so, for example, if a health app wants to gain access to your blood sampling data, it doesn’t know whether that data even exists before the permission is given – so the app can’t infer that you probably have diabetes, which would be possible if it could deduce that you were recording glucose readings at all).
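
That blindness property can be sketched like this – a hypothetical API of my own, not Apple’s: a denied request and a request for data that was never recorded produce exactly the same answer, so nothing can be inferred from the response alone.

```python
def readings_for_app(db: dict, permissions: set, app: str, measure: str) -> list:
    """Return readings only if (app, measure) access was granted.
    Denied access and absent data are deliberately indistinguishable."""
    if (app, measure) not in permissions:
        return []                   # denied: looks exactly like "no data"
    return db.get(measure, [])      # granted: return whatever exists

db = {"glucose": [5.4, 6.1]}
no_permission = readings_for_app(db, set(), "health-app", "glucose")
not_recorded = readings_for_app({}, {("health-app", "glucose")}, "health-app", "glucose")
```

Both calls return an empty list, which is the whole point: an app cannot distinguish “user denied me access to glucose data” from “user never records glucose at all”.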

In summary, impressive design and a model that deserves our total respect. The more difficult job will be to get the same mindset in the folks looking to release our most personal data that we shared privately with our public sector servants. They owe us nothing less.

A first look at Apple HomeKit

Apple HomeKit Logo

Today’s video from Apple’s Worldwide Developers Conference viewing concerned HomeKit, which is the integration platform to control household appliances from your iPhone. Apple have defined a common set of Accessory Profiles, which are configured into a Home > Zone > Room hierarchy (you can define several ‘home’ locations, but one of them is normally selected as the primary one). Native devices include:

  • Garage Door Openers (with associated lighting)
  • Lights
  • Door locks
  • Thermostats
  • IP (Internet Protocol) Cameras
  • Switches

Currently, there is a myriad of different per-vendor standards to control home automation products, but Apple are providing functionality to enable hardware (or software) bridges between disparate protocols and their own. Once a bridge has been discovered, the iPhone sees all the devices sitting on the other side of the bridge as if they were directly connected to the iPhone and using the Apple-provided interface protocols.
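
The bridging idea is a classic adapter pattern, which can be sketched like this (the vendor class and its command codes are invented for illustration; this is not the HomeKit API):

```python
class Accessory:
    """The common interface the phone expects to talk to."""
    def set_power(self, on: bool): ...

class VendorBulb:
    """Imaginary third-party bulb speaking its own wire protocol."""
    def __init__(self):
        self.level = 0
    def send_command(self, code: str):
        self.level = 100 if code == "PWR1" else 0

class BulbBridge(Accessory):
    """Adapts the vendor protocol to the common accessory interface."""
    def __init__(self, bulb: VendorBulb):
        self.bulb = bulb
    def set_power(self, on: bool):
        # Translate the common call into the vendor's command codes.
        self.bulb.send_command("PWR1" if on else "PWR0")

bulb = VendorBulb()
bridge = BulbBridge(bulb)
bridge.set_power(True)
```

To the phone, the bridged bulb looks exactly like a natively-supported accessory; only the bridge knows the vendor dialect.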

Every device type has a set of characteristics, such as:

  • Power State
  • Lock State
  • Target State
  • Brightness
  • Model Number
  • Current Temperature
  • etc

When devices are first defined, each has a compulsory “identify me” action. Hence if you’re sitting on the floor, trying to work out which of twelve identical-looking lightbulbs in the room should get which name, the “identify me” action on the iPhone pick list will make the matching bulb blink twice; a security camera might blink a colour LED, and so forth.

Each device – its room name, zone (like “upstairs” or “back garden”) and home name, plus the common characteristic actions – is encoded and enacted using Siri, Apple’s voice control on the iPhone. “Switch on all downstairs lights”, “Set the room temperature to 20 degrees C” and so forth are spoken into your iPhone handset. That is the default user interface for the whole home automation setup. The HomeKit-resident database is in turn also available for use by vendor-specific products via the HomeKit API, should a custom application be desirable.
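
A toy model of resolving a spoken command like “switch on all downstairs lights” against the Home > Zone > Room hierarchy might look like this (my own data model, not the HomeKit API):

```python
# Hypothetical hierarchy: home -> zones -> rooms -> devices with characteristics.
home = {
    "name": "Primary Home",
    "zones": {
        "downstairs": {"lounge": [{"type": "light", "power": False},
                                  {"type": "light", "power": False}],
                       "kitchen": [{"type": "light", "power": False}]},
        "upstairs":   {"bedroom": [{"type": "light", "power": False}]},
    },
}

def switch_lights(home: dict, zone: str, on: bool) -> int:
    """Set the Power State characteristic on every light in a zone;
    return how many devices were changed."""
    changed = 0
    for room in home["zones"][zone].values():
        for device in room:
            if device["type"] == "light":
                device["power"] = on
                changed += 1
    return changed
```

Calling `switch_lights(home, "downstairs", True)` touches the three downstairs lights and leaves the upstairs bedroom untouched, which is the scoping behaviour the zone names give Siri.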

There are of course extensive security controls to frustrate any attempt for anyone to be able to do “man in the middle” attacks, or to subvert the security of your device connections. For developers, Apple provide a software simulator so that you can test your software against a wide range of device types, even before the hardware is made available to you.

Most of the supporting detail to build compliant devices is found in the MFi (Made for iDevices) Guidelines, which are only available on the other side of a license agreement with Apple here. The full WWDC presentation on HomeKit (just under an hour long) is called “Introduction to HomeKit” and is present in the list of video sessions from WWDC here.

Overall, very impressive. That’s the home stuff largely queued up, just awaiting news of a bridge, I think. Knowing how simple the voice setup is on Android Jelly Bean for a programmer (voice-enabling an app is circa 20 lines of JavaScript), I’m sure a Google equivalent is eminently possible; if Google haven’t done their own API, then a bridge to Apple’s ecosystem (if the licensing allows it) should not be a major endeavour.

So, the only missing thing was talk of iBeacon support. However, that is a different use case. There are already pilots that sense the presence of a low energy Bluetooth beacon and bring specific applications onto the lock screen. Examples include the Starbucks payment card app coming forward to make itself immediately available when you’re close to a Starbucks counter, or the Virgin Atlantic app making your boarding card available when you approach the check-in desk at an airport. Both are features of Apple’s Passbook loyalty card app – which is already used by hundreds of retailers, supermarkets and airlines.

The one thing about iBeacon is that you can enable your iPhone 5S to be a low energy beacon in its own right. You have full control over this, and your presence is not made available to anything but applications on your own iPhone handset – over which, in the final analysis, you have total control. One use case already is pairing your Pebble smartwatch with your iPhone 5S handset, so that if your phone leaves your immediate location by a specified short distance (say, 2 metres), you’re told immediately.

So, lots to look forward to in the Autumn. Quite a measured approach compared to the “Internet of Things” which other vendors are hyping with impunity (and quoting staggering revenue numbers that I find difficult to map onto any reality – starting with what folks seem to suggest is already a huge current market size).

My next piece of homework will be to look at CloudKit, now that Apple are dogfooding its use in their own products while releasing it to third-party developers. Hopefully a good sign that Apple are now providing cloud services that match the resilience of competitive offerings for the first time – even if they are specific to Apple’s own platforms. But that’s all the other side of finishing my company’s end-of-year tax return prep work first!

Further snippets about Apple’s new Health App

Apple Health App Icon

Following on from my introductory post yesterday, I’ve now downloaded and viewed another of the WWDC videos – and have some more information about the Health API’s capabilities as far as device support is concerned.

Four specific accessory device types that follow the Low Energy Bluetooth GATT specifications have immediate built-in pairing and data storage capability with the iPhone HealthKit capabilities in iOS 8, out of the box:

  • Heart Rate Monitors
  • Glucose Sensors
  • Blood Pressure Monitors (including the optional Heart Rate data – with energy expended metadata – if provided by the device)
  • Health Thermometers

For these four device types, no vendor-specific application needs to be supplied. There is a set of best practices for implementing optional characteristics (eg: to confirm a chest heart monitor is in contact and able to supply data). There are also optional services that should be implemented if possible, such as a battery service to notify the user if the device is running out of power.

Apple showed a few screenshots of the Health App during their devices presentation, including these as an indication of what will be provided by default – if there is a set of sensors to feed data into it:

Health App Screenshot

and when you dip into the Vital Signs option:

Health App Vital Signs

Other accessories can be associated with an application that communicates with the device via the iOS ExternalAccessory framework, CoreBluetooth, USB or WiFi, but can use the HealthKit framework APIs to store the data from your app into the HealthKit database. Withings’ WiFi bathroom scales are one such example!

There is capability to put associated yes/no user requests on the Notifications screen via the Apple Notification Center Service (ANCS) where appropriate – for example, to confirm an on/off switch or similar binary change in the handset notifications, if this is desired.

The recommended bedtime reading for HealthKit accessory interfacing is (a) the Bluetooth Accessory Design Guidelines for Apple Products (in the Bluetooth for Developers site) and (b) documentation relating to Apple’s MFi program (MFi – “Made for i-devices”, I guess – contains the same set of interface guidelines used by HomeKit, plus those to add Hearing Aid Audio Transport to Apple iOS devices).

Apple also list a specific site for iBeacon, which has possibilities for handshaking applications with iPhone handsets based on local proximity – but it is really there for different location-based services (like a security guard being checked in and out as they patrol a building, or a health visitor attending an at-home patient – without having to rely solely on relatively power-hungry GPS co-ordinate sampling). But that’s a much wider story.

In the meantime, applications that:

  • monitor or record food intake (like the excellent site I’ve been feeding data into daily for over 12 years now)
  • notify a health professional of defined “out of band” data readings from a patient
  • emergency contact (outside of the “in case of emergency” sheet available on the lock screen in iOS 8)
  • anything with the ability to share/download health data, with the end user’s specific permission, to a GP or hospital (the user can subset this down in fine detail)
  • any approved diagnostic aid, having been subjected to regulatory approval

fall within the scope of individual application developers’ code. All share the same, heavily secured database.

With this, Apple’s good work should ensure a vibrant community of use that further embeds iPhone handsets into their users’ lives. All we need now are further devices – iWatch, anyone? – that can make full use of the capabilities in the Health App. It all looks very ready to go.

An initial dive into Apple’s new Health App (and HealthKit API)

Apple HealthKit Icon

Apple announced their new Health application (previously known in rumours as HealthBook) and the associated HealthKit Application Programming Interface (API) at their Worldwide Developers Conference earlier this week. A video of the conference presentation that focussed exclusively on it at WWDC was put up yesterday, and another that preceded it – showing how to interface low energy Bluetooth sensors to an iPhone and hence feed it – should be up shortly.

Even though the application won’t be here until iOS 8 releases (sometime in the Autumn), the marketing folks have already started citing the frequent use of iPhones in Health and Fitness applications here (the campaign title is “Strength” and the video lasts exactly one minute).

Initial discoveries:

  1. The application is iPhone only. No iPad version at first release (if ever).
  2. A lot of the set-up work for an application provider relates to the measures taken, and the associated weight/volume metrics used. These can be complex (like mg/dL, calories, steps, temperature, blood pressure readings, etc) and are stored with corresponding timestamps.
  3. The API provides a rich set of unit conversion functions (all known count, Imperial and Metric measure combinations), so these shouldn’t be needed in your application code.
  4. Access to the data is authorised by class (measure type). Apple have been really thorough on the security model; users get fine-grained control over which data can be accessed by each application on the handset – even to the extent that no-one can ask “Is this user sampling blood pressure on this device?”. Apps can only ask “Are there blood pressure readings that my application has permission to access?”. The effect is that apps can’t tell the difference between “not sampled” and “sampled but access denied”; hence an inference that the user may have diabetes is impossible to draw from the yes/no answer given. Well thought out security.
  5. There is provision for several devices taking duplicated readings (eg: having a FitBit step counter and the handset deducing step count itself from its own sensors). The API queries can be told which is the default device, so that when stats are mapped out, secondary device data is used if and where there are gaps in the primary sensor’s data. I guess the use case is wearing your FitBit out running while leaving your phone handset at home (or vice versa); if both are operating simultaneously, the data samples reported in the time slots mapped come only from the primary device.
  6. Readings are stored in one locally held object-orientated database for all measures taken, by all monitoring devices you use. All health applications on your handset use this single database, and need to be individually authorised for each class of data readings you permit them to be exposed to. No permission, no access. This is the sort of detail tabloid newspapers choose to overlook in order to get clickbait headlines; don’t believe scare stories that all your data is immediately available to health professionals or other institutions – it is patently not the case.

The end result is that you consolidate all your health-related data in one place, and can selectively give access to subsets of it to other applications on your iPhone handset (and revoke permissions at any time). The API contains a statistics function library and the ability to graph readings against a timeline, as demonstrated by the Health application that will be present on every iPhone running iOS 8. The side effect of this is that the iPhone is merely acting as a data collection device, and is not dishing out advice – something that would otherwise need regulatory approvals.
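
The kind of unit conversion helper point 3 describes might look like this (the conversion table and factors here are illustrative only; HealthKit ships its own conversion API):

```python
# Illustrative conversion factors; a real library covers all count,
# Imperial and Metric combinations.
CONVERSIONS = {
    ("lb", "kg"): 0.45359237,
    ("mmol/L", "mg/dL"): 18.0,   # approximate factor for blood glucose
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between known unit pairs, in either direction."""
    if from_unit == to_unit:
        return value
    if (from_unit, to_unit) in CONVERSIONS:
        return value * CONVERSIONS[(from_unit, to_unit)]
    if (to_unit, from_unit) in CONVERSIONS:
        return value / CONVERSIONS[(to_unit, from_unit)]
    raise ValueError(f"no conversion from {from_unit} to {to_unit}")
```

Having the conversions in the framework means application code stores readings in one canonical unit and displays them in whatever the user prefers, without every app reinventing the factor tables.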

Vendors/users of all manner of sensors, weighing scales, Boditrax machines, monitors, etc can add support for their devices to feed data into the user’s Health database on the user’s handset. I’m just waiting for the video of the WWDC session that shows how to do this to be made available in my local copy of the WWDC app. More insights may come once I have the opportunity to hear that through.

In the meantime, Mayo Clinic have developed an application that can message a health professional if certain readings go outside safe bounds that they have set for their patient (with the patient’s permission!). One provider in the USA is giving the ability to feed data – with the patient’s permission – directly into their doctor’s patient database. I suspect there is a myriad of use cases that applications can be developed for; there is already quite a list of institutions piloting related applications:

Apple HealthKit Pilot Users

The one point to leave with is probably the most important of all. Health data is a valuable asset, and must be protected to avoid any exposure of the user to potential personal or financial prejudice. Apple have done a thorough piece of work to ensure that for the users of their handsets.

The reward is likely to be that the iPhone will cement itself even further into the daily lives of its owners, just as it has to date – and without unwanted surprises.

Footnote: now I’ve listened to the associated Health App devices presentation from WWDC, I’ve added an extra blog post with more advanced information on the Health App’s capabilities and device support here.

For Enterprise Sales, nothing sells itself…

Trusted Advisor

I saw a great blog post published on the Andreessen Horowitz (A16Z) web site asking why Software as a Service offerings don’t sell themselves here. A lot of it stems from a misunderstanding of what a good salesperson does (and I’ve been blessed to work alongside many good ones throughout my career).

The most successful ones I’ve worked with tend to work their way into an organisation and suss out the challenges that the key executives are driving as business priorities; to understand how all the levers get pulled from top to bottom of the org chart, and to put themselves in the position of “trusted advisor”; to communicate ideas that align with the strategic intent, to suggest approaches that may assist, and to have references ready that demonstrate how the company the salesperson represents has solved similar challenges for other organisations. At all times, they know who the customer references and respects across their own industry.

Above all, they have a thorough and detailed execution plan (or set of checklists) that they follow to understand the people, their processes and their aspirations – with enough situational awareness to know who or what could positively (and negatively) affect the propensity of the customer to spend money. Not least to avoid the biggest competitor of all: an impression that “no decision”, or a project stall, will leave the customer in a more comfortable position than enacting a needed change.

When someone reaches board level, their reference points tend to be folks in the same position at other companies. Knowing the people networks, both inside and outside the company, is key.

The folks I regard as the best salespeople I’ve ever worked with tend to be straightforward, honest, well organised, articulate, planned, respectful of competitors and adept at working an org chart. They also know when to bring in the technical people and senior management to help their engagements along.

The antithesis is the “wham bam thank you ma’am” type: competitors killed at all costs, and incessant quoting of speeds and feeds. For those, I’d recommend reading a copy of “The Trusted Advisor” by Maister, Green and Galford.

Trust is a prized asset, and the book describes well how it is obtained and maintained in an Enterprise selling environment. It is also useful to folks like me who tend to work behind the scenes to ensure salespeople succeed; it gives some excellent insight into the sort of material that your sales teams can carry into their customers, and which is valued by the folks they engage with.

Being trusted and a source of unique, valuable insights is a very strong position for your salespeople to find themselves in. You owe it to them to be a great source of insights and ideas, either from your own work or curated from other sources – and to keep customers informed and happy at all costs. Simplicity sells.

Apple iOS Autumn 2014 release: what you’ll see

Apple Health App

It looks like John Gruber of Daring Fireball was right on the money, expecting only software enhancements to both iOS (8) and Mac OS X (10.10, aka Yosemite), plus some associated development tools. Most blogs out there are picking things through in detail, hence I’ll try to go the other way – starting with the changes apparent to the user, and working back from there.

Lock Screen improvements

The first thing is that there is an “in the event of an emergency” card that you (or anyone else!) can call up from the lock screen – not only containing key medical data in the event of an emergency, but also associated contact details. So if you lose your iPhone or iPad, there is a fair chance of a good samaritan being able to return it to you.

In the event that you lose your iPhone/iPad somewhere it is not discovered, “Find my iPhone” will receive and store a last gasp “this is where I am” location when the battery charge drops below a certain threshold. Hence its last known position will be available to you long after the battery charge has gone – which should make it much easier to locate.

Another feature is that some applications can appear in one corner of the lock screen when you are in proximity to specific locations (eg: Starbucks outlet, ticket office, airline check-in). Hence a useful application to complete a transaction is always automatically available to you.

Family features

For environments like my son’s family, there will be an ability to daisy chain up to 6 Apple IDs (and their associated iDevices) as a single entity. Parents can assign parental controls to their kids’ devices, and if the kids try to order anything from iTunes (or make in-app purchases), approval will be sought from one of their parents – who, on acceptance, will be charged against their own credit card. Joining the family’s devices in this way also gives a shared photo library, shared access to media (where desired), and allows parents to see the location of all devices using “Find my iPhone”.

The ability to set parental controls will no doubt help my son, who once walked into his 10-year-old Asperger’s/ADHD son’s bedroom to be greeted by Siri saying “I don’t understand what you mean by Who is Postman Pat’s Boyfriend”.

Messages

Apple have put in some of the functionality of competitive messaging platforms, so you can send voice messages and video to other users over iMessage, inline with your normal text stream. You can also elect to remove yourself from group conversations at any time. That said, the more impressive thing is that if you receive a message on your iPhone, you can raise the handset to your ear, say something like “Hiya – in a meeting, will be back to you in 25 minutes max”, and take the phone away from your head. The act of doing so immediately sends that audio message back to the person who’d messaged you.

When the iPhone is plugged into a power source or car adaptor, Siri is available from the lock screen by saying “Hey, Siri” – just as my Nexus 5 responds (at any time) to “OK, Google”. Good for sending text messages vocally and instructing navigation hands-free.

Health and HealthKit

Don’t believe what you read in the newspapers. Apple announced an in-iPhone database and display program called “Health” (what was known as Healthbook in pre-release rumours). This is designed to act as an interface to countless 3rd party devices like step counters (FitBit), heart monitors, blood sugar sensors etc – and to place all that data into a consolidated database and presentation application running on the users iPhone handset.

That said, the resulting data is heavily protected; just like Android, you have to specifically authorise access to sections of that data for any application that wants it. Hence the one application cited – from the Mayo Clinic – can download data into their systems, or be alerted when readings deviate from specific thresholds needing emergency attention. The end user has to specifically authorise which parts of the data in the Health database can be exposed to the Mayo application; no permission, no access. This is something the mainstream press completely miss; you have full control over your data, and nothing travels to your GP or hospital without your explicit (and revocable at any time) permission.

Home Automation (HomeKit)

Apple also announced an application programming interface that permits access to home control equipment, like electronic locks, lights, heating, fire alarms and so forth. While they have signed up many of the existing home automation vendors to give a uniform interface for the iPhone or iPad, there is no associated user interface at the time of writing. Instead, the user instructs Siri (the voice control on an iPhone/iPad) to perform one or more steps (aka “Scenes”), issuing commands such as “Lock the front door” or “Going to Bed” (to lock the house and garage and alter lighting levels around the house). Still early days.

Continuity

This is really for folks with wall-to-wall Apple devices, from Macs down to iPhones. The devices can sense when they are in close proximity to each other, and can hand off work and communications traffic between them for applications developed using Apple’s Continuity API. So, you can get your Mac to place a phone call from a number on the Mac screen via a nearby iPhone, or see messages received on your iPhone in your Mac notifications – and even move in-progress work live between devices. Where your cell provider allows it, you can even place calls over WiFi (in effect turning your Mac into a femtocell) if cell coverage around you is otherwise poor.

For developers

Most of the rest of the announcements were aimed at developers. Despite what Tim Cook said about Android, almost all the enhancements (outside of the Swift programming language and the gaming APIs, SpriteKit and Metal) allow deep embedding of third-party applications into iOS for the first time; this is something Android has done for years.

There are thousands of changes everywhere, with tidy-ups of the user interface on both Mac OS X and iOS (which now look surprisingly similar) and neat tricks everywhere. There is also functionality under the hood to enable iOS to (at last) handle screens of different pixel dimensions.

I’m watching a few of the WWDC videos (in the iOS WWDC app), in particular those related to HealthKit and the Health App, to see how they integrate with back-end systems (a professional interest!).

So, everything is ready for developers to prepare for the next slew of Apple hardware announcements in the Autumn. Looking forward to it!

Expectations of Apple announcements at WWDC 2014

Jony Ive Beats Headphones

We’re nearly there for the announcements at this year’s Apple Worldwide Developers Conference 2014. Lots of speculation as normal, but I suspect the most plausible predictions are those from John Gruber on his Daring Fireball blog here.

The keynote is 2 hours long and can be watched live using Apple’s WWDC app, which is downloadable from the Apple App Store.

The sapphire plant where Apple are reputed to be building screens for the next iPhone isn’t expected to come on stream (at least volume-wise) yet, so I’d suspect that new phone handsets will arrive later in the year. While I thought Beats headphones would give Apple a youth-orientated brand to challenge Xiaomi in future growth markets – much as Toyota have their own sub-brands in Scion and Lexus in the car industry – it sounds like its use is more to land the impressive Jimmy Iovine and to sell a multi-platform music streaming service. Certainly the trend is that purchasing tracks is out, with streaming services absorbing a lot of the future growth potential.

I’m particularly looking out for Apple’s first foray into health and home automation applications – both using sensor devices from a wide variety of other vendors. But I’d be delighted if there are more impressive surprises queued up. We shall see – just 100 minutes to go at the time of writing!

More evidence of the Relentless Migration from CapEx to OpEx

Google Self Driving Car

There’s been quite a lot of commentary in the last week following Google co-founder Sergey Brin’s presentation at the Re/code Conference, where he got to show this video of the next iteration of their self-driving cars. For a bit of history leading up to that announcement, I’d recommend watching two videos on the progress of the project to date, and then the video Sergey showed last week:

  1. Sebastian Thrun – the project lead – giving a presentation about self driving cars in 2011 and showing a few of them in action here (it lasts 4 mins, 14 seconds).
  2. A video that Google produced with a twist near the end here (3 mins 1 second long).
  3. And the video of the new exploratory design announced this week here (2 mins 53 seconds).

My brain diverted a different way to most people’s. Have you ever seen and experienced Uber? You open an app on your smartphone, which identifies where you are located. You tell it where you’d like to travel to, and it will tell you (a) how long a wait until a taxi will arrive to collect you and (b) the fixed cost of the journey. If you accept both, your taxi is scheduled, collects you, drops you off and the charge is made to your credit card. Done! The system is set up so that both driver and passenger rate their experience, so that good service at both ends of the transaction is maintained.

It’s probably well known that most cars purchased are tremendously under-utilised, taking up valuable parking space in cities all over the world. There are separate innovations too: drivers can clock on and off at any time they wish, and obviously fewer drivers being available results in prices rising – to encourage more Uber drivers back online to service the demand. There are also periods of exceptional demand where Uber will push prices right up – transparently to all – to ensure there are the right number of drivers available to service the very busy periods of customer demand (like rush hours).
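Uber’s actual pricing algorithm is proprietary, but the principle described above – prices rise as ride requests outstrip available drivers, drawing more drivers back online – can be sketched in a few lines. Everything here (the function name, the linear ratio rule, the cap) is an illustrative assumption, not how Uber really computes it:

```python
# Toy dynamic-pricing sketch (assumption: a simple demand/supply
# ratio drives the price multiplier; real systems are far subtler).

def surge_multiplier(ride_requests: int, drivers_online: int,
                     cap: float = 3.0) -> float:
    """Price rises as requests outstrip drivers, up to a cap."""
    if drivers_online == 0:
        return cap
    ratio = ride_requests / drivers_online
    # No surge while supply meets demand; scale up linearly after that.
    return min(cap, max(1.0, ratio))

# Quiet period: plenty of drivers, so no surge.
print(surge_multiplier(ride_requests=20, drivers_online=40))  # 1.0
# Rush hour: demand outstrips supply, and the multiplier climbs.
print(surge_multiplier(ride_requests=90, drivers_online=30))  # 3.0
```

The transparency of the multiplier is the interesting design choice – both riders and drivers see the same number, so the incentive to come back online is visible to everyone.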

Uber have stirred controversy in the taxi industry because anyone (without bad references) can be an Uber driver, and part-time working is a personal choice. Those who work full time, as self-employed drivers, often get much higher pay than most routine licensed taxi drivers; in New York, reckoned to be north of $90,000/year gross and (after car finance and depreciation costs) around $60,000/year net. Drivers who operate part time can use the income to offset the cost of their cars, partly or completely, if that is their choice.

The bit that caught my imagination was what would happen if a city (or a private company) bought a fleet of these Google cars and hooked them into Uber. After a drop-off, they’d head to their next collection point, or back to a well-researched cache – ready to give the best possible service to the next likely passenger. Or to the petrol station to be refuelled (and I hope a manufacturer recall doesn’t end up with fleets of them driving back to their factory, all at once!).

I guess in the early days there will be idiots on the road who’ll try to psych them out, but once they’re an integral piece of local life, I think a Google/Uber combination would be tremendous. Not least as yet another glowing example that paying for a shared resource is much cheaper than the inefficiencies inherent in expensive, rarely used capital assets.

CapEx is the past, OpEx is the future.