IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these reflect changes well past the industry’s move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. if anyone says the words “private cloud”, run for the hills. Or make them watch the video; there is also an equivalent showing how to build a toaster for $15,000. Building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (which runs a network of WordPress sites), with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to” targets, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per-software-instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that’s “where the experts who manage infrastructure and its security live” at scale.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, Softlayer in IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source NoSQL (key:value, document-orientated) databases, and to components folks can wire together. Having been brought up on MySQL, I found it surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers: FOG in the Ruby ecosystem, for one. Cloud Foundry (termed Bluemix at IBM) is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon’s Lambda. There are actually two different entities in the mix: in one, you provide code and pay per invocation against external events; in the other, a service scales (or contracts) in real time as demand flexes. Either way, you abstract all knowledge of the environment away.
  8. Google, Azure and (to a lesser extent) AWS are packaging up API calls for various core services and machine learning facilities. E.g. I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie?” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat, and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast.
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only for internal, intranet-type apps not exposed outside the organisation. This is also the antithesis of the drive for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see:
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against axes of product life cycle and value chains, and to suss the likely movement of components (which also tells you where to apply Six Sigma and where Agile techniques within the same organisation). All more eloquently explained by Simon Wardley:
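The shard mechanics mentioned in item 4 – routing reads and writes by index key ranges – can be sketched minimally. This is an illustration of the idea only; the shard names and ranges below are invented, and a real MongoDB cluster manages its chunk ranges itself via the config servers:

```python
import bisect

# Illustrative shard-key ranges: each shard owns keys up to its upper bound.
SHARD_UPPER_BOUNDS = ["g", "n", "t", "~"]   # 4 shards, split on leading letter
SHARD_NAMES = ["shard0", "shard1", "shard2", "shard3"]

def route(shard_key: str) -> str:
    """Pick the shard whose key range contains shard_key."""
    i = bisect.bisect_left(SHARD_UPPER_BOUNDS, shard_key)
    return SHARD_NAMES[min(i, len(SHARD_NAMES) - 1)]

print(route("adams"))   # shard0
print(route("waring"))  # shard3
```

Spreading the index key space like this is what lets read load scale out across machines, with replicas of each shard handling the wide-area backup on the fly.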
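The pay-per-invocation model in item 7 reduces your code to a single function the platform calls per event. A minimal AWS Lambda-style handler in Python might look like the sketch below; the event shape (an API Gateway-ish dict) is an assumption for illustration, not a definitive contract:

```python
import json

def handler(event, context=None):
    # The platform invokes this once per event and bills per invocation;
    # there is no server for you to manage or patch.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test of the same function the platform would invoke:
print(handler({"queryStringParameters": {"name": "Ian"}}))
```

The same function can be exercised locally exactly as the platform would call it, which is part of the appeal: all knowledge of the environment sits outside your code.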

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Reinventing Healthcare

Comparison of US and UK healthcare costs per capita

A lot of the political effort in the UK appears to circle around a government justifying and handing off parts of our NHS delivery assets to private enterprises, despite the ultimate model (that of the USA healthcare industry) costing significantly more per capita. Outside of politicians lining their own pockets in the future, it would be easy to conclude that few would benefit by such changes; such moves appear to be both economically farcical and firmly against the public interest. I’ve not yet heard any articulation of a view that indicates otherwise. But less well discussed are the changes that are coming, and where the NHS is uniquely positioned to pivot into the future.

There is significant work to capture the DNA of individuals, but DNA is fairly static over time. It is estimated that there are 10^9 such data points per individual, but there are many other data points – which change along a long timeline – that could be even more significant in helping to diagnose unwanted conditions in a timely fashion. The aim is to flip the industry to work almost exclusively on preventative, rather than symptom-based, healthcare.

I think I was on the right track with an interest in Microbiome testing services. The gotcha is that commercial services like uBiome, and public research like the American (and British) Gut Project, are one-shot affairs. Processing a stool, skin or other location sample takes circa 6,000 hours of CPU wall time to reconstruct the 16S rRNA gene sequences of a statistically valid population profile. I thought I could get that to a super fast turnaround using excess capacity (spot instances – excess compute power you can bid to consume when available) at one or more of the large cloud vendors, and then build a data asset that could use machine learning techniques to spot patterns in people who later get afflicted by an undesirable or life threatening medical condition.
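The economics of those 6,000 CPU-hours on spot capacity can be roughed out in a few lines. Everything here except the 6,000-hour figure is an assumption for illustration – instance size, spot price and parallelism will vary by vendor and by the day:

```python
# Back-of-envelope cost for one microbiome sample on spot instances.
cpu_hours_per_sample = 6_000       # from the 16S rRNA reconstruction above
vcpus_per_instance = 36            # assumed large compute instance
spot_price_per_hour = 0.50         # assumed spot price in USD
parallel_instances = 20            # assumed fleet size

instance_hours = cpu_hours_per_sample / vcpus_per_instance
cost = instance_hours * spot_price_per_hour
wall_clock_hours = instance_hours / parallel_instances

print(f"~{instance_hours:.0f} instance-hours, ~${cost:.0f}/sample, "
      f"~{wall_clock_hours:.1f}h wall clock on {parallel_instances} instances")
```

Under those assumptions a sample lands in the tens of dollars and well under a day of wall clock – which is what made the fast-turnaround service plausible at all.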

The primary weakness in the plan is that you can’t suss the way a train is travelling by examining a photograph taken looking down at a static railway line. You need to keep the source sample data (not just a summary) and measure at regular intervals; an incidence of salmonella can routinely knock out 30% of your Microbiome population inside 3 days before it recovers. The profile also flexes wildly based on what you eat and other physiological factors.

The other weakness is that your DNA and your Microbiome samples are not the full story. There are many other potential leading indicators that could determine your propensity to become ill that we’re not even sampling. The questions of which of our 10^18 different data points are significant over time, and how regularly we should be sampled, are open questions.

Experience in the USA is that the one environment where regular preventative checkups of otherwise healthy individuals take place – dentistry – has managed to lower the cost of service delivery by 10% at a time when the rest of the health industry has seen 30-40% cost increases.

So, what are the measures that should be taken, how regularly and how can we keep the source data in a way that allows researchers to employ machine learning techniques to expose the patterns toward future ill-health? There was a good discussion this week on the A16Z Podcast on this very subject with Jeffrey Kaditz of Q Bio. If you have a spare 30 minutes, I thoroughly recommend a listen:

That said, my own savings are such that I have to refocus my own efforts elsewhere back in the IT industry, and my Microbiome testing service Business Plan is mothballed. The technology to sample a big enough population regularly is not yet deployable in a cost effective fashion, but it will come. When it does, the NHS will be uniquely positioned to pivot into the sampling and preventative future of healthcare unhindered.

Politicians and the NHS: the missing question


The inevitable electioneering has begun, with all the political soundbites simplified into headline spend on the NHS. That is probably the most gross injustice of all.

This is an industry lined up for the most fundamental seeds of change: Genomics, Microbiomes, ubiquitous connected sensors, and a growing realisation that the human body is already the most sophisticated of survival machines. There is also the realisation that weight and overeating are a root cause of downstream problems, with a food industry getting a free ride to pump unsuitable chemicals into the food chain without suffering financial consequences for the damage caused – especially at the “low cost” end of the dietary spectrum.

Politicians, pharma and food lobbyists are not our friends. In the final analysis, we’re all being handed a disservice because those leading us are not asking the fundamental question about health service delivery, and to work back from there.

That question is: “What business are we in?”.

As a starter for 10, I recommend this excellent post on Medium: here.

Why did Digital Equipment Corporation Fail?

Digital Equipment Corp Logo

I answered that question on Quora, but thought I’d put a copy of my (long) answer here. Just one ex-employee’s perception from prisoner badge #50734!

I managed the UK Software Products Group in my last two years at DEC and had a 17 year term there (1976-1993). There are a wide variety of components that contributed to the final state, though failing to understand industry trends across the company was not among them. The below is a personal view, and happy for any ex colleagues to chip in with their own perspectives.

The original growth from 1957 through to Jan 1983 was based on discrete, industry-based product lines. In combination, they placed demand on central engineering, manufacturing and sales to support their own business objectives, and generally ran the show to dominate their own industry segments. For example: Laboratory Data Products, Graphic Arts (Newspapers!), Commercial OEM, Tech OEM, MDC (Manufacturing, Distribution and Control), ESG (Engineering Systems Group), Medical Systems and so forth.

The finance function ran as a separate reporting entity, with management controls that went top to bottom and little latitude for product lines to behave in any way except for the overall corporate good.

The whole lot started to develop into a chaotic internal economy, albeit one that meshed together well and covered any cracks. The product lines were removed in Jan 1983, followed by a first ever stall in a hitherto unblemished Stock Market performance. The company became much more centrally planned, and one built around a one company, one strategy, one architecture focus (the cynics in the company paraphrased this as one company, one egg, one basket). All focus went on getting VAX widely deployed, given its then unique ability to run exactly the same binaries from board products all the way up to high end, close to mainframe class processors.

The most senior leadership started to go past its sell-by date in the late 1980s. While the semiconductor teams were, as expected, pumping out impressive $300 VAX silicon, elements of the leadership became fixated on the date on which DEC would finally overtake IBM in size. They made some poor technology investment calls in trying to extend VAX into IBM 3090-scale territory, seemingly oblivious to being nibbled from underneath at the low end.

Areas of the company were frustrated at the high end focus and the inability of the Executive to give clear guidance on next generation processor requirements. They kept being flip-flopped between 32-bit and 64-bit designs, and brought out MIPS-based workstations in a vain attempt to at least stay competitive performance-wise until the new architecture was ready to ship.

Bob Palmer came to prominence by stopping the flip-flopping and having the semiconductor team ship a 64-bit design that completely outperformed every other chip in the industry by 50-100% (which remained the case for nearly 10 years). He then got put in charge of worldwide manufacturing, increasing productivity by 4x in 2 years.

The company needed to increase its volume radically or reduce headcount to align competitively with the market as it went horizontally orientated (previously, IBM, DEC and the BUNCH – Burroughs, Univac, NCR, CDC and Honeywell were all vertically orientated).

Ken Olsen was deposed by the board around 1992, and they took the rather unusual step of reaching out to Bob Palmer – then a direct report of SVP Jack Smith – to lead the company.

While there was some early promise, the company focussed on a small number of areas (PCs, Components & Peripherals, Systems, Consumer Process Manufacturing, Discrete Manufacturing and Defence, and I think Health). The operating practice was that any leader who missed their targets two quarters in a row was fired, and the salesforce were given commission targets for the first time.

The whole thing degenerated from there, such that the losses made in Palmer’s reign exceeded the retained profits made under Olsen from 1957-1982. He sued Intel for patent infringement, which ended with Intel settling, including purchasing the semiconductor operations in Hudson. He likewise sued Microsoft, which ended with Microsoft lending DEC money to get its field force trained up on Microsoft products (impressive ju-jitsu on their part). He then sold the whole company to Compaq.

Text in some of the books about DEC includes comments by C. Gordon Bell (technical god in DEC’s great years) which will not endear him to a place on Bob Palmer’s Christmas card list – but words which many of us would agree with.

There was also a spoof Harvard Business Review article, written by George Van Treeck (a widely respected employee on the Marketing Notes conference maintained on the company network), which outlined the death of Digital – written in 1989 or so. Brilliant writing (I still have a copy in my files here), and he guessed the stages with impressive accuracy way back then. His words are probably a better summary than this; until then, I trust this will give one person’s perception.

It is still, without doubt, the finest company I’ve ever had the privilege to work for.

Rest in Peace, Ken Olsen. You did a great job, and the world is much better for your life’s work.

Starting with the end in mind: IT Management Heat vs Light

A very good place to start

One source of constant bemusement to me is the habit of intelligent people to pee in the industry market research bathwater, and then to pay handsomely to drink a hybrid mix of the result collected across their peers.

I was perhaps betrayed by an early experience of one research company coming in to present to the management of the vendor I was working at, and finding in rehearsal their conjecture that sales of specific machine sizes had badly dipped in the preceding quarter. Except they hadn’t; we’d had the biggest growth in sales of the highlighted machines in our history in that timeframe. When I mentioned my concern, the appropriate slides were corrected in short order, and no doubt the receiving audience was impressed with the skill of an analysis that built a forecast starting from an amazingly accurate, perceptive (and otherwise publicly unreported) recent history.

I’ve been doubly nervous ever since – always relating back to the old “Deep Throat” hint given in “All the President’s Men”: in every case, “follow the money”.

Earlier today, I was having some banter on one of the boards of “The Motley Fool” which referenced the ways certain institutions were imposing measures on staff – well away from any useful business purpose that positively supported better results for their customers. Well, except for providing sound bites to politicians. I can sense that in Education, in some elements of Health provision, and rather fundamentally in the Police service. I even did a drains-up some time ago that reflected on the way UK Police are measured, and tried to trace the rationale back to source – which was a senior politician imploring them to reduce crime; blog post here. The subtlety of this was rather lost; the only control placed in their hands was that of compiling the associated statistics, and of aligning behaviours on the ground to support that data collection, rather than going back to the core principles of why they were there and what their customers wanted of them.

Jeff Bezos (CEO of Amazon) has the right idea; everything they do aligns with the ultimate end customer, and everything else works back from there. Competition is something to be conscious of, but only to the extent of understanding how you can serve your own customers better. Something that’s also the central model that W. Edwards Deming used to help transform Japanese Industry, and in being disciplined to methodically improve “the system” without unnecessary distractions. Distractions which are extremely apparent to anyone who’s been subjected to his “Red Beads” experiment. But the central task is always “To start with the end in mind”.

With that, I saw a post by Simon Wardley today where Gartner released the results of a survey on “Top 10 Challenges for I&O Leaders” – I&O being, I guess, some analogue of what used to be referred to as “CIOs”. Most of it felt to me like herd mentality, divorced from the sort of issues I’d have expected to be present. In fact, a complete reenactment of the sort of dialogue Simon had mentioned before.

Simon then cited the first 5 things he thought they should be focussed on (around Corrective Action), leaving the remaining “Positive Action” points to be mapped once that foundation was in place – on the assumption that those actions would likely be unique to each organisation performing the initial framing exercise.

Simon’s excellent blog posts are: My List vs Gartner, shortly followed by On Capabilities. I think they’re a great read. My only regret is that, while I understand his model (I think!), I’ve not yet had to work the final piece: getting from his final strategic map (for any business I’m active in) to a pithy, prioritised list of actions based on the diagram created. And I wish he’d get the bandwidth to turn his Wardley Maps into a book.

Until then, I recommend his Bits & Pieces blog; it’s a quality read that deserves good prominence on every IT Manager’s (and IT vendor’s!) RSS feed.

CloudKit – now that’s how to do a secure Database for users

Data Breach Hand Brick Wall Computer

One of the big controversies here relates to the appetite of the current UK government to release personal data without even the most basic understanding of what constitutes personally identifiable information. The lessons are there in history, but I fear that, without knowing the context of the infamous AOL data leak, we are destined to repeat it. With it goes personal information that we typically hold close to our chests, and which may otherwise cause personal, social or (in the final analysis) financial prejudice.

When plans were first announced to release NHS records to third parties, and in the absence of what I thought were appropriate controls, I sought (with a heavy heart) to opt out of sharing my medical history with any third party – and instructed my GP accordingly. I’d gladly share everything with satisfactory controls in place (medical research is really important and should be encouraged), but I felt that insufficient care was being exercised. That said, we’re more than happy for my wife’s Genome to be stored in the USA by 23andMe – a company that demonstrably satisfied our privacy concerns.

It therefore came as quite a shock to find that a report, highlighting which third parties had already been granted access to health data with Government mandated approval, ran to a total of 459 data releases to 160 organisations (last time I looked, that was 47 pages of PDF). See this and the associated PDFs on that page. Given the level of controls, I felt this was outrageous. Likewise the plans to release HMRC-related personal financial data, again with soothing words from ministers who, given the NHS data implications, appear to have no empathy for the gross injustices likely to result from their actions.

The simple fact is that what constitutes individually identifiable information needs to be framed not only by which data fields are shared with a third party, but by the resulting application of that data by the processing party – not least if there is any suggestion that the data is to be combined with other data sources, which could in turn triangulate back to make seemingly “anonymous” records traceable to a specific individual. Which is precisely what happened in the AOL data leak example cited.

With that, and on a somewhat unrelated technical/programmer-orientated journey, I set out to learn how Apple had architected its new CloudKit API, announced this last week. This articulates the way in which applications running on your iPhone handset, iPad or Mac have a trusted way of accessing personal data stored (and synchronised between all of a user’s Apple devices) “in the Cloud”.

The central identifier that Apple associate with you, as a customer, is your Apple ID – typically an email address. In the Cloud, they give you access to two databases on their cloud infrastructure: one public, the other private. However, the second you try to create or access a table in either, the API takes your iCloud identity and spits back a hash unique to the combination of your identity and the application on the iPhone asking to process that data. Different application, different hash. And although everyone’s data is in there, this immediately prevents any triangulation of disparate data that could trace back to uniquely identify a single user.
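The per-application hashing idea can be sketched in a few lines. To be clear, this is an illustration of the concept only – Apple’s actual derivation scheme is not public, and the function and identifiers below are invented:

```python
import hashlib

def scoped_user_id(icloud_id: str, app_bundle_id: str) -> str:
    """Derive a per-app user identifier, as CloudKit's design implies:
    the same user looks different to every application, so records
    written by two apps can't be joined on a common user key.
    (Illustrative only -- not Apple's real scheme.)"""
    return hashlib.sha256(f"{icloud_id}|{app_bundle_id}".encode()).hexdigest()

a = scoped_user_id("ian@example.com", "com.example.health")
b = scoped_user_id("ian@example.com", "com.example.todo")
print(a != b)  # True: same user, different apps, uncorrelatable IDs
```

The design choice matters: because the identifier is opaque and app-scoped, even a party holding dumps from multiple applications has no common key on which to triangulate records back to one person.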

Apple take this one stage further: any application that asks for any personally identifiable data (like an email address, age, postcode, etc.) from any table has to have access to that information specifically approved by the handset’s end user; no explicit permission (on a per-application basis), no data.

The data maintained by Apple – besides personal information, health data (with HealthKit), details of the home automation kit in your house (with HomeKit), and not least the credit card data stored to buy Music, Books and Apps – makes full use of this security model. And they’ve dogfooded it so that third party application providers use exactly the same model, and the same back end infrastructure. Which is also very, very inexpensive (data volumes go into Petabytes before you spend much money).

There are still some nuances I need to work out. I’m used to SQL databases and to some NoSQL database structures (I’m MongoDB certified), but it’s not clear, from looking at the way the database works, which engine is being used behind the scenes. It appears to be a key:value store with some garbage collection mechanics that look like a hybrid file system. It also has the capability to store “subscriptions”, so that if specific criteria appear in the data store, specific messages can be dispatched to the user’s devices over the network automatically. Hence things like new diary appointments in a calendar can be synced across a user’s iPhone, iPad and Mac transparently, without the need for each to waste battery power polling the large database on the server, waiting for events that are likely to arrive infrequently.
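The subscription mechanism described above amounts to server-side push on matching writes. A toy model of the idea – where the names, record shapes and predicate style are all assumptions, not CloudKit’s actual API – might look like this:

```python
from typing import Callable

# Stored subscriptions: (predicate over a record, device to notify).
subscriptions: list[tuple[Callable[[dict], bool], str]] = []
outbox: list[tuple[str, dict]] = []   # (device, record) pushes sent

def subscribe(predicate: Callable[[dict], bool], device: str) -> None:
    subscriptions.append((predicate, device))

def save_record(record: dict) -> None:
    """On write, fan out a push for every matching subscription --
    so clients never poll; the server tells them when data arrives."""
    for predicate, device in subscriptions:
        if predicate(record):
            outbox.append((device, record))

subscribe(lambda r: r.get("type") == "appointment", "ians-iphone")
save_record({"type": "appointment", "title": "Dentist"})
save_record({"type": "note", "title": "Ignore me"})
print(outbox)  # only the appointment triggered a push
```

The battery saving falls out naturally: the evaluation cost sits with the (mains-powered) server at write time, not with each handset polling on a timer.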

The final piece of the puzzle I’ve not worked out yet is, if you already have a large database (say of the calories, carbs, protein, fat and weights of thousands of foods in a nutrition database), how you’d get that loaded into an instance of the public database in Apple’s Cloud. Other than writing custom loading code, of course!

That apart, I’m really impressed with how Apple have designed the datastore to ensure the security of users’ personal data, and to ensure an inability to triangulate data between information stored by different applications. And that if any personally identifiable data is requested by an application, the user of the handset has to specifically authorise its disclosure for that application only – without the app being able to sense whether the data is present at all ahead of that release permission. So, for example, if a health app wants access to your blood sampling data, it doesn’t know whether that data is even present before permission is given – so it can’t draw inferences about your probably having diabetes, which would be possible if it could tell that you were recording glucose readings at all.

In summary, impressive design and a model that deserves our total respect. The more difficult job will be to get the same mindset in the folks looking to release our most personal data that we shared privately with our public sector servants. They owe us nothing less.

Email: is 20% getting through really a success?

Baseball Throw

Over the weekend, I sent an email out to a lot of my contacts on LinkedIn. Because of the number of folks I’m connected to, I elected to subscribe to Mailchimp, the email distribution service recommended by most of the experts I engage with in the WordPress community. I might be sad, but it’s been fascinating to watch the stats roll in after sending that email.

In terms of proportion of all my emails successfully delivered, that looks fine:

Emails Delivered to LinkedIn Contacts

However, 2 days after the email was sent, readership of my email message – with the subject line including the recipient’s Christian name, to avoid one of the main traps that spam gets caught in – is:

Emails Seen and UnOpened

Eh, pardon? Only 47.4% of the emails I sent out were read at all? On first blush, that sounds really low to an amateur like me. I would have expected some non-readership from folks on annual leave, but not for more than half of all messages sent out. In terms of device types used to read the email:

Desktop vs Mobile Email Receipt

which I guess isn’t surprising, given the big volume of readers that looked at the email in the first hour of when it was sent (which was at around 9:00pm on Saturday night). There was another smaller peak between 7am-8am on Sunday morning, and then fairly level tides with small surges around working day arrival, lunch and departure times. In terms of devices used:

Devices used to read Email

However, Mailchimp insert a health warning, saying that iOS devices handshake the email comms reliably, whereas other services are a lot more fickle – so the number of Apple devices may be over-reported. That said, it reinforces the point I made in a post a few days ago about the importance of keeping your email subject line down to 35 characters – to ensure it’s fully displayed on an iPhone.

All in, I was still shocked by the apparent number of emails successfully delivered but not opened at all. Thinking it was bad, I checked and found that Mailchimp reckon the average open rate for voluntarily opted-in lists aimed at folks in Computers and Electronics (my main industry) is 17.8%, with click-throughs to provided content around the 1.9% mark. My email click-through rate is running at 2.9%. So, my email was over 2.5x the industry norm for readership and roughly 50% above normal click-through rates – though these are predominantly people I’ve enjoyed working with in the past, and who voluntarily connected to me down the years.
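As a sanity check, the comparison against the Mailchimp norms is simple arithmetic on the quoted percentages:

```python
# Campaign rates quoted above vs Mailchimp's Computers & Electronics norms.
open_rate = 47.4          # % of delivered emails opened (this campaign)
click_rate = 2.9          # % clicking through to content (this campaign)
industry_open = 17.8      # Mailchimp average open rate for the sector
industry_click = 1.9      # Mailchimp average click-through for the sector

print(f"opens vs industry:  {open_rate / industry_open:.1f}x")
print(f"clicks vs industry: {click_rate / industry_click:.1f}x")
```

The opens multiplier comes out at roughly 2.7x the sector norm, and clicks at roughly 1.5x, i.e. about 50% above it.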

So, sending an email looks to be as bad at getting through as expecting to see Tweets from a specific person in your Twitter stream. I know some of my SMS traffic to my wife goes awry occasionally, and I’m told Snapchat is one of the few messaging services that routinely gives you an indication that your message got through and was viewed.

Getting guaranteed attention for a communication is hence a much longer journey than I expected, probably (like newspaper ads of old) relying on repeat “opportunities to see”. But don’t panic – I’m not sending the email again to that list; it was a one-time exercise.

This is probably a dose of the obvious to most people, but the proportion of emails lost in action – when I always thought it a reliable distribution mechanism – remains a big learning for me.

Urgency, Importance and the Eisenhower Box

Eisenhower Box - Urgent, Important, non-urgent, non-important

I’ve seen variations of this matrix many times, though the most extensive use I witnessed was by Adrian Joseph just after he joined Trafficmaster. The real theory is that nothing should be in the urgent-and-important square; anything there is normally a symptom of bad planning or of a major unexpected (but key) surprise.

When I think back to things I’ve done that triggered major revenue and profit spectaculars, almost all fit in the important-but-not-urgent box; the pressure to move quickly was self-inflicted, based on a clarity of purpose and an intensive focus on doing something that made a big difference to customers. The three major ones were:

  1. Generating 36 pages of text of ideas on how to increase software sales through Digital’s Industrial Distribution Division, then the smallest software Sales “Region” at £700K/year. Having been told to go implement, I never made it past the first 3 ideas, but relentlessly executed them. It became the biggest region two years later at £6m/year.
  2. Justifying and getting funding to do the DECdirect Software Catalogue. The teams around me were fantastic, giving me the bandwidth to lock myself away for nearly three months to work the structure and content with Bruce Stidston and his team at USP Advertising plc. That business went from 0 to $100m in 18 months at margins that never dipped below 89%.
  3. Getting the Microsoft Distribution Business at Metrologie from £1m/month to £5m/month in 4 months, in a price war, and yet doubling margins at the same time. A lot of focus on the three core needs that customers were expressing, and then relentlessly delivering against them.

The only one I recall getting into the Urgent/Important segment was a bid document for a sizable supply contract with HMSO (Her Majesty’s Stationery Office), where I was asked to provide a supplementary chapter covering Digital’s Servers, Storage, Comms and Software products. This was to be added to a comprehensive document produced by the account manager covering all the other product areas for the company. I’d duly done this in the format she originally requested, but asked to see the rest of the document to make sure everything was covered – and that we’d left no gaps between the main document and my own – with two days to go. At which point she said she’d had no time, and had decided to no-bid the work (without telling any of us!).

The sales team really needed the revenue, so they agreed to let me disappear home for two days to build the full proposal around the work I’d done, including commercial terms, a marketing plan and a summary of all the sales processes needed to execute the relationship – but just for the vendor we were accountable for. We got the document to Norwich with 30 minutes to spare. Two weeks later, we were told we’d won the business for the vendor we represented.

The lesson was to put more progress checks in as the project was unfolding, and to ensure we never got left in that position again, independent of how busy we were with other things at the same time.

With that, I’ve never really hit the urgent/important corner again – which I think is a good thing. There is plenty that is important, though – but forcing adherence to what Toyota term “takt time” helps measure progress, and pushes us along.

12 years of data recording leads to a dose of the obvious

Ian Waring Weight Loss Trend Scatter Graph

As mentioned yesterday, I finally got Tableau Desktop Professional (my favourite analytics software) running on Amazon Workspaces – deployed for all of $35 for the month instead of having to buy my own Windows PC. With that, I ran a final set of trends to establish what I do right when I consistently lose 2lbs/week, based on an analysis of my intake (Calories, Protein, Carbs and Fat) and exercise since June 2002.

I marked out a custom field that reflected the date ranges on my historical weight graph where I appeared to consistently lose, gain or flatline. I then threw together all sorts of scatter plots (like the one above, plotting my intake over long periods where I had consistent weight losses) to ascertain what factors drove the weight changes I’ve seen in the past. This was nominally to settle on a strategy to drop to my target weight as fast as I could in a sustainable fashion. Historically, that rate has been 2lbs/week.

My protein intake had zero effect. Carbs and Fat did, albeit they tracked the effect of my overall Calorie intake (whether measured in weight or in the number of Calories each contributes – 1g of Carbs = 3.75 Kcals, 1g of Fat = 9 Kcals, and 1g of Protein is circa 4 Kcals). The split of Kcals that WeightLossResources recommend for an optimum balance (they give a daily pie chart of Kcals from each) is 50% Carbs, 30% Fat and 20% Protein.
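The gram-to-calorie conversions and the recommended 50/30/20 split above can be sketched in a few lines. The sample day’s intake below is a made-up figure chosen to land roughly on the recommended balance:

```python
# Kcal-per-gram figures as quoted in the post: carbs 3.75, fat 9, protein ~4.
KCAL_PER_GRAM = {"carbs": 3.75, "fat": 9.0, "protein": 4.0}

# The WeightLossResources recommended split of daily calories.
TARGET_SPLIT = {"carbs": 0.50, "fat": 0.30, "protein": 0.20}

def kcal_breakdown(grams: dict) -> dict:
    """Convert grams of each macro into (calories, share of total calories)."""
    kcal = {m: g * KCAL_PER_GRAM[m] for m, g in grams.items()}
    total = sum(kcal.values())
    return {m: (k, k / total) for m, k in kcal.items()}

# Hypothetical day's intake, roughly on the recommended split.
day = {"carbs": 310, "fat": 78, "protein": 116}
for macro, (kcal, share) in kcal_breakdown(day).items():
    print(f"{macro}: {kcal:.0f} kcal ({share:.0%}, target {TARGET_SPLIT[macro]:.0%})")
```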

So, what are the take-homes having done all the analysis?

Breathtakingly simple. If I keep my food intake, less exercise calories, at circa 2,300-2,350 calories per day, I will lose a consistent 2lbs/week. The exact balance between carbs, protein and fat intake doesn’t matter materially, as long as the total is close, though my best ever long-term loss had me running things close to the recommended balance. All eyes on that pie chart on the WLR web site as I enter my food, then!

The stupid thing is that my current BMR (Basal Metabolic Rate – the minimum level of energy your body needs at rest to function, including your respiratory and circulatory organs, neural system, liver, kidneys and other organs) is 2,364, and before the last 12-week Boditrax competition at my gym, it was circa 2,334 or so. Increased muscle from lifting some weights put this up a little.

So, the basic message is to keep what I eat, less the calories from any exercise I log, at the same level as my BMR, which in turn will track down as my weight does. That more or less guarantees that any exercise I take over and above what I log – which is only long walks with Jane and gym sessions – will come off my fat reserves.
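The rule just described – net intake (food minus logged exercise) at or below BMR – is simple enough to express directly. The BMR figure is the one quoted in this post; the sample days are invented:

```python
# A sketch of the rule above: keep (food intake - logged exercise calories)
# at or below BMR, so unlogged activity comes off fat reserves.
# BMR is the figure quoted in the post; the sample days are made up.

BMR = 2364  # kcal/day, from the Boditrax reading mentioned above

def net_intake(food_kcal: float, exercise_kcal: float) -> float:
    """Food eaten less the calories burned in logged exercise."""
    return food_kcal - exercise_kcal

def on_track(food_kcal: float, exercise_kcal: float, bmr: float = BMR) -> bool:
    """True if the day's net intake is at or under BMR."""
    return net_intake(food_kcal, exercise_kcal) <= bmr

print(on_track(food_kcal=2600, exercise_kcal=300))  # 2300 net: under BMR
print(on_track(food_kcal=2600, exercise_kcal=100))  # 2500 net: over BMR
```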

Simple. So, all else being equal, put less food in my mouth and I’ll lose weight. The main benefit of 12 years of logging my intake is that I can say authoritatively – for me – the levels at which this is demonstrably true. And that should speed my arrival at my optimum weight.

What do IT Vendors/Distributors/Resellers want?

What do you want? Poster

Off the top of my head, what are the expectations of the various folks along the flow from vendor to end user of a typical IT product or service? I’m sure I’ve probably missed some nuances – if so, what is missing?

Vendors:
  • Provide Product and/or Services for Resale
  • Accountable for Demand Creation
  • Minimise costs at scale by compensating channels for:
    • Customer Sales Coverage and Regular Engagement of each
    • Deal Pipeline, and associated activity to increase:
      • Number of Customers
      • Range of Vendor Products/Services Sold
      • Customer Purchase Frequency
      • Product/Service Mix in line with Vendor objectives
    • Investment in skills in Vendor Products/Services
    • Associated Technical/Industry Skills useful to close vendor sales
    • Activity to ensure continued Customer Success and Service Renewals
    • Engagement in Multivendor components to round out offering
  • Establish clear objectives for Direct/Channel engagements
    • Direct Sales have place in Demand Creation, esp emerging technologies
    • Direct Sales working with Channel Partner Resources heavily encouraged
    • Direct Sales Fulfilment a no-no unless clear guidelines upfront, well understood by all
    • Avoid unnecessary channel conflict; actively discourage sharing the results of a reseller’s end-user engagement history unless the presence/relationship/history of the third-party reseller with end-user decision makers (not just purchasing!) is compelling and equitable

Distributors:
  • Map vendor single contracts/support terms to thousands of downstream resellers
  • Ensure the spirit and letter of Vendor trading/marketing terms are delivered downstream
  • Break Bulk (physical logistics, purchase, storage, delivery, rotation, returns)
  • Offer Credit to resellers (mindful that typically <25% of trading debt is insurable)
  • Centralised Configuration, Quotation and associated Tech Support used by resellers
  • Interface into Vendor Deal Registration Process, assist vendor forecasting
  • Assistance to vendor in provision of Accreditation Training

Resellers:
  • Have Fun, Deliver Good Value to Customers, Make Predictable Profit, Survive
  • Financial Return for time invested in Customer Relationships, Staff knowledge, Skills Accreditations, own Services and institutional/process knowledge
  • Trading terms in place with vendor(s) represented and/or distributor(s) of same
  • Manage own Deal Pipeline, and associated activity to increase one or more of:
    • Number of Customers
    • Range of Vendor Products/Services Sold
    • Customer Purchase Frequency
    • Product/Service Mix in line with Vendor objectives
    • Margins
  • Assistance as needed from Vendor and/or Distributor staff
  • No financial surprises

So, what have I missed?

I do remember, in my relative youth, that as a vendor we used to work out our own staffing needs based on the amount of B2B revenue we wanted to achieve in each of catalogue/web sales, direct sales, VARs and IT Distribution. Plug in the revenue needed at the top, and it gives the number of sales staff required, then the number of support resources for every n folks at the layer before – and then the total advertising/promotion needed in each channel. It looked exactly like this:

1991 Channel Mix Ready Reckoner

Looking back at this and comparing it to today, the whole IT industry has become radically more efficient as time has gone by. That said, a good ready reckoner is to map in the structure/numbers of whoever you feel are the industry leader(s) in your market today, build an analogue of the channel mix they use, and see how that pans out. It will give you a basis from which to assess the sizes and productivity of your own resources – as a vendor, at least!
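The ready-reckoner logic described above – revenue targets per channel in at the top, headcount and promotion spend out at the bottom – can be sketched as follows. Every ratio below is a made-up placeholder for illustration, not Digital’s actual 1991 numbers:

```python
# A hedged sketch of the channel-mix ready reckoner described above.
# All per-channel productivity and promotion ratios are hypothetical.

CHANNELS = {
    # channel: (revenue per salesperson/yr, reps per support person,
    #           promotion spend as a fraction of revenue)
    "catalogue_web": (2_000_000, 8, 0.04),
    "direct_sales":  (1_200_000, 4, 0.01),
    "vars":          (3_000_000, 6, 0.02),
    "distribution":  (6_000_000, 10, 0.015),
}

def reckoner(revenue_targets: dict) -> dict:
    """Derive headcount and promotion budget per channel from revenue targets."""
    plan = {}
    for channel, revenue in revenue_targets.items():
        per_head, reps_per_support, promo_rate = CHANNELS[channel]
        sales = -(-revenue // per_head)          # ceiling division
        support = -(-sales // reps_per_support)  # one support per n reps
        plan[channel] = {
            "sales_heads": sales,
            "support_heads": support,
            "promotion_budget": revenue * promo_rate,
        }
    return plan

targets = {"catalogue_web": 10_000_000, "direct_sales": 6_000_000,
           "vars": 9_000_000, "distribution": 24_000_000}
for channel, plan in reckoner(targets).items():
    print(channel, plan)
```

Swapping in the structure of a current market leader, as suggested above, is just a matter of replacing the ratios in `CHANNELS` with their observed numbers.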