IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these reflect changes well past the industry’s move from CapEx-led investments to OpEx subscriptions several years back, and indeed the wholesale growth in the use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. If anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one-page site saying what the result will look like) was for 3p. My Digital Ocean server instance (which runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per software instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and “where the experts to manage infrastructure and its security live” at scale.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, SoftLayer among IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry is moving to scale-out open source NoSQL (key:value, document-orientated) databases, and components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network. For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes – none of which I have operational experience with (as I’m building new apps with fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers. FOG in the Ruby ecosystem. CloudFoundry (termed BlueMix in IBM) is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon Lambda. There are actually two different entities in the mix: one where you provide code and pay per invocation against external events, the other being able to scale (or contract) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and to a lesser extent AWS are packaging up API calls for various core services and machine learning facilities. For example, I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) in the picture, the face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35-minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far only on internal intranet-type apps, not exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against axes of product life cycle and value chains – and to suss the likely movement of components (which also tells you where to apply Six Sigma and where agile techniques within the same organisation). It’s more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
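
The range-based sharding mentioned in point 4 can be shown in miniature: a document is routed to a shard by comparing its shard key against chosen split points. A toy Python sketch (split points and shard names are invented; in a real MongoDB cluster the mongos router does this, not your application code):

```python
import bisect

# Toy range-based shard routing: documents whose shard key falls in a given
# key range land on the corresponding shard, spreading the read load.
SPLIT_POINTS = ["g", "n", "t"]                      # assumed split keys
SHARDS = ["shard0", "shard1", "shard2", "shard3"]   # one more shard than splits

def route(shard_key):
    # bisect finds which key range the shard key falls into
    return SHARDS[bisect.bisect_right(SPLIT_POINTS, shard_key)]
```

With these splits, a key of "apple" routes to shard0 and "zebra" to shard3; adding a split point adds a range without touching existing data layout logic.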
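
The “pay per invocation against external events” model in point 7 boils down to writing a single handler function and letting the platform worry about servers. A minimal sketch in the shape of AWS Lambda’s Python handler signature (the event contents here are invented for illustration):

```python
# Minimal Lambda-style handler: the platform calls this once per event.
# The (event, context) signature matches Lambda's Python runtime; the
# "name" field in the event is a made-up example payload.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}
```

You pay only when an event arrives and the function runs; there is no idle machine to manage or bill for.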
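
The Vision API call in point 8 amounts to POSTing base64-encoded image bytes plus a list of requested features. A sketch that just builds the JSON request body (field layout per Google’s public v1 REST documentation; treat this as illustrative, not a tested client):

```python
import base64

def vision_face_request(jpeg_bytes, max_results=10):
    # Build the body for Vision's images:annotate endpoint, asking for
    # face detection; the caller would POST this as JSON with an API key.
    return {
        "requests": [{
            "image": {"content": base64.b64encode(jpeg_bytes).decode("ascii")},
            "features": [{"type": "FACE_DETECTION", "maxResults": max_results}],
        }]
    }
```

Swapping the feature type (e.g. to label detection) is what turns “find the faces” into “describe what’s in the picture”.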

There is quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Nadella: Heard what he said, knew what he meant

Satya Nadella

That’s a variation on an old “Two Ronnies” song, performed in the guise of “Jehosaphat & Jones” and entitled “I heard what she said, but knew what she meant” (the words are about three minutes into this video). Having read Satya Nadella’s Open Letter to employees issued at the start of Microsoft’s new fiscal year, I did think it was long. However, the real delight was reading Jean-Louis Gassee – previously the CTO of Apple – not only pulling it apart, but then having a crack at showing how it should have been written:

Team,

This is the beginning of our new FY 2015 – and of a new era at Microsoft. I have good news and bad news. The bad news is the old Devices and Services mantra won’t work. For example: I’ve determined we’ll never make money in tablets or smartphones.

So, do we continue to pretend we’re “all in” or do we face reality and make the painful decision to pull out so we can use our resources – including our integrity – to fight winnable battles? With the support of the Microsoft Board, I’ve chosen the latter.

We’ll do our utmost to minimize the pain that will naturally arise from this change. Specifically, we’ll offer generous transition arrangements in and out of the company to concerned Microsoftians and former Nokians.

The good news is we have immense resources to be a major player in the new world of Cloud services and Native Apps for mobile devices.

We let the first innings of that game go by, but the sting energizes us. An example of such commitment is the rapid spread of Office applications – and related Cloud services – on any and all mobile devices. All Microsoft Enterprise and Consumer products/services will follow, including Xbox properties.

I realize this will disrupt the status quo and apologize for the pain to come. We have a choice: change or be changed.

Stay tuned.

Satya.

Jean-Louis Gassee’s full take-home on the original is provided here. Satya Nadella should hire him.

11 steps to initiate a business spectacular – true story

Nuclear Bomb Mushroom

I got asked today how we grew the Microsoft Business at (then) Distributor Metrologie from £1m/month to £5m/month, at the same time doubling the margin from 1% to 2% in the thick of a price war. The sequence of events was as follows:

  1. Metrologie had the previous year bought Olivetti Software Distribution, and had moved its staff and logistics into the company’s High Wycombe base. I got asked to take over the Management of the Microsoft Business after the previous manager had left the company, and the business was bobbing along at £1m/month at 1% margins. Largest customer at the time was Dixons Stores Group, who were tracking at £600K sales per month at that stage.
  2. I was given the one purchasing person to build into a Product Manager, and one buyer. There was an existing licensing team in place.
  3. The bit I wasn’t apprised of was that the Directors had been told that the company was to be subject to a Productivity Improvement Plan, at the same time as the vendor was looking to rationalise its UK Distributor numbers from 5 to 4. This is code for a prewarning that the expected casualty was… us.
  4. I talked to 5 resellers and asked what issues they had dealing with any of the Microsoft distributors. The main issue was staff turnover (3 months telesales service typical!), lack of consistent/available licensing expertise and a minefield of pricing mistakes that lost everyone money.
  5. Our small team elected to use some of our Microsoft funds to get as many front line staff as possible Microsoft Sales certified. I wasn’t allowed to take anyone off the phones during the working week, but managed to get 12 people in over a two day weekend to go from zero to passing their accreditation exam. They were willing to get that badge to get them better future career prospects. A few weeks later we trained another classful on the same basis; we ended up with more Sales accredited salespeople than all the other distributors at the time.
  6. With that, when someone called in to order PCs or Servers, they were routinely asked if they wanted software with them – and found (to their delight) that they had an authoritative expert already on the line who handled the order, without surprises, first time.
  7. If you’re in a price war, you focus on two things: isolating who your key customers are, and profiling the business to see which are the key products.
  8. For the key growth potential customers, we invested our Microsoft co-op funds in helping them do demand creation work; with that, they had a choice of landing an extra 10% margin stream new business dealing with us, or could get 1% lower prices from a distributor willing to sell at cost. No contest, as long as our pricing was there or thereabouts.
  9. The key benchmark products were Microsoft Windows and Microsoft Office Professional. Whenever deciding who to trade with, the first phone call was to benchmark the prices of those two part numbers, or slight variations of the same products. However, no-one watched the surrounding, less common products. So, we priced Windows and Office very tightly, but increased the selling prices by 2-3% on the less common products. The default selling price for a specific size of reseller (which mapped into which sales team looked after their account) was put on the trading platform to ensure consistency.
  10. Hand offs to the licensing team, if the business landed, were double-bubbled back to the field/internal salesperson team handling each account – so any more complex queries were handed off, handled professionally, priced and transacted without errors.
  11. We put all the measures in place, tracking the number of customers buying Microsoft software from us 1 month in 3, 2 months in 3 and every month. We routinely incented each sales team to increase the purchase frequencies in their account base on call out days, with programs that were well supported and fun in the office.
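
The pricing tactic in steps 7–9 is worth a worked example: keep the two benchmark SKUs at a wafer-thin markup that wins the phone-call comparison, add 2–3% to the less-watched lines, and the blended margin climbs above the benchmark rate. A sketch with entirely made-up costs, volumes and markups:

```python
# Illustrative only: invented figures showing how tight benchmark pricing
# plus a 3% uplift on less common products lifts the blended margin.
cost = {"windows": 100.0, "office": 300.0, "project": 250.0, "visio": 200.0}
volume = {"windows": 500, "office": 400, "project": 60, "visio": 40}
markup = {"windows": 1.01, "office": 1.01,   # benchmark lines: ~1%
          "project": 1.03, "visio": 1.03}    # less-watched lines: ~3%

revenue = sum(cost[p] * markup[p] * volume[p] for p in cost)
cogs = sum(cost[p] * volume[p] for p in cost)
blended_margin = (revenue - cogs) / revenue
```

Even with the benchmark SKUs dominating volume, the blended margin comes out above the 1% those two lines earn on their own – which is the whole trick.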

The business kept on stepping up. Still a few challenges; at least twice we got reverse ram raids, emptying returned stock back into our warehouse on day 30 of a 31-day month, making a sudden need for sales on the last trading day a bit of a white knuckle ride to offset the likely write-down credit (until Microsoft could in turn return the cost to us). The same customer had, at the time, a habit of deciding not to pay its suppliers at the end of key trading months, which is not a good thing when you’re making 1% margins assuming they’d pay you to terms.

One of the side effects of the Distribution business is that margins are thin, but volume grows aggressively – at least until you end up with a very small number of really big distributors left standing. A bit like getting wood shavings from wood on a lathe – you want just enough to peel off and the lathe turning faster and faster – but shy away from trying to be too greedy, digging the chisel in deeper and potentially seizing up the lathe.

With a business growing 40%+ per year and margins in the 1-2% range, you can’t fund the growth from retained profits. You just have to keep going back to the stock market every year, demonstrating growth that makes you look like one of the potential “last men standing”, and get another cash infusion to last until next year. And so it goes on, with the smaller distributors gradually falling away.

With the growth from £1m/month to £5m/month in 4 months – much less than the time to seek extra funds to feed the cash position to support the growth – the business started to overtrade. Vendors were very strict on terms, so it became a full time job juggling cash to keep the business flowing. Fortunately, we had magnificent credit and finance teams who, working with our resellers, allowed us the room to keep the business rolling.

With that, we were called into a meeting with the vendor to be told that we were losing the Microsoft Business, despite the big progress we’d made. I got headhunted for a role at Demon Internet, and Tracy (my Product Manager of 4 months experience) got headhunted to become Marketing Manager at a London Reseller. I stayed an extra month to complete our appeal to the vendor, but left at the end of June.

About 2 weeks into my new job, I got a call from my ex-boss to say the company’s appeal had been successful at European level, and that their Distribution Contract with the vendor was to continue. A great end to that story. The company later merged with one of the other distributors, and a cheque for £1000 arrived in the post at home for payment of stock options I’d been awarded in my last months there.

So, the basics are simple, as are the things you need to focus on if you’re ever in a price war (I’ve covered the basics in two previous blog posts, but the more advanced things are something I’d need to customise for any specific engagement). But talking to the customer, and working back from their issues to deliver a good, friction-free experience, is a great way to get things fixed. It has demonstrably worked for me every time – so far!

Fixed! Tableau on my Mac using Amazon WorkSpaces

AWS Logo

I found out today that we may need to wait another month for Tableau Desktop Professional for the Mac to be released, and I’ve been eager to finish off my statistical analysis project. I’ve collected 12 years’ worth of daily food intake courtesy of WeightLossResources, which splits out into calories, carbs, protein, fat and exercise calories – and is tabulated against weekly weight readings.

Google Fusion Tables – in which I did a short online course – can do most things except calculate and draw a straight line, or exponential equivalent, through a scatter plot. This is meat and drink to Tableau, which unfortunately (for Mac, Chromebook and iPad user me) runs only on Microsoft Windows.

I got a notification this morning that Amazon Web Services – as promised at their AWS Summit 2014 in London last week – had released Amazon WorkSpaces hosted within Europe. This provisions quite a meaty PC for you, which you can operate through provided client software on your local PC, Mac, Android Tablet or iPad. There is also a free add-on to sync the contents of a local Windows or Mac directory with the virtual storage on the hosted PC, so you can hook in access to files on your local device if needed. There are more advanced options for corporate users, including Active Directory support and the ability to use that to sideload apps for a user community – though that is way in advance of what I’m doing here.

There are a number of options, from the “Basic” single-CPU, 3.75GB memory, 50GB disk PC up to one with 2 CPUs, 7GB of memory, 100GB of disk and the complete Microsoft Office Professional Suite on board. More here. Prices range from $35 to $75 per PC per month.

I thought I’d have a crack at provisioning one for the month, giving me 2 weeks to play with a trial copy of Tableau Desktop Professional (I’ve not used it since V7; the current release is 8.1). Within 20 minutes of requesting it from my AWS console, I received an email saying it had been provisioned and was ready to go. So…

WorkSpaces Set Up


You tell it what you want, and it goes away for 20 minutes provisioning your request (I managed to accidentally do this for a US region, but deleted that and selected Ireland instead – it provisioned just the one in the Ireland datacentre). Once done, it sent me an email with a URL and a registration code for my PC (it will do this for each user if you provision several at once):

AWS WorkSpaces Registration


Tap in the registration code from the email received, and it does the initial piece of the client-end configuration, then asks me to log in:

AWS Workspaces Login


Once I’d done that, it then invited me to install the client software, which I did for Mac OS/X locally, and it emailed the links for Android and iOS to my email address to pick up on those devices. For what it’s worth, the Android version said my Nexus 5 wasn’t a supported device (I guess it needs a tablet), but the iOS version installed fine on my iPad Mini.

AWS Workspaces Client Setup


And in I went. A Windows PC. Surprisingly nippy, and I felt no real difference between this and what I remember of a local Windows 7 laptop I used to have at Computacenter some 18 months ago now:

AWS Workspaces Microsoft Windows


The main need then was to drop a few files onto the hard disk, for which I had to revisit the Amazon WorkSpaces web site and download the Sync package for Mac OS/X. Once installed on my Mac, it asked for my PC’s registration code again (it wouldn’t accept it copy/pasted on that one screen, so I had to carefully re-enter a short string), asked which local Mac directory I wanted to sync with the hosted PC, and off it went. It syncs just like Dropbox, and took a few minutes to populate with quite a few files I had sitting there already. Once up, I used the provided Firefox to download Tableau Desktop Professional and the Excel driver I needed (as I don’t have Microsoft Office on my basic version here) and – voila. Tableau running fine on AWS WorkSpaces, on my MacBook Air:

Tableau Desktop Professional Running


Very snappy too, and I’m now back at home with my favourite Analytics software of all time – on my Mac, and directly on my iPad Mini too. The latter has impressive keyboard and mouse support, just a two-finger gesture (not that one) away at all times.

So, I now have the tools to complete the statistical analysis storyboard of my 12 years of nutrition and weight data – and to set specific calorie and carb targets to hit my 2lbs/week downward goal again (I’ve been tracking at only half that rate over the last 6 months).

In the meantime, I’ve been really impressed with Amazon WorkSpaces. Fast, simple and inexpensive – and probably of wide applicability to lots of Enterprise customers I know. A Windows PC that I can dispose of again as soon as I’ve finished with it, for a grand sum of less than £21 for my month’s use. Tremendous!

Programming and my own sordid past

Austin Maestro LCP5

Someone asked me what sort of stuff I’ve programmed down the years. I don’t think I’ve ever documented it in one place, so I’m going to attempt a short summary here. I even saw that car while it was still in R&D at British Leyland! There are lots of other smaller hacks, but these give a flavour of the more sizable efforts. The end result is why I keep technically adept, even though most roles I have these days are more managerial in nature, where the main asset attainable is being able to suss BS from a long distance.

Things like Excel, 1-2-3, Tableau Desktop Professional and latterly Google Fusion Tables are all IanW staples these days, but I’ve not counted these as real programming tools. Nor have I counted use of SQL commands to extract data from database tables directly in MySQL, or within Microsoft SQL Server Reporting Services (SSRS), which I’ve also picked up along the way. Ditto for the JavaScript-based UI in front of MongoDB.

Outside of these, the projects have been as follows:

JOSS Language Interpreter (A Level Project: PAL-III Assembler). This was my tutor’s University project, a simple language consisting of only 5 commands. Wrote the syntax checker and associated interpreter. Didn’t even have a “run” command; you just did a J 0 (Jump to Line Zero) to set it in motion.

Magic Square Solver (Focal-8). Managed to work out how to build a 4×4 magic square where every row, column, both diagonals and the centre four squares all added up to the same number. You could tap in any number and it would work out the square for you and print it out.
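
The property described is easy to check in a few lines. A Python sketch using a classic 4×4 square with exactly those properties (not necessarily the one the Focal-8 program produced):

```python
# A 4x4 magic square: every row, column, both diagonals and the centre
# 2x2 block all sum to the same constant (34 for the digits 1..16).
SQUARE = [
    [1, 15, 14, 4],
    [12, 6, 7, 9],
    [8, 10, 11, 5],
    [13, 3, 2, 16],
]

def all_sums(sq):
    sums = [sum(row) for row in sq]                        # rows
    sums += [sum(row[c] for row in sq) for c in range(4)]  # columns
    sums.append(sum(sq[i][i] for i in range(4)))           # main diagonal
    sums.append(sum(sq[i][3 - i] for i in range(4)))       # anti-diagonal
    sums.append(sq[1][1] + sq[1][2] + sq[2][1] + sq[2][2]) # centre block
    return sums
```

Every entry of `all_sums(SQUARE)` comes out as 34, the magic constant for 1–16.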

Paper Tape Spooler (Basic Plus on RSTS/E). My first job at Digital (as trainee programmer) was running off the paper tape diagnostics my division shipped out with custom-built hardware options. At the time, paper tape was the universal data transfer medium for PDP-8 and PDP-11 computers. My code spooled multiple copies out, restarting from the beginning of the current copy automatically if the drive ran out of paper tape mid-way through. My code permitted the operator to input a message, which was printed out in 8×7 dot letter shapes using the 8 hole punch at the front of each tape – so the field service engineer could readily know what was on the tape.

Wirewrap Optimiser (Fortran-11 on RSX-11M). At the time my division of DEC was building custom circuit boards for customers to use on their PDP-8 and PDP-11 computers, extensive use was made of wire-wrapped backplanes, into which the boards plugged to reach the associated OmniBus, UniBus or Q-Bus electronics. The Wirewrap program was adapted from a piece of public domain code to tell the operator (holding a wirewrap gun) which pins on a backplane to wire together and in what sequence. This was nominally to minimise the number of connections needed, and to make the end result as maintainable as possible (avoiding too many layers of wires to unpick if a mistake was made during the build).

Budgeting Suite (Basic Plus on RSTS/E). Before we knew of this thing called a Spreadsheet (it was a year after Visicalc had first appeared on the Apple ][), I coded up a budget model for my division of DEC in Basic Plus. It was used to model the business as it migrated from doing individual custom hardware and software projects into one where we looked to routinely resell what we’d engineered to other customers. Used extensively by the Divisional board director that year to produce his budget.

Diagnostics (Far too many to mention, predominantly Macro-11 with the occasional piece of PAL-III PDP-8 Assembler, standalone code or adapted to run under DEC-X/11). After two years of pushing bits to device registers, and ensuring other bits changed in sync, it became a bit routine and I needed to get out. I needed to talk to customers … which I did on my next assignment, and then escaped to Digital Bristol.

VT31 Light Pen Driver in Macro-11 on RSX-11M. The VT31 was a bit mapped display and you could address every pixel on it individually. The guy who wrote the diagnostic code (Bob Grindley) managed to get it to draw circles using just increment and decrement instructions – no sign of any trig functions anywhere – which I thought was insanely neat. So neat, I got him to write it up on a flowchart which I still have in my files to this day. That apart, one of our OEM customers needed to fire actions off if someone pressed the pen button when the pen was pointing at a location somewhere on the screen. My RSX-11M driver responded to a $QIO request to feed back the button press event and the screen location it was pointing at when that occurred, either directly, or handled as an Asynchronous System Trap (AST in PDP-11 parlance). Did the job, I think used in some aerospace radar-related application.

Kongsberg Plotter Driver (Pressed Steel Fisher, Macro-11 on RSX-11M). Pressed Steel Fisher were the division of British Leyland in Cowley, Oxford who pressed the steel plates that made Austin and Morris branded car bodies. The Kongsberg Plotter drew full-size stencils which were used to fabricate the car-size body panels; my code drove the pen on it from customers’ own code converted to run on a PDP-11. The main fascination personally was being walked through one workshop where a full-size body of an as-yet-unannounced car was sitting there, complete. Called at that stage the LCP5, it was released a year later under the name Austin Maestro – the mid-range big brother to the now largely forgotten Mini Metro.

Spanish Lottery Random Number Generator (De La Rue, Macro-11 on RSX-11M). De La Rue had a secure printing division that printed most of the cheque books used in the UK back in the 1980s. They were contracted by the Spanish Lottery to provide a random number generator. I’m not sure if this was just to test things or if it was used for the real McCoy, but I was asked to provide one nonetheless. I wrote all the API code and unashamedly stole the well-tested random number generator code itself from the sources of the single-user, foreground/background-only Operating System RT-11. It worked well, and the customer was happy with the result. I may have passed up the opportunity to become really wealthy in being so professional 🙂

VAX PC-11 Paper Tape Driver (Racal Redac, Thorn EMI Wookey Hole, others, Macro-32 on VAX/VMS). Someone from Educational Services had written a driver for the old PC11 8-hole Paper Tape Reader and Punch as an example driver. Unfortunately, if it ran out of paper tape when outputting the blank header or trailer (you had to leave enough blank tape either end to feed the reader properly), then the whole system crashed. Something of an inconvenience if it was supposed to be doing work for hundreds of other users at the same time. I cleaned up the code, fixed the bug and then added extra code to print a message on the header as I’d done earlier in my career. The result was used in several applications to drive printed circuit board, milling and other manufacturing machines which still used paper tape input at that stage.

Stealth Tester, VAX/VMS Space Invaders (British Aerospace, VAX Fortran on VAX/VMS). Not an official project, but one of our contacts at British Aerospace in Filton requested help fixing a number of bugs in his lunchtime project – implementing Space Invaders on VAX/VMS for any user on an attached VT100 terminal. The team (David Foddy, Bob Haycocks and Maurice Wilden) nearly got outed when poring over a listing as the branch manager (Peter Shelton) walked into the office unexpectedly, though he left seemingly impressed by his employees working so hard to fix a problem with VAX Fortran “for BAE”. Unfortunately, I was the weak link a few days later; the same manager walked into the Computer Room when I was testing the debugged version, but before they’d added the code to escape quickly if the operator tapped control-C on the keyboard. When he looked over my shoulder after seeing me frantically trying to abort something, he was greeted by the Space Invaders Superleague, complete with the pseudonyms of all the testers onboard. Top of that list being Flash Gordon’s Granny (aka Maurice Wilden) and two belonging to Bob Haycocks (Gloria Stitz and Norma Snockers). Fortunately, he saw the funny side!

VMS TP Monitor Journal Restore (Birds Eye Walls, Macro-32 on VAX/VMS). We won an order to supply 17 VAX computers to Birds Eye Walls, nominally for their “Nixdorf Replacement Project”. The system was a TP Monitor that allowed hundreds of telesales agents to take orders for Birds Eye Frozen Peas, other Frozen goods and Walls Ice Cream from retailers – and play the results into their ERP system. I wrote the code that restored the databases from the database journal in the event of a system malfunction, hence minimising downtime.

VMS TP Monitor Test Suite (Birds Eye Walls, Macro-32 and VAX Cobol on VAX/VMS). Having done the database restore code, I was asked to write some test programs to do regression tests on the system as we developed the TP Monitor. Helped it all ship on time and within budget.

VMS Print Symbiont Job Logger (Birds Eye Walls, Macro-32 on VAX/VMS). One of the big scams on the previous system was the occasional double printing of a customer invoice, which doubled as a pick list for the frozen food delivery drivers. If such a thing happened inadvertently or on purpose, it was important to spot the duplicate printing and ensure the delivery driver only received one copy (otherwise they’d likely receive two identical pick lists, take away goods and then be tempted to lose one invoice copy: free goods). I had to modify the VMS Print Symbiont (the system print spooler) to add code to log each invoice or pick list printed – for subsequent audit by other people’s code.

Tape Cracking Utilities (36 Various Situations, Macro-32 on VAX/VMS). After moving into Presales, the usual case was to be handed some Fortran, Cobol or other code on an 800 or 1600bpi Magnetic Tape to port over and benchmark. I ended up being the district (3 offices) expert on reading all sorts of tapes from IBM, ICL and a myriad of other manufacturers’ systems. I built a suite of analysis tools to help work out the data structures on them, and then other Macro-32 code to read the data and put it in a format usable on VAX/VMS systems. The customer code was normally pretty easy to get running, and benchmarks were timed after that. The usual party trick was then to put the source code through a tool called “PME”, which took the place of the source code debugger and sampled the PC (Program Counter) 50 times per second as the program ran. Once finished, an associated program output a graph showing where the user’s software was spending all its time; a quick tweak in a small subroutine amongst a mountain of code, and zap – the program ran even faster. PME was productised by its author Bert Beander later on, the code becoming what was then known as VAX Performance and Coverage Analyzer – PCA.
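
The PME idea – sample the Program Counter on a timer, then histogram where the samples land – can be sketched in miniature. A toy Python illustration using a synthetic trace of samples (the routine names are invented; a real sampler would capture them from the running program):

```python
from collections import Counter

# Toy sampling-profiler aggregation: given PC samples bucketed by routine,
# report the fraction of time spent in each, hottest first.
samples = ["read_block"] * 5 + ["parse_record"] * 40 + ["write_vms"] * 5

def hotspots(trace):
    counts = Counter(trace)
    total = len(trace)
    return [(name, n / total) for name, n in counts.most_common()]
```

With 40 of 50 samples landing in one routine, the report immediately points at the small subroutine worth tweaking – exactly the PME party trick.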

Sales Out Reporting System (Datatrieve on VAX/VMS). When drafted in to look after our two industrial distributors, I wrote some code that consolidated all the weekly sales-out reporting for our terminals and systems businesses (distributors down to the resellers that bought through each) and mapped the sales onto the direct account team looking after each end user account that purchased the goods. They got credit for those sales as though they’d made them themselves, so they worked really effectively at opening doors for the routine high volume but low order value fulfilment channels; the whole chain worked together to maximise sales for the company. That allowed the End User Direct Account Teams to focus on the larger opportunities in their accounts.

Bakery Recipe Costing System (GW-Basic on MS-DOS). My father started his own bakery in Tetbury, Gloucestershire, selling up his house in Reading to buy a large 5-storey building (including shopfront) at 21, Long Street there. He then took out sizable loans to pay for an oven, associated craft bakery equipment and shop fittings. I managed to take a lot of the weight off his shoulders when he was originally seeing lots of spend before any likely income, by projecting all his cashflows in a spreadsheet. I then wrote a large GW-Basic application (the listing was longer than our combined living and dining room floors at the time) to maintain all his recipes, including ingredient costs. He then ran the business with a cash float of circa 6% of annual income. If it trended higher, he banked the excess; if it trended lower, he input the latest ingredient costs into the model, which recalculated the markups on all his finished goods to raise his shop prices. That code, running on a DEC Rainbow PC, lasted over 20 years – after which I recoded it in Excel.
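The repricing logic described above can be sketched as follows. This is a hypothetical reconstruction – the ingredient names, costs, recipe and markup are invented for illustration, not taken from the original GW-Basic code:

```python
# When an ingredient cost changes, reprice every finished good from its
# recipe so the markup over ingredient cost stays constant. All names
# and numbers here are illustrative.

ingredient_costs = {"flour": 0.40, "yeast": 2.00, "salt": 0.30}  # £/kg

recipes = {
    # product: kg of each ingredient used per batch, and loaves per batch
    "large_white": {"uses": {"flour": 10.0, "yeast": 0.2, "salt": 0.1},
                    "yield": 14},
}

MARKUP = 3.0  # sell at 3x ingredient cost (hypothetical)

def shop_price(product):
    recipe = recipes[product]
    batch_cost = sum(ingredient_costs[name] * kg
                     for name, kg in recipe["uses"].items())
    return round(batch_cost / recipe["yield"] * MARKUP, 2)

price_before = shop_price("large_white")   # price at today's costs
ingredient_costs["flour"] = 0.60           # flour price rises...
price_after = shop_price("large_white")    # ...so the shop price follows
```

Running the repricing across every recipe whenever the float trended low is what kept the margins stable for two decades.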

CoeliacPantry e-Commerce Site (Yolanda Confectionery, predominantly PHP on Red Hat Linux 7.2). My wife and father’s business making bread and cakes for sufferers of Coeliac Disease (an intolerance of the gluten found in wheat products). I built the whole shebang from scratch, learning Linux from a book, then running it on a server in the Rackshack (later EV1Servers) datacentre in Texas, using Apache, MySQL and PHP. I bought Zend Studio to debug the code, and employed GPG to encrypt passwords and customer credit card details (the latter held off the server). Over 300 sales transactions and no chargebacks before we had to close the business due to the ill-health of our baker.

Volume/Value Business Line Mapping (Computacenter, VBA for Excel, MS-Windows). My Volume Sales part of the UK Software Business was accountable for all sales of software products invoiced for amounts under £100,000, or where the order was for a Microsoft SELECT license; one of my peers (and his team of Business Development Managers) focussed on Microsoft Enterprise Agreements and single orders of £100,000 or more. A simple piece of Visual Basic for Applications (VBA) code classified each software sale based on these criteria, and attributed it to the correct unit.
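The classification rule, as described, roughly reduces to the sketch below. The function and field names are mine, and the handling of edge cases (e.g. a large SELECT order) follows my reading of the text rather than the original VBA:

```python
# Attribute a software sale to the Volume or Enterprise unit, per the
# rule described above: Volume gets sub-£100k orders and SELECT
# licenses; Enterprise gets EAs and single orders of £100k or more.

def classify_sale(order_value_gbp, is_select_license=False,
                  is_enterprise_agreement=False):
    if is_enterprise_agreement:
        return "Enterprise"   # EAs always go to the Enterprise team
    if is_select_license or order_value_gbp < 100_000:
        return "Volume"       # SELECT licenses and sub-threshold orders
    return "Enterprise"       # single orders of £100,000 or more

examples = [
    classify_sale(45_000),                           # small order
    classify_sale(250_000),                          # large single order
    classify_sale(250_000, is_select_license=True),  # SELECT license
]
```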

MongoDB Test Code (self training: Python on OS X). I did a complete “MongoDB for Python Developers” course having never before used Python, but got to grips with it pretty quickly (it is a lovely language to learn). All my test code for the various exercises in the 6 week course was written in Python. For me, the main fascination was how MongoDB works by mapping its database file into the address space above its own code, so that the operating system’s own paging mechanism does all the heavy lifting. That’s exactly how we implemented Virtual Files for the TP Monitor for Birds Eye Walls back in 1981-2. With that, I’ve come full circle.
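The memory-mapping trick described – write to ordinary memory and let the operating system’s pager do the file I/O – can be illustrated in a few lines of Python, here against a throwaway temporary file rather than a real database file:

```python
import mmap
import os
import tempfile

# Create a small stand-in for a "database file": one zeroed page.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        # A plain memory write: the kernel's paging mechanism, not our
        # code, is responsible for getting this onto disk - the same
        # heavy lifting described above.
        mm[0:6] = b"record"
        snapshot = bytes(mm[0:6])

os.remove(path)
```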

Software Enabled (WordPress Network): My latest hack – the Ubuntu Linux Server running Apache, MySQL, PHP and the WordPress Network that you are reading these words from right now. It’s based on Digital Ocean servers in Amsterdam, and is part of my learning exercise in implementing systems on Public Cloud servers. It also feeds my current exercise of trying to simplify the engagement of AWS, Google Cloud Services and more in Enterprise Accounts, just like we did for DECdirect Software way back when. But that’s for another day.


Public Clouds, Google Cloud moves and Pricing

Google Cloud Platform Logo

I went to Google’s Cloud Platform Roadshow in London today, nominally to feed my need to rationalise the range of their Cloud offerings. This was primarily for my potential future use of their infrastructure, and to learn what I could of any nuances present. Every provider has them, and I really want to do a good job of simplifying the presentation for my own sales materials – but not to oversimplify to the point of making the advice unusable.

Technically overall, very, very, very impressive.

That said, I’m still in three minds about the way the public cloud vendors price their capacity. Google have gone to great lengths – they assure us – to simplify their pricing structure against industry norms. They cited industry prices coming down by 6-8% per year, while the underlying hardware follows Moore’s law much more closely – at 20-30% per annum lower.

With that, Google announced a whole raft of price decreases of between 35% and 85%, accompanied by simplifications and a commitment to:

  • No upfront payments
  • No Lock-in or Contracts
  • No Complexity

I think it’s notable that as soon as Google went public with that a few weeks back, they were promptly followed by Amazon Web Services, and more recently by Microsoft with their Azure platform. The outside picture is that they are all in a race, nip and tuck – well, all chasing the volume that is Amazon, but trying to attack from underneath, a usual industry playbook.

One graph came up showing that when a single virtual instance is fired up, it costs around 7c per hour if used for up to 25% of the month – after which the effective cost declines in a straight line. If that instance was up all month, it was suggested a discount of 30% would apply. That suggests a monthly cost of circa $36.
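The back-of-envelope arithmetic behind that circa $36 figure, assuming the quoted 7c/hour list price, a 730-hour month, and the full 30% sustained-use discount for an instance left running all month:

```python
# Assumptions from the talk: 7c/hour list price, ~730 hours in a month,
# and the full 30% discount for an instance that runs the whole month.
hourly_rate_usd = 0.07
hours_per_month = 730
sustained_use_discount = 0.30

monthly_cost = hourly_rate_usd * hours_per_month * (1 - sustained_use_discount)
# roughly $35.77 - i.e. the "circa $36" quoted above
```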

Meanwhile, the Virtual Instance (aka Droplet) running Ubuntu Linux and my WordPress Network on Digital Ocean, with 30GB flash storage and a 3TB/month network bandwidth, currently comes out (with weekly backups) at a fixed $12 for me. One third the apparent Google price.

I’m not going to suggest they are in any way comparable. The Digital Ocean droplet was pretty naked when I ran it up for the first time. I had to secure it very quickly (setting up custom iptables rules to close off the common ports, and ensuring secure shell only worked from my home fixed IP address), then spend quite a time configuring WordPress and the associated email infrastructure. But now it’s up, it’s there and the monthly cost is very predictable. I update it regularly and remove comment spam daily (ably assisted by a WordPress add-in). The whole shebang certainly doesn’t have the growth potential that Google’s offerings give me out of the box, but like many developers, I find it good enough for its intended purpose.

I wonder if Google, AWS, Microsoft and folks like Rackspace buy Netcraft’s excellent monthly hosting provider switching analysis. They all appear to be ignoring Digital Ocean (and certainly don’t appear to be watching their churn rates to the extent most subscription based businesses watch them, like a hawk), while that company is outgrowing everyone in the industry at the moment. They are the one place absorbing developers, and taking thousands of existing customers away from all the large providers. In doing so, they’ve recently landed a funding round from VC Andreessen Horowitz (aka “A16Z” in the industry) to continue to push that growth. Their key audience of Linux developers is the seed from which many valuable companies and services of tomorrow will likely emerge.

I suspect there is still plenty of time for the larger providers to learn from their simplicity – of both pricing, and the way in which pre-configured containers of common Linux-based software stacks (WordPress, Node.js, LAMP, email stacks, etc) can be deployed quickly and inexpensively. If, indeed, they see Digital Ocean as a visible threat yet.

In the meantime, I’m trying to build a simple piece of work that articulates how each of the key Public Cloud vendors’ services is structured, from the point of view of the time-pressured, overly busy IT Manager (the same as I did for the DECdirect Software catalogue way back when). I’m scheduled to have a review of AWS at the end of April to this end. A simple few spreads of comparative collateral appears to be the missing reference piece in the industry to date.

Focus on End Users: a flash of the bleeding obvious

Lightbulb

I’ve been re-reading Terry Leahy’s “Management in 10 Words”; Sir Terry was the CEO of Tesco until recently. The piece in the book’s introduction, relating to sitting in front of some Government officials, is quite funny – or would be, if it weren’t a blinding dose of the obvious that most IT organisations miss:

He was asked “What was it that turned Tesco from being a struggling supermarket, number three retail chain in the UK, into the third largest retailer in the World?”. He said: “It’s quite simple. We focussed on delivering for customers. We set ourselves some simple aims, and some basic values to live by. And we then created a process to achieve them, making sure that everyone knew what they were responsible for”.

Silence. Polite coughing. Someone poured out some water. More silence. “Was that it?” an official finally asked. And the answer to that was ‘yes’.

The book is a good read and one we can all learn from. Not least as many vendors in the IT and associated services industry are going in exactly the opposite direction compared to what he did.

I was listening to a discussion contrasting the different business models of Google, Facebook, Microsoft and Apple a few days back. The piece I hadn’t rationalised before is that of this list, only Apple have a sole focus on the end user of their products. Google and Facebook’s current revenue streams are in monetising purchase intents to advertisers, while trying to not dissuade end users from feeding them the attention and activity/interest/location signals to feed their business engines. Microsoft’s business volumes are heavily skewed towards selling software to Enterprise IT departments, and not the end users of their products.

One side effect of this is an insatiable need to focus on competition rather than on the user of your products or services. In times of old, it became something of a relentless joke that no marketing plan would be complete without the customary “IBM”, “HP” or “Sun” attack campaign in play. And they all did it to each other. You could ask where the users’ needs made it into these efforts, but of the many I saw, I don’t remember a single one featuring them at all. Every IT vendor was playing “follow the leader” (and ignoring the cliffs they might drive over while doing so), when all focus should have been on their customers instead.

The first object lesson I had was with the original IBM PC. One of the biggest assets IBM had was the late Philip “Don” Estridge, who went into the job of running IBM’s first foray into selling PCs having had personal experience of an Apple ][ personal computer at home. The rest of the industry was an outgrowth of a hobbyist movement trying to sell to businesses, while business owners craved having their business problems sorted simply and without unnecessary surprises. IBM’s use of Charlie Chaplin ads in those early years was a masterstroke. As an example, spot the competitive knockoff in this:

There isn’t one! It’s a focus on the needs of any overworked small business owner, where the precious assets are time and business survival. Trading blows trying to sell one computer over another is completely missing.

I still see this everywhere. I’m a subscriber to “Seeking Alpha”, which has a collection of both buy-side and sell-side analysts commentating on the shares of companies I’ve chosen to watch. More often than not, it’s a bit like sitting in an umpire’s chair during a tennis match; lots of noise, lots of to-and-fro, discussions on each move, and never far away from comparing companies against each other.

One of the most prescient things I’ve heard a technology CEO say came from Steve Jobs, when he told an audience in 1997 that “We have to get away from the notion that for Apple to win, Microsoft have to lose”. Certainly, from the time the first iPhone shipped onwards, Apple have had a relentless focus on the end user of their products.

Enterprise IT is still driven largely by vendor inspired fads, with little reference to end user results (one silly data point I carry in my head: I’m still waiting to hear someone at a Big Data conference mention a compelling business impact from one of their Hadoop deployments that isn’t related to log file or Twitter sentiment analysis. I’ve seen the same software vendor platform folks float into Big Data conferences for around 3 years now, and have not heard one yet).

One of the best courses I ever went on was given to us by Citrix, specifically on selling to CxO/board level in large organisations. A lot of it is being able to relate small snippets of things you discover around the industry (or in other industries) that may help influence their business success. One example that I unashamedly stole from Martin Clarkson was that of a new Tesco store in South Korea that he once showed to me:

I passed this on to the team in my last company that sold to big retailers. At least four board level teams in large UK retailers got to see that video and to agonise over whether they could replicate Tesco’s work in their own local operations. And I dare say the salespeople bringing it to their attention gained a good reputation for delivering interesting ideas that might help their client organisations’ future. That’s a great position to be in.

With that, I’ve come full circle from and back to Tesco. Consultative Selling is a good thing to do, and folks like IBM are complete masters at it; if you’re ever in an IBM facility, be sure to steal one of their current “Institute for Business Value” booklets (or visit their associated group on LinkedIn). They’re normally brim full of surveys and ideas to stimulate the thought processes of the most senior users running businesses.

We’d do a better job in the IT industry if we could replicate that focus on our end users from top to bottom – and not to spend time elbowing competitors instead. In the meantime, I suspect those rare places that do focus on end users will continue to reap a disproportionate share of the future business out there.

Tech Careers delivering results, slowed by silly Nuances

Caution: Does Stupid Things

Early in my career, I used to write and debug device drivers, which was a mix of reading code in octal/hex, looking at source listings, poring over crash dumps and trying to work out which code paths executed in parallel – each potentially in conflict with the others if you weren’t extra careful. Doing that for a time gets you used to being able to pattern match the strangest of things: telling what website someone is looking at from far across the room, reading papers on a desk upside down, or scrolling through a spreadsheet looking for obvious anomalies at pretty impressive speeds.

The other thing it gives you is “no fear” whenever confronted by something new, or on the bleeding edge. You get a confidence that whatever may get thrown at you, that you can work your way around it. That said, I place great value in stuff that’s well designed, and that has sensible defaults. That speeds your work, as you’re not having to go back and spell out in minute detail what every smallest piece of the action needs to do.

I’ve been really blessed with Analytics products like Tableau Desktop Professional, and indeed more recently with Google Spreadsheets and Google Fusion Tables. These are the sort of tools I use routinely when running any business, so that I can simply, in a data-based way, adjudge what is and isn’t working business-wise.

The common characteristic of these tools is that they all guess what you need to show most of the time, and don’t delay you by making you go through every piece of text, every line, every smallest detail with individual calls for font, font size, colo(u)r – or the need to cut a line graph at the last real data point, rather than (as one product does) throwing all future months onto a flat line once the plot runs into months with no data yet present.

There have been several times when I’ve wanted to stick that “Does Stupid Things” sign on Microsoft SQL Server Reporting Services.

I diligently prototyped (as part of a Business improvement project) a load of daily updated graphs/reports for a team of managers using Tableau Desktop Professional. However, I was told that the company had elected to standardise on a Microsoft Reporting product, sitting above a SQL Server based Datamart. In the absence of the company wanting to invest in Tableau Server, I was asked to repurpose the Tableau work into Microsoft SQL Server Reporting Services (aka “SSRS”). So I duly read two books, had a very patient programmer familiar with it show me the basics, set me up with Visual Studio and get me the appropriate Active Directory Access Permissions, and away I went. I delivered everything before finding there was no line Management role for me to go back to, but spent an inordinate amount of time in between dealing with a few “nuances”.

Consider this. I built a table to show each Sales Team Manager what their unit’s Revenue and Profit were, year to date, by month, customer or vendor. The last column of the table was a percentage profit margin, aka “Profit” divided by “Revenue”. The gotcha is that if something is given away for free, the (nominally negative) profit over zero revenue throws a divide by zero error. Simple to code around, methinks:

=iif(revenue>0, profit/revenue, 0)

Which, roughly translated, tells the software to calculate the percentage profit if revenue is greater than zero, and otherwise just stick zero in as the answer. So I rerun the view, and get #Error in all the same places – the same 13 examples of attempted divides by zero as before.

Brain thinks – oh, there must be some minuscule revenue numbers in the thousandths of pence in there, so change the formula to:

=iif(revenue>1,profit/revenue, 0)

so that the division is only attempted when revenue is over one, and the chance of throwing a divide by zero error is extremely remote – the giveaway would need to be mind-bogglingly huge to get anywhere close. Re-run the view. Result: the same 13 divide by zero #Error exceptions.

WTF? I look at where the exceptions are being thrown, and the revenue is zero, so the division shouldn’t even be attempted. So off to Google with “SQL Services Reporting iif divide by zero” I go. The answer came from a Microsoft engineer, who admitted that, nominally for performance reasons, both paths of the iif statement get executed at the same time as a shortcut – so that whichever half needs to give its result, it’s already populated and ready to use. So the literal effect of:

=iif(revenue>0, profit/revenue,0)

works like this:

  • Calculate 0 on the one side.
  • Calculate Profit/Revenue on the other.
  • If Revenue > 0, pick the second option, else pick the first.
  • If either side throws an exception (like divide by zero), blat the answer and substitute “#Error” instead.

The solution is to construct two nested “iif” statements in such a way that the optimiser doesn’t execute the division before the comparison with zero is made.
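To illustrate why the nested form works, here is a small Python emulation of SSRS’s eager IIF evaluation – the function and variable names are mine, not SSRS’s:

```python
def iif(condition, if_true, if_false):
    # Emulates the SSRS behaviour described above: because Python (like
    # SSRS's expression evaluator) computes both arguments before the
    # call, each branch is evaluated regardless of the condition.
    return if_true if condition else if_false

revenue, profit = 0, -50   # a free-of-charge giveaway line

# The naive guard still fails: profit / revenue is evaluated even though
# the condition is False - the same #Error the report produced.
try:
    margin = iif(revenue > 0, profit / revenue, 0)
except ZeroDivisionError:
    margin = "#Error"

# The nested workaround: the inner iif swaps in a safe denominator on
# the zero-revenue path, so the eagerly-evaluated division cannot fail.
safe_margin = iif(revenue == 0, 0, profit / iif(revenue == 0, 1, revenue))
```

In SSRS itself the equivalent expression is along the lines of `=IIF(revenue = 0, 0, profit / IIF(revenue = 0, 1, revenue))` – the outer IIF still returns zero for zero-revenue rows, while the inner one keeps the always-executed division safe.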

With that, I’m sure wearing underpants on your head has the same sort of perverse logic somewhere. This is simply atrociously bad software engineering.

Office for the iPad; has the train already left the station?

Meeting notes by @Jargonautical

One asset I greatly admire (and crave!) is the ability to communicate simply, but with panache, speed and reasoned authority. That’s a characteristic of compelling journalism, of good writing, and indeed of some of the excellent software products I’ve used. Not throwing in the kitchen sink, but being succinct and widening focus only to give useful context supporting the central brass tacks.

I’ve now gone 15 months without using a single Microsoft product. I spend circa £3.30/month for my Google Apps for Business account, and have generally been very impressed with Google Spreadsheet and Google Docs there. The only temporary irritant along the way was the inability of Google Docs to put page numbers in the Table of Contents of one 30 page document I wrote, offering only html links to jump to the content – which, while okay for a web document, was as much use as a cow on stilts for the printed version. But it keeps improving by leaps and bounds every month. That issue is now solved, and there’s now a wide array of free add-ons to do online review sign-offs, add bibliographies and more.

This week, I’ve completed all the lessons on a neat piece of Analytics software called Google Fusion Tables, produced by Google Research and available as a free Google Drive add-on. To date, it appears to do almost everything most people would use Tableau Desktop for, including map-based displays, but with a much simpler User Interface. I’m throwing some more heavyweight lifting at it during the next couple of days, including a peek at its Python-accessible API – which nominally allows you to daisy chain it in as part of an end-to-end business process. The sort of thing Microsoft had Enterprises doing with VBA customisations way back when.

My reading is also getting more focussed. I’ve not read a newspaper regularly for years, dip into the Economist only once or twice every three months, but instead go to other sources online. The behaviour is to sample less than 10 podcasts every week, some online newsletters from authoritative sources, read some stuff that appears in Medium, but otherwise venture further afield only when something swims past in my Twitter stream.

This morning, this caught my eye, as posted by @MMaryMcKenna. Lucy Knight (@Jargonautical) had posted her notes made during a presentation Mary had made very recently. Looking at Lucy’s Twitter feed, there were some other samples of her meeting note taking:

Meeting Notes: Minimal Viable Product

Meeting Notes Cashflow Modelling in Excel

Meeting Notes: Customer Service

Aren’t they beautiful?

Lucy mentions in her recent tweets that she does these on an iPad Mini using an application called GoodNotes, which is available for the princely sum of £3.99 here (she also notes that she uses a Wacom Bamboo stylus – though a friend of hers manages with a finger alone). Short demo here. I suspect my attempts using the same tool, especially in the middle of a running commentary, would pale in comparison to her examples here.

With that, there are reports circulating today that the new Microsoft CEO, Satya Nadella, will announce Microsoft Office for iOS this very afternoon. I doubt that any of the Office components will put out work of the quality of Lucy’s iPad Meeting Notes anytime soon, but am open to being surprised.

Given we’ve had over three years of getting used to having no useful Microsoft product (outside of Skype) on the volume phone or tablet devices here, I wonder if that’s a route back to making money on selling software again, or supporting Office 365 subscriptions, or a damp squib waiting to happen.

My bet’s on the middle of those three by virtue of Microsoft’s base in Large Enterprise accounts, but like many, I otherwise feel it’s largely academic now. The Desktop software market has now been fairly well bombed (by Apple and Google) into being a low cost conduit to a Services equivalent instead. The Server software market will, I suspect, head the same way within 2 years.