Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, be given a hunch about what may be afflicting her health, and then endure a succession of “oh, that didn’t work – try this instead” visits over several weeks. I just wonder how much cost could be squeezed out of the process – and how many secondary conditions avoided – if the root causes were much easier to identify reliably. I then wonder whether there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi enabled, low end Bluetooth sensor hubs aka the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake), every day since June 2002 – a precursor to a future where devices keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self-service health. My Apple Watch has a year’s worth of heart rate data. But what other signals, if they were available, would be far more compelling for identifying the root causes of a wider variety of health complaints?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence complete Genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide – the A/T or C/G base pairs) runs to some 3GB. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases, hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. It is notable that Amazon Web Services and Google employees are participating in this effort.
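
To make the “distributed query” idea concrete, here is a minimal sketch of the kind of federated question such an API allows you to ask: does any co-operating organisation hold samples carrying a given variant, without ever copying their 3GB-per-person datasets around? The endpoint URLs, parameter names and response field below are hypothetical placeholders, not the actual GA4GH schema.

```python
# Fan one variant query out to several co-operating sites and sum the matches.
# URLs, parameters and the "sampleCount" field are illustrative assumptions only.
import requests

COOPERATING_SITES = [
    "https://genomes.example-uni.ac.uk/api/variants/search",      # hypothetical
    "https://research.example-hospital.org/api/variants/search",  # hypothetical
]

def count_carriers(chromosome: str, position: int, alt_base: str) -> int:
    """Ask every site how many of its samples carry the variant, and total them."""
    total = 0
    for url in COOPERATING_SITES:
        resp = requests.get(url, params={
            "referenceName": chromosome,
            "start": position,
            "alternateBases": alt_base,
        }, timeout=30)
        resp.raise_for_status()
        total += resp.json().get("sampleCount", 0)  # hypothetical response field
    return total

if __name__ == "__main__":
    print(count_carriers("13", 32315474, "T"))  # illustrative coordinates only
```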

However, I wonder if we’re missing a big and potentially just as important data asset: the profile of bacteria that everyone is dependent on. We are each home to approx. 10 trillion human cells and some 100 trillion microbial cells in and on our bodies; by cell count, you are 90% not you.

While our human DNA is 99.9% identical to that of any person next to us, the profiles of our MicroBiomes are typically only 10% similar; our age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever changing set of weather conditions that either influence our health – or are leading indicators of something being wrong – or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – our skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently from decisions taken upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays stick thin; the composition of microbes in their gut is a significant factor.

Meanwhile, the costs of DNA sequencing and compute power have dropped to the point where analysis of our microbe ecosystems has fallen from some $100M a decade ago to around $100 today. It should continue on that downward path to a level where personal, regular sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed back discoveries to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on Human DNA Genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of less than 4,000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct wholesale analyses – to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at the University of California, San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at King’s College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing and compute batch process.

In parallel, VC-funded company uBiome provides the sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other 4 area samples we’ve submitted), though to the best of my knowledge they are not currently sharing the captured data. That said, the analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over and under represented compared to all samples they’ve received to date), but they do not currently communicate any health related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than 83% of folks they’ve sampled to date, which is an analogue for being more healthy than most; those bacteria that are over represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella over a three-month period (the green in the top of this picture; the x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population is displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this will give an indication of the relative species population growth rates from otherwise static data.
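
To illustrate that growth-rate trick: actively dividing bacteria carry more copies of the DNA near their replication origin (the “start”) than near the terminus (the “end”), so the ratio of read coverage between the two hints at how fast each species is growing, even from a single static sample. The sketch below is purely illustrative and assumes we already have per-position read depth for each species; the numbers are made up.

```python
# Rough sketch: origin/terminus coverage ratio as a growth signal.
def growth_signal(coverage: list[float]) -> float:
    """Ratio of mean coverage near the origin to mean coverage near the terminus."""
    window = max(1, len(coverage) // 10)
    origin = sum(coverage[:window]) / window
    terminus = sum(coverage[-window:]) / window
    return origin / terminus if terminus else float("nan")

# Illustrative only: a fast grower shows a clear origin-to-terminus skew.
fast_grower = [30, 28, 27, 25, 22, 20, 18, 16, 15, 14]
static_species = [15, 15, 14, 15, 16, 15, 15, 14, 15, 15]
print(growth_signal(fast_grower))     # well above 1.0 -> actively replicating
print(growth_signal(static_species))  # close to 1.0 -> little net growth
```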

I am not aware of anyone offering a faster turnaround service, nor one that can map several successive, time-gapped samples, let alone one that can convey health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals in isolating indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed to provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome ask to be completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end to end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to amplify/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are happy as Larry in the nitrogen of the gut, but perish with exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “Chime”) pattern matching code take advantage of cloud spot instances, including Hadoop or Spark, to speed the turnaround time from sequencing reads to the resulting species type:volume value pairs?
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?

Next Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousands of puzzles into one pile and then having to reconstruct them – and count the number of pieces belonging to each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis to end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
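
As a hand-wavy sketch of how that counting stage could be spread over cheap spot instances with Spark: classify each read against a reference set of 16S marker sequences, then reduce the per-species counts across the cluster. The classify() step here is a trivial stand-in for what QIIME’s pipeline actually does, and the S3 path, toy reference table and one-read-per-line input format are all assumptions for illustration.

```python
# Sketch only: distribute read classification and counting across a Spark cluster.
from pyspark import SparkContext

sc = SparkContext(appName="16S-type-count-sketch")

reference = {"ACGTACGT": "Bacteroides", "TTGACCAA": "Faecalibacterium"}  # toy reference
broadcast_ref = sc.broadcast(reference)

def classify(read: str) -> str:
    """Naive stand-in for real 16S matching: look for a known marker k-mer."""
    for kmer, species in broadcast_ref.value.items():
        if kmer in read:
            return species
    return "unclassified"

counts = (
    sc.textFile("s3://example-bucket/reads.txt")   # assumed: one read per line
      .map(lambda read: (classify(read), 1))
      .reduceByKey(lambda a, b: a + b)             # species -> read count
)
print(counts.collect())
```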

Nanopore sequencing is another route, which has much longer read lengths but is much more error prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits its utility for analysing bacteria samples in our use case. It is much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence for the time being, we’re reliant on trying to make an industrial scale, lab based batch process turn around data as fast as we are able – but having a network accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is working out how to fund the build of the service, to map its likely cost profile as technology/process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing short cuts. Dogs have been trained to be amazingly accurate at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket-sized spectrometer can detect the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we only get to eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is some of the common terminology between MicroBiome conditions and terms I once heard used in Chinese Herbal Medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago): the concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions), followed by consumption or application of specific plant matter that puts the body’s bacterial population back to operating norms.

Lingzhi Mushroom

Wild mushroom “Lingzhi” in China: cultivated in the far east, found to reduce Obesity

We’ve started to discover that some of the plants and herbs used in Chinese Medicine do have symbiotic effects on your bacterial population for the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

Microbiomes and a glimpse of doctors becoming a small niche

Microbiomes, Gut and Spot the Salmonella

When I get up in the morning, I normally follow a path on my iPad through email, Facebook, LinkedIn, Twitter, Google+, Feedly (for my RSS feeds) and Downcast (to update my Podcasts for later listening). This morning, Kevin Kelly served up a comment on Google+ that piqued my interest, and that led to a long voyage of discovery – much to my wife’s disgust, as I quoted gory details about digestive systems at the same time she was trying to eat her breakfast. He said:

There are 2 reasons this great Quantified Self experiment is so great. One, it shows how important your microbial ecosystem is. Two, it shows how significant DAILY genome sequencing will be.

He then gave a pointer to an article about Microbiomes here.

The Diet Journey

I’ve largely built models based on innocent attempts to lose weight, dating back to late 2000 when I tried the Atkins diet. That largely stalled after 3 weeks and one stone of loss. I was then fairly liberated in 2002 by a regime at my local gym, when I got introduced (as part of a six week program) to the website of Weight Loss Resources. This got me into the habit of recording my food intake and exercise very precisely, translating branded foods and weights into daily intake of carbs, protein and fat. That gave me my calorie consumption and nutritional balance, tracked alongside weekly weight readings. I’ve kept that data flowing for over 12 years now, and it continues to this day.

Things I’ve learnt along the way are:

  • Weight loss is heavily dependent on me consuming fewer calories than my Basal Metabolic Rate (BMR), and at the same time keeping energy derived from carbs, protein and fat at a specific balance (50% from carbs, 20% protein, 30% fat)
  • 1g of protein is circa 4.0 Kcals, 1g of carbs around 3.75 Kcals, and 1g of fat around 9.0 Kcals (see the short worked example after this list)
  • Muscle is considerably denser than fat, so it weighs more for the same volume
  • There is a current fixation at gyms with upping your muscle content at first, nominally to increase your energy burn rate (even at rest)
  • The digestive system is largely first in, first out; protein is largely processed in acidic conditions, and carbs later down the path in alkaline equivalents. Fat is used as part of both processes.
  • There are a wide variety of symbiotic (opposite of parasite!) organisms that assist the digestive process from beginning to end
  • Weight loss is both heat and exhaust. Probably other forms of radiation too, given we are all like a light bulb in the infrared spectrum (I always wonder how the SAS manage to deploy small teams in foreign territory and remain, for the most part, undetected)
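
The worked example promised above: a small sketch of the daily bookkeeping this implies – convert grams of carbs, protein and fat into calories, check the split against the 50/20/30 target, and compare the total with a BMR figure. All the numbers are illustrative, not a recommendation.

```python
# Convert a day's macros into kcal, show the split, and compare against BMR.
KCAL_PER_GRAM = {"carbs": 3.75, "protein": 4.0, "fat": 9.0}

def day_summary(grams: dict, bmr_kcal: float) -> None:
    kcals = {k: g * KCAL_PER_GRAM[k] for k, g in grams.items()}
    total = sum(kcals.values())
    for k, v in kcals.items():
        print(f"{k:8s} {v:7.1f} kcal ({v / total:5.1%})")
    balance = "deficit" if total < bmr_kcal else "surplus"
    print(f"total    {total:7.1f} kcal vs BMR {bmr_kcal:.0f} ({balance} of {abs(total - bmr_kcal):.0f} kcal)")

day_summary({"carbs": 290, "protein": 115, "fat": 78}, bmr_kcal=2364)  # illustrative day
```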

I’ve always harboured a suspicion that taking antibiotics has an indiscriminate bombing effect on the population of microbiomes there to assist you. Likewise the effect of what used to be my habit of drinking (very acidic) Diet Coke. But I’ve never seen anyone classify the variety and numbers of microbiomes, and track this over time.

The two subjects of that article had the laboratory resources to examine samples of their own saliva and their own stool samples, and to map things over time. Fascinating to see what happened when one of them suffered Salmonella (the green in the above picture), and the other got “Delhi Belly” during a trip abroad.

The links around the article led to other articles in National Geographic, including one where the author reported much wider analysis of the Microbiomes found in 60 different peoples belly buttons (here) – he had a zoo of 58 different ones in his own. And then to another article where the existence of certain microbiome mutations in the bloodstream were an excellent leading indicator of the presence of cancerous tumours in the individual (here).

Further dips into various Wikipedia articles cited examples of microbiome populations showing up in people suffering from various debilitating illnesses such as ME, Fibromyalgia and Lyme disease, in some instances having a direct effect on driving imbalances that cause depression. Separately, they noted that what you eat often has quite an effect in altering the relative sizes of parts of the microbiome population in short order.

There was another article that suggested new research was going to study the Microbiome Zoo present in people’s armpits, but I thought that an appropriate time to do an exit stage left on my reading. Ugh.

Brain starts to wander again

Later on, I reflected for a while on how I could apply some of the skills I’ve got to building up data resources – at least should suitable sensors be able to measure, sample and sequence microbiomes systematically every day. I have the mobile phone programming, NoSQL database deployment and analytics skills. But what if we had sensors that everyone could have on them 24/7 that could track the microbiome zoo that is you (internally – and I guess externally too)? Load the data resources centrally, and I suspect the Wardley Map of what is currently the NHS would change fundamentally.

I also suspect that age-old Chinese Medicine will demonstrate its positive effects on further analysis. It was about the only thing that solved my wife’s psoriasis on her hands and feet; she was told about the need to balance yin/yang and remove “heat” to put things back to normal, which was achieved by consumption of various herbs and vegetation. It would have been fascinating to see how the profile of her microbiomes changed during that process.

Sensors

I guess the missing piece is the ability to have sensors that can help both identify and count types of microbiomes on a continuous basis. It looks like a laboratory job at the moment. I wonder if there are other characteristics or conditions that could short cut the process. Health apps about to appear from Apple and Google initiatives tend to be effective at monitoring steps and heart rate. There looks to be provision for sensing blood glucose levels non-invasively by shining infrared light on certain parts of the skin (the inner elbow is a favourite); meanwhile Google have patented contact lenses that can measure glucose levels in the blood vessels of the wearer’s eyes.

The local gym has a Boditrax machine that fires an electrical signal up one foot and senses the signal received back in the other, and can report body water, muscle and fat content. Not yet small enough for a mobile phone. And Withings produce scales that can report weight back to the user’s handset over Bluetooth (I sometimes wonder if the jarring of the body as you tread could let a handset’s sensors deduce approximate weight, but that’s for another day).

So, the mission is to see if anyone can produce sensors (or an edible, communicating pill) that can effectively work, in concert with someone’s phone and the interwebs, to reliably count and identify biome mixes and to store these for future analysis, research or notification purposes. Current research appears to be in monitoring biome populations in:

  1. Oral Cavity
  2. Nasal
  3. Gastrointestinal Organs
  4. Vaginal
  5. Skin

each with their own challenges for providing a representative sample surface sufficient to be able to provide regular, consistent and accurate readings – if indeed we can miniaturise or simplify the lab process reliably. The real progress will come when we can do this and large populations can be sampled – and cross referenced with any medical conditions that become apparent in the data provider(s). Skin and the large intestine appear to have the most interesting microbiome profiles to look at.

Long term future

The end result – if done thoroughly – is that the skills and error rates of GP-provided treatment would become largely relegated, just as happened for farm workers in the 19th century (which went from 98% of the population working the land to less than 2% within 100 years).

With that, I think Kevin Kelly is 100% correct in his assessment – that the article shows how significant DAILY genome sequencing will be. So, what do we need to do to automate the process, and make the fruits of its discoveries available to everyone 24/7?

Footnote: there look to be many people attempting to automate subsets of the DNA/RNA identification process. One example highlighted by MIT Technology Review today is this.

12 years of data recording leads to a dose of the obvious

Ian Waring Weight Loss Trend Scatter Graph

As mentioned yesterday, I finally got Tableau Desktop Professional (my favourite Analytics software) running on Amazon Workspaces – deployed for all of $35 for the month instead of having to buy my own Windows PC. With that, a final set of trends to establish what I do right when I consistently lose 2lbs/week, based on an analysis of my intake (Cals, Protein, Carbs and Fat) and Exercise since June 2002.

I marked out a custom field that reflected the date ranges on my historical weight graph where I appeared to consistently lose, gain or flatline. I then threw together all sorts of scatter plots (like the one above, plotting my intake over long periods where I had consistent weight losses) to ascertain what factors drove the weight changes I’ve seen in the past. This was nominally to settle on a strategy going forward to drop to my target weight as fast as I could in a sustainable fashion. Historically, this has been 2lbs/week.
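
For anyone wanting to reproduce that exercise outside Tableau, a small pandas sketch of the same idea: tag each logged day with the phase it falls in (losing / gaining / flatlining), then scatter-plot intake by phase. The file name, column names and date ranges below are hypothetical stand-ins for my own log.

```python
# Sketch: label logged days by weight-trend phase and scatter-plot intake per phase.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("food_log.csv", parse_dates=["date"])  # assumed: date, kcal, carbs_g, protein_g, fat_g

phases = [
    ("2002-06-01", "2003-05-31", "losing"),      # illustrative date ranges only
    ("2003-06-01", "2008-12-31", "flatlining"),
    ("2009-01-01", "2010-06-30", "gaining"),
]
log["phase"] = "unlabelled"
for start, end, label in phases:
    log.loc[log["date"].between(start, end), "phase"] = label

for label, group in log.groupby("phase"):
    plt.scatter(group["carbs_g"], group["kcal"], label=label, s=8)
plt.xlabel("carbs (g/day)")
plt.ylabel("calories (kcal/day)")
plt.legend()
plt.show()
```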

My protein intake had zero effect. Carbs and Fat did, albeit they tracked the effect of my overall Calorie intake (whether in weight or in the number of Calories present in each – 1g of Carbs = 3.75 Kcals, and 1g of Fat = 9 Kcals; 1g of Protein is circa 4 Kcals). The WeightLossResources recommended split of Kcals from the mix to give an optimum balance in their view (they give a daily pie-chart of Kcals from each) is 50% Carbs, 30% Fat and 20% Protein.

So, what are the take-homes having done all the analysis?

Breathtakingly simple. If I keep my food intake, less exercise calories, at circa 2300-2350 calories per day, I will lose a consistent 2lbs/week. The exact balance between carbs, protein and fat intake doesn’t matter too materially, as long as the total is close, though my best ever long term loss had me running things close to the recommended balance. All eyes on that pie chart on the WLR web site as I enter my food then!

The stupid thing is that my current BMR (Basal Metabolic Rate – the minimum level of energy your body needs at rest to function effectively, including your respiratory and circulatory organs, neural system, liver, kidneys and other organs) is 2,364; before the last 12 week Boditrax competition at my gym, it was circa 2,334 or so. Increased muscle through lifting some weights put this up a little.

So, the basic message is to keep what I eat, less the calories from any exercise, at the same level as my BMR – which in turn will track down as my weight goes down. That sort of guarantees that any exercise I take over and above what I log – which is only long walks with Jane and gym exercise – will come off my fat reserves.

Simple. So, all else being equal, put less food in my mouth, and I’ll lose weight. The main benefit of 12 years of logging my intake is that I can say authoritatively – for me – the levels at which this is demonstrably true. And that should speed my arrival at my optimum weight.

New Learnings, 12 week Boditrax Challenge, still need Tableau

The Barn Fitness Club Cholsey

One of the wonderful assets at my excellent local gym – The Barn Fitness Club in Cholsey – is that they have a Boditrax machine. This looks like a pair of bathroom scales with metal plates where you put your feet, hooked up to a PC. It bounces a small charge through one foot and measures the signal that comes back through the other. Measuring your weight at the same time and having previously been told your age, it can then work out the composition of your body in terms of fat, muscle, water and bone. The results are dropped on the Boditrax web site, where you can monitor your progress.

For the last 12 weeks, the gym has run a 12 week Boditrax challenge. Fortunately, I pay James Fletcher for a monthly Personal Training session there, where he takes readings using this machine and adjusts my 3x per week gym routine accordingly. The end results after 12 weeks have been (the top graph shows my weight progress, the bottom my composition changes):

Boditrax Challenge Ian W Weight Tracking

Boditrax Challenge Ian W Final Results

The one difference from previous weight loss programmes I’ve followed is the amount of weight work I’d been given this time around. I was always warned that muscle weighs considerably more than fat, so to try to keep to cardio work. The thinking these days appears to be to increase your muscle mass a little, which increases your metabolic rate – burning more calories, even at standstill.

The one thing I’ve done since June 3rd 2002 is to tap my food intake and exercise daily into the excellent Weight Loss Resources web site. Hence I have a 12 year history of exact figures for fat, carbs and protein intake, weight and the corresponding weight changes throughout. I used these in a recent Google course on “Making Sense of Data”, which used Google Fusion Tables, trying to spot what factors led to a consistent 2lbs/week weight loss.

There are still elements of the storyboard I need to fit in to complete the picture, as Fusion Tables can draw a scatter plot okay, but can’t throw a weighted trend line through that cloud of data points. This would give me a set of definitive stories to recite; what appears so far is that I make sustainable 2lbs/week losses below a specific daily calorie value if I keep my carbs intake down at a specific level at the same time. At the moment, I’m tracking at around 1lb/week, which is half the rate I managed back in 2002-3 – so I’m keen to expose the exact numbers I need to follow. Too much, no loss; too little, the body goes into a siege mentality – hence the need for a happy medium.

I tried to get a final fix on the exact nett intake and carb levels in Google Spreadsheets, which isn’t so adept at picking data subsets with filters – so you end up having to create a spreadsheet for each “I wonder if” question. So, I’ve largely given up on that until I can get my mitts on a Mac version of Tableau Desktop Professional, or can rent a Windows Virtual Desktop on AWS for $30 for 30 days to do the same on its Windows version. Until then, I can see the general picture, but there are probably many data points from my 3,800+ days of sampled data that plot on top of each other – hence the need for a weighted trend line in order to expose the definitive truth.
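
As it happens, the weighted trend line itself is only a few lines of Python if you don’t mind leaving the spreadsheet world. A sketch, assuming hypothetical column names for net daily calories and weekly weight change: collapse the overplotted points and weight each one by how many days sit on it, then fit a line.

```python
# Sketch: weighted linear trend through an overplotted scatter of logged days.
import numpy as np
import pandas as pd

log = pd.read_csv("food_log.csv")  # assumed columns: net_kcal, weekly_change_lb (negative = loss)

# Collapse coincident points; the count of days at each point becomes its weight.
grouped = log.groupby(["net_kcal", "weekly_change_lb"]).size().reset_index(name="days")

slope, intercept = np.polyfit(
    grouped["net_kcal"], grouped["weekly_change_lb"], deg=1, w=grouped["days"]
)
print(f"weekly change ≈ {slope:.4f} * net_kcal + {intercept:.2f}")
print(f"net intake for a 2 lb/week loss ≈ {(-2 - intercept) / slope:.0f} kcal/day")
```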

The nice thing about the Boditrax machine is that it knows your Muscle and Fat composition, so can give you an accurate reading for your BMR – your Basal Metabolic Rate. This is the minimum level of energy your body needs when at rest to function effectively including your respiratory and circulatory organs, neural system, liver, kidneys, and other organs. This is typically circa 70% of your daily calorie intake, the balance used to power you along.

My BMR according to the standard calculation method (which assumes a ‘typical’ % muscle content) runs about 30 kcals under what Boditrax says it actually is. So, I burn an additional 30 Kcals/day due to my increased muscle composition since James Fletcher’s training went into place.
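
To illustrate why a body-composition reading shifts the BMR estimate: the commonly used “standard” formulas (such as Mifflin-St Jeor, below) assume a typical muscle/fat mix, whereas lean-mass-based formulas (such as Katch-McArdle) move with measured muscle. Whether Boditrax uses exactly the latter is an assumption on my part, and the input figures are made up for illustration.

```python
# Two BMR estimates: composition-agnostic vs lean-mass-based.
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, male: bool = True) -> float:
    """Standard estimate that assumes a 'typical' body composition."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)

def bmr_katch_mcardle(lean_mass_kg: float) -> float:
    """Estimate driven by measured lean body mass (what an impedance reading enables)."""
    return 370 + 21.6 * lean_mass_kg

weight_kg, height_cm, age = 90.0, 180.0, 50   # illustrative inputs only
lean_mass_kg = weight_kg * 0.75               # e.g. a 25% body fat reading

print(bmr_mifflin_st_jeor(weight_kg, height_cm, age))  # ignores muscle content
print(bmr_katch_mcardle(lean_mass_kg))                 # rises as measured muscle rises
```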

Still a long way to go, but heading in the correct direction. All I need now is that copy of Tableau Desktop Professional so that I can work out the optimum levels of calorie and carbs intake to maximise the long term, relentless loss – and to ensure I track at those levels. In the meantime, i’ll use the best case I can work out from visual inspection of the scatter plots.

I thoroughly recommend the Barn Fitness Club in Cholsey, use of their Boditrax machine and regular air time with any of their Personal Trainers. The Boditrax is only £5/reading (normally every two weeks) and an excellent aid to help achieve your fitness goals.

Just waiting to hear the final result of the 12 week Boditrax challenge at the Club – and to hope i’ve done enough to avoid getting the wooden spoon!

Boditrax Challenge Home Page

 

In the meantime, it’s notable that my approx nett calorie intake level (calories eaten less exercise calories) to lose 2lbs/week appears to be my current BMR – which sort of suggests the usual routine daily activity I don’t log (walking around the home, work or shops) is sufficient to hit the fat reserves. An hour of time with Tableau on my data should be enough to confirm if that is demonstrably the case, and the level of carbs I need to keep to in order to make 2lbs/week a relentless loss trend again.
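
A quick sanity check of that observation, using the common rule of thumb that a pound of body fat is roughly 3,500 kcal: if net logged intake sits at BMR and 2lbs/week still comes off, the unlogged daily activity must be burning roughly the difference. Figures are illustrative.

```python
# Implied unlogged daily burn if weight still drops while net intake equals BMR.
KCAL_PER_LB_FAT = 3500  # widely used approximation

weekly_loss_lb = 2.0
daily_deficit = weekly_loss_lb * KCAL_PER_LB_FAT / 7
print(f"implied unlogged burn ≈ {daily_deficit:.0f} kcal/day")  # ≈ 1000 kcal/day
```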

So, how do Policing Statistics work?

Metropolitan Police Sign

I know I posted a previous note on the curious measures being handed down to police forces to “reduce crime”. While the police may be able to influence it slightly, in the final analysis they only have direct control over one part of the value chain – that of producing the related statistics (I really don’t think they commit all the crimes on which they are measured!). The much longer post was this: http://www.ianwaring.com/2014/04/05/police-metrics-and-the-missing-comedy-of-the-red-beads/

I’ve just had one of my occasional visits back to “Plumpergeddon” – not recommended in work environments for reasons that will become apparent later – which documents the ebbs and flows of the legal process following a mugging and theft (of a MacBook and a wallet containing a debit card) in London in November 2011. It is, to put it mildly, a shocking story.

The victim of the crime – and owner of the MacBook – had installed a piece of software on his machine that – once he’d enabled a tick box on an associated web site – started to “phone home” at regular intervals, taking pictures of the person using the computer and shots of what was on the screen at the same time, both tagged with its exact geographic location. He ended up with over 6,000 pictures, including some which showed sale of goods on eBay that matched purchases made on his stolen debit card.

I’m not sure exactly how the flow of incidents gets rolled up into the crime statistics that the Met publish, but having done a quick trawl through the Plumpergeddon Blog, starting at the first post here and (warning: ever more NSFW as the story unfolds, given what the user started paying for and viewing!) moving up to the current status 29 pages later, the count looks like:

  • 1 count of mugging
  • 1 theft of a MacBook Pro Personal Computer, plus Wallet containing Company Debit Card
  • 2 counts of obtaining money (from a cashpoint with a stolen card) by deception
  • 9 counts of obtaining goods (using a stolen debit card, using a PIN) by deception
  • 2 counts of obtaining goods (using a stolen debit card, signing for them) by deception
  • 11 counts of demonstrably selling stolen goods

So, I make that 26 individual crime incidents.

The automated data collection started phoning home within 4 weeks of the theft (it took one shot of the user, a screenshot, and reported location and connection details every 10 minutes of active use). He ended up assembling circa 6,000 pieces of evidence (including screenshots of the person using his MacBook, and screenshots documenting the disposal of the goods purchased with the stolen card using three separate accounts on eBay). All preserved with details of the physical location of the MacBook and the details of the WiFi network it was connected to.

Many ebbs and flows along the way, but the long and short of it was that the case was formally dropped “for lack of evidence”. This was then followed by a brief piece of interest when some media activity started picking up, but it then sort of ebbed away again. In May 2013, news came back as “The case file is back with the officer, and the case is closed pending further leads.”

Four weeks ago, the update said:

I Am No Longer the Victim. Apparently. I was told last night in a police station by a Detective Constable that because the £7,000 I was defrauded of was returned by my bank after 3-4 weeks, and the laptop was replaced by my insurance company after 4 months, I am no longer considered the victim for either of those crimes. I was told that my bank and insurance company are now the victims.

I assume this must mean that when a victim of an assault receives compensation, the attackers subsequently go free? Any UK based lawyers, police or other legal types care to shed some light on this obscure logic?

Cynical little me suspects i’m being told this because the police don’t want to pursue charges over those crimes, even though (as most readers will know and as I said in my previous post) I’ve done practically all the legwork for them.

I must admit to being completely appalled at a case like this. Given the amount of evidence submitted, it should have solved a string of fraudulent transactions and the matching/associated sale of stolen goods, which could have incremented the Metropolitan Police “crimes solved” counter like a jackpot machine: 26 crimes solved, with all the evidence-collecting leg work already done for them.

So, where does this case sit in the Metropolitan Police statistics? Does it count as all 26 incidents “solved” because the insurance company have paid out and the debit card company have reversed the fraudulent transactions? And above all, is the Home Secretary really satisfied that she’s seeing appropriate action under her “reducing crime” objective here?

The guy is still free and on the streets without any intervention since the day the crimes were committed. Free to become the sort of one-man crime wave that Bill Bratton managed to systematically get off the streets in New York during his first tenure as Police Chief there (I recall from his book The Turnaround that 70 individuals in custody completely changed the complexion of life in that City). Big effect when you can systematically follow up to root causes, as he did then.

However, back in London, I wonder how this string of events is mapped onto the crime statistics being widely published and cited. Any ideas?