Back in 2014 I sat in President Peña Nieto’s chair in the Mexico City security services bunker, as a guest of the Asuntos Internos of the Policia Federal. I was in the country to talk to senior figures about crime statistics, how they are manipulated, and the importance of challenging that practice. Looking around the impressive screens of information, my eye was drawn to the air traffic control display. It showed every plane in the air over Mexico – using precise geolocation data, because even a slight failure could result in disaster and the deaths of Mexican citizens or foreign nationals. The screen struck me because next to it was a display of military and police deployments across the country: resources directed to areas on the strength of Mexican data on violent crime and homicide.
What if that data wasn’t accurate? What if the operations were in the wrong place? The stark truth is, this was no less dangerous to citizens and the state than inaccuracy in the flight data.
In the United Kingdom, crime data had been manipulated and corrupted for decades, and police deployments and resource allocations had been misplaced with disastrous effect. In London in 2011, for example, riots gripped the capital for five days, destroying hundreds of millions of pounds’ worth of property, causing injury, and wasting human life. The police could not control the situation due to a lack of available staff. This situation arose because data had been manipulated to justify reductions in the number of officers on duty, which directly affected the ability to recall them in an emergency.
My journey as a police whistleblower began a year before this, when I saw the corrupt data for the first time and predicted what would happen, but my warnings were ignored.
“it was clear to me that the extortion, rape, violence and homicide rates clearly identified genuine corruption at a state level in some areas.”
As a police officer I had one duty, to serve the public without fear or favour – to do what was right by them no matter what the personal cost – and without being frightened to act even against my superiors if the need arose.
In 2013 this is precisely what I did, triggering a parliamentary inquiry into the manipulation of crime data records which rocked the United Kingdom. Rape, serious violence, and serious acquisitive crimes were being artificially deflated by burying them amongst lesser offences, or by not recording them at all. In some cases recorded crimes were effectively deleted. This was a national issue: millions of victims let down by the very people there to protect them, driven by government-initiated targets, locally set targets, and a promotion system which required examples of ‘success’.
The true cost of such action wasn’t monetary: it was the impact on the lives of the victims the public authorities had failed.
The price was also trust.
The true gift of the inquiry, of my whistleblowing, was twofold. Firstly, the long overdue truth was delivered for all to see and, secondly, it presented an opportunity to restore faith in the system.
In the Autumn of 2016 I was invited back to Mexico City, this time to review twenty years’ worth of crime data and try to establish whether similar problems could be uncovered.
The final report was a challenging read for my Mexican hosts, but I asked them to keep in mind my time in the bunker: not to leave themselves with the same ‘what if’ I had seen, and to digest the potential to make real, lasting, positive change for every Mexican citizen. Politically, I have always had to explain the manipulation of crime figures in these terms: an increase in reported offences means the public trusts the system of law and order, and that translates into electoral capital which can be bought by deed alone.
In summary, I made the following findings:
Crime in Mexico is significantly under-recorded and under-reported
The system of reporting crime presents repeated access barriers to victims and causes an unacceptable level of attrition at each stage
Mexican crime data shows significant statistical deviance over time and risks across specific trigger data are clearly identifiable
Detailed analysis shows clear signs of data manipulation across all states and all crime types
“The levels of deviance from statistical norms in the most serious categories were as high as 514% (homicide) meaning that unintentional homicide was likely to be a tool for disguising true murder rates. Rape fared not much better, peaking at over 300%.”
The team I worked with, based in a pyramid-styled office suite near the heart of historic and vibrant Coyoacan, had already carried out extensive analytical work by the time I was brought on board and, using economics-based statistical modelling, had identified clear anomalies in Mexico’s recorded crime data.
In particular, clear patterns were visible in the intentional and unintentional homicide rates, independently verifiable when set against state health data. The patterns undeniably indicated the manipulation of crime recording and, though the team’s investigations were sound, they lacked practical policing experience.
It seemed highly likely to me from the outset, looking at timelines in the context of the data, that changes in national and regional governments and individual public office holders directly impacted upon the manipulation, which we later confirmed by adding election dates as data points. From the start, it was also clear that two types of analysis should always run concurrently, not least to prevent ‘system gaming’ – whereby a methodology is learned and exploited by those operating within such a regime.
In order to ensure an independent analysis, the work I completed drew on none of the existing research and was approached clean, starting again from the raw data alone. These spreadsheet dumps in CSV format came from national recording systems and covered the period January 1997 to August 2016. This was sufficient for me to establish patterns and statistical norms, and comprehensive enough to allow a robust examination to be carried out and reliable conclusions to be drawn. Where there was a shortfall in 2016 data, the year being incomplete, I estimated it using a simple method of calculation which was tested against previous years with an accuracy threshold of approximately +/- 5%.
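The simple estimation method described above can be sketched as a straightforward proration of the partial year, back-tested against complete years. This is an illustrative reading of the approach, not the exact calculation used; the figures below are hypothetical.

```python
# Sketch of the year-end estimation: scale a partial-year total up by
# the fraction of the year covered, then back-test the method against
# complete years to confirm it lands within +/- 5% of known totals.
# All figures here are hypothetical.

def annualise(partial_total, months_covered):
    """Project a full-year figure from a partial-year total."""
    return partial_total * 12 / months_covered

def within_threshold(estimate, actual, threshold=0.05):
    """Check an estimate falls within +/- 5% of the true figure."""
    return abs(estimate - actual) / actual <= threshold

# Back-test against a complete year: take its January-August total and
# see whether the projection lands within 5% of the known year-end figure.
jan_aug_2015 = 1_000_000      # hypothetical 8-month total
actual_2015 = 1_520_000       # hypothetical known year-end total
estimate = annualise(jan_aug_2015, 8)
print(within_threshold(estimate, actual_2015))  # True: method passes the back-test
```

The same `annualise` call, applied to the January–August 2016 total, would then yield the year-end estimate.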
The state database dumps provided 19 years’ worth of recorded crime data across the whole country, showing crime levels fairly consistent at between 1.4 and 1.6 million offences per year. Crime was rising in the middle of the last decade, but the data showed a fall commencing around the last presidential election in 2012. Recorded crime rates at so low a level were exceptional in the modern world, particularly considering the growth rate of the Mexican population (approximately 1.4%) to around 120,000,000 people. Crime should, in reality, increase proportionately in line with population growth. This is a fact of human nature.
A realistic expected growth rate for recorded crime, based upon a figure reduced from population expansion, would have been around 1.25%, providing a rough guide to expected changes in Mexican recorded crime rates over time. In fact, this growth was largely followed in the key figures until 2012, when a significant departure from expectations occurred, and the gap has continued to grow. With it well established through the crime survey data (independent of the state recorded figures) that approximately 95% of crime was going unrecorded every year, it was possible to reverse engineer the figures and provide a true picture of crime in Mexico. These calculations were made from the raw data alone. Only a tiny fraction of offences was eventually recorded by the authorities, and the true figure of crime in Mexico could safely be estimated at 28,997,933 offences per year, based upon the most recent year. This reverse engineering was retrospectively cross-checked against previous, independent research which had been held aside during the ‘clean analysis’.
The figures were compellingly close when set against the estimates of Inegi (the independent body which runs the Mexican crime survey), which placed crime levels at 33.1 million offences per year in 2013.
“Crime should, in reality, proportionately increase in line with population growth. This is a fact of human nature.”
A further discrepancy arose when the number of reported crimes (denouncements, as they are called, mirroring many European reporting systems) was set against the number of recorded crimes. Approximately 5 million initial reports of crime were made per year, yet only 1.5 million or so went on to be recorded and investigated.
In the absence of official denouncement figures (which didn’t appear to exist) but using official estimates, a multiplier (2.95) was established in my analysis which, when applied to recorded crime, provided an accurate estimate of reports made. Statistically, the basic conclusions were:
95% of all crime was not officially recorded by the authorities
Only 15.5% of all crime was reported across the country
Only 34% of those reported crimes were recorded and investigated (5% of all crime).
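The arithmetic behind these conclusions can be sketched in a few lines: the survey-derived figure of ~95% unrecorded crime lets the recorded total be reverse-engineered into a true national estimate, and the 2.95 denouncement multiplier recovers the approximate number of initial reports. This is a simplified reconstruction, not the full analysis; the recorded-crime figure is an approximation of the most recent year.

```python
# Sketch of the attrition arithmetic. Assumed inputs: recorded crime of
# roughly 1.45 million in the most recent year, the 95% unrecorded share
# from the crime survey, and the 2.95 denouncement multiplier.

recorded = 1_450_000             # approx. recorded crimes, most recent year
unrecorded_share = 0.95          # survey estimate: 95% of crime never recorded

true_crime = recorded / (1 - unrecorded_share)    # reverse-engineered total
reports = recorded * 2.95                         # denouncement multiplier
reported_rate = reports / true_crime              # share of all crime reported
conversion = recorded / reports                   # reports becoming records

print(round(true_crime))          # ~29,000,000 offences per year
print(round(reported_rate, 3))    # ~0.148 (roughly 15%)
print(round(conversion, 3))       # ~0.339 (roughly 34%)
```

The output roughly reproduces the three bullet points above: about 5% of all crime recorded, about 15% reported, and about a third of reports converted into records.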
These initial findings were a clear indicator of ‘ill health’ in the overall system of crime reporting and recording in Mexico, with such significant levels of attrition (effectively the gap between reality and what is investigated).
By comparing Mexico with the United Kingdom, with its far less harsh socio-economic conditions, the scale and nature of the problem revealed itself quickly.
United Kingdom
Population: 65 million
Police record 4.3 million crimes each year
The government estimates crime is truly 6.5 million offences per year
6.62% of the population are recorded by the police as victims each year
The government estimates 10% of people are victims of crime each year
Mexico
Population: 120 million
Authorities record 1.5 million crimes each year
The survey data indicates crime is truly 28.9 million offences per year
1.27% of the population are recorded by the authorities as victims each year
The survey data indicates 24.16% of people are victims of crime each year
In all of this, internationally, the most important piece of data is often overlooked and fundamentally it’s the only piece of information which matters: behind every recorded crime is a person.
“the most important piece of data is often overlooked and fundamentally it’s the only piece of information which matters: behind every recorded crime is a person.”
Of the fraction of crime which had been recorded and investigated, it was necessary to understand where the red flags existed which could tell me whether something was indeed wrong or untoward. The starting point was to establish ‘normal’: a threshold by which anomalies could be quickly identified in the headline data. A key indicator was the annual movement in recorded crime figures, helpfully established over nearly two decades. Normal was a year-on-year change of 3.77%. Any movement over 4% in a year-on-year comparison was a clear ‘red flag’ of data or crime record manipulation. While, in the original crime totals, I had seen limited movement in crime rates over time, looking at the data against statistical norms provided a clear view that something was very wrong and, more precisely, ‘when’. The most consistent deviation from statistical norms ran from 2012 until 2015, and my new red flags in the headline threshold tests for statistical deviance allowed me to narrow focus to a four-year drill-down analysis of the data.
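The headline red-flag test is simple enough to sketch directly: any year-on-year movement above the 4% threshold (against the long-run norm of 3.77%) is flagged. The yearly totals below are hypothetical, chosen only to illustrate the mechanics.

```python
# Sketch of the headline red-flag test: flag any year whose recorded
# crime total moves more than 4% from the prior year. Totals are
# hypothetical.

THRESHOLD = 0.04  # red-flag threshold for year-on-year change

def yoy_flags(totals_by_year):
    """Return the years whose change from the prior year exceeds the threshold."""
    flags = []
    years = sorted(totals_by_year)
    for prev, curr in zip(years, years[1:]):
        change = abs(totals_by_year[curr] - totals_by_year[prev]) / totals_by_year[prev]
        if change > THRESHOLD:
            flags.append(curr)
    return flags

totals = {2010: 1_600_000, 2011: 1_590_000, 2012: 1_450_000, 2013: 1_400_000}
print(yoy_flags(totals))  # [2012] - the ~8.8% drop breaches the threshold
```

The same test, run over nearly two decades of national totals, is what narrowed the focus to the 2012–2015 window.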
At a regional level the threshold flag was reapplied from the established national normal. Each state was then given a risk score for every year – in order of deviance percentile – and a cumulative risk score (the total of its risk scores across the relevant time period), which allowed the states to be ranked in a simple RAG (Red, Amber, Green) pattern. This analysis could be expanded across any time range simply, allowing for retrospective testing of specific years alone should the need arise, for example in a corruption investigation. Rather than apply a scattergun approach, the analysis could then focus on the individual states and periods identified as ‘riskiest’ due to their breaches of threshold flags over time.
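The scoring and ranking step can be sketched as follows. The yearly scores and the RAG band cut-offs are hypothetical; the point is the mechanism of summing annual risk into a cumulative score and banding it.

```python
# Sketch of the state-level RAG ranking: yearly deviance-based risk
# scores are summed across the period, and the cumulative total is
# banded Red/Amber/Green. Scores and band cut-offs are hypothetical.

def rag_band(cumulative, red=30, amber=15):
    """Band a cumulative risk score (hypothetical cut-offs)."""
    if cumulative >= red:
        return "Red"
    if cumulative >= amber:
        return "Amber"
    return "Green"

# yearly deviance-based risk scores per state (hypothetical)
states = {
    "State A": [12, 10, 11, 9],
    "State B": [4, 6, 5, 3],
    "State C": [1, 2, 1, 1],
}

ranked = sorted(states, key=lambda s: sum(states[s]), reverse=True)
for state in ranked:
    total = sum(states[state])
    print(state, total, rag_band(total))
```

Restricting the lists of yearly scores to a chosen sub-period gives the retrospective, year-specific testing described above with no change to the mechanism.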
The purpose of the next stage of the drill-down was to identify any anomalies in the recorded crime data itself – including evidence of category manipulation or any other identifiable issue which would warrant a physical audit of records. It was, however, necessary to establish a headline indicator of ‘normal’ at a state level, which could direct investigations into category manipulation and under-recording. A simple indicator I used was the headline classification of high impact crimes as a percentage of all crime recorded, the norm being 29%. High impact crimes, much as in the UK, are those subject to the most scrutiny and, as such, are the most likely to be ‘gamed’ for the sake of easing negative public or official opinion. Indicators of normality should always be looked upon as ‘health’ indicators, though, as opposed to conclusive proof.
Taking a first detailed look at state-level crime data produced interesting results and allowed for further detailed analysis, but the starting point was the exploration of the ‘high impact’ health indicator. At this stage it also became possible to start looking at individual crime types for recognisable signs of data manipulation and corruption in recording practices. For example, one state showed the highest statistical deviance risk over time and also a ‘high impact’ level of 18%, well below the statistical norm of 29%.
This state, at first glance, displayed ‘normal’ recording patterns in respect of homicide and violent crime, with intentional offences taking the leading positions; a key visual indicator of crime record manipulation in these types would be a reversal, whereby less serious, unintended offences are higher, the purpose being to reduce the number of serious crimes. By adding a third category, however – the lesser offence of threatening behaviour – the picture transformed and gave the first strong indication that offence levels were being artificially suppressed by recording minor offences. The similarity of the recording pattern was just too close to dismiss as coincidental and fitted with international trends in category gaming. (Violent crime is by nature random or passionate, and the patterns across three categories should not mirror each other anywhere near so closely.) Where a less serious category mirrors the inflation of a serious crime, this is a clear sign of category manipulation to artificially deflate high impact offending rates.
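One simple way the ‘too close to dismiss as coincidental’ judgement might be tested numerically is by correlating the monthly counts of a serious category against a suspected ‘dumping’ category: genuinely independent offending should not track closely, so a very high correlation is a red flag. This is a minimal sketch under that assumption; the series and the 0.9 cut-off are hypothetical, not figures from the analysis.

```python
# Sketch of a numeric 'mirroring' check: monthly counts of a serious
# offence and a lesser dumping category should not track each other
# closely, so a near-perfect correlation is a red flag for category
# gaming. Series are hypothetical.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

serious = [120, 115, 130, 110, 125, 118]   # e.g. intentional violence, monthly
lesser  = [310, 300, 335, 285, 322, 305]   # e.g. threats - tracks suspiciously

r = pearson(serious, lesser)
print(r > 0.9)  # True: near-perfect tracking, warranting closer inspection
```

A correlation test alone is indicative, not conclusive; as the text stresses, such indicators direct a physical audit rather than replace one.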
With this clear and familiar pattern identified in violent crime, I knew the next obvious indicator would be in property crimes, and it became immediately apparent that this category of offences might also be subject to ongoing manipulation. Again the similarity of the recording pattern was too close to dismiss as coincidental, and indicated that lesser offences were being substituted for more serious ones, thereby causing artificial deflation.
The final category examined, with a view to determining whether data manipulation was taking place, was sexual offences including rape. The pattern was distinct. The similarity of the recording pattern was once again too close to dismiss as coincidental and clearly indicated a usage of other sexual offences to artificially deflate reported rapes. This was the same practice exploited in the United Kingdom, which I exposed to the Committee.
Repeating the same analysis across a number of states reproduced the same warnings and enabled additional ‘gaming’ to be identified, for example in the serious category of violent robbery versus the lesser non-violent robbery.
As with investigating any crime or crime scene, you are looking for patterns or items which trigger your intuition: abnormalities which lead you to focus more tightly on gathering specific evidence. My rule is to keep headline analytics simple, follow your gut instinct, then explore things further; the key to analysing crime data is to understand this approach and integrate simple mathematical analysis into more complex, academic-standard analytical models.
“My rule is to keep headline analytics simple and follow your gut instinct then explore things further and the key to analysing crime data is to understand this approach”
What became very clear during my first detailed analysis stage was that measuring statistical deviance alone did not provide all of the answers to the problem of discovering the extent of data manipulation in Mexican recorded crime. In fact, some states had relatively normal or even low rates of headline statistical deviance, yet displayed worrying trends within the key offence categories vulnerable to gaming. This could be taken as evidence that the manipulation of crime data had become ‘standard practice’, normality, over a prolonged period of time, and may well explain why the scale and extent of the problem had evaded detection for many years. As a consequence, it became essential to develop a bespoke warning system, built specifically to fit the structure and nature of Mexican data, which could produce an overall risk indication to guide full audits but was also capable of directing tightly focused investigations. This followed the key principles of simple, intuitive analysis leading to more detailed evidential exploration.
Crucial to the success of any such system was the ability to reduce the likelihood of the system being gamed by those operating within it by building it on rules which are almost impossible to trick. The result was a three stage system.
With the stage 1 assessment complete, the second stage was applied to the at-risk time period (2012-2016), with the deviance risk for each state carried forward as a component. Risks were then assessed based on the deviation from high impact norms, a rape vs sexual offences differential, a homicide differential, and an intentional violence vs threats differential. A further warning marker was any absence of data: the non-recording of information should always be treated as automatically high risk, as it cannot be quantified. This second stage assessment allowed problematic states to be identified across the board, as well as allowing individual ‘red flags’ to be explored, thus eliminating the possibility of system gaming.
The red flags identified at this stage of the data analysis were:
Statistical deviance from normal percentage changes over time (Normal was a 3.77% differential)
Statistical deviance from normal percentage of recorded high impact crime versus total crime (Normal was a 29% differential)
The positive/negative differential between reported rapes and less serious sexual offences (Normal was a 61% differential)
The positive/negative differential between reported intentional and unintentional homicides (Normal was an 88% differential)
The positive/negative differential between intentional violence and threat offences (Normal was a 48% differential)
The absence of recorded data was automatically high risk.
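The stage 2 assessment described above can be sketched as a per-state check against the five red flags, with missing data treated as automatically high risk. The norms are those quoted in the text; the state figures and the deviation tolerance are hypothetical, used only to show the shape of the check.

```python
# Sketch of the stage 2 assessment: check a state's indicators against
# the five red-flag norms quoted in the text. Missing data is treated
# as automatically high risk. State figures and tolerance hypothetical.

NORMS = {
    "yoy_change": 3.77,          # normal year-on-year % change
    "high_impact": 29.0,         # high impact crime as % of all recorded crime
    "rape_vs_sexual": 61.0,      # rape vs lesser sexual offences differential (%)
    "homicide": 88.0,            # intentional vs unintentional homicide differential (%)
    "violence_vs_threats": 48.0, # intentional violence vs threats differential (%)
}

def stage2_flags(state_figures, tolerance=10.0):
    """Flag each indicator that deviates from its norm, or is missing entirely."""
    flags = []
    for indicator, norm in NORMS.items():
        value = state_figures.get(indicator)
        if value is None:
            flags.append((indicator, "missing data: automatic high risk"))
        elif abs(value - norm) > tolerance:
            flags.append((indicator, f"deviates from norm {norm}%"))
    return flags

state_figures = {
    "yoy_change": 14.2,
    "high_impact": 18.0,
    "rape_vs_sexual": 60.0,
    "homicide": None,            # no data recorded
    "violence_vs_threats": 12.0,
}
for flag in stage2_flags(state_figures):
    print(flag)
```

Because each indicator is checked independently, a state cannot escape by looking ‘normal’ overall: any single anomalous or missing category raises its own flag, which is the modular, gaming-resistant property the text describes.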
Viewing the states with the highest cumulative risk scores at the stage 2 assessment started to provide a more interesting and rounded view of obvious manipulation in Mexican crime recording, and looking at the deviance from the high impact national average in these at-risk states showed the departures from normal in a clear and simple fashion. It also captured higher than average recording (which may in fact have indicated other problems within the state – for example, under-resourcing as a contributing factor – though this would require focused investigation).
Clearly identifiable disparities in the comparative categories of sexual offences became immediately visible. In some states there was direct parity between the use of the more serious and lesser offences – a pairing – while in others a higher proportion of lesser offences was equally visible. In some cases, the simple comparison allowed me to clearly identify states where, for example, unintentional homicide was being disproportionately over-recorded, often at quite exceptional levels, while also identifying the states where offending rates were in close parity. The differential I specifically set up identified the scale of departure from national norms, with potential over-recording of minor offences as high as almost 500%, and, when looking at the states individually, there were signs of offence categories ‘mirroring’ or ‘shadowing’ each other – indicating malpractice was taking place.
In certain states it was clear from the differential alone something was very wrong, and overlaying state health and death data as part of a deeper investigation confirmed this.
The same comparison allowed me to identify states where intentional violence was being disproportionately under-recorded when set against the lesser offence of threats. The differential specifically identified the scale of departure from national norms, with potential over-recording of the minor offence as high as almost 150%.
Working with the overall, cumulative analysis, I was able to identify states which were at risk when considered across all categories, and outlined how this should be used to drive decisions on where to focus full audit efforts. It was, however, important to understand the modular risk identification components in order to ensure the potency of any audit regime. The identification system made it possible to identify and audit states based on their risk level in individual categories, such as homicide or rape, alone – an important tool, as it ensured that ‘embedded normality’ across time and a broad spectrum of offending categories could not remain undiscovered: even a single anomaly would be highlighted. With the caveat that embedded normality must be permanently captured by individual category analysis, it was possible (and simple) to expand the stage 2 analysis to incorporate additional offence differentials – for example, property crime and extortion.
The benefit of the modular system was that ‘norms’ could be simply adjusted over time as data changed and, should a new ‘dumping’ category be identified, it could be used to form an expanded stage 2 model with minimal effort.
“The differential I specifically set up identified the scale of departure from national norms, with potential over-recording of minor offences as high as almost 500%”
The regional level audit assessment I conducted took the state with the highest cumulative risk at stage 2 (the anonymised State 31) and used the drill-down to create a structured audit requirement based upon sound, professional assessment of the meaning of the data and the visual patterns shown.
In that case study, I found high impact crime had clearly been artificially maintained at low levels since October 2013, and crime reductions since that year identified non-recording being deployed as standard practice. I identified that a full audit would need to review denouncement records as well as confirmed investigations in order to establish the true scale of general under-recording, and to identify the specific malpractice in the process.
The data indicated a normal and healthy recording relationship between serious, intentional violence and lesser, unintentional violence, and as such no audit of unintentional violent crimes was required, as it followed no significant mirror or shadow pattern. The data did, however, indicate a close relationship (mirroring and shadowing) between the use of threats and intentional violence as recording categories, potentially displaying a significant, artificial reduction in the serious offence rates. The most recent data indicated this practice was increasing in frequency, with use of the lesser threats category spiking against the reduction in serious violence in 2016. There was also a brief period of particularly clear shadowing between October 2014 and September 2015, indicative of a change in management practices before the escalation of lesser offence recording in 2016.
The regional data indicated the artificial deflation of intentional homicide rates over a two-year period, changing from mirroring to shadowing, and manipulation of the records over the most recent year with a loose shadow pattern. It also indicated the artificial maintenance of more serious offences at low levels through category gaming: extortion and dispossession were barely visible, though their fragmented spikes shadowed the lesser categories, and fraud and breach of trust were tightly shadowed against each other, with peaks following the more serious offences. The data also showed a failure to record property damage over the last twelve months; such a decline in low-level crime was unlikely. Finally, the data displayed ongoing category gaming (shadowing) until a mirror point in May 2013, artificially deflating the serious robbery figures, and there was clear and significant under-recording of the more serious offence commencing in 2013 and continuing to the present – a drop from 300 offences to near zero was alarming.
The data also clearly indicated ongoing category manipulation to artificially deflate rape figures over the entire twenty-year period; while there was only a loose shadow pattern, the mirroring effect in the peaks and troughs was clear, and the recording pattern was too closely matched to indicate normal offending. Though it was speculation at that point, it may have been that ‘technicalities’ of rape as a legal definition were at play (in the UK, police officers would often inappropriately apply the court charging standards in the investigative environment, which would produce a similar pattern in recorded offences).
The repeating, identifiable patterns were endless across the states, and there was no escape from discovery due to the modular capability of the stage 2 trigger process.
“The repeating, identifiable patterns were endless across the states, and there was no escape from discovery”
The starting point for general comparison purposes was the simple data provided by looking at the United Kingdom’s annual recording rate of 4.3 million crimes on a population of 65 million, versus the official recording rate in Mexico, an average of approximately 1.5 million, on a population of 120 million. Crime affecting between 6 and 10% of a population would appear normal in a developed nation with limited socio-economic problems, as is displayed in the UK.
The proportion of victims officially recorded in Mexico, at 1.27%, was, to say the least, illogical. It was, in fact, the first alarm bell. Applying the UK recording rate as a proportion – albeit a crude comparison – would give Mexico a recorded crime rate of 7.9 million crimes a year, which would appear more proportionate. Applying the denouncement multiplier of 2.95, and assuming a significant increase in public trust, reported crime would increase from approximately 5 million to 23.3 million crimes a year, reducing the overall unreported rate from 95% to around 20%. The latter segment of the calculation would clearly be an exaggeration – even the UK records only 66% of crime – but an annual rate of around 8 million recorded crimes would be no cause for concern when set against Mexico’s national backdrop.
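The cross-check above is a few lines of arithmetic, sketched here with the figures quoted in the text (the result reproduces them to within rounding):

```python
# Sketch of the UK cross-check: apply the UK's recorded-victim rate to
# Mexico's population, then the 2.95 denouncement multiplier, to see
# what recording and reporting 'should' look like. Figures as quoted.

uk_recorded, uk_population = 4_300_000, 65_000_000
mx_population = 120_000_000
true_crime = 28_900_000            # survey-derived estimate of all crime
multiplier = 2.95                  # denouncements per recorded crime

uk_rate = uk_recorded / uk_population                 # ~6.6% of population
expected_recorded = mx_population * uk_rate           # ~7.9 million
expected_reports = expected_recorded * multiplier     # ~23.4 million
unreported = 1 - expected_reports / true_crime        # roughly 20%

print(round(expected_recorded / 1e6, 1))   # 7.9
print(round(expected_reports / 1e6, 1))    # 23.4
print(round(unreported * 100))             # 19
```

As the text notes, the final step overstates what is achievable – even the UK records only about 66% of crime – but the ~8 million recorded-crime benchmark is the useful output.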
True recorded crime rate in Mexico should be around the 8 million mark each year and the under-recording level was 6.5 million crimes per year.
The secondary point of concern was the level of under-reporting and subsequent under-recording of reports made.
With unreported crime established at 95%, and the true picture of crime at 28.9 million offences per year, a denouncement rate of 15.5% indicated significant trust issues and access barriers. Reducing the trust gap to a differential of 44% unreported offences, the number of denouncements made would rise to 19 million per year.
Set against the corrected figure for recorded crime of 8 million, this would represent a conversion of 42% of reports into confirmed and recorded crimes. These figures would be within a much safer range; the current 34% conversion of an already tiny percentage of reported incidents was a serious cause for concern.
The denouncement rate in Mexico should be around the 19 million mark each year, and the under-reporting level was 14 million crimes per year.
Setting a sensible growth rate for crime below the increase in population size – 1.25%, based upon the current, under-recorded crime levels – crime in the 2016 period should have been edging over the 1.8 million mark by the year end. Projecting the year-end figure with a 5% accuracy threshold, the final figure was likely to be 1.5 million crimes. Crime in Mexico had shown annual declines over the last few years, which was illogical.
The recorded crime rate in Mexico should have been around the 1.8 million mark by the end of 2016, taking into account the recording failures and deficits, and the year-end under-recording of crime would have been a shortfall of approximately 328,000 crimes within the constraints of the existing system.
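The projection underlying this shortfall can be sketched by compounding a late-1990s baseline at the 1.25% expected growth rate. The 1997 baseline used here is an assumption drawn from the 1.4–1.6 million range quoted earlier, so the sketch lands near, not exactly on, the 328,000 figure in the report.

```python
# Sketch of the year-end projection: compound an assumed 1997 baseline
# at the 1.25% expected growth rate and compare the result with the
# projected 2016 out-turn. The baseline is an assumption (mid-range of
# the 1.4-1.6 million figures quoted earlier), not the report's input.

base_1997 = 1_450_000       # assumed starting point
growth = 0.0125             # expected annual growth in recorded crime
projected_2016 = 1_500_000  # projected year-end out-turn

expected_2016 = base_1997 * (1 + growth) ** 19   # 19 years of growth
shortfall = expected_2016 - projected_2016

print(round(expected_2016 / 1e6, 2))   # ~1.84 million expected
print(round(shortfall))                # shortfall in the low hundreds of thousands
```

With this assumed baseline the expected figure edges over 1.8 million and the shortfall sits in the low-to-mid 300,000s, consistent with the approximately 328,000 stated above.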
The Mexican system of reporting offences provides a number of barriers to the effective recording of crime. The staging of denouncement, assessment, confirmation, investigation, assessment, court preparation, and prosecution provides a system which is easily ‘played’. Based upon my experience and investigations in London, set against the obvious trust and access barriers faced in Mexico, the system was obviously flawed and open to purposeful attrition at each stage. The erosion of victim faith at each point of attrition had clearly contributed to the collapse in trust leading to 95% of crime being unreported, and this rate was likely to continue to grow over time.
The system of reporting and recording crime in Mexico was directly contributory to the under-reporting and under-recording of crime, and continued operation of this structure was untenable. I estimated an increase to 98% unreported crime within five years.
There were clear abnormalities in the variance between crime reported year on year across Mexico, with annual national fluctuations of up to almost 8% upwards and over 11% downwards. At a state level, deviance from the statistical norm of 3.77% was as high as 49%.
These were alarming shifts, and it appeared they could often be directly linked to changes in local and national administrations. The same warning marker of political effect was also displayed in the UK, and has since been removed by the abolition of central and local performance targets related to crime. Such high shifts are also clear indicators of socio-economic health but, crucially, are the primary indicator that crime data and recording practices are being manipulated. This is backed up by the secondary indicator: deviance from the normal rate of recording for ‘high impact’ crimes, currently 29%. My investigative analysis had shown that states with abnormal or deviant recording levels of these crimes display open hallmarks of data manipulation and system gaming. Deviance from this norm in some states was as high as -18%, with only 11% of crimes being recorded as high impact, and the pattern of this statistical behaviour in high crime states was a typical indication of data manipulation and corruption in crime recording practices.
Political ‘interference’ was having a direct impact on crime recording practices in Mexico, and crime records – and, subsequently, data – were being manipulated to meet requirements set at a policy level. Further, it was clear that serious offence numbers were being artificially deflated through the ‘gaming’ of crime categories to produce a more favourable image of crime in many areas.
There was clear evidence of widespread ‘category gaming’ across all major crime types, despite the low level of actual crime recording, whereby lesser offences were being used to ‘bury’ serious offence numbers. This was clear in the data itself and the graphical representation of the data. This applied to homicide, rape, serious violence, robbery, and property crimes. The levels of deviance from statistical norms in the most serious categories were as high as 514% (homicide) meaning that unintentional homicide was likely to be a tool for disguising true murder rates. Rape fared not much better, peaking at over 300%. Without audit data it was impossible to make safe estimates of true crime rates, but within the fragment of overall crime recorded the increases would be significant.
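The deviance figures quoted above can be reproduced in form – though not with the real data – by a simple share-of-category calculation. The 10% ‘normal’ share and the offence counts below are illustrative assumptions only.

```python
# Sketch of the 'category gaming' indicator described above: how far a state's
# share of a lesser offence category sits from the statistical norm. A grossly
# inflated share of e.g. 'unintentional homicide' within all homicide is the
# pattern consistent with serious offences being buried in a lesser category.
# All numbers here are illustrative, not real Mexican figures.

NORM_SHARE = 0.10  # hypothetical normal share of the lesser category

def category_deviance(lesser, serious, norm=NORM_SHARE):
    """Percentage deviance of the lesser-category share from the norm."""
    share = lesser / (lesser + serious)
    return (share - norm) / norm * 100

# A state recording 380 'unintentional' homicides against 240 intentional ones:
print(round(category_deviance(lesser=380, serious=240), 1))  # → 512.9
```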
Serious crimes were being hidden within less serious offences, despite the already alarmingly low rate of crime recording. In some cases, such as the lower-level property crimes, this followed established international ‘category gaming’ practices, but it was clear to me that the extortion, rape, violence and homicide rates clearly identified genuine corruption at a state level in some areas.
Having approached the raw data with an open mind, seeking no particular conclusion and without agenda, I was anticipating the discovery of classic symptoms of data manipulation as regards crime. While I did find a number of these practices, clearly visible, within the data, I was astounded by the sheer scale of the under-reporting and under-recording taking place in Mexico. Digging further into the data, I did anticipate the hallmarks of political effect, caused by the changing priorities of parties, for example, but I had not foreseen such clear and obvious signs of overt corruption. In all honesty, I can state that I was expecting ‘bad’ but not the worst.
I was truly shocked and hoped that policy recommendations made as a result of my report could bring about swift action to begin the process of unravelling the full extent of the issues, and to pave the way for a safer Mexico for all.
On leaving Mexico City, my assessment having been presented to national media where it was broadly reported, I made the following recommendations which are now being considered after a national conference in February 2017.
1. A national review of the system of reporting crime and standardisation of data quality and reporting. The review should address the barriers to victim reporting, the disparity between actual and recorded crime, and the disparity between Mexican and international recorded crime per capita.
2. The introduction of a national crime recording and investigation standard for all offences, including the collection of denouncement, investigation, conclusion and prosecution data to be published annually. This should include a victim’s charter and statutory requirements.
3. A national awareness campaign to encourage victims to come forward and report crime.
4. The introduction of a national three stage data analysis and risk management model as standard, as outlined in this presentation. This should be accompanied by the introduction of a national audit framework.
5. The introduction of a fully empowered inspectorate to manage the three stage data analysis, enforce the recording standards and statutory requirements as recommended, be responsible for publishing crime data as outlined, and to carry out regular, ongoing audits based upon risk triggers as outlined.
On the day I left, a mass grave of over five-hundred unidentified murder victims was uncovered by the authorities.
“On the day I left, a mass grave of over five-hundred unidentified murder victims was uncovered by the authorities.”
On the 10th of May 2004 I became a Constable with Derbyshire Constabulary, where I successfully passed my probation in 2006 and went on to pass my Sergeant’s exams in 2007. I worked in Derbyshire as a Response Officer with a full investigative workload including serious crime, as an Acting Sergeant on a reactive policing team, as a Temporary Sergeant on the county’s most demanding Safer Neighbourhood Team – where I received standing ovations at ward panels – and as a Local Intelligence Officer. In the latter role I became an expert in complex Crime Pattern Analysis, “Crack House Closures”, Vietnamese cannabis farms and social media investigations of gang members. I was commended for my work on “Crack House Closures” and featured in a Radio 4 documentary on cannabis farming.
On the 9th of October 2009 I transferred to the Metropolitan Police Service, seeking greater career opportunities, and within three months was placed, on merit, as an Acting Sergeant on a Response Team in a busy Central London borough. In September 2010 I received three further commendations for my work in the first eleven months (for bravery, leadership and fortitude). In 2012 I was personally selected by an Assistant Commissioner for attachment to the new Capability and Support unit at Territorial Policing HQ (TPHQ). This arose because of a predictive policing model I wrote, and the new position led me to design a Continuous Improvement framework and new performance risk analysis and diagnostic methods for the Metropolitan Police Service. I went on to analyse performance data for Specialist Crime and Operations, as part of the monthly, senior-level boards dubbed “Crime Fighters Meetings”.
In October 2013 the Public Administration Select Committee of the House of Commons, the United Kingdom’s parliament, announced its historic inquiry into crime statistics. The action was triggered when I approached the committee chair, acting as a whistleblower while serving as a police officer at Scotland Yard. The final report, released in April 2014, found that police forces had nationally misled the public and parliament over the truth of crime figures for a number of years. The statistics were stripped of their national status and improvements ordered. The report is entitled Caught Red Handed: why we can’t count on police recorded crime statistics, and is a matter of public record. I was commended within it for acting to the highest standards expected of a public servant. Enhancements in crime recording are still taking place now, and this is reflected in increases in serious recorded crime since the inquiry.
I left Scotland Yard in May 2014, resigning in response to my treatment as a whistleblower.
Broadly speaking, crime recording is directly impacted by political priorities, often through changes to what is considered High Impact or priority crime by a local or national government seeking to deliver a manifesto pledge to reduce certain crime levels. The downward pressure then falls on police chiefs to perform within these parameters, who then roll the message down – often badly – which drives operational practices to adjust to the changing goals (see the UK inquiry in full for details).
Simplistically put, perverse incentives are at play. At a political level the prize is votes; at a strategic law enforcement level, the retention or gain of senior roles. At the lowest level – the officers on the street – individuals may have performance targets, or be working under aggressive mid-level management. (I would recommend the work of Dr Roger Patrick and also Dr Eli Silverman for incisive and more detailed work in this area.) I would also advise against introducing concepts such as Systems Thinking or Black Box Thinking into improvement equations, as they are subsequently misunderstood, misapplied, and over-sold with no evidence to support their use other than the glossy words of often-incumbent police chiefs trying to self-preserve and project images of ongoing ‘organisational learning’ – a hollow fall-back position often quoted when malpractice is uncovered, in particular where it was broadly known about.
The significant Mexican under-recording problem presented not only challenging questions over access and attrition but, worryingly, drew into question how law enforcement resources could be appropriately allocated locally and nationally – best explained with a direct comparison to the Air Traffic Control concern I’d had in the bunker two years before.