Slate is running an interesting new article about an idea known as “Predictive Policing.”  According to the article, “Predictive policing is based on the idea that some crime is random—but a lot isn’t.”

“For example, home burglaries are relatively predictable. When a house gets robbed, the likelihood of that house or houses near it getting robbed again spikes in the following days. Most people expect the exact opposite, figuring that if lightning strikes once, it won’t strike again. ‘This type of lightning does strike more than once,’ says Brantingham. Other crimes, like murder or rape, are harder to predict. They’re more rare, for one thing, and the crime scene isn’t always stationary, like a house. But they do tend to follow the same general pattern. If one gang member shoots another, for example, the likelihood of reprisal goes up.”
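The near-repeat pattern Brantingham describes is often formalized with self-exciting models, in which each crime temporarily raises the predicted risk at nearby locations, with the boost fading over distance and time. Here is a minimal, hypothetical sketch of that idea in Python; the `near_repeat_risk` function and its decay constants are illustrative assumptions, not the actual model behind the software the article discusses.

```python
from dataclasses import dataclass
from math import exp

@dataclass
class Burglary:
    x: float    # location, e.g. km east of a reference point
    y: float    # location, km north of the reference point
    day: float  # day the burglary occurred

def near_repeat_risk(x: float, y: float, day: float,
                     history: list[Burglary],
                     spatial_decay: float = 0.5,    # per km (assumed value)
                     temporal_decay: float = 0.1    # per day (assumed value)
                     ) -> float:
    """Toy relative risk score at (x, y) on `day`, given past burglaries.

    Each prior burglary contributes exp(-spatial_decay * distance) *
    exp(-temporal_decay * elapsed_days): the spike is largest right after
    a nearby burglary and fades as time passes or distance grows.
    """
    risk = 0.0
    for b in history:
        if b.day >= day:
            continue  # only past events raise today's risk
        distance = ((x - b.x) ** 2 + (y - b.y) ** 2) ** 0.5
        elapsed = day - b.day
        risk += exp(-spatial_decay * distance) * exp(-temporal_decay * elapsed)
    return risk

# Example: a burglary at the origin on day 0 raises risk next door on day 1.
history = [Burglary(x=0.0, y=0.0, day=0.0)]
print(near_repeat_risk(0.1, 0.0, 1.0, history))   # high: close in space and time
print(near_repeat_risk(5.0, 0.0, 30.0, history))  # low: far away, long ago
```

Under this toy model, the house next door the day after a break-in scores far higher than a distant block a month later, which is exactly the “lightning strikes more than once” effect described above.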

Readers might wonder how this is any different from CompStat, and the article addresses that question by suggesting that “[t]he big difference is that CompStat is more retrospective than prospective.” Finally, the article addresses the possible criticism that much of this seems obvious — not revolutionary:

“But isn’t a lot of this stuff intuitive? If a crime occurs on a particular block of Compton, can’t the LAPD just keep a closer eye on that area in the days after the crime? Sure, says Brantingham, but intuition can take a police officer only so far. In a city as large and complex as Los Angeles, it’s hard to perform predictive policing by gut alone. Statistical models may simply confirm police intuition 85 percent or 90 percent of the time. ‘It’s in the remaining 10 or 15 percent where police intuition may not be quite as accurate,’ says Brantingham. Malinowski calls the data ‘another tool in the toolbox.’”