Daily Archives: April 19, 2012

Homeland Security’s ‘Pre-Crime’ Screening Will Never Work

Pre-crime prevention is a terrible idea.

theatlantic.com | Apr 17 2012

By Alexander Furnas

Here is a quiz for you. Is predicting crime before it happens: (a) something out of Philip K. Dick’s Minority Report; (b) the subject of a Department of Homeland Security research project that has recently entered testing; (c) a terrible and dangerous idea that will inevitably be counter-productive and that will levy a high price in terms of civil liberties while providing little to no marginal security; or (d) all of the above?

If you picked (d) you are a winner!

The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology, which is some crazy straight-out-of-sci-fi pre-crime detection and prevention software that may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy, leading us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let’s look at them in turn.

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are probably much, much rarer than that, or we would see a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations — an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. And given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty would be a non-trivial, and invasive, task.
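The arithmetic behind that claim is easy to check. Here is a minimal sketch using the hypothetical numbers above — a base rate of 1 terrorist per 1,000,000 travelers and a classifier that labels 99.99 percent of people correctly. These are illustrative assumptions, not measured properties of FAST.

```python
# Illustrative assumptions from the paragraph above, not real FAST data.
population = 1_000_000
terrorists = 1
innocents = population - terrorists

accuracy = 0.9999                    # fraction of people classified correctly
false_positive_rate = 1 - accuracy   # fraction of innocents wrongly flagged

true_positives = terrorists * accuracy             # real terrorists flagged
false_positives = innocents * false_positive_rate  # innocents wrongly flagged

print(f"Real terrorists flagged per million screened: {true_positives:.2f}")
print(f"Innocents flagged per million screened: {false_positives:.0f}")
```

Roughly a hundred innocents get flagged for every real hit — the same lopsided ratio the paragraph rounds to 99-to-1.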

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a write-up in Nature reported that the first round of field tests had a 70 percent accuracy rate. It is difficult to determine from the available material exactly what this number means, since both the write-up and the DHS documentation (all pdfs) are unclear, but there are a couple of ways to interpret it. It might mean that the current iteration of FAST correctly classifies 70 percent of the people it observes — which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other interpretation is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
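Under the first interpretation — 70 percent of all observations classified correctly — the numbers get dramatically worse. A quick sketch, again carrying over the assumed one-in-a-million base rate from earlier:

```python
# Interpretation: FAST classifies 70% of everyone correctly, so 30% of
# innocents are falsely flagged. The one-in-a-million base rate is an
# illustrative assumption, carried over from earlier in the article.
population = 1_000_000
terrorists = 1
innocents = population - terrorists

accuracy = 0.70
false_positives = innocents * (1 - accuracy)   # innocents wrongly flagged
true_positives = terrorists * accuracy         # real terrorists flagged

# Positive predictive value: chance that a flagged person is a real terrorist
ppv = true_positives / (true_positives + false_positives)

print(f"False alarms per million screened: {false_positives:,.0f}")
print(f"Chance a flagged person is actually a terrorist: 1 in {1 / ppv:,.0f}")
```

At that rate, some 300,000 of every million travelers screened would be flagged, and virtually everyone flagged would be innocent — the "abysmal" false-positive rate in action.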

The second major problem with FAST is the experimental methodology being used to develop it. According to a DHS privacy impact assessment of the research, the technology is being tested in a lab setting using volunteer subjects. These volunteer participants are sorted into two groups, one of which is “explicitly instructed to carry out a disruptive act, so that the researchers and the participant (but not the experimental screeners) already know that the participant has malintent.” The experimental screeners then use the results from the FAST sensors to try to identify participants with malintent. Presumably this is where that 70 percent number comes from.

The validity of this procedure is based on the assumption that volunteers who have been instructed by researchers to “have malintent” serve as a reasonable facsimile of real-life terrorists in the field. This seems like quite a leap. Without actual intent to commit a terrorist act — something these volunteers necessarily don’t have — it is likely to be difficult to have test observations that mimic the actual subtle cues a terrorist might show. It would seem that the act of instructing a volunteer to have malintent would make that intent seem acceptable within the testing conditions, thereby altering the subtle cues that a subject might exhibit. Without a legitimate sample exhibiting the actual characteristics being screened for — a near impossible proposition for this project — we should be extremely wary of any claimed results.

The fact is that the world is not perfectly controllable, and infallible security is impossible. It will always be possible to imagine incremental gains in security from instituting increasingly invasive and opaque algorithmic screening procedures. What we should be thinking about, however, is the marginal gain in security relative to the marginal cost. A program like FAST is doomed from the word go by a preponderance of false positives. We should ask, in a world where we already pass through full-body scanners, take off our shoes, belts, and coats, and only carry 3.4 oz containers of liquid, is more stringent screening really what we need, and will it make us any safer? Or will it merely brand hundreds of innocent people as potential terrorists and use pseudo-scientific algorithmic behavioral screening to justify greater invasions of their privacy? In this case the cost is likely to be high, and there is little evidence that the gain will be meaningful. In fact, the results may be counter-productive, as TSA and DHS staff are forced to divert their attention to weeding through the pile of falsely flagged people instead of spending their time on more time-tested, common-sense screening procedures.

Thinking statistically tells us that any project like FAST is unlikely to overcome the false-positive paradox. Thinking scientifically tells us that it is nearly impossible to get a real, meaningful sample for testing or validating such a screening program — and as a result we shouldn’t trust the sparse findings we have. And thinking about the marginal trade off we are making tells us the (possible) gain is not worth the cost. Pick your reason, FAST is a bad idea.

BP’s Corexit Oil Tar Sponged Up by Human Skin

Corexit® dispersed oil residue accelerates the absorption of toxins into the skin. The results aren’t visible under normal light (top), but the contamination in the skin appears as fluorescent spots under UV light (bottom). Credit: James H. “Rip” Kirby III, Surfrider Foundation

motherjones.com | Apr 17, 2012

By Julia Whitty

The Surfrider Foundation has released its preliminary “State of the Beach” study of Gulf of Mexico beaches affected by BP’s ongoing Deepwater Horizon disaster.

Sadly, things aren’t getting cleaner faster, according to their results. The Corexit that BP used to “disperse” the oil now appears to be making it tougher for microbes to digest the oil. I wrote about this problem in depth in “The BP Cover-Up.”


The Corexit mixed with the crude oil has now weathered to tar, yet it remains traceable to BP’s Deepwater Horizon brew through its chemical fingerprint: the mix creates a fluorescent signature visible under UV light. From the report:

The program uses newly developed UV light equipment to detect tar product and reveal where it is buried in many beach areas and also where it still remains on the surface in the shoreline plunge step area. The tar product samples are then analyzed…to determine which toxins may be present and at what concentrations. By returning to locations several times over the past year and analyzing samples, we’ve been able to determine that PAH concentrations in most locations are not degrading as hoped for and expected.

Worse, the toxins in this unholy mix of Corexit and crude actually penetrate wet skin faster than dry skin (photos above)—the author describes it as the equivalent of a built-in accelerant—though you’d never know it unless you happened to look under UV light at 370nm. The stuff can’t be wiped off. It’s absorbed into the skin.

And it isn’t going away. Other findings from monitoring sites between Waveland, Mississippi, and Cape San Blas, Florida, over the past two years:

The use of Corexit is inhibiting the microbial degradation of hydrocarbons in the crude oil and has enabled concentrations of the organic pollutants known as PAH to stay above levels considered carcinogenic by the NIH and OSHA.

  • 26 of 32 sampling sites in Florida and Alabama had PAH concentrations exceeding safe limits.
  • Only three locations were found free of PAH contamination.
  • Carcinogenic PAH compounds from the toxic tar are concentrating in surface layers of the beach and from there leaching into lower layers of beach sediment. This could potentially lead to contamination of groundwater sources.

The full Surfrider Foundation report by James H. “Rip” Kirby III of the University of South Florida is open-access online here.

The Army’s More Deadly Bullet: Stateside Only

battleland.blogs.time.com | Apr 18, 2012

By Mark Thompson

The Army has just ordered its first batch of 9mm Jacketed Hollow Point bullets. But it’s limiting the rounds to its law-enforcement personnel based only in the U.S. and its territories.

So how’s that for a paradox: the Army is buying deadlier bullets for use on American soil, most likely against Americans, than it issues to U.S. soldiers waging war in Afghanistan against al Qaeda and the Taliban.

“The 9mm JHP is only used by Army law-enforcement personnel in their law-enforcement role,” an Army spokeswoman told Battleland Tuesday. “This cartridge cannot be used in tactical or combat situations, and is restricted for use to the continental U.S., Hawaii, Alaska, and U.S. Territories.” She added that while the Army approved the use of such ammo by its internal police forces in 2006, the service just issued its first contract for this kind of bullet.

Army officials decided to allow all its law-enforcement personnel to use it in the wake of several high-profile on-post shootings, including the killing of 13 people, allegedly by Army Major Nidal Hasan, at Fort Hood in 2009. The Army Criminal Investigation Command has been allowed to use the bullets since 1998.

The Army told bullet-makers in February that “the 9mm JHP cartridge is required to rapidly and effectively incapacitate a deadly criminal when the situation warrants the use of deadly force.” Such rounds are widely used by police departments around the country. The Army plans on buying between 500,000 and 1 million of the rounds annually.

Because the bullet’s dented-tip design mushrooms when it hits something – like people – it tends to be more lethal. But at the same time, it’s less likely to pass through a person and wound someone else. “The bullets must also prevent [sic] a minimal hazard to bystanders from excessive penetration,” is how the Army puts it.

“The 9mm JHP shall be used by the Department of Defense (DoD) Law Enforcement Personnel/Body Guards at CONUS facilities,” the notice added. “This cartridge is intended to be used in pistols and submachine guns such as the Sig P226, Sig P228 (M11) and the M9 pistols and the MP5 submachine gun.”

The Army announced April 10 it had awarded a contract add-on for the bullets to the Olin Corp.’s Winchester Division in East Alton, Ill., under the government’s “Only One Responsible Source and No Other Supplies or Services Will Satisfy Agency Requirements” rule.

Army officers and enlistees take an oath to defend the nation “against all enemies, foreign and domestic.” The oath doesn’t specify different kinds of bullets depending on the location of such enemies. But the use of bullets that “expand or flatten easily in the human body” is banned in war by the Hague Convention of 1899.