Category Archives: Sci-Tech

India developing robotic soldiers


samaylive.com | Jun 9, 2013

With futuristic warfare in mind, India is working to develop robotic soldiers as part of efforts to boost unmanned fighting capabilities, joining a select group of countries in this endeavour.

Under the project being undertaken by DRDO, robots would be developed with a very high level of intelligence to enable them to differentiate between a threat and a friend.

These can then be deployed in difficult warfare zones, like the Line of Control (LoC), a step that would help avert the loss of human lives.

“We are going to work for robotic soldiers. We are going to look for very high level of intelligence in it than what we are talking today… It is a new programme and a number of labs are already working in a big way on robotics,” DRDO chief Avinash Chander told a news agency in an interview.

The newly-appointed DRDO chief listed the project for development of robotic soldiers as one of his “priority thrust areas” saying that “unmanned warfare in land and air is the future of warfare. Initially the robotic soldier may be assisting the man.”

He said in the initial phase of the project, the robotic soldier would be required to be told by the human soldier to identify an enemy or a combatant but “slowly in due course of time, the robotic soldier would be at the front end and the human soldier would be assisting him.”

India developing robotic soldiers to replace humans in warfare

Chander said robotic soldiers are needed to save precious human lives, and robots are already used in areas where humans do not want to venture, such as defusing bombs or entering high-radiation territory.

“Robotic soldier is one step further. It will have multiple technologies in terms of communication with team members, ability to recognise an enemy,” Chander said.

“Today, you have neural networks, whenever the soldier tells him (robotic soldier) that this is a human soldier, he will derive his own logic as to what is the difference between him and others (civilians). That learning process will keep building up,” he said.
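
What Chander describes amounts to supervised learning: a human labels examples, and the machine refines its own decision rule from them. Below is a minimal sketch of that idea, with invented features and a toy nearest-centroid classifier standing in for whatever DRDO actually builds:

```python
# Toy illustration (not DRDO's system): a classifier whose notion of
# combatant vs. civilian "keeps building up" from examples labelled by a
# human soldier, as Chander describes. Both features are invented.
import numpy as np

class FriendFoeLearner:
    def __init__(self):
        self.examples = {"combatant": [], "civilian": []}

    def teach(self, features, label):
        # A human soldier labels an observation; the robot stores it.
        self.examples[label].append(np.asarray(features, dtype=float))

    def classify(self, features):
        # Nearest-centroid decision over everything learned so far.
        x = np.asarray(features, dtype=float)
        centroids = {k: np.mean(v, axis=0) for k, v in self.examples.items() if v}
        return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

robot = FriendFoeLearner()
robot.teach([1.0, 1.0], "combatant")  # features: [armed, uniformed]
robot.teach([0.0, 0.0], "civilian")
print(robot.classify([0.9, 0.8]))     # -> combatant
```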

Asked if it would be capable of being deployed in areas such as the Line of Control, Chander said, “In due course of time but not before a decade in any way.”

He said many new technologies have to be developed such as “miniature communication, materials, cognitive technologies, self-learning processes and interaction with human.”

Chander said “already five to six countries are actively working. They have not yet developed it fully but they are in fairly advanced stages. This is one of my priority areas.”

Scientists create ‘sixth sense’ brain implant to detect infrared light

Photo: ALAMY

A brain implant which could allow humans to detect invisible infrared light has been developed by scientists in America.

telegraph.co.uk | Feb 15, 2013

By Nick Collins

Scientists have created a “sixth sense” with a brain implant through which infrared light can be detected.

Although the light could not be seen, lab rats were able to detect it via electrodes in the part of the brain responsible for their sense of touch.

Similar devices have previously been used to make up for lost capabilities, for example giving paralysed patients the ability to move a cursor around the screen with their thoughts.

But the new study, by researchers from Duke University in North Carolina, is the first case in which such devices have been used to give an animal a completely new sense.

Dr Miguel Nicolelis said the advance, reported in the Nature Communications journal this week, was just a prelude to a major breakthrough on a “brain-to-brain interface” which will be announced in another paper next month.

Speaking at the annual meeting of the American Association for the Advancement of Science in Boston on Sunday, he described the mystery work as something “no one has dreamed could be done”.

The second paper is being kept secret until it is published but Dr Nicolelis’s comments raise the prospect of an implant which could allow one animal’s brain to interact directly with another.

In the first study, rats wore an infrared detector on their head which was connected to electrodes in the part of their brain which governs touch.

When one of three infrared light sources in their cage was switched on, the rats initially began rubbing their whiskers, indicating that they felt as if they were touching the invisible light.

After a month of training, they learned to link the new sensation with the light sources and were able to find which one was switched on with 100 per cent accuracy. A monkey has since been taught to perform the same task.
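
The experimental loop is a sensory-substitution pipeline: a detector reading comes in, cortical stimulation goes out. Here is a schematic sketch of that mapping, not the Duke group's actual code; the threshold and the 400 Hz pulse-rate ceiling are assumed for illustration:

```python
# Schematic sketch of the loop described above (not the Duke lab's code):
# the head-mounted detector's infrared reading becomes a stimulation pulse
# rate on electrodes in the touch (somatosensory) cortex.

MAX_PULSE_HZ = 400.0  # assumed ceiling for the microstimulation rate

def ir_to_pulse_rate(ir_intensity, threshold=0.05):
    """Map a normalized IR reading (0..1) to a stimulation frequency in Hz."""
    if ir_intensity < threshold:
        return 0.0                      # below threshold: no percept
    return MAX_PULSE_HZ * ir_intensity  # stronger light -> faster pulses

for reading in (0.0, 0.2, 0.9):
    print(f"IR {reading:.1f} -> stimulate at {ir_to_pulse_rate(reading):.0f} Hz")
```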

The study demonstrates that a part of the brain which is designed to process one sense can interpret other types of sensory information, researchers said.

It means that in theory, someone who is blind because of damage to their visual cortex could regain their sight using an implant in another part of the brain.

Dr Nicolelis said: “What we did here was to demonstrate that we could create a new sense in rats by allowing them to “touch” infrared light that mammals cannot detect.

“The nerves were responding to both touch and infrared light at the same time. This shows that the adult brain can acquire new capabilities that have never been experienced by the animal before.

“This suggests that, in the future, you could use prosthetic devices to restore sensory modalities that have been lost, such as vision, using a different part of the brain.”

The study is part of an international effort to build a whole-body suit which allows paralysed people to walk again using their brain to control the device’s movement.

Infrared sensing could be built into the suit to inform the person inside about where their limbs are and to help them “feel” objects.

Dr Nicolelis and his collaborators on the project hope to unveil the “exoskeleton” at the opening ceremony of the football World Cup in Brazil in 2014.

3D-Printed Human Embryonic Stem Cells Created for First Time

Scientists used 3D printing to form these aggregates of embryonic stem cells, shown here at 24 hours (left) and 48 hours (right) after printing.

LiveScience.com | Feb 6, 2013

By Tanya Lewis

Imagine if you could take living cells, load them into a printer, and squirt out a 3D tissue that could develop into a kidney or a heart. Scientists are one step closer to that reality, now that they have developed the first printer for embryonic human stem cells.

In a new study, researchers from the University of Edinburgh have created a cell printer that spits out living embryonic stem cells. The printer was capable of printing uniform-size droplets of cells gently enough to keep the cells alive and maintain their ability to develop into different cell types. The new printing method could be used to make 3D human tissues for testing new drugs, grow organs, or ultimately print cells directly inside the body.

Human embryonic stem cells (hESCs) are obtained from human embryos and can develop into any cell type in an adult person, from brain tissue to muscle to bone. This attribute makes them ideal for use in regenerative medicine — repairing, replacing and regenerating damaged cells, tissues or organs. [Stem Cells: 5 Fascinating Findings]

In a lab dish, hESCs can be placed in a solution that contains the biological cues that tell the cells to develop into specific tissue types, a process called differentiation. The process starts with the cells forming what are called “embryoid bodies.” Cell printers offer a means of producing embryoid bodies of a defined size and shape.

In the new study, the cell printer was made from a modified CNC machine (a computer-controlled machining tool) outfitted with two “bio-ink” dispensers: one containing stem cells in a nutrient-rich soup called cell medium and another containing just the medium. These embryonic stem cells were dispensed through computer-operated valves, while a microscope mounted to the printer provided a close-up view of what was being printed.

The two inks were dispensed in layers, one on top of the other to create cell droplets of varying concentration. The smallest droplets were only two nanoliters, containing roughly five cells.
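
Those two figures pin down the bio-ink's cell concentration: five cells in a two-nanoliter droplet works out to roughly 2.5 million cells per milliliter. A quick back-of-envelope check:

```python
# Back-of-envelope check on the droplet figures quoted above.
cells_per_droplet = 5
droplet_nl = 2.0                       # droplet volume in nanoliters
nl_per_ml = 1e6                        # 1 mL = 1,000,000 nL

cells_per_ml = cells_per_droplet / droplet_nl * nl_per_ml
print(f"{cells_per_ml:.1e} cells/mL of bio-ink")  # ~2.5e+06
```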

The cells were printed onto a dish containing many small wells. The dish was then flipped over so the droplets hung from the wells, allowing the stem cells to form clumps inside each one. (The printer lays down the cells in precisely sized droplets and in a certain pattern that is optimal for differentiation.)

Tests revealed that more than 95 percent of the cells were still alive 24 hours after being printed, suggesting they had not been killed by the printing process. More than 89 percent of the cells were still alive three days later, and also tested positive for a marker of their pluripotency — their potential to develop into different cell types.

Biomedical engineer Utkan Demirci, of Harvard University Medical School and Brigham and Women’s Hospital, has done pioneering work in printing cells, and thinks the new study is taking it in an exciting direction. “This technology could be really good for high-throughput drug testing,” Demirci told LiveScience. One can build mini-tissues from the bottom up, using a repeatable, reliable method, he said. Building whole organs is the long-term goal, Demirci said, though he cautioned that it “may be quite far from where we are today.”

Others have created printers for other types of cells. Demirci and colleagues made one that printed embryonic stem cells from mice. Others have printed a kind of human stem cells from connective tissues, which aren’t able to develop into as many cell types as embryonic stem cells. The current study is the first to print embryonic stem cells from humans, researchers report in the Feb. 5 issue of the journal Biofabrication.

Americans Tracked by Predator Drones: DARPA’s Big Eye 1.8-gigapixel Camera for Air Surveillance Unveiled

“We’re moving to an increasingly electronic society where our movements ARE going to be tracked…”

1.8 gigapixel ARGUS-IS. World’s highest resolution video surveillance platform

Image by BAE Systems

DARPA has revealed the ARGUS-IS, its mega digital camera with a resolution of 1.8 gigapixels. The camera is expected to take clear images of objects as small as 15 centimeters from an altitude of six kilometers.

The Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense, has finally revealed details of its next-generation eye in the sky – the ARGUS-IS. The super high-resolution photo system is expected to be attached to drones and used for precision-guided air surveillance.

The so-called “Autonomous Real-time Ground Ubiquitous Surveillance – Imaging System” (ARGUS-IS) is described as one of the highest-resolution surveillance systems in the world.

One gigapixel is equal to 1,000 megapixels. For comparison: Modern professional digital cameras have a resolution of about 20 megapixels.

One petabyte is equal to 1,000 terabytes. One terabyte is equal to 1,000 gigabytes.

It uses four stabilized lenses and 368 image sensors of five megapixels each. The system allows a high-res picture to be taken of objects as small as 15 centimeters across from an altitude of up to six kilometers. The system is also able to view approximately 25 square kilometers of terrain at a time and track moving objects in up to 65 simultaneous windows.
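
The quoted specs hang together arithmetically: 368 five-megapixel sensors give just over 1.8 gigapixels, and resolving a 15-centimeter object from six kilometers up implies an angular resolution of about 25 microradians. A quick check:

```python
# Sanity-checking the quoted ARGUS-IS figures.
sensors, pixels_each = 368, 5e6
total_pixels = sensors * pixels_each
print(f"{total_pixels / 1e9:.2f} gigapixels")   # 1.84 -> the "1.8-gigapixel" total

object_m, altitude_m = 0.15, 6000.0
angular_res = object_m / altitude_m             # small-angle approximation
print(f"{angular_res * 1e6:.0f} microradians")  # ~25 µrad per resolvable object
```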

With such capabilities, experts believe that six drones equipped with the camera would make it possible for the US to keep an eye on the entirety of Washington DC, while – for the sake of comparison – four such cameras would provide complete surveillance of Paris.

At a rate of 12 frames per second, the ARGUS-IS generates some 600 gigabits of data every second. During one day of operation the system would collect about six petabytes of information. As a drone cannot carry enough equipment to process such data torrents, the images would most likely be sent to two processing subsystems: one in the air and the other located on the ground.
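
The daily total follows directly from the per-second rate: a 600-gigabit-per-second stream accumulated over 24 hours lands at roughly 6.5 petabytes, matching the figure above.

```python
# Accumulating a 600 Gbit/s stream over one day of operation.
bits_per_second = 600e9
seconds_per_day = 24 * 60 * 60

bytes_per_day = bits_per_second * seconds_per_day / 8
print(f"{bytes_per_day / 1e15:.1f} petabytes/day")  # ~6.5 PB, as quoted above
```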

ARGUS-IS sensor (Image by BAE Systems)

However, it would be functionally impossible to send all of ARGUS’ data to the ground. That’s where DARPA’s persistics system comes in; it records information according to points of interest, and only essential information is sent to the control room on the ground for storage and later review. The technology weblog ExtremeTech says that to make this happen, DARPA will need a wireless link able to transmit 100Gb of data per second.

The ARGUS-IS first came to public attention about three years ago. Speculation became fact at the beginning of this year in a documentary showing video footage of the imaging system in action, although the camera itself remained shrouded in mystery for security reasons.

The footage revealed that the high-resolution camera can spot details like a bird flying around a building and the color of a person’s clothes. But it’s not able to reveal facial features. Still, experts say that drones could be sent at a lower altitude to create the right angle to record someone’s face.

What was not revealed by the documentary were the future implementations of the ARGUS-IS – or whether it has already been used by the US military.


11 Body Parts Defense Researchers Will Use to Track You

Slightly creepy, no? Well, it gets creepier…

wired.com | Jan 25, 2013

By Noah Shachtman and Robert Beckhusen

Cell phones that can identify you by how you walk. Fingerprint scanners that work from 25 feet away. Radars that pick up your heartbeat from behind concrete walls. Algorithms that can tell identical twins apart. Eyebrows and earlobes that give you away. A new generation of technologies is emerging that can identify you by your physiology. And unlike the old crop of biometric systems, you don’t need to be right up close to the scanner in order to be identified. If they work as advertised, they may be able to identify you without you ever knowing you’ve been spotted.

Biometrics had a boom after 9/11. Gobs of government money poured into face and iris recognition systems; the Pentagon alone spent nearly $3 billion in five years, and the Defense Department was only one of many federal agencies funneling cash into the technologies. Civil libertarians feared the worst as face-spotters were turned on crowds of citizens in the hopes of catching a single crook.

But while the technologies proved helpful in verifying identities at entry points from Iraq to international airports, the hype — or panic — surrounding biometrics never quite panned out. Even after all that investment, scanners still aren’t particularly good at finding a particular face in the crowd, for example; variable lighting conditions and angles (not to mention hats) continue to confound the systems.

Eventually, the biometrics market — and the government enthusiasm for it — cooled off. The technological development has not. Corporate and academic labs are continuing to find new ways to ID people with more accuracy, and from further away. Here are 11 projects.

The Ear

My, what noticeable ears you have. So noticeable, in fact, that researchers are exploring ways to detect the ears’ features as if they were fingerprints. In 2010, a group of British researchers used a process called “image ray transform” to shoot light rays at human ears, and then repeat an algorithm to draw an image of the tubular-shaped parts of the organ. The curved edges around the rim of the ear are a characteristic — and most obvious — example. Then, the researchers converted the images into a series of numbers marking the image as your own. Finally, it’s just a matter of a machine scanning your ears again and matching the scan to what’s already stored in the system, which the researchers were able to do accurately 99.6 percent of the time. In March of 2012, a pair of New Delhi scientists also tried scanning ears using Gabor filters — a kind of digital image processor similar to human vision — but were accurate to a mere 92 to 96.9 percent, according to a recent survey (pdf) of ear biometric research.
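
The matching step at the end of that pipeline is conceptually simple. A sketch of that step alone, assuming a feature extractor (such as the image ray transform) has already reduced each ear scan to a numeric vector; the stored templates and acceptance threshold are invented for illustration:

```python
# Sketch of the final matching step only; enrolled templates and the
# threshold are invented, not drawn from the British study.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.5]}  # stored ear vectors

def identify(scan, threshold=0.95):
    best = max(enrolled, key=lambda name: cosine_similarity(scan, enrolled[name]))
    return best if cosine_similarity(scan, enrolled[best]) >= threshold else None

print(identify([0.88, 0.12, 0.41]))  # -> alice
```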

It may even be possible to develop ear-scanning in a way that makes it more reliable than fingerprints. The reason is that fingerprints can callus over from hard manual work, while ears, by and large, don’t change much over the course of a lifespan. There’s debate around this, however, and fingerprinting has a much longer, more established history behind it. A big question is whether ear-scanning will work in different amounts of light, or when the ear is covered (even partially) by hair or jewelry. But if ear-scanners get to the point of being practical, they could work alongside fingerprinting instead of replacing it. Maybe in the future we’ll see more extreme ear modification come along as a counter-measure.

Photo: Menage a Moi/Flickr

Odor

In the early and mid-2000s, the Pentagon’s blue-sky researchers at Darpa dabbled in something called the “Unique Signature Detection Project,” which sought to explore ways to detect people by their scent, and maybe even spot and identify individuals based on their distinct smells. Darpa’s work ended in 2008. The following year, the Department of Homeland Security fielded a solicitation for research in ways that human scent can indicate whether someone “might be engaging in deception,” specifically at airports and other ports of entry.

Odor detection is still just a research project at the moment. The science is intricate, involving more than 300 chemical compounds that produce human odor. Our personal stinks can change depending on everything from what we eat to our environment. But it may be possible to distinguish our “primary odor” — separate from “secondary” odors based on our diet and “tertiary” odors based on things like soaps and shampoos. The primary odor is the one linked to our genetics, and there have already been experiments with mice, which have been found to produce distinct scents unique to individuals. In 2007, the government’s counter-terror Technical Support Working Group even started a program aimed at collecting and storing human odors for the military’s dog handlers. Dogs, of course, have been used to track people by smell for decades, and are believed to distinguish between humans based on our genetic markers.

Photo: Cabaret Voltaire/Flickr

Heartbeat

Your chest moves, just a little, every time your heart beats or your lungs take in air. For years, researchers have been monkeying with radars that are sensitive enough to detect those minuscule chest movements — but powerful enough to do it from hundreds of yards away. Even reinforced concrete walls and electromagnetic shielding won’t stop these radars, or so claim the researchers at the small, Arizona-based defense contractor VAWD Engineering, who are working on such a system for Darpa’s “Biometrics-at-a-distance” program.

The key is the Doppler Effect — the change in frequency when one object moves relative to another. We hear it all the time when a fire engine passes by, siren blaring. VAWD says its vehicle-mounted Sense Through Obstruction Remote Monitoring System (STORMS) can pick up even small fluctuations of the chest.
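
The underlying relationship is the standard radar Doppler formula, f_d = 2*v*f0/c. A quick calculation with assumed numbers (a 10 GHz carrier and heartbeat-scale chest motion of about 5 mm/s; neither is a published VAWD figure) shows how tiny the shift the radar must resolve really is:

```python
# Two-way radar Doppler shift: f_d = 2 * v * f0 / c. The carrier frequency
# and chest velocity are assumed figures, not VAWD specifications.
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(velocity_ms, carrier_hz):
    return 2.0 * velocity_ms * carrier_hz / C

f_d = doppler_shift_hz(0.005, 10e9)  # ~5 mm/s of heartbeat-scale chest motion
print(f"{f_d:.2f} Hz")               # ~0.33 Hz: why the radar must be so sensitive
```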

STORM (pictured above) “can be used to detect, classify and identify the specific cardiac and pulmonary modulations of a… person of interest,” a company document boasts. By itself, a heartbeat or a breathing rate won’t serve as a definitive biometric. But combine it with soft biometrics (how someone subtly sways when he or she stands) and you’ve got a unique signature for that person that can’t be hidden or covered up.

VAWD says these signatures will help improve disaster relief and medical care by providing a “reliable, real time medical status equal to or better than the current devices, while increasing the mobility and comfort of the patient.”

But the company also notes that its system performs “automated human life-form target tracking” even when construction materials like “Afghan mud-huts” are in the way. STORM “has already been deployed by the United States Army on one of its most advanced ground vehicles,” the company adds.

Does any of that sound like hospital work to you?

Illustration: Yale University/Wikimedia

Photo: VAWD Engineering

Voice

Most people are likely to be familiar with voice readers on gadgets like the iPhone. But what if there were software that could quickly analyze the voices of thousands, and even use those voices to identify specific people?

Russian biometrics firm Speech Technology Center — known as SpeechPro in the U.S. — has the technology. Called VoiceGrid, the system is able to automatically recognize a person’s voice, provided that voice is pre-recorded in a database and can be recalled by the computer. The company has also developed a version for “large city, county, state or national system deployments.”

It’s seen use in Mexico, according to Slate, “where it is being used by law enforcement to collect, store, and search hundreds of thousands of voice-prints.” The National Security Agency has taken interest in similar technology. So has the FBI. A 2012 presentation from the National Institute of Standards and Technology — with the assistance of the FBI — also speculated on potential uses, including identifying and clearing people “involved in illegal activities,” locating serial killers and identifying arms traffickers (.pdf). Iarpa, the intelligence community’s research agency, has also been looking into ways to solve some of its problems: audio interference, mainly. In 2011, the agency concluded its Biometric Exploitation Science and Technology Program (or BEST), which made “speaker recognition advances which included improving robustness to noise, reverberation, and vocal effort, and by automatically detecting these conditions in audio channels,” spokesperson Schira Madan told Danger Room in an email. But we wonder if it’ll detect autotune.
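
At its core, a system like VoiceGrid enrolls a voice-print and later matches a new recording against the database. A toy illustration of that match, assuming prints are coarse FFT spectral envelopes; synthetic tones stand in for real speech, and none of this reflects SpeechPro’s actual algorithm:

```python
# Toy voice-print matching (not SpeechPro's VoiceGrid): each recording is
# reduced to a coarse FFT spectral envelope, and a probe is matched to the
# closest enrolled print. Synthetic tones stand in for speech.
import numpy as np

def voice_print(signal, bands=64):
    spectrum = np.abs(np.fft.rfft(signal))
    env = np.array([c.mean() for c in np.array_split(spectrum, bands)])
    return env / np.linalg.norm(env)  # normalize away loudness

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8000)
speaker_a = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(8000)  # low pitch
speaker_b = np.sin(2 * np.pi * 210 * t) + 0.1 * rng.standard_normal(8000)  # higher pitch
probe     = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(8000)

enrolled = {"A": voice_print(speaker_a), "B": voice_print(speaker_b)}
best = min(enrolled, key=lambda k: np.linalg.norm(voice_print(probe) - enrolled[k]))
print(f"probe matches speaker {best}")  # -> A
```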

The Iris

Imagine a scanner that can look deep inside your eye — from 10 feet away. Actually, you don’t have to think that hard. The technology is already here. Scanners have been developed that can focus in and scan irises from a distance of 10 feet, such as the IOM PassPort, developed by government contractor SRI International. The company promises the machine can scan irises at a rate of 30 people per minute — handy in high-traffic areas such as airports and train stations. SRI also claims it can see through contact lenses and glasses.

But the longer-range scanners could also see other uses, aside from airports. U.S. troops field existing, short-range and handheld iris scanners to build databases of Afghan eyes as part of a plan to use biometric data to tell civilians apart from insurgents. The Department of Homeland Security has tested iris scanners at a Border Patrol station along the Texas-Mexico border. The FBI has been working on an iris database for federal prisoners, and Google uses them at company data centers. But these systems can be fussy, and require that the targets don’t move too much.

There might be another way. The Pentagon’s scientists at Darpa have funded a research project at Southern Methodist University to develop cameras that can automatically zoom in and scan irises, kinda like what happened to Tom Cruise in Minority Report — and without being blocked by pesky obstructions like eyelashes and glare. Another problem is that iris scanners are not the most secure means of identifying people. In July 2012, a group of researchers from the U.S. and Spain discovered a way to spoof the scanners by duplicating iris images stored in databases and creating synthetic copies. That means someone could conceivably steal your eyes, in a way.

Illustration: Air Force

Periocular

Spotting someone by their irises is one of the best-developed biometric techniques there is. But Carnegie Mellon’s Marios Savvides and his colleagues think there may be an equally promising approach in the area around the eye — also known as the “periocular” region.

The “periocular region has the most dense and the most complex biomedical features on human face, e.g. contour, eyelids, eyeball, eyebrow, etc., which could all vary in shape, size and color,” they wrote in a 2011 paper. (.pdf) “Biologically and genetically speaking, a more complex structure means more ‘coding processing’ going on with fetal development, and therefore more proteins and genes involved in the determination of appearance. That is why the periocular region should be the most important facial area for distinguishing people.”

And unlike other biometrics — the face, for instance — the periocular region stays remarkably stable as a person ages. “The shape and location of eyes remain largely unchanged while the mouth, nose, chin, cheek, etc., are more susceptible to changes given a loosened skin,” the researchers note. In other words, this is a marker for life.

Nearby, Savvides and his colleagues think they’ve found a second biometric: the shape of the eyebrow. Face-scanners are sometimes thrown off when people smile or frown. But the eyebrow shape is “particularly resilient to certain (but not all) expression variations,” the researchers note in a separate, yet-to-be-published paper. And the eyebrow can still be seen, even when the subject has most of his or her face covered.

What’s not fully clear is how the eyebrow biometric responds to threading, shaving or waxing. Savvides, who responded to tons of questions about his research, says there’s no foolproof means to avoid this kind of spoofing. But Savvides is also working on sensors that can analyze multiple facial cues and features, while incorporating algorithms that detect the possibility of a person changing one or two of them. A pair of plucked eyebrows might be a weak match compared to the bushy ones the computer has on file — but the computer could also be smart enough to recognize they’ve been plucked.

Photo: Carnegie Mellon University

Long-Range Fingerprint Scanners

Most fingerprint scanners today require physical contact, but being constantly soaked with finger oil and dirt can also muck up the machines. For that reason, among others, one developer is working on a scanner that may one day read your fingerprints from a distance of 20 feet.

Scanners with a 20-foot range haven’t hit the market quite yet, though. One machine called the AIRprint, made by Alabama firm Advanced Optical Systems, has a range of nine feet, and uses two 1.3-megapixel cameras that receive light of different polarizations: one horizontal, the other vertical. A device beams light at your fingerprints, which bounce it back into the lenses, and the two polarized images are then combined into a single clear picture. A spin-off company called IDair also has a commercial scanner that reaches up to six feet and is marketed toward “security personnel.” IDair’s 20-foot-range machine is currently in development, and is described as functioning similar to satellite imagery.
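
One plausible reading of the two-polarization design is classic polarization-difference imaging: diffuse ridge detail reflects depolarized light into both cameras, while specular glare stays mostly co-polarized with the illuminator, so the cross-polarized channel recovers a glare-free print. A sketch under that textbook model (an assumption; the article does not disclose Advanced Optical Systems’ actual algorithm):

```python
# Textbook polarization-difference model (an assumption, not AOS's algorithm):
# diffuse ridge light is depolarized and splits evenly between the two cameras,
# while specular glare stays co-polarized with the illuminator.
import numpy as np

rng = np.random.default_rng(2)
diffuse = rng.random((4, 4))                       # fingerprint ridge detail
specular = np.zeros((4, 4)); specular[1, 2] = 5.0  # glare hot spot

parallel = diffuse / 2 + specular  # co-polarized camera: half diffuse + glare
cross = diffuse / 2                # cross-polarized camera rejects the glare

recovered = 2 * cross              # glare-free estimate of the print
print(np.allclose(recovered, diffuse))  # True
```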

The military is reportedly an interested customer. The MIT Technology Review surmised that Marines may use them for scanning fingerprints from inside the relative safety of an armored vehicle or behind a blast wall. It beats exposing yourself to the possibility of a suicide bomb attack. For the civilian market, that seems better than pressing your fingertips against a greasy scanner, if you’re comfortable with the idea of having your prints scanned from far away.

Photo: LetTheCardsFall/Flickr

Gait

Even before 9/11, researchers were floating the notion that you could pick out someone by how he or she walks. And after the Towers fell, Darpa made gait recognition one of the cornerstones of its infamous Total Information Awareness counterterror program.

The problem is that gait can be kind of hard to spot. A briefcase or a bum leg prevents the recognition system from getting a clear view. So filming someone walk didn’t make for a particularly reliable biometric system. Plus, the same person may have multiple gaits — one for walking, and another for running, say.

But the spread of smartphones has opened up a new way of identifying someone’s stride. Androids and iPhones all have accelerometers — sensors that measure how far, how fast, and with how much force an object moves.

“By using the accelerometer sensor in the cell phone, we are able to capture a person’s walking pattern. As it turns out, these patterns are very good biometric traits for people identification. Because it does not require any special devices, the gait biometrics of a subject can even be captured without him or her knowing,” write Carnegie Mellon University professor Marios Savvides and his colleagues in a recent paper. (.pdf)

In a small, preliminary study, Savvides and his fellow researchers at the CyLab Biometrics Center claim they were able to get a 99.4 percent verification rate with the system when the subjects were walking. 61 percent of the time, they were even able to match someone’s fast-paced gait to their slower one. In other words, you can run… but with a phone in your pocket, it’s going to be harder to hide.
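
Here is a toy version of the accelerometer idea, reducing each walk to a few summary statistics and matching a probe walk to the nearest enrolled signature; the synthetic sine-wave “strides” and the features are illustrative, not CyLab’s method:

```python
# Toy accelerometer-gait matching (not CyLab's method): each walk becomes
# a small signature vector, and a probe walk is matched to the nearest
# enrolled signature. The sine-wave "strides" below are synthetic.
import numpy as np

def gait_signature(magnitudes):
    x = np.asarray(magnitudes, float)
    return np.array([x.mean(), x.std(), np.abs(np.diff(x)).mean()])

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
walk_a = 9.8 + np.sin(2 * np.pi * 1.8 * t) + 0.1 * rng.standard_normal(500)
walk_b = 9.8 + 0.6 * np.sin(2 * np.pi * 2.3 * t) + 0.1 * rng.standard_normal(500)
probe  = 9.8 + np.sin(2 * np.pi * 1.8 * t) + 0.1 * rng.standard_normal(500)

enrolled = {"A": gait_signature(walk_a), "B": gait_signature(walk_b)}
best = min(enrolled, key=lambda k: np.linalg.norm(gait_signature(probe) - enrolled[k]))
print(f"probe matches walker {best}")  # -> A (same stride frequency and vigor)
```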

Photo: sfllaw/Flickr

Sweat

The Army wants to see some sweat. No, not workout sweat, but sweat that can betray hostile intentions. In 2010, the Army awarded a nearly $70,000 contract to California security firm Irvine Sensors Corporation to develop software that can use sensors to recognize “abnormal perspiration and changes in body temperature.” The idea was to determine “harmful intent in such military applications as border patrol, stand-off interrogation, surveillance and commercial applications” including surveillance at businesses and “shopping areas.” It’s a bit out there, and still very much in the research stage, but it makes a certain kind of sense. Elevated stress levels could give a suspect away when scanned by hyperspectral sensors that read changes in body temperature.

A reliable system, though, will have to work in combination with other biometric signals: threatening body movements, facial expressions, iris scans — all of these will also have to be factored into determining whether someone is up to no good. The Army contract, dubbed Image Analysis for Personal Intent, also sought to develop sensors that read these signs from a distance of nearly 150 feet. Perhaps a bit optimistic. But in 2002, a group of scientists in Minnesota managed to determine whether military recruits were engaging in deception by scanning for changes in temperature around their eyes. So if you’re at all freaked out about the idea of sweat-scanners, now might be time for a cold shower.

Photo: Army

Advanced Face Recognition

Most machines that scan and recognize your face require taking a good, clean look. But now researchers are working on replacing them with scanners that only need a few fragmentary snapshots at much longer ranges than ever before.

One such machine, being developed by defense contractor Progeny Systems Corporation, is called the “Long Range, Non-cooperative, Biometric Tagging, Tracking and Location” system. Once a person of interest is spotted, the system captures a 2D image of the person’s face before converting it into 3D. Then, once the image has been converted and filed in a database, it can be quickly recalled when the system spots the person a second time. That’s because 3D reduces the number of pixels needed to analyze the image, speeding up the process and allowing the system to identify a person with a mere glance. The company also claims the machine works at more than 750 feet.

But a face alone may not be enough to recognize a person with a machine. Everything from lighting conditions to distance can make it harder to get a clear picture, especially if the person being scanned is on the move, in a crowd, or ducking in and out of buildings. Using 3D models makes it easier, but the technology will likely have to be combined with “soft biometrics” like an individual’s gender, height, weight, skin color and even tattoos.

Slightly creepy, no? Well, it gets creepier, like the group of Swiss scientists working on scanning facial features to detect your emotions. Researchers at Carnegie Mellon University have also developed a mobile app called PittPatt — which has since been acquired by Google — that can scan your face and match it up with images you’ve shared over the internet, all in less than a minute.

Photo: Carnegie Mellon University

Rapid DNA Testing

It used to be that DNA testing took months, from the time a sample was picked up on a swab to the analysis that produced a DNA profile. Now there are machines that can do it in less than 90 minutes, and the Pentagon wants them.

This month, researchers at the University of North Texas are beginning to test a $250,000 machine for the Defense and Justice Departments, and the Department of Homeland Security, so that “casualties and enemies killed in action can be quickly identified in the field,” according to the Biometrics Research Group. But according to the October issue of Special Operations Technology magazine, rapid DNA testing systems co-developed by defense giant Northrop Grumman had already been delivered to “unspecified government customers” beginning back in August. One of those customers is believed to be the FBI. California company IntegenX also has a portable rapid-DNA machine that can analyze molecules taken off everything from clothing to cigarette butts. There’s a simple reason why police are so interested: for a burglar who’s breaking into houses and leaving a DNA trail, the machines could clue police in before the spree continues.

Photo: US Navy

Military Must Prep Now for ‘Mutant’ Future, Researchers Warn

Lockheed Martin tests its Human Universal Load Carrier exoskeleton. Photo: Lockheed Martin

Wired | Dec 31, 2012

By David Axe

The U.S. military is already using, or fast developing, a wide range of technologies meant to give troops what California Polytechnic State University researcher Patrick Lin calls “mutant powers.” Greater strength and endurance. Superior cognition. Better teamwork. Fearlessness.

But the risk, ethics and policy issues arising out of these so-called “military human enhancements” — including drugs, special nutrition, electroshock, gene therapy and robotic implants and prostheses — are poorly understood, Lin and his colleagues Maxwell Mehlman and Keith Abney posit in a new report for The Greenwall Foundation (.pdf), scheduled for wide release tomorrow. In other words, we better think long and hard before we unleash our army of super soldiers.

If we don’t, we could find ourselves in big trouble down the road. Among the nightmare scenarios: Botched enhancements could harm the very soldiers they’re meant to help and spawn pricey lawsuits. Tweaked troopers could run afoul of international law, potentially sparking a diplomatic crisis every time the U.S. deploys troops overseas. And poorly planned enhancements could provoke disproportionate responses by America’s enemies, resulting in a potentially devastating arms race.

“With military enhancements and other technologies, the genie’s already out of the bottle: the benefits are too irresistible, and the military-industrial complex still has too much momentum,” Lin says in an e-mail. “The best we can do now is to help develop policies in advance to prepare for these new technologies, not post hoc or after the fact (as we’re seeing with drones and cyberweapons).”

Case in point: On April 18, 2002, a pair of Air Force F-16 fighter pilots returning from a 10-hour mission over Afghanistan saw flashes on the ground 18,000 feet below them. Thinking he and his wingman were under fire by insurgents, Maj. Harry Schmidt dropped a 500-pound laser-guided bomb.

There were no insurgents — just Canadian troops on a live-fire exercise, four of whom were killed in the blast. The Air Force ultimately dropped criminal charges against Schmidt and wingman Maj. William Umbach but did strip them of their wings. In a letter of reprimand, Air Force Lt. Gen. Bruce Carlson accused Schmidt of “willful misconduct” and “gross poor judgment.”

Schmidt countered, saying he was jittery from taking the stimulant Dexedrine, an amphetamine that the Air Force routinely prescribes for pilots flying long missions. “I don’t know what the effect was supposed to be,” Schmidt told Chicago magazine. “All I know is something [was] happening to my body and brain.”

The Food and Drug Administration warns that Dexedrine can cause “new or worse aggressive behavior or hostility.” (.pdf) But the Air Force still blamed the pilots.

The Canadian “friendly fire” tragedy underscores the gap between the technology and policy of military human enhancement. Authorities in the bombing case could have benefited from clearer guidelines for determining whether the drugs, rather than the pilots, were to blame for the accidental deaths. “Are there ethical, legal, psycho-social or operational limits on the extent to which a warfighter may be enhanced?” Lin, Mehlman and Abney ask in their report.

Now imagine a future battlefield teeming with amphetamine-fueled pilots, a cyborg infantry and commanders whose brains have been shocked into achieving otherwise impossible levels of tactical cunning.

These enhancements and others have tremendous combat potential, the researchers state. “Somewhere in between robotics and biomedical research, we might arrive at the perfect future warfighter: one that is part machine and part human, striking a formidable balance between technology and our frailties.”

In this possible mutant future, what enhancements should be regulated by international law, or banned outright? If an implant malfunctions or a drug causes unexpected side effects, who’s responsible? And if one side deploys a terrifying cyborg army, could that spark a devastating arms race as nations scramble to out-enhance each other? “Does the possibility that military enhancements will simply lead to a continuing arms race mean that it is unethical to even begin to research or employ them?” Lin, Mehlman and Abney wonder.

The report authors also question whether the military shouldn’t give potential enhancement subjects the right to opt out, even though those subjects are otherwise bound by military training, rules and discipline. “Should warfighters be required to give their informed consent to being enhanced, and if so, what should that process be?” the researchers ask.

The ethical concerns certainly have precedent. In a series of experiments in the 1970s aimed at developing hallucinogenic weapons, the Pentagon gave soldiers LSD — apparently without the subjects fully understanding the consequences of using the drug. During the Cold War U.S. troops were also exposed to nerve gas, psychochemicals and other toxic substances on an experimental basis and without their consent.

Moreover, it’s theoretically possible that future biological enhancements could be subject to existing international laws and treaties, potentially limiting the enhancements — or prohibiting them outright. But the application of existing laws and treaties is unclear, at best. “Could enhanced warfighters be considered to be ‘weapons’ in themselves and therefore subject to regulation under the Laws of Armed Conflict?” the researchers write. “Or could an enhanced warfighter count as a ‘biological agent’ under the Biological and Toxin Weapons Convention?”

Lin, Mehlman and Abney aren’t sure. To be safe, they propose the military consider several rules when planning an enhancement. Is there a legitimate military purpose? Is it necessary? Do the benefits outweigh the risks? Can subjects’ dignity be maintained and the cost to them minimized? Is there full, informed consent, transparency and are the costs of the enhancement fairly distributed? Finally, are systems in place to hold accountable those overseeing the enhancement?

Whether following these guidelines or others, the Pentagon should start figuring out a framework for military human enhancement now, Lin and his colleagues advise. “In comic books and science fiction, we can suspend disbelief about the details associated with fantastical technologies and abilities, as represented by human enhancements,” they warn. “But in the real world — as life imitates art, and ‘mutant powers’ really are changing the world — the details matter and will require real investigations.”

DARPA’s scary-looking Robot Mule can maneuver in an urban environment and take verbal commands


LS3 Four-Legged Robot Plays Follow the Leader

Testing shows advances in robot’s autonomy, maneuverability and recovery

darpa.mil | Dec 19, 2012

For the past two weeks, in the woods of central Virginia around Fort Pickett, the Legged Squad Support System (LS3) four-legged robot has been showing off its capabilities during field testing. Working with the Marine Corps Warfighting Laboratory (MCWL), researchers from DARPA’s LS3 program demonstrated new advances in the robot’s control, stability and maneuverability, including “Leader Follow” decision making, enhanced roll recovery, exact foot placement over rough terrain, the ability to maneuver in an urban environment, and verbal command capability.

The LS3 program seeks to demonstrate that a highly mobile, semi-autonomous legged robot can carry 400 lbs of a squad’s equipment, follow squad members through rugged terrain and interact with troops in a natural way, similar to a trained animal with its handler. The robot would also be able to maneuver at night and serve as a mobile auxiliary power source for the squad, so troops can recharge batteries for radios and handheld devices while on patrol.

“This was the first time DARPA and MCWL were able to get LS3 out on the testing grounds together to simulate military-relevant training conditions,” said Lt. Col. Joseph Hitt, DARPA program manager. “The robot’s performance in the field expanded on our expectations, demonstrating, for example, how voice commands and “follow the leader” capability would enhance the robot’s ability to interact with warfighters. We were able to put the robot through difficult natural terrain and test its ability to right itself with minimal interaction from humans.”

Video from the testing shows the robot negotiating diverse terrain including ditches, streams, wooded slopes and simulated urban environments. The video also shows the map the LS3 perception system creates to determine the path it takes.

The December testing at Fort Pickett is the first in a series of planned demonstrations that will test the robot’s capabilities across different environments as development continues through the first half of 2014.

The DARPA platform developer for the LS3 system is Boston Dynamics of Waltham, Mass.

Ron Paul’s transhumanist Bilderberg financier Peter Thiel looks forward to a computerized system of robotic justice after “Singularity”


Will the Singularity Improve the Legal System? Peter Thiel Seems to Think So

The future of law will be computerized.

betabeat.com | Dec 7, 2012

By Patrick Clark

Here’s a Friday afternoon head-scratcher: What will legal systems look like in 1,000 years? No, really. If our arbiters of right and wrong become more highly automated, will we be smoothing over the imperfections of Lady Justice, or placing our respective fates in the hands of heartless machines? What will sentencing guidelines be like after the singularity?

If it’s not clear yet, we’ve been reading an account of a Peter Thiel guest lecture in a Stanford Law School course on legal technology. This is not for the faint of heart.

“So the set of all intelligent machines would be the superset of all aliens,” writes Blake Masters in an essay describing the lecture. “The range and diversity of possible computers is actually much bigger than the range of possible life forms under known rules.”

In other words, who the hell knows. But also, probably we would be better in the hands of computers, and maybe here’s how:

Our human-based legal system depends on the arbitrariness of its actors; that’s sometimes bad, and sometimes good. Bad in the case of a biased jury or a pissed-off judge. Good because if we all got hauled into court every time we broke the law, we’d spend our lives shuttling back and forth from jail.

But if automated legal technology means fewer law-breakers escape the long arm, something will have to give:

If uniformly enforcing current laws would land everyone in jail, and transparency is only increasing, we’ll pretty much have to become a more tolerant society.

In which case, we may join Mr. Thiel in looking forward to a Hal of justice.

Related

Rise of the machines, end of the humans?

Bilderberg steering committee member is Ron Paul’s biggest campaign donor

PayPal founder Thiel: More gigantic corporate monopolies would be better

Ron Paul Wants to Abolish the CIA; His Largest Donor Builds Toys for It

Ron Paul Owned and Operated by National Security State “Spook Central” Billionaire

JPMorgan Chase Presents Leadership Award to Peter Thiel at First Annual StartOut LGBT Entrepreneurship Awards

Risk of a Terminator-Style Robot Uprising to be Studied


technorati.com | Nov 27, 2012

by Adi Gaskell

In the movie Terminator, machines had grown so intelligent that by 2029 they had effectively taken over the planet, seeking to exterminate what remained of the human race along the way.

While that is firmly in the camp of science fiction, a team of researchers from Cambridge, England, are investigating what risk, if any, technology poses to mankind.

The research, conducted by the Centre for the Study of Existential Risk (CSER), will look at the threat posed by technologies such as artificial intelligence, nanotechnology and climate change.

While many of us may think it unlikely that robots will take over Earth, the scientists at the center said that dismissing such possibilities would in itself be ‘dangerous’.

“The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” the researchers wrote on a website set up for the center.

The CSER project has been co-founded by Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn.

“It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Prof Price told the AFP news agency.

“What we’re trying to do is to push it forward in the respectable scientific community.”

Simulated Brain Ramps Up To Include 100 Trillion Synapses


Neurosynaptic Cores: This network of neurosynaptic cores is derived from wiring in the monkey brain. The cores are clustered into regions, also inspired by the brain, and each core is represented by a point along the ring. Arcs connect the different cores to each other. Each core contains 256 neurons, 1024 axons, and 256×1024 synapses. Image: IBM

popsci.com | Nov 19, 2012

By Rebecca Boyle

IBM is developing a cognitive computing system under a DARPA program, and it just hit a major milestone.

The Sequoia supercomputer at Lawrence Livermore National Laboratory, recently crowned world champion of supercomputers, just simulated 530 billion neurons and 100 trillion connections among them — the most powerful brain simulation ever. IBM and LLNL built an unprecedented 2.084 billion neurosynaptic cores, an IBM-designed computer architecture meant to work like a brain.

IBM was careful to say it didn’t build a realistic simulated complete brain– “Rather, we have simulated a novel modular, scalable, non-von-Neumann, ultra-low power, cognitive computing architecture,” IBM researchers say in an abstract (PDF) of their new paper. It meets DARPA’s metric of 100 trillion synapses, which is based on the number of synapses in the human brain. This is part of DARPA’s cognitive computing program, called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE).

To do it, IBM used the cognitive computing chips the company unveiled last year, which are designed to recreate the phenomena between spiking neurons and synapses. More than 2 billion of these cores were divided into 77 brain-inspired regions, with gray matter and white matter connectivity, according to IBM. The gray matter networking comes from modeling, and the white matter networking comes from a detailed map of connections in the macaque brain. The combined total of 530 billion neurons and 100 trillion synapses ran 1,542 times slower than real time — actually quite fast, in computing terms.
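
The neuron count can be checked against the per-core figure in the caption above: 2.084 billion cores at 256 neurons each is consistent with the roughly 530 billion neurons reported.

```python
# Neuron count implied by the caption's per-core figure.
cores = 2.084e9
neurons_per_core = 256

total_neurons = cores * neurons_per_core
print(f"{total_neurons / 1e9:.0f} billion neurons")  # ~534 billion, in line
                                                     # with the "530 billion" reported
```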

The ultimate goal is a computer that works like a brain, and can analyze information in real time from multiple sources. Under SyNAPSE, it would also be able to rewire itself dynamically in response to its environment, just like real brains do. It would also have to be very small and low-power, which in some ways will be even more challenging than developing the connections. IBM presented its latest results at the Supercomputing 2012 conference.