Category Archives: Transhumanism

Darpa Implants Could Track Your Stress Level 24/7


Photo: U.S. Air Force

Wired | Feb 3, 2012

By Katie Drummond

Plenty of geeks are already obsessed with self-tracking, from monitoring sleep rhythms to graphing caffeine intake versus productivity. Now, the Department of Defense’s far-out research agency is after the ultimate kind of Quantified Self: Soldiers with implanted body sensors that keep intimate tabs on their health, around the clock.

In a new call for research, Darpa is asking for proposals to devise prototype implantable biosensors. Once inserted under a soldier’s skin, Darpa wants the sensors to provide real-time, accurate measurements of “DoD-relevant biomarkers” including stress hormones, like cortisol, and compounds that signal inflammation, like histamine.

Implantable sensors are only the latest of several Pentagon-backed ventures to track a soldier’s health. Darpa’s already looked into tracking “nutritional biomarkers” to evaluate troops’ diets. And as part of the agency’s “Peak Soldier Performance” program, Darpa studied how one’s genes impact physical ability, and tried to manipulate cellular mitochondria to boost the body’s energy levels.

Sensors alone won’t make troops stronger, smarter or more resilient. But they’d probably offer the kind of information that could. For one thing, the sensors would provide military docs an array of reliable info about the health of every single soldier. Plus, they’d tell leaders how a soldier’s body stood up to grueling physical training or a tough deployment. Tracking changes in the body’s endocrine system, for example, might tell a physician that a soldier is increasingly sleep deprived. Or observing chronically increased inflammation levels might tell a team leader that trainee number five isn’t cut out for the Navy SEALs.
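As a rough illustration of the kind of rule such monitoring might apply, the sketch below flags chronically elevated inflammation from a series of daily readings. The biomarker, threshold, and window length are illustrative assumptions, not anything specified in Darpa's solicitation.

```python
# Hypothetical sketch: flag chronically elevated inflammation from daily readings.
# The threshold and window length are illustrative assumptions, not DoD requirements.
from typing import Sequence

def chronically_elevated(readings: Sequence[float],
                         threshold: float = 1.5,  # assumed multiple of personal baseline
                         window: int = 7) -> bool:
    """True if each of the last `window` readings exceeds `threshold` times baseline
    (readings are assumed normalized so that baseline == 1.0)."""
    if len(readings) < window:
        return False
    return all(r > threshold for r in readings[-window:])

# Example: a week of normalized histamine levels, each at 1.8x baseline
print(chronically_elevated([1.8] * 7))  # True -> flag for a closer look by a physician
```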

Real-time sensors would also solve plenty of problems where warzone medical care is concerned. It’s not easy to take a urine test in the middle of a firefight. Darpa’s solicitation notes that health care often suffers because of “overnight shipping to a centralized laboratory,” and the “collection, processing and handling” that can mar specimens in transit.

Besides, urine samples and blood tests are hardly as personalized as an implanted sensor would be. A system that tracks several biomarkers could offer a robust and real-time analysis of how, say, a soldier’s sleeping patterns or dietary choices affect his or her physical performance.

Far out as the idea sounds, scientists have already made impressive strides toward implantable biosensors. A team at Clemson University, with Pentagon funding, has devised a sensor that can be implanted for short periods to monitor the well-being of injured patients. Another group, at Tufts University, is making biosensors out of silk, which they think will be easier to introduce into bodily tissues. Some companies are even getting into niche implants, most notably those to monitor glucose levels among diabetics.

Still, plenty of challenges persist. For one, biocompatibility — the ability of the sensor to integrate into the body, without being “walled off” by surrounding tissues — is still a limiting factor in determining whether a sensor will even work, not to mention what it can measure and how long it’ll last. And Darpa’s ideal sensors don’t just need to be biocompatible. They’ve also got to offer extremely accurate information on several different biomarkers, and have a long enough lifespan to avoid frequent replacement.

Of course, a sensor that tracked every estrogen uptick and cortisol dip would be a self-tracker’s wet dream and a major aid for doctors — whether civilian or military. It’s also got some vaguely dystopian connotations, like the prospect of job hiring and firing based on, say, a body that’s got less than optimal stress responses.

But don’t panic just yet. For now, Darpa only wants prototypes tested on “biospecimens and animal models.”

Neuroscience could mean soldiers controlling weapons with minds


Medevac troops from the American 451st air expeditionary wing look out from their Pavehawk helicopter while heading to pick up casualties in Kandahar, Afghanistan. Photograph: Sean Smith for the Guardian

Neuroscience breakthroughs could be harnessed by military and law enforcers, says Royal Society report

Guardian | Feb 6, 2012

by Ian Sample

Soldiers could have their minds plugged directly into weapons systems, undergo brain scans during recruitment and take courses of neural stimulation to boost their learning, if the armed forces embrace the latest developments in neuroscience to hone the performance of their troops.

These scenarios are described in a report into the military and law enforcement uses of neuroscience, published on Tuesday, which also highlights a raft of legal and ethical concerns that innovations in the field may bring.

The report by the Royal Society, the UK’s national academy of science, says that while the rapid advance of neuroscience is expected to benefit society and improve treatments for brain disease and mental illness, it also has substantial security applications that should be carefully analysed.

The report’s authors also anticipate new designer drugs that boost performance, make captives more talkative and make enemy troops fall asleep.

“Neuroscience will have more of an impact in the future,” said Rod Flower, chair of the report’s working group.

“People can see a lot of possibilities, but so far very few have made their way through to actual use.

“All leaps forward start out this way. You have a groundswell of ideas and suddenly you get a step change.”

The authors argue that while hostile uses of neuroscience and related technologies are ever more likely, scientists remain almost oblivious to the dual uses of their research.

The report calls for a fresh effort to educate neuroscientists about such uses of the work early in their careers.

Some techniques used widely in neuroscience are on the brink of being adopted by the military to improve the training of soldiers, pilots and other personnel.

A growing body of research suggests that passing weak electrical signals through the skull, using transcranial direct current stimulation (tDCS), can improve people’s performance in some tasks.

One study cited by the report described how US neuroscientists employed tDCS to improve people’s ability to spot roadside bombs, snipers and other hidden threats in a virtual reality training programme used by US troops bound for the Middle East.

“Those who had tDCS learned to spot the targets much quicker,” said Vince Clark, a cognitive neuroscientist and lead author on the study at the University of New Mexico. “Their accuracy increased twice as fast as those who had minimal brain stimulation. I was shocked that the effect was so large.”

Clark, whose wider research on tDCS could lead to radical therapies for those with dementia, psychiatric disorders and learning difficulties, admits to a tension in knowing that neuroscience will be used by the military.

“As a scientist I dislike that someone might be hurt by my work. I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don’t know how to deal with that.

“If I stop my work, the people who might be helped won’t be helped. Almost any technology has a defence application.”

Research with tDCS is in its infancy, but work so far suggests it might help people by boosting their attention and memory. According to the Royal Society report, when used with brain imaging systems, tDCS “may prove to be the much sought-after tool to enhance learning in a military context”.

One of the report’s most striking scenarios involves the use of devices called brain-machine interfaces (BMIs) to connect people’s brains directly to military technology, including drones and other weapons systems.

The work builds on research that has enabled people to control cursors and artificial limbs through BMIs that read their brain signals.

“Since the human brain can process images, such as targets, much faster than the subject is consciously aware of, a neurally interfaced weapons system could provide significant advantages over other system control methods in terms of speed and accuracy,” the report states.

The authors go on to stress the ethical and legal concerns that surround the use of BMIs by the military. Flower, a professor of pharmacology at the William Harvey Research Institute at Barts and the London hospital, said: “If you are controlling a drone and you shoot the wrong target or bomb a wedding party, who is responsible for that action? Is it you or the BMI?

“There’s a blurring of the line between individual responsibility and the functioning of the machine. Where do you stop and the machine begin?”

Another tool expected to enter military use is the EEG (electroencephalogram), which uses a hairnet of electrodes to record brainwaves through the skull. With a technique called “neurofeedback”, people can learn to control their brainwaves and improve their skills.

According to the report, the technique has been shown to improve training in golfers and archers.

The US military research organisation, Darpa, has already used EEG to help spot targets in satellite images that were missed by the person screening them. The EEG traces revealed that the brain sometimes noticed targets but failed to make them conscious thoughts. Staff used the EEG traces to select a group of images for closer inspection and improved their target detection threefold, the report notes.
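One way to picture the triage step the report describes is to rank the screened images by the size of the EEG response they evoked and send the top-ranked ones back for a second look. The sketch below is a toy version built on that assumption; the scoring and cutoff are invented for illustration and are not Darpa's actual method.

```python
# Toy sketch of EEG-assisted image triage (illustrative only).
# Assumes each screened image has an associated EEG response score,
# e.g. the amplitude of a P300-like component; the cutoff is arbitrary.

def select_for_reinspection(images, eeg_scores, top_fraction=0.1):
    """Return the images whose EEG response ranks in the top fraction."""
    ranked = sorted(zip(images, eeg_scores), key=lambda pair: pair[1], reverse=True)
    keep = max(1, int(len(ranked) * top_fraction))
    return [img for img, _ in ranked[:keep]]

images = [f"tile_{i:03d}" for i in range(100)]
scores = [0.2] * 100
scores[17] = 0.9   # a strong response the analyst never consciously reported
print(select_for_reinspection(images, scores))  # ['tile_017', ...]
```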

Work on brain connectivity has already raised the prospect of using scans to select fast learners during recruitment drives.

Research last year by Scott Grafton at the University of California, Santa Barbara, drew on functional magnetic resonance imaging (fMRI) scans to measure the flexibility of brain networks. Grafton’s team found that this flexibility helped predict how quickly a person would learn a new task.

Other studies suggest neuroscience could help distinguish risk-takers from more conservative decision-makers, and so help with assessments of whether they are better suited to peacekeeping missions or special forces, the report states.

“Informal assessment occurs routinely throughout the military community. The issue is whether adopting more formal techniques based on the results of research in neuroeconomics, neuropsychology and other neuroscience disciplines confers an advantage in decision-making.”

IBM: Resistance is unnecessary, the Borg will be assimilated comfortably


“Star Trek” captain Jean-Luc Picard (Patrick Stewart) is fitted with gizmos for a fictional Borg transformation. The blending of humans and hardware will probably be more artful in real life by 2111. Paramount Pictures

IBM thinks about the next 100 years

MSNBC | Jun 16, 2011

By Alan Boyle

A hundred years from now, will we be assimilated by the machines? Or will we assimilate them? These are the kinds of issues facing International Business Machines as the company begins its second 100 years.

Right now, most folks are thinking about the past 100 years at IBM, which is celebrating the centennial of its founding on Thursday. But for Bernard Meyerson, the company’s vice president of innovation, it’s all about the next century.

“That’s pretty much what we think about,” Meyerson told me today.

Meyerson has plenty to look back on, including his own not-so-small part in IBM’s past innovations. When his cell phone dropped the connection during our telephone conversation, he called back and casually mentioned that he had a hand in creating the transistors built into that cell phone. And when I asked him to explain, he said, “I actually invented the technology known as silicon-germanium.”

It turns out that IBM has played a behind-the-scenes role in all sorts of technologies, ranging from semiconductor development to barcodes to Wi-Fi. “IBM is a funny company,” Meyerson said. “We don’t force you to put a little sticker on anything that says, ‘We’re the smart guys.'”

But enough about the past: What about the future? “Going forward, you have tremendous opportunities,” particularly when it comes to making sense of the huge databases that are being built up in all sorts of fields, Meyerson said. For example, imagine a system that can take medical records from the 285 million people around the world with diabetes, anonymize those records and analyze them, looking for potential new treatments or preventive measures.

“The fact is, there is no mechanism today that could do that, and the reason is that medical data is unstructured,” he said. There’s little consistency in how the records are kept, and medical conditions might be described in different ways by different doctors.

When you put together the volumes of data and the numbers of people that have to be covered in these massive, unstructured data sets, the figures mount up to quintillions of bytes. That’s the challenge facing new types of computing tools — for example, the Watson supercomputer, which won a highly publicized “Jeopardy” quiz-show match earlier this year. Now Watson is being put to work on a tougher task: making sense of medical records, which is just the kind of job Meyerson has in mind.
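To make the idea concrete, here is one very simplified way records could be de-identified before pooled analysis: direct identifiers are replaced with a salted one-way hash and only the fields needed for the study are kept. This is a generic sketch of the concept Meyerson describes, not IBM's or Watson's actual pipeline.

```python
# Minimal sketch of de-identifying records before pooled analysis.
# Purely illustrative; not IBM's or Watson's actual method.
import hashlib

SALT = "replace-with-a-secret-salt"  # assumption: a secret salt held by the data steward

def anonymize(record: dict) -> dict:
    """Replace the patient identifier with a salted hash and keep only study fields."""
    token = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()
    return {
        "token": token,
        "age": record["age"],
        "hba1c": record["hba1c"],        # example clinical field for a diabetes study
        "treatment": record["treatment"],
    }

raw = {"patient_id": "12345", "name": "Jane Doe", "age": 54,
       "hba1c": 7.9, "treatment": "metformin"}
print(anonymize(raw))  # name and raw ID are gone; the token still links a patient's visits
```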

Still other challenges await. Watson-style computers could digest the millions of data points involved in tracking the flow of highway traffic, then develop models to predict where the tie-ups could arise before they actually happen. The computers of the next century will have to handle a wide range of “big data” challenges, ranging from climate modeling to natural-language search engines for multimedia.

Meyerson doesn’t expect Watson to answer that challenge completely. A hundred years from now, Watson will almost certainly be considered a quaint antique, much like the tabulating machines that were made back in 1911.

“Watson specifically is not the issue, as much as the combination of Watson’s ability to interpret natural language, the capacity to store ‘big data’ and apply data analytics to come up with solutions for society,” he said. “In the absence of natural language, you’re going to have a short, unhappy life attempting this work. Without that key ingredient, how are you going to take the interaction of humans and machines to the next level and make it easy?”

What will the next level be in the year 2111? “Honestly, at 100 years I’m genuinely unsure,” Meyerson said. The past century has shown that the pace of technological advancement can be highly variable, depending on what kinds of breakthroughs come to the fore. But if Meyerson had to bet on one particular game-changing technology, it would be coming up with a direct interface between computing circuits and the human brain.

“If it turns out that there is a very natural way to communicate data back and forth without being obtrusive, then the whole world changes,” he told me. This wouldn’t be a Borg-like assimilation, in which humans look increasingly like machines. Rather, the machines would blend into the human body.

Does that sound like a grand dream for the next century? Or a nightmare?

Humans become ‘pets’ in rise of the machines: Apple co-founder


Steve Wozniak: “We’re going to become the pets, the dogs of the house.” Photo: Bloomberg

theage.com.au | June 6, 2011

by Tony Bartlett

Machines have won the war and the human race is destined to become little more than house pets.

That’s the future according to one of the smartest geeks on the planet, Steve Wozniak, who co-founded Apple Computer and is convinced that in his lifetime he will see computer intelligence equal that of humans.

The Woz is to the technological world what The Fonz was to leather jackets and denim, and when he talks, the global industry listens.

As technology explodes, humans are not going to be needed so much in the future and will settle back into a life of ease, Mr Wozniak told a business congress on the Gold Coast on Friday.

“We’re already creating the superior beings, I think we lost the battle to the machines long ago,” he said.

“We’re going to become the pets, the dogs of the house.”

He said true artificial intelligence would creep up on mankind all of a sudden, like an accident.

“Every time we create new technology we’re creating stuff to do the work we used to do and we’re making ourselves less meaningful, less relevant.

“Why are we going to need ourselves so much in the future? We’re just going to have the easy life,” he said.

Mr Wozniak said the Singularity, in which a machine seems like a human being and has feelings, can think and be motivated, had seemed an impossible dream to him years ago.

When he started Apple, he said, he never thought a computer would be powerful enough to hold an entire song and today we can fit 50 movies on a little disc in an iPhone.

“You don’t realise it’s happened until it’s there and I think that awareness of machines is getting very, very close and we’re getting close to where a machine will really understand you,” Mr Wozniak said.

“My comment about the machines winning the war is partly a joke, but we’ve accidentally already put so much in place that we can’t get rid of from our lives.

“Once we have machines doing our high-level thinking, there’s so little need for ourselves and you can’t ever undo it – you can never turn them off.”

Natalie Portman’s Dad’s Bizarre Reproductive Science Novel

Misconception’s prologue opens with a doctor inspecting the pubic region of a 12-year-old male patient. He first notes that the boy has no pubic hair. Then, the doctor takes “oddballs” (i.e. plastic balls) and measures them against the young boy’s testicles, noting that the boy’s balls are “size one.” (Pg. xiii) The doctor proceeds to measure the boy’s penis with a yardstick, noting that it is 1.5-inches long. He informs the boy and his doting mother that the child has “Fragile Y Syndrome,” meaning that his X chromosome is fine, but his Y chromosome is weak. In other words: “His penis and testicles will always be small” and he’ll grow up tall and skinny with “a micropenis and two microtesticles.” (Pg. xiii) The story then flashes forward 28 years—the boy has grown up to become Hugh Nicholson, the head of a cloning facility where he and his partner, Dr. Jeremy “Cody” Coddington, duplicate dogs for up to $100,000 apiece.

Daily Beast | Apr 26, 2011

by Marlow Stern

NEW YORK – As Natalie Portman enters the final trimester of her pregnancy, her father, a reproductive specialist, is shopping his debut novel. From micropenises to incestuous in vitro, Marlow Stern unearths the most ridiculous parts of the self-described “fertility-thriller.”

While tearfully accepting her Best Actress Oscar for Black Swan, a pregnant Natalie Portman thanked “my beautiful love Benjamin Millepied, who choreographed the film and has now given me my most important role of my life.” But before gushing about her fiancé, Portman thanked her parents “for giving [her] the opportunity to work from such an early age.” Around the same time, Portman’s father, Dr. Avner Hershlag, was working too… on his self-published debut novel, Misconception.

One of the country’s most renowned reproductive specialists, the Yale-educated Hershlag is Director of the Donor Egg and Preimplantation Genetic Diagnosis, Director of Fertility Laboratories, and Medical Director of the In Vitro Fertilization Program at the Center for Human Reproduction in North Shore, Long Island. Clearly, he knows a lot about making babies and hopefully for Portman, her situation is far from the plot of Misconception.

Dr. Hershlag’s debut novel is set in Washington D.C. and centers on Dr. Anya Krim, the fertility specialist for the President and his wife. She delivers a deformed baby with “ambiguous genitalia” who later goes missing and whose mother is found dead. Krim then discovers that Megan Tanner, a senator’s daughter who has been in a coma for two years, is also pregnant. Senator Tanner is the Majority Whip, and chairman of a Senate committee overseeing a controversial Embryonic Stem Cell Bill in Congress. If Dr. Krim—a rape victim herself—didn’t have enough on her petri dish, the First Lady’s last-ditch effort to conceive goes haywire when her embryos are kidnapped from the lab.

And that’s only the beginning.

Full Story

Group working to build a statue of RoboCop cyborg in Detroit


The group working to build the RoboCop statue has reached its fundraising goal of $50,000 with the help of a social networking campaign. Photo: REX

A group working to build a statue of the fictional crime-fighting cyborg RoboCop in the city said it has reached its fund-raising goal of $50,000 (£31,000) after a social networking campaign exploded in support of the project.

Telegraph | Feb 17, 2011

The next step: convincing the mayor and city officials it’s a good idea.

“I am very positive that it’s gonna happen,” organiser Brandon Walley said on Wednesday.

The 10-day-old RoboCop saga started innocently enough when Detroit Mayor Dave Bing’s social media manager answered a Twitter query about a possible statue. That response – “There are not any plans to erect a statue to Robocop. Thank you for the suggestion” – led to a firestorm of commentary online, with Twitter users making it a top trending topic for days.

As recently as Wednesday morning, “RoboCop” was still one of the 10 most-searched terms on Yahoo!

Imagination Station, a Detroit-based non-profit that latched on to the topic’s viral fervour, set up a way for backers to donate to the project via the crowd-funding website Kickstarter.

The effort yielded more than $25,000 in donations. A private source matched the funds, and now Imagination Station has the $50,000 it has been told it would take to erect such a statue.

Bing, for his part, remains sceptical, and no timetable exists for construction.

“My own personal opinion is that I don’t see where we get a lot of value from that,” the mayor said.

Walley said he sees potential for the planned 7-foot (2.13-meter) sculpture in the city’s tourist district, hoping RoboCop would draw the curious, just as the Rocky Balboa likeness does in Philadelphia and the Fonzie statue known as “Bronze Fonz” does in Milwaukee.

Plus, it’s just a cool idea, said Walley, 35, who lives in the city.

“There’s definitely a pop icon, kitsch factor to it, for sure, but it’s definitely in the light-humorous end. It’s not funny in that it’s a joke on Detroit or anything like that,” he said, referencing fears the statue would play to the perception that Detroit is plagued by crime and violence.

The 1980s science fiction film was set in a futuristic Detroit in which crime ran rampant and centred on police officer Alex Murphy (played by Peter Weller), who is killed in the line of duty and resurrected as an alloy-encased part-man, part-machine being prone to equal parts crime-fighting and butt-kicking.

Weller, who was recently nominated for a Screen Actors Guild Award along with the other members of the “Dexter” cast and is pursuing a Ph.D. in Italian Renaissance art history at UCLA, was clear on one issue: He doesn’t care about the statue depicting him personally.

“I think it’s a great thing as far as a public service. As far as a personal emblem, it doesn’t make any difference to me,” he said.

Time Magazine: Matrix cyborgs coming to replace humans in 2045

2045: The Year Man Becomes Immortal

Time | Feb 10, 2011

By Lev Grossman

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I’ve Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil’s age than by anything he’d actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she’d been President Lyndon Johnson’s first-grade teacher.

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil’s real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we’re approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they’re getting faster is increasing.

True? True.
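A quick back-of-the-envelope illustration of “faster faster”: assume, purely for illustration, that performance doubles every two years. The rule never changes, yet the absolute gain in each successive period keeps growing.

```python
# Illustrative only: assume computing performance doubles every two years.
perf = 1.0
for year in range(0, 11, 2):
    print(f"year {year:2d}: relative performance {perf:5.1f}")
    perf *= 2
# The gain from year 8 to year 10 (16 -> 32) exceeds all earlier gains combined,
# so the speed-up itself keeps speeding up in absolute terms.
```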

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.

Probably. It’s impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be. But there are a lot of theories about it. Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

Full Story

Artist has camera surgically inserted so he can have ‘eyes at the back of his head’ for a year


Mr Bilal will wear a lens cap on the camera when he teaches at Tisch School of the Arts in New York after an uproar over privacy issues

A New York University professor has had a camera implanted in the back of his head – and it was done all in the name of art.

Daily Mail | Dec 3, 2010

Iraqi-born Wafaa Bilal had the procedure done at a piercing studio last month for a project commissioned by a museum in Qatar.

The camera will broadcast everything he ‘sees’ to the public, with the images transmitted to Mathaf: Arab Museum of Modern Art in time for its December 30 opening.

The project, called The 3rd I, will take snap-shot photographs each minute of his everyday activities for one year, Mr Bilal said.

The waterproof camera will capture everything from him taking a shower or walking down the street to even having sex.

Mr Bilal shaved a square patch of hair before a titanium plate was inserted inside the back of his head.

A small camera was mounted on the base plate which connects magnetically.

A cable runs from the camera to a computer, which he carries in a custom-made shoulder bag.
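The mechanics described, one snapshot a minute pushed from a body-worn camera to a remote server, amount to a simple capture-and-upload loop. The sketch below is a hypothetical mock-up of such a loop; the capture function, endpoint URL, and retry handling are assumptions, not details of Bilal's actual rig.

```python
# Hypothetical mock-up of a "one photo per minute" capture-and-upload loop.
# capture_image() and the upload endpoint are stand-ins, not Bilal's actual setup.
import time, datetime, requests

UPLOAD_URL = "https://example.org/3rd-i/upload"  # placeholder endpoint

def capture_image() -> bytes:
    """Stand-in for grabbing a frame from the head-mounted camera."""
    return b"...jpeg bytes..."

while True:
    frame = capture_image()
    stamp = datetime.datetime.utcnow().isoformat()
    try:
        requests.post(UPLOAD_URL, files={"photo": (f"{stamp}.jpg", frame)}, timeout=30)
    except requests.RequestException:
        pass  # in practice the frame would be queued and retried later
    time.sleep(60)  # one snapshot each minute, as in the project description
```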

‘Yes it hurt a lot’, he said in response to whether the procedure carried out under local anaesthetic was painful.

‘I wanted to lose that subjectivity of knowingly taking photographs’, Mr Bilal said. ‘At the same time I wanted to capture everyday mundane images.’

He says the project is ‘a comment on the inaccessibility of time, and the inability to capture memory and experience.’

After an uproar over privacy issues, Mr Bilal has agreed to conserve the privacy of his students at Tisch School of the Arts by wearing a lens cap on the camera when he’s on campus.

University spokesman John Beckman said: ‘We place a high value on his right to free expression in his creative work as an artist. But as a school of the arts, we also take seriously the privacy issues his project raises.’

The Qatari museum’s curator Till Fellrath said: ‘He’s not really a photographer, he’s not really a video artist, he’s not really a performance artist.

‘Whatever artwork he creates, he doesn’t want people to just look at it, he wants them to participate in it.’

Having an ‘eye’ at the back of his head is not Mr Bilal’s first controversial project.

In 2008, he created an artwork called ‘Virtual Jihadi’ in which he added himself as a character in a video game, posing as a suicide bomber hunting George W Bush.

The exhibition was eventually shut down after a wave of protests.

In June this year Mr Bilal had the names of Iraqi cities tattooed on his back with dots to mark American and Iraqi casualties.

Mr Bilal believes his artwork has a deeper meaning than just to get attention.

He said: ‘I see myself as a mirror reflecting some of the social conditions that we ignore’.

Future soldiers may be wearing ‘Iron Man’ suits


Raytheon is designing in real life what comic books and Hollywood have promised for years: a real-life Iron Man-like suit.

CNN | Nov 11, 2010

By Eric Marrapodi and Chris Lawrence

A lunchtime crowd is gathering beside the parking lot at Raytheon Sarcos, the defense contractor, on a recent day in Salt Lake City. White-collar workers from nearby office parks stand with their yogurt cups and sandwiches, watching with quiet awe as a man in a metal suit — sort of half-man, half-robot — performs superhuman feats of strength.

This may be the closest these people will get to a real-life “Iron Man,” the character from the comic books and hit movies.

Inside a prosthetic shell of metal and hydraulics, Raytheon test engineer Rex Jameson is putting an XOS-2 exoskeleton through its paces.

As the crowd watches, Jameson uses his robot hydraulic arm to shadowbox, break three inches of pine boards and toss around 72-pound ammunition cases like a bored contestant on the “World’s Strongest Man.”

The suit moves as he moves and amplifies his strength 17-fold. It doesn’t fly, though.

“You don’t have this immense feeling of strength,” Jameson says. “It’s just when you go to do something that you couldn’t do without it, then that’s when you notice it.”

Jameson is part of a team designing in real life what comic books and Hollywood have promised for years: bringing an “Iron Man”-like suit to the battlefield.

Raytheon is seeking to develop the suits to help the U.S. military carry supplies, and claims that one operator in an exoskeleton suit can do the work of two to three soldiers. If all goes as planned, the company hopes to see “Iron Man” suits deployed in the field by 2015.

“The logistics personnel in the military typically move 16,000 pounds a day, which is an awful lot of load,” said Fraser Smith, vice president of operations for Raytheon Sarcos. The XOS-2 suit can be used in tight spaces where a forklift cannot.

And with the extra strength the robot gives the operator, “that means you exert one pound, and it exerts 17. That’s a major amplification of strength and that’s all load the person doesn’t have to carry themselves,” Smith said.

Jameson may be about the furthest thing from the fictional designer of “Iron Man,” playboy billionaire Tony Stark. “I roll in a minivan,” said the married father of three.

The black-painted steel, aluminum and hydraulic pumps of the wearable robotic suit wrap around Jameson’s slight frame, mirroring the human skeleton in form. Its structure runs up the side of Jameson’s legs and arms. Its backbone carries the load of the machine, and on this day is tethered to hydraulic power and a team of engineers.

Full Story

Meet RatCar, A Japanese Robot Car Controlled By A Rat’s Brain


RatCar involves implanted neural electrodes that allow a rat’s brain signals to control a motorized robot. Photo: IEEE Spectrum

PopSci | Oct 4, 2010

By Rebecca Boyle

Robots are a major part of the cultural fabric of Japan; they’re performing weddings, buying groceries and keeping people company. A team of researchers at the University of Tokyo is taking this robotic cultural immersion a step further — they’re making animal-robot hybrids. Sort of.

RatCar is a brain-machine interface that uses a rat’s brain signals to control a motorized robot. The rat hangs in the air, and the robot does what the rat’s limbs would do. It’s far from the only brain-robot locomotion contraption, but it’s arguably one of the strangest.

Osamu Fukayama and colleagues developed RatCar to study whether a small vehicle could be controlled by the brain signals that move rats’ limbs. Unlike less-invasive, EEG-based brain-machine interfaces, the system involves implanting tiny neural electrodes in a rat’s brain.

The rat is suspended from a small lightweight “neuro-robotic platform,” as IEEE Spectrum reports. The goal is to make the vehicle and the rat work together to move forward. Brain-control interfaces like this could be a boon for people with locked-in syndrome or various other disabilities.

The system also includes several models and algorithms that estimate the correlation between recorded neural signals and the rat’s movement, Fukayama explains.

Researchers trained the rats by making them tow the car, motors turned off, around an enclosed area. A camera tracked the rats’ movement and fed data into a modeling program, which pieced together signals from the motor cortex. Then, the rats were hung from the car so their limbs barely touched the floor. The researchers switched the motors on, and as they tried to move, their neural signals were used to drive the car. Six out of eight rats adapted well and were able to get around with the car, according to IEEE Spectrum.
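The training procedure described here, recording neural activity while a camera logs the rat's actual movement, fitting a model that relates the two, and then running the model to drive the motors, is essentially fitting a decoder. Below is a minimal sketch of that idea using an ordinary least-squares linear decoder; the real RatCar models and signal processing are more involved, and the electrode count and data shapes are illustrative assumptions.

```python
# Minimal sketch of the decoder idea behind RatCar-style control (illustrative only;
# the actual system uses more elaborate models and signal processing).
import numpy as np

# Training phase: neural firing rates recorded while a camera logs true velocities.
rng = np.random.default_rng(0)
firing_rates = rng.poisson(5.0, size=(600, 16)).astype(float)  # 600 samples, 16 electrodes (assumed)
true_weights = rng.normal(size=(16, 2))                         # unknown mapping to (forward, turn)
velocities = firing_rates @ true_weights + rng.normal(0.0, 0.5, size=(600, 2))

# Fit a linear decoder: velocities ~= firing_rates @ W  (least squares).
W, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

# Driving phase: new neural activity is decoded into motor commands for the car.
new_activity = rng.poisson(5.0, size=(1, 16)).astype(float)
forward_cmd, turn_cmd = (new_activity @ W)[0]
print(f"forward={forward_cmd:.2f}, turn={turn_cmd:.2f}")
```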

It’s not clear how much the rats’ wriggling might have affected the car’s movement, however. Fukayama and colleagues Takafumi Suzuki and Kunihiko Mabuchi want to perform more experiments to address that question.

They have been working on RatCar for several years and presented their most recent work last month at the IEEE Engineering in Medicine and Biology Society annual conference in Buenos Aires.