Reality eclipsed science fiction again in February when the World Robot Conference set some fairly high expectations for so-called next-generation robots. The World Robot Declaration, issued from the conference headquarters in Fukuoka, Japan, establishes a basis for equality between humans and robotkind.
It states that:
1. Next-generation robots will be partners that coexist with human beings.
2. Next-generation robots will assist human beings both physically and psychologically.
3. Next-generation robots will contribute to the realization of a safe and peaceful society.
The tone of these guiding principles assumes that robots will have the capacity to coexist with humans, help build a better world and contribute to a benevolent society.
But without a foundation that clearly makes robots subservient to us, are we putting ourselves at risk? Could a machine like HAL 9000 start killing people, or could a suddenly self-aware Matrix or SkyNet decide that humanity is the real problem?
Matt Deeds, a robotics enthusiast since his days as an MIT grad student and currently a developer at IBM Canada, says there's no need to be concerned - for now.
"In the past decade there's been a big push to have computers that can make their own decisions," he says. "But nothing is even close to being self-aware, and that's something you'd need to get the SkyNet scenario.
"When you have something as sophisticated as a robot, one of the things you concentrate on is emergent behaviour, watching out for situations when it does something you didn't explicitly tell it to do," says Deeds.
"It's a principle of AI (artificial intelligence) development. You want to make it predictable enough that it doesn't do anything bad, but emergent enough so it does something new."
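The tension Deeds describes - simple, predictable rules producing behaviour nobody explicitly programmed - has a classic textbook illustration (not one from the article): Langton's ant. The sketch below, an illustrative toy rather than anything from a real robotics system, shows an "agent" governed by just two rules that nonetheless produces large-scale patterns its programmer never wrote down.

```python
def langtons_ant(steps):
    """Run Langton's ant for a number of steps; return the set of black cells.

    Rules (the entire 'program'):
      - On a white cell: turn right, flip the cell to black, move forward.
      - On a black cell: turn left, flip the cell to white, move forward.
    """
    black = set()       # cells currently black; everything else is white
    x, y = 0, 0         # ant's position on an unbounded grid
    dx, dy = 0, -1      # ant's heading ("up")
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))
            dx, dy = -dy, dx    # rotate 90 degrees one way
        else:
            black.add((x, y))
            dx, dy = dy, -dx    # rotate 90 degrees the other way
        x, y = x + dx, y + dy   # step forward
    return black

# No rule mentions it, yet after roughly 10,000 chaotic steps the ant
# settles into an endlessly repeating diagonal "highway" - emergent
# behaviour in exactly Deeds's sense: new, but never explicitly coded.
```

The point of the toy is Deeds's point: even when every individual rule is trivially auditable, the system-level behaviour can still surprise its designers.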
Deeds recognizes the possibility that unpredictability could become deadly volatility.
"You might have a military vehicle that decides that its best course of action is to destroy something you didn't want it to," he says. "And that might not be a malfunction. It could just be part of the programming that emerges in a way you didn't expect."
So what kind of user-friendly programming are we looking for?
That depends on geography and culture.
In Asia, an emerging trend is the creation of robots to care for growing elderly populations.
Mitsubishi's impressive Wakamaru is a sophisticated 3-foot-tall companion for elderly people that speaks (and listens), reminds them to take their medicine on time and has an embedded cellphone so it can make calls in case of emergencies. It also has two camera eyes, so working people can monitor their parents from their offices over the Internet.
Yoshiyuki Sankai, a professor and engineer at Japan's Tsukuba University, invented a robot suit designed to help disabled people who have lost strength in their legs move around on their own. It's a motorized, battery-operated pair of pants that detects faint electrical impulses from leg muscles and translates them into movements. Even a weak person can use it to walk at a rate of 4 kilometres an hour with little physical exertion.
And it's not just a laboratory development. The robot suit will be available commercially next year.
In the U.S., in contrast, technologists are working on exoskeleton technology similar to Sankai's, but without the gentle touch.
The Berkeley Lower Extremity Exoskeleton, which enhances human strength and endurance, is being developed at the University of California with funding provided by the Defense Advanced Research Projects Agency (DARPA).
It's part of the Pentagon's efforts to create better soldiers with greater capabilities. To a soldier wearing the 100-pound exoskeleton, a 70-pound backpack would feel as if it weighed a mere 6 pounds. That means troops could carry bigger, badder guns with more killing power.
But DARPA has seen signs of technology's limits. The same organization recently sponsored the sensational DARPA Grand Challenge, in which unmanned vehicles competed in a 200-mile race across the Mojave Desert. Despite the $1 million prize, none of the entrants made it to the finish line. They just weren't dexterous or (artificially) intelligent enough to pull it off.
It's interesting to contrast Asian humanistic successes with American militaristic failures. Perhaps Asian countries should consider restricting exports of robotics technology the same way the U.S. currently restricts exports of encryption technology.
That's the type of safeguard that would most likely prevent SkyNet-like scenarios - for now.