Hod Lipson, a mechanical engineer who directs the Creative Machines Lab at Columbia University, has shaped most of his career around what some people in his industry have called the c-word — consciousness.
“This topic was taboo,” said the Israel-born roboticist. “We were almost forbidden from talking about it – ‘Don’t talk about the c-word; you won’t get tenure’ – so in the beginning I had to disguise it, like it was something else.”
That was back in the early 2000s, when Lipson was an assistant professor at Cornell University. He was working to create machines that could note when something was wrong with their hardware and then change their behaviour to compensate for that impairment without the guiding hand of a programmer.
This sort of built-in adaptability, Lipson argued, would become more important as we became more reliant on machines. Robots were being used for surgical procedures, food manufacturing and transportation; the applications for machines seemed pretty much endless, and any error in their functioning, as they became more integrated with our lives, could spell disaster. “We’re literally going to surrender our life to a robot,” he said. “You want these machines to be resilient.”
One way to do this was to take inspiration from nature. Animals, and particularly humans, are good at adapting to changes – an ability that might be a result of millions of years of evolution. This kind of intelligence would be flexible and fast. As machine learning grew more powerful, this goal seemed realizable.
Lipson earned tenure, and his reputation grew. So, over the past couple of years, he began to articulate his fundamental motivation for doing all this work. He began to say the c-word out loud: He wants to create conscious robots.
The first difficulty with studying the c-word is that there is no consensus around what it actually refers to. Some people have tried to taxonomize consciousness, tracing it to functions in the brain or to some more metaphysical substance, but these efforts are hardly conclusive.
Most roboticists tend to skip the philosophy and form their own functional definitions. Lipson, too, settled on a practical criterion for consciousness: the ability to imagine yourself in the future.
One of the earliest self-aware robots to emerge from the Creative Machines Lab had four hinged legs and a black body with sensors attached at different points. By moving around and noting how the information entering its sensors changed, the robot created a stick figure simulation of itself. As the robot continued to move around, it used a machine-learning algorithm to improve the fit between its self-model and its actual body. The robot used this self-image to figure out, in simulation, a method of moving forward. Then it applied this method to its body; it had figured out how to walk without being shown how to walk.
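The loop the four-legged robot followed can be sketched in miniature. The toy below is an illustration of the general idea only, not Lipson's actual method: a "robot" with hidden dynamics babbles random motor commands, records what its sensors report, fits a self-model to that data, and then plans forward motion purely inside the simulated self-model before executing on the real body. All names and the linear dynamics are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden ground truth the robot never sees directly:
# displacement = W_true @ action (plus sensor noise).
W_true = np.array([[0.8, -0.2],
                   [0.1,  0.5]])

def act(action):
    """The real body: returns sensed displacement for a motor command."""
    return W_true @ action + rng.normal(0, 0.01, size=2)

# 1. Babble: try random motor commands and record what the sensors report.
actions = rng.uniform(-1, 1, size=(200, 2))
displacements = np.array([act(a) for a in actions])

# 2. Fit a self-model: least-squares estimate of how commands map to motion,
#    standing in for the machine-learning step in the article.
X, *_ = np.linalg.lstsq(actions, displacements, rcond=None)
W_model = X.T  # predicted displacement = W_model @ action

# 3. Plan in simulation: among candidate commands, keep the one the
#    self-model predicts moves the body furthest in the +x direction.
candidates = rng.uniform(-1, 1, size=(500, 2))
predicted_x = (candidates @ W_model.T)[:, 0]
best = candidates[np.argmax(predicted_x)]

# 4. Execute the plan on the "real" body; it was never shown how to move
#    forward, only how to model itself.
print("forward motion:", act(best)[0])
```

The point of the sketch is the separation of concerns: the body is only used to gather data and to execute the final plan, while the search for a gait happens entirely inside the learned self-model.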
This represented a major step forward, said Boyuan Chen, a roboticist at Duke University who worked in the Creative Machines Lab. “In my previous experience, whenever you trained a robot to do a new capability, you always saw a human on the side,” he said. Recently, Chen and Lipson published a paper in the journal Science Robotics that revealed their newest self-aware machine, a simple two-jointed arm that was fixed to a table. Using cameras set up around it, the robot observed itself as it moved. Initially, it had no sense of where it was in space, but over the course of a couple of hours, with the help of a powerful deep-learning algorithm and a probability model, it was able to pick itself out in the world.