Ashutosh Saxena: Pioneering Work in Humanizing Robots



“My mixer is not working again, Ashutosh! Do you know why?” That question could have echoed in the Saxena household in Bareilly, Uttar Pradesh. The young Ashutosh would usually know the reason for the nonworking appliance — chances were that he’d borrowed some parts to make a robot.
“I was good with electronics. I took parts from mixers, grinders and even a music player to develop my first robot,” says Ashutosh Saxena, now a top robotics researcher and assistant professor at the Computer Science Department at Cornell University.
His first robot was 2 feet tall and could move around, picking up small items from one place and putting them in another, Saxena says. It won him first prize at his high school’s science fair and laid the groundwork for some of the most impressive advances in the field of robotics.
Saxena, 29, heads Cornell’s Personal Robotics Lab, where some of the most exciting and promising work in robotics is under way. His work in robotic perception, in particular, is nothing short of groundbreaking and is set to make a lasting impact on the way we view the potential role of robots in our lives.
“Growing up, I used to read science fiction and was impressed with robots and what they could do,” he says. “I read Isaac Asimov where he portrayed the positive role of robots towards the society. That has been my motivation.”
His father, Dr. Anand Prakash, was a veterinary doctor as well as a bank manager. Saxena describes how these two seemingly opposite careers were related: “When farmers requested a loan, my father used his veterinary skills to assess the value of the animals and determine whether they should be approved for the loan.”
His dad died when Saxena was 8. He and his younger brother, Abhijit, were raised by their mother, Shikha Chandra, an award-winning poet. She was extremely supportive of her son’s interest in robots.
Saxena carried his aptitude and early success with robots into his college years. He enrolled at the Indian Institute of Technology (IIT) in Kanpur, where he developed a wrist-worn device that protects people from electrocution in factory environments. The device senses contact currents on the skin surface and trips the power supply within 10 milliseconds. This work was presented at the National Power Systems Conference in India in 2002.
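In rough outline, and purely as an illustration rather than the device’s actual design, the protection logic amounts to sampling the skin-contact current and signaling the breaker as soon as a safety threshold is crossed, well inside the 10 millisecond window. The threshold, sampling rate and helper functions below are assumptions:

```python
# Hypothetical sketch of the trip logic (not the actual device design):
# sample the skin-contact current and, if it crosses a safety threshold,
# cut the power supply well within the 10 ms window.
import time

TRIP_THRESHOLD_MA = 5.0      # assumed safety threshold in milliamps
SAMPLE_INTERVAL_S = 0.001    # assumed 1 kHz sampling, leaving margin inside 10 ms

def read_contact_current_ma() -> float:
    """Placeholder: a real device would read this from a current sensor."""
    raise NotImplementedError

def trip_power_supply() -> None:
    """Placeholder: a real device would drive a relay or breaker here."""
    raise NotImplementedError

def monitor_loop() -> None:
    while True:
        current_ma = read_contact_current_ma()
        if current_ma >= TRIP_THRESHOLD_MA:
            trip_power_supply()   # cut power before the current becomes dangerous
            break
        time.sleep(SAMPLE_INTERVAL_S)
```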
Another research project he worked on at IIT combined DNA molecules with plastic. “Along with Prof. Sandeep Verma, I also developed a method to transform polystyrene (rigid plastic with excellent electric resistivity) into a moderately conducting polymer by introducing Adenine nucleobases, the ones found in DNA molecules,” Saxena explains. One potential application is to better understand the conductivity of DNA.
These accomplishments secured his acceptance at Stanford University, where he received his master’s in 2006 and Ph.D. in computer science in 2009. “Stanford was a multicultural university on the cutting-edge of technology,” Saxena says.
And he thrived in that environment. With the Stanford 3D Reconstruction Group, he developed Make3D, an algorithm that “takes a two-dimensional image and creates a three-dimensional ‘fly around’ model, giving the viewers access to the scene’s depth and a range of points of view,” Saxena says.
Today, Saxena is considered a thought leader in the theory of robotic perception, or how a robot perceives its surrounding world. His lab at Cornell, which he started from an empty space, now has a team of 15 people, including four Ph.D. students. They are moving beyond the realm of theory and putting their work to practical use. They have a robot that can not only throw an item into the trash or load a dishwasher, but can also discern which object is the empty bottle that belongs in the trash and which is the dish that belongs in the dishwasher.
“It is very challenging to operate a robot in human environments,” says Andrew Ng, who was Saxena’s Ph.D. adviser at Stanford. “Saxena’s pioneering work opens up the possibility of many new applications, and is making many of us reimagine what is possible in robotics.”
One of those possibilities is to create a robot that will help the elderly or disabled complete chores they are not able to do themselves. “We are building robots that can not only give someone their medicine, but also remember that the medicine is due to be taken,” Saxena says.
The robot learns by watching how people interact with objects over time. It does this in two ways: through structured learning, in which a person shows the robot how to handle an object, and through unsupervised learning, in which the robot observes on its own and makes decisions based on what it already knows about the object.
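As a minimal sketch of the distinction, and not the lab’s actual software, structured learning adds human-provided demonstrations to an object’s model, while unsupervised learning lets the robot accumulate its own unlabeled observations. All class and function names below are hypothetical:

```python
# Hypothetical sketch contrasting the two learning modes described above.
# Class and function names are illustrative, not taken from the lab's code.
from dataclasses import dataclass, field

@dataclass
class ObjectKnowledge:
    name: str
    demonstrations: list = field(default_factory=list)  # human-provided examples
    observations: list = field(default_factory=list)    # unlabeled sightings

def learn_from_demonstration(obj: ObjectKnowledge, handling_example) -> None:
    """Structured learning: a person shows the robot how the object is handled."""
    obj.demonstrations.append(handling_example)

def learn_from_observation(obj: ObjectKnowledge, sensed_features) -> None:
    """Unsupervised learning: the robot records what it sees and reasons from
    what it already knows about the object, without a human label."""
    obj.observations.append(sensed_features)
```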
Cornell provided a description of the science behind how it’s done: The robot begins by surveying the room with a Microsoft Kinect 3-D camera, originally made for video gaming but now widely used by robotics researchers. Many images are stitched together to create an overall view of the room, which the robot’s computer divides into blocks based on discontinuities of color and shape. The robot has been shown several examples of each kind of object and learns what characteristics they have in common. For each block it computes the probability of a match with each object in its database and chooses the most likely match.
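Read as pseudocode, that matching step scores each segmented block against every object class the robot has seen examples of and keeps the most probable label. The sketch below assumes a scikit-learn-style classifier and a hypothetical feature extractor; neither is taken from the Cornell code:

```python
# Hedged sketch of the matching step: each color/shape block from the stitched
# Kinect view is scored against every known object class, and the block keeps
# the most probable label. The classifier and feature extractor are assumed.
import numpy as np

def extract_features(block) -> np.ndarray:
    """Placeholder: color and shape descriptors for one segmented block."""
    raise NotImplementedError

def label_blocks(blocks, classifier, class_names):
    labels = []
    for block in blocks:
        features = extract_features(block)
        # A scikit-learn-style classifier trained on example objects is assumed;
        # predict_proba returns one probability per known class.
        probabilities = classifier.predict_proba(features.reshape(1, -1))[0]
        labels.append(class_names[int(np.argmax(probabilities))])
    return labels
```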
For each object the robot then examines the target area to decide on an appropriate and stable placement. Again it divides a 3-D image of the target space into small chunks and computes a series of features of each chunk, taking into account the shape of the object it’s placing. The researchers train the robot for this task by feeding it graphic simulations in which placement sites are labeled as good and bad, and it builds a model of what good placement sites have in common. It chooses the chunk of space with the closest fit to that model.
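The placement step can be read the same way: split the target volume into small chunks, score each chunk with a model trained on simulations labeled good or bad, and keep the best-scoring chunk. A sketch under those assumptions, with all helper names hypothetical:

```python
# Hedged sketch of placement selection: divide the target area into chunks,
# score each chunk with a model trained on labeled good/bad placement sites,
# and return the best-scoring chunk. All helper names are assumptions.
import numpy as np

def chunk_features(chunk, object_shape) -> np.ndarray:
    """Placeholder: features of one 3-D chunk, conditioned on the object's shape."""
    raise NotImplementedError

def choose_placement(chunks, object_shape, placement_model):
    # placement_model.score is assumed to return higher values for chunks that
    # resemble the "good" placement sites seen during training simulations.
    scores = [placement_model.score(chunk_features(c, object_shape)) for c in chunks]
    return chunks[int(np.argmax(scores))]
```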
Finally the robot creates a graphic simulation of how to move the object to its final location and carries out those movements.
“This robot takes in 1 million pixels every 1/30 of a second,” Saxena says. “This is the same as humans; however we don’t think in terms of pixels.”
Saxena and his team are figuring out how to take advantage of the availability of such an enormous amount of information.
“The robot sees only part of a real object,” Saxena explains, “so a bowl could look the same as a globe. Tactile feedback from the robot’s hand would also help it to know when the object is in a stable position and can be released.”
“Understanding what’s in a camera image is one of the big challenges in artificial intelligence,” says Prof. Thorsten Joachims of Cornell. “Ashutosh Saxena’s work has helped robots understand not only the 3-D structure of their environment, but also high-level concepts such as what people are doing in the room.”
Saxena estimates a prototype of a robot made for home use could be built in five years, and he envisions that in 10 years it will be available on the market for somewhere between $5,000 and $20,000. “We are at the stage where personal computing was years ago. Most families have at least one computer in their home. The same revolution will happen with robotics with everyone having one in their homes in twenty years,” he says.
In May, Saxena won the Microsoft Faculty Fellowship, recognizing seven outstanding new faculty members with diverse research interests spanning robotics, machine learning, human-computer interaction and social networking, and representing “a selection of the best and brightest in their fields.”
Microsoft will give Saxena $200,000 to put toward his research. According to Microsoft, the award is intended to support young scientists “who are advancing computing research in novel directions with the potential for high impact on the state of the art, and who demonstrate the likelihood of becoming thought leaders in the field.”
Given his prominence in the field of robotics, it’s not surprising that Saxena had some influence on his younger brother’s career: Abhijit Saxena is a roboticist who focuses on surgical robots and devices.
Ashutosh Saxena’s work takes him around the world, including trips back to India to give talks on robotics. “I love to see new places, discover new cultures and spend time with family and friends,” he says.
Reading science fiction is still a passion, and Asimov is one of his favorite authors, particularly for “I, Robot,” a collection of nine science fiction stories published in 1950. Another favorite science fiction author is Vernor Vinge, a mathematician and computer scientist.
How will robots behave toward humans as we allow them greater control over and influence in our lives? That question has been imagined in many ways. While some visions dwell on the dangers of such a future, Saxena appears to be a force for good, opening doors for robots to assist those who are not able to help themselves.
Source: http://www.newsindia-times.com/NewsIndiaTimes/20120914/5352505603114787123.htm
