Friday, October 21, 2016

How Robots See

I can see you.


The idea that robotics might one day advance so far that robots can function virtually the same way as living organisms has long been a staple of science fiction films and novels. While robotics has indeed made significant strides, one impediment to further advancement is that robots still cannot truly see the world, at least not in the full sense that humans can. But let's take a look at how most robots process the world with our current technology.

So, just how do humans see? In abridged layman's terms, we use our eyes to collect light that reflects off the matter around us. The eyes then convert that light into electrical signals that travel to the brain via the optic nerves. Obviously, the brain does the heavy lifting here: some researchers have estimated that up to half of the brain is involved, one way or another, in the process of seeing. The brain then turns those electrical signals into useful information about our surroundings.

It is no surprise, then, that enabling a robot to gather information about the world in this way, just as animals do, would go a long way toward advancing robotics.

Currently, technology allows robots to "see" roughly the way you probably think they might: a video camera collects a constant stream of images, which is passed to the computer inside the robot. From there, a few different things can happen.
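
To make that concrete, here's a minimal sketch of that capture loop in Python, assuming the OpenCV library (cv2) and a camera at device index 0. None of these details come from the original post; they're just one plausible setup:

    import cv2

    camera = cv2.VideoCapture(0)       # open the robot's video camera

    while True:
        ok, frame = camera.read()      # grab the next image in the stream
        if not ok:                     # stream ended or camera unplugged
            break
        # ... hand `frame` off to the vision code sketched further down ...

    camera.release()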

Roboticists pick out features in the stream of images (say, corners, lines, or unique textures) that let the robot "see." These features are stored in a library, and the roboticists then write code that recognizes patterns among those features to help the robot comprehend what's around it. A rough sketch of the library-building step follows.
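
Here's one way that library might be built, using OpenCV's ORB detector to pull corner-like features out of reference images. The filenames and labels below are made up for the example, not taken from the post:

    import cv2

    orb = cv2.ORB_create()             # detects corner-like keypoints

    def extract_features(image):
        # Detect distinctive points and compute a descriptor for each one.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        return descriptors             # one row per detected feature

    # The "library": features from labeled reference images.
    # (These image files are hypothetical.)
    library = {
        "room_corner": extract_features(cv2.imread("room_corner.jpg")),
        "doorway":     extract_features(cv2.imread("doorway.jpg")),
    }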

This code has the robot evaluate the information it receives from its cameras and compare the features it finds against those stored in its library. So if the library holds a feature that looks like the corner of a room, the robot ought to be able to recognize another corner for what it is.
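
One plausible way to code that comparison is brute-force descriptor matching, sketched below with OpenCV's BFMatcher. The distance cutoff and match count are arbitrary placeholders, not tuned values:

    import cv2

    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only matches that agree in both directions.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def recognize(frame_descriptors, library, min_matches=25):
        # Compare a new frame's features with every entry in the library
        # and return the best-matching label, if it matches well enough.
        best_label, best_count = None, 0
        for label, stored in library.items():
            matches = matcher.match(frame_descriptors, stored)
            close = [m for m in matches if m.distance < 40]
            if len(close) > best_count:
                best_label, best_count = label, len(close)
        return best_label if best_count >= min_matches else None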

It's a somewhat laborious and complicated process to set up, but once the library is built, the matching itself runs efficiently.

3 comments:

  1. This is very timely with our lesson. It is really interesting how these biological features can be written in code.

  2. This is really interesting! In a robotics class that I took, we had our moving robots "see" with sensors that would send out a signal that would then reflect off of obstacles. If the sensor received information back it would interpret it as an obstacle and turn itself accordingly.

  3. Very interesting topic. Never really thought about how robots can see. I've read a lot of posts about how products and robots are able to recognize information through sensors and their reflection off obstacles. I wonder how much computer science goes into the software of C-3PO.
