Friday, November 25, 2016

Dynamic Programming Chips



As anyone who has dabbled in computer science can no doubt tell you, the best route to solving any large-scale problem correctly is to break it down into smaller chunks. When those smaller chunks overlap and their solutions are saved and reused, the technique is called dynamic programming. It makes for very efficient problem solving not just in coding, but also in fields ranging from genomic analysis to economics to physics. Unfortunately, however, to adapt dynamic programming to computer chips with multiple "cores," or processing units, your average genomic analyst or economist would have to be practically an expert programmer, too.
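To make the idea concrete, here's a minimal sketch of dynamic programming in Python (my own toy example, not anything from the MIT work): computing Fibonacci numbers while caching each subproblem's answer so nothing is solved twice.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each smaller subproblem is solved once and cached, so the
    # overall work is linear instead of exponential.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(30)` recomputes the same subproblems millions of times; with it, each value from 0 to 30 is computed exactly once.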

To make dynamic programming chips usable by more people, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University have been working on a new system, Bellmania. The system lets users describe what they want a program to do in broad terms--the kind of high-level description a non-programmer could give without worrying about how a particular computer works. Bellmania then automatically creates versions of these programs that are optimized to run on multicore chips.

According to MIT News, to test Bellmania, the researchers "parallelized" several algorithms that use dynamic programming. In keeping with the whole point of dynamic programming, they split the algorithms into smaller chunks so that they would run on multicore chips. The new programs were between three and eleven times as fast as those produced through earlier parallelization techniques, and, on average, just as effective as those parallelized by hand by computer scientists. So--voila! Researchers whose work would benefit from dynamic programming no longer need to trouble themselves with becoming experts in another field altogether.
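Bellmania's internals aren't spelled out here, but the general shape of parallelization--split the work into chunks, solve the chunks on separate workers, combine the results--can be sketched like this (a toy example of my own; note that real multicore speedups in Python would need processes rather than threads, which I've avoided here only to keep the sketch simple):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_chunk(chunk):
    # Stand-in for one subproblem of a larger algorithm.
    return sum(x * x for x in chunk)

def solve_parallel(data, workers=4, chunk_size=8):
    # Split the input into chunks, hand each chunk to a worker,
    # then combine the partial results into the final answer.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(solve_chunk, chunks)
    return sum(partials)
```

The hard part Bellmania automates is deciding how to split and recombine so the result stays correct--the step that normally requires an expert.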

This is pretty interesting, I think. I'm glad that some of the best minds in computer science are working toward making the best technology easily accessible to the best minds in other fields. Where would we be without cooperation?

Sources






Friday, November 18, 2016

A Judgmental Network



Previously on this blog, we've explored how humans have inadvertently passed one of our less-than-desirable tendencies--discrimination--on to artificial intelligence. Today, however, let's talk about a more positive result of teaching computers to be critical: judging books by their covers.

Two scientists at Kyushu University in Japan, Brian Kenji Iwana and Seiichi Uchida, have developed a method to do precisely that. They've trained a deep neural network to scan the covers of books and determine their genre.

Now, just what is a deep neural network? According to Wikipedia, it is "an artificial neural network (ANN) with multiple hidden layers of units between the input and output layers." In this English major's layman's terms, it is a set of algorithms that, working together, can perform certain tasks in a way loosely modeled on the human brain. Deep neural networks specialize in pattern recognition.
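To give a rough flavor of what "multiple hidden layers" means, here's a toy forward pass written from scratch (my own illustration; a real network learns its weights from data rather than being handed them):

```python
def relu(x):
    # A common nonlinearity: pass positives through, zero out negatives.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One hidden layer: each unit takes a weighted sum of its inputs,
    # adds a bias, and applies the nonlinearity.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, layers):
    # A "deep" network is just several such layers chained together:
    # the output of one layer becomes the input of the next.
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs
```

Stack enough of these layers, train the weights on 100,000+ book covers, and you get something that can start mapping pixels to genres.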

The two computer scientists trained their deep neural network to "read" book covers using--get this--Amazon. They downloaded exactly 137,788 book covers from the website along with the genre of each volume. The book covers they used belonged to one of 20 genres; if a book was listed under multiple genres, they just used the first.

Uchida and Iwana then used 80% of the data set to train the neural network to choose a book's correct genre just by looking at its cover. The remaining 20% was split in half: one 10% was used to validate and tune the model, and the other 10% to test how well it categorizes covers it has never seen before.
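That 80/10/10 split is a standard recipe, and it can be sketched in a few lines (my own generic version, not the researchers' actual code):

```python
import random

def split_dataset(items, seed=0):
    # Shuffle, then carve off 80% for training, 10% for validation,
    # and hold out the final 10% as a never-seen test set.
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    train = items[:int(n * 0.8)]
    val = items[int(n * 0.8):int(n * 0.9)]
    test = items[int(n * 0.9):]
    return train, val, test
```

Keeping the last 10% completely untouched during training is what makes the final accuracy numbers trustworthy.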

The network has had varying degrees of success. It often confuses children's books with graphic novels, and frequently mixes up biographies with books on historical eras. It lists the correct genre within its first three tries 40% of the time and guesses it correctly on the first try 20% of the time. While that's not exactly perfect, it is significantly better than chance.

According to Iwana and Uchida, “This shows that classification of book cover designs is possible, although a very difficult task.” 

Can't argue with that. My grandma once plucked Fifty Shades of Grey from the shelves expecting to dive into a courtroom drama.

That brings me to one of the downsides of this algorithm: It's interesting and might perhaps be useful one day, sure, but there hasn't been a real study into how well humans can guess a book's genre by glancing at its cover. Does our human experience give us a leg up in accomplishing the task? Are we, on average, superior book cover interpreters? I can't be sure.

Either way, this whole concept of deep neural networks is definitely fascinating. Who knows what patterns computer brains will be able to interpret once the programming and technology become even more sophisticated?

Sources





Friday, November 11, 2016

Computer Science in Flight



Since it took essentially all of humanity's 200,000 years on the planet to come up with the airplane, it's no surprise that heavier-than-air aircraft are quite complicated. Naturally, as planes become faster and increasingly advanced, the technology behind them must also become more complex.

According to BestComputerScienceDegrees.com, computer science is instrumental in nearly every aspect of aviation. Modern aircraft use several technological subsystems that work together to pull off that beautiful feat of flight. Naturally, these require appropriate computer hardware and software to run smoothly and keep planes from crashing into the ground. They also come in handy for training new pilots.

Computers are also very important to navigation. (Obviously!) According to the website, pilots "utilize computers to assist with navigation through electronic instruments and monitoring flight management systems." 

HowStuffWorks.com says that autopilot--or, as it is more appropriately called, the "automatic flight control system" (AFCS)--wouldn't be what it is without a dedicated computer with several high-speed processors. To gather the information crucial to flying the plane, the computer's processors communicate with sensors located on all of the plane's largest surfaces. The computer also collects data from instruments such as gyroscopes, accelerometers, and compasses.

The AFCS then takes that input data and compares it to a set of control modes. A control mode is a setting, entered manually by the pilot, that dictates a particular detail of the flight, like airspeed, altitude, or flight path.

The computer then sends signals to several servomechanism units, or servos, which "provide mechanical control at a distance." There's one servo for each part of the autopilot system. The servos act like the plane's muscles, carrying out their instructions and moving the craft using hydraulics and motors.

If the input data adheres to the commands of the control modes, then the computer (and, by extension, the passengers and crew) can rest assured that the plane is running smoothly. 
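As a rough illustration of that compare-and-correct loop (entirely my own simplification; real flight control laws are far more sophisticated than this), here's a proportional controller that turns the gap between the sensed values and the pilot-set control modes into correction signals for the servos:

```python
def autopilot_step(sensed, targets, gain=0.1):
    # Compare each sensor reading against its control-mode target and
    # compute a correction proportional to the error. A zero correction
    # means that aspect of the flight is already on target.
    return {name: gain * (targets[name] - sensed[name]) for name in targets}
```

If the plane is 100 feet below its target altitude, the altitude servo gets a nudge upward; if the airspeed matches the control mode, that servo gets no correction at all.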



Sources


Friday, November 4, 2016

The Process and Ethics of Ad Block



One of the most infuriating things about the internet is, to me, the torrent of advertisements turned loose whenever you browse it. They're just so annoying! Buy this, buy that. Whenever I see a thirty-second unskippable ad about Schick Hydro on YouTube, I want to flip.

That's why I've recently installed AdBlock, an extension on my Safari browser that--you guessed it--prevents a website's advertisements from popping up on web pages I view. It's worked pretty well so far; I haven't even seen any ads on Reddit or The New York Times website. But I hadn't the faintest clue about how it actually functions. So I used that great fountain of knowledge, Google, and dug up some info. Not all of it was what I wanted to hear.

It turns out that ad blockers (also known as ad filters) are not all that complicated. Most ad blockers, according to TechCrunch.com, are installed as an extension on a web browser--just like mine. Once installed, the extension can filter out those pesky ads in one of two ways:

  1. It can check requests against a crowdsourced blacklist of domain names that always serve ads, and block them before the web page even finishes loading, or
  2. It can quickly check the page after it has loaded and remove any items that meet certain criteria, such as a box that says "Sponsored."

There's more to it than that, though. According to Wikipedia, some ad blockers can manipulate the Domain Name System (DNS), and some external hardware devices, like AdTrap, can even block ads before they ever reach your browser.
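Strategy 1, the domain blacklist, can be sketched in a few lines (the domains and function here are my own made-up illustration, not AdBlock's actual code or list):

```python
from urllib.parse import urlparse

# A stand-in for the crowdsourced blacklist of ad-serving domains.
BLACKLIST = {"ads.example.com", "tracker.example.net"}

def should_block(url):
    # Extract the request's hostname and block it if it matches a
    # blacklisted domain (or any subdomain of one) -- all before the
    # resource is ever fetched.
    host = urlparse(url).hostname or ""
    return host in BLACKLIST or any(host.endswith("." + d) for d in BLACKLIST)
```

A real blocker runs a check like this against every request the page makes, which is why blacklisted ads never load at all.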

Ad blockers work well for people like me who are sick of seeing ads, but are they actually harmful? Online content creators and hosts have essentially two options when it comes to making money: they can either ask for it directly from consumers (like Netflix, which charges a subscription) or collect revenue from ads. So a site's owners receive less money (an infinitesimal amount due to a single user, but it adds up) whenever someone uses an ad blocker.

Does this mean ad blockers are unethical? Are people who use ad blockers moochers? I can't say for sure, but I definitely think that advertisers would do well to work to alter their ads so that they actually spur people to buy products instead of just irking them.

Sources

Friday, October 28, 2016

Snapchat's Facial Recognition and Image Processing



In my twenty-first century, first-world, millennial view, few things in life are as annoying and yet tragically ubiquitous as the face-altering filters on Snapchat. I think they were cool at first, solely on account of their novelty, and there have even been some neat ones, like the X-Men ones from last summer. In my opinion, however, they've become increasingly overused and ridiculous. 

A filter that turns your head into a tomato and then, when you open your mouth, has a stream of more tomatoes shoot out with a revolting sound? An evil, screaming rabbit? Seriously, why? People think this sort of thing is cute, but it's not. My friend even tells me that half of all the girls he swipes right for on Tinder have those iconic dog ears and nose plastered over their faces in their photos, which really annoys him, because how is he even supposed to get a real sense of what someone looks like?

Anyway, partly because I'm curious as to how this bane of my social media consumption works and also because I was in desperate need of a blog topic, I did a little research on what's really behind the augmented reality of Snapchat filters. Here's what I found:

Snapchat acquired the technology from a Ukrainian company called Looksery for $150 million in September of last year. It turns out that Snapchat has been pretty reluctant to share the details of its face filter secrets, but the patent for the technology can be found online, so the secrecy only goes so far.

The first step in the filtering process is called detection. How does the Snapchat app recognize that the image it's getting from your phone's front-facing camera is a face? The tech is somewhat similar to how you can scan QR codes or how primitive robots can see where they're going (see my post from last week). They call it computer vision.

But seeing as human faces are markedly more complex than barcodes or wall corners, you'd think that the method to get a computer to recognize individual ones would be more complicated, no?

Recall that a computer cannot truly recognize color--it only reads the binary values assigned to each individual pixel of the image. Essentially, the computer looks for contrasts of light and dark values to discern whether or not the image is a face. For example, the bridge of your nose typically appears as a lighter shade than the sides of your nose, so the computer will pick up on that pattern and tell itself, "Yep, that's a face." This light-versus-dark comparison is the heart of something called the Viola-Jones algorithm.
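Here's a toy version of the kind of contrast feature Viola-Jones relies on (my own simplification; the real algorithm evaluates thousands of these "Haar-like features" at many positions and scales, using clever tricks to do it fast):

```python
def haar_feature(image, top, left, height, width):
    # One Haar-like feature: the summed brightness of a rectangle minus
    # the summed brightness of the rectangle directly below it. A strong
    # bright-over-dark response hints at a face-like pattern.
    def region_sum(r0, c0, h, w):
        return sum(image[r][c]
                   for r in range(r0, r0 + h)
                   for c in range(c0, c0 + w))
    upper = region_sum(top, left, height, width)
    lower = region_sum(top + height, left, height, width)
    return upper - lower
```

A detector combines many such features: if enough of them fire in the right arrangement, the region is declared a face.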

Next, the computer needs to figure out your facial structure so it knows where to situate the virtual flower crown on your head or where it should put those god-awful dog ears. It does this with an active shape model. In succinct terms, the computer knows where your eyes, forehead, chin, etc. should be because programmers manually marked the location of those features on a plethora of models' faces and then ran them through the computer as examples. Since humans have more or less the same facial structure (think broadly), the computer has a pretty good repertoire of faces to go off of.
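The starting point of an active shape model can be sketched as the average of all those hand-marked faces (a toy illustration of my own; the full method also models how faces are allowed to vary around this average):

```python
def mean_shape(training_landmarks):
    # Each training face is a list of (x, y) landmark points (eye corner,
    # chin tip, ...). The model's starting guess for a new face is simply
    # the average position of each landmark across all training faces.
    n = len(training_landmarks)
    num_points = len(training_landmarks[0])
    return [(sum(face[i][0] for face in training_landmarks) / n,
             sum(face[i][1] for face in training_landmarks) / n)
            for i in range(num_points)]
```

The app places this average face over the detected region, then nudges each landmark toward the edges and contrasts in your actual photo.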

After plotting the location of your features like a map, the computer creates a mesh--that is, an augmented reality mask that moves as your face does and adjusts itself when you, say, open your mouth to let tomatoes spill out.

That's the gist of it. I must say that after reading about how it works I've garnered a bit more respect for the creators of these Snapchat filters. It is pretty intriguing once you see how it works.

This facial recognition software is pretty cool--but there is a dark side. Facebook, for example, has begun living up to its name by amassing a database of millions of faces, built from the photos in which people tag their friends. That's just a bit creepy. Even worse, the federal government can do the same thing, which should be more than a little troubling.

Sources

Friday, October 21, 2016

How Robots See

I can see you.


The idea that the field of robotics might one day become so advanced that robots can function virtually the same way as living organisms has long been the subject of a plethora of science fiction films and novels. While robotics has indeed made significant strides, one impediment to its further advancement is that robots still cannot truly see the world, at least not in the sense that humans can. But let's take a look at how most robots process the world with our current technology.

So, just how do humans see? In abridged layman's terms, we use our eyes to collect light that reflects off of the matter around us. The eyes then convert that light into electric signals that travel to the brain via the optic nerves. Obviously, the brain does the heavy lifting here--and some researchers have postulated that up to 50% of our brain mass is involved, one way or another, in the process of seeing. The brain, then, processes those electric signals into valuable information about our surroundings for us.

Therefore, it is no surprise that enabling a robot to gather information about the world in this way, just as animals do, would be largely beneficial to advancing robotics.

Currently, technology allows robots to "see" the way you probably think they might: a video camera collects a constant stream of images, which is passed to the computer inside the robot. From there, a few different things can happen.

Roboticists pick out features in the stream of images--say, corners, lines, or unique textures--and store them in a library. They then write code that recognizes the patterns in these features to help the robot comprehend what's around it.

This code makes the robot evaluate the information it receives from its cameras and compare its features with those stored in its library. So if a robot has a feature that looks like the corner of a room in its library, it ought to be able to interpret another corner for what it is.
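That compare-against-the-library step might look something like this nearest-match sketch (my own toy version; the labels and feature vectors are made up, and real systems use far richer descriptors):

```python
def closest_feature(candidate, library):
    # Compare a feature pulled from the camera stream against every
    # stored feature and return the label of the best match, where
    # "best" means smallest squared distance between the vectors.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda label: distance(candidate, library[label]))
```

So a freshly seen corner, even one the robot has never encountered, lands closest to the stored "corner" feature and gets interpreted accordingly.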

It's a somewhat laborious and complicated process, but it is definitely effective.

Sources

Friday, October 14, 2016

How Fitbit Works

It's the brand new craze that's sweeping the nation--Fitbit! I'm sure most of us have, one way or another, come across these personal fitness trackers. Marketed as a way for the average consumer to keep a close watch on their daily activity, a Fitbit is a watch or wearable clip that monitors the steps you take, the calories you burn, and (depending on the version you own) your heart rate. Downloading the Fitbit app for your smartphone takes things a step further: If you sync the app with your Fitbit, you can scan the bar codes of your food and count the calories you consume, log what you do when you go to the gym, and even challenge friends to walk more steps than you. More recently, Fitbit has come out with a sleep tracking feature.


Now, I know humanity's made some pretty decent technological progress in the few hundreds of thousands of years we've been ambling around this planet. We're in the Digital Age. We've put men on the moon, split the atom, and invented Hot Pockets. But for a long while, I was skeptical about Fitbits. How could a piece of plastic count how many calories I've burned? So I looked it up, and here's what I found.

To track steps, Fitbits use something called a three-axis accelerometer to track the user's movement as well as the intensity of that movement--quite similar to what's used in Wii remotes. However, this raw accelerometer data is pretty useless on its own. Fitbit relies on special algorithms to interpret it into something useful: the caloric and perambulatory information we so crave.
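A crude stand-in for that interpretation step (my own guess at the general flavor, not Fitbit's actual, carefully tuned algorithm) counts a step each time the acceleration magnitude spikes above a threshold:

```python
def count_steps(magnitudes, threshold=1.2):
    # Walk through the accelerometer readings and count a step each time
    # the magnitude crosses the threshold from below. Requiring it to dip
    # back down first keeps one long spike from counting as many steps.
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps
```

The tricky part, and the reason Fitbit needed so much trial and error, is picking thresholds and filters that count your steps without also counting, say, typing or driving over potholes.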

It seems that the engineers of Fitbit had to resort to plain old trial and error in order to finely tune the algorithm that the devices use. They compared Fitbit's algorithm to other, more established test machines in order to see how well it worked. For example, when the engineers were developing the feature assessing how many calories the user burns, they compared Fitbit's results with those of a portable telemetric gas analysis system. (The gas analysis system, which is so great an assessor of calorie use that googling it yields mostly scholarly articles I haven't got the time to peruse, analyzes gas composition as we exhale.) Fitbit then takes into account your basal metabolic rate (BMR), which covers baseline functions like your heartbeat and brain activity, and adds it to the data collected from the accelerometer to calculate the calories you burn.

Fitbit's sleep tracker is similar to its step tracker; it merely logs whether or not you're moving. So, if you're wearing your Fitbit while you're lying in bed but can't fall asleep, the Fitbit will assume you're fast asleep anyway. Well, I guess that's the thing about technology: There's always room for improvement.


Sources