Friday, November 25, 2016

Dynamic Programming Chips



As anyone who has dabbled in computer science can no doubt tell you, the best route to solving any large-scale problem correctly is to break it down into smaller chunks, solve those, and reuse the answers to overlapping subproblems instead of recomputing them. It turns out that this technique is called dynamic programming. It makes for very efficient problem solving not just in coding, but also in fields ranging from genomic analysis to economics to physics. Unfortunately, however, to adapt dynamic programming to computer chips with multiple “cores,” or processing units, your average genomic analyst or economist would have to be practically an expert programmer, too.
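
To make the idea concrete, here's a minimal sketch in Python using the classic Fibonacci example--my own illustration of memoized subproblems, not anything produced by the MIT system:

```python
# A minimal sketch of dynamic programming: solve small subproblems once,
# store the answers, and reuse them. Fibonacci is the classic toy example.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Return the n-th Fibonacci number, memoizing subproblem results."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # instant, because repeated subproblems are never re-solved
```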

To make dynamic programming on multicore chips accessible to more people, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University have been working on a new system, Bellmania. The system lets users describe what they want a program to do in broad terms--the kind of description a non-programmer could give without worrying about how any particular computer works. Bellmania then automatically creates versions of these programs that are optimized to run on multicore chips.

According to MIT News, in order to test out Bellmania, the researchers "parallelized" several algorithms that used dynamic programming. In keeping with the whole point of dynamic programming, they split the algorithms into smaller chunks so that they would run on multicore chips. The new programs were between three and eleven times as fast as those produced through earlier parallelization techniques, and they were, on average, just as effective as those manually parallelized by computer scientists. So--voila! Researchers whose work would be aided by dynamic programming no longer need to trouble themselves with becoming experts in another field altogether.

This is pretty interesting, I think. I'm glad that some of the best minds in computer science are working toward letting the best technology be easily-accessible to the best minds in other fields. Where would we be without cooperation?

Sources






Friday, November 18, 2016

A Judgmental Network



Previously on this blog, we've explored how humans have inadvertently passed some of our less-than-desirable tendencies--like discrimination--on to artificial intelligence. Today, however, let's talk about a more positive result of teaching computers to be critical: judging books by their covers.

Two scientists at Kyushu University in Japan, Brian Kenji Iwana and Seiichi Uchida, have developed a method to do precisely that. They've trained a deep neural network to scan the covers of books and determine their genre.

Now, just what is a deep neural network? According to Wikipedia, it is "an artificial neural network (ANN) with multiple hidden layers of units between the input and output layers." In this English major's layman's terms, it is a set of algorithms, loosely inspired by the human brain, that work together to recognize patterns. That's their specialty.

The two computer scientists trained their deep neural network to "read" book covers using--get this--Amazon. They downloaded exactly 137,788 book covers from the website along with the genre of each volume. The book covers they used belonged to one of 20 genres; if a book was listed under multiple genres, they just used the first.

Uchida and Iwana then used 80% of the data set to train the neural network to choose a book's correct genre just by looking at its cover. The remaining 20% was split in half: one 10% was used to validate and tune the model, and the other 10% to test how well it categorizes covers it has never seen before.
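
For the curious, here's a rough sketch in Python of what an 80/10/10 split like that looks like in practice. The file names and labels are made-up stand-ins, not the researchers' actual pipeline:

```python
# A rough sketch of the 80/10/10 split described above, with invented data.
import random

# covers is a list of (cover_image_path, genre_label) pairs -- hypothetical data.
covers = [(f"cover_{i}.jpg", random.randrange(20)) for i in range(137_788)]
random.shuffle(covers)

n = len(covers)
train = covers[: int(0.8 * n)]                     # 80%: teach the network cover -> genre
validation = covers[int(0.8 * n): int(0.9 * n)]    # 10%: tune and validate the model
test = covers[int(0.9 * n):]                       # 10%: covers the network has never seen

print(len(train), len(validation), len(test))
```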

The network has had varying degrees of success. The algorithm often confuses children's books with graphic novels, and frequently mixes up biographies with books on historical eras. It lists the correct genre within its top three guesses about 40% of the time, and it guesses correctly on the first try about 20% of the time. While that's not exactly perfect, it is significantly better than chance.

According to Iwana and Uchida, “This shows that classification of book cover designs is possible, although a very difficult task.” 

Can't argue with that. My grandma once plucked Fifty Shades of Grey from the shelves expecting to dive into a courtroom drama.

That brings me to one of the downsides of this algorithm: It's interesting and might perhaps be useful one day, sure, but there hasn't yet been a real study of how well humans can guess a book's genre by glancing at its cover. Does our human experience give us a leg up in accomplishing the task? Are we, on average, superior book cover interpreters? I can't be sure.

Either way, this whole concept of deep neural networks is definitely fascinating. Who knows what patterns computer brains will be able to interpret once the programming and technology become even more sophisticated?

 Sources





Friday, November 11, 2016

Computer Science in Flight



Since it took essentially all of humanity's 200,000 years on the planet to come up with the airplane, it's no surprise that heavier-than-air aircraft are quite complicated. Naturally, as planes become faster and increasingly advanced, the technology behind them must also become more complex.

According to BestComputerScienceDegrees.com, computer science is instrumental in nearly every aspect of aviation. Modern aircraft use several technological subsystems that work together to pull off that beautiful feat of flight. Naturally, these require appropriate computer hardware and software to run smoothly and keep planes from crashing into the ground. Computers also come in handy for training new pilots.

Computers are also very important to navigation. (Obviously!) According to the website, pilots "utilize computers to assist with navigation through electronic instruments and monitoring flight management systems." 

HowStuffWorks.com says that autopilot--or, as it is more appropriately called, the "automatic flight control system" (AFCS)--wouldn't be what it is without a computer equipped with several high-speed processors. In order to gather the information crucial to flying the plane, the processors communicate with sensors located on all of the plane's largest surfaces. The computer also collects data from instruments such as gyroscopes, accelerometers, and compasses.

The AFCS then takes that input data and compares it to a set of control modes. A control mode is a setting entered manually by the pilot that dictates a particular detail of the flight, such as airspeed, altitude, or flight path.

The computer then sends signals to several servomechanism units, or servos, which "provide mechanical control at a distance." There's one servo for each part of the autopilot system. The servos act like the plane's muscles, carrying out their instructions and moving the craft using hydraulics and motors.

If the input data adheres to the commands of the control modes, then the computer (and, by extension, the passengers and crew) can rest assured that the plane is running smoothly. 
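
To make that compare-and-correct loop a little more concrete, here's a toy sketch in Python. It is emphatically not real avionics code; the sensor, servo, gain value, and target altitude are all invented for illustration:

```python
# A toy sketch of the AFCS-style loop described above: read the sensors,
# compare against the pilot's control mode, and command a correction.
TARGET_ALTITUDE_FT = 30_000   # control mode entered by the pilot (made up)
GAIN = 0.1                    # how aggressively to correct errors (made up)

def read_altitude_sensor(current: float) -> float:
    # Stand-in for the gyroscopes, accelerometers, and surface sensors.
    return current

def command_elevator_servo(correction: float) -> None:
    # Stand-in for the servo that physically moves a control surface.
    print(f"servo correction: {correction:+.1f} ft of climb/descent")

altitude = 28_500.0
for _ in range(5):
    error = TARGET_ALTITUDE_FT - read_altitude_sensor(altitude)  # compare to control mode
    correction = GAIN * error
    command_elevator_servo(correction)
    altitude += correction  # simulate the aircraft responding
```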



Sources


Friday, November 4, 2016

The Process and Ethics of Ad Block



One of the most infuriating things about the internet, to me, is the torrent of advertisements turned loose on you whenever you browse it. They're just so annoying! Buy this, buy that. Whenever I see a thirty-second unskippable ad about Schick Hydro on YouTube, I want to flip.

That's why I've recently installed AdBlock, an extension on my Safari browser that--you guessed it--prevents a website's advertisements from popping up on web pages I view. It's worked pretty well so far; I haven't even seen any ads on Reddit or The New York Times website. But I hadn't the faintest clue about how it actually functions. So I used that great fountain of knowledge, Google, and dug up some info. Not all of it was what I wanted to hear.

It turns out that ad blockers (also known as ad filters) are not all that complicated. Most ad blockers, according to TechCrunch.com, are installed as an extension on a web browser--just like mine. Once installed, the extension can filter out those pesky ads in one of two ways:

  1. It can check each request against a crowdsourced blacklist of domain names that are known to serve ads, and block those requests before the page even finishes loading, or
  2. It can quickly check the page after it has loaded and remove any items that meet certain criteria, such as a box that says "Sponsored."
There's more to it than that, though. According to Wikipedia, some ad blockers work by manipulating the Domain Name System (DNS) so that ad-serving domains never resolve. Some external devices, like AdTrap, can even block ads at the network level, before they reach your browser at all.
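
Here's a bare-bones sketch in Python of the blacklist approach (option 1 above). The domain names are made up; real blockers like AdBlock rely on huge, crowdsourced filter lists:

```python
# A minimal sketch of blacklist filtering: check each outgoing request against
# a list of known ad domains. The domains here are invented for illustration.
from urllib.parse import urlparse

AD_DOMAIN_BLACKLIST = {"ads.example.com", "tracker.example.net"}

def should_block(request_url: str) -> bool:
    """Return True if the request points at a blacklisted ad domain."""
    host = urlparse(request_url).hostname or ""
    # Block the domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in AD_DOMAIN_BLACKLIST)

print(should_block("https://ads.example.com/banner.js"))    # True -> never loaded
print(should_block("https://news.example.org/story.html"))  # False -> loads normally
```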

Ad blockers work well for people like me who are sick of seeing ads, but are they actually harmful? Online content creators or hosts have only two options when it comes to making money: They can either ask for it directly from consumers (like Netflix, which charges a subscription) or collect revenue from ads. So, the site's owners receive less (an infinitesimal amount due to a single user, but it adds up) whenever someone uses an ad blocker.

Does this mean ad blockers are unethical? Are people who use ad blockers moochers? I can't say for sure, but I definitely think that advertisers would do well to work to alter their ads so that they actually spur people to buy products instead of just irking them.

Sources

Friday, October 28, 2016

Snapchat's Facial Recognition and Image Processing



In my twenty-first century, first-world, millennial view, few things in life are as annoying and yet tragically ubiquitous as the face-altering filters on Snapchat. I think they were cool at first, solely on account of their novelty, and there have even been some neat ones, like the X-Men ones from last summer. In my opinion, however, they've become increasingly overused and ridiculous. 

A filter that turns your head into a tomato and then has a stream of more tomatoes shoot out of your mouth with a revolting sound when you open it? An evil, screaming rabbit? Seriously, why? People think this sort of thing is cute, but it's not. My friend even tells me that half of the girls he swipes right for on Tinder have those iconic dog ears and nose plastered over their faces in their photos, which really annoys him--how is he even supposed to get a real sense of what someone looks like?

Anyway, partly because I'm curious as to how this bane of my social media consumption works and also because I was in desperate need of a blog topic, I did a little research on what's really behind the augmented reality of Snapchat filters. Here's what I found:

Snapchat acquired the technology from a Ukrainian company called Looksery for $150 million in September of last year. It turns out that Snapchat has been pretty reluctant to share the details of its face filter secrets, but the patent for the technology can be found online, so it kind of doesn't matter.

The first step in the filtering process is called detection. How does the Snapchat app recognize that the image it's getting from your phone's front-facing camera is a face? The tech is somewhat similar to how you can scan QR codes or how primitive robots can see where they're going (see my post from last week). It's called computer vision.

But seeing as human faces are markedly more complex than barcodes or wall corners, you'd think that the method to get a computer to recognize individual ones would be more complicated, no?

Recall that a computer cannot truly see color--it only reads the binary values assigned to each individual pixel of the image. Essentially, the computer looks for contrast between light and dark values to discern whether or not the image contains a face. For example, the bridge of your nose typically appears as a lighter shade than the sides of your nose, so the computer will pick up on that and tell itself, "Yep, that's a face." This light-versus-dark approach is the basis of something called the Viola-Jones algorithm.
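
If you want to see this kind of detection in action, OpenCV ships a Haar-cascade face detector descended from the Viola-Jones approach. The short Python sketch below is my own illustration--the file name is a placeholder, and it's nothing like Snapchat's actual pipeline:

```python
# A short sketch of face detection using OpenCV's built-in Haar cascade
# (a descendant of Viola-Jones). Assumes opencv-python is installed and
# that "selfie.jpg" exists -- both are my stand-ins.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("selfie.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection only needs light/dark values

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # "Yep, that's a face."

cv2.imwrite("selfie_with_faces.jpg", image)
```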

Next, the computer needs to figure out your facial structure so it knows where to situate the virtual flower crown on your head or where it should put those god-awful dog ears. It does this with an active shape model. In succinct terms, the computer knows where your eyes, forehead, chin, etc. should be because the programmers manually marked the location of such features on a plethora of models' faces and then ran them through the computer as examples. So since humans have more or less the same facial structure (think broadly), the computer has a pretty good repertoire of faces to go off of.

After plotting the location of your features like a map, the computer creates a mesh--that is, an augmented reality mask that will move as your face does and adjust itself when you, say, open your mouth to let tomatoes spill out.

That's the gist of it. I must say that after reading about how it works I've garnered a bit more respect for the creators of these Snapchat filters. It is pretty intriguing once you see how it works.

This facial recognition software is pretty cool--but there is a dark side. Facebook, for example, has begun living up to its name by amassing a database of millions of faces, built from the photos in which people tag their friends. That's just a bit creepy. Even worse, the federal government can do the same thing, which should be more than a little troubling.

Sources

Friday, October 21, 2016

How Robots See

I can see you.


The idea that the field of robotics might one day become so advanced that robots can function virtually the same way as living organisms has long been the subject of a plethora of science fiction films and novels. While robotics has indeed made significant strides, one impediment to its further advancement is the fact that robots still cannot truly see the world, at least not in the full sense that humans can. But let's take a look at how most robots process the world with our current technology.

So, just how do humans see? In abridged layman's terms, we use our eyes to collect light that reflects off of the matter around us. The eyes then convert that light into electric signals that travel to the brain via the optic nerves. Obviously, the brain does the heavy lifting here--some researchers have postulated that up to 50% of our brain mass is involved, one way or another, in the process of seeing. The brain then turns those electric signals into valuable information about our surroundings.

Therefore, it is no surprise that enabling a robot to gather information about the world in this way, just as animals do, would be largely beneficial to advancing robotics.

Currently, technology allows robots to "see" more or less the way you probably think they might: A video camera collects a constant stream of images, which is passed to the computer inside the robot. From there, a few different things can happen.

Roboticists use features in the stream of images--say, corners, lines, or unique textures--to let the robot "see." These features are stored in a library. The roboticists then write code that recognizes patterns in those features to help the robot comprehend what's around it.

This code has the robot evaluate the information it receives from its cameras and compare the features it finds with those stored in its library. So if a robot has a feature that looks like the corner of a room in its library, then it ought to be able to interpret another corner for what it is.
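
Here's a rough Python sketch of that compare-to-the-library idea using OpenCV's ORB feature detector. The image file names are placeholders, and real robot-vision pipelines are considerably more involved:

```python
# A rough sketch of matching a camera frame against stored "library" features
# using OpenCV's ORB detector. File names are placeholders for illustration.
import cv2

orb = cv2.ORB_create()

library_img = cv2.imread("known_corner.png", cv2.IMREAD_GRAYSCALE)  # stored library feature
camera_img = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)   # new frame from the camera

kp1, desc1 = orb.detectAndCompute(library_img, None)
kp2, desc2 = orb.detectAndCompute(camera_img, None)

# Compare the new frame's features against the stored ones.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc1, desc2)

if len(matches) > 20:  # arbitrary threshold for this sketch
    print("That looks like the corner I already know about.")
```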

It's a somewhat laborious and complicated process, but it is definitely effective.

Sources

Friday, October 14, 2016

How Fitbit Works

It's the brand new craze that's sweeping the nation--Fitbit! I'm sure most people have, one way or another, come across these personal fitness trackers. Marketed as a way for the average consumer to keep a close watch on their daily activity, a Fitbit is a watch or wearable clip that monitors the steps you take, the calories you burn, and (depending on the version you own) your heart rate. Downloading the Fitbit app for your smartphone takes things a step further: If you sync the app with your Fitbit, you can scan the bar codes of your food to count the calories you consume, log what you do at the gym, and even challenge friends to walk more steps than you. More recently, Fitbit has come out with a sleep tracking feature.


Now, I know humanity's made some pretty decent technological progress in the few hundreds of thousands of years we've been ambling around this planet. We're in the Digital Age. We've put men on the moon, split the atom, and invented Hot Pockets. But for a long while, I was skeptical about Fitbits. How could a piece of plastic count how many calories I've burned? So I looked it up, and here's what I found.

To track steps, Fitbits use something called a three-dimensional accelerometer to measure the user's movement as well as the intensity of that movement--it's quite similar to what's used in Wii remotes. However, the raw accelerometer data is pretty useless on its own. Fitbit relies on special algorithms to interpret it into something useful--the caloric and perambulatory information we so crave.
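
As a rough illustration, here's a toy Python sketch that counts "steps" by looking for peaks in the accelerometer signal. The samples and threshold are made up; Fitbit's actual algorithms are proprietary and far more carefully tuned:

```python
# A toy sketch of turning raw accelerometer readings into a step count by
# looking for peaks in the overall movement signal.
import math

# Hypothetical (x, y, z) accelerometer samples, in g's.
samples = [(0.0, 0.0, 1.0), (0.2, 0.1, 1.4), (0.0, 0.0, 1.0),
           (0.3, 0.2, 1.5), (0.1, 0.0, 1.1), (0.2, 0.1, 1.6)]

THRESHOLD = 1.3  # how hard a jolt counts as a step (made-up value)
steps = 0
above = False
for x, y, z in samples:
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude > THRESHOLD and not above:
        steps += 1        # count the rising edge of each peak exactly once
        above = True
    elif magnitude <= THRESHOLD:
        above = False

print(f"Steps detected: {steps}")
```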

It seems that the engineers of Fitbit had to resort to plain old trial and error in order to finely tune the algorithms the devices use. They compared Fitbit's results to those of other, more established test machines in order to see how well it worked. For example, when the engineers were developing the feature that assesses how many calories the user burns, they compared Fitbit's results with those of a portable telemetric gas analysis system. (The gas analysis system, which is so well-regarded an assessor of calorie use that googling it yields mostly scholarly articles I haven't got the time to peruse, analyzes the composition of the gas we exhale.) Fitbit then takes into account your basal metabolic rate (BMR)--the energy your body burns at rest on things like your heartbeat and brain activity--and adds it to the data collected from the accelerometer to calculate the calories you burn.

Fitbit's sleep tracker is similar to its step tracker; it merely logs whether or not you're moving. So, if you're wearing your Fitbit while you're lying in bed but can't fall asleep, the Fitbit will assume you're fast asleep anyway. Well, I guess that's the thing about technology: There's always room for improvement.


Sources


Friday, October 7, 2016

The Boolean Pythagorean Triples Problem & How It Was Solved

It should come as a surprise to no one: Computers are good at math.

So good, in fact, that they can solve a problem that mathematicians have been trying to crack for thirty years in just two days.

Such is the case of a problem solved only a few months ago called the Boolean Pythagorean triples problem. A mathematician named Ronald Graham first offered a $100 prize to anyone who could solve it back in the 1980s, and no one managed it until May of this year. The Stampede supercomputer at the University of Texas did the job, producing what just so happens to be the largest math proof in the history of ever: 200 terabytes.

Stampede, our heroic supercomputer

So, what's the essence of this colossal and seemingly intractable math problem? Well, you probably remember the unsuitably-named Pythagorean theorem from middle school: a^2 + b^2 = c^2. Positive integers a, b, and c that satisfy it are known as a Pythagorean triple. The problem asks whether it's possible to color every positive integer either red or blue so that no Pythagorean triple ends up entirely one color.
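
For tiny ranges you can even check this by brute force. The little Python sketch below is nothing like what Stampede actually did--it just shows what "no triple all one color" means:

```python
# A brute-force sketch of the problem for tiny ranges of integers.
from itertools import product

def pythagorean_triples(n):
    """All (a, b, c) with a^2 + b^2 = c^2 and c <= n."""
    return [(a, b, c) for a in range(1, n + 1)
                      for b in range(a, n + 1)
                      for c in range(b, n + 1)
                      if a * a + b * b == c * c]

def colorable(n):
    """Can 1..n be colored red/blue with no triple entirely one color?"""
    triples = pythagorean_triples(n)
    for coloring in product("RB", repeat=n):  # every possible coloring of 1..n
        if all(len({coloring[a - 1], coloring[b - 1], coloring[c - 1]}) > 1
               for a, b, c in triples):
            return True
    return False

print(colorable(15))  # True -- small ranges are easy; 7,825 is where it all breaks down
```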



In May, researchers at the University of Texas had Stampede attempt to solve the problem, and it spent two days working it out. (Now just imagine how long it would take a human, or an army of humans.) Stampede determined that such a coloring is attainable only up to a point--more on that in a moment. Another program even checked Stampede's resulting 200-terabyte proof and found it to be sound.

Answering the question posed by the Boolean Pythagorean triples problem only raised a veritable legion of new questions, however. For example, the question remains of just why the cutoff falls where it does: it's possible to color the integers this way up through 7,824, but once you include 7,825, it becomes impossible. Why is that?

I think perhaps this just speaks to our own relative lack of understanding of how the problem really works. But, like any science, it'll probably take dozens of minds and years of collaboration before humanity as a whole has really increased and improved our knowledge base. Really makes me wonder how long it would've taken people to figure the problem out without the use of a supercomputer, though.

Sources





Thursday, September 29, 2016

What're the Little Gold Chips on Our Debit Cards For?

You've probably had the awkward experience of purchasing an item at a store, swiping your debit or credit card, and having the cashier tell you, "Please insert your chip." Or maybe you've inserted your card's chip only to have the cashier say, "Please swipe." 

Actually, you've most likely experienced both, assuming you buy things at stores and don't dumpster-dive or shoplift or subsist solely on manna from heaven. If you're like me, you've probably wondered what the little golden chip on your debit card is, how it works, and whether people are just full of hot air when they tell you it's "safer."


So I googled the skinny on these chips, figuring it had something to do with computer science, and (for once) I was right. Here's how it works, and how it apparently protects us consumers against fraud:

Interestingly enough, it turns out that much like marriage equality and affordable healthcare, cards with chips are a global trend that America has been slow to adopt. They've already been used for years by over 80 other countries. They're called EMV cards (the acronym stands for Europay, MasterCard, and Visa) or smart cards. (They might be smart, but are they smart enough to know why kids love the taste of Cinnamon Toast Crunch? I doubt it.)

Cards that use magnetic stripes on the back--that's how we've normally been doing it--store data that stays exactly the same with each new purchase. Therefore, any old counterfeiter with access to that data could use it to uncover enough information about the cardholder to make purchases under their name, which would not be all too favorable.

But EMV chips are different. Every time the card is used for a new purchase, the chip creates a new transaction code that can never be used again. So even if a hacker gets his or her hands on that sweet, sweet data, it won't matter a lick, because the next time the customer buys something, the card will use an entirely new set of data.
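
Here's a loose Python sketch of the difference between static and dynamic card data. The key-and-counter scheme is invented for illustration and is not the real EMV protocol:

```python
# A loose sketch of static vs. dynamic card data -- just an illustration of
# why a one-time code is so much harder to reuse.
import hashlib
import hmac

CARD_SECRET_KEY = b"secret-key-stored-in-the-chip"  # hypothetical

def magstripe_data(card_number: str) -> str:
    return card_number  # identical for every purchase -- easy to copy and replay

def chip_transaction_code(card_number: str, transaction_counter: int) -> str:
    # A fresh code for every purchase, derived from a secret only the chip holds.
    message = f"{card_number}:{transaction_counter}".encode()
    return hmac.new(CARD_SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]

print(magstripe_data("4111111111111111"))            # same every time
print(chip_transaction_code("4111111111111111", 1))  # different...
print(chip_transaction_code("4111111111111111", 2))  # ...for every transaction
```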

"The introduction of dynamic data is what makes EMV cards so effective at bringing down counterfeit card rates in other countries," says Julie Conroy, research director for retail banking at Aite Group, a financial industry research company.

I really hadn't a clue about any of this stuff before reading up on it. Now I'm pretty okay with inserting the chip on my card, even if it does take a little longer than swiping a mag stripe. I guess this all just goes to show how important--and powerful--data is.

I bet this teen social outcast from the 90s who's brandishing her credit card really wished she could travel to the mid-2010s and use an EMV chip card, because clearly so much of her money's been stolen by counterfeiters that she can't even afford an undershirt. Shame, really.



(I just wanted to use a weird stock photo, okay?)



Sources


Friday, September 23, 2016

New Computer Program Can Identify Communication Problems in Children




Most people I've talked with have probably noticed at some point or other that I've got a speech impediment: a stutter. I won't blame anyone if they haven't, however, because the darn thing likes to come and go as it pleases and manifests itself sporadically.

It has, at times, made some things (ordering at drive-throughs, for example) more difficult for me, but for the most part I'd say I've got it under control. This is due in part to a few years of speech therapy I went through in grade school. Taming it might've been easier had I started therapy earlier. Unfortunately, upwards of 60% of kids are like I was--they aren't diagnosed with their speech or language disorders until years after kindergarten, and as a result get less time to benefit from treatment.

That's why I'm intrigued by the work of researchers at the Computer Science and Artificial Intelligence Laboratory at MIT and Massachusetts General Hospital’s Institute of Health Professions.
Earlier this week at Interspeech, a conference on speech processing, the researchers reported the results of recent experiments with a computer system they designed not only to identify language and speech disorders, but also to recommend what courses of action should be taken.

Pretty cool, I guess, but I wondered: Couldn't you tell if a kid has speaking problems by just, well, listening to him or her? As it turns out, that's exactly what the computer system does. It analyzes audio recordings of children reading a story and looks for irregularities in their speech patterns.

To teach the system how to do this, researchers John Guttag and Jen Gong used a method called machine learning, in which a computer searches huge sets of data "for patterns that correspond to particular classifications."

That data was gathered by Tiffany Hogan and Jordan Green, scientists who helped design the program. They said it mainly looks for pauses in the child's speech when he or she is reading.

The MIT researchers even took age and gender into account when training the computer system, using a technique called residual analysis. After identifying correlations between the gender and age of test subjects and the "acoustic features" of the way they spoke, Gong corrected for those correlations before feeding the data into the machine-learning program.
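
Here's a hedged Python sketch of that residual-analysis idea--regress the features on age and gender, keep what's left over, then classify. The numbers are random stand-ins, not the MIT/MGH recordings:

```python
# A sketch of residual analysis: strip out what age and gender explain,
# then train the classifier on what remains. All data here is invented.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 200
age_gender = rng.normal(size=(n, 2))       # columns: age, gender (encoded) -- stand-ins
pause_features = rng.normal(size=(n, 3))   # e.g., pause length, pause count, speech rate
labels = rng.integers(0, 2, size=n)        # 1 = flagged for a possible disorder

# Step 1: model how much of each acoustic feature is explained by demographics.
demographic_model = LinearRegression().fit(age_gender, pause_features)

# Step 2: keep only the residuals -- the part demographics can't explain.
residuals = pause_features - demographic_model.predict(age_gender)

# Step 3: train the classifier on the corrected features.
classifier = LogisticRegression().fit(residuals, labels)
print(classifier.predict(residuals[:5]))
```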

All things considered, this interesting development may prove vital in identifying and correcting people's language and speech problems in the future.

Sources




Friday, September 16, 2016

Spit Takes (And It Gives a Little, Too)




One of the breakthroughs of modern science is the mapping of the human genome. We now know a great deal about our genes--which genes cause which traits, which ones make us more susceptible to certain diseases, and even what parts of the world our recent ancestors came from.

23andMe is the eminent personal genetics service. Since its founding in 2006, the company has grown to become the go-to company if you want to have your genome analyzed. 23andMe is perhaps most well-known for the health information it provides: If a user has a genetic variant that scientific studies have associated with developing, say, Tay-Sachs disease, then 23andMe will notify them. The same goes for other diseases and conditions like sickle cell anemia, lupus, and lactose intolerance. It can be useful for people who want to stay healthy and get the jump on preventing what they may be genetically predisposed to. However, much of the health information 23andMe provides has been limited by the FDA, because Uncle Sam won't let anyone have any fun.

Another interesting aspect of 23andMe is its Ancestry Composition, which basically tells you how much of your DNA comes from certain population groups around the world. The company does this by comparing your DNA to that of people in 31 "reference populations"--that is, people whose families have supposedly lived in the same region for the past half millennium or so. It's only an estimate, but it's widely considered accurate. Here are my results, and I think they're pretty accurate considering what I know about my own family history:

(Once I found out 2/3 of my genome comes from African populations and 1/3 from European ones, the oreo jokes lobbed at me in middle school took on a whole new meaning.)

I was interested in how they do this, so I did some digging and found that (of course) a lot of it involves computer science.

So, how does it work? Well, first, 23andMe sends the customer a kit; the customer provides a saliva sample and mails it to one of the 23andMe labs in California or North Carolina. At the lab, scientists extract DNA from the cells in the spit and determine which bases (A, C, T, or G) the customer has at hundreds of thousands of specific positions along their 23 pairs of chromosomes. This raw data can be downloaded and viewed in the terminal, but as a layman I find it totally useless until it's been interpreted.

23andMe interprets genetic code for its Ancestry Composition using a modified version of the (apparently) famed computer program BEAGLE, which it calls "Finch." First, 23andMe compares the customer's DNA with that of "a set of 10,418 people with known ancestry, from within 23andMe and from public sources" and works out which variants the customer inherited together from each parent. This is called "phasing."

After phasing, 23andMe uses something called a support vector machine (SVM for short) to determine which chunks of your DNA most likely originated in which population groups. For example, the SVM recognized that most of my DNA is most common among people living in Sub-Saharan Africa, so it estimated that portion accordingly.
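
Here's a bare-bones Python sketch of that SVM step, with invented numeric features standing in for windows of DNA--23andMe's Finch pipeline is, of course, far more sophisticated:

```python
# A sketch of classifying DNA windows by population with an SVM.
# All features, labels, and sizes are invented stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Each row stands in for one window of phased DNA from the reference panel,
# summarized as numbers; each label is that person's known population.
reference_windows = rng.normal(size=(300, 10))
reference_populations = rng.choice(
    ["Sub-Saharan African", "European", "East Asian"], size=300
)

svm = SVC(kernel="linear").fit(reference_windows, reference_populations)

# Classify each window of the customer's DNA, then tally the results.
customer_windows = rng.normal(size=(50, 10))
assignments = svm.predict(customer_windows)
for population in np.unique(assignments):
    share = np.mean(assignments == population)
    print(f"{population}: {share:.0%}")
```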

Next, the 23andMe scientists move on to "smoothing," "calibration," and "aggregation," a holy trinity of fancy words that essentially means they're just checking and re-checking their work. And then, voila! The user receives an email that their reports are ready and can then view them online.

23andMe could definitely use some improvement, such as larger African and Asian reference populations. Still, I marvel that something like this is available to us. It's practically magic.

Sources:

Friday, September 9, 2016

Can Algorithms Be Racist? (or, "Ku Klux Klomputer")


I know it's become a hackneyed thing to say, but it's still just as true as ever: nobody's perfect. (Although Batman comes pretty close; he is the world's greatest detective and has definitely defeated Superman on multiple occasions with kryptonite and sheer wit.) Everyone has flaws, vices, and prejudices, and everyone screws up from time to time.

That being said, it would be preposterous for any of us humans to ever purport to create anything intelligent yet free of the flaws we haven't managed to shake in the 200,000 years our species has been around on the planet. Therefore, it should come as no surprise that we've seen some AIs exhibit some of our less admirable qualities.

Recently, an online beauty pageant was held that was the first to be judged entirely by an artificial intelligence. Beauty.AI launched about a year ago, and the idea was that people would upload photographs of themselves to the website, and the AI would select the ones it deemed most attractive.

There were a total of 44 winners selected, and the programmers who created the algorithm Beauty.AI used to judge noticed one common factor among them: All of them, barring one, were white.

Doesn't seem like that huge of an issue, right? It wouldn't be if there hadn't been thousands of entries from dark-skinned people, mostly from India and Africa. You'd think the AI would select more than one dark-skinned amateur model, but it showed an obvious preference for lighter-skinned people.

So, what's the deal? Why is this artificial intelligence seemingly racist? The answer's a lot simpler than you might think: In its "youth," the AI wasn't exposed to a plethora of minorities.

It all comes back to algorithms, explained Youth Laboratories, the group that created Beauty.AI. Of course, an AI wouldn't automatically know what humans tend to think of as beautiful, so they taught Beauty.AI what to look for in a contestant by creating an algorithm using thousands of portrait photos. These photos were, overwhelmingly, those of white people. Few people of color were included.

Alex Zhavoronkov, chief science officer of Beauty.AI, was shocked by the contest's winners. Still, he understood why so few people of color were included. "If you have not that many people of color within the dataset, then you might actually have biased results,” he commented.“When you’re training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased.”

This is definitely a problem programmers must anticipate if they're going to be most effective. But even so, could a problem as abstract as determining whether what an artificial intelligence finds attractive is culturally sensitive ever really be solved? Is it an issue even worth remedying? Just some food for thought, I suppose.


Sources:

Thursday, September 1, 2016

To Boldly (Al)Go(rithm) Where No Man Has Gone Before



No Man's Sky is an online video game designed by Hello Games that came out for PC and PlayStation 4 early last month. In it, the player controls a spaceship and traverses the galaxy, discovering quintillions--yes, literally quintillions--of randomly-generated planets covered in randomly-generated flora and fauna. If the player happens to be the first to reach a planet--and they often are, given the game's size--then they get to name it as well as the species they encounter on it. Pretty cool, huh?

Due to its potential for exploration (who doesn't want to play Captain Kirk?), the game was highly anticipated. I can personally remember watching promo videos for it on YouTube and reading about it in the gaming magazine Game Informer. People were really impressed by how big No Man's Sky would be, and perhaps more so because the game was designed by a small team of fewer than twenty people.

No Man's Sky is such a humungous game that it'll take five billion years for players to discover every planet in it. No, seriously.

Now, obviously, so few people didn't take the time to design and program that many individual planets and species by hand. So, how did they do it? They used a technique called procedural generation, in which the computer builds content on its own from a seed and a set of rules, saving designers an innumerable amount of time.
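
Here's a tiny Python sketch of the idea: the same seed always produces the same "planet," so nothing has to be stored or designed by hand. The attributes are invented, and this is in no way Hello Games' actual algorithm:

```python
# A tiny sketch of procedural generation: planet attributes derived
# deterministically from a seed. All names and values are made up.
import random

def generate_planet(seed: int) -> dict:
    rng = random.Random(seed)  # deterministic: same seed, same planet, every time
    return {
        "terrain": rng.choice(["desert", "ocean", "jungle", "frozen", "barren"]),
        "atmosphere": rng.choice(["toxic", "breathable", "none"]),
        "fauna_species": rng.randint(0, 40),
        "radius_km": rng.randint(1_000, 12_000),
    }

print(generate_planet(42))  # always the same planet for seed 42
print(generate_planet(43))  # a different planet, never stored anywhere
```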

Procedural generation isn't anything new in video games; in fact, the much better-known Minecraft has been utilizing it for years already. No Man's Sky, however, trumps every other game in terms of sheer magnitude.

It's still got some issues, though. In fact, No Man's Sky hasn't received exactly a plethora of positive reviews, with players complaining that far too many of its planets look mostly similar. I guess this algorithm, however admirable, could still use a good bit of improvement.

P.S. I am very sorry about the title of this post.

Sources:






IBM's Watson AI Goes to Film School

As of this week, artificial intelligence is one step closer to outpacing us intellectually and enslaving us all. What step has AI lately taken in its quest to dominate and ultimately subjugate mankind? It's started studying film.

If you keep up with what's new in cinema--or have recently undergone the odious experience of having to sit through a thirty-second un-skippable YouTube ad--then you may know about the new sci-fi/horror/suspense movie slated to be released on September 2nd by 20th Century Fox, Morgan. If you don't, here's a little info: Morgan stars Kate Mara, better known for playing the blander version of Sue Storm, and Paul "Surely-I-Deserve-An-Oscar-Just-As-Much-As-DiCaprio" Giamatti. The eponymous Morgan is a girl artificially created by a team of scientists (what could possibly go wrong?), who raise her in their compound as some manner of experiment in human development. Everything's fine and dandy until Morgan grows into a teen and begins to kill people, which is of course a big no-no even if your parents never let you go out on the weekends.



Anyway, as a promotional stunt for its new movie about scientists trying to play God, Fox teamed up with a company that's been relatively successful at playing God, at least in terms of creating an intelligent entity. Technology giant IBM arranged for its artificial intelligence, Watson, to "watch" Morgan and create an original movie trailer for it--the first movie trailer ever created by an AI. Since Watson's already proved itself to be quite smart by seriously mopping the floor with some of the world's best players of Jeopardy!, I guess everyone figured it wouldn't be too shabby at analyzing a film, picking out the most suspenseful bits, and putting them all in a trailer.

So how did an AI accomplish this? Well, the IBM team programmed a system to basically give Watson a crash course on what makes a horror movie trailer creepy. The training was threefold: First, there was a visual analysis of things like scenery and people's facial expressions. Then there was an audio analysis of ambient sounds, music, and the emotion in characters' voices. Finally, Watson was made to analyze each movie scene as a whole, including its location and lighting.
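
Here's a toy Python sketch of the final "rank the scariest scenes" step, with entirely made-up scores and weights--IBM hasn't published Watson's actual scoring code:

```python
# A toy sketch of ranking scenes by combined visual/audio/mood scores.
# Every scene name, score, and weight below is invented for illustration.
scene_scores = {
    # scene id: (visual creepiness, audio tension, overall mood)
    "lab_reveal":   (0.9, 0.8, 0.7),
    "dinner_scene": (0.2, 0.3, 0.4),
    "forest_chase": (0.8, 0.9, 0.9),
    "interview":    (0.5, 0.7, 0.6),
}

def suspense(scores: tuple) -> float:
    visual, audio, mood = scores
    return 0.4 * visual + 0.4 * audio + 0.2 * mood  # arbitrary weighting

top_scenes = sorted(scene_scores, key=lambda s: suspense(scene_scores[s]), reverse=True)
print(top_scenes[:2])  # the scenes a trailer editor would look at first
```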

Watson was then fed the full movie and came up with ten scenes it determined to be the scariest. Those moments were then compiled into a single trailer, which you can watch here:




What do you think? Is Watson sufficiently savvy about what makes for an eerie trailer? Personally, I think it's fairly well done. The slow "Hush, little baby, don't you cry" playing in the background was an especially nice touch. I've really got to hand it to IBM; they designed an excellent program that taught Watson the ins and outs of trailer-making.

Now let's just hope this AI-made trailer for Morgan isn't as deceptive as the human-made trailer for Suicide Squad.

Sources: