Thursday, September 29, 2016

What're the Little Gold Chips on Our Debit Cards For?

You've probably had the awkward experience of purchasing an item at a store, swiping your debit or credit card, and having the cashier tell you, "Please insert your chip." Or maybe you've inserted your card's chip only to have the cashier say, "Please swipe." 

Actually, you've most likely experienced both instances, assuming you buy things at stores and don't dumpster-dive or shoplift or subsist solely on manna from heaven. If you're like me, you've probably wondered what the little golden chip on your debit card is, how it works, and whether people are just full of hot air when they tell you it's "safer."


So I googled the skinny on these chips, figuring it had something to do with computer science, and (for once) I was right. Here's how it works, and how it apparently protects us consumers against fraud:

Interestingly enough, it turns out that much like marriage equality and affordable healthcare, cards with chips are a global trend that America has been slow to adopt. They've already been used for years by over 80 other countries. They're called EMV cards (the acronym stands for Europay, MasterCard, and Visa) or smart cards. (They might be smart, but are they smart enough to know why kids love the taste of Cinnamon Toast Crunch? I doubt it.)

Cards that use magnetic stripes on the back--that's how we've normally been doing it--store data that stays exactly the same with each new purchase. Therefore, any old counterfeiter with access to that data could use it to uncover enough information about the cardholder to make purchases under their name, which would not be all too favorable.

But EMV chips are different. Every time the card is used for a new purchase, the chip creates a new transaction code that can never be used again. So even if a hacker gets his or her hands on that sweet, sweet data, it won't matter a lick, because the next time the customer buys something, the card will use an entirely new set of data.

"The introduction of dynamic data is what makes EMV cards so effective at bringing down counterfeit card rates in other countries," says Julie Conroy, research director for retail banking at Aite Group, a financial industry research company.

I really hadn't a clue about any of this stuff before reading about it. Now I'm pretty okay with inserting my card's computer chip, even if it does take a little longer than swiping a mag stripe. I guess this all just goes to show how important--and powerful--data is.

I bet this teen social outcast from the 90s who's brandishing her credit card really wished she could travel to the mid-2010s and use an EMV chip card, because clearly so much of her money's been stolen by counterfeiters that she can't even afford an undershirt. Shame, really.



(I just wanted to use a weird stock photo, okay?)





Friday, September 23, 2016

New Computer Program Can Identify Communication Problems in Children




Most people I've talked with have probably noticed at some point or other that I've got a speech impediment: a stutter. I won't blame anyone who hasn't noticed, though, because the darn thing likes to come and go as it pleases, manifesting itself sporadically.

It has, at times, made some things (ordering at drive-throughs, for example) more difficult for me, but for the most part I'd say I've got it under control. This is due in part to a few years of speech therapy I went through in grade school. Taming it might've been easier had I started therapy earlier. Unfortunately, upwards of 60% of kids are like I was--they aren't diagnosed with their speech or language disorders until years after kindergarten, and as a result get less time to benefit from treatment.

That's why I'm intrigued by the work of researchers at the Computer Science and Artificial Intelligence Laboratory at MIT and Massachusetts General Hospital's Institute of Health Professions. Earlier this week at Interspeech, a conference on speech processing, the researchers reported the results of recent experiments with a computer system they designed not only to identify language and speech disorders, but also to recommend courses of action.

Pretty cool, I guess, but I wondered: Couldn't you tell if a kid has speaking problems by just, well, listening to him or her? As it turns out, that's exactly what the computer system does. It analyzes audio recordings of children reading a story and looks for irregularities in their speech patterns.

To teach the system how to do this, researchers John Guttag and Jen Gong used a method called machine learning, in which a computer searches huge sets of data "for patterns that correspond to particular classifications."

That data was gathered by Tiffany Hogan and Jordan Green, scientists who helped design the program. They said it mainly looks for pauses in the child's speech when he or she is reading.
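I couldn't dig through the researchers' actual code, but a crude version of pause-finding isn't hard to imagine: treat stretches of low audio energy as silence and measure how long they last. Here's a hedged sketch (the frame size, threshold, and NumPy-based approach are my guesses, not theirs):

```python
import numpy as np

def find_pauses(samples: np.ndarray, rate: int,
                frame_ms: int = 25, energy_thresh: float = 1e-4,
                min_pause_s: float = 0.5) -> list[tuple[float, float]]:
    """Return (start, end) times of low-energy stretches >= min_pause_s.
    Assumes audio samples normalized to [-1, 1]."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(float) ** 2).mean(axis=1)  # mean power per frame
    silent = energy < energy_thresh

    pauses, start = [], None
    for i, s in enumerate(silent):
        if s and start is None:
            start = i                       # a silent stretch begins
        elif not s and start is not None:
            if (i - start) * frame_ms / 1000 >= min_pause_s:
                pauses.append((start * frame_ms / 1000, i * frame_ms / 1000))
            start = None                    # the stretch ended
    if start is not None and (n_frames - start) * frame_ms / 1000 >= min_pause_s:
        pauses.append((start * frame_ms / 1000, n_frames * frame_ms / 1000))
    return pauses
```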

The MIT researchers even took age and gender into account when training the computer system, using a technique called residual analysis. After identifying correlations between the gender and age of test subjects and the "acoustic features" of the way they spoke, Gong corrected for those correlations before feeding the data into the machine-learning program.
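The articles don't spell out the math, but the gist of residual analysis is straightforward: first predict each acoustic feature from age and gender, subtract that prediction, and train the classifier only on the leftover (the residual). A sketch using scikit-learn, with entirely made-up data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical data: rows are children, columns are acoustic features
# (e.g., average pause length), plus age/gender and a diagnosis label.
rng = np.random.default_rng(0)
n = 200
age_gender = np.column_stack([rng.uniform(5, 12, n),   # age in years
                              rng.integers(0, 2, n)])  # gender (0/1)
features = rng.normal(size=(n, 3))                     # acoustic features (fake)
labels = rng.integers(0, 2, n)                         # disorder yes/no (fake)

# Residual analysis: model each feature as a function of age and gender,
# then keep only the part the demographics can't explain.
demo_model = LinearRegression().fit(age_gender, features)
residuals = features - demo_model.predict(age_gender)

# Train the actual classifier on the demographic-corrected residuals.
clf = LogisticRegression().fit(residuals, labels)
```

That way, a five-year-old's naturally longer pauses don't get mistaken for a disorder: the classifier only sees what's unusual for a child of that age and gender.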

All things considered, this interesting development may prove vital in identifying and correcting people's language and speech problems in the future.





Friday, September 16, 2016

Spit Takes (And It Gives a Little, Too)




One of the breakthroughs of modern science is the mapping of the human genome. We now know a great deal about our genes--which genes cause which traits, which ones make us more susceptible to diseases, and even what parts of the world our recent ancestors came from.

23andMe is the eminent personal genetics service. Since its founding in 2006, it has grown to become the go-to company if you want to have your genome sequenced. 23andMe is perhaps most well-known for the health information it provides: if a user has a genetic variant that scientific studies have shown to be associated with developing, say, Tay-Sachs disease, then 23andMe will notify them. The same goes for other diseases and conditions like sickle cell anemia, lupus, and lactose intolerance. It can be useful for people who want to stay healthy and get the jump on preventing what they may be genetically predisposed to. However, much of the health information 23andMe provides has been limited by the FDA, because Uncle Sam won't let anyone have any fun.

Another interesting aspect of 23andMe is its Ancestry Composition, which basically tells you how much of your DNA comes from certain population groups around the world. The company does this by comparing your DNA to that of people in 31 "reference populations"--that is, people whose families have supposedly lived in the same region for the past half millennium or so. It's only an estimate, but it's widely considered accurate. Here are my results, and I think they're pretty accurate considering what I know about my own family history:

(Once I found out 2/3 of my genome comes from African populations and 1/3 from European ones, the oreo jokes lobbed at me in middle school took on a whole new meaning.)

I was interested in how they do this, so I did some digging and found that (of course) a lot of it involves computer science.

So, how does it work? Well, first, 23andMe sends the customer a kit; the customer provides a saliva sample and mails it to one of the 23andMe labs in California or North Carolina. At the lab, scientists extract DNA from the cells in the spit and determine exactly which bases (A, C, T, or G) the customer has at positions along each of their 23 pairs of chromosomes. This raw data can be downloaded and viewed in the terminal, though as a layman I find it totally useless until it's been interpreted.
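If you're curious, the raw export is (at least in the files I've seen) just a tab-separated list of markers, one per line. A hedged parsing sketch, assuming that format:

```python
def read_raw_genome(path: str) -> dict[str, str]:
    """Parse a 23andMe-style raw data file into {rsid: genotype}.
    Assumes tab-separated columns -- rsid, chromosome, position,
    genotype -- with '#' comment lines, as in the exports I've seen."""
    genotypes = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue  # skip header comments
            rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
            genotypes[rsid] = genotype
    return genotypes

# e.g., genotypes["rs4988235"] might be "AG" -- one base from each parent.
# (rs4988235 is the marker tied to the lactose intolerance mentioned above.)
```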

23andMe interprets genetic code for its Ancestry Composition using a modified version of the (apparently) famed computer program BEAGLE, which it calls "Finch." First, 23andMe compares the customer's DNA against "a set of 10,418 people with known ancestry, from within 23andMe and from public sources" to work out which stretches of DNA were inherited from which parent. This is called "phasing."

After phasing, 23andMe uses something called a support vector machine (SVM for short) to determine which chunk of your DNA most likely originated in which population group. For example, the SVM clearly recognized that most of my DNA is most common among people living in Sub-Saharan Africa, so it gives an estimate of that amount.
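Here's roughly what that step looks like in code. 23andMe's actual pipeline is proprietary, so this is a toy sketch: random numbers stand in for encoded DNA windows, and the labels are just example population groups:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy stand-in for the reference panel: each row encodes one window
# (chunk) of a chromosome for one reference individual, and the label
# is that individual's known population group.
reference_windows = rng.normal(size=(300, 50))
reference_labels = rng.choice(["Sub-Saharan Africa", "Europe", "East Asia"], 300)

svm = SVC(kernel="linear").fit(reference_windows, reference_labels)

# Classify each window of the customer's (phased) genome, then tally
# the assignments to estimate the overall ancestry percentages.
customer_windows = rng.normal(size=(100, 50))
assignments = svm.predict(customer_windows)
for group in np.unique(assignments):
    share = (assignments == group).mean()
    print(f"{group}: {share:.0%}")
```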

Next, the 23andMe scientists move on to "smoothing," "calibration," and "aggregation," a holy trinity of fancy words that essentially means they're just checking and re-checking their work. And then, voila! The user receives an email that their reports are ready and can then view them online.

23andMe could definitely use some improvement, such as larger African and Asian reference populations. Still, I marvel that something like this is available to us. It's practically magic.


Friday, September 9, 2016

Can Algorithms Be Racist? (or, "Ku Klux Klomputer")


I know it's become a hackneyed thing to say, but it's still just as true as ever: nobody's perfect. (Although Batman definitely comes pretty close; he is the world's greatest detective and has definitely defeated Superman on multiple occasions with the use of kryptonite and sheer wit.) Everyone has flaws, vices, and prejudices, and everyone screws up from time to time.

That being said, it would be preposterous for any of us humans to purport to create anything intelligent without the flaws we haven't managed to shake in the 200,000 years our species has been on the planet. Therefore, it should come as no surprise that we've got some AIs that exhibit some of our less admirable qualities.

Recently, an online beauty pageant became the first to be judged entirely by an artificial intelligence. Beauty.AI launched about a year ago; the idea was that people would upload photographs of themselves to the website, and the AI would select the ones it deemed the most attractive.

There were 44 winners in total, and the programmers who created the algorithm Beauty.AI used to judge noticed one common factor among them: all of them, barring one, were white.

Doesn't seem like that huge of an issue, right? It wouldn't be if there hadn't been thousands of entries from dark-skinned people, mostly from India and Africa. You'd think the AI would select more than one dark-skinned amateur model, but it showed an obvious preference for lighter-skinned people.

So, what's the deal? Why is this artificial intelligence seemingly racist? The answer's a lot simpler than you might think: In its "youth," the AI wasn't exposed to a plethora of minorities.

It all comes back to algorithms, explained Youth Laboratories, the group that created Beauty.AI. Of course, an AI wouldn't automatically know what humans tend to find beautiful, so they taught Beauty.AI what to look for in a contestant by training its algorithm on thousands of portrait photos. These photos were, overwhelmingly, of white people. Few people of color were included.

Alex Zhavoronkov, chief science officer of Beauty.AI, was shocked by the contest's winners. Still, he understood why so few people of color were included. "If you have not that many people of color within the dataset, then you might actually have biased results," he commented. "When you're training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased."
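You can reproduce the gist of the problem in a few lines of toy code: train a classifier on data where one group barely appears, and watch accuracy crater for that group. Everything here (the groups, features, and label rule) is made up purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy "training set": 950 samples from group A, only 50 from group B.
# Each group's labels follow the same kind of rule, but the rule sits
# in a different region of feature space -- and the model barely sees
# group B during training.
def make_group(n, center):
    X = rng.normal(loc=center, size=(n, 5))
    y = (X.sum(axis=1) > center * 5).astype(int)  # arbitrary label rule
    return X, y

X_a, y_a = make_group(950, 0.0)
X_b, y_b = make_group(50, 3.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Evaluate on fresh samples from each group.
X_a_test, y_a_test = make_group(500, 0.0)
X_b_test, y_b_test = make_group(500, 3.0)
print("accuracy on group A:", model.score(X_a_test, y_a_test))
print("accuracy on group B:", model.score(X_b_test, y_b_test))
```

In runs like this, group A scores high while group B hovers near coin-flip accuracy--not because the model "dislikes" group B, but because it never saw enough of it to learn anything.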

This is definitely a problem programmers must anticipate if they're going to be most effective. But even so, could a problem as abstract as determining whether what an artificial intelligence finds attractive is culturally sensitive ever really be solved? Is it an issue even worth remedying? Just some food for thought, I suppose.



Thursday, September 1, 2016

To Boldly (Al)Go(rithm) Where No Man Has Gone Before



No Man's Sky is an online video game designed by Hello Games that came out for PC and PlayStation 4 early last month. In it, the player controls a spaceship and traverses the galaxy, discovering quintillions--yes, literally quintillions--of randomly-generated planets covered in randomly-generated flora and fauna. If the player happens to be the first to reach a planet--and they often are, given the game's size--then they get to name it as well as the species they encounter on it. Pretty cool, huh?

Due to its potential for exploration (who doesn't want to play Captain Kirk?), the game was highly anticipated. I can personally remember watching promo videos for it on YouTube and reading about it in the gaming magazine Game Informer. People were really impressed by how big No Man's Sky would be, and perhaps more so because the game was designed by a small team of fewer than twenty designers.

No Man's Sky is such a humongous game that it'll take five billion years for players to discover every planet in it. No, seriously.

Now, obviously, a team that small didn't take the time to design and program that many individual planets and species by hand. So, how did they do it? They used a technique called procedural generation, in which the computer saves a designer an innumerable amount of time by creating art on its own.
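Here's a tiny sketch of the core trick (definitely not Hello Games' actual code): make every planet a pure function of its coordinates and a global seed, so nothing has to be stored or hand-designed:

```python
import hashlib

# The core trick of procedural generation: a planet's properties are a
# pure function of its coordinates and a global seed. Nothing is stored;
# every player who visits (x, y, z) "generates" the identical planet.
# (A toy sketch -- No Man's Sky's real pipeline is far more elaborate.)

GALAXY_SEED = 42  # hypothetical global seed shared by all players

def planet(x: int, y: int, z: int) -> dict:
    digest = hashlib.sha256(f"{GALAXY_SEED}:{x}:{y}:{z}".encode()).digest()
    return {
        "radius_km": 2000 + digest[0] * 40,                    # 2,000-12,200 km
        "terrain": ["desert", "ocean", "jungle", "ice"][digest[1] % 4],
        "has_fauna": digest[2] % 3 == 0,                       # ~1 in 3 planets
    }

# Same coordinates always yield the same planet, for every player:
print(planet(10, -4, 77))
print(planet(10, -4, 77))  # identical
```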

Procedural generation isn't anything new in video games; in fact, the much better-known Minecraft has been utilizing it for years already. No Man's Sky, however, trumps every other game in terms of sheer magnitude.

It's still got some issues, though. In fact, No Man's Sky hasn't exactly received a plethora of positive reviews, with players complaining that far too many of its planets look mostly the same. I guess this algorithm, however admirable, could still use a good bit of improvement.

P.S. I am very sorry about the title of this post.







IBM's Watson AI Goes to Film School

As of this week, artificial intelligence is one step closer to outpacing us intellectually and enslaving us all. What step has AI lately taken in its quest to dominate and ultimately subjugate mankind? It's started studying film.

If you keep up with what's new in cinema--or have recently undergone the odious experience of having to sit through a thirty-second un-skippable YouTube ad--then you may know about the new sci-fi/horror/suspense movie slated to be released on September 2nd by 20th Century Fox, Morgan. If you don't, here's a little info: Morgan stars Kate Mara, better known for playing the blander version of Sue Storm, and Paul "Surely-I-Deserve-An-Oscar-Just-As-Much-As-DiCaprio" Giamatti. The eponymous Morgan is a girl artificially created by a team of scientists (what could possibly go wrong?), who raise her in their compound as some manner of experiment in human development. Everything's fine and dandy until Morgan grows into a teen and begins to kill people, which is of course a big no-no even if your parents never let you go out on the weekends.



Anyway, as a promotional stunt for its new movie about scientists trying to play God, Fox teamed up with a company that's been relatively successful at playing God, at least in terms of creating an intelligent entity. Technology giant IBM arranged for its artificial intelligence, Watson, to "watch" Morgan and create an original movie trailer for it. This is the first movie trailer ever created by an AI. Since Watson's already proved itself to be quite smart by seriously mopping the floor with some of the world's best players of Jeopardy!, I guess everyone figured it wouldn't be too shabby at analyzing a film, picking out the most suspenseful bits, and putting it all in a trailer.

So how did an AI accomplish this? Well, the IBM team programmed a system to basically give Watson a crash course in what makes a horror movie trailer creepy. The training was threefold: first, there was a visual analysis of things like scenery and people's facial expressions. Then there was an audio analysis of ambient sounds, music, and the emotion in characters' voices. Finally, Watson was made to analyze the composition of entire scenes, including location and lighting.

Watson was then fed the movie itself and came up with the ten scenes it determined to be the scariest. Lastly, it compiled what it deemed to be a good trailer into a single video, which you can watch here:




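IBM hasn't published Watson's internals, but the selection step presumably boils down to something like this toy sketch: score every scene on the visual, audio, and composition analyses, then keep the top ten. The feature names and weights below are entirely made up:

```python
# A toy sketch of the selection step (IBM's real system is unpublished;
# the scenes, feature names, and weights here are all made up).
scenes = [
    {"id": 1, "visual_dread": 0.2, "audio_tension": 0.1, "low_light": 0.3},
    {"id": 2, "visual_dread": 0.9, "audio_tension": 0.8, "low_light": 0.7},
    {"id": 3, "visual_dread": 0.6, "audio_tension": 0.9, "low_light": 0.5},
    # ... one entry per scene in the film
]

def scariness(scene: dict) -> float:
    """Combine the visual, audio, and scene-composition analyses."""
    return (0.4 * scene["visual_dread"]
            + 0.4 * scene["audio_tension"]
            + 0.2 * scene["low_light"])

# Pick the ten highest-scoring scenes for the trailer.
trailer = sorted(scenes, key=scariness, reverse=True)[:10]
print([s["id"] for s in trailer])
```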
What do you think? Is Watson sufficiently savvy about what makes for an eerie trailer? Personally, I think it's fairly well done. The "Hush, little baby, don't you cry" that plays slowly in the background was an especially nice touch. I've really got to hand it to IBM; they designed an excellent program that taught Watson the ins and outs of trailer-making.

Now let's just hope this AI-made trailer for Morgan isn't as deceptive as the human-made trailer for Suicide Squad.
