The online ad that led you to your new favorite pair of shoes might seem innocuous, but according to University of Virginia Associate Professor of Economics and Computer Science Denis Nekipelov, the algorithms behind such ads could lead to an unforeseen financial crash – something he hopes his research will prevent.
Nekipelov’s study of the automation of online advertising, co-authored with Vasilis Syrgkanis, a researcher at Microsoft, and Éva Tardos, a professor at Cornell University, drew attention from industry giants and was named “Best Paper” at the ACM Conference on Economics and Computation, a prestigious international computer science conference.
Companies like Google, Microsoft or Facebook allocate advertising space through automated auctions. Advertisers place bids in the hope that their product appears when you search for something like “black high heels.” Humans are physically incapable of placing real-time bids for every query, so automating the bid process was a natural step.
In theory, automated auctions should arrive at an optimum price via John Nash’s equilibrium principle, enshrined in pop culture by the movie “A Beautiful Mind.” The principle assumes bidders fully understand other bidders’ strategies and will use that knowledge to arrive at their own stable bidding strategies. In practice, Nekipelov’s research demonstrates that the Nash equilibrium is unrealistic for today’s online advertising markets.
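To see what that assumption demands, consider a stripped-down, sealed-bid version of such an auction. The sketch below (a hypothetical illustration in Python, not the model from the paper) checks whether a set of bids is a Nash equilibrium by testing whether any single bidder could do better by changing only their own bid while everyone else stays put; all values and bids are made up for illustration.

    # Hypothetical sketch: a single-slot, sealed-bid second-price auction and a
    # brute-force check of whether a bid profile is a Nash equilibrium.

    def utility(values, bids, bidder):
        """Payoff of `bidder`: the highest bid wins and pays the second-highest bid."""
        winner = max(range(len(bids)), key=lambda i: bids[i])
        if bidder != winner:
            return 0.0
        second_highest = max(b for i, b in enumerate(bids) if i != winner)
        return values[bidder] - second_highest

    def is_nash_equilibrium(values, bids, bid_grid):
        """True if no bidder can gain by unilaterally switching to another bid."""
        for bidder in range(len(bids)):
            current = utility(values, bids, bidder)
            for alternative in bid_grid:
                trial = list(bids)
                trial[bidder] = alternative
                if utility(values, trial, bidder) > current + 1e-9:
                    return False
        return True

    values = [1.0, 0.6]                      # hypothetical per-click values
    bid_grid = [x / 10 for x in range(11)]
    print(is_nash_equilibrium(values, [1.0, 0.6], bid_grid))  # True: truthful bidding is stable
    print(is_nash_equilibrium(values, [0.3, 0.6], bid_grid))  # False: bidder 0 should raise its bid
    # In a live market the rival bids and queries change every second, so a
    # profile that passes this check one moment may fail it the next.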
“As convenient as it is to assume equilibrium, I don’t comprehend how it could be even remotely replicated by the data,” Nekipelov said, pointing out the rapid rate of change as thousands of new search queries are entered every second. Bidding strategies cannot hold still long enough for an equilibrium to emerge.
Setting aside the Nash equilibrium, Nekipelov and his co-authors studied a new approach using learning algorithms – equations that make real-time adjustments based on the results of each auction. Using search data from Microsoft, they essentially asked, “What happens when we turn our pricing structures over to these intelligent algorithms? Can it go terribly wrong?”
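What such an algorithm looks like can be illustrated with one standard no-regret rule, exponential weights, which shifts a bidder’s probability toward bids that have earned more in past auctions. Everything in the sketch below – the per-click value, the rival bids, the learning rate – is an illustrative assumption, not the algorithms or data from the study.

    # Sketch of a learning bidder: an exponential-weights (no-regret) rule that
    # adjusts its bid after every auction based on how each candidate bid would
    # have performed. Values, rivals and learning rate are illustrative only.

    import math
    import random

    def hedge_bidder(value, bid_grid, rival_bids, learning_rate=0.2, seed=0):
        """Bid in a sequence of second-price auctions, learning from each result."""
        rng = random.Random(seed)
        weights = [1.0] * len(bid_grid)
        chosen = []
        for rival in rival_bids:
            bid = rng.choices(bid_grid, weights=weights)[0]
            chosen.append(bid)
            # Update every candidate bid by its payoff against this rival bid:
            # win (and pay the rival's bid) only when bidding above the rival.
            # Full feedback on all candidates keeps the sketch short.
            for i, b in enumerate(bid_grid):
                payoff = (value - rival) if b > rival else 0.0
                weights[i] *= math.exp(learning_rate * payoff)
        return chosen

    rng = random.Random(1)
    rivals = [rng.uniform(0.2, 0.9) for _ in range(2000)]
    bid_grid = [x / 10 for x in range(11)]
    bids = hedge_bidder(value=0.8, bid_grid=bid_grid, rival_bids=rivals)
    # Later bids should cluster near the bidder's true per-click value of 0.8.
    print(sum(bids[-200:]) / 200)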
“In one sense, learning algorithms are really good for advertisers. The algorithm will do the work and put them somewhere near their optimal bid,” Nekipelov said. “At the same time, that is actually a bit scary. What are the consequences of giving full trust to these automated bidding strategies? Could the market crash because everyone is using them?”
The team found that most advertisers are bidding at about 60 percent of their value, meaning they are paying well below what an ad is actually worth to them. This is good news for advertisers, but not so good for the advertising platform, as there is money left on the table.
Taken to extremes, this inefficiency could be dangerous; Nekipelov even likened it to a Terminator-style scenario in which machines leave little room for human intervention. A misstep in one algorithm could irrationally inflate or deflate prices and skew the market.
Take the case of Amazon’s $24 million book. An evolutionary biology book, “The Making of a Fly” by Peter Lawrence, was once priced at $23,698,655 on Amazon. The only two sellers offering new copies were each using pricing algorithms pegged to the other’s price, so every automatic markup compounded the last. One seller’s algorithm raised the price, the other automatically followed suit, and a $100 book ended up listed for millions.
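The feedback loop behind that listing is easy to recreate. In the toy simulation below, two sellers each reprice as a fixed multiple of the other’s price; the multipliers, roughly 0.998 and 1.27, are close to those reported in later analyses of the incident but are used here purely for illustration, as is the $100 starting price.

    # Toy recreation of a runaway pricing loop between two repricing algorithms.

    def price_war(start_price, undercut_ratio, markup_ratio, cycles):
        """Track both listed prices as each seller's algorithm reacts to the other."""
        price_a = price_b = start_price
        history = []
        for _ in range(cycles):
            price_a = undercut_ratio * price_b   # seller A prices just under seller B
            price_b = markup_ratio * price_a     # seller B marks up seller A's price
            history.append((price_a, price_b))
        return history

    for cycle, (a, b) in enumerate(price_war(100.0, 0.998, 1.27, 40), start=1):
        if cycle % 10 == 0:
            print(f"cycle {cycle}: seller A ${a:,.2f}, seller B ${b:,.2f}")
    # Each cycle multiplies prices by roughly 0.998 * 1.27, about 27 percent,
    # so the $100 listing passes $1 million in about 40 repricing cycles.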
Situated in an obscure corner of the Internet, the price war was comic. But imagine if such a thing were to occur with companies like Facebook, with huge revenues and stocks at stake.
“Volatility in this market can be very bad and we want to know if learning algorithms increase the risk of these very oscillatory outcomes,” Nekipelov said.
Analyzing data from Microsoft’s Bing search engine, Nekipelov and his co-authors could predict how different learning algorithms would behave over time, creating a roadmap that allows the people who rely on these markets to forecast, and even reverse-engineer, how a market will behave.
“The current results are pretty promising,” he said. “If the algorithms are reasonable from a computer science standpoint, we can predict what the outcomes will be.”
Unsurprisingly, companies like Facebook and Google are interested in predicting how their advertisers might behave and have approached the researchers to learn more. The implications could extend to almost any sector that uses automated algorithms to search for an optimum – from financial markets to automated driving.
A few decades down the road, if everyone is driving automated cars like the prototypes Google is churning out, Nekipelov speculated, traffic could be lessened by successful learning algorithms or snarled by unsuccessful ones.
Another example could involve curated content on news sites; showing people content optimized for their enjoyment could increase readership, but also polarize the population by affirming preconceived notions.
Negotiating such challenges will require tremendous collaboration among disciplines. Nekipelov, who serves on the advisory board of U.Va.’s Data Science Institute, has already begun bringing disciplines together for what he calls “research at the frontiers” – connecting computer science, economics and the social sciences.
“We are at the point now where social behavior is extremely connected with technology,” he said. “To understand how it all fits together, you need people with a clear knowledge of the technology and people comfortable with social sciences.”
Students who combine those two disciplines, Nekipelov believes, can successfully compete for top jobs at the technology companies reinventing the future. He hopes partnerships between the Data Science Institute and various science and liberal arts disciplines will help U.Va. students gain those skills.
Such partnerships could eventually give other markets the same clarity that Nekipelov and his co-authors have brought to online advertising, as advertising platforms, advertisers and consumers work to understand the online and offline implications of automation.