After a presidential race that gave little attention to AI, the technology now appears to be front and center. For an analysis, UVA Today checked in with University of Virginia Darden School of Business professors Mike Lenox and Tim Laseter, both also affiliated with the school’s LaCross Institute for Ethical Artificial Intelligence in Business.
Q. Tuesday’s announcement made quite a splash regarding the years ahead for AI. What does that say about AI’s place in the economy and society?
Lenox: There is a broad race to drive and lead the development of advanced AI, both among leading technology companies and among nation-states, specifically China and the U.S. The fact that the announcement of this alliance between private businesses was made at the White House highlights the geopolitical importance of AI.
Laseter: It also reinforces the scale of investment the tech leaders envision as necessary to achieve AI’s full potential. AI is already changing the world of work, but we must also remember that investors, entrepreneurs and politicians can fall prey to “the hype cycle.” For example, in his 2013 State of the Union speech, President Obama said 3-D printing “has the potential to revolutionize the way we make almost everything.” AI has passed through multiple hype cycles since the term was first coined in 1955.
Q. What are the potential benefits of having such unbridled investment and momentum around the growth of AI?
Lenox: The belief is that AI will be a massive transformational technology impacting many, if not most, aspects of our lives. There is incredible potential for driving value creation, creating operational efficiencies and innovating solutions to pressing challenges.
We have been on a 50-plus-year journey with artificial intelligence. The developments over the past 24 months suggest that we are entering the steep part of the technology S-curve, where we see exponential improvement in the technology. These are exciting times.
Laseter: Much like the dawning of the internet 25 years ago, businesses are experimenting to find the “use cases” that will add real value. Many of those experiments failed during the period of excessive hype, but ultimately successes emerged as technology and consumer preferences evolved.
For example, I co-authored a piece called “The Last Mile to Nowhere” predicting the failure of same-day delivery of grocery items by startups such as Webvan, Kozmo and UrbanFetch. Twenty years later, COVID triggered rapid adoption of home delivery, enabled by smartphones and gig workers, neither of which existed in 2000. The same will happen with AI.
Q. What are the potential negative consequences?
Lenox: These are also scary times. The potential negative consequences of AI are manifest: job displacements exacerbating income inequality, market concentration due to network effects in the underlying technology, the potential for bias in algorithms and the social implications of fast-evolving human-machine interactions.
One area that I am researching is how the demand for AI broadly, and generative AI specifically, is driving a massive growth in demand for computing. One of the big bottlenecks in deploying advanced chips, like those from Nvidia, is building and powering data centers. There is a race for electricity to power these massive new data centers. How we build out this new electrical generation in a way that is expedient and sustainable is a critical question.
Laseter: Again, I reflect on history while looking to the long term. Splitting the atom launched the nuclear era and a Cold War in which the superpowers of the time sought balance.
As I wrote a decade ago in an article titled “Management in the Second Machine Age,” the Industrial Revolution fundamentally transformed the workforce: one-third of the U.S. workforce was once employed in agriculture, forestry and animal husbandry. In the 2012 census, less than 1% of the workforce was employed in “farming, fishing, and forestry occupations.”
AI will trigger similar disruptions, but we aspire to inspire ethical leaders to ensure the innovations do more good than harm.