“When I was a boy, everyone knew robots were going to kill us,” Kris Hammond told attendees at Northwestern Political Union’s panel on the Ethics and Regulation of Artificial Intelligence. Hammond, a computer science professor at Northwestern who specializes in AI research, moderated the evening’s discussion. “Now we know they’re going to kill us in interesting and new ways.”
It was a chilling way to begin the discussion, which touched on a range of topics including brain-computer interfaces, corporate monopolization of data and AI’s potentially destabilizing effect on the labor market.
Panelist Larry Birnbaum, also a professor of computer science at Northwestern, believes that the proliferation of AI technologies will have a fundamentally different impact than some of the recent revolutions in agriculture, electricity and computing. “I think this phase of automation will be different, honestly,” he predicted. “It will move up the food chain and kill many of our jobs.”
In the medium term, which he described as roughly the next 50 years, Birnbaum explained how AI would create tremendous wealth, while contributing to major societal and economic disruption. The transition seems all but certain to wipe out scores of jobs in data analytics, transportation, manual labor and various other sectors. In the longer term, he thinks AI will bring on “radical transitions for what it will mean to be a human being.”
John Villasenor, a panelist and UCLA professor who studies the intersection of policy and technology, described himself as more optimistic. “Agriculture used to be much more labor intensive, but I can’t imagine anyone feeling nostalgic for oxcarts,” he said. In the future, we may look back at strenuous modern jobs like strawberry picking the same way.
Or, consider traffic accidents and fatalities caused by overworked truck drivers. Humans weren’t built to drive for fifteen hours every day. Autonomous vehicles, however, can drive around the clock without nodding off. “I’m old enough to remember travel agents,” Villasenor mused. Now, people can reserve flights from their smartphone. Sometimes it seems automation is better for everybody.
However, Villasenor also noted that AI is limited by the data humans provide it. In the emerging field of predictive policing, for example, algorithms are trained to predict and identify citizens with a high likelihood of committing crimes. Due to the disproportionate policing and sentencing of African Americans in many communities, these data-crunching algorithms tend to be fed biased information, and in turn, deliver biased results.
In a world increasingly built on data, both panelists agreed that privacy was quickly evaporating. “We all voluntarily carry around a tracking device,” Villasenor told the audience. “It’s called a smartphone.” Centuries ago, he explained, the country’s founders believed that the biggest threat to privacy was the government. Today, the larger threat may be corporate: “Google knows more about you than the NSA.”
Birnbaum added that Google wants to know much more intimate information as well. The NSA doesn’t care about your sexual preferences, but to advertisers, that information can be invaluable. “We were caught by surprise by the impacts of social media and the polarization of cable broadcast networks,” he said. Similarly, it’s hard to understand what the impact of artificial intelligence will be.
As the conversation turned toward bizarre hypotheticals, the panelists grappled with some longer-term ethical questions. If an algorithm is developed to compose music or write books, should it be allowed to copyright its works? How will our legal system adapt to a world where autonomous systems can be creative and make decisions? Or, alternatively, what if an artificially intelligent robot is created that decides to go on a murderous rampage? Could its creator be prosecuted?
The so-called “chain of responsibility” for artificially intelligent systems is hard to define and will likely be even harder to enforce. The topic got even murkier when Birnbaum introduced the possibility of brain-to-computer interfaces where the line between natural and artificial intelligence breaks down.
The questions we need to contend with are deeply philosophical, and all three men ended the talk with pleas for ethicists and philosophy students to research artificial intelligence and other emerging technologies in the coming years.
The panel was hosted by the Northwestern University Political Union, which will hold a debate on the ethics of vegetarianism this Monday, May 13, at 7 p.m. in the Buffett Institute. For more information, visit the Union’s Facebook page.