About 10 years ago, Elon Musk said, “With artificial intelligence, we are summoning the demon.”
Musk referred to AI as humanity’s “biggest existential threat” and later posted on social media that AI could be “more dangerous than nukes.”
Musk also posted that humans could serve as the “biological boot loader for digital superintelligence.”
Artificial intelligence continues to evolve and has become a buzzword in the manufacturing industry, sparking debate over whether it will eventually replace human labor. However, AI experts prefer to describe the technology as a tool that complements humans.
In this Q&A, Manufacturing.net spoke with Stephen Graham, executive vice president and general manager of Hexagon Nexus, and Ivan Madera, CEO and founder of Adaptiv AI, to explore whether Musk’s concerns from a decade ago still stand and what modern-day issues AI presents.
Graham’s and Madera’s answers have been edited for length and clarity.
Nolan Beilstein (NB): Is AI the biggest existential threat to humanity, as Elon Musk said in 2014?
Stephen Graham (SG): No. Although that deserves a bit of unpacking. As a concept, Musk’s argument holds, but today’s technology is nowhere near advanced enough to create this kind of existential threat.
Achieving artificial general intelligence would require intentional design, and we are far from that reality. In this sense, the concern remains more science fiction than immediate threat, one that may not materialize for decades, if ever.
Ivan Madera (IM): If unconstrained or not monitored, yes, AI can become an existential threat to humanity. For example, if AI surpasses human intelligence, it could potentially pursue goals misaligned with human values.
However, when applied properly, AI offers benefits that counteract its potential risks. In manufacturing, AI-led solutions can provide a path toward an autonomous workforce that takes on complex tasks amid a depleted and shrinking skilled labor pool.
NB: How have concerns regarding AI evolved in the last decade?
SG: AI has become more powerful and more accessible due to increased cloud processing capabilities. Concerns are no longer driven by the theoretical risks of superintelligent machines but by the pace of real-world advancement.
The rise of generative AI, which can even pass the Turing test in some cases, has made the idea of artificial general intelligence feel more tangible, even though it remains theoretical.
IM: The increased use of AI and automation in handling sensitive information has raised valid concerns about data privacy and security. However, if models or AI are used in private environments, they can be highly impactful in safeguarding sensitive data.
NB: Aside from the fictional threats, what are the actual present dangers of AI?
SG: There are two major concerns with AI today.
First, AI is a powerful productivity tool that can be used for good or misused. For example, a bad actor could leverage generative AI to create harmful pathogens. It’s an ongoing arms race rather than a unique threat, but one that warrants attention.
The second, and arguably more significant, issue is the unintended consequences AI can have. A classic example is the paper clip thought experiment, in which an AI instructed to maximize paper clip production eventually turns the entire planet into paper clips. The AI succeeds at its task but creates overwhelming unintended consequences.
While this is a fictional story, it illustrates a real-world risk we see today, particularly in social media. AI-driven engagement algorithms can inadvertently fuel mental health crises, political polarization and even threaten democracy.
IM: There are several present dangers and challenges posed by AI systems that are already having real-world impacts.
Bias and discrimination: AI systems can perpetuate or even amplify human biases present in the data they are trained on, leading to unfair and discriminatory outcomes, particularly for marginalized groups.
Lack of transparency and accountability: The complexity of AI systems can make it difficult to understand how they make decisions, leading to a lack of transparency and accountability when things go wrong. Human oversight in pre-training models is paramount.