Why the rise of AI requires the revitalization of the humanities

Opinions expressed by the contributors are their own.

No college campus is complete without a fierce rivalry between STEM and humanities students – and it’s fair to say that scientists have been winning the competition for a long time now. Artists and thinkers may have dominated during the Renaissance, but the industrial age has belonged to the technical worker. Apple’s market capitalization exceeds the GDP of 96% of the world’s economies, and digitally transformed businesses now account for almost half of global GDP.

But as technology reaches milestone after milestone and approaches a certain critical mass, I believe the humanities are making a long-awaited comeback. Technological innovation – especially artificial intelligence – forces us to confront critical questions about human nature. Let’s look at some of the biggest debates and how disciplines such as philosophy, history, law and politics can help us answer them.


The rise of sentient AI

The potentially ominous or destructive consequences of artificial intelligence have been the subject of countless books, films and TV series. For a while it might have seemed like nothing more than fear-mongering speculation – but as technology continues to advance, ethical debates are starting to seem more relevant.

As AI becomes able to replace an increasing number of occupations and many people become unemployed, it raises all sorts of moral dilemmas. Is it the government’s role to provide a universal basic income and completely restructure our society, or do we let people fend for themselves and call it survival of the fittest?

Then there is the question of how ethical it is to use AI to improve human performance and avoid human failure in the first place. Where do we draw the line between a “human” and a “machine?” And if the lines become blurred, do robots need the same rights as humans? The decisions we make will ultimately determine the future of humanity and can make us stronger or weaker (or see us eliminated entirely).

Humans or machines?

One of the AI advances that is raising eyebrows is Google’s Language Model for Dialog Applications (LaMDA). The system was first introduced as a way to connect different Google services, but it ended up sparking a debate over whether LaMDA was actually sentient – a claim Google engineer Blake Lemoine made after seeing how realistic his conversations with it were.

In the end, the general consensus was that Lemoine’s arguments were frivolous. LaMDA only uses predictive statistical techniques to hold a conversation – the fact that its algorithms are sophisticated enough to apparently sustain a dialogue does not mean that LaMDA is sentient. However, the episode raises an important question: where would things stand if a theoretical AI system were able to do everything a human can, including having original thoughts and feelings? Would it deserve the same rights humans have?


The Turing Test

The debate about what we should really count as human is nothing new. Back in 1950, Alan Turing proposed what became known as the Turing Test: if a machine can hold a conversation well enough that a human judge cannot reliably tell it apart from a person, we have grounds to call it intelligent – while sidestepping the harder question of whether machines have some level of “consciousness.”

However, not everyone agrees. Philosopher John Searle countered with a thought experiment known as the “Chinese Room”: a person who does not speak Chinese sits in a room with a set of written instructions for manipulating Chinese symbols. By following those instructions, they can pass responses through a slot in the door that convince someone outside the room that they understand Chinese – when it is clear that they do not. For Searle, manipulating symbols convincingly is not the same as understanding them.

According to Lemoine, Google is unwilling to allow a Turing test to be performed on LaMDA, so it seems Searle is not alone in his reservations. But who will solve these problems?

A question for the humanities

As more of our lives are shaped by AI, more of these questions will emerge. 80,000 Hours, a non-profit organization run by Oxford academics that focuses on how individuals can have the greatest impact in their careers, has highlighted positively shaping the development of artificial intelligence as one of the most pressing problems facing the world right now.

And while some of the work will likely focus on technical research into how to program AI in ways that serve humans, policy and ethics research will also play a large role. We need people who can grapple with debates such as which tasks humans find fundamental value in performing and which should be handed over to machines, or how humans and machines can collaborate in human-machine teams (HMT).

Then there are all the legal and political implications of a society filled with AI. For example, if an AI engine driving an autonomous car makes a mistake, who is responsible? There are arguments that the fault lies with the company that designed the model, the human drivers the model learned from, or the AI itself.

For issues like the latter, lawyers and policy makers are needed to analyze the issues and advise governments on how to respond. Their efforts will also be complemented by historians and philosophers who can look back and see where we have fallen short, what has kept us going as a human race and how AI can fit into this. Anthropologists will also have much to offer based on their studies of human societies and civilizations over time.


An exponential increase in AI requires the revitalization of the humanities

The rise of AI may happen faster than anyone can predict. Metcalfe’s Law states that the value of a network grows with the square of the number of its members – meaning each new addition makes the network disproportionately more powerful. We’ve seen this happen with the proliferation of social networks, but the law is a potentially terrifying prospect when we talk about the rapid ascent of AI. And to understand the issues outlined in this article, we need thinkers from all disciplines. Yet the number of humanities degrees awarded in the United States fell by 25% between 2012 and 2020.
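Metcalfe’s quadratic growth is easy to see by counting connections: a network of n members contains n(n−1)/2 distinct pairwise links. A minimal sketch (the function name and sample sizes are illustrative, not from any particular source):

```python
def pairwise_links(n: int) -> int:
    """Distinct pairwise connections among n network members."""
    return n * (n - 1) // 2

# Doubling the membership roughly quadruples the number of links,
# and hence (per Metcalfe's Law) the potential value of the network.
for n in (10, 20, 40):
    print(n, pairwise_links(n))
```

Going from 10 to 20 members takes the link count from 45 to 190 – more than four times as many – which is why growth in network value so often outpaces growth in network size.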

As AI becomes a bigger part of our daily lives and technology continues to evolve, no one in their right mind would deny that we need brilliant algorithm developers, AI researchers and engineers. But we also need philosophers, politicians, anthropologists and other thinkers to guide AI, set limits and help in situations of human failure. This requires people with a deep understanding of the world.

At a time when the humanities are widely dismissed as ‘pointless degrees’ and young people are discouraged from studying them, I would argue there is a unique opportunity to revitalize them as disciplines that are more relevant than ever – though this requires collaboration between technical and non-technical fields, which is complex to build. These roles will inevitably be filled either way; how well they are filled will depend on our ability to prepare future professionals who bring both multidisciplinary and interdisciplinary views of the humanities.
