Christians need a newspaper in one hand and a Bible in the other, according to theologian Karl Barth. In our Talking It Over series, James Read invites thoughtful Salvationists from around the world to reflect on moral and ethical issues. Here, he speaks with Glen O’Brien about artificial intelligence.

Glen, I decided to give ChatGPT (an artificial intelligence chatbot created to hold a conversation with the user) a try recently and asked it to write a poem. I discovered that it could write mediocre poetry impressively quickly! Since then, I have been mystified by the deep fear it has provoked in educators, and I have not understood why some scientists are calling for a moratorium on further development of artificial intelligence (AI) in general. As someone who has studied AI, can you enlighten me?

—Jim 

Essentially, AI, of which ChatGPT is an instance, is built on machine learning. By interpreting patterns in data, AI performs human-like tasks, such as writing prose or poetry, problem solving, calculation and decision-making. We use it every day if we have a cellphone or use a navigation system in our car. It has widespread beneficial uses in industrial and medical settings. It becomes more ethically challenging when applied to military systems (drones, for example). Like all forms of technology, it can be put to either beneficial or destructive use depending on how it is applied and by whom.

—Glen

Some theologians and ethicists are musing about AI consciousness: whether computers could have souls and whether they could have moral rights. You don’t seem to share that concern. Instead, you are urging us to pay more attention to the harms and benefits of AI. Will that be easy?

Last year I was involved in a serious car crash. Within seconds I heard a voice through the car speakers: “Are you OK? 9-1-1 has been called and emergency help is on its way.” I was startled. And amazed. In the circumstances, it was very welcome AI technology. Afterward, however, I got to musing about how much my car’s computer system knew about where I was, where I was going and what I was doing whenever I was in the car. That has made me wary. While I benefited from AI advances, I suspect there are downsides, too. Could this omnipresent tool extinguish privacy? And are there worse harms than privacy infringement that I am not thinking of or able to control? Who (if anyone) knows?

—Jim 

First, I’m glad you’re OK, Jim. Do these things worry me? Yes and no. We need to understand that, while we sometimes experience AI as surveillance, no one is actually sitting at a computer terminal watching us. Our search patterns on the internet are tracked by algorithms: lines of code, not human eyes. Yes, there are companies that will target advertising to us based on the data harvested from that tracking, but there isn’t a person in a dark hoodie watching our every step. Of course, we may decide we don’t want to be tracked online, but most people probably find it less intrusive than telemarketing calls or someone at their door trying to sell them a set of encyclopedias. The navigation system in your car knows exactly where you are only because it is timing signals from satellites orbiting the earth. Is that an invasion of privacy? It feels that way to some, but most people seem willing to surrender that small measure of privacy for the stress-free convenience of arriving at their destination on time.

New technology has always been met with fear, uncertainty and doomsday scenarios. As an educator, I am well aware that ChatGPT gives students new ways to cheat on essays. It’s important, though, to ask how we educators might take advantage of the new technology. Rather than simply banning its use, we might ask how it could be used for better educational outcomes. Research is, after all, information gathering. A tool that can gather massive amounts of information in record time is going to have many legitimate uses beyond cheating on essays.

One of the more worrying aspects of the new technology is the capacity to create “deep fakes”—to artificially render a person’s voice or image in a way that is (almost) indistinguishable from the real person. Used maliciously, this could place an innocent person in a context that makes them appear to have engaged in criminal activity or to be somebody they are not. I shudder to think what the historians of the future might do with artifacts from our era, some of which are real while others are fake. Where is the line between history and art, and between truth-telling and creativity?

—Glen 

That’s an important question. As we both know, however, there are those who claim that there is no such thing as pure truth-telling, no unvarnished history. They contend it’s all mixed with human interests and power struggles: “History has always been written to justify the victor’s way of seeing things.” They say AI’s capacity to create “deep fakes” simply exposes what has long been the agenda of the owners of “mainstream” news channels.

I reject that, believing that there is truth and that finding the truth matters. But AI does cause me to question some time-worn adages. Take, for instance, the idea that necessity is the mother of invention; or, to put it less elegantly and more theologically, that human beings have God-created needs that can be met by tools they have the God-given intelligence to fashion.

AI turns that simple account on its head, doesn’t it? It certainly feels more like AI is an invention in search of a need than a solution to pre-existing needs. It’s here, and now we are scrambling to find (or invent) “needs” it can meet, or so it seems. It feels like AI holds the power, not people.

—Jim 

I also believe there is a distinction between truth and falsehood and that the deliberate attempt to mislead a person through lies is straightforwardly an immoral act. When it comes to ChatGPT, it is clear from its early use that it generates errors and misinformation. This does not mean that it has a sinister intent. It may simply mean that the technology is not yet sophisticated enough to filter out the errors. 

The dissemination of false and inaccurate information is nothing new, however. It has always been the case that media (perhaps mass media in particular) can produce mistakes. Consumers of media have always needed to show discernment—to weigh up claims, to learn how to identify fallacies and distinguish between weak and strong arguments—in short, to exercise critical thinking. Rather than simply banning ChatGPT from the classroom, we need to think about how we might use it to increase students’ capacity for such critical thinking. 

What ChatGPT does is draw on enormous quantities of text gathered from the internet, processing information at a much higher volume and greater speed than I ever could. But it is by no means a flawless process. I performed the vanity experiment of asking ChatGPT who Glen O’Brien was, stating only that he was a theologian and historian. The reply was that it could find no information about Glen O’Brien, so either he does not exist or his work “is not well known or important enough to be noticed.” This was very good for my humility, but I knew from my own experience that I did at least exist and that my work had some value, at least to some reviewers and peers. Clearly, I need to improve my online presence, but ChatGPT is still a long way from being omniscient.

I don’t think, at this stage, that AI has the upper hand over humanity. The technology is simply not yet advanced enough: that would require a sophistication of purpose and independent will that AI does not possess. Might it happen one day? Yes, it might, and the time to think about strategies to deal with (or prevent) such an eventuality is now.

It does not solve the problem simply to ban AI development. The genie cannot be returned to the bottle. Instead, AI industry experts need to develop the kind of sophisticated strategies that theologians and philosophers have always employed to determine the highest goods for human beings (and other beings).

The more we think about questions of meaning, identity and significance, the more likely it is that we will live together in peace. In Denis Villeneuve’s post-apocalyptic film Blade Runner 2049, artificial humans (“replicants”) have developed the capacity to reproduce without human intervention. While human beings have been reduced to hedonists pursuing their own sensual pleasures, the replicants are asking questions about the meaning of their existence, exploring their identity and working for their self-determination and freedom. Perhaps future beings created through technology will have something to teach us about such higher values.

We should begin now to think about whether our future is to be more like Star Wars or Star Trek: civilizations torn apart by armed conflict, or a bold and enlightened exploration of numberless universes, all of them bearing the mark of divine intelligence.

—Glen 

Dr. James Read, OF, was the executive director of The Salvation Army’s Ethics Centre for many years and served as chair of the International Moral and Social Issues Council. Now retired, he attends Heritage Park Temple in Winnipeg. Rev. Dr. Glen O’Brien is a research co-ordinator and lecturer at Eva Burrows College in Melbourne, Australia. 


Comment

On Wednesday, November 15, 2023, Aimee Patterson said:

Thanks for this discussion! One of the bright sides of AI mentioned is the freedom from menial tasks and the freedom for more creative work. ChatGPT can indeed be an asset to critical thinking if we choose to use it that way. But the fear of being replaced or made redundant by AI is always present. Global conversations about setting structures or guidelines for the development and use of technology don’t tend to take place early enough. And when they happen, they aren’t always undergirded by firm moral values and principles. We may be far off from AI gaining self-awareness or independent will. But it’s never too early to ask whether we want this to happen and why. Then we can tackle other questions. Questions like the ones asked about Star Trek’s Data: “Does an android have moral standing?” And “Is it okay to turn him off?”
