How does artificial intelligence (AI) relate to Catholic teachings and human understanding? Today, Dr. Michael Dauphinais is joined by Dr. Saverio Perugini, professor of computer science at Ave Maria University, to discuss the relationship between AI, computing, and the Catholic faith. They explore the ethical, philosophical, and technological implications of AI and its impact on society and spirituality.
You know, did we discover computer science or did we invent it?
I like to think of it as we discovered it.
It's God's creation, it's a property of the universe.
You need both faith and reason.
I think Pascal said that, that there are two equally dangerous extremes.
One is to admit nothing but reason; the other is to exclude reason and take everything on blind faith.
Welcome to the Catholic Theology Show, presented by Ave Maria University.
This podcast is sponsored in part by Annunciation Circle, a community that supports the mission of Ave Maria University through their monthly donations of $10 or more.
If you'd like to support this podcast and the mission of Ave Maria University, I encourage you to visit avemaria.edu/join for more information.
I'm your host, Michael Dauphinais, and today I'm joined by Saverio Perugini.
Welcome to the show.
Thank you, Michael.
Thanks for having me.
Yeah, so glad to have you here.
You're a professor of computer science at Ave Maria.
You've taught at other universities for almost 20 years.
So there's such interest today, I think, about artificial intelligence.
It almost seems like there's an article about it every day in the newspaper.
I think I just saw that the latest issue of Bishop Barron's Evangelization & Culture journal is completely dedicated to artificial intelligence, right?
And what is it, how it's impacting culture, right?
This is just, everyone's talking about it.
So really what I want to do today is pull back the curtain a little bit and see what it actually is.
That way we can understand it better and also recognize its genuine threats and its genuine opportunities.
But it does seem to me that a lot of people think that AI is either going to save us or destroy us, right?
So what will AI do?
Well, I mean, first of all, thanks for having me.
Secondly, yes, I mean, the headlines have certainly been sensational, right?
So I guess what I would say just as a bite-size response to that is we heard the same thing 24 years ago about Y2K, that the world was going to come to an end.
And as we all know, we're here right now and it's 2024 and we're still alive and we're thriving.
So I think we have to sort of take a step like we're going to do in this show, take a step back, kind of take a deep breath and really unpack the question.
The technology behind AI has certainly made leaps and bounds in the last ten years of research.
But the ideas of artificial intelligence go all the way back to the late 60s and early 70s.
So we're not talking about anything that's like entirely new on the face of the earth.
That's great.
So tell us a little bit about what is the nature, what is artificial intelligence and how does it fit within kind of maybe the larger understanding of really what even is computing?
Okay.
So it's hard to define artificial intelligence.
Your classical textbooks give several definitions.
One simple definition, and it's a relative definition, is that what counts as AI is relative to the current state of the art.
So one definition is AI is anything that currently a human can do better than a computer.
So driving a car is AI because currently humans do it better than computers.
Sorting a billion records in a database is not AI because the computer can do it much better than a human could ever possibly do it, right?
So a human being would spend from now until all the time left in the universe adding up hundred-billion-digit numbers, whereas a computer could do that in a couple of seconds.
So that's sort of a definition of AI.
Anything that we can sort of do better than a computer is currently AI.
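To make that arithmetic claim concrete, here is a minimal Python sketch; the million-digit size is a scaled-down, made-up stand-in for the numbers in the example:

```python
# A minimal sketch of the speed gap just described: adding two numbers
# a million digits long takes a computer a tiny fraction of a second.
import random
import time

digits = 1_000_000
a = random.randrange(10 ** (digits - 1), 10 ** digits)  # a million-digit number
b = random.randrange(10 ** (digits - 1), 10 ** digits)

start = time.perf_counter()
c = a + b
print(f"added two {digits}-digit numbers in {time.perf_counter() - start:.6f} s")
```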
Yeah, so that's the definition.
How does it fall within the broader scope of computer science?
Computer science is also challenging to define.
There are lots of definitions.
One is it's the art of problem solving.
Another is it's the use of abstraction to solve problems with computers.
But one definition I like– and it's from a colleague of mine, a professor at the University of Virginia by the name of David Evans, who just wrote a beautiful book on introductory computing.
He essentially says: every discipline has its object of study. Biology is the study of life; chemistry, the study of organic material.
Well, what's computer science the study of?
It's the study of information processes.
And these information processes are all around us.
I mean, we're sitting here with cameras and lights and sensors, and there's information processes going on inside of those devices.
So what specifically do we do with those information processes?
How do we study them?
What we do is we figure out how to describe them, and we use a language to do that, a programming language.
We analyze them, and we use logic and mathematical sort of apparatus to analyze them.
And then we analyze them because we want to improve them, and then ultimately we want to implement them on some type of device, a sensor, a computer, a supercomputer.
So that's, in a nutshell, what computer science is.
That's great.
So that focused really on information processing.
And it's interesting, I think a lot of people, when they think about artificial intelligence and robots that have artificial intelligence, will they be alive?
Would you want to have a robot or an AI girlfriend, an AI boyfriend, and all these different things that are kind of in the media?
And in a certain sense, what we begin to see, and we'll talk about this a little bit more later, but I just want to at least raise the key issue is the difference between information and intelligence.
So computers and computing theory deal with information and they process information, data bits that are processed at incredible rates and all these different things.
But largely you're dealing with things that can be broken down into information, whereas intelligence is actually seeing into the world and drawing realities out of it: coming to understand not simply how much another person weighs, something I can measure, that's information, but in a way who they are.
Seeing into the world and seeing what it is.
That's really what intelligence is.
So from the starting point, one of the things we're going to see as we get a little farther is that that's something in a way a computer can never do.
A computer can never see into the world and form a judgment about the world.
But let's step back a little bit.
So again, that basic distinction between information and intelligence.
Can I just follow up just quickly on that?
I'm certainly not a philosophy professor, but I think I'm talking about these things with the right terms.
Something like this book here is an artifact.
It was created by a human.
But something like intelligence is not an artifact.
It's a property of nature.
It wasn't something that was created.
The philosophy professor Edward Feser uses this analogy: the crater on Mars is no more a sculpture than this book is, you know, a human being.
Right.
So the crater of the face on Mars may look like a sculpture, but we know it's not an artifact.
It's a natural thing that wasn't created.
Yeah, absolutely.
And I think that idea is when you look at texts that have been written by human beings, they seem to be filled with intelligence, but they're only secondarily filled with intelligence.
They were written by a person to be read by a person, but they are just marks on a page.
Correct.
And in some ways what the computer will do is the computer will recognize those marks on a page, recognize patterns, and then mimic those patterns to recreate secondary modes of information processing.
Correct.
But in a certain sense, the book is only kind of intelligent when it is being read and understood by a person.
Correct.
And so it's this element between what we make and in a certain sense who we are, which is really that we've been kind of ultimately made by God.
Correct.
And I just read a quote from an electrical engineering professor; it was the title of a talk he was giving to the IEEE. You said people think it's going to be the end of the world.
One practical question is, is AI going to replace my job?
Yes.
That's a very real concern.
And the title of the talk was something like, AI will not replace humans.
Humans who know AI will replace other humans.
So AI is not going to replace anything.
It's the humans who use that who will replace jobs or things like that.
So let's take a look.
I think right now, probably the most famous or well-known instance of AI, the one really capturing a lot of people's attention and imagination, is ChatGPT.
Certainly changing in some ways education.
Students have easy access to modes of essay production that were hitherto unknown.
And so we're having to like adapt sometimes.
Professors are perhaps recovering in-class writing assignments, oral exams, these different things, to help make sure students are really developing the skills of thinking and writing on their own, coming to those judgments.
So maybe just to start: what is ChatGPT? What does the GPT stand for, and in layman's terms, what does each of those terms mean?
So GPT is a Generative Pre-trained Transformer.
That's just a fancy term; another way to describe it is an LLM, a Large Language Model.
And again, what is that?
It's just alphabet soup, but it's essentially a neural network, which you can think of as a type of algorithm, a mechanized computation.
And what it does is process massive amounts of data, massive corpora, and try to learn.
It's trained.
There's a training phase and then a testing phase.
It tries to learn patterns in pre-existing data, so that when it sees a new situation, it can classify what you've given it based on the training.
It's almost like in a classroom.
You know, we train, we don't train our students, but we teach our students with example problems.
That's training data.
And then, you know, at the end of a month, we give them a test.
And hopefully the test data will match the training data.
If it doesn't, then they're not going to learn very well.
But if it does, we see how well they performed on that.
How well can they solve those new problems?
And that's essentially what happens.
There's a training phase and a testing phase.
Once the training phase is complete, it has learned patterns, so that when it sees new instances of those patterns, it can classify them.
So for instance, you have a million mammograms.
Which ones are going to develop into breast cancer?
Well, if a neural net is trained on those, it can learn the properties of those X-rays, what looks like it might evolve into cancer.
Now, of course, that requires a human physician to classify those.
So we have to know the answer, right?
When we train, we have to know this one is turned into cancer, this one didn't.
So we know the answer.
So then it learns the function from this input to the output.
It learns that.
So when you give it a new one, it can predict.
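To make the training-and-testing idea concrete, here is a minimal Python sketch of supervised learning. The two-number "features" and the labels are invented stand-ins for the expert-labeled scans described above, and the method shown, nearest neighbor, is just one simple way to learn from labeled examples, not the one used in real systems:

```python
# A minimal sketch of supervised learning on made-up toy data:
# each "scan" is reduced to two numeric features, and a human expert
# has already supplied the labels (the "teacher").
import math

# Training phase: labeled examples of (features, label).
training_data = [
    ((0.9, 0.8), "suspicious"),
    ((0.8, 0.9), "suspicious"),
    ((0.1, 0.2), "benign"),
    ((0.2, 0.1), "benign"),
]

def predict(features):
    """Testing phase: classify a new, unseen example by finding the
    most similar training example (1-nearest-neighbor)."""
    _, label = min(training_data, key=lambda ex: math.dist(ex[0], features))
    return label

print(predict((0.85, 0.75)))  # -> "suspicious"
print(predict((0.15, 0.25)))  # -> "benign"
```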
And that's essentially what ChatGPT is doing: trying to predict, in a very crude sense, given a sentence, the likelihood that a given word will be the next word in that sentence.
And if you take that and scale it to a grand level, that's how you give it a prompt and it gives you an entire essay.
Or you give it a prompt and it writes a program for you, which is the world I have to deal with, right?
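As a toy illustration of next-word prediction, here is a minimal Python sketch. The corpus is made up, and real models learn far richer patterns than these simple word-pair counts, but the core task, "given these words, what likely comes next?", is the same:

```python
# A minimal sketch of next-word prediction: count which word follows
# which in a toy corpus, then predict the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def next_word(word):
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # -> "cat" (ties broken by first word seen)
```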
Yeah.
So as far as I understand it then, this sense that what is happening in a way within ChatGPT, that it looks at the entire internet, so to speak, and all the texts that are on the internet, and it breaks them down into zeros and ones and patterns, and it says what sorts of patterns are occurring, and then what sorts of patterns will likely follow next.
And so in that sense, then, it can appear human.
I also heard that they include randomness in AI so that it seems human, because it doesn't always say the same thing.
But that's really just a random number generator stuck into the outputting process. If something moves, we think it's alive; if something changes and doesn't always do the same thing, we tend to think it's alive.
So these things are, again, not actually random.
They're trained, and you're not really random if you're following a random number generator.
It's not the spontaneity of life.
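To illustrate that point, here is a minimal Python sketch of a pseudo-random number generator, a classic linear congruential generator with standard textbook constants: the output looks random, but the same seed always reproduces the exact same sequence.

```python
# A minimal sketch of pseudo-randomness: a linear congruential
# generator. It is an algorithm, not chance -- re-seeding with the
# same value yields the identical "random" sequence every time.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # a "random" number in [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
# Running this again with seed=42 prints exactly the same five numbers.
```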
But you also raised an interesting question there, which is that what AI does is not, so to speak, sharing wisdom or judgments about what is true.
It's simply, in a way, repeating the patterns that are found across the internet.
So AI, in a certain sense, can't be smarter than the common texts that are out on the internet, so to speak.
Yeah.
So there's so much to unpack there.
So yes, and I hope I'm going to pronounce this Latin phrase correctly.
What you're essentially saying is AI doesn't generate ex nihilo.
Would that be correct to say?
I mean, it doesn't generate, it doesn't work in a vacuum, right?
Now, we have to back up just a second.
AI is a huge field, and machine learning is a subfield of AI.
Within machine learning, there are essentially two ways to learn.
One is called supervised learning and one is called unsupervised learning.
Now, with supervised learning, that's essentially learning with the teacher.
The teacher is the data.
You feed into the algorithm the pre-existing data and will learn.
Unsupervised learning is learning without a teacher.
And what that is, is essentially learning by bumping into your environment.
So, for instance, I get to the academic building every morning, I try to find a parking spot or let's say I'm in Publix, an enormous parking lot, and I go at this particular time of day.
And I know, don't go down this aisle because there won't be spots there, I'll go down this one.
So after a while, I do this for a year or so, I kind of learn where the good spots are.
No one taught me that.
I learned it by bumping into the environment, by collecting positive rewards and negative rewards.
This idea that you just brought up about randomness, it's not entirely to make it seem real.
That also serves an integral function, and that is that in AI, particularly in unsupervised learning, when you're learning by bumping into the environment, that's called reinforcement learning.
You need randomness, and when I say randomness, I mean pseudo-randomness, of course; it's an algorithm that generates the randomness.
If you don't have randomness, you're going to get stuck in a local minimum or maximum.
For instance, to take the parking analogy: I'm trying to find a parking spot, I do this for a year bumping into the environment, and I figure out this is the way to go.
Once I've learned, I start exploiting the policy I learned.
But for all I know, once I start exploiting it, I've stopped exploring.
So there's a trade-off between exploration and exploitation.
You explore to learn your policy, and then you exploit it to get the advantage.
But once you're exploiting it, you're not learning anymore.
So for instance, every now and again, I might go down a random aisle of the grocery store parking lot because there might be a better spot there, and then I'll update my conception and my model of the world.
So the randomness is not just to make it appear real.
It serves a function that we want to continue to learn.
We don't want to say, we've learned everything about the world, now we'll just exploit that.
No, there's new avenues to explore in the world.
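To make that exploration-exploitation trade-off concrete, here is a minimal Python sketch of the epsilon-greedy strategy using the parking analogy; the aisle names and probabilities are made up for illustration.

```python
# A minimal sketch of epsilon-greedy reinforcement learning: aisles
# are the choices, finding a spot is the reward. Mostly exploit the
# best-known aisle, but occasionally explore a random one so learning
# never stops.
import random

AISLES = ["A", "B", "C"]
spot_probability = {"A": 0.2, "B": 0.7, "C": 0.4}  # unknown to the learner

rewards = {a: 0.0 for a in AISLES}
visits = {a: 0 for a in AISLES}
epsilon = 0.1  # fraction of the time we explore

def estimate(aisle):
    return rewards[aisle] / max(visits[aisle], 1)

for day in range(1000):
    if random.random() < epsilon:
        aisle = random.choice(AISLES)              # explore
    else:
        aisle = max(AISLES, key=estimate)          # exploit
    found_spot = random.random() < spot_probability[aisle]
    visits[aisle] += 1
    rewards[aisle] += found_spot

print(max(AISLES, key=estimate))  # usually "B", the best aisle
```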
So I know that was a long answer to that question.
That's really helpful to see.
And I think that's an experience a lot of people have had with Roombas, right?
The Roombas apparently were famously designed so that they learned by bumping into things.
That's reinforcement learning, correct.
Right, as opposed to kind of needing to map and then master.
It's more, right, exploratory.
Correct, correct, yes.
That idea that somehow we learn with our bodies, right?
And so computers, almost like machines, need to have bodies so that they can learn, at least physically, what's going on in the world, right?
They certainly need an internal representation of the world.
Yes.
And the creativity is in how you model that.
There have been decades of research on how to model a local environment.
So before we dive further into computing and artificial intelligence, its limits, and the cultural questions, which I definitely want to speak more about, let's just hear a little bit more about you.
And would you talk a little bit about, especially for our audience, how did you get interested in studying computers or computational theory, information processing, as you put it, and how did you integrate that with your faith as well?
Yeah, so as you mentioned, I've been doing this about 20 years.
Thanks be to God, this is my 20th year, academic year, post-PhD.
So I've been here two years.
I spent 18 years at the University of Dayton, which is a Marianist institution up in southwestern Ohio.
I was there 18 years and here two, so that's 20.
I joined there immediately after my PhD, for which I spent six years at Virginia Tech, a sort of large research-one university in southwestern Virginia.
And before that, I was at Villanova University, which is an Augustinian institution in Philadelphia.
So how did I get interested?
I started at Villanova back in the last millennium as a mathematics major, because I loved the beauty of mathematics, and I took a computer science course my second semester there and just fell in love with the creativity of it.
And I think that's a big misconception about computer science, that it's just for techie people.
And really, there's so much beauty.
I mean, very famous computer scientists have said, code is poetry, and it's in some sense a lot more art than science.
There are scientific aspects to it because there are hard limits.
But a lot of the limitations that artists face are what we face as computer scientists, because we don't face the physical limitations that, say, an engineer or a scientist would face.
We face the limitations artists face: limitations on our creativity, our cognitive abilities, how much we can remember at one time.
These are sort of the limitations or the challenges computer scientists face.
So I fell in love with that artistic side of it: taking a problem and trying to figure out how to solve it in a creative way that's elegant and efficient and so on.
So that was sort of my trajectory.
And then, with regard to my faith, you start to see that in some sense you need both faith and reason.
I think Pascal said that there are two equally dangerous extremes.
One is to admit nothing but reason; the other is to exclude reason and take everything on blind faith.
So we need both.
And I can see that in a lot of the mathematical theories and proofs we do.
We start with axioms.
That's faith.
We don't have proofs for those.
We take them on faith, but assuming those axioms are true, we can infer new propositions using reason.
You need both, right?
So as I came to learn more about my faith, I just saw how it dovetailed so well with everything we're learning, because ultimately everything's from God.
And did we discover computer science or did we invent it?
I like to think of it as we discovered it.
It's God's creation.
It's a property of the universe, as people like Alan Turing have discovered.
But yeah.
So that's great.
And would you say, I know you've written a number of books as well.
Would you just say a few words about those and what you've tried to do in your writing and teaching?
Yeah.
So I just published a book, Programming Languages: Concepts and Implementation.
It came out in January 2023.
It's a textbook.
I like to think of it as my magnum opus.
But it's a textbook for a course.
It's a fairly standard course in any computer science curriculum across the country.
And essentially, people think programming languages must be all tech-speak.
But it's very artistic in the sense that programming languages have a lot in common with natural languages.
There are concepts, right?
So there's syntax, there's semantics, there's conjugations, there's translation, right?
So programming languages have all those same concepts.
So what we do in this book is take all the core concepts that every language embodies, things like syntax, semantics, scope, and recursion.
We deconstruct them, study them in isolation, and then put them back together and see how they occur in various languages.
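As a toy illustration of a few of those concepts, syntax, semantics, and recursion, here is a minimal Python sketch (not from the book) that gives meaning to a tiny arithmetic syntax:

```python
# A minimal sketch: syntax is the shape of an expression, semantics is
# what it means, and recursion does the work. Expressions are nested
# tuples like ("+", 1, ("*", 2, 3)).
def evaluate(expr):
    """Give meaning (semantics) to a small arithmetic syntax."""
    if isinstance(expr, (int, float)):      # base case: a number means itself
        return expr
    op, left, right = expr                  # syntax: (operator, operand, operand)
    l, r = evaluate(left), evaluate(right)  # recursion on sub-expressions
    return l + r if op == "+" else l * r

print(evaluate(("+", 1, ("*", 2, 3))))  # -> 7
```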
Yeah, so we're going to take a break in a minute, but when we get back, I want to highlight that you recently gave a series of online courses, as part of the Ave Maria Pursuit of Wisdom series, on computer science and these sorts of topics.
But at the end, you talked a little bit about the artificial intelligence.
You said at some point there's a real danger in a certain sense that computers will bewitch us and that we will begin to see all intelligence as computational.
So almost that the real danger isn't that we will make computers human, but almost that we will begin to see humans as computers.
And I was really struck by that.
And I'd love to talk more about it as we get back right after the break.
Sure.
Thank you.
You're listening to the Catholic Theology Show, presented by Ave Maria University, and sponsored in part by Annunciation Circle.
Through their generous donations of $10 or more per month, Annunciation Circle members directly support the mission of AMU to be a fountainhead of renewal for the Church through our faculty, staff, students, and alumni.
To learn more, visit avemaria.edu/join.
Thank you for your continued support, and now, let's get back to the show.
Welcome back to the Catholic Theology Show.
I'm your host, Michael Dauphinais, and today we are joined by Saverio Perugini.
So thank you so much for being with us on the show.
Thank you, Michael.
And so, you've been a professor of computer science for some 20 years.
You've written a major book, Programming Languages: Concepts and Implementation.
And we're talking today about one of the key ways technology, computational theory, and processes are impacting society: certainly our conversations about artificial intelligence.
At the end of the courses you gave on computer science, you spoke a little bit about this book, Neil Postman's Technopoly: The Surrender of Culture to Technology.
Postman famously wrote Amusing Ourselves to Death and a few other books.
But this one was about the surrender of culture to technology.
And one of the things you identified there that really struck me is that all technological achievements also somehow come with costs.
Not only can they be used for good or for ill; they also just change life somehow.
So cars are good, they allow us to get around, but you also have worse accidents.
That's a cost.
And the problem sometimes with computers is we don't see the costs.
The costs are sometimes hard to see.
And of course, really with anything, often the costs are hard to see.
But you mentioned one of the costs is that as we trust more in artificial intelligence, we may trust less in human expertise.
And if that happens then, the ability of human expertise to really develop and see things could become lost.
You also mentioned the sense that as computers and their power may bewitch us, we might begin to see all even human intelligence as merely computational.
So talk a little bit about this.
What is it about artificial intelligence and just some of these things that are going on with technology?
How do they, at least as you see it, how are they kind of shaping perhaps negatively the way we look at human beings?
Correct.
So yeah, oh my gosh.
Yes, it's a huge thing.
Actually, that's one of my biggest fears with AI.
I mean, there are autonomous weapons systems; yes, those things are real concerns.
But the big thing is this: it's dehumanizing us.
So just to start from the top, so your listeners understand AI is a tool.
That's it.
It's just a tool.
That tool can be used for good.
It could be used for evil.
Any technology, it's not neutral.
It has an effect.
It's not neutral.
And Neil Postman talks about that.
Todd Moody wrote a beautiful book, Philosophy and Artificial Intelligence, that talks about that.
So Postman has a great analogy in his book that I love.
He starts the book with this analogy: we're sort of like a watchdog in a house. The burglar comes in and throws it some fresh meat, and the dog feasts on the meat.
Look how great this is while the burglar is looting the house.
We have these technologies, and they sort of seduce us because we see the benefits.
So we're sort of feasting on it.
And meanwhile, what's being lost?
The house is being looted right under our nose.
And then Moody talks about that as well.
There's so much good, right?
A human being, a human physician, cannot analyze a million or two billion mammograms, the example I used earlier.
But a computer can do that, right?
And a computer can find patterns.
Now, there are those problems you discussed, but what are some of the problems here, just even at a technical level?
Well, one is humans are error-prone, and humans are programming the neural net.
So therefore, the neural net is going to be error-prone.
And as the system gets bigger, it's more error-prone.
It's sort of proportional to the size, right?
So we can't always trust it, right?
We can't always trust it.
I've seen people with ChatGPT, even people here, certainly students, but even non-students where they'll get a response back, and they just accept it.
And I look at it and I see, that's wrong.
Did you see the code it just generated?
That's not correct.
So, you know, people have talked about this: we shouldn't use these tools unless we know how to solve, on our own, the problem it's solving for us.
Because then it's just aiding us, right?
But again, getting back to this idea of what is the cost.
So, think about anyone skilled; think of a golfer.
You ask Tiger Woods, how do you do that?
What do you do?
Sometimes experts at things like that, they themselves can't even describe it.
So, if they can't describe what they do, how is the machine ever going to embody that?
So, for instance, a physician looks at a mammogram and says, oh, I know this one, yeah, that's got the properties of it.
Well, describe that to me so I can model that.
I don't know.
It's my 20 years of experience in doing that.
I don't know how I do it.
I just do it, right?
So, again, like I was saying in the Pursuit of Wisdom series, the problem is that if we start to see intelligence primarily as computational, we will start to put more faith into that than into human expertise.
Not only that, but also maybe we'll stifle creativity and human expertise.
One thing I think about with computer science at least is, as I said earlier, AI is one aspect, one subfield of computer science, and we're starting to see that everyone's looking at every problem through an AI lens.
Well, what's that going to do to theoretical computer science or graphics or all these other subdisciplines?
Are they just going to go by the wayside?
So as we come to trust more in these autonomous things, by default we're putting less trust in human beings, and I think that's very dangerous. Think about it at a very, very simple level: a severely mentally handicapped person, if someone walked into the room, their mother or their father, could recognize them like that.
They could even learn to say words.
Computers have a very hard time recognizing images and recognizing speech.
So things we do at a blink of an eye, a computer has a hard time.
But think about this, playing a game of chess, a good game of chess, requires a high level of intelligence.
A severely mentally challenged person probably can't do that.
The computer can do that like nothing.
So there's this interesting inverse relationship between human intelligence and computer intelligence.
So we need the human intelligence as well.
So there's a book by Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other.
And in there, she describes in some sense that we are so ordered to social connection that we will attribute life and kind of meaning to things that mimic life and meaning.
So for instance, if you give children robot animals or pets or robot dolls to take care of, they will develop affections for them.
They will tell you it's not alive, but they will emotionally respond as if it were alive.
And she also says, of course, one of the difficulties is that when we then move to social online connection, we will think we're connected, but we're actually not connected because we're connected virtually.
So we have these as-if relationships with computer and as-if relationships with people.
The way I summarize it in some ways is we're so hardwired for connection that we will connect with hardwires.
And we're so hardwired for connection that we will connect through hardwires and therefore lose connection.
And what this means in part is that we will begin to develop relationships with these things that are mimicking human intelligence, but they're just not human and they're not intelligent.
And you mentioned the word blink as well.
Malcolm Gladwell had a book called Blink, and in there he gave the example of a recovered statue that they thought was by Michelangelo; they had all these experts come in and analyze it, and they thought it was real.
And then they got one person, though, who had spent his whole life in the field.
He came in and said, that's fake, in a blink.
He made that judgment.
And that's in a certain sense, that's the human ability to see and recognize things.
One of the fun, interesting things as well, which we were talking about a little during the break, and maybe you could say a word about it, is that this approach to artificial intelligence was itself a minority part of the field.
It wasn't getting funding.
One of the major players in it was largely thought to be crazy throughout much of the 80s and 90s.
And so if AI were in charge, AI wouldn't have developed.
Because we need these abilities for people to see things anew and then somehow begin to explore them.
Correct.
Yeah, so I'm not sure if I'm going to get the dates on this right, but it's back in the late 70s or early 80s.
So there's essentially two approaches to AI.
There's this symbolic approach and then there's this sort of connectionist approach.
And what we're seeing right now is sort of the connectionist approach is taking off.
But back then, in the late 70s and early 80s, it was the opposite.
The connectionist approach was seen as something that was never going to work, because you needed two things we didn't have at the time.
You didn't have massive amounts of data and you didn't have the computing power.
I mean, you could have neural nets on the order of a hundred to a thousand nodes.
Now we have billions of nodes.
The computing power can handle that now, whereas we didn't have that then.
So it required a lot of computing power.
Of course, back then in the late 70s and early 80s, the Internet was just being developed through the DoD.
We didn't have everybody being a content creator, posting.
We didn't have this corpus of data from which to mine.
And now we seem to have those two necessary ingredients and sort of the perfect environment.
So these things have taken off.
But back then it was, this is never going to work.
This is just kind of experimental stuff.
But, you know, thanks be to God they stuck with it.
And this is where I get to the idea of like faith versus reason.
They actually had faith that would work because all the science was saying, no, this is not really going to take off.
The symbolic approach is the better approach.
So yeah, there was something, I forget the term for it, an AI winter, where the funding had all dried up and people were working on their own dime to keep their research going.
And even apart from a faith-based understanding of the human person, from a purely scientific perspective, the great discoveries were often rejected by the mainstream; Einstein's early work was not accepted for a while.
Georges Lemaître's work, the Catholic priest who developed the Big Bang theory, was not accepted for, I think, about a decade or two, until we came up with new data.
And so many different times where there's a way that the status quo gets settled in and that it is somewhat overturned by somebody being able to see something.
And to a certain extent, what AI is doing is just seeing what everyone sees right now.
And that can then become the dominant view.
So I want to mention two things and just hear what your thoughts are about them.
So we'll just take one at a time.
So the first one is that I think what a lot of people don't realize is this contemporary trend to treat human beings like computers has a 400-year pedigree.
Okay.
Thomas Hobbes was an English political philosopher; sometimes people know of Thomas Hobbes from Calvin and Hobbes.
No, just kidding.
But there's a little bit of him in there.
Hobbes wrote in the 17th century about the Leviathan, and he wrote about the need for the state to basically bring human beings into order.
His idea was that human beings in the state of nature are the war of all against all, and then the Leviathan, which is kind of this artificial construction of the state, has to have unlimited power to bring everyone into safety.
That's the basic idea.
But this is what he says about reason in Leviathan.
He says, When a man reasons, he does nothing else but conceive a sum total from the addition of parcels or conceive a remainder from the subtraction of one sum from another.
And he goes on a little bit with these different things about parts and wholes.
So, in this time period, in the beginning of modernity, we begin to define reason not as the ability to see into reality, but merely the ability to add and subtract, the ability to compute.
So, could you say a little bit about this view of modernity, or at least of Hobbes, who was very influential on the West and certainly on the United States? What happens when we begin to think of human beings and our capacity for reason as merely computational?
Yeah, it's sort of a utilitarian perspective.
Gosh, what happens? That's a question for an anthropologist, maybe.
These questions about AI really are anthropology, truly, because they get into the nature of the human person.
I try to tell my students this all the time.
Some of them, I shouldn't say a lot of them, want to study computer science thinking, we're going to make lots of money, right?
And I think, well, no, that's not why I like it.
Like I just told you, it's artistic.
We should study these things for their own sake.
Yes, there's a side effect that we can make money, but it's this intrinsic thing, right?
So there's a mathematician by the name of Paul Lockhart.
And he wrote a book called A Mathematician's Lament.
And he says something like: you have to know how to read to fill out forms at the DMV.
That's certainly a utility of reading, but that's not why we teach children to read.
We teach them to read so they can access meaningful and beautiful ideas, right?
So, yeah, I mean, there are sort of direct benefits to certain things, but the more interesting benefits are the indirect benefits, the more eternal benefits.
So thinking about people as neurons that just compute a zero or a one is certainly not something I would recommend.
That's great.
Yeah.
And I think we can also see that it's not the computers and the technology that are fundamentally making us think this way; these are bad philosophical ideas that have been percolating for 300 to 400 years.
On the other hand, a person who's grown up interacting with the world, falling in love, worshiping God, will immediately say Hobbes is wrong.
But in a world where we only interact with computers and we only interact with other people through computers, we'll begin to think, wait a second, Hobbes is obviously right.
And I think that's part of the danger.
The second thing I want to think about is that you can look to Aquinas and the Bible a little bit on this notion of what speech is, and whether, in a way, computers can speak.
Well, they can mimic verbal production, but sounds are not speech, right?
Parrots have been mimicking speech in that sense for a long time, but they don't speak, right?
The external word, the verbum or logos, is a representation; it comes from the internal word, right?
The inner word inside of us, which is our judgment about the world.
This is what Aquinas teaches, and our judgments about the world, therefore, are also inherently, not only about facts, but also about values.
They're not only about what is true, but also what is good.
Aquinas will say, right, that we have speech, human beings speak, because we make judgments of right and wrong.
Animals can only make noises, because they express only what's pleasurable or not pleasurable.
And this goes all the way back up, of course, into the very creator, God, who creates the world through his speech in Genesis.
And in John, we learn that the word, the speech of God was with God from the very beginning, right?
The logos, the word, the verbum, which becomes flesh.
So now we learn that there was within God already inner speech, where God speaks himself perfectly from all eternity.
God speaks into the world, and it is; and then he speaks into human beings the ability to speak, which is the ability to have reason and to come to know God.
And because we lost our ability to speak the truth, and ended up in a world of lying and murder and all this brokenness, God then comes to restore it through a new speech, through the covenants, through the prophets with Israel, and then, of course, in Jesus Christ.
So in a certain sense this really does bring us back to the question: what is a human being?
Is a human being merely a mechanical process or is a human being a source of wonder?
And I think the more you look at human beings through computers, or look at computers, the more you forget that we made computers.
It's like even when we talk about a computer beating Garry Kasparov.
The computer didn't beat Garry Kasparov.
The programmer who programmed that beat Garry Kasparov.
It's like figuring out that if we get 20 Ave Maria faculty in a tug of war, we can beat Dwayne "The Rock" Johnson.
And if you get probably, I don't know, 20,000 programmers together and chess players together, you can create a program, and you can call it artificial intelligence in some ways, that will beat a chess master.
And that's amazing.
But what that shows is just how human beings really do have this capacity for thinking; yet we are so used to thinking about what is in front of our eyes that we forget what is behind our eyes.
And we forget to recognize that the deepest mystery of the human person is really our own consciousness, our awareness of ourselves, and our ability to think about the world in a creative manner that somehow mimics God's own creativity.
So could you say a little bit about trying to recover that sense of the human person, and maybe about what it is about computers that can almost bewitch us into loving the computer instead of the human being who made the computer? Which, ironically, is just replaying the history of Israel and idolatry: instead of worshiping the God who made us, we worship the work of our hands.
Yeah, I think it's in Ecclesiastes: there's nothing new under the sun, right?
One thing you said there triggered a thought: maybe one advantage of this proliferation of AI is that it has reignited the debate on what it means to be human.
So I have here the magazine from Loyola University Chicago, and there's a philosophy professor quoted in it.
The quote is: What do AI and VR, virtual reality, mean for human dignity?
How do they impact, or lead us to ask new questions about, what it means to be human?
There are lots of positives to AI, but I think this is one: it's reigniting the debate on what it means to be human, because we've lost that.
So, for instance, here at Ave Maria, we don't have it now, but we need to have a course in the computer science program on information ethics.
Let's talk about this stuff.
OK, so I think the reason why what you described happens is that computers are electrical.
Well, not all computers are electrical, but the predominant computer is electrical.
And we have electrical signals between the neurons in our brains.
So people see the analogy, but that's where the analogy stops.
A computer is a discrete system.
It can be made out of anything; it doesn't have to be electrical.
People have built computers out of Tinkertoys.
If you saw a box of Tinkertoys tumbling down the stairs, doing things, moving, would you say that's intelligence?
No.
When those Tinkertoys move, they're just changing discrete states.
But Edward Feser talks about this: because the computer is electrical, and we have electrical signals between the neurons in our heads, we somehow think they're the same.
No, because what the computer is doing is just carrying out an information process, and that information process can be carried out with Legos.
So again, it goes back to the thing about Mars: the crater on Mars is no more a sculpture than the computer is a brain.
So I think part of the problem is we take these analogies too far, and we don't even realize it's happening.
I mean, not to go down a rabbit hole, but Alan Turing's concept of a Turing machine is sort of the definition of computation, right?
Well, you know, an AI is not the definition of intelligence because people forget there's an A, artificial, right?
It's not human intelligence.
Let's not forget there's an A there, right?
So it's not what intelligence is.
So we have to be careful.
But I guess I'm just really encouraged that this renewed debate and interest in AI is reviving these questions of, fundamentally, what it means to be human.
What is intelligence?
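For readers curious what that formal model looks like, here is a minimal Python sketch of a Turing machine: a tape, a head, a state, and a transition table. This toy machine, whose details are invented for illustration, increments a binary number written on the tape.

```python
# A minimal sketch of a Turing machine. The transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(tape):
    delta = {
        ("right", "0"): ("0", +1, "right"),  # scan right to the end
        ("right", "1"): ("1", +1, "right"),
        ("right", "_"): ("_", -1, "carry"),  # hit blank: start carrying
        ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
        ("carry", "0"): ("1", -1, "done"),   # 0 + carry = 1, finished
        ("carry", "_"): ("1", -1, "done"),   # overflow: new leading 1
    }
    cells = list("_" + tape + "_")
    head, state = 1, "right"
    while state != "done":
        symbol, move, state = delta[(state, cells[head])]
        cells[head] = symbol
        head += move
    return "".join(cells).strip("_")

print(run_turing_machine("1011"))  # -> "1100" (11 + 1 = 12 in binary)
```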
I love that idea of thinking about artificial intelligence and virtual reality as a call to recover genuine intelligence and genuine reality.
And the one thing we do need to be aware of, though, is that today we are in a situation in which there's a trillion-dollar industry of advertisers and people trying to distract us.
And so we have to be aware.
We have our own tendencies away from reality and away from intelligence because of the wounds of sin.
But now we also have a very powerful industry that we need to become aware of, and we need to learn how to step back and engage with it when we choose to, not when it prompts us.
So this has been very helpful.
I really appreciate your walking us through some of the nuts and bolts of artificial intelligence, a little bit about computing theory, and a little bit about how it's shaping, or at least participating in the propagation of, a reduced view of the human person.
And so as we're going to use these tools, we need to do so very carefully with our intelligence and be able to do that.
So as we're closing, I'd love to ask you three questions that I like to ask my guests.
So what's a book you're reading?
What's a book I'm reading?
Oh my gosh, I'm always reading three or four or five at the same time.
Right now I'm reading a modern translation of Seneca's essay On Anger, which is interesting; it's not really a book, it's an essay.
I'm also reading, incrementally, Philosophy through Computer Science; and, I don't know how I could forget this, with a group of faculty here we have a book club, and we're going slowly through Plato's Republic.
We just did book five. Isn't it ten books?
So we're walking through that very slowly.
That's great, and I hope people who have listened can see what a wonderful place Ave Maria is, now with you here, for people to be able to study computer science and computing within the context of a sense of wonder at reality, within the overall liberal arts.
So that's such a gift.
So second question, what's a spiritual practice that you do on a regular basis to help you strengthen your faith and recover a sense of meaning and purpose in your life?
Oh, wow.
The simple answer to that is mental prayer.
I do 30 minutes of mental prayer every day.
A spiritual director of mine got me going with mental prayer about three or four years ago.
And it was like being introduced to a room in my house that I never knew existed, and it turned out to be the ballroom.
And I know sometimes people think that it's a very Protestant way to pray because we're not using scripted prayers, which we have in our Catholic tradition, very, very beautiful prayers.
But just having that one-on-one time with Christ in front of the tabernacle, using something to start a conversation, and then using the affections, you know, adoration, thanksgiving, reparation, and petition to sort of start a conversation.
Yeah, mental prayer, I don't know what I would do without it.
Finally, what's a belief you held about God that you later learned was false?
And what was the truth you discovered?
These are really great.
Just off the cuff, I guess I would say that when you're younger, you sort of think of God as a vending machine.
I put in my quarter, my prayer, and I get out what I want.
And that is sort of the third grade interpretation of God.
We can go many layers deeper into the onion.
So what I learned is that prayer is not so much asking for things.
It's conforming my will to his will.
And that's really what it is.
It's just about a relationship; it's not about constant supplication.
That's beautiful.
Well, thank you so much.
Again, we've had Professor Saverio Perugini on our show today, Professor of Computer Science at Ave Maria University, author of several books and many articles on computer science.
And we've been discussing the issues of artificial intelligence, how we can learn to understand it and perhaps respond to it more effectively, and how we can recover a great sense not only of the glory of God, but of the beauty of human intelligence and the willingness to face reality.
So thank you so much for being on our show.
Thank you for having me.
And listeners may wish to look at another episode we have, not so much on computing, but with an engineer who became a Dominican sister, on creation and the Big Bang; another one where science, technology, and faith meet.
So you may wish to do that.
And also, if you've enjoyed the show, I encourage you please to subscribe to us either on YouTube or on Apple podcasts or Spotify.
And we always appreciate it if you're willing to recommend us to a friend or family member.
Thank you so much for listening today.
Thank you so much for joining us for this podcast.
If you like this episode, please rate and review it on your favorite podcast app to help others find the show.
And if you want to take the next step, please consider joining our Annunciation Circle so we can continue to bring you more free content.
We'll see you next time on the Catholic Theology Show.