The AI Summit
Hello friends.
Evidently, it’s lecture season, because that’s what I’ve been doing. The following is a talk I gave on AI in the church, specifically oriented towards lifestyle considerations and the Christian imagination. I thought you’d like it. Note: I re-recorded this talk from my transcript to improve the audio. If it sounds like I’m sitting in a shed, not in front of an audience, that’s because I am.
…
In 1872 a British novelist named Sam Butler published a disturbing novel. It was called Erewhon, and it represented a disturbing decade.
The United States had just wrapped up a civil war, the Second Reich had just been founded in Germany, and in many ways Europe was busy setting up the chessboard for WWI. Also, there were the innovations: lightbulbs were improving, the second industrial revolution was underway, and the “distant voice” machine, or telephone, was imminent. Perhaps most disturbing of all, an Austrian physicist named Ludwig Boltzmann had just provided a statistical explanation for the second law of thermodynamics. Apparently, Boltzmann showed that the entropy of a thermodynamic system could be explained statistically, as a matter of probability, and we Westerners have been unable to see time the same way since.
And so Sam Butler had many concerns. Even so, one topped the list: He’d read On the Origin of Species. As a matter of fact Butler disliked Charles Darwin. But he was impressed by the theory of natural selection. And so he did what anyone does with a new theory (whether it’s Poststructuralism, the Enneagram, or James Clear’s four laws of habit formation): He applied it to everything.
In particular, Sam Butler noticed that industrial machines were outperforming humans at tasks they formerly monopolized, and so he wrote a novel in which a society destroys its machines before they can achieve consciousness and supplant humanity.
It was, he felt, the natural order of things.
Welcome, friends.
Thank you for showing up on a Saturday to hear a lecture on a topic that is guaranteed to provoke strong emotions. My primary hope for this morning is that Jesus would fill our discussion and that the Holy Spirit would fall. Let me outline the structure of our time together.
This morning we’ll have two sessions followed by two exercises. In the first session I’m going to introduce the conversation, explain why it matters, and propose a structure within which technology can be discussed in a productive way. In the second session we’ll unpack a positive vision for thriving in the face of certain challenges.
Before we get into it, allow me to begin with a reminder: You were made by love for love. Your destiny is to be brought into the Triune community of love, forever. Right now, you are Christ’s bride on the earth, overthrowing spiritual evil, restoring the human heart, and restoring the world. In a word, you are partnering with the Holy Spirit to establish the Kingdom of God. Your king, Jesus Christ, is the only fully human being. In him human nature has been revealed. His way is a road map to the human condition, which happens to be a wellspring of abundant life.
And so our lives are pointed in a certain direction. That direction is ultimately described by words like theosis and deification and sanctification, and those words mean we are becoming like Jesus Christ. Good writers put that telos in different ways. Andy Crouch says that we are called to become people of wisdom and courage. John Mark Comer says that we are being with Jesus, becoming like Jesus, and doing Jesus stuff. Saint Athanasius said that God became man that man might become God. The big thing is that as we become like Jesus we are more fully inhabiting our humanity as God intends it to be.
To that end, technology matters because the technologies we use shape the people we become.
That goes on Line One of your notetaking guide, if you happen to be using it. The technologies we use shape the people we become.
If you explore Christian history you’ll find that Christians have an extraordinary reputation for refusing certain technologies.
That should blow your hair back because as far as I can tell there are no cases in which a civilization refused a new technology. But Christians have. Let me give you several examples.
Our Roman Catholic brothers and sisters have refused artificial contraception. That means that a billion or so people have said “No” to a technology that changed the West, altered male-female relationships, changed our understanding of sexuality, bodies, families, and so on. They evaluated it and said “No. We don’t like what that will do to us.”
(Note to readers: one impressive observation from Roman Catholic theologians early on was that artificial contraception would be bad for women and end up dehumanizing humanity. If you’re curious about the former claim, you can’t do better than to read Louise Perry’s The Case Against the Sexual Revolution. If you’re curious about the latter claim, Jeff Shafer’s disturbing essay “Machine Antihumanism and the Inversion of Family Law” (available at the link for free) is an important read. If you’re curious about bodies in general and you haven’t read my go-to book on the theology of the body, read Men and Women Are from Eden.)
Another one. Anabaptist denominations refuse military technologies. Why? They don’t fight wars. So a hand grenade doesn’t solve a problem they have.
Another one. I realize I’m getting ahead of myself here when it comes to the intersection of technology and magic, but if you’d like an easy example of Christians refusing a technology you could look at the refusal of many Christians in the Roman Empire to make sacrifices within the Imperial Cult. We’ll come back to that one later.
Now, for reasons that we will soon see, individual technologies are less significant than the cultural currents that carry them along. That means that individual technologies, including particular AI systems, are less significant than the ideas, movements, and aspirations that shape them. For us, the large-scale result of those currents is the big issue, and the way we orient our lives in view of those currents is the real work.
But first let’s define our terms.
Part I: A Few Definitions
You’ve probably noticed that when people try to talk about technology—and AI in particular—the conversation quickly lapses into agnosticism. Express unease, and you’ll hear things like “Oh so you’re not going to use search engines or Amazon?” or “Oh, so you’re against Waze?” or “Oh so you don’t like red light sensors?”
Those rejoinders are actually a rhetorical technique, the appeal to ridicule, which happens to be a known fallacy. It pretends you cannot meaningfully discuss the subject in question.
But you can. So let’s do that.
In ancient Greece Aristotle did important work on technology. For him, the word “techne” meant “an art, a craft, or a skill,” and the process of applying scientific observations within arts, crafts, and skills was “technology.” In the end Aristotle called technology “reason concerned with production.”
Unfortunately, Aristotle’s foundational definition is limited. It is (somewhat ironically, if you know Aristotle) missing a telos, or direction. How do people decide where to apply scientific observations? You need an aspirational vision, of course, a way you’d like the world to be. Where does that come from?
I muddled through the Stanford Encyclopedia of Philosophy’s article on technology. It was full of interesting ideas but this one in particular is a gem: technology is “the totality of human endeavours to control their lives and their environments by interfering with the world in an instrumental way…Technology is an ongoing attempt to bring the world closer to the way one wishes it to be.”
Do you see that? “The way one wishes it to be.” In other words, you have to want something, and so most technologies are the result of one of these phrases: “Wouldn’t it be interesting if we could ____?” “Wouldn’t it be great if we didn’t have to ____?”
For that reason my favorite definition of technology comes from Andy Crouch who calls it “Science, plus a dream.” And the dream is a vision for the way life should be.
Here’s a working definition of technology: Science applied to align the world with a dream.
Now for Artificial Intelligence.
Artificial Intelligence is getting harder, not easier, to define because it is an umbrella term and many of the subjects it includes don’t have that much in common.
It’s always interesting to see how nations define concepts, and when it comes to the United States I found this one: Executive Order 13960.
EO 13960 is too long to quote in detail but the following elements are particularly relevant:
An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
A set of techniques, including machine learning, that is designed to approximate a cognitive task.
An artificial system designed to act rationally.
I’m drawn to those “human-like” parts because, though they’re a philosophical rat’s nest, they’re revealing. What kind of human are we talking about? More on that later.
As a matter of fact my favorite definition of an intelligence comes from Howard Gardner’s 1983 book Frames of Mind, which I was obliged to read in my academic days. Here it is: “the potential to process information that can be activated in a cultural setting to solve problems.”
What Gardner observed is that an intelligence has three parts: sensitivity to information, a way to process information, and a way to apply information.
So Artificial Intelligence refers most often to computer systems that can do those things, and a working definition of AI would sound like this: Artificial Intelligence refers to a technology that can sense, process, and apply information.
At this point let’s revisit our hypothetical conversation:
Person at coffee shop: “What do you think about AI?”
You: “Well, AI is an umbrella term for intelligent technologies, and that means computer systems that can sense, process, and apply information. It’s hard to talk about AI without evaluating individual cases, but I am concerned by the ideas that are shaping the AI systems we have.”
Person at coffee shop: “Um…what?”
You: “Well, AI systems are technologies, and technologies, according to Aristotle, are science applied towards some goal. Another writer, Andy Crouch, calls technology ‘science plus a dream.’ What that means is that our vision of the good life shapes the technologies we create. I don’t agree with what passes for a vision of the good life in our cultural moment.”
Person at coffee shop: “Um…what vision of a good life?”
Part II: Dissecting the Dream
The problem is the dream has mutated.
For several centuries the dream of the West was represented by this ideology called Liberalism. Liberalism viewed humans as essentially rational creatures, or just portable intellects. You can think of René Descartes saying “I think, therefore I am.”
For reasons that exceed the boundaries of this lecture, liberalism had a hard time defining the purpose of a person. But in the end its architects landed on freedom. Individuals were made to be free, and freedom meant freedom to, not freedom from.
I hope you’ve heard some of that before because Christians have learned how to engage Liberalism and the classics—Patrick Deneen’s Why Liberalism Failed, Charles Taylor’s The Ethics of Authenticity and A Secular Age, Christian Smith’s Soul Searching and To Flourish or Destruct—are out there.
Liberalism isn’t the only ideology guiding the West and it’s certainly not the ideology guiding technological innovation.
It's in the background, sure. But it’s taken a sinister turn.
You’ll notice, for example, that the technologies we’re developing aren’t helping people do what they want; they are controlling people. They don’t view people as rational either (because they almost never dialogue with the mind), and they’re pessimistic about the powers of the intellect. Worse, they don’t care about people as such: They address the parts or powers of the person, like the heart rate, the output, the query, and the trend, and they support the higher-order parts of civilization, like society, politics, and the economy.
That’s not accidental.
The technologies we have don’t relate to persons as such because the ideas that are shaping them don’t recognize persons as such.
If you’d like to learn more about those ideologies you can listen to this podcast from our friends at Rebuilders. Since that resource exists, I’ll simply outline the forces we’re grappling with in terms of five impulses. These are instinctive responses to reality, and they are increasingly present in our context.
Are you ready? Five impulses, and then a coffee break. Here’s Number One:
1. Bodies
For a textbook example, you can’t do better than Marc Andreessen’s definition of Reality Privilege (though Yuval Noah Harari’s Dataism definitely vies for first place).
Marc was one of the architects of the early web, and he argued that the real world was not a fact but a privilege. Look:
Consider the possibility that a visceral defense of the physical, and an accompanying dismissal of the virtual as inferior or escapist, is a result of superuser privileges. A small percent of people live in a real-world environment that is rich, even overflowing, with glorious substance, beautiful settings, plentiful stimulation, and many fascinating people to talk to, and to work with, and to date. These are also *all* of the people who get to ask probing questions like yours. Everyone else, the vast majority of humanity, lacks Reality Privilege—their online world is, or will be, immeasurably richer and more fulfilling than most of the physical and social environment around them in the quote-unquote real world.
The “quote-unquote real world”?
That’s an easy example but it’s not the only example. When Hartmut Neven unveiled Google’s Willow chip, he dropped this humdinger:
When I founded Google Quantum AI in 2012, the vision was to build a useful, large-scale quantum computer that could harness quantum mechanics — the ‘operating system’ of nature to the extent we know it today — to benefit society by advancing scientific discovery, developing helpful applications, and tackling some of society's greatest challenges.
Note that Neven is interested in “society,” not “people,” but the big idea here is to break nature down into its composite parts and then re-engineer it however you like.
Because, of course, the current state of things is an arbitrary arrangement.
This is more than a desire to master nature; it is a distaste for nature. Where’s it come from?
Well, if you look at developmental psychology you’ll learn that humans go through these repetitive cycles when it comes to expressing a will: hatching, practicing, and reconciling. In hatching, we realize we have a will. In practicing, we try to do our will everywhere. In reconciling, we realize we can’t, and we come back into relationship with other people. That requires voluntary submission: we let some of our expectations go. Intriguingly, it only happens because the real world pushes back on our idealism. In other words, we want relationship, and we want community, but we will only choose it if there’s no other option.
Unfortunately, emerging technologies are trying to get around the reconciling stage by subverting the physical world’s ability to limit our agency.
2. Relationship
What Saint John Paul II identified in his work on the theology of the body is that our bodies disclose theological realities. In other words, our bodies tell us truths about God and ourselves, and the body’s main lesson may be that we are made for relationship. In Genesis 1, God creates “humanity,” singular, in a duality, “male and female.” And so we only fully experience and display the gift of our humanity in relationship.
The problem, of course, is that selves in relation are selves in submission. We become servants, not masters, and gifts, not commodities.
Also, people are awkward. They subvert our will and hurt our feelings. Significantly, they limit the expression of our powers.
Our culture doesn’t like that.
Think of the automatic checkout line at the grocery store. The automatic checkout line does not treat a machine as a better version of a person but as a preferable alternative to a person. Because if life is about getting what you want, wouldn’t you rather interact with entities who will only serve you? ChatGPT doesn’t care if 99% of your queries are immoral, and deepfake apps won’t block your use when it is sinful. For the most part, they’ll do what you want.
This, friends, is not a positive human vision, in which other people are divine image bearers through whom we can render service to Christ.
Plus, there’s a catch: relationship is the way blessing is transmitted in God’s economy.
This fact stems from the Trinity, a God who exists as a community of self-offering love. God transmits blessing in His very nature through relationship, and He wants to transmit blessing to his creation through relationship.
But people often fail. And so the AI tools we’re getting represent a bid to get blessing without relationship.
3. Limits
Bonhoeffer defined sin as “the passionate hatred of any limit.” Nietzsche, in contrast, defined joy as “the feeling of your power increasing.” In Nietzsche’s influential scheme it is the destiny of the will to transgress all limits and you see that idea driving technological innovation. Get more done in less time! Be anywhere! Never mind why. Limits are arbitrary, not divine, and a curse, not a blessing.
For example: Grocery stores have this limiting technology called the cart. The cart tells you how much to buy, and at Costco it is twice the size it is at King Soopers. Amazon, however, has no limit. You can put as much in your cart as you like (though it does limit orders).
Likewise, on Google Calendar there is no limit to the number of events you can put in the same time slot. On most social media apps, there is no limit to how far you can scroll.
There are other examples but you see the point: no limits.
Intriguingly, Liberalism did have a limit, articulated by John Stuart Mill: harm. You couldn’t transgress other people’s will. In that context grievance became the dominant, and in fact sole, currency.
Our moment gets around Mill’s limit by getting around other people; it gets around other people by breaking them down into the smallest imaginable units and building them back the way we want. Which is also why our culture is anti-bodies.
4. Work
If you were to take Economics 101 you would learn that as societies become more productive, people work more, not less. There are many reasons for that, but four in particular are relevant here.
One is that as productivity increases, the value of completing any one task goes down. So you have to do more.
Two is that as products become more widely available our standards of living expand. So we need to earn more.
Three is that wealth makes people individualistic as a matter of instinct. This step is particularly insidious because we don’t feel it happening. But really: If you have your own car, why not have your own circular saw? Why not pay someone to change your oil instead of having your brother-in-law come over and spend the entire afternoon doing it together? And so every household has to do the work of an entire community. That’s a lot of work.
Four is that a cultural current emerges that both enables and requires continuous expansion. We can get kind of swept along.
What does this do? It exchanges work for toil.
We can define work as labor that requires effort and skill and produces fruit.
Toil is labor that requires effort but not skill and is usually not fruitful.
A good example of work is building a table or crafting an essay or planting a garden. A good example of toil is sending emails or paying bills or asking Chat questions.
Now here is the thing: When the only thing that matters is productivity, work and toil get lumped together. But work is usually not efficient. And so technology can actually create an aversion to work and a preference for toil, even though it makes us hate our lives.
One more.
5. Wisdom
Our cultural current is anti-wisdom and pro-information.
It’s anti-wisdom for many reasons—wisdom is hard, wisdom is slow, wisdom is incarnate—but I think the main reason our culture is anti-wisdom is because wisdom is hierarchical: Wisdom assumes God made the universe and knows what to do with it.
Information, in contrast, is purely egalitarian: Anyone can use a search engine, and anyone can ask Chat a question.
The problem is that the universe doesn’t have merely a factual structure (in which things are either true or false); it has a moral structure (in which things are either right or wrong).
Getting bigger, the world also has a redemptive structure. Within that structure, wisdom describes the ability to perceive what is consistent with God’s will at a particular place in the narrative arc.
Getting bigger again, the universe also has a divine structure. Within that structure, wisdom is knowing how to perceive God’s emotional and relational presence inclined towards His creation at a particular time.
Information is the result of observation; wisdom is the result of contemplation. Information is open to everyone; wisdom is acquired over time, through hard work, in relationship, within divine limits. No wonder we don’t like it.
How are we doing? Awake?
That’s all five.
Put them together, and you have a current that sweeps us along. And of course the big question is “What do we do with all this?”
Well, the big thing is how we orient our lives. That is the substance of the next session. We orient our lives to be pro bodies, relationships, limits, work, and wisdom.
But another thing we do is evaluate the technologies we accept.
Can I name an awkward reality? Most of us are narcissistic about technology. We assume we use things however we want to, often in spite of how they’re designed.
That is silly and dangerous.
Credit cards, for example, are designed to put people in debt. By definition, if we use a credit card, we are in debt. But many people say “I don’t use them that way,” as if their will were enough to repudiate the design of the credit card.
It’s not.
In general, technologies do the thing they were designed to do. Guns tend to shoot the thing—animal, person, or target—they were designed to shoot. Social media tends to make addicts. More roads always make more traffic, and project management software makes people complete more tasks.
I’ve heard more than one good scholar call technology a mirror, and I simply do not agree. Technology embodies the values of a culture. But it shapes individuals. In complexity theory, that’s called being upwardly emergent and downwardly causal. Technology does not show you yourself. It shapes you.
Here is a good rule for technology:
You should only use technologies that help you become the person you want to become; you should evaluate technologies by asking what they actually do.
To that end a list of questions is helpful. This one is from the technology writer LM Sacasas. It’s 41 questions long. To be honest, I’ve never answered them all. I read through it, slowly, and see what questions jump out.
This is an experiment. Your goal is to allow the Holy Spirit to call to mind a technology you employ, then to evaluate that technology and see how it feels. So let’s do that.
…
This essentially concluded Session One. I’ll be back with Session Two, which explores positive commitments for thriving. Here’s Sacasas’ list:
1. What sort of person will the use of this technology make of me?
2. What habits will the use of this technology instill?
3. How will the use of this technology affect my experience of time?
4. How will the use of this technology affect my experience of place?
5. How will the use of this technology affect how I relate to other people?
6. How will the use of this technology affect how I relate to the world around me?
7. What practices will the use of this technology cultivate?
8. What practices will the use of this technology displace?
9. What will the use of this technology encourage me to notice?
10. What will the use of this technology encourage me to ignore?
11. What was required of other human beings so that I might be able to use this technology?
12. What was required of other creatures so that I might be able to use this technology?
13. What was required of the earth so that I might be able to use this technology?
14. Does the use of this technology bring me joy?
15. Does the use of this technology arouse anxiety?
16. How does this technology empower me? At whose expense?
17. What feelings does the use of this technology generate in me toward others?
18. Can I imagine living without this technology? Why, or why not?
19. How does this technology encourage me to allocate my time?
20. Could the resources used to acquire and use this technology be better deployed?
21. Does this technology automate or outsource labor or responsibilities that are morally essential?
22. What desires does the use of this technology generate?
23. What desires does the use of this technology dissipate?
24. What possibilities for action does this technology present? Is it good that these actions are now possible?
25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
26. How does the use of this technology shape my vision of a good life?
27. What limits does the use of this technology impose upon me?
28. What limits does my use of this technology impose upon others?
29. What does my use of this technology require of others who would (or must) interact with me?
30. What assumptions about the world does the use of this technology tacitly encourage?
31. What knowledge has the use of this technology disclosed to me about myself?
32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
35. Does my use of this technology encourage me to view others as a means to an end?
36. Does using this technology require me to think more or less?
37. What would the world be like if everyone used this technology exactly as I use it?
38. What risks will my use of this technology entail for others? Have they consented?
39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?