Peeking Inside the Black Box: Demystifying Artificial Intelligence
I get a lot of questions from Christian teachers lately that sound something like this: “What are we supposed to do with AI in schools? Is this the end of education as we know it?” These are fair questions! AI technology feels mysterious, and maybe even threatening. It can seem almost… magical… so it’s no wonder it causes us to ask big questions. Big questions demand thoughtful answers. For us, as Christian educators, I think this means we need to cultivate an imagination big enough to understand what AI is, what it isn’t, and how we might use it in ways that glorify God and affirm what it means to be truly human.
Let’s poke around inside the black box and see what’s going on in there. I’m hoping we can demystify what is happening when you type a prompt into your favorite AI chatbot and better understand the possibilities and limitations of these powerful tools.
What Do We Mean by “Intelligence”?
A friend of mine recently quipped, “Everyone’s worried about artificial intelligence, but I’m more concerned about ‘actual stupidity.’” That’s a clever line, but it cuts deep: before we panic about machine intelligence, perhaps we need to re-examine what we mean by intelligence in the first place, and especially the ways we apply that word to machines.
As computers have become more powerful year after year, it’s tempting to picture them as actually thinking, or having a will of their own. But so far as I can tell, machines and human minds function in very different ways. Human ingenuity is part of how God created us to be—reflections of the Creator showing up as we create things ourselves. But human creations can never rival those of the Creator.
I think this is perhaps the real concern many of us have when we talk about “intelligence” as it is applied to machines: “Just what does it mean to be human, if the machines are ‘intelligent’ as well?” And I think that’s a really important question.
Human beings are “heart-soul-mind-strength complexes designed for love,” as Andy Crouch puts it in his wonderful book, “The Life We’re Looking For.” (I think this book should be required reading for anyone looking for a distinctively Christian way of navigating our high-tech world.) That’s intelligence of a very different kind—relational, embodied, creative, and spiritual. Machines don’t love. And they certainly don’t “think”—at least, not in the way we humans do.
Let’s take a minute to learn a bit about how computers—even very powerful computers that run the software behind AI chatbots—actually work. I hope this will help you to marvel a bit, both at the way human innovation has played out in creating these amazing machines and at the difference between human thinking and the calculations that power computers’ data analysis functions.
How Do Computers Actually Work?
At its core, a computer does three things: it takes in data (input), processes it, and produces something (output). This is what computer scientists call the “black box model.” We give the machine something to compute, it does some internal math, and we get a result.
But here’s the kicker: computers are efficient at calculation, but they don’t understand meaning; they only follow the instructions humans give them. If you give a human a sentence with the vowels removed—“Ths sntnc s mssng ll th vwls”—they’ll probably figure it out using context and experience. But computers can’t “fill in the gaps” unless we tell them exactly how. Computers… compute! They process data; they don’t interpret.
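To make that contrast concrete, here is a tiny Python sketch of my own (an illustration, not anything inside a real chatbot). Stripping the vowels takes exactly one precise rule, which a computer executes flawlessly; but no rule this simple can put them back, because the program has none of the context or experience a human reader brings.

```python
# One precise instruction: keep every character that isn't a vowel.
def strip_vowels(text):
    return "".join(ch for ch in text if ch.lower() not in "aeiou")

print(strip_vowels("This sentence is missing all the vowels"))
# prints: Ths sntnc s mssng ll th vwls
```

Going the other direction, from “Ths sntnc” back to “This sentence,” would require us to spell out every rule of English spelling and context in advance. The computer processes exactly what we tell it to, and nothing more.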
This is one of the big misconceptions about AI: we might imagine that the machine is “thinking.” It’s not. It’s calculating. When it comes to AI chatbots, those calculations are usually based on one thing: probability.
Machine Learning and Word Guessing Games: Large Language Models
When people talk about machine learning, they often imagine that computers are absorbing knowledge like humans do—through experience and reflection. But what’s actually happening is a giant game of probability. The AI is trained using huge datasets: “A Ford Mustang is a car. A river is not a car. A Toyota Camry is a car. A giraffe is not a car. A Volkswagen Beetle is a car. An actual beetle is not a car.” Over time, the system “learns” to guess what category a new item falls into based on how similar it is to past examples.
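Here is a toy version of that guessing game in Python. The labeled examples come from the paragraph above; the “similarity” rule (shared words) is invented for illustration, and real systems use far more sophisticated math over far more data:

```python
# Toy "is it a car?" guesser trained on a handful of labeled examples.
training = {
    "ford mustang": True,
    "toyota camry": True,
    "volkswagen beetle": True,
    "river": False,
    "giraffe": False,
    "actual beetle": False,
}

def guess_is_car(item):
    # Guess by similarity: find the known example that shares the most
    # words with the new item, and copy its label.
    item_words = set(item.split())
    best_label, best_overlap = False, 0
    for example, label in training.items():
        overlap = len(item_words & set(example.split()))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

print(guess_is_car("ford focus"))  # True: shares "ford" with a known car
print(guess_is_car("giraffe"))     # False: matches a known non-car
```

Notice that the program never knows what a car *is*. Ask it about “beetle” by itself and it can only guess from whichever labeled example it happens to resemble—which is exactly the point: it is pattern-matching, not understanding.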
This is why your AI chatbot seems proficient at writing essays or lesson plans. It’s not because it understands you. It’s because it’s played this guessing game—at internet scale—millions and millions of times before. Ask it, “The dog sleeps in her ___,” and it will calculate that “bed” is statistically the best choice, not because it knows anything about dogs or beds, but because that’s what the math says, based on the data the chatbot was trained on.
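That calculation is, at heart, counting. This Python sketch fills in the blank the same way, using a made-up four-sentence “training set” (real models train on billions of words, but the idea is the same):

```python
from collections import Counter

# A made-up, four-sentence "training set" for illustration.
corpus = [
    "the dog sleeps in her bed",
    "the dog sleeps in her bed",
    "the dog sleeps in her crate",
    "the cat sleeps in her bed",
]

# Count every word that follows the context "sleeps in her".
context = ["sleeps", "in", "her"]
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 3):
        if words[i:i + 3] == context:
            counts[words[i + 3]] += 1

print(counts)                       # Counter({'bed': 3, 'crate': 1})
print(counts.most_common(1)[0][0])  # "bed" wins: statistically the best choice
```

“Bed” wins not because the program knows anything about dogs or sleeping, but because three of the four training sentences say so.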
Chatbots like ChatGPT are built on something called a large language model (LLM), which is basically a giant probability engine trained on massive amounts of text: all of Wikipedia, all of Project Gutenberg, and huge swaths of the internet. When you type in a prompt, the program uses a neural network, shaped by all that training text (not a database it looks answers up in), to predict the next most likely word… and then the next… and then the next… until it finishes the response.
It’s doing this based on patterns, not insight. But the patterns are sophisticated enough that the results can feel uncanny—like magic. But it’s not magic. Fundamentally, an LLM is just astonishingly good at autocomplete.
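If you’re curious what “astonishingly good autocomplete” looks like in miniature, here is a toy word-by-word generator in Python. The lookup table is invented for illustration; a real LLM stores nothing like it, instead learning billions of numerical parameters that play the same role at internet scale:

```python
import random

# A tiny "what word comes next?" table, invented for illustration.
next_word = {
    "the": ["dog", "dog", "cat"],   # "dog" listed twice: twice as likely
    "dog": ["sleeps"],
    "cat": ["sleeps"],
    "sleeps": ["in"],
    "in": ["her"],
    "her": ["bed", "bed", "crate"],
}

def autocomplete(word, max_words=5):
    out = [word]
    for _ in range(max_words):
        options = next_word.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample a likely next word
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the dog sleeps in her bed"
```

Each word is chosen only by looking at the word before it and rolling weighted dice. Nothing in the loop “understands” dogs or beds; chain that trick across billions of learned patterns and you get a chatbot.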
Toward a Christian Imagination for AI
So, what does all this mean for Christian educators?
First, it means we shouldn’t be mystified or terrified. AI is powerful, but it’s not magic—it’s math.
Second, it’s also important to remember that these chatbots are not morally neutral. They are designed by people and shaped by human values; human sinfulness winds up warping the LLM, no matter how carefully the programmers build it.
Third, we have to ask some important theological questions: What kind of world are we helping to shape with these tools? Are we affirming human dignity? Are we working toward justice, truth, and restoration? These kinds of questions can help us apply a Christian imagination to when and how we actually implement AI in our classrooms.
Teachers aren’t going to be replaced by AI anytime soon, but a teacher who can be replaced by a machine probably should be! AI can be a tool for good—if we use it with care and recognize its limits. No machine, no matter how powerful, can recognize the image of God in the students we serve. What a privilege we have: opportunities to disciple them!
About the Author
Dr. Dave Mulder | August 26, 2025