I had this post almost ready to go for this morning but didn’t get around to posting it… which is good, because then I had an interesting experience with AI (I think) in class today. Lately I’ve been starting my classes with a review problem or two. Today I thought, let me grab a few practice problems for the mathematics section of the CLT, and we’ll try those. So before class I did a web search for something like “CLT mathematics practice questions”, got directed to a website I will not link that had a bunch of practice problems, and copied three of them for us to try.
But of course, after I copied them, I took a closer look and realized… there are three “tells” in these problems that suggest they were written by AI. Here are the three problems:
If x-y=3 and a+b=4, what is the value of (3x+3y)(5a+5b)?
If triangle ABC is comparable to triangle XYZ, and cos(A) = 0.98361846184, and sin(A) = 0.19018472619, what is tan(X)?
How many numbers between 60 and 130 inclusive satisfy both, the first digit of the number is even, and, the number is prime?
There is nothing wrong with question #3; I include it for completeness. But #1, in fact, has no solution (despite the fact that it was offered as a multiple-choice problem with four numerical choices). There isn’t enough information given to solve the problem. If it had said x+y=3, you could do it easily, but that isn’t what it said. It’s easy to believe an AI, trying to make a “similar” problem, changed a “+” to a “-” without realizing it had just made the problem impossible.
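To spell out why #1 is underdetermined, factor the expression:

$$(3x+3y)(5a+5b) = 15(x+y)(a+b) = 15 \cdot 4 \cdot (x+y) = 60(x+y)$$

The answer depends entirely on x+y, and knowing x-y=3 tells you nothing about x+y, so no single numerical answer exists.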
There are two issues with #2. First, just the fact of offering input parameters to the 11th decimal place does not seem like the sort of decision a human would make (it could have been a human, but it’s odd). And then, the text says the triangles are “comparable” - the good, precise math word clearly intended here is “similar”. To me, this looks like an LLM replacing a word with a synonym, not realizing it had left the world of precise mathematical language and entered the world of casual English.
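For what it’s worth, if you read “comparable” as “similar” (so that angle X corresponds to angle A), the intended solution is a one-liner, taking the given values at face value:

$$\tan(X) = \tan(A) = \frac{\sin(A)}{\cos(A)} = \frac{0.19018472619}{0.98361846184} \approx 0.1934$$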
To me, it looks like someone took an official CLT practice test, fed the whole thing into an AI with instructions to “make me a bunch more problems like this”, and then never examined the results with human eyes to make sure they were OK. But it made for an interesting teaching moment today, because not only did we do the intended math, we also talked about why you might think an AI wrote these and how to fix the errors within the problems.
The point of this post was not supposed to be the merely pragmatic “be careful using AI in education, because it makes some really silly mistakes sometimes”. But it is, actually, also true that you should be careful using AI in education because it makes some really silly mistakes sometimes. You hear people share this dream that AI is going to mean every student gets a free individualized tutor. That may happen; I cannot know the future. But we aren’t there yet.
How AI is perceived by the online Right
To speak more generally now: watching the online Right, it has been interesting to see at least two perspectives on AI develop.
Perspective one says, AI is a powerful tool we can use to improve our productivity, magnify our power, and reform broken systems, and we should absolutely grab it. Elon Musk (if we can call him a member of the online Right… probably fits these days), the New Founding people, and others are in this camp. That link goes to an interview clip about using AI to bypass bureaucracies, because a small team can now analyze a huge amount of information, we don’t require bureaucrats to filter it for us anymore. Speaking of Musk, when I personally use AI, I’m almost always using Grok, which is Musk’s / xAI’s project.
The attitude of these folks reminds me of the attitude of the Right toward the internet circa 1995-2000. Back then, mass media (think television) was largely controlled by the Left, and the internet was a way for the Right to make its voice louder (quite successfully). Today, could you live life not knowing how to use the internet, or not knowing how to use a computer? Yes, and a few people do, but you would lack certain powers other people have. (And so much the better, some of you might say.) AI, they would probably say, is the next thing that will be like that. Get good at using it or you’re going to lack capabilities all your neighbors will have, you’re going to be less employable in the market economy, and so on.
To say one critical thing of this crowd (there are many good people in it, to be clear), and this also goes for the way Musk / DOGE are treating government employees right now: I think this perspective can lead to a sort of worship of efficiency that is actually not good. Nobody would want to work for an employer for 20 years and then, when someone new applies for the job and (let’s just say) it is somehow known the new guy would be 10% more productive, be immediately fired and replaced. I am concerned that a mindset is developing on the Right that would agree with that move: nothing matters to a business (or a government) more than maximized productivity, and using AI is part of that. We’ve got this history of love of capitalism + Protestant Work Ethic that can drive us to a place where nothing but efficiency matters, if we aren’t careful.
Perspective two, meanwhile, says that technology has already diminished our humanity in many ways, and therefore, whatever productivity gains might be achieved, we don’t need another, more powerful demon.
Where you land here might depend somewhat on whether you perceive AI to be more like a chainsaw, say - a tool that lets humans do good things we could do without the tool, but it would be a heck of a lot harder - or whether you perceive AI more as a human-replacer than a human-helper, which soon enough we shall be serving instead of it serving us. (For you Star Trek fans, you know the classic question about whether Data is a toaster or a person - Star Trek saw it all decades ahead of time.) “Humans now serve the tool” doesn’t have to be anything apocalyptic, by the way; it could be something as simple as: you apply for a job, some algorithm reads your resumé and makes a decision, and no human ever even sees it. Behold, how wonderfully efficient, we replaced the whole HR department! That literally is the dream some people have (having worked with some HR departments, I get it). But who is serving whom, now? No chainsaw ever rejected a resumé.
AI in education
Last November, S.A. Dance had an article in First Things called “Teachers Against the Robots.” I read it with students last week, actually, and they found it quite interesting.
Technological advances often force upon us questions like, “Why are we even doing this thing? Should we keep doing it now that there is a technological workaround?” And so: what is the point of education, anyway? Dance argues that what many schools teach students today is that the point of education is what they can produce, and that the school’s job is to help them produce more and better. This might be especially obvious in the case of a school focused on job training. But even a school that would say, no, we care about the inward life of students, can still effectively give students the impression that the point of education is what they can produce, because we don’t have direct access to their inward life. So how do we judge them? Based on… what they produce (take this exam, write this essay, and so on). There is a sort of Illichian point here: the rituals of your school are going to teach certain lessons very well, lessons you may not even agree with.
If that is your view of education, and AI can produce better than you can, well now that really is an existential concern for you. And so Dance gives the example of Harvard telling high school students in its summer program that they can use AI to come up with topic ideas for essays, and even to write an outline of their essay, but by golly not to actually write the essay. But… why not? Why can you outsource the first two steps to software, but not the third? Well, it’s not particularly easy to answer that question.
If that’s wrong, then what is the real point of education? Drawing from Josef Pieper, Dance suggests that creating a soul that can properly understand itself in relation to the cosmos is our goal here.
Education, though, is not labor; it is a spiritual pursuit. Its goal is to cultivate and nourish our interior lives. As a uniquely human endeavor, it should refine our capacities to think rationally, contemplate reality, appreciate beauty, and feel gratitude. To truly cultivate these capacities, schools must safeguard and encourage what Josef Pieper calls “leisure.”
…
The man at leisure “remains capable of taking in the world as a whole, and thereby to realize himself as a being who is oriented toward the whole of existence.” Achieving this condition of the soul requires true knowledge: knowledge of oneself in relationship to the cosmos. This knowledge is truly “free” for it is good in itself, the highest and most noble thing one can know, and cannot be subordinated to some other end. It is the fruit of the “liberal” arts.
Fostering this condition of the soul is education’s sole concern.
I invite you to read the rest. But the big point would be, if that is your goal, and not production, not efficiency, not entertainment or anything like that, then we should at least be cautious about any technology that can war against the interior life and so dehumanize us.
How I have used AI
And that is why I burned my computer this evening.
No. As I alluded to earlier, something I have told students many times, and which sort of summarizes my thinking here, is: for ANY tool, does that tool serve you, or do you serve the tool? The danger, of course, is that every tool begins as our servant; humans have been creating tools to help us as long as there have been humans. But a tool needn’t be as complex as AI to dominate you - it can be quite simple, actually. I have seen students so distracted by a simple calculator (I mean just numbers and the four basic operations!) that they don’t notice the lecture has resumed, for example. If that happens, is the tool serving you, or are you now serving the tool? Who is actually in charge right now?
For the curious, I’ve only used AI a little bit for teaching purposes, as follows, ordered from less controversial to more controversial.
I have used AI as a random number generator. For example, I fairly often have students sit in random places: “here is a list of ten students, put them in a random order”. That’s pretty safe, at least.
I have used AI as a slightly smarter random number generator. So, for example, maybe we’re doing a drill on unit conversions, and I just need a bunch of fake measurements to build the drill. Of course I could make up fake measurements myself, but it is very easy in this situation to give an AI a list of what units I want employed and a range of reasonable numbers: go make me twenty fake measurements. (There is a lesson here about the need to give the software intelligent restrictions to work within.)
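Neither of those first two uses strictly requires a language model, of course. Here is a minimal sketch of both jobs in ordinary code - the student names, units, and ranges are made up for illustration:

```python
import random

# Use 1: put a list of ten students in a random order.
students = ["Ann", "Ben", "Cal", "Dee", "Eli", "Fay", "Gus", "Ida", "Jo", "Kit"]
random.shuffle(students)
print(students)

# Use 2: generate twenty fake measurements within "intelligent restrictions" -
# only these units, only these ranges.
units = {"cm": (1, 500), "kg": (0.1, 50.0), "mL": (5, 1000)}
for _ in range(20):
    unit, (low, high) = random.choice(list(units.items()))
    print(f"{random.uniform(low, high):.1f} {unit}")
```

The appeal of the AI version is just that you can state those restrictions in plain English instead of writing the loop.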
I have used AI as a more user-friendly calculator. One thing I find interesting about AI as a technology is that, in part, it’s doing things we’ve had the software to do for ten years or longer (language translation, mathematical manipulations), but it accepts requests in plain English, which may make it simply easier to work with. So, for example, we recently were solving for the currents in a circuit using Kirchhoff’s Laws, which some of you will know. I had just made up the circuit off the top of my head, so it was kind of a bear: we wrote out all the junction rule and loop rule equations as a class, but we had six unknown currents, which would have meant a 6x6 matrix to solve. And so then we reached a point you reach fairly often when teaching science or mathematics - we could do this by hand, but it would take a long time, and we’d just be following an algorithm anyway; is that really worthwhile, or should we pull out the tool now? And this actually does get a little bit into, if not “the point of education” generally, at least “the point of this lesson”. The immediate goal at the moment is to learn circuit analysis, not to solve giant matrices, and time is finite. (Or, for you physics people, “well, the physics is done, now it’s just math” - we all say that sometimes.)
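For the curious, here is the shape of the computation we had on our hands - a minimal sketch with invented coefficients (not our actual circuit), doing the same job Grok and the graphing calculator were about to do:

```python
import numpy as np

# Solve Ax = b, where x holds the six unknown currents (in amps).
# Rows come from junction-rule (current conservation) and loop-rule
# (voltage sum) equations; all of these numbers are made up.
A = np.array([
    [ 1, -1, -1,  0,  0,  0],   # junction: I1 = I2 + I3
    [ 0,  1,  0, -1, -1,  0],   # junction: I2 = I4 + I5
    [ 0,  0,  1,  0,  1, -1],   # junction: I3 + I5 = I6
    [ 4,  2,  0,  0,  0,  0],   # loop: 4*I1 + 2*I2 = 12
    [ 0, -2,  3,  0,  0,  0],   # loop: -2*I2 + 3*I3 = 0
    [ 0,  0,  0,  5, -1,  0],   # loop: 5*I4 - I5 = 6
], dtype=float)
b = np.array([0, 0, 0, 12, 0, 6], dtype=float)

currents = np.linalg.solve(A, b)  # same job as the calculator's matrix solver
print(currents)
```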
And so in that case we pulled out the tool. This time I actually did it with students, projecting Grok onto the board, in part because I wasn’t actually sure it would work, and if the AI made a stupid mistake I wanted them to see that too. But I typed all the equations we had come up with into the software and let it analyze the system, and it spit out six currents. And then, because I wasn’t sure whether to believe it, we pulled out a good old graphing calculator, put the same thing in the calculator’s matrix solver, and it spit out the same numbers for the six currents. You win this time, Grok.

And finally - and this I have only done a couple times - I have used AI (again, Grok) to shorten the amount of time required to make a second version of a document I had already written. (The difference between me and the low-quality CLT review website is that, by golly, I actually reviewed everything before I published it.) So here what you want to picture is something like a review sheet for a science exam; maybe I want a second version of it to turn into a review game or something. Feed the list of questions into Grok: “make me a second version of this thing”. And so a question about the melting point of silver becomes a new one about the melting point of copper, a question about diamond scratching graphite becomes a question about fingernails scratching gypsum, or what have you. This did require human editing after the fact, but it saved me time, and because it was really a request to “replace every X with something similar to X”, it’s the sort of thing a language model is good at. (Recognize your tool for when it is useful and when it isn’t; don’t think it’s another mind so you can go ahead and turn your mind off now.)
Probably what all of these share is that they are all activities that would require a fair amount of time but not a lot of thought - they are all somewhat algorithmic - and I’m happy to shorten that time requirement by using the tool.
Ending thoughts
In my personal life I feel more free just to play (about half of what would have been web searches have turned into Grok inquiries for me, for example).
I’ve played a bit with an AI that is specifically intended to solve physics problems. I have, on the one hand, been very impressed with what the software can do, feeding it graduate-level inquiries and getting good answers to them. I’ve also seen the same AI make stupid mistakes on high-school-level problems. I have the trained eye to see those errors; someone just learning the topic obviously will not.
We often get apocalyptic about new technologies - I’ve said before that I once expected every concert forever to be ruined by people’s cell phone ringers going off, for example. At the risk of sounding too casual: we tend to get a handle on things. I’m still a thousand times more concerned about nuclear weapons than I am about AI. As is probably obvious from all this, my personal bias is toward viewing AI as just a smarter (well, sometimes) calculator. Used with intelligent restraint, it’s a tool.
But I’d be happy for your thoughts on this one.
Perspective 3
AI are demons and should be killed with fire.
Enjoyed your AI experiences and related insights. Well summarized. I share your mixed reaction to the pros and cons of AI, especially in the realm of classical education, where the depth of human reason cannot be forged in any meaningful way by AI machinery. In the end, AI can save time, but it is limited to trusted data access AND can never be real "intelligence" due to its inability to independently reason and pursue absolute truth like a human can, as a being created in the image of God. This, to me, is a basic and vital reality (i.e. a spiritual reality) beyond the ability of mankind to construct. "In God We Trust" applies aptly to our consideration of the true value of AI.