Readers will know that I’m not apocalyptic about AI, probably in part because I’ve seen enough predictions of tech-doom come and go to be rather skeptical about the next one. Although a cofounder of OpenAI reportedly suggested building a bunker to protect top researchers from “artificial general intelligence”, I’m still more concerned about nuclear weapons, as far as technological apocalypses go. (Now, give the software the ability to launch nuclear weapons, and we may be in trouble - wasn’t that a movie?)
But at the same time, the technology can be dehumanizing. I don’t think I’ve ever clicked a recommended response in email or text messaging (and I generally turn those off first chance I get), because if you’re letting software take over your relational communications with other people, what do you have left? As Edward Hamilton said a while back, AI might also promote a continued Walmartification of the world - if you can get a product 80% as good as “human built” for only 5% of the cost, guess what most people will choose to purchase?
On that note, here’s a tweet I came across yesterday:
This is Microsoft Word + Copilot offering to draft a document for you. The person who shared this is mocking it, but I thought… this is naturally where a world, a workforce, focused on efficiency and productivity is going to go. Suppose you’re an employee who thinks, no, this is wrong, it takes too much away from the human, I’m going to write a document myself as usual. The guy in the next cubicle, with a similar task, has no such concerns, and turns out four documents while you only turn out one. Who looks better to the boss? You’re liable to be pressured into this stuff whether you want it or not, just to keep up.
And that means we have some choices to make, choices that are going to be increasingly explicit for us. Are economic efficiency and productivity the most important things for a business, for a human life, for a country? In many ways, America has historically answered that question with “yes”, with minimal qualifications. I wonder, actually, if we suffer a bit now from our Protestant Work Ethic history: work hard, work efficiently, work productively; that is what God wants from you right now. Well, OK, but if those things are really top priority, you may be insisting that your employees, say, use these tools, without ever saying as much in words. Now if you’re fine with that, go ahead, I’m not trying to have that debate at the moment. But I sense that many Christians are wary of the drawbacks of these technologies, wary of replacing humans with machines, and they ought to consider whether they are nonetheless promoting a mindset that can only encourage their widespread use.
I’d say the same thing to schools. AI is forcing schools to ask themselves what education is really supposed to be about, which is good, because if a piece of software can provide answers or produce work faster than any of your teachers, faster than your best-trained student, then why are you in business? What is a school for? It’s a good question. But I would say, particularly regarding students, many schools are wary of AI (and for good reason) - OK, but are you, at the same time, overstressing the importance of something like “efficient work” or “productive work” to those same students? Because if you are, you might be quite effectively encouraging them to use these tools, without ever saying as much. (Are you simply giving them too much work to do, and in that way forcing them into an efficiency emphasis? That might go for your teachers too.) We actually need to be saying that, sometimes, fairly often?, nearly all of the time in academic work?, there are good reasons to be intentionally less efficient. Yes, you can spend three hours reading these book chapters and, in the end, write me a poorer essay about this question than the magic box could have given you in ten seconds. I don’t deny that. Do it anyway.
Do you know the line about how Ben Franklin, watching an early hot-air balloon ascent in Paris in 1783, reportedly said, "What good is a newborn baby?"
If using AI, or relying on it for whatever purpose, strips us of our humanity or disconnects us from it, isn’t that apocalyptic enough? If it can draft anything for us with a minimal prompt, what is our purpose in the process, what is our contribution? If the answers it spits out have little truth and no one cares or bothers to check, what objective truth is there, eventually? I think you’re right about efficiency, but at what point do we (the humans) become the hindrance to technological efficiency? It’s good that we’re reconsidering the human aspects—the meaning and purpose—of institutions, but I wonder if the coming culture or “corporate” war will be over priority: those willing to accept human imperfection and inefficiency for learning, creativity, and imagination, and those who value efficiency above all else, even their own humanity.