In a 1980 article, “Minds, Brains, and Programs,” the philosopher John Searle used what has become known as the “Chinese room” argument to discuss the relationship between artificial intelligence and learning. Searle set up his discussion this way:
Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.
Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that ‘formal’ means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch “a script,” they call the second batch a “story,” and they call the third batch “questions.” Furthermore, they call the symbols I give them back in response to the third batch “answers to the questions,” and the set of rules in English that they gave me, they call “the program.”
Like any good philosophical thought experiment, Searle’s Chinese room is meant as a way to explore the underlying questions. In particular, Searle is perfectly willing to say that computers “think,” but would argue against “the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition.” For Searle, “understanding” involves “intentionality,” which in turn is rooted in a biological presence. Here, I’ll sidestep these concepts and focus instead on one place where the rubber meets the road on these issues: how do we interpret the “cognitive state” that occurs when a student uses an AI tool to complete an assignment?
Clay Shirky applies Searle’s Chinese room thought experiment to AI and modern education in “Is AI Enhancing Education or Replacing It? Technology should facilitate learning, not substitute for it” (Chronicle of Higher Education, April 29, 2025). Shirky tells the following story:
The recent case of William A., as he was known in court documents, illustrates the threat. William was a student in Tennessee’s Clarksville-Montgomery County School system who struggled to learn to read. (He would eventually be diagnosed with dyslexia.) As is required under the Individuals With Disabilities Education Act, William was given an individualized educational plan by the school system, designed to provide a “free appropriate public education” that takes a student’s disabilities into account. As William progressed through school, his educational plan was adjusted, allowing him additional time plus permission to use technology to complete his assignments. He graduated in 2024 with a 3.4 GPA and an inability to read. He could not even spell his own name.
To complete written assignments, as described in the court proceedings, “William would first dictate his topic into a document using speech-to-text software”:
He then would paste the written words into an AI software like ChatGPT. Next, the AI software would generate a paper on that topic, which William would paste back into his own document. Finally, William would run that paper through another software program like Grammarly, so that it reflected an appropriate writing style.
This process is recognizably a practical version of the Chinese Room for translating between speaking and writing. That is how a kid can get through high school with a B+ average and near-total illiteracy.
A local court found that the school system had violated the Individuals With Disabilities Education Act, and ordered it to provide William with hundreds of hours of compensatory tutoring. The county appealed, maintaining that since William could follow instructions to produce the requested output, he’d been given an acceptable substitute for knowing how to read and write. On February 3, an appellate judge handed down a decision affirming the original judgement: William’s schools failed him by concentrating on whether he had completed his assignments, rather than whether he’d learned from them.
Searle took it as axiomatic that the occupant of the Chinese Room could neither read nor write Chinese; following instructions did not substitute for comprehension. The appellate-court judge similarly ruled that William A. had not learned to read or write English: Cutting and pasting from ChatGPT did not substitute for literacy. And what I and many of my colleagues worry is that we are allowing our students to build custom Chinese Rooms for themselves, one assignment at a time.
I agree that AI poses real challenges for education. The idea behind traditional pedagogy is that a student starts off by producing imperfect or incorrect work and then progresses to producing better work. At some point in this process, the quality of the “better work” is deemed sufficient, and the student gets a passing mark or promotion to the next grade. But with many assignments, AI can produce output that, in the past, would have sufficed for a passing grade. So where does that leave educators? I have no simple answer, but here are three thoughts:
1) In the story of William A., as told above, he apparently has the capability to use speech-to-text software, and then to work with that output using ChatGPT and Grammarly. In the modern economy, these are not trivial skills. I know young people working for consulting firms who conduct interviews and produce a cleaned-up written transcript with the main points listed at the top. In the case of William A., the true complaint about the education system is not that he failed to learn anything, but that it lied about what he had actually learned.
2) The pedagogy of arithmetic offers some lessons for what educators should be doing with regard to AI. We teach students how to do the tasks of arithmetic by hand (ah, the satisfaction of calculating a square root with a pencil and paper!), but with the understanding that students will use calculators in the future. Thus, the actual goal is not to teach arithmetic per se, but instead to teach students how to apply arithmetic to real-world “story problems” like how much paint to order for a given job or what the photocopying budget will be. The hope is that students don’t become phobic when they see an explanation involving numbers or a table of numbers. The hope is also that all students develop some internal warning bells about arithmetic claims: If you are told that someone was born in 1950 and is now more than 100 years old, someone who can apply arithmetic should immediately question the claim, since 1950 plus 100 puts us in the year 2050 at the earliest. If you are told that some quantity went up 10% and also doubled in size, you should quickly see that either it went up 100% or it didn’t double; either way, something is wrong with the calculation. (A minimal sketch of these two checks appears after this list.)
3) For educators, the new AI tools are bringing the issue of the calculator to written work, as well as to computer programming and other fields. Yes, it probably remains important to teach and test the basics in a way that doesn’t use AI, as we do with arithmetic. But the actual skills desired here are not about generating workable text on a blank screen, whether that means producing the classic five-paragraph essay (intro, three points, conclusion) or a workable computer program. The skills are that students should be able to look at what has been generated with a skeptical eye, consider how well it matches what is actually desired and needed in the specific setting, and make changes, in many cases aggressive and far-reaching changes, as needed. One hopes that students learn to be skeptical and even suspicious of text generated by others, and to recognize the need to fact-check, to look up references, and to test what actually works.
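To make the two warning bells from point 2 concrete, here is a minimal sketch in Python. It is purely illustrative; the function names, the tolerance, and the 2025 reference year are my own assumptions, not anything drawn from an arithmetic curriculum.

```python
# A minimal sketch of the two "warning bell" checks described in point 2.
# The function names and the 2025 reference year are illustrative assumptions.

def age_is_plausible(birth_year: int, claimed_age: int, current_year: int = 2025) -> bool:
    """Someone born in birth_year can be at most (current_year - birth_year) years old."""
    return claimed_age <= current_year - birth_year

def growth_is_consistent(percent_increase: float, claimed_multiple: float) -> bool:
    """A stated percentage increase should match the claimed multiple
    (doubling corresponds to a 100% increase)."""
    return abs(1 + percent_increase / 100 - claimed_multiple) < 1e-9

# Born in 1950 and "more than 100 years old" fails the check in 2025.
print(age_is_plausible(1950, 101))       # False: 1950 + 101 would put us in 2051
# "Went up 10%" and "doubled in size" cannot both be true.
print(growth_is_consistent(10, 2.0))     # False: a 10% increase is a multiple of 1.1, not 2.0
```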
I comment here as someone who has spent a career as an editor, but in my own mind, AI challenges teachers to move away from an emphasis on “authorship,” or whether a student is able to produce a minimally acceptable draft on a blank screen, and toward “editorship,” or whether a student is able to interrogate an earlier draft and take specific actions to improve it.
