The Loom and the Weaver: Why College Writing Must Evolve, Not Retreat
ChatGPT can earn a 3.57 GPA at Harvard. This startling finding from Maya Bodnick’s experiment, in which she submitted AI-generated essays to her professors, signals a seismic shift in higher education that we can no longer ignore. The rise of large language models like ChatGPT has thrust colleges and universities into an unprecedented crisis: the traditional take-home essay, long considered the cornerstone of humanities and social sciences education, now faces an existential threat. As Bodnick warns in “ChatGPT Goes to Harvard,” these AI tools have made “cheating so simple — and for now, so hard to catch — that I expect many students will use it when writing essays” (Bodnick). Meanwhile, Owen Terry’s “I’m a Student. You Have No Idea How Much We’re Using ChatGPT” corroborates Bodnick’s findings, revealing that students are already using AI extensively, often in ways that are virtually undetectable. The academic community stands at a crossroads. Bodnick advocates shifting all essays to proctored, in-person formats to preserve academic integrity, while Terry observes that universities remain stuck in “an awkward middle ground where nobody knows what to do” (Terry). However, both extreme responses, complete prohibition and unrestricted embrace, fail to address the fundamental reality that AI is here to stay. Instead, colleges should adopt a transparent integration approach that teaches students to use AI as a collaborative tool while preserving critical thinking through redesigned assignments that emphasize process, reflection, and human judgment. This middle path acknowledges AI’s permanence in our future while ensuring that higher education continues to develop the analytical minds our society desperately needs.
Terry’s article title captures a stark truth: “You have no idea how much we’re using ChatGPT.” This is not hyperbole; it is a direct challenge to educators who assume they can ban AI and preserve the traditional integrity of academic writing. Terry describes how students have developed strategies for using AI that are effectively impossible to detect. Instead of asking ChatGPT to write their essays outright, students use what Terry calls the “step-by-step” method: “having the AI walk you through the writing process step by step,” developing thesis statements, outlines, and paragraph organization while writing their own prose (Terry). The result is work that neither humans nor detection software can flag as AI-assisted, because “the ideas on the paper can be computer-generated while the prose can be the student’s own” (Terry). Most shocking is not the cleverness of the deception but what it reveals about student ingenuity. The same creativity and problem-solving that educators hope to develop through writing assignments has been redirected toward avoiding those assignments. This irony exposes an essential misalignment: we have designed an educational system in which students’ intellectual curiosity is spent evading work rather than engaging with ideas. That students have devised such elaborate workarounds suggests our current assessments may be measuring compliance rather than competence. Even the technological solutions offer little hope. Bodnick reports that AI detectors remain “deeply imperfect”: OpenAI’s detector correctly identified writing as AI only 26% of the time, and the best detectors “went from 100% to the randomness of a coin flip” once students incorporated paraphrasing tools (Bodnick). The futility of prohibition becomes clear: we cannot police what we cannot reliably detect.
If prohibition is impossible, one might assume the solution is complete acceptance—but this opposite extreme presents equally serious challenges. Bodnick warns that widespread AI adoption “risks intellectually impoverishing the next generation of Americans” by eliminating the need for students to develop critical thinking skills (Bodnick). When students can generate thesis statements and structured arguments with “almost zero brain activity,” as Terry admits, this undermines the fundamental purpose of education—learning how to think. But I would argue the danger runs even deeper than skill atrophy. When we outsource our initial thinking to AI, we lose something irreplaceable: the struggle of formulation. That frustrating, sometimes agonizing process of staring at a blank page, grappling with inchoate ideas, and slowly giving them shape—this is where real intellectual growth occurs. The difficulty isn’t a bug; it’s a feature. Just as physical muscles grow through resistance, intellectual capacity develops through cognitive strain. When AI removes this resistance, we risk creating a generation of intellectual consumers rather than producers, people who can recognize good ideas but cannot generate them independently. This creates what Terry identifies as an “awkward middle ground” where educational institutions neither embrace AI’s potential nor effectively prevent its misuse, leaving students to navigate this new landscape without guidance or structure (Terry). The result is an educational system that fails on both fronts: it cannot maintain traditional academic standards, nor does it prepare students for an AI-integrated future.
To escape this educational paralysis, we must confront the broader implications of AI’s rise. That begins with the uncomfortable truth Bodnick herself states: “AI is not coming just for the college essay; it is coming for the cerebral class” (Bodnick). If AI is fundamentally redefining knowledge work in law, journalism, and business consulting, then trying to keep students away from these tools during their education is, at best, pointless and, at worst, counterproductive. Consider the specific implications: junior lawyers currently spend hours researching case law and drafting briefs; AI will do that work in minutes. Entry-level consultants, whose value lies in piecing together market research and creating presentations, will face competition from AI that can digest far greater volumes of information instantly. Journalists who take pride in quickly transforming complex information into legible prose will find themselves competing with AI that can produce publication-ready copy on demand, making traditional deadline pressures irrelevant. Even in academia, graduating PhD students who conduct literature reviews or write grant proposals will encounter AI systems that can perform much of that work at superhuman speed. The question is not whether these jobs will disappear altogether (they will not) but how they will be transformed. The lawyers, consultants, and writers who flourish will be those who can direct, assess, and enhance AI-generated work, not those who compete with it. Students who leave college without knowing how to partner effectively with AI will enter a rapidly changing workplace at a severe disadvantage. Higher education therefore has a responsibility to evolve its pedagogical practices and prepare students for the world they will actually encounter after graduation.
What does this preparation look like in practice? Transparent integration offers a practical way forward that addresses both authors’ concerns. Under this model, students would declare all AI use upfront and embed documentation of that use in their submissions, for example by including transcripts of their ChatGPT conversations that show their prompts, iterations, and decision-making. Rather than hiding AI use, students would disclose it, allowing educators to evaluate the entire intellectual process, not just the final product. Assessment would shift from simply judging written output to judging how students prompt AI, how critically they evaluate its responses, and how they synthesize that material into coherent arguments. This reframes AI as a tool used openly for learning rather than covertly for deception.
The benefits of transparent integration extend beyond mere compliance. First, it preserves academic integrity by eliminating deception—students cannot cheat if they are explicitly documenting their process. Second, it develops vital AI literacy skills that students will need in their later careers, teaching them how to craft effective prompts, identify AI limitations, and blend machine-generated content with human thought. Most importantly, it maintains focus on critical thinking by requiring students to demonstrate judgment in selecting, modifying, and contextualizing AI output. Rather than replacing human thought, transparent integration positions AI as a collaborative tool that enhances but does not substitute for genuine intellectual engagement. This approach acknowledges that in an AI-saturated future, the ability to thoughtfully direct and evaluate AI output may become one of the most valuable skills we can teach.
While the advantages are clear, translating this approach into action requires substantial pedagogical innovation, particularly in how writing tasks are designed and evaluated. Process-focused assignments would require students to submit their complete AI conversation alongside the final essay, so that the intellectual process is valued as highly as the end product. Terry points out that AI can do “the thinking” behind a writing task, such as generating a thesis statement or an outline; in this model, students would be graded on the quality of their prompts, their iteration and refinement of the AI’s suggestions, and their judgment about which ideas to pursue (Terry). In other words, the process that remains hidden in Terry’s description becomes a visible, evaluable skill. For instance, a student analyzing symbolism in The Great Gatsby might submit the series of prompts they used, beginning with “What are the main symbols in The Great Gatsby?” and sharpening to “How does the green light specifically relate to the American Dream in Chapter 1 versus Chapter 9?”, along with a rationale for setting aside the AI’s first generic response about hope in favor of its commentary on the light symbolizing the shift from aspiration to disillusionment. The final essay would then reference this AI-assisted process alongside the student’s own observation about how Fitzgerald’s financial anxieties shaped his symbolic connections, an insight the AI never surfaced. This mirrors how professionals already work with AI, which makes the coursework parallel authentic practice. A marketing director knows better than to accept ChatGPT’s first campaign proposal; they refine prompts, combine responses, and draw on knowledge of their brand and industry that the AI does not have. Likewise, a data analyst does not simply accept AI’s first interpretation of a set of statistics; they ask it to reconsider its reasoning, interrogate it for bias, and supply the context their expertise provides. By making this iterative process visible in the academic context, we prepare students for the way knowledge work increasingly functions. An instructor might evaluate how a student challenged the AI’s initial response, what clarifications they sought, and how they guided the AI toward more coherent argumentation, precisely the skills professionals will use when working with AI.
Beyond process documentation, we need additional strategies to ensure authentic learning. Hybrid assessment models offer another crucial innovation, combining the benefits of in-person writing with AI-assisted development. Building on Bodnick’s suggestion that professors “have students write the first draft of their essay during this proctored window,” institutions could require students to produce initial drafts in supervised settings, then use AI tools transparently for revision and development (Bodnick). This approach ensures that students can formulate independent thoughts while still learning to leverage AI for refinement. Adding mandatory reflection components where students analyze and justify their AI interactions would further deepen learning, requiring them to articulate why they accepted, rejected, or modified AI suggestions.
These pedagogical shifts necessitate teaching entirely new forms of critical thinking. Students must learn to evaluate AI output with the same rigor they would apply to any source, identifying biases, factual errors, and logical inconsistencies. Prompt engineering—the art of crafting queries that elicit useful AI responses—becomes a core competency alongside traditional writing skills. Most crucially, educators must focus on developing higher-order skills that AI cannot replicate: ethical reasoning, creative problem-solving, and the ability to synthesize disparate ideas into original insights. By explicitly teaching these skills, we prepare students not just to coexist with AI but to remain intellectually vital in an automated world.
Naturally, any shift in educational philosophy will invite resistance, and many critics will reject transparent AI use outright, insisting that sanctioning the tool at all simply sanctions cheating. That objection ignores the distinction between deception and the legitimate use of tools. If a student conceals AI use, that is a breach of academic integrity; if the student discloses and reflects on that use, they are engaged in genuine learning. A familiar comparison is the calculator in mathematics. We do not bar students from calculators simply because they compute faster than humans; we teach the underlying mathematics first, then teach students when and how to use the tool responsibly. Yet even the calculator comparison, useful as it is, understates the scale of the change we now face. Calculators automated computation, but humans still decided how to set up the problem, how to interpret the answer, and how to make sense of it in a real-world context. AI can now perform much of the intellectual process itself, from understanding the question to determining an approach to executing a solution. This is a change more akin to the Industrial Revolution’s shift from handloom weaving to fully automated power looms. The weavers who survived did not compete with the power looms on speed or efficiency; they repositioned themselves as loom managers, pattern designers, and quality inspectors. In the same way, the writers of tomorrow will not compete with AI on first drafts; they will distinguish themselves by how skillfully they guide AI and by supplying the human elements of ethical deliberation, emotional impact, and cultural context that AI cannot simulate. In short, transparent integration treats AI not as a crutch that replaces the primary learning experience but as a framework that builds on it.
A second objection concerns skill development. Some educators worry that students ‘won’t learn to write’ if we allow AI assistance. This concern assumes that writing skills remain static, but literacy has always evolved alongside technology. Just as word processors didn’t eliminate the need for clear communication, AI won’t eliminate the need for effective writing; it will transform it. Students must still master argumentation, organization, and clarity, but they’ll also need new competencies: curating AI suggestions, editing machine-generated text, and critically evaluating automated output. These skills represent evolution, not degradation, of writing ability.
There is no time for half-measures. Terry’s testimony illustrates that the system is already broken: students are using AI wherever they can, sometimes to support their learning and sometimes simply to copy, while educators remain largely oblivious to the breadth of the problem. Bodnick’s data show that holding the line against AI is futile when even the best detection tools perform at the level of random chance. Ignoring this reality while insisting on outdated assessment practices guarantees failure in our fundamental purpose: teaching students how to think critically and communicate effectively. Transparent integration offers a way forward that honors the permanence of AI while defending the values of education, through process-based assessment, hybrid proctored-and-assisted models, and a reimagined approach to critical thinking.
Universities must move quickly to reform their writing pedagogy before the gap between classroom practice and the reality of students’ lives grows too wide for learning to bridge. This means training faculty in AI literacy, redesigning assignments around transparent AI use, and creating assessment methods that measure the entire intellectual process rather than the final product alone. Institutions that hold out risk irrelevance, producing graduates who have mastered neither traditional writing nor the AI-assisted writing their careers will demand. The question is no longer whether students will use AI; Terry and Bodnick have made that clear. The question is whether we will teach them to use it wisely, ethically, and in service of genuine learning, or whether we will shirk that responsibility and leave them to navigate this new reality alone.