Banning AI-assisted writing doesn’t protect quality—it protects status. Fluent, polished writing has long been a proxy for intelligence and credibility. LLMs weaken that signal by lowering the cost of clear expression, letting low-status, time-constrained, or non-native authors get their ideas onto the page. That’s uncomfortable, but it isn’t bad—it’s exactly what critical thinking requires: judging ideas, not signals.
All LLM output is human-directed: prompt → AI expansion → human refinement → repeat. As with GCC or finite-element analysis (FEA), the human specifies the goal, guides the process, and evaluates the result. No one says mathematicians using proof assistants or engineers using simulations aren’t real mathematicians or engineers—why treat writing differently?
The only content that deserves banning is automated, bot-driven spam. Everything else should be judged on merit. Critical thinking doesn’t change depending on origin; readers must always evaluate arguments themselves. Provenance-based rules preserve hierarchy, not truth, and suppress tool-augmented cognition that could improve discourse.
If you value ideas over signals, you cannot ban these thought compilers—tools that compile intent into prose—without undermining the very critical thinking you claim to protect.