On the Ethics of AI Use

My university is working through its AI policy, and some faculty are reluctant to recognize the value of this new technology. I allow my students to use AI as a research tool and as a copyeditor. I agree that AI-generated content is ethically problematic, but using AI as a research assistant, a copyeditor, or even a sounding board to clarify thoughts and steel-man arguments is not only ethical but increasingly standard, perhaps even necessary for those for whom English is a second language. The ethics hinge on attribution and intent; using AI in these roles is distinct from passing off AI-generated content as wholly one’s own.


I have thousands of books and dozens of binders of printed articles in my campus office and home library, as well as access to myriad databases through my university’s library. At the same time, AI can sift through vast datasets, summarize studies, and flag relevant sources faster than a human; tools like ChatGPT (OpenAI) or Grok (X’s chatbot, a term coined by Robert A. Heinlein in his 1961 science-fiction novel Stranger in a Strange Land to denote a form of understanding) can process thousands of articles in minutes. This accelerates discovery and grounds one’s work in evidence. Moreover, if one is familiar with a body of literature, AI can help immensely in recalling sources and suggesting related ones.

Using AI as a research assistant is fine if one verifies the output. As I have noted in the past on Freedom and Reason, AI can hallucinate, citing nonexistent books and papers, or skew results (e.g., through overreliance on certain sources). The responsibility falls to the researcher to ensure accuracy. In this way, one can think of AI as a colleague, librarian, or reviewer suggesting books that one still needs to check and read.

Crucially, per academic norms, no attribution is needed for this backstage or process role. Neither APA nor MLA requires citing search tools. When my students include these tools on their works cited page, I ask them to remove them in revision. The method by which they locate sources is not relevant; what matters is that they go look for sources, read them, and cite them accurately and fully, following all the rules of the assigned style.

AI tools such as ChatGPT and Grammarly polish grammar, tighten prose, and suggest structural tweaks. The writing mechanics of AI systems are sound. With their help, a writer can, for example, reduce a 500-word draft to 300 words without losing meaning. This is not only more efficient; it raises the quality of one’s work by enhancing clarity and readability. The result is instructive as well: by modeling active voice, for instance, AI tutors writers, providing a ready example of efficient and logical writing and even thinking.

All of this is perfectly acceptable. Writers have long used tools such as spellcheck and thesauruses (just don’t abuse the thesaurus!) to refine their work; AI is simply a smarter version of these tools. The final product still reflects the author’s ideas and voice. Think about it: no one credits Microsoft Word or Outlook for fixing typos or suggesting phrasing. There is no need to credit AI for such things any more than one should credit a calculator for solving math problems or a statistical package for generating output and interpretations.

Bouncing ideas off AI to clarify thoughts or steel-man arguments is also perfectly legitimate. AI can challenge assumptions and refine logic, acting as a Socratic sparring partner; thinkers thus have another method for engaging in dialectics. One can sharpen his arguments, essays, or speeches this way. AI thinks logically, so using it as a sounding board is often more helpful in reaching understanding than engaging humans in debate and discussion. An unfortunate reality (and this has been true since time immemorial) is that humans typically do not understand the rules of logic and engage instead in sophistry, which undermines reason rather than enhancing it.

The intellectual work involved in forming an argument using AI remains the possession of the arguer; AI just helps the arguer see his argument more clearly. No attribution is needed here, either, as AI is serving as a process tool, not a content source. In this way, as a thinking machine, AI is an effective learning tool. It’s like playing a computer in chess or puzzling through patterns in a video game.

One problem, however, is that using AI as a copyeditor risks an unintended side effect of AI’s polish: tripping up detection tools like GPTZero. When one uses ChatGPT, Grammarly, or similar AI for copyediting, the output can ping as “AI-generated” because these systems smooth out the natural messiness of human writing in ways that mimic the machine’s own generative patterns. Put another way, AI is a very good writer, and polished writing produces false positives.

When ChatGPT copyedits, it doesn’t just fix commas; it might rephrase for flow. These rewrites align with its training data and, as a result, nudge the text closer to the patterns GPTZero flags. Grammarly’s “clarity” suggestions can do this, too, swapping passive voice for active or trimming hedges, changes that reflect AI’s stylistic leanings. But this is not a bad thing. Again, one not only tightens his writing by using AI to copyedit but learns to be a better writer by having AI model good writing.

Detection tools are built on corpora (collections of written texts, such as the literature on a particular subject or the entire works of a particular author), including AI outputs. If ChatGPT’s editing mimics its own generative style (or Grammarly applies a similar optimization), the edited text can overlap with those training sets, raising the “AI likelihood” score. It is easy to defeat these systems with even moderate tweaking of the output. But why should a writer feel compelled to degrade his work because others may have suspicions about it? This is one of the problems with AI: it degrades trust.
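To make the pattern-matching point concrete, here is a minimal sketch of one heuristic, perplexity scoring, that detectors of this kind are reported to rely on. It is an illustration only, not GPTZero’s actual implementation; the GPT-2 model and the cutoff value are assumptions chosen for demonstration.

```python
# Illustrative only: a perplexity-based heuristic of the kind AI detectors are
# reported to use. Not GPTZero's actual method; the model (GPT-2) and the
# threshold are assumptions for demonstration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable a passage is under a language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Low perplexity means highly predictable prose, which the heuristic reads
    # as "machine-like." It measures style, not authorship or intent, so a
    # human draft polished by an AI copyeditor can fall below the cutoff too.
    return perplexity(text) < threshold
```

The check at the end scores predictability, not authorship, which is why a human draft tightened by an AI copyeditor can cross the same threshold as machine-generated text.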

The problem vis-à-vis AI detection is that many don’t grasp the reality that the detectors aren’t distinguishing intent; they’re just pattern-matching. The issue therefore lies with detection tools overreaching. They’re blunt instruments, designed to catch fully AI-written essays (and even here there are false positives), not nuanced edits made with AI assistance. Punishing writers for using AI as a tool, especially for legitimate copyediting, misreads intent. It’s like flagging a painter for using a ruler: the art is still his. The ruler is a tool, like a calculator or a statistical package.

Using AI to edit or as a sounding board is ethical: the author is refining his work, not outsourcing it. He is using a superb piece of technology to increase his productivity and to improve the quality of the product. The problem is therefore practical, not ethical or moral: detectors can’t yet tell human writing with AI polish from writing generated by AI from scratch. That’s on the technology, not the author. Until such time as the detectors can do that (and they may never be able to, especially given the pace of advancement in this technology), the use of AI detection to assess the honesty of the writer is unjust. The problem of false positives is insurmountable, while the problem of false negatives allows those who use AI for content generation to escape detection.

Unfortunately, the problem of content being falsely flagged as AI-generated, especially when an instructor is aggressively suspicious of student writing, could discourage the use of AI in writing. That would be a loss because, as noted, AI copyediting is a great equalizer for non-native speakers and busy creators. If a professor or editor wrongly assumes a polished draft is AI-generated, the writer risks unfair penalties, especially in academia. Plagiarism policies lag tech reality (and likely always will), and, to be blunt about it, technophobia punishes those who avail themselves of the latest tools to improve their writing and better convey their ideas.

One last point. I have heard from many people the complaint that what they perceive as AI content feels sterile. I suspect this complaint stems from the feeling that strong logic and good copyediting prioritize efficiency and polish over personality, traits that can make the writing feel as if it’s missing a human touch. Yet there are thinkers who are highly logical with tight writing mechanics, and an argument can be made, and should be made in my view, that the work of these thinkers is as valuable, and in some areas more valuable, than the writing of those who infuse their work with digression, passion, and tangents. As a huge fan of Star Trek, I think of the distinction between viewers who favor Spock over Kirk and vice versa. The Spock writer prefers logic over passion.

In my own writing, I often pursue tight science writing, while other essays are written in a white heat (some readers of Freedom and Reason find typos and let me know about them, which I appreciate very much). Sometimes I pursue both at the same time, infusing my science writing with polemics. But there is nothing inherently wrong with sterile science writing (indeed, as I said a moment ago, sometimes it is preferable). If readers don’t like it, that’s a matter of taste. No writer should feel compelled to change his style because others find it sterile. And no writer should deny himself the benefits of technology for fear that others will judge him harshly for it. There is no ethical basis upon which to make that judgment. Their self-denial (assuming it’s genuine) is their problem, not the problem of the author who uses the tools available to him.
