After the explosion in machine-generated text and illustration over the last year or two, online controversy about these programs has been, well … constant. Yet there’s something off about the discomfort expressed both by artists and by art fans about machine-generated content. A hole in a pattern, which I’m not quite clever enough to identify. My starting point (as far as I can tell) is our collective willingness to adopt Google Translate. We were comfortable accepting one form of machine-generated text, but we feel uncomfortable accepting another.
This isn’t an “Austin’s got an answer!” article; it’s more like an “Austin has questions and analogies which bother him” article. Fair warning.
Is Translation Art?
Yes.
A better question is why. I read a fair amount of work in translation. Epic poetry fascinates me, but not enough that I’ll become a damn polyglot just so I can read the Mahabharata in Hindi or The Knight in the Panther’s Skin in Georgian. Poetry’s not easy to translate because poetic techniques which are effective in one language often aren’t effective in another.
If someone asked me “Which epic has most frequently been translated into English?” I’d probably guess the Iliad. It’s not just that I’m basically a hellenophile (I am; my academics were basically a mountain of Plato). Rather, it’s an acknowledgment of Homer’s historical influence on Western culture. There are other works that have been translated more frequently (most notably the Bible), but in terms of epic, my gut’s with the Iliad.

Like most lapsed Classicists, I’ve got the first stanza-ish of the Iliad lodged in my brain somewhere. It’s a good party trick; I especially like using it while playing TTRPGs to pretend to speak crazy foreign nonsense. Despite translation after translation, there’s no one “canon” rendition of the Iliad’s first line in English. In ancient Greek, this line is:
μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος
(transliterated: Meinin aeide thea, Peileiadeo Akhileios)
When someone asks me what the heck I just said, I usually ad-lib a translation along the lines of
Rage! Sing of rage, goddess, that bane of Akhilleus
the son of Peleus, who brought untold pain to Akhaians.
Now, you may have noticed that’s two lines making up one sentence. In the original, the first sentence runs about seven lines because of the structure of ancient Greek grammar. Lattimore’s classic translation runs this as
Sing, goddess, the anger of Peleus’ son Achilleus
and its devastation, which put pains thousandfold upon the Achaians …
with the whole sentence going to line seven, as in Homer. My favorite edition, translated by Edward McCrorie, runs it
Sing of rage, Goddess, that bane of Akhilleus,
Peleus’ son, which caused untold pain for Akhaians …
and likewise continues to the seventh line.
Which translation is correct? Later, at line seven, Lattimore uses the adjective “brilliant” to describe Achilles, when the word dios strikes me as closer to “godlike.” Is Lattimore wrong?
Of course not! This is the nature of good translation—of artistic translation. The worst translation of the three is definitely mine, because it turns out mine is basically half translation, half my memory of McCrorie’s rendition. The only substantial difference is the exclamation “Rage!” to evoke Homer’s meinin.
There’s a real art to translation because good translations aren’t word-for-word. They take into account a language’s metaphors, colloquialisms, and turns of phrase. They understand the language’s quirks, and the author’s quirks. Sometimes a phrase is shared by both languages (if memory serves, “have your cake and eat it too” appears in Plato and survives as a modern idiom); other times, the author’s quirks are totally at odds with the usual quirks of the language.
Sometimes doing a story justice means forgoing direct reference to the original text! Carole Satyamurti’s Mahabharata does just that: it’s a “retelling” rather than a “translation,” and it’s gorgeous. She explicitly frames it that way because providing the Mahabharata’s story to an English audience requires more than mere abridgment to a manageable single volume. (The original work is about 1.8 million words long, a bit over 7,000 pages in a standard paperback.) Telling the story as best she could necessitated abandoning line-by-line adherence to the original text. Abridging specific chapters still wouldn’t have allowed her to tell the epic in full.
Discomfort & Translation
Machine-generated text and illustration make most people I’ve talked with feel uncomfortable. This is especially true of artists. Discomfort doesn’t mean people refuse to use these tools; right now I’m just pointing at that emotional experience. An analogous experience might be shopping at Amazon or Walmart.
We try to explain that emotion in a number of ways, often with moral or legal language attached. Indeed, I think there are significant moral complications to this technology. It makes sense to me that there are legal problems with machine-generated works, but I don’t know nothin’ about modern copyright law. So we’re going to set the legal aspect of machine-generated work aside.
Discomfort is often dismissed as a trivial emotion, as if the feeling itself were the whole explanation for a person’s moral claims. I tend to think instead that these emotions are “signposts” pointing at actual moral beliefs. In general, I tend to think people hold coherent moral beliefs: real positions about values which ought not be trivialized in the way “that’s just how you feel” implies. Most folks just aren’t good at expressing those beliefs clearly.
This is why it’s interesting that the current fad for machine-generated text causes emotional discomfort, but machine-generated translation—such as Google Translate—does not. At least, it hasn’t in the past as far as I know; that may well change.
On the surface, Google Translate and, say, ChatGPT look structurally similar. Text goes in, new text comes out. This description is, obviously, trivial. After all, ChatGPT can provide text and meaning which was not in the prompt, through a process loosely analogous to children’s word association games. A noun such as “ball” has associations with “sports, baseball, etc.” Highlight keywords, find additional associations, and arrange them according to English’s grammar, and you have a response. I’m paraphrasing here, based on my limited experience with machine-generated text. I find this useful for research because the program can spit out jargon keywords I don’t know, which I can then use to go find actual definitions and trustworthy sources.
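If a concrete picture helps, here’s a toy sketch of that word-association game in Python. To be clear, this is not how ChatGPT actually works (real models learn statistical associations from enormous amounts of text); the hand-written table and the associate function below are invented purely to illustrate the analogy.

```python
import random

# A hand-written toy "association table." A real language model learns
# statistical associations from enormous amounts of text; these entries
# are invented purely to illustrate the word-association analogy.
ASSOCIATIONS = {
    "ball": ["sports", "baseball", "bounce"],
    "sports": ["team", "score", "baseball"],
    "baseball": ["pitch", "bat", "ball"],
}

def associate(keyword, steps=5, seed=42):
    """Follow associations outward from a prompt keyword."""
    rng = random.Random(seed)
    chain = [keyword]
    for _ in range(steps):
        options = ASSOCIATIONS.get(chain[-1])
        if not options:  # no known associations; stop "generating"
            break
        chain.append(rng.choice(options))
    return chain

# Text goes in, new text comes out, e.g. ['ball', 'sports', 'score']
print(associate("ball"))
```

Notice that even this toy version can never produce a word that isn’t already somewhere in its association table, a limitation I’ll circle back to below.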
Is “word association” analogous to “translation”? Perhaps. A human who asks ChatGPT a question can genuinely find information not already in their own head. In that regard, the machine seems to step beyond Google Translate’s adherence to the provided material. Yet it seems like word association involves a type of conceptual shifting from one object to a similar object.
Another comparison might be to seeking out a word’s synonym. Two synonyms share most of their meaning, but it’s rare that they’re identical. For example, “big” and “large” are basically exact—I can’t imagine a meaningful circumstance in which “big” is wrong, but “large” is correct. (One exception would be in a sequence: large, larger, largest. But this distinction is about correct English style, not the meaning of the synonyms.) In contrast, “precise” and “accurate” are synonymous, but I can imagine circumstances in which the shades of difference between the two impact a sentence’s meaning. It is not per se wrong to say “My arrow was precise,” but there is a slight difference between that sentence and “My arrow was accurate.”
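One way to make “share most of their meaning” concrete: many language tools represent words as vectors, so that shared meaning becomes geometric closeness. The sketch below is a toy built on that assumption; the numbers are made up for illustration, not taken from any real model.

```python
import math

# Made-up three-dimensional "word vectors." Real embedding models learn
# vectors with hundreds of dimensions; these numbers are invented so the
# example stays self-contained.
VECTORS = {
    "big": [0.90, 0.10, 0.05],
    "large": [0.88, 0.12, 0.06],
    "precise": [0.20, 0.85, 0.30],
    "accurate": [0.25, 0.80, 0.45],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the meanings point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(VECTORS["big"], VECTORS["large"]))        # nearly 1.0
print(cosine(VECTORS["precise"], VECTORS["accurate"])) # close, but lower
```

On these invented numbers, “big” and “large” sit almost on top of each other, while “precise” and “accurate” are near neighbors with a measurable gap: synonymous, but not identical.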
If ChatGPT’s word association game shifts words in the same way that we identify synonyms, it seems to me that this is effectively a mode of translation. The machine seems unable to generate text which is not present within its web of word associations. We’ve returned to the trivial description: text goes in, new text comes out. Instead of shifting languages, the text-generating algorithm seems to use related words (or, perhaps, related concepts; but that suggests it holds real concepts rather than a database of definitions, and that question is pretty far outside my intended scope).
So there may be a non-trivial analogy between translation and the machine-generation of text. If Google Translate and ChatGPT share a structural similarity, why does it seem that only the latter provokes discomfort?
One answer is that the discomfort is simply an error. Machine-generated text can use a human’s prompt to produce information the human didn’t know, which makes it seem substantially different from Google Translate (hence the discomfort). But we’re mistaken: machine-generated text and machine translation actually share a similar structure. Therefore, our discomfort is due to an illusion.
I don’t quite find that satisfying. Let’s use the translation of epic discussed above to try explaining why.
Enjoy staring as I try to muddle through my own thoughts? Then you may as well check out my Patreon since backers get to peek behind the scenes at my creative process. Backing costs about the same as a cup of nice coffee each month, and really helps me cover costs for the site. Thanks!
More Than Word Association
There’s a substantial difference in how a human and Google Translate transform words during translation. Both engage with meaning, and both try to remain close to the meaning of the source text. Humans tend to be more nuanced, while Google Translate tends to be more literal. However, accurately capturing the nuances of language is a task which, hypothetically, a machine might eventually accomplish. The subtle differences of tone, meaning, and style do not sufficiently explain the procedural difference. The difference lies in human expression.
Lattimore’s and McCrorie’s translations of the Iliad’s first line are equally accurate to Homer’s Greek. A machine might spit out either and have successfully accomplished its task. The translation process resulted in two versions not because of an obscure aesthetic nuance, but because of the expressive (artistic) requirement when translating such a work. I suspect a machine could produce a more “accurate” translation of the Iliad in the sense of “adherence to Homeric form”: retaining structural aspects of the Greek (such as meter) while also forming text which utilizes English poetic techniques (such as alliteration). Let us suppose that such a translation even contains all the nuances of meaning found within the Greek text, making use of equivalent colloquialisms, metaphors, and so on. This translation would still not be “perfect,” because it would lack the intentional, expressive element of translation by a real mind. It would be of academic interest (perhaps even a good translation), but such mechanical word-crunching lacks the interpretive nature of true translation.
“But,” one might ask, “isn’t that interpretation only necessary because translators can’t perfectly calculate how to fit together both meaning and poetic techniques?” In other words, is the interpretive element of translation merely a function of each language’s nuances?
To an extent, yes. Yet this is not why we translate and re-translate the Iliad, or the Bible, or the Mahabharata. Those pieces of art which have endured interpretation and re-interpretation, and yet remained themselves, have done so because translation as an art form demands the translator engage in personal expression. Personal preference grounds the choices made in these translations. This use of language expresses meaning in a way which a machine cannot generate, because that meaning exists outside the actual words.
To borrow a concept from Monty Python’s Flying Circus, I can explain why I think “ball” sounds woody and “light” sounds tinny, and I can explain my preferences in how I use such words while writing fiction. A machine might be trained to identify and utilize aural qualities (such as woody or tinny) in generating text. It might be given rules about its user’s preferences. Yet those preferences are external to the meanings of the words. They do not exist within the language, within the “word association” game used by a text-generating program. Alliteration could be said to exist “within the language”; my preference for woody words is my own.
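To sketch that distinction in code: imagine a system picking among the synonyms its association data offers. Everything below (the candidates, the aural tags, the preference weights) is invented for illustration; the point is only that the preference table is handed to the machine from outside, rather than living inside the word-association data itself.

```python
# Synonyms the "language" itself offers. In a real system these would
# come from the model's learned associations.
CANDIDATES = ["ball", "sphere", "orb"]

# Hypothetical aural qualities a machine might be trained to assign.
SOUND = {"ball": "woody", "sphere": "tinny", "orb": "woody"}

# My preference for woody words lives here, supplied by the user;
# it exists outside the language, not inside the association data.
PREFERENCE = {"woody": 1.0, "tinny": 0.2}

def pick(candidates):
    """Re-rank the language's candidates by an externally supplied taste."""
    return max(candidates, key=lambda word: PREFERENCE[SOUND[word]])

print(pick(CANDIDATES))  # 'ball', the first woody word in the list
```

The machine executes the rule faithfully, but the taste the rule expresses was mine before the program ever ran.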
Without the ability to make intentional choices based upon preference, it seems to me impossible to call machine-generated text, even exceptional text, “art.”
I suggest that this distinction applies to other modes of machine-generated text, illustration, and so on. An algorithmic program might identify the techniques involved in an artistic process and learn to reproduce them. So long as it is unable to prefer word P to word Q, brush stroke R to brush stroke S, it is unable to create “art.”
Explaining Discomfort
This distinction between how machines and humans produce text strikes me as pretty likely to be both accurate and meaningful. But does it explain why we feel discomfort, why we intuit that there’s a difference between Google Translate and ChatGPT?
I’m not sure.
That’s terribly comforting to hear at the end of a rather long discussion, I’m aware. (Long discussion in the digital world, anyway.) I think it’s a useful discussion, maybe even an interesting one. That’s why this is still going up! But I’m not sure this analysis of art, translation, and machine-generated text explains the emotional discomfort we experience about these programs. This emotion’s key dimension is moral, not structural. Where I’ve gone astray, I suspect, is in focusing on the structural distinction between human and machine-generated translation. It doesn’t seem that “machines can’t have preferences” is the cause of discomfort around this technology.
Discomfort and structure may intersect when people call machine-generated text or illustration a work of “art.” There seems to be a fundamental assertion that “art is made by humans” which this claim violates. This structural distinction probably helps explain why people feel that way. It defines and makes explicit the reasons for a conclusion many people have intuited. (As I said earlier, I tend to think people have mostly coherent beliefs but struggle with expressing them clearly.) Yet I don’t think the claim “this is art” is the source of this discomfort. There are many other contexts in which machine-generated text or graphics are used, and are not called “art,” and which still provoke discomfort. For example, no one calls a college kid’s ChatGPT essay “art.” … We also don’t seem to call it plagiarism. We just call it cheating.
Hm.
That’s interesting.
Might continue this line of thought next week. Might not. I don’t know! I’ve already spent way longer than intended, so for now we’ll just leave that as food for thought.
Until next time!
Want to keep up-to-date on what Austin’s working on through Akhelas? Go ahead and sign up to the email list below. You’ll get a notification whenever a new post goes online. Interested in supporting his work? Back his Patreon for early articles, previews, behind-the-scenes data, and more.
You can also find Austin over on Facebook, and a bit more rarely on Twitter.
Interesting post, per usual. To me, the issue with AI text or images is when they’re used in commercial products. I’m a fan of Record of Lodoss War, a Japanese novel series heavily based on D&D (arguments could be made for it to be called the original Critical Role). Only the first book is officially translated, but the fan community has used AI translation software to render a (sort of) readable English translation. The results are friggin terrible, but beggars and choosers and all that. If a company tried to sell me that crap, I’d be offended not only by the quality but also by the fact that they deprived a real translator of a job.
Same applies to ChatGPT and Midjourney, etc. Use them to make cool images for your players, sure, or to generate ideas for an adventure, but trying to sell that output cuts real artists and authors out of work. It’s totally different.
Thanks for the compliment! 🥰
I agree that the usage problem is most central to commercial works. I’m a bit leery about non-commercial use but have a hard time feeling strongly against it. The ethical sourcing of the program’s training data is still a problem, but I’m not sure a GM using Midjourney for their private game is causing actual harm to artists.
Yeah, I mean I will cop to using copyrighted images as inspiration or to show to players to get them on board or in the right mindset. It’s pretty similar. Yes, the sourcing of MOST of these AI tools is highly questionable, but it’s not much different than grabbing a still from one of the Conan movies, or some random image off DeviantArt. I’d be willing to bet most folks have whole folders of such images and don’t think twice about it. And I don’t feel like they should. If they post them on their blog (for instance), they should be attributed and optimally linked back to the source, and of course, selling them is illegal. As it should be.