I have never knowingly used AI. As the author says, "As long as you hold on to your own discernment and individuality, you’ll come out intact. For now."
From The European Conservative
By Rina Furano
AI writing is ubiquitous and devoid of style. Why this is more than a cosmetic problem.
Linguistically, I am a bit of an extremist. When I was still living in my native Vienna, I’d routinely complain to publications and manufacturers that were gradually replacing common Austriacisms with the sterile German-German equivalent. Even my friends, mostly writers by profession, thought it was a bit much, though I once asked an English friend how he would feel if everyone suddenly called an aubergine an “eggplant” or a roundabout a “traffic circle,” and he conceded the point.
I now live in France, which arguably treats its language a lot better, but my soul is not at peace. This is because I failed to account for the rise of ChatGPT, whose horribly formulaic syntax and vapidly clichéd rambling are everywhere these days: social media posts, YouTube scripts, politicians’ tweets, wedding vows, even articles in the mainstream media. I recognise the style instantly, because I worked on it.
Apparently, sadly, to no avail. AI writing tends to be poor because the potential fix is elusive on many different levels. It takes a lot more than one contracted worker bee to change an algorithm and, considering the mathematical and technological underpinnings of large language models in general, even a lot more than one entire department. In fact, the basic principles of tokenization [sic] and the related mechanisms—in other words, the inner cogs and gears of an LLM (Large Language Model)—are fairly difficult to change once the entire construct is in motion. Even if two thousand linguistics Ph.D.s were to complain to the software engineers about excessive em dashes, empty corporate jargon, or hackneyed phrasing, little could actually be done; like a human brain, an artificial neural network tends to default to its most used pathways, and thus, any fine-tuning that is done retroactively is like chipping away at Mount Everest with a toothpick. Major changes are possible in theory, but prohibitively labour-intensive and costly in practice.
Optimists may be able to find a silver lining here: ChatGPT’s “writing” likely won’t replace actual writers any time soon—at least none with a triple-digit IQ. Not only because it is outrageously poor by all aesthetic and functional measures, but also because even those cosmetic, stylistic shortcomings are, at present, immutably baked into the machine.
As my admittedly few friends know, I am not an optimist. A majority of people still labour under the misapprehension that ChatGPT—or any LLM for that matter—is able to actually reason, which is obviously not the case. Readers of this journal will likely be aware that large language models are nothing but semi-sophisticated probability generators with a verbal veneer. “Advanced reasoning capabilities” and similar marketing claims are, diplomatically speaking, complete balderdash. Complex calculations may (!) take place, sometimes using a stack of different-yet-related algorithms, but calling such computation “reasoning” is a bold stretch and can only be explained by Californian marketing teams having a different understanding of the term than we do in Europe.
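The “probability generator” point can be made concrete with a deliberately crude sketch. What follows is a hypothetical toy bigram model, nothing like a real transformer in scale or mechanism, but it shows the underlying principle the author invokes: the system counts which words tend to follow which, then emits the likeliest continuation, with no reasoning anywhere in the loop.

```python
from collections import Counter, defaultdict

# A toy corpus; a real LLM trains on trillions of tokens,
# but the core principle is the same: count, then predict.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Return each candidate continuation with its relative frequency."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# "cat" comes out as the most probable continuation of "the" here:
# no understanding involved, only counted frequency.
```

The toy model will happily continue any prompt it has statistics for, and it has no idea what a cat is; scale that up by many orders of magnitude and wrap it in fluent syntax, and you have the “verbal veneer” described above.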
What worries me most is the readiness with which growing sections of the population outsource not only their writing, but their thinking—to a machine that demonstrably cannot think. The effects are deleterious: study after study shows that even moderate LLM usage erodes critical reasoning skills (the actual, holistic, creative, human kind). And as the empty LLM syntax spreads from private to public use, from a bunch of foolish students cheating on their exams to time-pressed editors of mainstream media publications and productions, it threatens to blight every mind in its path, including millions of readers, viewers, consumers—who increasingly perceive soulless, artless, AI-generated American corporate speak as the new normal, or even worse, as a standard to which to aspire.
It is admittedly true that people have long been falsely impressed by obscure verbal bombast—one only needs to take a look at virtually the entire discipline of philosophy post-Hegel. Goethe’s Mephistopheles already taunts:
Man easily is fooled, if only words he hears,
That with them goes material for thinking.
But this problem is massively compounded in a modern society which not only values presentation over substance, but also egalitarianism over competence—with the result that even the most mind-bogglingly obvious vacuities now go unnoticed by a large number of participants. Annoyingly, the causes of this systemic ineptitude fertilise and perpetuate one another. First, AI-generated rhetoric has made it infinitely easier for people to live beyond their intellectual means. One might appeasingly object that such cognitive Mrs. Buckets have always existed; I’d counter that they didn’t use to exist in such overwhelming numbers, as previously at least a modicum of erudition was required to even erect the façade. Second, the impostors used to be kept in check by broad swathes of the population still able to think for themselves, if with nothing else, then at least with common sense. But it is exactly these once boundary-setting swathes that now stand to be hoovered up into the dust bag of an external, unthinking, corporate-controlled algorithm. And then what?
I am by no means a Luddite; I do believe in the potential of artificial intelligence, even though the term is currently laughably misapplied. Some of the natural sciences already make much better use of novel, sophisticated computation and its underlying mathematics than any consumer LLM ever will. It is precisely this chasm between the massive potential for cultural advancement and the current end-user reality that irks me; one tiny sliver of the population ascends, while the masses are set to descend into pre-Enlightenment debilitation. Ours is already an age of intolerable conformity; it’s cliché, yes, but I shudder to imagine what awaits our societies once conformity no longer needs to be enforced externally because the sheer ability to conceive dissent has been quashed at the seedling stage. At that point, what good would any technological or scientific progress be? There’d hardly be anybody left to understand and appreciate it, anyway.
Some might argue that we’ve already reached that point: after all, when did you last check your phone and admire the fact that you were holding six to ten billion transistors in your hand? But for all my pessimism and misanthropy, I refuse to agree. It might be Sisyphean, masochistic, or stereotypical Viennese madness, but I still think it is better to try and chip away at Mount Everest with a toothpick than not at all: sometimes by working on the root cause itself, sometimes by pettily complaining to a newspaper about the erosion of cherished local idiom, sometimes by gently reminding an overzealous friend that their public LLM use will repel anyone with a functional brain. What else can one do to stop the machine from encroaching on our very thought? As long as you hold on to your own discernment and individuality, you’ll come out intact. For now.
