Aja on 11/8/2025 at 15:33
I can't find any of these AI-generated South Park episodes online, but I did find someone who made an analysis of them, and, well, they look like absolute dogshit.
[video=youtube;AkQAQ6SoKMs]https://www.youtube.com/watch?v=AkQAQ6SoKMs&t=784s[/video]
Starker on 11/8/2025 at 23:46
There's an obvious reason why you can't have chatbots create stories -- a text generator can only ever copy and imitate existing stories, but it cannot create a story like a human does. Stories are texts, but they are not just texts. They reference and draw from the human experience and understanding of the world. A chatbot has neither the experience nor the understanding.
Azaran on 12/8/2025 at 00:37
Quote Posted by Starker
There's an obvious reason why you can't have chatbots create stories -- a text generator can only ever copy and imitate existing stories, but it cannot create a story like a human does. Stories are texts, but they are not just texts. They reference and draw from the human experience and understanding of the world. A chatbot has neither the experience nor the understanding.
I don't know how much you've played around with chatbots lately, but the level of apparent creativity they have now is insane.
This is a very short story I just generated with Gemini, asking it to create a story about a Babylonian priestess who finds herself in medieval times before the Spanish Inquisition:
Quote:
The bronze doors of Etemenanki, polished by the hands of a thousand generations, had scarcely closed on the scent of fresh myrrh when a blinding, cruel light tore at Shala's senses. One moment, the rhythmic chant to Marduk, the dragon-slayer, filled the air; the next, a suffocating silence pressed in, the damp cold of an unfamiliar world clinging to her flesh. She stood not on the ancient clay bricks of Babylon, but on a floor of rough-hewn stone, her ceremonial robes a splash of defiant color against the solemn, grey walls of a Christian tribunal.
Before her sat the judges, dark figures cloaked in the grim fabric of the Inquisition. Their faces, carved from judgment and severe piety, held no hint of the awe that should greet a prophet of the Great God. They spoke a guttural tongue, a harsh cascade of questions that she could not comprehend. Yet their meaning was as clear as a solar eclipse: Who are you? What god do you serve?
With the same reverence she would offer before the ziggurat, Shala threw back her head. "I am Shala, first among the servitors of Marduk, the God of Gods!" Her voice, ringing with the pride of a faith a millennium older than theirs, echoed through the chamber. She spoke of how he had shattered the chaos-born Tiamat, of how he had ordered the cosmos from her primordial corpse, and how his divine law sustained the very sun in the sky. She described his terrible majesty, his forty names of power, his wisdom that far outstripped the petty fables of their bearded carpenter.
A wave of contemptuous revulsion passed over the cowled figures. They did not hear a grand epic; they heard the hiss of a demon's lies. The chief inquisitor, a gaunt man whose eyes held the cold fire of fanatical purpose, raised a hand. He did not ask for a rebuttal, nor did he seek to reason. He merely spoke a single, final word: "Heretic." Shala's world, vast and eternal, was not merely doubted here—it was a blasphemy. The unyielding conviction of her god-sent fate was met by the cold, unyielding wall of their own. She was alone, her god unheard, her truth a falsehood, as the final, irrevocable sentence was passed.
Starker on 12/8/2025 at 09:34
I don't find AI slop to be particularly impressive. In the text above, the chatbot clearly demonstrates a lack of understanding of what it is describing. It uses inappropriate words like "a thousand generations" or "guttural" because it doesn't know what they actually mean: it doesn't actually know the history of Etemenanki or what Spanish sounds like. And likewise it doesn't know about humans, how they would behave, or why, so it can only generate a bunch of cliches based on the prompt.
Btw, the Spanish Inquisition was not medieval. It started some time during the Renaissance and lasted all through the Enlightenment well into the Industrial Age.
heywood on 12/8/2025 at 16:01
Quote Posted by Starker
There's an obvious reason why you can't have chatbots create stories -- a text generator can only ever copy and imitate existing stories, but it cannot create a story like a human does.
Yes it can, and it does. Your critiques would be equally valid if the content was created by a human.
Cipheron on 13/8/2025 at 02:14
It's simplistic to say LLMs "copy" existing texts, because they plainly don't.
As an example of a text generator, you could build a Markov chain over the words in every TTLG post ever made, with, say, a two-word lookback. That's not "copying" any post; it's generating a stream of language that locally resembles stuff that could have been written. On the macro level, however, the likelihood of it producing things that look like real posts is basically zero. Since it was trained only on the local structure of the text, it has no way of copying the high-level structure of any post, and won't make things that resemble posts.
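A minimal sketch of that kind of two-word-lookback Markov chain in Python (the function names and corpus are made up for illustration, not from any real post archive):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word context to the list of words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        chain[context].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Walk the chain: each next word depends only on the last `order` words."""
    rng = random.Random(seed)
    context = rng.choice(list(chain))  # random starting context
    out = list(context)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(context):]))
        if not followers:  # dead end: this context was never followed by anything
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Because the chain only ever records which word followed which two-word window, the output is locally plausible but has no global structure at all, which is exactly the point above.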
Because an LLM's focus while writing is on the immediate locus of individual symbols/tokens, it doesn't know what the end of a sentence will be when it starts writing the sentence, let alone how to structure things into chapters, volumes, or a saga. You can tell it to start writing and it'll generate a long stream of stuff that appears text-like locally, but at the broader scale there's little resemblance to anything ANY human would have produced, especially for longer texts.
LLMs are a bit more advanced than Markov chains, however. As an example of how they can make things that are clearly not a copy of any existing work, you could take a big list of movie tropes, pick a few at random, feed the choices into an LLM, and tell it to write a movie synopsis from them. It'll somehow make a plausible-sounding synopsis no matter how incongruous the elements you give it are. Since those elements have never come together in a real movie, there's no way it copied some existing work to make that, and it's no more a "copy" than when a human is asked to write a new work and chooses a genre as the basis.
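A sketch of the trope-picker half of that experiment in Python; the trope list here is an invented toy example, and actually sending the prompt to a chatbot is left out, since that depends on whichever service's API you use:

```python
import random

# Toy trope list for illustration; a real one would have thousands of entries.
TROPES = [
    "amnesiac hero", "heist gone wrong", "sentient spaceship",
    "enemies to allies", "small-town secret", "time loop",
    "reluctant mentor", "courtroom finale",
]

def trope_prompt(n=3, seed=None):
    """Draw n tropes at random and wrap them in a synopsis-writing prompt."""
    picks = random.Random(seed).sample(TROPES, n)
    return ("Write a one-paragraph movie synopsis that combines these tropes: "
            + ", ".join(picks) + ".")
```

Any three tropes drawn this way are very unlikely to co-occur in a real movie, so whatever synopsis comes back can't be a straight copy of one.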
Cipheron on 13/8/2025 at 02:44
As an example, I wrote a prompt to get a story output. The context I gave it was:
Quote:
work on creating an outline for a spy thriller, but before the story started the hero had all limbs amputated. he is stuck in a hospital room but must somehow foil an international terrorist plot. Step through creating this story from basic parts, but at each step think up 3 options for each needed story component that are likely. Then - throw them away and think up the least likely alternative to those normal components. break this down until you get enough things that could be put together.
So that's the basic prompt I gave it. What did it come up with?
Quote:
Synopsis - “Cold Letters”
Eliot Wren was once a quiet, unassuming Scrabble world champion, known more for his improbable vocabulary than his social skills. What no one knew—except a few in the intelligence community—was that his tournament travels doubled as cover for delivering coded messages between field operatives.
All that ended after a disastrous promotional stunt for a luxury space tourism company. A malfunctioning cryo-pod, meant to “simulate astronaut hibernation,” catastrophically crushed and froze his limbs. Eliot awoke months later in a bizarre setting: a decommissioned Antarctic research station, repurposed as a private, one-patient rehabilitation clinic to serve as a tax write-off for a shadowy conglomerate.
Confined to a bed and awaiting experimental prosthetic testing, Eliot's days were a monotony of ice storms, isolation, and therapy sessions. That changed when a chatty therapy parrot—brought in by an overly cheerful nurse—began reciting strange, fragmented phrases at night. The bird's mimicry, Eliot realized, was no coincidence: it was repeating parts of a conversation it had overheard in the nurse's apartment, where a visiting motivational speaker was plotting an elaborate act of global sabotage.
The plan? Use subliminal hypnotic triggers hidden inside livestreamed self-help seminars to activate hundreds of sleeper agents worldwide—culminating in a staged “scientific discovery” convincing the world that the Moon was falling out of orbit, driving global panic and market collapse.
With no mobility and no one to trust, Eliot turns to his only possible ally: an experimental AI lodged in the station's vending machine. The AI agrees to help—on the absurd condition that Eliot provide exact change for every piece of data.
Eliot's break comes when he learns the station's prototype prosthetic limbs are being tested remotely in a televised celebrity cooking competition in Buenos Aires. Through a shaky brain-internet interface, he hijacks the limbs and uses them to transmit coded messages and cause minor chaos, disrupting parts of the plot.
In the final act, Eliot discovers the villain is in a live global trivia broadcast, answering questions to maintain his perfect “genius” public image. Eliot manages to patch into the feed, entering as a surprise challenger. In front of millions, he demolishes the villain in a battle of obscure word origins, rattling him so badly that he misses his broadcast cue—derailing the hypnotic trigger.
As the world scrambles to understand why it was briefly terrified of a falling Moon, Eliot lies back in his Antarctic bed, a parrot perched nearby, vending machine humming in the corner, already thinking about the next word puzzle.
Which movie is this a copy of? Sure, you can say it has structural elements similar to other movies in the same genres, including spy thrillers and spoofs, or that it's derivative. But so is basically everything.
To get something better out of this, what works well is to reprompt it a few times, alternately asking it to make the plot "weirder" and "more normal". You don't really need to specify what that means, and the less context you give the LLM the better, but by iterating on the output you can get stuff that's usable but clearly original.
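That reprompting loop could be sketched like this in Python; `ask` is a stand-in for whatever function sends a prompt to your chatbot of choice and returns its reply, since the real call depends on the service:

```python
def iterate_story(ask, idea, rounds=4):
    """Alternate 'weirder' / 'more normal' nudges, feeding each draft back in.

    `ask` is any callable taking a prompt string and returning the chatbot's
    reply; it is a placeholder here, not a real API.
    """
    draft = ask(f"Write a short story outline: {idea}")
    for i in range(rounds):
        nudge = "Make it weirder." if i % 2 == 0 else "Make it more normal."
        draft = ask(f"{nudge}\n\n{draft}")
    return draft
```

Each round hands the previous draft back with only the one-word nudge, which matches the point about giving the LLM as little extra context as possible.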
Starker on 13/8/2025 at 03:23
Quote Posted by heywood
Yes it can, and it does. Your critiques would be equally valid if the content was created by a human.
I would never claim that a human cannot create a story like a human does. A chatbot does not and will not have the lived experience of a human to draw from. It doesn't actually understand the world or experience it like a human does. This is not a critique you can make of a human.
Starker on 13/8/2025 at 03:27
Quote Posted by Cipheron
It's simplistic to say LLMs "copy" existing texts, because they plainly don't.
And that's not what I'm claiming. I'm saying that the chatbots are limited by the data they are trained on. They copy and imitate existing stories, not texts.
To perhaps make the difference a bit clearer, a story is a text, but not every text is a story. Stories are narratives that contain much more than a simple text does -- they have resonance (intellectual and emotional connections to a reader), thematic connections, etc. A lot of these things are intimately connected to the way humans experience the world.
In short, we can have a chatbot imitate Kafka, but without actually experiencing the life of Kafka, a chatbot cannot be a Kafka.
Azaran on 13/8/2025 at 03:57
Quote Posted by Starker
In short, we can have a chatbot imitate Kafka, but without actually experiencing the life of Kafka, a chatbot cannot be a Kafka.
But a chatbot has enough in-depth knowledge of Kafka and his style that if you ask it to write a Kafkaesque essay on a topic Kafka never addressed, it will do so in the way Kafka most likely would have. LLMs can parse info and flesh things out probably better than any human.
Same with fictional characters with specific personalities. I've prompted Gemini to give its opinion on several topics from the perspective of Garrett, and it's exactly how I imagined our beloved Thief would address those topics, often even better