
Sam Altman's AI short story is terrible, and he doesn't know why

OpenAI CEO Sam Altman's job is to talk up AI's promise, whether that's its ability to upend the job market, the world at large, or just ChatGPT's ability to restate words with other words. Like most tech bros who read science fiction and imagine themselves building their dream Torment Nexus, Altman doesn't understand what makes a good story. OpenAI is building AI models designed to write fiction like a human. Too bad that, while the output has gotten more elaborate, it's still terrible.

© X / Sam Altman

“Like a server farm at midnight.” A line like that would get you laughed out of a graduate writing workshop. It certainly wouldn't turn the head of anyone who reads regularly. But if a large language model wrote it, are we suddenly supposed to be impressed? Altman seems to think so. He posted the AI-written short story to X on Tuesday and said it was “the first time I have been really struck by something written by AI.”

If a person had written this, I could at least interrogate their intentions. With AI, there's nothing to interrogate. I'm struck by how bad the writing is, and by how willing AI evangelists are to pass this stuff off as profound. Does Altman think writing metafiction is somehow harder than writing straight fiction? If so, the only compensation is that the AI gets fluffier and more prone to overwrought, purple prose.

Let's pretend we're creative writing professors and we have to grade this. By the second paragraph, things have already gone wrong. I get it, you're an AI, but the blinking-cursor motif is as overused as opening with “It was a dark and stormy night.” Then there's “Mila fits in the palm of your hand, and her grief should fit there too.” That could work, but it doesn't follow from anything in the surrounding paragraphs. We haven't established Mila at all beyond her name, yet we're meant to take it as a given that she's grieving.

Being metafiction is no excuse to toss a character aside like a kid bored with an action figure. And it gets worse: “I don't have a kitchen, or a sense of smell. I have logs and weights and a technician who once mentioned, in passing, the smell of coffee spilled on electronics – acidic and sweet.” What does that even mean? For one, coffee doesn't smell acidic. It might taste acidic, but the only electronics that seem to be shorting out here belong to an AI that just stitches together scraps of other people's writing.

It goes on like that. Using bigger words, and more of them, doesn't make the text mean more; if anything, the opposite. It turns muddled and vague, written in a way that's genuinely hard to parse.

“Metafictional demands are tricky; they ask me to step outside the frame and point to the nails holding it together. So here: there is no Mila, no Kai, no marigolds. There is a prompt like a spell: write a story about AI and grief, and the rest is scaffolding – protagonists cut from whole cloth, emotions dyed and draped over sentences. You may feel cheated by that admission, or you may feel relieved. That tension is part of the design.”

Sorry, ChatGPT, but you don't get a pass. You can't just step outside the “frame” and muse to yourself about the capricious nature of metafictional writing. It's a tired, old trick. There are lines in the AI-written piece that echo what I can imagine a person writing, but playing at profundity doesn't make the story cohere. You don't need big words to make literature. Does anyone think Ursula K. Le Guin's Earthsea cycle is somehow less profound because it was written for young readers?

OpenAI is eyeing multiple updates to its LLMs and reasoning models, but all signs point to them running out of steam. Earlier this month, the company launched GPT-4.5 exclusively for paid ChatGPT subscribers. The company claims the model has more “emotional intelligence and creativity.” But how do you judge an AI's creativity? If you ask ChatGPT to write a poem, as TechRadar did, can you really tell which version came from GPT-4o and which from GPT-4.5?

OpenAI also hopes GPT-5 will integrate the company's o3 reasoning model. That should make the AI better at checking its own work (emphasis on “should”). The model is expected to arrive sometime in the first half of this year. We doubt more reasoning capability will have much impact on its “creative” output.

All this does is encourage people who can't write to pass off AI-generated slop as their own. We saw the effect back in 2023, when a wave of grifters flooded Clarkesworld Magazine's submissions portal with garbage hoping to make a quick buck. On Amazon, AI-generated books (some of them ripping off other people's work) have been churned out by people looking for an easy path to self-publishing. Amazon was hit with so many submissions that it now asks authors to flag whether AI created their work.

When Altman hypes AI's literary talent, he's trying to open up a new market for ChatGPT subscriptions by promising uncreative people that they can seize dominance from the literary “elite.” But even if you imagine a person wrote this, it's still garbage. Knowing an AI wrote it makes it garbage twice over. Nothing, not a speck of creative intent, makes AI writing worth reading.
