Winners have been announced!
TL;DR: Writing Contest for AI Fables
Deadline: Sept 1/Oct 1
Prizes: $1500/1000/500, consideration for writing retreat
Word length: <6,000
How: Google Doc in replies
Purpose: Help shape the future by helping people understand relevant issues.
Hey everyone, I’d like to announce a writing competition to follow up on this post about AI Fables!
Like Bard, I write fiction and think it has a lot of power, not just to expand our imaginations beyond what we believe is possible, but also to educate and inspire. For generations, the idea of what Artificial General Intelligence could or would look like has been shaped by fiction, for better and for worse, and that will likely continue even as what once seemed purely speculative becomes more and more real in our everyday lives.
But there’s still time for good fiction to help shape the future. On this particular topic, with the world changing so quickly, I want to help fill the empty spaces waiting for stories that can help people grapple with the relevant issues, and I’d like to encourage those stories to be as good as possible: both engaging and well-informed.
To that end, I’m calling for submissions of short stories or story outlines that involve one or more of the “nuts and bolts” covered in the above post, as well as some of my own tweaks:
- Basics of AI
  - Neural networks are black boxes (though interpretability might help us to see inside).
- AI "Psychology"
  - AI systems are alien in how they think. Even AGI are unlikely to think like humans or value things we'd take for granted they would.
  - Orthogonality and instrumental convergence might provide insight into likely AI behaviour.
  - AGI systems might be agents, in some relatively natural sense. They might also simulate agents, even if they are not agents.
- Potential dangers from AI
  - Outer misalignment is a potential danger, but in the context of neural networks so too is inner misalignment (related: reward misspecification and goal misgeneralisation).
  - Deceptive alignment might lead to worries about a treacherous turn.
  - The possibility of recursive improvement might influence views about takeoff speed (which might influence views about safety).
- Broader Context of Potential Risks
  - Different challenges might arise in the case of a singleton, when compared with multipolar scenarios.
  - Arms races can lead to outcomes that no-one wants.
  - AI rights could be a real thing, but incorrect attribution of rights to non-sapient AI could itself pose a risk by restricting society’s ability to ensure safety.
- Psychology of Existential Risk
  - Characters whose perspectives and philosophies show what it's like to take X-risks seriously without being overwhelmed by existential dread.
  - Stories showing the social or cultural shifts that might be necessary to improve coordination and will to face X-risks.
...or are otherwise related in some way to unaligned AI or AGI risk, such that readers would be expected to better understand some aspect of the potential worlds we might end up in. Black Mirror is a good example of the “modern Aesop’s Fables or Grimm Fairytales” style of commentary-through-storytelling, but I’m particularly interested in stories that don’t moralize at readers, but rather help people understand and emotionally process issues related to AI.
Though unrelated to AI, Truer Love's Kiss by Eliezer Yudkowsky and The Cambist and Lord Iron by Daniel Abraham are good examples of "modern fables" that I'd like to see more of. The setting doesn't matter, so long as it reasonably clearly teaches something related to the unique challenges or opportunities of creating safe artificial intelligence.
At least the top three stories will receive rewards, of no less than $1,500, $1,000, and $500 respectively (more donations may still come in to increase the prize pool); I’m planning to distribute prizes after judging is complete, sometime in October.
In addition, some authors may be invited to join a writing retreat in Oxford, UK, to help refine their stories and discuss publication options. We'd like not just to encourage more stories like these to exist, but also to help improve and spread them.
Stories must be submitted by September 1st to be considered for the retreat, and practically speaking, the sooner they’re submitted the better the odds of being judged in time for it. Stories must be submitted before October 1st for monetary prize consideration.
This contest is meant to be relatively short in duration, as our preference is for relatively short stories of ~6,000 words or less. Longer stories can be submitted, but anything past the first 6k words will not be included in judgement.
As a final note, we're not expecting completely polished, ready-to-publish stories; things like spelling and grammar mistakes will not be penalized! What we want is for the ideas to be conveyed well and in an engaging way. Proper editing, both of style and content, can be done later. Just show us the seed of a good story, and maybe we can help it grow!
To submit your fable, please link to a Google Doc in a reply to this post. If you wish to remain anonymous or retain First Publisher rights for your own attempts at finding a traditional publisher, you can send it to me in a DM.
You can also link to stories you believe fit the criteria in order to nominate someone else; if that’s the case, please indicate that the story is not your own.
Happy writing, and feel free to ask any clarifying questions either by reply or DMs!
Cool initiative! I'm worried about a failure mode where these just stay in the EA blogosphere and don't reach the target audiences we'd most like to engage with these ideas (either because they're written in a language and style that isn't well-received elsewhere, or because no active effort is made to share them with people who may be receptive).
Do you share this concern, and if so, do you have a sense of how to mitigate it?
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story here) and of the Future of Life Institute's AI Worldbuilding contest (here), I agree that the default outcome seems to be that even the winning stories don't get a huge amount of circulation. The real impact would come from writing the one story that actually goes viral beyond the EA community. But that seems pretty hard to do; it might be better to pick something that has already gone viral (an existing story like one of the Yudkowsky essays, or something like a very popular tweet expanded into a story) and try to improve its presentation: polishing it, adding illustrations, or porting it to other mediums like video or audio.
That is why I am currently spending most of my EA effort helping out RationalAnimations, which sometimes writes original stuff but often adapts essays & topics that have preexisting traction within EA. (Suggestions welcome for things we might consider adapting!)
Could also be a cool mini-project of somebody's, to go through the archive of existing rationalist/EA stories, and try and spruce them up with midjourney-style AI artwork; you might even be able to create some passable, relatively low-effort youtube videos just by doing a dramatic reading of the story and matching it up with panning imagery of midjourney / stock art?
On the other hand, writing stories is fun, and a $3000 prize pool is not too much to spend in the hopes of maybe generating the next viral EA story! I guess my concrete advice would be to put more emphasis on starting from a seed of something that's already shown some viral potential (like a popular tweet making some point about AI safety, or a fanfic-style spinoff of a well-known story that is tweaked to contain an AI-relevant lesson, or etc).
Absolutely. Part of the hope is that, if we can gather a good collection of stories, we can find ways to promote and publish some or all of them, whether through traditional publishing or audiobooks or youtube animated stories.
I would hope, given it's a "fable" writing contest, that almost by default these stories would be completely accessible to most of the general public, like the Yudkowsky classic "Sorting Pebbles Into Correct Heaps", or likely even less nerdy than that. But OP can clarify!
I hadn't read this one first, but honestly I'm not sure if it counts as "completely accessible to the general public". I'd expect an accessible fable to be less rife with assumed concepts about the key topics than this one is (like, don't say "utility maximizer"; rather, present a situation which naturally makes clear what a utility maximizer is).
Good point!
Here is my first entry. It's a story mostly about orthogonality and the difficulties of alignment, though it relies on something you could mostly do already with present tech. It uses the template of a very familiar fairy tale trope as a starting point:
Mirror, Mirror
I have one other entry planned, but haven't written it yet. Will post it if I manage it (though I'm not seeing a lot of participation yet - I wonder if I'm just early?).
I like it, nice one!
Thanks!
Thanks for your submission! There have been a few others already, but so far they've all been through DMs.
Oh, I didn't remember that was an option - do you mind if I do that too? Since it's a Google Docs link I'd appreciate the added privacy.
Up to you!
Hi, is there a specific date when you wanted to announce the results of the contest?
Hoping to announce by the end of the month :) It'll be its own top level post.
Any update on the timeline? Thank you!
Just finishing up the post now, sorry for the delay! I've been gathering and double-checking permissions :)
Here's my (short) story:
The Black Knight's Gambit
It's not about any of the proposed issues; instead it's about the dangers of using AI to control other AI. Even if we can't think of a way they could cooperate, they might, so in trying to solve that you'll end up with an infinite regress problem.
Ah yes, the good old Godzilla Strategies.
I have taken down my entry, though I don't "retract" it.
I just posted an explanation of why I think the scenario in my fable is even more intractable than it appears: De Dicto and De Se Reference Matters for Alignment.
Here's the first 1200 words of a short story about death, taxes, counterfeiting, the Pointers Problem (the most vexing aspect of alignment IMO), outcome pumps, and (a misunderstanding of) the Yoneda Lemma. Apologies to Scott Alexander. I plan to continue working on & editing this story, but wanted to submit something by Sep 1.
https://docs.google.com/document/d/1c_HgOS1UfiYCnEtc7Ujdw7-uAzv73d5IZjefN4OqVtY/edit?usp=sharing
My entries are two short stories set in a world where AI helps a global government effectively run the world. These stories are about dissent, diversity and family.
1. Lesser Equals
2. My husband says AGI is like a cat - co-written with @rileyharris
I really liked the writing style of the second one; not sure I get the moral, but I'd love to read more from you two!
Daystar - this is a great idea, and I hope you get lots of strong submissions. People think and decide based on stories they hear, or stories they tell themselves. Good AI fiction can be helpful in guiding people to understand AI risks.
Two concerns though.
One concerns the marginal benefits of new writing, versus the benefits of assembling the best existing short stories about AI futures and risks. Ever since 'Frankenstein' (1818) by Mary Shelley, we've had over 200 years of science fiction writing about creating nonhuman/artificial intelligences, and that includes a lot of excellent very short stories. Would it be worth identifying and collecting some of the best existing writing on AI, and publishing that? I guess the counterargument would be that we've had a lot of rapid progress on AI safety thinking in the last 10-20 years, which has not been incorporated in much science fiction yet, so newer writing might be more relevant.
The other concern is how to maximize impact once you select some great writing on this topic. I agree with some of the other comments that turning the short stories into YouTube videos for a popular, established channel (e.g. Rational Animations, or Kurzgesagt, or whatever) could be very compelling. It's much easier for videos to get a lot of attention than for short stories to get a lot of attention. (If I tweet about a video, I can be confident that at least a few hundred people will watch it, and retweet/quote-tweet about it; if I tweet about a short story, it's likely that less than 10 people will read it, and very few will retweet about it). So this contest could be framed as a sort of 'screenwriting pitch' in some sense, rather than as thinking of the stories themselves as the key deliverables.
In any case, it's well worth doing!
Yes, we've definitely talked about collecting fiction that already exists and is still relevant to the modern understanding of the issues involved. That's why I'm happy for people to submit existing stories as well, and one thing we've discussed is possibly reaching out to authors of such stories or whoever holds their rights to interweave them with new stories if we try to publish in a traditional anthology.
Same with turning fables into videos; we're pretty confident that if we get a few good stories out of this, turning them into animations or short audiobooks will be worth doing :)
My entry: The King and the Golem.
This seems like a great opportunity. It is now live on the EA Opportunity Board!
Thanks!
What is the timeline for announcing the results of this competition?
I'm finishing writing the post now :)
My submission:
The sparrows and the iron beaks.
I would like to enter my short story: Simon Says
It is a semi-satirical story about the dangers of AI systems being able to imitate humans too well, and the implications this might have for human relationships. It also incorporates mental health and LGBTQ+ themes, but the main theme is behavioural misalignment in large language models (LLMs) and their potential psychological power over humans. I hope it may be of interest!
Not sure if prewritten material counts, but I'd like to enter my Trial of the Automaton if it qualifies. I can transfer it to Google docs if need be.
Are there any rules on multiple submissions?
Multiple submissions are fine :)
I understand why everyone is so focused on AI x-risk, but I feel it's so laser-focused that it's like driving into the future staring into the rearview mirror. Where are the prizes that encourage fiction that can better imagine what sort of future we want with AI, and how to aim at it, rather than what we don't?
I'm trying to do that here (though very much not a short story), but I really want more people to put forth their perspectives on what a good future with AGI/ASI looks like. Is the only way to encourage this type of constructive fiction to fund it yourself?
I don't think these visions are mutually exclusive: there are ways to portray positive visions of the future while still teaching something real about the way AI can actually work or the risks that were solved along the way.
Has the deadline been extended? I see "Sept 1/Oct 1" but no clarification.
Ah, yeah, the September 1 deadline was for the writing retreat; the overall deadline is October 1st.
My entry is Offspring.
While I'm at it, might as well submit my other sort-of-AI-related, (very) short story. It's about a world where eliminativism is true, but a (rather cartoonish) rogue AI doesn't believe it.
https://unoriginal.blog/cremate-yourself-immediately/
My second entry to the contest:
The tale of the Lion and the Boy
This is more of a general allegory of trying to manage a superintelligence to your advantage. In keeping with the original literary tradition of fables, I've used sentient animals for this one.
My entry is called Project Apep. It's set in a world where alignment is difficult, but a series of high-profile incidents leads to extremely secure and cautious development of AI. It tugs at the tension between how AI could make the future wonderful or terrible.
I know it better for you: https://docs.google.com/document/d/1QghW_DxGLltdAbVTYLtraraJMDTriZYL1wT54IwkSdI
The Lion, The Dove, The Monkey, and the Bees
"Peppy the Cyber-Skunk and the Wood-Wide Web", a short story of about 1200 words.
https://docs.google.com/document/d/11Xbsxx1lPx6BzXE1ceyrSjfZZlXL4K7t_BJwyGZ8kcQ/edit?usp=sharing
My submission is entitled Risk your Station.
Does "submitted by September 1st" mean "submitted before Sept 1" or "submitted by the end of Sept 1"?
By the end of Sep 1st.
Thank you. I should have checked this 7 hours ago! But probably I wouldn't have finished if I had.
Here's my entry: Retrocausal missives from the deep past, vol XII: the menace of the OI Boddhisattva mind.
For those who liked the story: after the following additional short missive,
I can now present The Bodhisattva of Friendliness as the Green-Haired Hacker Girl, vol I, an aesthetic exploration, set in the same universe.
This is just a quick proof of concept. I hope to soon have something a bit more polished.
ETA: just realized I linked to a huge PDF that triggers a warning it can't be checked for viruses. So for those who understandably might be reluctant to download it and see what it contains, this FB post provides a sample: https://www.facebook.com/aatu.koskensilta/posts/pfbid02zLuqsiU3HJW2ix9VPcpVwsSLgWhXUcDuv6EhMSnZFYZLYnhimrdHRsK9mZXYdQBql
(It's a sort of picture book for sentient beings of all ages, and also a transcript of executing a sort of search strategy in the space of possible aesthetics with the aid of ChatGPT+ and DALL-E 3, with a very specific (and hopefully transparent) purpose in mind.)