Jellyfin LLM/"AI" Development Policy
Posted by mmoogle 20 hours ago
Comments
Comment by hamdingers 19 hours ago
I would like to see this more. As a heavy user of LLMs I still write 100% of my own communication. Do not send me something an LLM wrote, if I wanted to read LLM outputs, I would ask an LLM.
Comment by adastra22 19 hours ago
But that is translation, not “please generate a pull request message for these changes.”
Comment by SchemaLoad 19 hours ago
Comment by pixl97 17 hours ago
Simply put, you seem to live in a different world where everyone around you has elegant diction. There are people I work with whom, if I could, I would demand take what they write and ask themselves, "would this make sense to any other human on this planet?"
There is no shortage of people being lazy with LLMs, but at the same time it is a tool with valid and useful purposes.
Comment by ChadNauseam 18 hours ago
Comment by username223 14 hours ago
Comment by newsclues 18 hours ago
Comment by adastra22 17 hours ago
Comment by Gigachad 19 hours ago
Comment by embedding-shape 19 hours ago
Using Google Translate probably means you're using a language model in the end anyways behind the scenes. Initially, the Transformer was researched and published as an improvement for machine translation, which eventually led to LLMs. Using them for translation is pretty much exactly what they excel at :)
Comment by adastra22 17 hours ago
Comment by habinero 18 hours ago
I've done this kind of thing, even if I think it's likely they speak English. (I speak zero Japanese here.) It's just polite and you never know who's going to be reading it first.
> Google翻訳を使用しました。問題が発生した場合はお詫び申し上げます。貴社のウェブサイトにコンピュータセキュリティ上の問題が見つかりました。詳細は下記をご覧ください。ありがとうございます。
> I have found a computer security issue on your website. Here are details. Thank you.
Comment by mort96 19 hours ago
Same with grammar fixes. If you don't know the language, why are you submitting grammar changes??
Comment by denkmoon 19 hours ago
Comment by MarsIronPI 19 hours ago
Comment by mort96 19 hours ago
I have read text where people who aren't very good at the language try to "fix it up" by feeding it through a chat bot. It's horrible. It's incredibly obvious that they didn't write the text, the tone is totally off, it's full of obnoxious ChatGPT-isms, etc.
Just do your best. It's fine. Don't subject your collaborators to shitty chat bot output.
Comment by habinero 18 hours ago
The times I've had to communicate IRL in a language I don't speak well, I do my best to speak slowly and enunciate and trust they'll try their best to figure it out. It's usually pretty obvious what you're asking lol. (Also a lot of people just reply with "Can I help you?" in English lol)
I've occasionally had to email sites in languages I don't speak (to tell them about malware or whatever) and I write up a message in the simplest, most basic English I can. I run that through machine translation that starts out with "This was generated by Google Translate" and include both in the email.
Just do your best to communicate intent and meaning, and don't worry about sounding like an idiot.
Comment by pessimizer 18 hours ago
If you think that every language level is always sufficient for every task (a fluency truther?), then you should agree that somebody who writes an email in a language they are not confident in, puts it through an LLM, and decides the result explains the idea they were trying to convey better than they had managed to, is always correct in that assessment. Why are you second-guessing them and indirectly criticizing their language skills?
Comment by mort96 18 hours ago
I have no idea what you're talking about with regard to being a "fluency truther", I think you're putting words into my mouth.
Comment by pixl97 17 hours ago
LLMs can do a lot of proofreading of what you've written. I ask them to check for logical contradictions in what I've stated and such. They will catch where I've forgotten something like a 'not' in one statement, so that one sentence unintentionally gives a negative response while another gives a positive one. This kind of error is quite often hard for me to pick up on, yet the LLM seems to catch it well.
Comment by epiccoleman 17 hours ago
It's actually kind of a weird "of two minds" thing. Why should I care that my writing is my own, but not my code?
The only explanation I have is that, on some level, the code is not the thing that matters. Users don't care how the code looks, they just care that the product works. Writing, on the other hand, is meant to communicate something directly from me, so it feels like there's something lost if I hand that job over to AI.
I often think of this quote from Ted Chiang's excellent story The Truth of Fact, the Truth of Feeling:
> As he practiced his writing, Jijingi came to understand what Moseby had meant: writing was not just a way to record what someone said; it could help you decide what you would say before you said it. And words were not just the pieces of speaking; they were the pieces of thinking. When you wrote them down, you could grasp your thoughts like bricks in your hands and push them into different arrangements. Writing let you look at your thoughts in a way you couldn’t if you were just talking, and having seen them, you could improve them, make them stronger and more elaborate.
But there is obviously some kind of tension in letting an LLM write code for me but not prose - because can't the same quote apply to my code?
I can't decide if there really is a difference in kind between prose and code that justifies letting the LLM write my code, or if I'm just ignoring unresolved cognitive dissonance because automating the coding part of my job is convenient.
Comment by IggleSniggle 15 hours ago
If you are using LLMs to precisely translate a set of requirements into code, I don't really see a problem with that. If you are using LLMs to generate code that "does something" and you don't really understand what you were asking for nor how to evaluate whether the code produced matched what you wanted, then I have a very big problem with that for the same reasons you outline around prose: did you actually mean to say what you eventually said?
Of course something will get lost in any translation, but that's also true of translating your intent from brain to language in the first place, so I think affordances can be made.
Comment by Kerrick 19 hours ago
Comment by IggleSniggle 15 hours ago
Comment by Kerrick 3 hours ago
> They asked you because they wanted your human judgment.
Comment by giancarlostoro 19 hours ago
Comment by willio58 15 hours ago
Then, of course, I review the output and make some manual edits here and there.
That last part is the key in both written communication and in code: you HAVE to review it and make manual edits if needed.
Comment by dawnerd 18 hours ago
Comment by pixl97 17 hours ago
Comment by wolvoleo 11 hours ago
In my opinion it really devalues the message they're sending. I immediately get this dismissive rolleyes feeling when I see it.
Comment by voidr 9 hours ago
Comment by gllmariuty 18 hours ago
like in that joke about the mechanic who demands $100 for hitting the car once with his wrench
Comment by gonzalohm 19 hours ago
I only use LLM to write text/communication because that's the part I don't like about my work
Comment by VariousPrograms 16 hours ago
When I was young, I used to think I'd be open minded to changing times and never be curmudgeonly, but I get into one "conversation" where someone responds with ChatGPT, and I am officially a curmudgeon.
Comment by cedmans 16 hours ago
Comment by solid_fuel 15 hours ago
Quite literally - if they sent me the text of the prompt I could obtain the same output, so the output is just a more verbose way of stating the prompt.
I find it really disrespectful to talk to people through an LLM like that.
Comment by al_borland 14 hours ago
If anything, AI should be used to take the long rambling email and send off the shorter distilled version.
Comment by blks 15 hours ago
Comment by heavyset_go 14 hours ago
I am capable of copying and pasting shit into an LLM, do not give me its output and don't insult me by pretending the output is your own work.
Comment by peyton 16 hours ago
Comment by giancarlostoro 19 hours ago
A lot of the time, open source PRs are very strategic pieces of code, written so as not to introduce regressions; an LLM does not necessarily know or care about that, and someone vibe coding might not know the project's expectations. I guess instead of / aside from a Code of Conduct, we need a sort of "Expectation of Code" type of document that covers the project's expectations.
Comment by embedding-shape 18 hours ago
Are you talking about some agent that is specific for writing FOSS code or something? Otherwise I don't see why we'd want all agents to act like this.
As always, it's the responsibility of the contributor to understand both the code base and the contributing process before they attempt to contribute. If they don't, they might receive push-back or have their contribution deleted, and that's pretty much expected; you're essentially spamming if you don't understand what you're trying to "help" with.
Understanding this before contributing is part of understanding how FOSS collaboration works. Some projects have very strict guidelines, others very lax ones, and it's up to you to figure out exactly what they expect from contributors.
Comment by giancarlostoro 2 hours ago
Comment by embedding-shape 2 hours ago
Comment by JaggedJax 19 hours ago
I can see how frustrating it is to wade through those, and how they distract and take time away from actually getting things fixed up.
Comment by djbon2112 15 hours ago
Comment by bjackman 19 hours ago
1. Fully human-written explanation of the issue with all the info I can add
2. As an attachment to the bug (not a PR), explicitly noted as such, an AI slop fix and a note that it makes my symptom go away.
I've been on the receiving end of one bug report in this format and I thought it was pretty helpful. Even though the AI fix was garbage, the fact that the patch made the bug go away was useful signal.
Comment by Gigachad 19 hours ago
Comment by pixl97 17 hours ago
Think of a scenario like this:
An attacker floods you with tons of AI slop to leave you overloaded and at risk of making mistakes. These entries have just enough basis in reality to avoid summary rejection.
Then the attacker submits a useful batch of code that fixes issues and injects a tricky security flaw.
If there's not a lot going on, the second part is hard to pull off. But if you ruin the signal-to-noise ratio, it becomes more likely.
Comment by Amorymeltzer 18 hours ago
>I'm of the opinion if people can tell you are using an LLM you are using it wrong.
They continued:
>It's still expected that you fully understand any patch you submit. I think if you use an LLM to help you nobody would complain or really notice, but if you blindly submit an LLM authored patch without understanding how it works people will get frustrated with you very quickly.
<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...>
Comment by Daviey 3 hours ago
That said, I don’t think a blanket "never post LLM-written text" rule is the right boundary, because it conflates two very different behaviours:
1. Posting unreviewed LLM output as if it were real investigation or understanding (bad, and I agree this should be discouraged or prohibited), versus
2. A human doing the work, validating the result, and using an LLM as a tool to produce a clear, structured summary (good, and often beneficial).
Both humans and LLMs require context to understand and move things forward. For bug investigation specifically, it is increasingly optimal to use an LLM as part of the workflow: reasoning through logs, reproduction steps, likely root cause, and then producing a concise update that captures the outcome of the investigation. I worked on an open source "AI friendly" project this morning and did exactly this.
I suspect the reporter filed the issue using an LLM, but I read it as a human and then worked with an LLM to investigate. The comment I posted is brief, technical, and adds useful context for the next person to continue the work. Most importantly, I stand behind it as accurate.
Is it really worth anyone’s time for me to rewrite that comment purely to make it sound more human?
So I do agree with Jellyfin's goal (no AI spam, no unverifiable content, no extra burden on maintainers). I just don’t think "LLM involvement" is the right line to draw. The right line is accountability and verification.
Comment by transcriptase 19 hours ago
Comment by estimator7292 19 hours ago
Comment by Cyphase 18 hours ago
That said, I understand calling it out specifically. I like how they wrote this.
Related:
> https://news.ycombinator.com/item?id=46313297
> https://simonwillison.net/2025/Dec/18/code-proven-to-work/
> Your job is to deliver code you have proven to work
Comment by darkwater 19 hours ago
Comment by anavid7 19 hours ago
love the "AI" in quotes
Comment by wmf 19 hours ago
Comment by bigstrat2003 18 hours ago
Comment by Grimblewald 17 hours ago
That said, LLMs have a single specific inductive bias: translation. Not just between languages, but between ontologies themselves. Whether it's 'Idea -> Python' or 'Intent -> Prose,' the model is performing a cross-modal mapping of conceptual structures. This does require a form of intelligence, of reasoning, just in a format suited to a world so alien to our own that the two are mutually unintelligible, even if the act of charting ontologies is shared between them.
This is why I think we're seeing diminishing returns: we're trying to 'scale' our way into AGI using a map-maker/navigation system. It's like asking Google Maps to make you a grocery list, rather than focusing on its natural purpose of telling you where you can find groceries. You can make a map so detailed it includes every atom, but the map will never have the agency to walk across the room. We are seeing asymptotic gains because each extra step toward 'behavioral' AGI is exponentially more expensive when you're faking reasoning through high-dimensional translation.
Comment by monkaiju 15 hours ago
Comment by doug_durham 18 hours ago
Comment by djbon2112 15 hours ago
Comment by ChristianJacobs 19 hours ago
I know there will probably be a whole host of people from non-English-speaking countries who will complain that they are only using AI to translate because English is not their first (or maybe even second) language. To those I will just say: I would much rather read your non-native English, knowing you put thought and care into what you wrote, than read an AI's (poor) interpretation of what you hoped to convey.
Comment by nabbed 19 hours ago
Comment by ChristianJacobs 19 hours ago
Comment by fragmede 19 hours ago
Comment by bjackman 19 hours ago
(But also, for a majority of people, old-fashioned Google Translate works great.)
(Edit: it's actually an explicit carveout.)
Comment by adastra22 19 hours ago
Comment by soundworlds 17 hours ago
GenAI can be incredibly helpful for speeding up the learning process, but the moment you start offloading comprehension, it starts eroding trust structures.
Comment by Sytten 13 hours ago
We do that internally and I can't overstate how much better the output is, even with small prompts.
IMO, policies like "don't post abusive comments" are better placed in that file; you'll never see such a comment again, instead of fighting with a dozen bad contributions.
Comment by h4kunamata 19 hours ago
One more reason to support the project!!
Comment by rickz0rz 15 hours ago
Comment by zenoprax 15 hours ago
"Commit 1: refactor the $THING to enable $CAPABILITY"
"Commit 2: redirect $THING2 to communicate with $THING1"
"Commit 3: add error handling for $EdgeCase" --- long explanation in commit body
A single commit with no commentary just offloads the work to the maintainers. It's their project so their rules.
Comment by NoGravitas 2 hours ago
Comment by sbinnee 16 hours ago
Comment by patchorang 18 hours ago
Sort of related: Plex doesn't have a desktop music app, and the PlexAmp iOS app is good but meh. So I spent the weekend vibe coding my own Plex music apps (macOS and iOS), and I have been absolutely blown away by what I was able to make. I'm sure the code quality is terrible, and I'm not sure a human would be able to jump in there and do anything, but they are already the apps I'm using day-to-day for music.
Comment by lifetimerubyist 19 hours ago
Should just be an instant perma-ban (along with closure, obviously).
Comment by fn-mote 15 hours ago
This is the internet. Real offenders will just submit the next PR with a new alt account.
Comment by uhfraid 11 hours ago
Then it escalates to a platform issue, so you just report them in that case. GitHub enforcement staff handles it:
“- Creating alternative accounts specifically to evade moderation action taken by GitHub staff or users”
https://docs.github.com/en/site-policy/acceptable-use-polici...
Comment by monkaiju 14 hours ago
Comment by Hamuko 19 hours ago
Comment by SchemaLoad 19 hours ago
Comment by lifetimerubyist 19 hours ago
Comment by MarsIronPI 19 hours ago
Comment by antirez 19 hours ago
Comment by darkwater 19 hours ago
1) we accept good quality LLM code
2) we DO NOT accept LLM generated human interaction, including PR explanation
3) your PR description must explain the change well enough
Which, summed together, is far more than "no shitty code". It's rather: no shitty code, and code that YOU understand.
Comment by anthonypasq 19 hours ago
There is no such thing as LLM code. Code is code; the same standards have always applied no matter who or what wrote it. If you paid an Indian guy to type out the PR for you 10 years ago, but it was submitted under your name, it's still your responsibility.
Comment by mort96 19 hours ago
The quality of "does the submitter understand the code" is not reflected in the text of the diff itself, yet is extremely important for good contributions.
Comment by anthonypasq 2 hours ago
This was a problem before LLMs.
Comment by heavyset_go 7 hours ago
When it comes to IP, LLM output is not copyrightable unless the output is significantly modified by a human with their own creativity after it is generated.
Comment by darkwater 9 hours ago
Comment by actuallyalys 17 hours ago
Comment by micromacrofoot 19 hours ago
Comment by FanaHOVA 19 hours ago
"LLM Code Contributions to Official Projects" would read exactly the same if it just said "Code Contributions to Official Projects": Write concise PRs, test your code, explain your changes and handle review feedback. None of this is different whether the code is written manually or with an LLM. Just looks like a long virtue signaling post.
Comment by getmoheb 19 hours ago
The point, and the problem, is volume. Doing it manually has always imposed a de facto volume limit which LLMs have effectively removed. Which I understand to be the problem these types of posts and policies are designed to address.
Comment by yrds96 16 hours ago
Comment by mort96 19 hours ago
Comment by djbon2112 14 hours ago