SDL bans AI-written commits
Posted by davikr 1 day ago
Comments
Comment by manoDev 1 day ago
Comment by charlie90 23 hours ago
Comment by sph 18 hours ago
Not to be condescending, but everyone goes through this phase, then they grow up, it’s literally what separates the amateur from the master.
Comment by djhn 15 hours ago
Comment by sph 12 hours ago
If you want to go spiritual, there's karma yoga from the Bhagavad Gita: "You have a right to perform your prescribed duties, but you are not entitled to the fruits of your actions. Never consider yourself to be the cause of the results of your activities, nor be attached to inaction."
Did Leonardo work for fame and monies, or simply because he found massive enjoyment in it? What about Hemingway, or Einstein?
This might all sound like new age bullshit, but it's taken me literally 15 years of my life to understand this and grow out of chronic procrastination and dissatisfaction.
Comment by em-bee 6 hours ago
i want all my software verified by a human, even an inexperienced human is more reliable than AI at this point. (this may change, but it hasn't yet)
Comment by krapp 22 hours ago
Comment by hackable_sand 22 hours ago
Comment by giancarlostoro 1 day ago
Comment by dim13 1 day ago
Comment by sph 18 hours ago
Comment by a34729t 18 hours ago
Comment by registeredcorn 1 day ago
Example:
* Do I care if an LLM was used to determine the volume of my doorbell? Not particularly.
* Do I care if an LLM was used to generate code to unlock my front door remotely? Absolutely!
I need a warning label cautioning me about the risks associated with generative materials. I don't care in the slightest when it isn't present, because the risks involved are inherently lesser.
Batteries, not chicken breasts.
Comment by em-bee 6 hours ago
Comment by aspenmartin 1 day ago
Comment by djhn 15 hours ago
It’s who else has access: property and facility management, maintenance, etc. In the age of physical keys, I trusted these SMBs to be relatively capable, let’s say 7/10, in protecting those keys from most local would-be criminals and opportunists. That goes down to 2/10 for protecting digital assets, like remote unlock capabilities, from cybercrime.
As soon as there is a viable market connecting cybercriminals with local criminals, whether it’s vertically integrated organised crime or something like carding forums, physical access exploitation is bound to become a problem.
Comment by whateveracct 1 day ago
Comment by LocalH 1 day ago
Comment by skybrian 1 day ago
Suppose there were a website that helped would-be contributors of AI assistance to match up with projects that want help?
Comment by throw5 1 day ago
The userbase is also changing. There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to. They just spin up their favorite LLM and make a PR out of whatever slop comes out.
At this point, why not move to something like Codeberg? It's based in Europe. It's run by a non-profit. There's a good chance it won't suffer the same fate a greedy, corporate-owned platform would.
Comment by raincole 1 day ago
The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your "EU = automatically good" attitude.
But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.
Comment by embedding-shape 1 day ago
History might prove me wrong on this one, but I really believe that the platforms pushing people to use LLMs as much as possible for everything (Microsoft-GitHub) will be more flooded by AI bots than the platforms focusing on just hosting code (Codeberg).
Comment by throw5 1 day ago
I'm not sure how one follows from the other. I am paid by a US for-profit company. But I still think EU has done some things better. People's beliefs are not determined by the company they work for. It would be a very sad world if people couldn't think outside the bubble of their employers.
Comment by kdhaskjdhadjk 1 day ago
You can be assured that the leanings of Valve are always going to be USA, USA, USA, for reasons that will be clear when you follow the chain of ownership to its source.
Comment by hurricanepootis 1 day ago
Comment by jamesfinlayson 22 hours ago
Comment by kdhaskjdhadjk 1 day ago
2) New Zealand is a favorite place for Western apparatchiks to build their bunkers. They don't move there out of a love for Kiwi culture and desire to integrate with the locals. Much like their interest in Wyoming/Montana also; they see a place they like, and they go take it over and drive out/murder whoever was there before.
Comment by hurricanepootis 1 day ago
Comment by ahartmetz 3 hours ago
Comment by anymouse123456 1 day ago
The Eternal September eventually comes for us all.
Comment by fuhsnn 1 day ago
Comment by MiiMe19 1 day ago
Comment by embedding-shape 1 day ago
At this point, projects are already on GitHub due to inertia, or they're chasing vanity-metrics together with all the other people on GitHub chasing vanity-metrics.
Since the advent of the "README-profiles" many started using, with badges/metrics, it's been painfully obvious how large this group of people is, where everything is about getting more stars, merging more PRs and having more visits to your website, rather than the code and the project itself.
These same people put their project on GitHub because the "value" they want is quite literally "GitHub Stars" and more followers. It's basically a platform they hope to get discovered through.
Besides Codeberg, hosting your own git server (via Forgejo or Gitea) is relatively easy and lets you decide how private/public you want to be.
Comment by duskdozer 1 day ago
As I've seen it, there's a lot of git=GitHub conflation going on. It wasn't clear to me for a while that you don't even need a "git server" at all; you can just use a filepath or an ssh location, for example.
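To make that concrete, here is a minimal sketch (the `/tmp/demo-*` paths are placeholders I chose for illustration) showing that a bare repository on disk is all a "git server" really is:

```shell
# Clean slate for the demo paths
rm -rf /tmp/demo-remote.git /tmp/demo-clone

# A bare repository on disk is all a "git server" really is
git init --bare /tmp/demo-remote.git

# Clone over a plain filesystem path: no daemon, no hosting service
git clone /tmp/demo-remote.git /tmp/demo-clone

# An ssh location works the same way (assumes ssh access to "host"):
#   git clone user@host:/srv/git/project.git
```

Pushing and pulling against the filesystem path works exactly as it would against a hosted remote.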
Comment by level09 1 day ago
Comment by em-bee 6 hours ago
your AI coder is worse than a junior developer, because junior devs may write bad code but generally they won't write code that they don't understand. AI on the other hand has no clue what it is writing.
Comment by sph 1 day ago
In here, and in big tech at large, it's touted as the unavoidable future: either you adapt or you die. LLMs are always a few months away from the (u|dys)topia of never having to write code ever again. Elsewhere, especially in fields where craft and artistry are valued (e.g. game development), AI is synonymous with cutting corners, poor quality, and, to put it simply, slop. Sure, we're now inundated with people with a Claude subscription and a dream, hoping to create the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.
Personally, I have eased my existential worries a little by pivoting away from big tech, where the only metric is lines of code committed per day, and moving towards those fields where human craftsmanship is still king.
Comment by fnimick 1 day ago
Comment by duskdozer 1 day ago
Comment by LLMCodeAuditor 1 day ago
- I have refused to use LLMs since 2023, when I caught ChatGPT stealing 200 lines of my own 2019-era F#. So in 2026 I have some anxiety that I need to practice AI-assisted development or else Be Left Behind. This makes me especially cross and uncharitable when speaking with AI boosters.
- Instead of LLMs I have tripled down on improving my own code quality and CS fundamentals. I imagine a lot of AI boosters are somewhat anxious that LLM skills will become dime-a-dozen in a few years, and that people whose organic brains actually understand computers will be highly in demand. So they probably have the same thing going on as me - "nuh uh, you're wrong and stupid."
I hope it's clear I'm trying to be charitable!
Comment by tkel 1 day ago
Comment by sph 1 day ago
I mean, it's either that or I quit software development completely; it would be a shame to throw away two decades of experience in the field.
Comment by ryandvm 1 day ago
Comment by quikoa 1 day ago
Comment by sph 1 day ago
The itch I want to scratch is that I'm on Linux, and our native image editing apps are very clunky, or you have to spend a weekend every time reacquainting yourself with ImageMagick.
The other project in the back of my head is a font repository, manager and downloader for Linux. It's an unserved niche, and there is no popular central repository of fonts, even though a large majority of them are released under permissive licenses. I just want to be able to do `font-app install Inter Iosevka "IBM Plex"` and have them appear under ~/.local/share/fonts
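For anyone curious, here is a rough sketch of the manual steps such a hypothetical `font-app` would automate (the download step is elided; `Inter-Regular.ttf` is a stand-in file created just for the demo, and the `fc-cache` call is guarded in case fontconfig isn't installed):

```shell
# Per-user font directory that fontconfig scans by default
mkdir -p ~/.local/share/fonts

# Download step elided; pretend we already fetched Inter-Regular.ttf
touch /tmp/Inter-Regular.ttf
cp /tmp/Inter-Regular.ttf ~/.local/share/fonts/

# Refresh the fontconfig cache so applications pick the font up,
# if fontconfig is available on this system
command -v fc-cache >/dev/null && fc-cache -f ~/.local/share/fonts || true
```

The tool's real value would be the repository/resolution layer on top; the install step itself really is just "copy a file and refresh the cache".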
Comment by quikoa 1 day ago
Comment by PeterStuer 1 day ago
A craftsman knows how to use his tools. With AI, you can produce very complete, polished, maintainable, tested, secure, performant, high-quality code.
It does take planning and lots of work on your part, but there is a high payoff.
So many people just dump a one paragraph brainfart into a prompt and then label the AI "slop".
Slop in, slop out. Play silly games, win stupid prizes. Don't blame your tools. Sometimes, you are 'holding it wrong'.
Comment by em-bee 6 hours ago
less hard work than writing code myself? a higher payoff than the satisfaction of having written code myself?
i want to be a coder, not a prompt manager. (not sure i want to call that engineer)
Comment by qsera 7 hours ago
So how much is enough?
Comment by JKCalhoun 1 day ago
I think it likely that a typical HN'er [1] has actually used an LLM in coding and if they sound like they are proposing that LLMs in coding are inevitable ("the unavoidable future") it may well be from an informed, personal experience.
(Of course there's no reason not to believe that those pushing back against LLM-Assisted-Coding are also doing so from personal experience. Me, I am on "Team-LLMAC".)
[1] Never used that term before, not sure I like it.
Comment by palmotea 1 day ago
When you look across all software development, I think this kind of AI contribution ban is probably the exception. Because open source maintainers can have standards and have the ability to decide to enforce them.
Corporate America is enraptured by an even dumber and less thoughtful version of the HN echo chamber.
> Elsewhere, especially in fields where craft and artistry are valued (e.g. game development), AI is synonymous with cutting corners, poor quality, and, to put it simply, slop. Sure, we're now inundated with people with a Claude subscription and a dream, hoping to create the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.
Are you talking about indie games? Because I could see that having a similar dynamic to open source. I would think a big studio would be similar to any other corporate America office.
Comment by luxuryballs 1 day ago
Comment by jmalicki 1 day ago
This after I started catching it committing directly to upstream main without PRs, among other things.
Comment by em-bee 6 hours ago
Comment by jmalicki 2 hours ago
"You must be a human to create an Account. Accounts registered by "bots" or other automated methods are not permitted. We do permit machine accounts: A machine account is an Account set up by an individual human who accepts the Terms on behalf of the Account, provides a valid email address, and is responsible for its actions. A machine account is used exclusively for performing automated tasks. Multiple users may direct the actions of a machine account, but the owner of the Account is ultimately responsible for the machine's actions. You may maintain no more than one free machine account in addition to your free Personal Account. One person or legal entity may maintain no more than one free Account (if you choose to control a machine account as well, that's fine, but it can only be used for running a machine)."
https://docs.github.com/en/site-policy/github-terms/github-t...
Comment by SuperV1234 21 hours ago
You're never going to be able to prove that a contributor didn't ask an LLM to help them make some changes, or review/optimize changes that were made.
Capable people who like to get stuff done will use LLMs, review their work carefully, and never disclose it. And you'll never be able to tell.
People who generated slop PRs won't even read your policy before submitting a slop PR.
Comment by em-bee 6 hours ago
the policy allows me to reject the things i know are done with AI and it allows me to punish (ban) devs who lie to me when i find out. without a policy i have no argument.
Comment by spicyusername 1 day ago
On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.
So many processes are no longer sufficient to manage a world where thousands of lines of working code are easy to conjure out of thin air. Already strained open source review processes are definitely one.
I get wanting to blanket reject AI generated code, but the reality is that no one's going to be able to tell what's what in many cases. Something like a more thorough review process for onboarding trusted contributors, or some other method of cutting down on the volume of review, is probably going to be needed.
Comment by xxs 1 day ago
That depends on the 'regular old code' but most stuff I have seen doesn't come close to 'maintainable'. The amount of cruft is proper.
Comment by yarn_ 1 day ago
Comment by simiones 1 day ago
Comment by LLMCodeAuditor 1 day ago
Comment by bakugo 1 day ago
I have yet to see a single example of this. The way you make AI generated code good and maintainable is by rewriting it yourself.
Comment by llmssuck 1 day ago
There is quite a bit of skill to it, however. You cannot just take an AI from blank to "good code" without doing work. Yes, it takes work and quite a bit of it. By this I mean you have to write a good code style guide and a proper explanation of your architectural style(s), your preferences, your goals, plenty of examples, etc. Proper thought has to be put into this.
If you come across bad code, you need to investigate not castigate: why did this happen? How can we prevent this in the future? Those sort of processes need to become second nature. They actually should be already, because it's not that much different from managing a bunch of humans.
Humans come with lots of implicit knowledge, and you also select them to match your company's style when you're hiring. When they sit down at their keyboards, you (and society) have already guided them towards a desirable path. (And even then they often still misfire.)
AI agents operate differently. Their range of expression is completely alien to us. We cannot be both von Neumanns and complete morons; LLMs have no problem there. It takes a good while to get used to that.
Comment by bheadmaster 1 day ago
Obligatory xkcd:
Comment by juped 1 day ago
Comment by or_am_i 1 day ago
Comment by ratrace 1 day ago
Comment by pelasaco 1 day ago
Comment by sph 1 day ago
Meanwhile I'll keep using SDL from the official maintainers, who have been working on it for decades.
Comment by pelasaco 1 day ago
That's just virtue signaling.
"AI-improved" projects like "rewrite $FOO in Rust" are popping up everywhere. I don't support it, sqlite3 being rewritten in Rust just makes me sad https://turso.tech/blog/introducing-limbo-a-complete-rewrite..., but this "$PROJECT bans AI" thing is just ridiculous. Ideally we should try to use it for good, instead of banning it.
Comment by xxs 1 day ago
Why so? If they don't feel like reviewing that code (or ensuring copyright compliance), they are free to reject it.
If you feel strong about it, go fork and maintain it on your own.
Comment by orwin 1 day ago
I only manage 3 'new' hires, and I am of a mind to ban AI usage myself despite my own heavy usage (the new hires don't level up, that's my main issue now, but the reviewing loops and the shit that got through our reviews are also issues).
Comment by ratrace 1 day ago
Comment by LLMCodeAuditor 1 day ago
Like, look: https://github.com/tursodatabase/turso/issues/6412 It's stunning considering this project is advertised as a beta. There are hundreds of bugs like this. It's AI slop that gets worse the more AI is thrown at it.
SDL is 100% correct to keep this AI mess as far away from their project as possible.
Comment by raincole 1 day ago
Comment by pelasaco 1 day ago
it never happens in 3 weeks? The AI revolution is just starting... too soon to jump to conclusions, I guess?
Comment by ethin 1 day ago
Comment by skydhash 1 day ago
Comment by pelasaco 1 day ago
Comment by thunderfork 1 day ago
Comment by pelasaco 1 day ago
and they are right. We never saw that before. That's why we all fear it.
Comment by ethin 1 day ago
Please please please tell me this is sarcasm. Because if you are serious, I think a lot of people have a long list of bridges to sell you.
Comment by arnvald 1 day ago
Comment by pelasaco 1 day ago
I think you are wrong. The "a lot of work maintaining a project" would be reduced, especially issue investigation, code improvement, and security issue detection and fixes. SDL isn't that relevant a project, but "ban AI-written commits", which, reading the issue, sounds more like "ban AI usage", is counterproductive for the project.
Comment by spookie 15 hours ago
Unreal 5 uses SDL to create "windows" in a cross-platform manner (a specific use case, but not just a Linux thing [1]). Many others do as well.
[1] https://dev.epicgames.com/documentation/unreal-engine/updati...
Comment by skydhash 1 day ago
SDL is kinda the king of “I want graphics, but not enough to bring in a whole toolkit, or suffer with OpenGL”. I have a small digital audio player (Shanling M0) where the whole interface is built with SDL.
Comment by krapp 6 hours ago
Many, many things use SDL. It's one of those bottom pieces in the Jenga tower of infrastructure dependency[0].
Not maintained by some random person (that would be Sean Barrett's stb libraries), but still, it seems irrelevant only because it's already ubiquitous.
Comment by nottorp 1 day ago
No. My impression is that most AI PRs aren't made to improve anything, but to inflate the requester's reputation as an "AI" expert.
> and feature development
There's also this misconception that more features == better...
Comment by ChrisRR 1 day ago
Comment by signa11 1 day ago
Comment by democracy 1 day ago
Comment by orwin 1 day ago
Comment by democracy 19 hours ago
Comment by tapoxi 1 day ago
This reasonably means AI contributions where a human has guided the AI are not subject to copyright, and thus can't be supported by a project's license.
Comment by dtech 1 day ago
At least a monkey is an unambiguously autonomous entity. An LLM is a (heck of a complicated) piece of software, and could very well be ruled a tool like any other.
Comment by redwall_hp 1 day ago
https://www.reuters.com/legal/government/us-supreme-court-de...
It's still early, but this is absolutely going to be precedent used in a software related case, and it's going to lead to fun times with SOX/PCI style compliance issues, where developers will have to attest that merges did not use AI so compliance can ensure repos don't pass a threshold where there's too much LLM code.
Comment by tapoxi 1 day ago
The legal question was "did a human author the work"?
Comment by Sharlin 1 day ago
Comment by sscaryterry 1 day ago
Comment by sscaryterry 1 day ago
Comment by duskdozer 1 day ago
Comment by cwillu 1 day ago
Comment by sscaryterry 1 day ago
Comment by thunderfork 1 day ago
Comment by ecopoesis 1 day ago
Why not just specify all contributions must be written with a steady hand and a strong magnet.
Comment by throwawayqqq11 1 day ago
To show you your hyperbole: Allowing monkeys on typewriters.
LLMs are neither IDEs nor random.
I am very sceptical about iterative AI deployment too. People pretend the success threshold is vibing something that gets widely used, but it's more than that. These one-shot solutions are not project maintenance. Answer yourself this one: could LLMs do what the Linux kernel community did over the same time span? That would be a good measure of success and, if so, a strong argument to allow generated contributions.
Comment by grg0 21 hours ago
They're going to force you to use vim. Better start learning those key bindings as soon as possible.
Comment by askI12 1 day ago
They simply don't want people like you, and they lose nothing.
Comment by ramon156 1 day ago
So what about SO code snippets? I'm not here to take a stance for AI, but this thread is leaning toward bias.
Address the elephant: LLM-assisted PRs have a chance of being lower quality. People are not obligated to review their code. Doing it manually, you are more inclined to review what you're submitting.
I don't get why these conversations always target the opinion, not the facts. I totally agree about the ethics, the fact that it's bound to get monopolized (unless GLM becomes SOTA soon), and that it's harming the environment. That's my opinion though, and it shouldn't interfere with what others do. I don't scoff at people eating meat; I let them be.
The issue is real, the solution is not.
Comment by johndough 1 day ago
StackOverflow snippets are mostly licensed under CC BY-SA 3.0 or 4.0, so I'd wager that they are not allowed, either.
The SDL source code makes a few references to stackoverflow.com, but the only place I could find an exact copy was where the author explicitly licensed the code under a more permissive license: https://github.com/libsdl-org/SDL/blob/5bda0ccfb06ea56c1f15a...
Comment by Sharlin 1 day ago
Comment by johndough 1 day ago
Comment by cwillu 1 day ago
Comment by shevy-java 1 day ago
Most SO contributions are dead-simple; often just being a link to the documentation or an extended example. I mean just have a look at it.
Finding a comparable SO entry similar to the Google v. Oracle example is, in my opinion, much, much harder. I have been using SO a lot over the last 10 years for snippets, and most snippets are low quality. (Some are good though; SO still has its use cases, even though it has kind of aged out by now.)
Comment by embedding-shape 1 day ago
How is this different from LLM outputs? Literally trained on the output of N programmers so it can give you a snippet of code based on what it has seen.
Comment by sdJah18 1 day ago
Not only because of the scale of infringement, but because direct Stack Overflow snippets are very rare. For example, C++ snippets are 95% code-cleverness monstrosities; you can learn a principle from them but not use the code directly.
I'd say that Stack Overflow snippets in well-maintained open source projects are practically zero. I've never seen an accepted PR that would even trigger that suspicion.
Comment by rzmmm 1 day ago
Comment by LLMCodeAuditor 1 day ago
Comment by missingdays 1 day ago
Why not let the animals be?
Comment by crackez 1 day ago
Comment by reactordev 1 day ago
Comment by fhd2 1 day ago
Comment by miningape 1 day ago
Comment by canelonesdeverd 1 day ago
Comment by reactordev 1 day ago
Comment by LLMCodeAuditor 1 day ago
>> ………I have purchased and tested the following USB steering wheels [blob of AI nonsense] and verified they all work perfectly, according to your genius design.
“Wow, that was fast! It would take a stoopid human 48 hours just to receive the shipment.”
[I would think Claude would recommend using SDL instead of running some janky homespun thing]
Comment by reactordev 1 day ago
Comment by jhasse 1 day ago
Comment by thunderfork 1 day ago