The Community Pull Request Is Dead

The PR-as-contribution model was built for a world where writing code was the bottleneck. AI agents broke that assumption. GitHub shipped a kill switch on PRs. Hashimoto built Vouch. We closed the door entirely. The community pull request is dead. Here's what replaces it.

Wall of AI-generated pull request notifications fading into darkness, with "The Community Pull Request Is Dead" breaking through in white and red
Every maintainers morning notifications. Most of them tagged "bot." All of them need reviewing

Open source maintainers are drowning in AI-generated pull requests.

Not the good kind, where someone used an agent to help them write a thoughtful fix. The kind where someone pointed an agent at an open issue, told it to "fix this," and opened the PR without reading the diff. The code compiles. The tests pass. And the change is wrong in ways that take twenty minutes to explain, because the agent didn't understand the codebase or the tradeoffs behind the code it changed.

This isn't hypothetical. Go look at any popular open source project right now. The review burden has shifted from "does this code work?" to "does this person even understand what they're changing?" That's a much harder question to answer from a diff.

We made a decision early with swamp: no external code contributions. Not "we prefer issues over PRs." No code contributions, period. The CONTRIBUTING.md is four sentences long before the explanation. The first: "Only employees of System Initiative, Inc. are allowed to contribute code to Swamp." The last: "There are no exceptions."

People find that jarring. Open source projects are supposed to accept PRs. That's been the social contract for decades. You find a bug, you fix it, you send a pull request, the maintainer reviews it, everybody wins.

Except it's not how open source works anymore.

The old model assumed human-speed contributions

The traditional PR model has an implicit assumption: the contributor spent real time understanding the code before changing it. They read the relevant files, traced the logic, thought about edge cases. Maybe they got some of it wrong, but writing the code forced a minimum level of engagement with the problem.

That held for thirty years because writing code was slow enough that it was true. You couldn't fake understanding when you had to type every line.

AI agents break that completely. An agent can produce a PR in minutes that would have taken a human days. The commit messages reference the right files, the code follows existing patterns. None of that means the contributor read the diff or validated the approach. The signal that used to be embedded in "someone wrote this code" is gone.

For maintainers, this inverts the economics. Reviewing a PR always took more effort than writing one. That was fine when contributors were rate-limited by how fast they could code. Now contributors generate PRs faster than maintainers can review them. Every project with open issues becomes a target for someone who wants to look productive by pointing agents at other people's bug trackers.

This didn't start with AI

The contribution model was already fraying before agents entered the picture. Hacktoberfest turned open source into a game. Every October, maintainers got buried in low-effort PRs from people chasing a free t-shirt. Fix a typo in a README, add a period to a comment, open four more just like it across four more repos. Maintainers spent more time closing them than the contributors spent opening them.

Then GitHub made the contribution graph a hiring signal. Green squares became a proxy for engineering ability, and suddenly people were gaming their commit history. "Good first issue" labels, originally meant to help newcomers find meaningful entry points, became targets for resume-driven contributions. The issue gets claimed, a surface-level fix appears within the hour, and the contributor disappears.

Each wave eroded the same thing: the assumption that a contribution represents genuine engagement with the codebase. AI agents are the latest and most extreme version. The difference is scale. A person gaming Hacktoberfest might open ten bad PRs in a week. An agent can open ten in an hour.

What maintainers are actually dealing with

If you haven't maintained a popular repo recently, here's what the day looks like. You open your notifications and there are twelve new PRs. At least half are AI-generated. You can usually tell, but not always. The obvious ones have that distinctive pattern: slightly over-engineered solution, tests that cover the happy path but miss the edge case the issue was actually about, commit message that reads like a summary of the issue rather than a description of the change.

The non-obvious ones are worse. The code is clean, but the approach doesn't fit the architecture. It adds a dependency the project has intentionally avoided. It solves the symptom instead of the root cause. It handles the reported case perfectly and breaks three others. You write a detailed review explaining why. The contributor responds with another AI-generated revision that addresses your comments literally but still misses the point. Two more review rounds and you've spent an hour on a PR that the contributor spent thirty seconds on.

That's what burns people out. Not the volume alone, but the asymmetry. The contributor invested almost nothing. The maintainer has to invest real time to determine if the change is any good. And closing the PR without a thorough review feels wrong, because what if this one is actually fine?

Multiply that by every repo with an issue tracker and you have the current state of open source maintenance.

Why we closed the door

We didn't write that CONTRIBUTING.md because we don't value community input. We wrote it because we can't distinguish between a large human change and a large agent-generated change, and the only way to maintain supply chain security is to control the inputs to the development process.

That's not a philosophical position. It's an operational one. We have a PR pipeline with four layers of AI review, adversarial testing, CI security checks, and prompt injection defenses. That pipeline assumes we trust the identity of the contributor, that they have context on the architecture, have read the design docs, understand the conventions. Our agents have that context because we gave it to them. An external contributor's agent has none of it.

A well-intentioned PR from an external agent is indistinguishable from a supply chain attack at the CI layer. Both are untrusted code flowing into a pipeline that makes merge decisions. Both arrive with plausible descriptions. Both pass surface-level checks. The only difference is intent, and intent isn't something you can verify from a diff.

So we drew a bright line. File issues. Describe bugs. Request features. We vet those for prompt injection and malicious intent. If we agree the work should be done, we do it ourselves, with our agents, under our trust model. We'll add you as a co-author on the resulting PR if you want. You get credit for identifying the problem. We maintain control over the solution.

What open source contribution should look like now

I don't think every project needs to go as far as we did. But every maintainer needs to reckon with the fact that the PR-as-contribution model was designed for a world where writing code was the bottleneck. That world is gone.

The valuable part of an open source contribution was never the code. It was the understanding. Someone found a problem, understood it well enough to describe it, maybe understood the codebase well enough to propose a fix. The code was a vehicle for that understanding. Now the code is free and the understanding is what's scarce. A detailed bug report with reproduction steps and a clear description of expected versus actual behavior is worth more than ten agent-generated PRs. It always was. The gap has just gotten enormous.

The ecosystem is already responding. In February, GitHub shipped two new repository settings: disable pull requests entirely, or restrict them to collaborators only. The platform that built its identity around the pull request now lets projects turn them off.

The community discussion that followed was brutal. Xavier Portilla Edo, part of the Genkit core team: only "1 out of 10 PRs created with AI is legitimate." Jiaxi Zhou, a Microsoft engineer and Containerd maintainer: "reviewers can no longer assume authors understand or wrote the code they submit." Chad Wilson, primary maintainer of GoCD: undisclosed AI use turns maintainers into "unknowing AI prompters." Daniel Stenberg shut down curl's bug bounty program entirely because the incentive attracted too much AI slop.

Mitchell Hashimoto, co-founder of HashiCorp, put it bluntly: "AI eliminated the natural barrier to entry that let OSS projects trust by default." His response was Vouch, an explicit trust management system where unvouched users can't contribute and trusted people vouch for others. His vision is a web of trust across projects, so vouching or denouncing someone in one project ripples through to others. Ghostty is already integrating it.

GitHub shipping a kill switch on PRs. Hashimoto building explicit trust graphs. Us closing the door entirely. As one of our swamp users, Sean Escriva, put it: "We've coasted on 'OSS' as a lossy idea for too many years." This moment is forcing everyone to intentionally decide what the interaction model looks like. The default-open contribution model doesn't survive contact with agents. The question is how each project adapts.

Projects that continue accepting all PRs as if nothing changed will burn out their maintainers. Projects that close the door entirely, like we did, will be accused of not being "truly open source." The right answer for most projects is somewhere in between: higher bars for code contributions, explicit trust mechanisms like Vouch, and a shift toward valuing problem identification over solution delivery.

The contribution that matters

We built tooling into swamp's CLI so that when an agent encounters a bug during a session, it can check out the source code at the exact commit the binary was built from and file a detailed issue automatically. A summary describing the exact failure. Steps to reproduce with the specific commands. Expected versus actual behavior with real error output. Environment details down to the OS and extension version. A tested workaround. The agent never touched our codebase. It found the bug, documented it thoroughly, and handed it to us.

That's a real contribution. It's understanding we didn't have, delivered in a form we can act on. Our agents triage it, plan a fix, get it reviewed, and ship it through the pipeline we trust. The person who hit the bug gets co-author credit on the fix if they want it.

When someone points an agent at our issue tracker and opens a PR they haven't read, that's not a contribution. It's a cost.

Open source has always run on a social contract. I use your code for free, I expect the bug fixes to keep coming, and if I have time I might contribute back. AI agents haven't changed the spirit of that contract. But they've completely changed what "contribute back" needs to mean. The most valuable contribution you can make to an open source project right now isn't code. It's a clear description of a problem the maintainers didn't know about, written by someone who actually experienced it.

The agents can write the fix. They need you to find the bug.