According to the release:
Adds experimental PostgreSQL support
The code was written by Cursor and Claude
14,997 lines of code added, 10,202 lines removed
reviewed and heavily tested over 2-3 weeks
This makes me uneasy, especially as ntfy is an internet-facing service. I am now looking for alternatives.
Am I overreacting or do you all share the same concern?
They are not even trusting it themselves. This is from the release notes:
I’ll not instantly switch ntfy.sh over. Instead, I’m kindly asking the community to test the Postgres support and report back to me if things are working
Fuck that.
Classic “test in production” strategy, very solid!
Damn, I guess I’ll stick to the older release for now. Hopefully a viable alternative/fork comes around.
It looks like that tool is more or less built by a single developer (you already trust their judgment anyway!), and even though the code came through in a single PR, it was a merge from a branch with 79 separate commits: https://github.com/binwiederhier/ntfy/pull/1619
Also, glancing through it a bit, huge portions of it are straightforward refactors or even just formatting changes caused by adding a new backend option.
I’m not going to say it’s fine, but they didn’t just throw Claude at a problem and let it rewrite 25k lines of code unnecessarily.
Any AI usage immediately discredits the software for me, because it calls into question all of their past and future work.
Oh boy, do I have bad news about 90% of the internet for you…
Linus sent an email recently to the Kernel Mailing List trashing AI slop and rejecting AI-generated patches. That he used it to play around with a script doesn’t change the fact that he distrusts code written by LLMs when it actually matters.
Definitely share your initial concern. Without strong review processes to ensure that every line of code follows the intent of the human developer, there’s no way of knowing what exactly is in there and the implications for the human users. And I’m not just talking about bugs.
They say it’s reviewed, but the temptation to blindly trust is there. In this case, the developer appears to have taken some care.
The code was written by Cursor and Claude, but reviewed and heavily tested over 2-3 weeks by me. I created comparison documents, went through all queries multiple times and reviewed the logic over and over again. I also did load tests and manual regression tests, which took lots of evenings.
Let us hope so. Handle with care, so that responsibility isn’t offloaded from a person onto a machine.
Yeah, this is now inherently untrustworthy. Better to switch to an alternative.
Do you know any? I’ve never really looked beyond ntfy.sh until now
There’s SunUp on F-Droid, but I don’t know anything about them.
That’s from Mozilla, another AI company…
Ugh, seriously? Great…
(Edit) I don’t think this is true? They use Mozilla’s push services, but nothing about their Codeberg repo (yes, it’s on Codeberg, not GitHub) indicates they’re part of Mozilla.
Read the README
I’d run for the hills
There are so many issues with AI
Like people thinking skilled engineers cannot vet AI output. AI is pretty good for programming.
You’re absolutely right, and the vast majority of people on this platform seem to get offended by anything AI-related. Software engineers have been reviewing code written by other people since the dawn of the craft. Guess what, y’all: AI-generated code looks exactly the same, if not better, on the first pass at creating a thing.
Downvote me all you want, homies. You’re living in a fantasy if you think all AI is slop. Sure, I can see how it’s ruining some content on the Internet, but for code-related tasks it’s going to dramatically change the world for the better.
I think you would need to first make the case that software is making the world a better place. So far, it’s got a spotty record…
The same thing happened to music when GarageBand and similar tools lowered the effort required to produce quality tracks. It took power away from the old gatekeepers and gave it to people with ideas but not traditional access. AI is doing that to software now.
I’m assuming this is some sort of canary message to indicate that the code base has been compromised, the author can’t talk about it, and everyone should immediately stop using the service. Surely no one would be unwise enough to commit this otherwise?
Even ignoring the huge red LLM flag, a 25kLOC delta in a single PR should be cause for instant rejection as there’s no way to fully understand or test it, let alone in 2-3 weeks.