TheHolm
- 0 Posts
- 8 Comments
TheHolm@aussie.zone to
Selfhosted@lemmy.world • Which Llama Server Hardware do you use? (English)
1 · 4 days ago
Probably. It's just not as fast as the 9070 XT. I'm using a 9070 XT myself, and the limitation for running LLMs is memory, not speed. If the model fits in memory, it runs fast enough to be practical.
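For a ballpark feel of that "fits in memory" test, here is a minimal sketch of the arithmetic; the quantization width and the fixed overhead allowance for KV cache and runtime buffers are assumptions, not measured values from any particular runtime:

```python
# Minimal sketch: rough VRAM estimate for a quantized LLM.
# bits_per_weight and overhead_gb are ballpark assumptions,
# not measurements from any specific runtime.

def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Weights plus a fixed allowance for KV cache and buffers."""
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + overhead_gb

# Example: a 14B model at ~4.5 bits/weight (a typical 4-bit quant)
# against a 16 GB card like the RX 9070 XT.
need = estimate_vram_gb(14, 4.5)
print(f"~{need:.1f} GB needed; fits in 16 GB: {need <= 16}")
```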
TheHolm@aussie.zone to
Selfhosted@lemmy.world • Which Llama Server Hardware do you use? (English)
2 · 4 days ago
If you're happy with 16 GB, nothing beats the AMD RX 9070 XT on speed per cost.
Some standalone WAPs for WiFi and a PC-based router. Depending on what you get, it can be dirt cheap. WAPs also need firmware upgrades, but that's less of a problem.
TheHolm@aussie.zone to
Selfhosted@lemmy.world • I prompt injected my CONTRIBUTING.md – 50% of PRs are bots (English)
11 · 15 days ago
Then read my post again. Contributing to and writing open source is no longer about how much time one is willing to spend on it; it is about how much money someone is willing to spend on LLMs that write the code. And all that money will go to the AI overlords.
TheHolm@aussie.zone to
Selfhosted@lemmy.world • I prompt injected my CONTRIBUTING.md – 50% of PRs are bots (English)
11 · 16 days ago
Yes, but in every joke there is a bit of truth. Open source has to change. Open source code written by LLMs is still open source, but it is drastically different from what we have now.
Instead of spending time to "scratch the itch and help others in the process", people now have to give money to corporations to have an LLM do the same.
TheHolm@aussie.zone to
Selfhosted@lemmy.world • I prompt injected my CONTRIBUTING.md – 50% of PRs are bots (English)
535 · 16 days ago
This is a good article. I guess humans are now mostly redundant in open source. Bots can do everything themselves: write code, submit PRs, merge them, and even blog about it. Time to book myself a place in the graveyard.

With AI search engines around, hosting a public repo is very expensive.
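Most of that expense is AI crawlers pulling every page of a self-hosted forge. One partial mitigation is a robots.txt along these lines; the user agents shown are just two commonly cited examples (my assumption, not an exhaustive list), and crawlers are free to ignore the file entirely:

```
# robots.txt -- a minimal sketch; GPTBot and CCBot are example
# AI crawler user agents, and compliance is voluntary.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```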