I spent the better part of three days pulling my hair out over a script that just wouldn’t cooperate. Logs, testing, asking DeepSeek: nothing worked.

I made a post here yesterday asking about agentic LLMs, and someone mentioned opencode.

I ran it from the code location, asked it to find the bug, and within a minute it pointed out a stupid error I don’t think I would ever have found. A silly little mistake, fixed in a minute.

If a free model caught that instantly, it really puts things into perspective. Anthropic recently found 22 vulnerabilities in Firefox using their largest models. That’s not just fixing syntax; that’s hardening a massive browser against exploits.

I’m excited because the barrier to shipping stable code just dropped through the floor. But I’m also scared. Not of the tech itself, but of what happens when capitalists decide to fully automate labor. The game is changing fast.

The open‑source community is great at building tools. We need to get equally good at talking about who those tools really serve—and how we make sure they empower workers, not just replace them.

  • mattreb@feddit.it · 18 days ago

    But I’m also scared. Not of the tech itself, but of what happens when capitalists decide to fully automate labor.

    Don’t worry, that’s not gonna happen. A colleague of mine uses LLMs regularly for code reviews, but you have to put in the work to understand whether what they say makes sense and to filter out false positives, which are not rare.

    • marmelab@lemmy.ml · 16 days ago

      Exactly. LLMs can output code, but they don’t understand long-term intent. As LLM usage grows, engineers who truly understand the system become more valuable, not less. Knowing which changes are safe and why decisions were made is now an even more critical skill. It’s the backbone of any resilient system.

  • Shimitar@downonthestreet.eu · 18 days ago

    While it is a good use case, it also requires human verification. I usually run my code (C++) through clang analyzers and sometimes through open LLM models too. Clang is pretty good; the LLMs hallucinate on that too.

    Code review is a much better use case for LLMs than writing code, but it still requires a grain of salt.

  • glitching@lemmy.ml · 17 days ago

    aside from the glaringly obvious, i.e. you’re incinerating the planet for shits and giggles, the thing is unsustainable. the best guess (since they’re hiding the actual numbers) is that they have a capex in the ballpark of $1K/day/user. you’re meanwhile giving them $20/mo, if at all - no math on this planet can make that sustainable.

    vulture capital and greater-fool theory can only hold the bag intermittently, not indefinitely - sora was just shut down for that reason.