• Mordikan@kbin.earth · 6 days ago

    This is the definition of a zero effort post.

    You don’t want to put forth the effort to bug hunt; you want an AI agent to bug hunt for you. You don’t want to learn to even set up the agent; you want other people to explain step by step how to do that for you.

    I’m assuming you aren’t even going to review it before it submits. Honestly, though, how would you review it, given that you don’t know anything about the topic it’s submitting on?

  • artyom@piefed.social · 7 days ago

    Dear God, please don’t. FF does not want your AI slop bug reports. You people are ruining open source.

  • Hexarei@beehaw.org · 7 days ago

    run a local LLM like Claude!

    Look inside

    “Run ollama”

    Ollama will almost always be slower than running vLLM or llama.cpp; nobody should be suggesting it for anything agentic. On most consumer hardware, the availability of llama.cpp’s --cpu-moe flag alone is absurdly good and worth the effort of familiarizing yourself with llama.cpp instead of ollama.
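    For context, a minimal sketch of what that looks like in practice (the model filename, quantization, and port here are made up for illustration, not a recommendation):

```shell
# Hypothetical llama.cpp invocation: serve a MoE model with the expert
# weights kept in system RAM (--cpu-moe) while the remaining layers are
# offloaded to the GPU (-ngl 99). Model path and port are assumptions.
llama-server \
  -m ./models/some-moe-model-q4_k_m.gguf \
  --cpu-moe \
  -ngl 99 \
  --port 8080
```

    The point of `--cpu-moe` is that only the small shared layers need to fit in VRAM, so large MoE models become usable on consumer GPUs.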

    • Quibblekrust@thelemmy.club · 5 days ago

      --cpu-moe

      AI Acknowledgement

      The joke is worth the slop, imo. “Cpu Moe”. 😂 Find me an anime drawing of a CPU (especially an iconic one) and I’ll use that instead.

  • ZWQbpkzl [none/use name]@hexbear.net · 7 days ago

    You’ll have to be more specific about how Anthropic is debugging Firefox. There are many possible setups. In general, though, you’ll need:

    • an LLM model file
    • an OpenAI-compatible server, e.g. LM Studio, llama.cpp, or ollama
    • some sort of client for that server; there’s a myriad of options here. OpenCode is the most like Claude, but there are also more modular programmatic clients, which might suit a long-term task
    • the Firefox source code and/or an MCP server via some plugin

    You’ll also need to know which models your hardware can run. “Smarter” models require more RAM. Models can run on both CPUs and GPUs, but they run way faster on the GPU if they fit in VRAM.
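    Once one of those servers is running, any OpenAI-compatible client can talk to it. A minimal sketch with curl (the port and model name are assumptions and vary by server; this needs a server actually running locally):

```shell
# Hypothetical request against a local OpenAI-compatible endpoint.
# llama.cpp's llama-server, LM Studio, and ollama all expose this API.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Summarize this stack trace."}]
      }'
```

    Agent clients like OpenCode just point at the same endpoint, so you can swap servers without changing the client.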