• 0 Posts
  • 12 Comments
Joined 3 years ago
Cake day: July 4th, 2023

  • You know programmers who use llms believe they’re much more productive because they keep getting that dopamine hit, but when you actually measure it, they’re slower by about 20%.

    Everyone keeps citing this preliminary study and ignores:

    1. It's old now
    2. Its sample size was incredibly tiny
    3. Its sample group consisted of developers who weren't using proper tooling and hadn't been trained on how to use the tools

    It's the equivalent of taking 12 seasoned carpenters with very little experience in industrial painting, handing them industrial-grade paint guns that are misconfigured and uncalibrated, asking them to paint some of their work, watching them struggle… and then going "wow, look at that, industrial-grade paint guns are so bad."

    Anyone with any sense should look at that and go "that's a bogus study."

    But people with an intense anti-AI bias cling to that shoddy-ass study with religious fervor. It's cringe.

    Every professional developer with actual training and proper tooling can confirm that they are indeed tremendously more productive.


  • Lovely anthropic mcp. Make sure you give anthropic lots of money and use their tools

    It's becoming clear you have no clue wtf you are talking about.

    Model Context Protocol is a protocol, like HTTP or JSON.

    It's just an open-source data format that anyone can use. Models are trained to invoke MCP tools to perform actions, and anyone can make their own MCP tools; it's incredibly simple and easy. I have a pretty powerful one I personally maintain myself.
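    For a sense of how simple the wire format is, here's a rough sketch of what a tool call looks like on the wire. MCP is layered on JSON-RPC 2.0; the `read_file` tool and its arguments here are made up, purely for illustration:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style tools/call request (MCP rides on JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool, just for illustration:
request = make_tool_call(1, "read_file", {"path": "README.md"})
```

    Any model trained for tool use can emit calls like this, and any server that speaks the protocol can answer them; no Anthropic product is involved.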

    Anthropic doesn't make any money off me. In fact, I don't use any of their shit, except maybe whatever licensing fees Microsoft pays them to use Claude Sonnet; Microsoft Copilot is my preferred service overall.

    I bet you your contract with them says they’re not liable for shit their llm does to your files

    Setting aside the fact that I don't even use Anthropic's tools, my Copilot LLMs don't have access to my files either. Full stop.

    The only context in which they do have access to files is inside the aforementioned Docker-based sandbox I run them in, which is an ephemeral, immutable system. They can do whatever the fuck they want in there, because even if they manage to delete /var/lib or whatever, I click one button to reboot and reset it back to a working state.

    The workspace directory they have access to has read-only git access, so they can pull and do work, but they literally don't even have the ability to push. All they can do is pull in the stuff to work on and work on it.

    After they finish, I review the changes they made, and only I, the human, have the ability to accept or reject what they've done, and then actually push it myself.
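    As a rough sketch of what that setup amounts to (the image name and mount path are made up, and a real setup would add more hardening), the container can be launched ephemeral and read-only with stock `docker run` flags:

```python
def sandbox_argv(image: str, workspace: str) -> list[str]:
    """Assemble a docker run command line for an ephemeral, locked-down agent container."""
    return [
        "docker", "run",
        "--rm",                   # ephemeral: all container state is discarded on exit
        "--read-only",            # immutable root filesystem
        "--tmpfs", "/tmp",        # scratch space that vanishes with the container
        "--network", "none",      # no network unless you explicitly opt in
        "--user", "agent",        # unprivileged user, not root
        "-v", f"{workspace}:/home/agent/work",  # the only writable mount
        image,
    ]

argv = sandbox_argv("agent-sandbox", "/srv/agent-work")
```

    Resetting to a known-good state is then literally just killing the container and starting a fresh one.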

    This is all basic shit using tools that have existed for a long time, some of which are core Linux principles that have existed for decades.

    Doing this isn't that hard; it's just that a lot of people are:

    1. Stupid
    2. Lazy
    3. Scared of Linux

    The concept of "make a Docker image that runs an 'agent' user in a very low-privilege environment with write access only to its home directory" isn't even that hard.

    It took me all of two days to get it set up personally, from scratch.
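    The core of such an image is only a few lines. A minimal sketch, assuming a Debian-ish base (names are illustrative; a real image would pin versions and add the agent's tooling):

```dockerfile
FROM debian:stable-slim

# Unprivileged 'agent' user; its home directory is the only place it can write
RUN useradd --create-home --shell /bin/bash agent

USER agent
WORKDIR /home/agent/work
```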

    But now my sandbox doesn't even expose the ability to do damage to the LLM; it doesn't even have access to those commands.

    Let me make this abundantly clear if you can't wrap your head around it:

    The LLM agents that I run don't even have the damaging commands exposed to them as invocable; they literally don't have the ability to cause harm, full stop.

    And it wasn't even that hard to do.


  • You’ll be the 4753rd guy with the oops my llm trashed my setup and disobeyed my explicit rules for keeping it in check

    Read what I wrote.

    It's not a matter of "rules" it "obeys."

    It's a matter of it literally not even having access to do such things.

    This is what I'm talking about. People are complaining about issues that were solved a long time ago.

    People run into issues that were solved long ago because they're too lazy to use the solutions to those issues.

    We now live in a world with plenty of PPE on construction sites, and people are out here raw-dogging tools without any modern protection and being ShockedPikachuFace when it fails.

    The approach of "I'm gonna tell the LLM not to do stuff in a markdown file" is tech from like two years ago.

    People still do that. Stupid people who deserve to have it blow up in their face.

    Use proper tools. Use MCP. Use a sandboxed environment. Use whitelist, opt-in tooling.

    Agents shouldn’t even have the ability to do damaging actions in the first place.


  • The only people who have these issues are people who are using the tools wrong or poorly.

    Using these models with modern tooling is perfectly reasonable: go beyond mere guardrails and outright give them access only to explicitly approved operations in a proper sandbox.

    Unfortunately, that takes effort, know-how, skill, and an understanding of how these tools work.

    And unfortunately a lot of people are lazy and stupid, and take the “easy” way out and then (deservedly) get burned for it.

    But I would say yes, there are safe ways to grant an LLM "access" to data such that it doesn't even have the ability to muck it up.

    My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is crash its own Docker instance.

    And then setting up, via MCP tooling, an explicit opt-in whitelist of the commands and actions it can perform. It can only run commands I give it access to.

    Example: I grant my LLMs access to git commit and git status, but not rebase or checkout.

    Thus it can only commit stuff forward; it can't change branches, rebase, or push.
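    The gate itself is trivial. A toy sketch of the opt-in idea (in a real setup this check would sit inside the MCP tool that actually shells out; the allowed set here is illustrative):

```python
# Hypothetical allowlist: only these exact git subcommands are exposed to the agent.
ALLOWED = {("git", "status"), ("git", "commit"), ("git", "pull")}

def gate(argv: list[str]) -> bool:
    """Return True only if the command is on the explicit opt-in whitelist."""
    return tuple(argv[:2]) in ALLOWED

assert gate(["git", "commit", "-m", "wip"])   # committing forward is allowed
assert not gate(["git", "rebase", "-i"])      # not whitelisted, never executes
assert not gate(["git", "push"])              # pushing stays human-only
```

    Everything not on the list simply doesn't exist as far as the agent is concerned.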

    This isn't hard imo, but too many people just yolo it and raw-dog an LLM on their machine like a fuckin' idiot.

    These people are playing with fire imo.


  • Very true, though there's a threshold past which the context is at least usable in size, where the machine can hold enough data at once for common tasks.

    One of the pieces of tech we're really missing atm is automated filtering of info.

    Specifically, for the LLM to be able to "release" info as it goes, as soon as it's deemed unimportant, and forget it, or at least push it into some form of long-term storage it can use a tool to look up.

    But in a given convo the LLM can do a lot of reasoning, and all that reasoning takes up context.

    It'd be nice if, after it reasons, it could discard a bunch of that data and keep only what matters.

    This would tremendously lower context pressure and let the LLM last way longer, memory-wise.
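    A toy sketch of what that compaction could look like (the message kinds and the history are made up; a real implementation would summarize rather than just drop):

```python
def compact(messages: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Drop the reasoning scratch-work, keeping only what still matters."""
    return [m for m in messages if m[0] != "reasoning"]

history = [
    ("user", "Why does the build fail?"),
    ("reasoning", "Maybe the lockfile is stale... checking pinned versions..."),
    ("reasoning", "Confirmed: dependency pin mismatch."),
    ("conclusion", "The build fails because of a stale lockfile; regenerate it."),
]
history = compact(history)  # the conclusion survives, the scratch-work is freed
```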

    I think tooling needs to approach how we manage LLM context in a very different way to make further advancement.

    LLMs would have to be trained to produce different types of output that control whether they'll actually remember it or not.






  • They don't, lol.

    Pretty much always, this is just down to the fact that cheaper (especially free) chatbots have very limited context windows.

    Which means the initial restrictions you set, like "don't do this, don't touch that," get dropped; the LLM no longer has them loaded. But its past history still has the very clear and urgent directives about the task it's trying to do, and that it's important, so it'll autocomplete whatever it's gotta do to accomplish the task. And then… fucks something up.
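    A toy model of the mechanism, with character counts standing in for tokens (the messages and budget are made up): when the conversation outgrows the budget, the oldest messages, including the initial rules, fall off the front.

```python
def visible_context(messages: list[str], budget: int) -> list[str]:
    """Keep only the newest messages that fit the budget; the rest are simply gone."""
    kept, used = [], 0
    for msg in reversed(messages):  # newest messages win
        if used + len(msg) > budget:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))

convo = ["SYSTEM: never touch prod", "do the task", "task detail", "more detail"]
window = visible_context(convo, budget=30)
# The system rule no longer fits, so the model never sees it again.
```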

    When you react to its fuck-up, it reloads the context back in.

    So now the LLM has just this in its history:

    1. It doing a thing against the rules
    2. The user yelling at it
    3. The rules now getting loaded back in on top of that

    So now the LLM is going to autocomplete its generated text on top of that, being very apologetic and going on about how it'll never happen again.

    That's all there is to it.



  • Mario 64 does have kind of a creepy, liminal vibe to it.

    Long-ass empty hallways, rooms with just a giant mirror, no sound except kind of haunting/enchanting music and the echo of your footsteps.

    It objectively has a pretty creepy/empty vibe, because in the game Bowser has taken over the castle, so it's supposed to be a bit spooky/creepy in a bunch of spots.