• fodor@lemmy.zip · 8 hours ago

    Of course! Google would say that. Doesn’t mean it’s true. They aren’t under oath and they have every reason to lie.

  • ZoteTheMighty@lemmy.zip · 21 hours ago

    Anthropic is probably on the leading edge of vibe coding, and in the past month they’ve had major server uptime issues, they accidentally released the source code for their biggest product, and, as of this week, their latest model was accidentally posted on an open URL for anyone to download. In the last six months, people have gone from vibe coding small apps to vibe coding major sections of their core infrastructure, and all the evidence so far suggests the consequences are coming.

    • definitemaybe@lemmy.ca · 9 hours ago

      I’m vibe coding a fairly complicated bash script to fully automate upgrading a web server at work. For context, I have over two decades of experience in programming/data analytics/tech, but I’m a Linux and server admin newbie.

      It’s comically bad at it. Like, I had to tell it not to log passwords to the production database to the console and plaintext log files. Then, about a dozen prompts later, it did it again. The restore script rm-ed things (as sudo) before checking that it had a valid backup file to replace them with. It keeps deleting the comments in the code snippets I send it to update/fix, even when explicitly told to keep the comments. I asked it to prepend time to live commands (i.e. not “dry run” echoes), and then it deleted them all again when I asked it to refactor something unrelated.
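      The guard the restore script was missing can be sketched in a few lines of bash. This is a minimal illustration of the pattern (validate the backup *before* deleting anything); all file names and paths are hypothetical, not from the actual script:

      ```shell
      #!/usr/bin/env bash
      set -euo pipefail

      safe_restore() {
        local backup="$1" target="$2"
        # Refuse to proceed unless the backup exists, is non-empty,
        # and is a readable archive -- BEFORE any rm.
        [ -s "$backup" ] || { echo "ERROR: backup missing or empty: $backup" >&2; return 1; }
        tar -tzf "$backup" > /dev/null 2>&1 || { echo "ERROR: backup is not a readable archive: $backup" >&2; return 1; }
        # Only now is it safe to remove the old tree and restore over it.
        rm -rf "$target"
        mkdir -p "$target"
        tar -xzf "$backup" -C "$target"
      }

      # Demo with throwaway files (no sudo; names are illustrative only):
      workdir=$(mktemp -d)
      mkdir -p "$workdir/site" && echo "v1" > "$workdir/site/index.html"
      tar -czf "$workdir/backup.tar.gz" -C "$workdir" site
      echo "v2-broken" > "$workdir/site/index.html"
      safe_restore "$workdir/backup.tar.gz" "$workdir/restore"
      cat "$workdir/restore/site/index.html"
      ```

      The point is ordering: the destructive `rm -rf` only runs after the archive has been verified, so a missing or corrupt backup aborts the script instead of leaving you with nothing.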

      It’s been great learning for me, and I’m definitely getting this job done faster and to a higher quality than I could on my own, but holy hell these scripts would have been a disaster if someone just ran them “as is”. I’ve needed to fix dozens of errors that could have really screwed things up.

      I wonder how often people go through their vibe coded outputs with the careful attention and care it needs. I’m guessing infrequently. LLMs are just word prediction machines; they don’t understand anything.

      • EntityDeletr@beehaw.org · 8 hours ago

        I saw this one person who, despite getting 3 warnings from the chatbot (one in the chat, two in the file itself) about not placing a plaintext API key into a version-controlled env file, did so anyway. It’s not just about the AI, but also about the people using them. Someone with experience will be able to utilize the speed of an AI while also catching its mistakes. A “vibe coder” won’t know the difference.
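        For what it’s worth, the pattern the warnings were pointing at is simple to sketch. This is a hypothetical example (file names and the `API_KEY` variable are illustrative, not from the story): the real env file is untracked, a placeholder is committed instead, and the script reads the secret from the environment without ever printing it:

        ```shell
        #!/usr/bin/env bash
        set -euo pipefail

        # .gitignore (committed) would contain the line:   .env
        # .env.example (committed) would contain:          API_KEY=replace-me
        # .env (NOT committed) holds the real key and is sourced or exported locally.

        # Fail fast if the key is missing -- without echoing its value.
        : "${API_KEY:?API_KEY must be set (e.g. exported from an untracked .env file)}"

        # If you must log anything, log metadata, never the secret itself.
        echo "key loaded (${#API_KEY} chars)"
        ```

        Usage: `export API_KEY=... && ./script.sh` prints only the key’s length, so neither the repo nor the logs ever contain the secret.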

  • Th4tGuyII@fedia.io · 1 day ago

    Is that a good metric??

    Isn’t this just stating that you’re letting something with the coding ability of a toddler run amok through your product, having to constantly be bug-fixed by your remaining engineers?

    This is exactly what Microsoft started doing, and it’s so far worked out terribly for them. Loads of their new software updates have had unacceptably large bugs, even in simple things like Task Manager!

    • bluGill@fedia.io · 1 day ago

      Most of my code is AI-generated, but I have to carefully review everything because it often makes poor decisions.

  • dom@lemmy.ca · 1 day ago

    Man, Gemini sucks. They force-replaced Assistant with Gemini for me.

    “Hey Google, play my rock playlist.”

    “You don’t have a rock playlist on YouTube Music.”

    “Hey Google, play rock playlist on Spotify.”

    “You don’t have ‘rock playlist on Spotify’ on YouTube Music.”

    We call this fucking progress?

    I know it’s barely related to the post. Just fuck Google and fuck their AI.

  • reksas@sopuli.xyz · 1 day ago

    The internet is going to collapse under rampant cybercrime at some point; all the slopcode will leave everything so broken and vulnerable that hackers will have choice paralysis.

  • megopie@beehaw.org · 1 day ago

    *75% of code was written by people who were required to have an AI plug-in installed.

    Probably also having their usage tracked.

    Also have had their workloads increased and their deadlines shortened.

    And if they don’t hit the metrics and meet the shorter deadlines… they get fired.

    I’m sure that’s a recipe for functional, well tested, efficient, and secure software. Definitely not creating a shit ton of technical debt.

    • Hirom@beehaw.org · 1 day ago

      And I guess engineers would be held responsible for the code produced by the AI agents they’re pressured to use.

      So management can blame and fire more engineers when things go wrong.

  • sanzky@beehaw.org · 1 day ago

    I’m convinced that’s a fabricated metric that’s far from reality. The objective is to make their product seem better than it is.

  • greyscaleA · 2 days ago

    “Yes, but have you shipped anything positive?”