• Bluegrass_Addict@lemmy.ca
    7 hours ago

    …workers admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.

    tbh it just reads like people are just using the ai, not actually sabotaging it. lol it’s such trash

    • Monument@piefed.world
      4 hours ago

      “We have poor customer data safeguards, confidently present subpar work as acceptable, and have failed to adequately train our intended users, but would like you to believe it’s all the users’ fault.”

    • cabbage@piefed.social
      7 hours ago

      “workers admitted to sabotaging their company’s AI by […] intentionally using low-quality AI output in their work without fixing it”

      Lol. Sounds an awful lot like the company is sabotaging itself in this case.

    • greyscaleA
      7 hours ago

      The sabotage narrative did feel weak when I was listening to Natasha Bernal talking. It’s probably not sabotage, it’s just that their data is wank and the employees aren’t paid enough to care to fix it.

      • WanderingThoughts@europe.pub
        4 hours ago

        Just like doing your job is “quiet quitting,” AI sabotage means not spending unpaid overtime to completely redo the slop.