…workers admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.
tbh it just reads like people are just using AI, not actually sabotaging it. lol it’s such trash
“We have poor customer data safeguards, confidently present subpar work as acceptable, and have failed to adequately train our intended users but would like you to believe it’s all the users fault.”
The sabotage narrative did feel weak when I was listening to Natasha Bernal talking. It’s probably not sabotage; it’s just that their data is wank and the employees aren’t paid enough to care to fix it.
Lol. Sounds an awful lot like the company is sabotaging itself in this case.
Just like doing your job is “quiet quitting,” AI sabotage means not spending unpaid overtime to completely redo the slop.