• polakkenak@feddit.dk · 1 month ago

    No, absolutely not. It is safe to assume that most, if not all, open-source code (and much else besides) has been part of the training data. You need look no further than the fact that some models can recite Harry Potter from memory. There is no such thing as a “clean room” for AI.

    • Captain Beyond@linkage.ds8.zone · 1 month ago

      Ironically, though, this makes the reverse a bit more defensible (i.e. using an LLM to help reverse engineer a proprietary app), because that proprietary app’s source code is less likely to be part of the publicly available training data.

      But I imagine the corpos aren’t going to look fondly on that for obvious reasons.

    • StellarExtract@lemmy.zip · 1 month ago

      This really isn’t true, though, even if it currently holds in many cases. Case in point: if I wrote something and published it right now, it wouldn’t be part of any AI model yet. A party with a lot of money (like, say, a tech corporation) could easily create a bespoke coding model trained on everything except the desired libraries, thus achieving a “clean room”.
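
      The “train on everything except the desired libraries” idea above amounts to filtering the pretraining corpus before training. A minimal sketch of such a filter (all names and paths here are hypothetical, purely for illustration):

```python
from pathlib import PurePosixPath

# Hypothetical list of repos whose code must be excluded so the
# resulting model can plausibly claim "clean room" status.
EXCLUDED_REPOS = {"targetlib", "targetlib-examples"}

def keep_file(path: PurePosixPath) -> bool:
    """Keep a source file only if no path component names an excluded repo."""
    return not any(part in EXCLUDED_REPOS for part in path.parts)

def filter_corpus(paths):
    """Return only the files safe to include in the training set."""
    return [p for p in paths if keep_file(p)]

corpus = [PurePosixPath("corpus/numpy/core.py"),
          PurePosixPath("corpus/targetlib/api.py"),
          PurePosixPath("corpus/flask/app.py")]
print([str(p) for p in filter_corpus(corpus)])
# → ['corpus/numpy/core.py', 'corpus/flask/app.py']
```

      Whether a simple path filter is actually enough is another question: the excluded library’s code can leak back in through forks, vendored copies, and pasted snippets in unrelated repos, which is exactly the kind of contamination the parent comment is pointing at.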