• 0 Posts
  • 6 Comments
Joined 1Y ago
Cake day: Jun 21, 2023


If you give me several paragraphs instead of a single sentence, do you still think it’s impossible to tell?


I don’t see how that affects my point.

  • Today’s AI detectors can’t reliably identify the output of today’s LLMs.
  • Future AI detectors WILL be able to identify the output of today’s LLMs.
  • Of course, future AI detectors won’t be able to identify the output of future LLMs.

So at any point in time, only recent text could be “contaminated”. The claim that “all text after 2023 is forever contaminated” just isn’t true. Researchers would simply have to be a bit more careful when including it.


Not really. If it’s truly impossible to tell the text apart, then it doesn’t really pose a problem for training AI. Otherwise, next-gen AI will be able to tell apart text generated by current-gen AI, and it will get filtered out. So only the most recent data will have unfiltered shitty AI-generated stuff, but they don’t train AI on super-recent text anyway.
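The filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not any real pipeline: `looks_ai_generated` is a made-up placeholder for a future detector, and the cutoff date is an assumption standing in for “when the generation being filtered was released”.

```python
# Hypothetical sketch: keep old documents as-is (they predate the models being
# filtered for), and run a detector only on documents published after the cutoff.
from datetime import date

CUTOFF = date(2023, 1, 1)  # assumed release date of the LLM generation to filter

def looks_ai_generated(text: str) -> bool:
    """Placeholder for a next-gen detector; a real one would score the text."""
    return "as an ai language model" in text.lower()

def filter_corpus(docs):
    """docs is a list of (text, publication_date) pairs; returns texts to train on."""
    kept = []
    for text, published in docs:
        if published < CUTOFF or not looks_ai_generated(text):
            kept.append(text)
    return kept

corpus = [
    ("A 2019 news article about weather.", date(2019, 5, 1)),
    ("As an AI language model, I cannot...", date(2024, 2, 1)),
    ("A 2024 blog post written by a human.", date(2024, 3, 1)),
]
print(filter_corpus(corpus))  # drops only the flagged post-cutoff document
```

The key design point is the date check: pre-cutoff text never needs the detector at all, which is why only the most recent slice of a corpus is ever at risk.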


They don’t redistribute. They learn information about the material they’ve been trained on - not the material itself* - and can use it to generate material they’ve never seen.

  • Bigger models seem to memorize some of the material and can infringe, but that’s not really the goal.

Language models actually do learn things in the sense that the information encoded in the trained model isn’t usually* taken directly from the training data; instead, it’s information that describes the training data, but is new. That’s why they can generate text that’s never appeared in the data.

  • the bigger models seem to remember some of the data and can reproduce it verbatim, but that’s not really the goal.
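The point above can be shown with a deliberately tiny toy: a character-bigram “model” stores only which letter pairs occur in the training words (a description of the data), yet it licenses words that never appeared in training. The training set and word length here are arbitrary choices for illustration.

```python
# Toy bigram model: "training" records adjacent letter pairs; "generation"
# is any 3-letter string whose pairs the model has seen.
from itertools import product

training = {"bat", "ban", "tan"}

# Training: collect every adjacent letter pair from the training words.
bigrams = {(w[i], w[i + 1]) for w in training for i in range(len(w) - 1)}

# Generation: enumerate all 3-letter strings consistent with the learned pairs.
alphabet = sorted({c for w in training for c in w})
generated = {
    a + b + c
    for a, b, c in product(alphabet, repeat=3)
    if (a, b) in bigrams and (b, c) in bigrams
}

novel = generated - training
print(sorted(novel))  # → ['ata', 'tat'] - words never seen in training
```

The model never stores any training word whole, only four letter pairs, yet it can output “tat”, which appears nowhere in the data - a miniature version of generating text that describes, rather than reproduces, what it was trained on.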

It’s specifically distribution of the work or derivatives that copyright prevents.

So you could make an argument that an LLM that’s memorized the book and can reproduce (parts of) it upon request is infringing. But one that’s merely trained on the book, but hasn’t memorized it, should be fine.