OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.
I am not a lawyer, or a programmer for that matter, but the Copilot case looks pretty fucked. We can’t really get a look at the plaintiff’s examples, since they have to be kept anonymous. Generative models’ weights don’t copy and paste from their training data unless there’s been some kind of overfitting, and some cases of similar or identical code snippets might be inevitable given the nature of programming languages and common tasks. If the model was trained correctly, it should only ever reproduce infinitesimally tiny parts of its training data. We also can’t tell how much of the plaintiff’s code is being used, for the same reasons. The same is true of the plaintiff’s claims about the “Suggestions matching public code” setting.
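To make the memorization question concrete: one way people probe for verbatim regurgitation is to scan a model's output for long exact matches against a training corpus. Short matches are expected (common idioms get written the same way by everyone); very long exact runs suggest copying. This is only an illustrative sketch, not what GitHub or the plaintiffs actually do, and the 60-character threshold is an arbitrary number chosen for the example:

```python
def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest exact run shared by a and b (simple DP)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best


def looks_memorized(generated: str, corpus: list[str], threshold: int = 60) -> bool:
    """Flag output sharing a suspiciously long verbatim run with any training file.

    `threshold` is a made-up character count for illustration: short overlaps
    are inevitable in code, long ones are the interesting cases.
    """
    return any(longest_common_substring(generated, doc) >= threshold for doc in corpus)
```

The hard part, of course, is exactly where you draw that line: a 19-character match like `for i in range(n):` proves nothing, which is why "similar snippets exist" and "the model copied my code" are such different claims.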
This case is still in discovery and mired in secrecy; we may never find out what’s going on, even once the proceedings have concluded.