Stable Diffusion and other AI systems are in uncharted legal waters.
The "almost exact copy" problem seems solvable? It should be possible to construct a similarity score between a generated image and any of the source images, based on some set of criteria.
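As a rough sketch of what such a score might look like (my own toy illustration, assuming same-sized grayscale images and nothing fancier than per-pixel difference — real systems would use perceptual metrics):

```python
import numpy as np

def pixel_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """0..1 similarity between two same-sized grayscale images,
    based on mean absolute pixel difference (1.0 = identical)."""
    if a.shape != b.shape:
        raise ValueError("images must have the same dimensions")
    diff = np.abs(a.astype(float) - b.astype(float))
    return 1.0 - diff.mean() / 255.0

# Two tiny 2x2 grayscale "images"
original = np.array([[0, 255], [128, 64]], dtype=np.uint8)
near_copy = np.array([[0, 250], [130, 64]], dtype=np.uint8)

print(pixel_similarity(original, original))   # 1.0 for an exact copy
print(pixel_similarity(original, near_copy))  # just under 1.0
```

A court would still have to pick the threshold and the criteria, of course — the math only makes the comparison repeatable.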
Great article! I think the big entertainment companies are the 800lb gorilla, waiting in the wings. I wouldn't want to be on the other side of Disney's copyright lawyers. Disney also has the connections in government to lobby for favourable legislation. I think it's also a little telling that the entertainment companies haven't been making a big stink about it. Disney would love a tool that spits out an endless stream of Spider-man colouring books.
Wow, this article really highlights the complexities of copyright issues in the AI industry! Excellent work, Tim! It's fascinating how Stable Diffusion's ability to generate new images based on latent representations can lead to potential copyright infringement, even when not intended.
It's definitely concerning that if plaintiffs win these lawsuits, the entire generative AI industry could be thrown into chaos, potentially giving even more power to big tech companies like Google, Microsoft, and Meta. I can't help but wonder how this will impact the future of AI startups and the industry as a whole....
What are your thoughts on how these copyright issues might be resolved in a way that still fosters innovation and competition in the AI space?
Will we be seeing any more technical aspects of AI and LLMs? I would love to see stories about how to fine-tune an open-source model. It would also be cool to see stories about how to write interfaces between LLMs and other software.
The lawsuits can't come quick enough! Shut this job-killing thievery down ASAP!
Doesn't this rest on the same (old) legal battle over "copying" vs "inspired by"? And don't we have lots of data points (case law) on this? At some point, if I (pre-AI) paint something that is obviously in the style of, say, Thomas Kinkade, and he sues me, the court will have to determine 1) what features of his art are covered by his intellectual property and 2) whether what I painted was close enough to one of his works to violate that. Same with any trademarked characters - you can draw Batman or Mickey Mouse with certain traits (that are public domain) but not with others (that are still covered). Any case has to analyze that and determine "this is 92% the same, and we think the threshold is 90%, therefore you're infringing." I recognize that's a nerdy computer-person way of thinking about it, but it boils down to the core of the case.
Except that for these AI image models, the judge/jury doesn't have to "think" about it - the software/model will tell you. For your example of the Ann Lotz image above, the model can take in two images and tell you how different they are, along a number of different axes: difference in pixels, shades, even a numeric slider (perhaps a rough estimate, given the imprecision of current models) of how much of a given style or source material was incorporated. Certainly I think it's fair to say that the generated picture is sufficiently similar to the original to count as a recreation (an AI image processor - perhaps even a facial recognition one - would likely label it as highly similar), but this is a thing that copyright infringement cases have always had to deal with.
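To make the "number of different axes" idea concrete, here's a toy sketch (again, my own illustration, not anything the real models actually expose) that scores two grayscale images on per-pixel difference, overall brightness, and tonal distribution:

```python
import numpy as np

def compare_images(a: np.ndarray, b: np.ndarray) -> dict:
    """Score two same-sized grayscale images on a few axes (1.0 = identical)."""
    a_f, b_f = a.astype(float), b.astype(float)
    pixel = 1.0 - np.abs(a_f - b_f).mean() / 255.0          # per-pixel difference
    brightness = 1.0 - abs(a_f.mean() - b_f.mean()) / 255.0  # overall shade
    # Tonal similarity: overlap of the two normalized intensity histograms
    ha = np.histogram(a, bins=16, range=(0, 256))[0] / a.size
    hb = np.histogram(b, bins=16, range=(0, 256))[0] / b.size
    tone = float(np.minimum(ha, hb).sum())
    return {"pixel": pixel, "brightness": brightness, "tone": tone}

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=(64, 64))
noisy_copy = np.clip(original.astype(int) + noise, 0, 255).astype(np.uint8)
unrelated = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

print(compare_images(original, noisy_copy))  # all three scores near 1.0
print(compare_images(original, unrelated))   # pixel score clearly lower
```

The interesting legal question is which axes count and where the cutoffs sit - the numbers themselves are cheap to produce.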
Again, maybe it's my computer-nerd reaction of "we've already designed an object-oriented template for this type of case!" or "everyone seems to be pointedly ignoring that in order to generate lots of legal fees," but it seems like this is something that is already well established in previous cases. The change is that it's now software remixing instead of the artist's brain - but I don't see how that changes the legal precedent. Then again, there's a reason that I Am Not A Lawyer.