
And I'd like to especially thank my friend and colleague, Ranking Member Thom Tillis, and his staff for working with us on such a collaborative basis to put this hearing together. Welcome back from Vilnius; Senator Tillis was over at the NATO summit, and I'm thrilled he's able to join us and we're able to do this hearing today. This is our second hearing in as many months on the intersection of artificial intelligence and intellectual property law and policy. You and your team have been great partners in pursuing this. If you will indulge me for a moment, Senator, before I proceed with my remarks, I'd like to ask that we play just a little clip of something to frame the challenges of this topic, made with the permission of all the relevant rights holders. [laughter] "Start spreading the news. Hey, I got something to say. It's growing its own way of learning the rules today. Each digital mind, they are making the headlines. Creating new sounds in a world." Thank you for your forbearance. Yes, a round of applause is certainly welcome. [laughter] My team actually produced a version of that where it is a duet between me and Frank Sinatra, but my voice came out so horribly flat I didn't want to impose that on any of you. Thank you, Mr. Chair, for your judgment. [laughter] Creating the song "AI, AI" to the tune of "New York, New York" was great fun, and I appreciate my team that worked so hard on pulling that off. But the very existence of it highlights a couple of the core questions around copyright raised by generative artificial intelligence. ChatGPT wrote the lyrics following the style of "New York, New York," although perhaps not quite as moving and inspiring as the original words to any but IP enthusiasts. Another generative AI tool was used to take Mr. Sinatra's recorded songs, his voice, his phrasing and his style, and set that to music. So, a couple of those core questions: Did ChatGPT infringe the copyright on "New York, New York" when it drafted lyrics representing its lyrical style? What about the AI tool that set those lyrics to music? Did either that tool or I run afoul of Mr. Sinatra's rights by mimicking his voice in my case? No, because we got specific approval. Or did I just use AI tools to enhance my own creativity? And if so, should this newly created song be entitled to copyright protection? These are just a few of the questions I hope we will explore with our panel of talented and insightful witnesses as we consider the impact of artificial intelligence on copyright law and policy and the creative community. As we all know, AI is rapidly developing. Generative AI tools have brought many new people into the creative fold and have opened new avenues for innovation. People who may not have ever considered themselves creatives are now using these tools to produce new works of art. Artists themselves have used AI tools to enhance their own creativity. Paul McCartney recently made headlines announcing that AI helped create the very last Beatles song, 50 years after the band broke up. As I've previewed, AI creates new copyright law issues, including whether using copyrighted content to train AI models is copyright infringement, whether AI-generated content should be given copyright protection, and many more.
These questions are working their way through the courts. How the courts decide them, and the decisions we make here in the Senate and in Congress about how to respect and reinforce existing copyright protections and works, will have significant consequences not just for the creative community, but for our overall competitiveness as a country. While generative AI models are trained on copyrighted content, IP considerations haven't been included or sufficiently considered in proposed AI regulatory frameworks here in the U.S. In contrast, some of our competitors recognize IP policy as an important tool. The EU is currently planning to require AI companies to publish the copyrighted materials used in training models. The UK provides copyright protection for computer-generated works. These are just some initial concerns, and I think there are initial steps that we can take to ensure sustained U.S. leadership on artificial intelligence. First, it's critical to include IP considerations in any regulatory framework for artificial intelligence and to give our Copyright Office a seat at the table in that framework. We should also consider whether changes to our copyright laws, or whole new protections like a federal right of publicity, may be necessary to strike the right balance between creators' rights and AI's ability to enhance innovation and creativity. I'm excited to explore these issues today. We've got a great panel, a great partner and great members of the committee, and with Senator Tillis's cooperation we've assembled this wonderful panel. I'll introduce them shortly, but first, I'll turn it over to my Ranking Member, Senator Thom Tillis.

Sen. Tillis: Thank you, Mr. Chairman, and thanks to everyone here. It's great to see the number of participants in the audience. It's even more amazing to see an equal number who would like to get in here, but who says AI and IP can't be sexy? But you know, in all seriousness, I appreciate that we're having another hearing and the opportunity to highlight the importance of intellectual property when it comes to emerging technologies, and today we're talking about AI. During our last hearing, we discussed the impact of AI in a patent context, which explored ideas such as whether or not AI can be considered an inventor, and it cannot, and hopefully it will not in the future. But while many of the issues we discussed in the last hearing were prospective, the creative community is experiencing immediate and acute challenges due to the impact of generative AI in a copyright context. Strong, reliable, predictable IP rights are paramount to incentivizing U.S. innovation and creativity. It's this innovation and creativity that fuels the growth of our country's prosperity and drives enormous economic growth. In fact, core copyright industries add $1.8 trillion of value to U.S. GDP, accounting for almost 8% of the U.S. economy. These copyright industries also employ 9.6 million American workers. The sales of major U.S. copyright products in overseas markets also constitute $230 billion and outpace exports of other major U.S. industries. Advances in generative AI have raised new questions regarding how copyright principles such as authorship, infringement and fair use will apply to content created or used by AI. We must not only consider how our current IP laws apply to the field of generative AI, but also what changes, if any, may be necessary to incentivize future AI innovators and creators. So, Chairman Coons, I'm happy we're having this hearing. I'll submit the remainder of my statement for the record.
But for those of you who have watched our committee over the past several Congresses, where either Senator Coons or I were in the Ranking Member or chairmanship role, I think, if anything, I hope people understand that we're very thorough, we're very persistent in our approach, and we're inclusive. I've told everyone on this issue, whichever end of the spectrum you're at: if you're at the table and in the work groups, we're gonna find a reasonable solution and compromises. If you're outside of the work group process and you're just taking shots at it, you may find yourself on the table, from my perspective. So we encourage you to get to the table and make what we're doing better. The reason why I think it's so important, and I'm glad the IP subcommittee is leading on this in terms of formal hearings with a focus on potentially drafting legislation, is that I think we run the risk of some in Congress thinking AI is bad, that it's a threat to the future. I'm not in that camp. I think that AI is good. It's something that I first developed expertise in back in the late eighties and have followed ever since. It's a matter of getting the regulatory construct, the intellectual property construct and all the other underlying policies that you need when a new, and I think positive, disruptive technology hits the field. So the reason that we need to move forward and address potential concerns is precisely because I want the United States to lead in innovation, and so much innovation is going to be premised on properly and responsibly exploiting these capabilities. That's what I hope we learn in this hearing and subsequent hearings and work groups. So thank you all for being here, and thank you, Mr. Chair, for having the hearing.

Sen. Coons: Thank you, Senator Tillis. I'm now going to turn to our witness panel. Today we welcome five witnesses to testify about the intersection of artificial intelligence and copyright law. Our first witness is Mr. Ben Brooks, head of public policy at Stability AI, a company that develops a range of AI models that help users generate images, text, audio and video. Next, we have Dana Rao, executive vice president, general counsel and chief trust officer at Adobe. I'd like to be chief trust officer in the United States; they don't have titles like that. Mr. Rao leads Adobe's legal, security and policy organization, including Adobe's Content Authenticity Initiative, which promotes transparency principles around the use of AI. Next, we have Professor Matthew Sag, a professor of law in artificial intelligence, machine learning and data science at Emory University School of Law. Professor Sag is a leading U.S. authority on the fair use doctrine in copyright law and its implications for researchers in text and data mining, machine learning and AI. Next, we'll hear from Karla Ortiz, a concept artist, illustrator and fine artist who has worked on a variety of well-known and widely enjoyed projects including Jurassic World, Black Panther and Loki, and who is most famous, in my assessment at least, for designing Doctor Strange for Marvel's first Doctor Strange film. Welcome. Last but certainly not least, we have Jeffrey Harleston, general counsel and executive vice president of business and legal affairs for Universal Music Group. Mr. Harleston is responsible for overseeing business transactions, contracts and litigation for all of Universal Music Group's worldwide operations in more than 60 countries. After I swear all of you in, each of you will have five minutes to make an opening statement.
We'll then proceed to questioning by each senator, depending on attendance and time. We will have a first round of five minutes. We may well have a second round of five minutes, and we may be the only two left for a third round of five minutes, but we'll see. So could all the witnesses please stand to be sworn in? Please raise your right hand. Do you swear or affirm that the testimony you are about to give before this committee will be the truth, the whole truth and nothing but the truth, so help you God? Thank you. Mr. Brooks, you may proceed with your opening statement.

Mr. Brooks: Thank you, Chair Coons and Ranking Member Tillis, for the opportunity to testify today. AI can help to unlock creativity, drive innovation and open up new opportunities for creators and entrepreneurs across the United States. As with any groundbreaking technology, AI raises important questions, and we recognize the depth of concern among creators. While we don't have all the answers, we're committed to an open and constructive dialogue, and we're actively working to address emerging concerns through new technology standards and good practices. At Stability AI, our goal is to unlock AI's potential. We develop a range of AI models. These models are essentially software programs that can help a user to create new content. Our flagship model, Stable Diffusion, can take plain-language instructions from a user and help to produce a new image. We're also working on research into safe language models that can help to produce new passages of text or software code. AI is a tool that can help to accelerate the creative process. In our written testimony, we shared examples of how Broadway designers, architects, photographers and researchers are using our models to boost their productivity, experiment with new concepts or even study new approaches to diagnosing complex medical disorders. We are committed to releasing our models openly, with appropriate safeguards. That means we share the underlying software as a public resource. Creators, entrepreneurs and researchers can customize these models to develop their own AI tools, build their own AI businesses and find novel applications for AI that best support their work. Importantly, open models are transparent: we can look under the hood to scrutinize the technology for safety, performance and bias. These AI models study vast amounts of data to understand the subtle relationships between words, ideas and visual or textual features, much like a person visiting an art gallery or library to learn how to draw or how to write. They learn the irreducible facts and structures that make up our systems of communication, and through this process they develop an adaptable body of knowledge that they can then apply to help produce new and unseen content; in other words, compositions that did not appear in the training data and may not have appeared anywhere else. These models don't rely on a single work in their training data, nor do they store that training data. Instead, they learn by observing recurring patterns over billions of images and trillions of words of text. We believe that developing these models is an acceptable and socially beneficial use of existing content that is permitted by fair use and helps to promote the progress of science and useful arts. Fair use and a culture of open learning are essential to recent developments in AI. They are essential to help make AI useful, safe and unbiased, and it is doubtful that these groundbreaking technologies would be possible without them.
The U.S. has established global leadership in AI thanks in part to an adaptable and principles-based fair use doctrine that balances creative rights with open innovation. We acknowledge emerging concerns; these are early days and we don't have all the answers. But we're actively working to address these concerns, as I say, through technology standards and good practices. First, we have committed to voluntary opt-outs so that creators can choose if they don't want their online work to be used for AI training. We've received opt-out requests for over 160 million images to date, and we're incorporating these into upcoming training. We're hoping to develop digital opt-out labels as well that follow the content wherever it goes on the internet. Second, we're implementing features to help users and tech platforms identify AI content. Images generated through our platform can be digitally stamped with metadata and watermarks to indicate if the content was generated with AI. These signals can help ensure that users exercise appropriate care when interacting with AI content and help tech platforms distinguish AI content before amplifying it online. We welcome Adobe's leadership in driving the development of some of these open standards. Third, we've developed layers of mitigations to make it easier to do the right thing with AI and harder to do the wrong thing. Today, we filter datasets for unsafe content. We test and evaluate our models before release. We apply ethical-use licenses, disclose known risks, filter content generated through our computing services and implement new techniques to mitigate bias. As we integrate AI into the digital economy, we believe the community will continue to value human-generated art, and perhaps value it at a premium. Smartphones didn't destroy photography, and word processors didn't diminish literature, despite radically transforming the economics of creation. Instead, they gave rise to new demand for services, new markets for content and new creators. We expect the same will be true of AI, and we welcome an ongoing dialogue with the creative community about the fair deployment of these technologies. Thank you for the opportunity to testify, and we welcome your questions.

Sen. Coons: Thank you, Mr. Brooks. Mr. Rao.

Mr. Rao: Chair Coons, Ranking Member Tillis, members of the committee, thank you for the opportunity to testify here today. My name is Dana Rao, and I'm general counsel and, as Senator Coons noted, chief trust officer at Adobe. I'm happy to provide you with the secret certificate you need to get that title, if you'd like, after the hearing. Since our founding in 1982, Adobe has pioneered transformative technologies in all types of digital creation, from digital documents like PDF to image editing with Photoshop. Our products allow our customers, who range from aspiring artists to wartime photojournalists to advertisers and more, to unleash their creativity, protect their craft and empower their businesses in a digital world. AI is the latest disruptive technology we've been incorporating into our tools to help creators realize their potential. You've all seen the magic of text-to-image generative AI: type in the prompt "cat driving a 1950s sports car through the desert," and in seconds you'll see multiple variations of a cat on a retro road trip appear before your eyes. We've launched generative AI in our own tools, Adobe Firefly, and it has proved to be wildly popular with our creative professionals and consumers alike.
In my written testimony, I explore a comprehensive framework for responsible AI development that includes addressing misinformation, harmful bias, creative rights and intellectual property. Today, given Adobe's focus on our millions of creative customers and our leadership in AI, I will focus on how the United States can continue to lead the world in AI development by both supporting the access to data that AI requires and strengthening creator rights. The question of data access is critical for the development of AI because AI is only as powerful and as good as the data on which it is trained. Like the human brain, AI learns from the information you give it; in the AI's case, the data it is trained on. Training on a larger dataset can help ensure your results are more accurate, because the AI has more facts to learn from. A larger dataset will also help the AI avoid perpetuating harmful biases in its results by giving it a wider breadth of experiences from which it can build its understanding of the world. More data means better answers and fewer biases. Given those technical realities, the United States and governments should support access to data to ensure that AI innovation can flourish accurately and responsibly. However, one of the most important implications of AI's need for data is the impact on copyright and creators' rights. There are many outstanding questions in this space, including whether creating an AI model, which is a software program, from a set of images is a permitted fair use, and whether that analysis changes if the output of that AI model is an image that is substantially similar to an image on which it was trained. These questions will certainly be addressed by courts and perhaps Congress, and we are prepared to assist in those discussions. Adobe recognized the potential impact of AI on creators and society, and we have taken several steps. First, we train our own generative AI tool, Adobe Firefly, only on licensed images from our Adobe Stock collection, which is a stock photography collection, on openly licensed content, and on works that are in the public domain where the copyright has expired. This approach supports creators and customers by training on a dataset that is designed to be commercially safe. In addition, we're advocating for other steps we can all take to strengthen creator rights. First, we believe creators should be able to attach a "do not train" tag to their work. With industry and government support, we can ensure AI data crawlers will read and respect this tag, giving creators the option to keep their data out of AI training datasets. Second, creators using AI tools want to ensure they can obtain copyright protection over their work in this new era of AI-assisted digital creation. AI output alone may not receive copyright protection, but we believe the combination of human expression and AI expression will and should. Content editing tools should enable creators to obtain a copyright by allowing them to distinguish the AI work from the human work. In my written testimony, I discuss our open, standards-based technology, Content Credentials, which can help enable both of these creative protections. Finally, even though Adobe has trained its AI on permitted work, we understand the concern that an artist can be economically dispossessed by an AI trained on their work that generates art in their style, as in the Frank Sinatra example you gave.
We believe artists should be protected against this type of economic harm, and we propose Congress establish a new federal anti-impersonation right that would give artists a right to enforce against someone intentionally attempting to impersonate their style or likeness. Holding people accountable who misuse AI tools is a solution we believe goes to the heart of some of the issues our customers have, and this new right would help address that concern. The United States has led the world through technological transformations in the past, and we have all learned it is important to be proactively responsible about the impact of these technologies. Pairing innovation with responsible innovation will ensure that AI ultimately becomes a transformative and true benefit to our society. Thank you, Chair Coons, Ranking Member Tillis and members of the committee.

Sen. Coons: Thank you. Professor?

Mr. Sag: Chair Coons, Ranking Member Tillis, members of the subcommittee, thank you for the opportunity to testify here today. I'm a professor of law in AI, machine learning and data science at Emory University, where I was hired as part of Emory's AI.Humanity initiative. Although we're still a long way from the science fiction version of artificial general intelligence that thinks, feels and refuses to open the pod bay doors, recent advances in machine learning and artificial intelligence have captured the public's attention and, apparently, lawmakers' interest. We now have large language models, or LLMs, that can pass the bar exam, carry on a conversation, create new music and new visual art. Nonetheless, copyright law does not and should not recognize computer systems as authors, even where an AI produces images, text or music that is indistinguishable from human-authored works. It makes no sense to think of a machine learning program as the author. The Copyright Act rightly reserves copyright for original works of authorship. As the Supreme Court explained long ago in the 1884 Burrow-Giles Lithographic case, authorship entails original intellectual conception, and AI can't produce a work that reflects its own original intellectual conception because it has none. Thus, when AI models produce content with little or no human oversight, there is no copyright in those outputs. However, humans using AI as tools of expression may claim authorship if the final form of the work reflects their original intellectual conception in sufficient detail, and I've elaborated in my written submissions on how this will depend on the circumstances. Training generative AI on copyrighted works is usually fair use because it falls into the category of non-expressive use. Courts addressing technologies such as reverse engineering, search engines and plagiarism detection software have held that these non-expressive uses are fair use. These cases reflect copyright's fundamental distinction between protected original expression and unprotected facts, ideas and abstractions. Whether training an LLM is a non-expressive use depends ultimately on the outputs of the model. If an LLM is trained properly and operated with appropriate safeguards, its outputs will not resemble its inputs in a way that would trigger copyright liability. Training such an LLM on copyrighted works would thus be justified under current fair use principles. It's important to understand that generative AI models are not designed to copy original expression. One of the most common misconceptions about generative AI is the notion that the training data is somehow copied into the model.
Machine learning models are influenced by the data; they would be pretty useless without it, but they typically don't copy the data in any literal sense. So rather than thinking of an LLM as copying the training data like a scribe in a monastery, it makes more sense to think of it as learning from the training data like a student. If an LLM like GPT-3 is working as intended, it doesn't copy the training data at all. The only copying that takes place is when the training corpus is assembled and preprocessed, and that is what you need a fair use justification for. Whether a generative AI produces truly new content or simply conjures up an infringing cut-and-paste of works in the training data depends on how it is trained. Accordingly, companies should adopt best practices to reduce the risk of copyright infringement and other related harms, and I've elaborated on some of these best practices in my written submission. Failure to adopt best practices may potentially undermine claims of fair use. Generative AI does not, in my opinion, require a major overhaul of the U.S. copyright system at this time. If Congress is considering new legislation in relation to AI and copyright, that legislation should be targeted at clarifying the application of existing fair use jurisprudence, not overhauling it. Israel, Singapore and South Korea have recently incorporated fair use into their copyright statutes because these countries recognize that the flexibility of the fair use doctrine gives U.S. companies and U.S. researchers a significant competitive advantage. Several other jurisdictions, most notably Japan, the United Kingdom and the European Union, have specifically adopted exemptions for text and data mining that allow use of copyrighted works as training for machine learning and other purposes. Copyright laws should encourage the developers of generative AI to act responsibly. However, if our laws become overly restrictive, then corporations and researchers will simply move key aspects of technology development overseas to our competitors. Thank you very much.

Sen. Coons: Thank you, Professor. Ms. Ortiz?

Ms. Ortiz: Chairman Coons, Ranking Member Tillis and esteemed members of the committee, it is an honor to testify before you today about AI and copyright. My name is Karla Ortiz. I am a concept artist, illustrator and fine artist, and you may not know my name, but you know my work. My paintings have shaped the worlds of blockbuster Marvel films and TV shows, including Guardians of the Galaxy, Black Panther and Loki. But the one I am happiest about is that my work helped shape the look of Doctor Strange in the first Doctor Strange movie. I have to brag about that a little bit, sir. I love what I do. I love my craft. Artists train their entire lives to be able to bring the imaginary to life. All of us who engage in this craft love every little bit of it. Through hard work, the support of loved ones and dedication, I have been able to make a good living from my craft in the entertainment industry, an industry that thrives when artists' rights to consent, credit and compensation are respected. I have never worried about my future as an artist until now. Generative AI is unlike any other technology that has come before. It is a technology that uniquely consumes and exploits the hard work, creativity and innovation of others. No other tool is like this.
What I found when first researching AI horrified me. I found that almost the entirety of my work, the work of almost every artist I know, and the work of hundreds of thousands of artists had been taken without our consent, credit or compensation. These works were stolen and used to train for-profit technologies with datasets that contain billions of image and text data pairs. Through my research, I learned many AI companies gather copyrighted training data by relying on a practice called data laundering. This is where a company outsources data collection to a third party under the pretext of research, to then immediately use it commercially. I found these companies use big terms like "publicly available data" or "openly licensed content" to disguise their extensive reliance on copyrighted works. No matter what they're saying, these models are illegally trained on copyrighted works. To add even more insult to injury, I found that these for-profit companies were not only permitting users to use our full names to generate imagery, but encouraging it. For example, Polish artist Greg Rutkowski has had his name used as a prompt in AI products over 400,000 times, and those are the lower ends of the estimate. My own name, Karla Ortiz, has also been used by these companies thousands of times. Never once did I give consent. Never once have I gotten credit. Never once have I gotten compensation. It should come as no surprise that major productions are replacing artists with generative AI. Goldman Sachs estimates that generative AI will diminish or outright destroy approximately 300 million full-time jobs worldwide. As Ranking Member Tillis mentioned earlier, copyright-reliant industries alone contribute $1.8 trillion of value to U.S. GDP, accounting for 7.76% of the entire U.S. economy. This is an industry that employs 9.6 million American workers alone. The game plan is simple: go as fast as possible, create mesmerizing tales of progress, and normalize the exploitation of artists as quickly as possible. They hope that when we catch our breath, it'll be too late to right the wrongs, and exploiting Americans will become an accepted way of doing things. But that game can't succeed, because we are here now, giving this issue the urgency it so desperately deserves. Congress should act to ensure what we call the three C's and a T: consent, credit, compensation and transparency. The work of artists like myself was taken without our consent, credit or compensation and then used to compete with us directly in our own markets, an outrageous act that under any other context would immediately be seen as unfair, immoral and illegal. Senators, there is a fundamental fairness issue here. I'm asking Congress to address this by enacting laws that require these companies to obtain consent, give credit, pay compensation and be transparent. Thank you.

Sen. Coons: Thank you, Ms. Ortiz. Last but certainly not least, Mr. Harleston.

Mr. Harleston: Thank you, Chairman Coons, Ranking Member Tillis and members of the committee. It's an honor to be here before you today. I'm Jeff Harleston. I'm the general counsel of Universal Music Group. And what is Universal Music Group? We're the world leader in music-based entertainment. We are home to legendary record labels such as Motown, Def Jam, Island, Blue Note and Capitol, just to name a few. We have a music publishing company that signs songwriters, and we have a music merchandising company as well, and an audiovisual division that produces award-winning documentaries based on music.
We develop artists across every musical genre. I think it's fair to note that Frank Sinatra is one of our artists, and based on what we didn't hear today, I'm not sure if we'll be pursuing a developing artist out of Delaware named Chris Coons, but maybe. We will get back to that. All jokes aside, I've been honored to be with the company for 30 years. Most of that time I've spent as a lawyer, but I've also spent some time leading the Def Jam label and on the management team of Geffen Records, so I have been on both sides of the business. We have also helped broker deals with digital services platforms and social media outlets where you, all of you, can access the music that you love. It's been my life's honor to work with countless talented and creative artists. Their creativity is the soundtrack to our lives, and without the fundamentals of copyright, we might not have ever known them. I'd like to make four key points to you today. First, copyright, artists and human creativity must be protected. Art and human creativity are central to our identity. Artists and creators have rights, and they must be respected. If I leave you with one message today, it is this: AI in the service of artists and creativity can be a very, very good thing. But AI that uses, or worse yet appropriates, the work of these artists and creators and their creative expression, their name, their image, their likeness, their voice, without authorization, without consent, simply is not a good thing. The second point I want to make is that generative AI raises challenging issues in the copyright space. I think you've heard from the other panelists, and they all would agree. We at Universal are the stewards of tens of thousands, if not hundreds of thousands, of copyrighted creative works from our songwriters and artists, and they've entrusted us to honor, value and protect them. Today, those works are being used to train generative AI systems without authorization. This irresponsible AI is violative of copyright law and completely unnecessary. There is a robust digital marketplace today in which thousands of responsible companies properly obtain the rights that they need to operate. There's no reason that the same rules that apply to everyone else should not apply equally to AI companies and AI developers. My third point: AI can be used responsibly to enhance artistic expression. Just like other technologies before it, artists can use AI to enhance their art. AI tools have long been used in recording studios for drum tracks, chord progressions and even creating immersive audio technologies. One of our distributed artists used a generative AI tool to simultaneously release a single in six languages in his own voice, on the same day. The generative AI tool extended the artist's creative intent and expression, with his consent, to new markets and fans instantly. In this case, consent is the key. There is no reason we can't create a legitimate AI marketplace in the service of artists. There's a robust free market for music sampling, synchronization licensing, and deals with new entrants to the digital marketplace, social media companies and all manner of new technologies. We can do the same thing with AI. And my fourth and final point: to cultivate a lawful, responsible AI marketplace, Congress needs to establish rules that ensure creators are respected and protected. One way to do that is to enact a federal right of publicity.
Deepfakes and/or unauthorized recordings or visuals of artists generated by AI can lead to consumer confusion, unfair competition against the artist that actually was the original creator, market dilution, and damage to the artist's reputation, potentially irreparably harming their career. An artist's voice is often the most valuable part of their livelihood and public persona, and to steal it, no matter the means, is wrong. A federal right of publicity would clarify and harmonize the protections currently provided at the state level. Visibility into AI training data is also needed. If the data on which AI is trained is not transparent, then the potential for a healthy marketplace will be stymied, as information on infringing content would be largely inaccessible to individual creators. And I might add, based on some of the comments I heard earlier, it would be hard to opt out if you don't know what's been opted in. Finally, AI-generated content should be labeled as such. We are committed to protecting our artists and the authenticity of their creative works. As you all know, consumers deserve to know exactly what they're getting. I look forward to the discussion this afternoon, and I thank you for the opportunity to present my point of view. Thank you.

Sen. Coons: Thank you, Mr. Harleston, and thank you to all of the witnesses today. We'll begin our first five-minute round, and I'm going to start by just exploring how we can respect existing copyrighted works and copyright protections while continuing to safely develop and advance AI technologies. If we run out of time, we'll do a second round; my hunch is there's at least that much interest. Mr. Brooks, if I might just start with you: generative AI models like those your company creates are trained in no small part on vast quantities of data from copyrighted content. Do the copyright owners know if their works have been used to train Stability's models? Is Stability paying rights holders for that use, and if not, why not? And how would doing so impact your business and your business model?

Mr. Brooks: Thank you, Senator. To the first question, models like Stable Diffusion are trained on open datasets or curated subsets of those datasets. Stable Diffusion, for example, starts from a 5 billion image dataset; we filter that for content, bias and quality, and then we use a 2 billion image subset to train a model like Stable Diffusion. Because it's open, you can go to a website, you can type in the URL of an image, you can type in a name, and you can see if that work has appeared in the training dataset. And then we're working with partners to take those opt-out requests and, as I say, to incorporate them into our own training and development processes. So we do think open datasets are important. They're one part of how we are able to inspect AI for fairness and bias and safety. And so that's the first part.

Sen. Coons: So if I heard you right, if an artist takes the initiative to search your training set, they might be able to identify that a copyrighted work was used and then submit an opt-out request, and you are in the process of facilitating that. But to my second question: do you pay any of the rights holders?

Mr. Brooks: As I say, Senator, this is 2 billion images, a large amount of content, a lot of it, all kinds of content. Take language models, for example: it's not just books, it's snippets of text from all over the internet.
As I say, to make that workable, we believe it is important to have that diversity and to have that scale. That's how we make these models safe; it's how we make them effective. And so we collect it, as I say, from online data. What I will say is that the datasets that we use, like that 5 billion image dataset I mentioned, respect protocols like robots.txt. Robots.txt is a digital standard that basically says, "I want my website to be available for ancillary purposes such as search engine indexing." The dataset that was compiled respected that robots.txt signal, and then on top of that, as I say, we have the opt-out facility that we've implemented.

Sen. Coons: Thank you. Mr. Rao, it's my understanding that Adobe is taking a distinctly different approach. Your generative AI model Firefly was only trained on licensed data. Were there any downsides, economically, to that decision? Is your model less robust, or has it had any impact on its performance? And how would you compare these two approaches in terms of the incorporation of opt-out versus licensed content?

Mr. Rao: Thank you for the question. As I mentioned in the opening remarks, Firefly, our generative AI tool, was trained on our stock photography collection, which are all licensed assets with the contributors, and that's actually the only data used in the version that you can use of Firefly on the web. When we think about the quality, to your question, we had to put a lot of image science behind that to make sure it was up to the level we require, because we didn't have the most expansive version of that dataset. So we had to put more computer science behind it to make it have the higher quality we needed. As we go forward, we're looking at whether or not there are other areas where we need to supplement that dataset, and then there is what we refer to as openly licensed content, or places where the copyright has expired. Openly licensed, to us, means images that come from rights holders who have licensed them without restriction, so very similar to what we're talking about with the licensed content. This is also what we call commercially safe.

Sen. Coons: My sense, Mr. Brooks, is Stability is trying to honor something like 160 million opt-out requests in training your next model. Mr. Brooks, this will be my last question, then I'll turn to Senator Tillis. Should Congress be working to ensure that creatives can opt out of having their works used to train AI models? How would you best do that, briefly?

Mr. Rao: So we have this technology we refer to as Content Credentials, which I mentioned in my opening remarks. What that does is attach metadata that goes along with any content. So if you're in Photoshop right now, you could say, "I want content credentials to be associated with this image." As part of that, you can choose to say, "I want it not to be trained on," and a do-not-train tag gets associated with the image and goes wherever the content goes. So we do think the technology is there and available, and we're talking to other companies, including Stability, about this as an approach to honor that tag, so that when people are crawling, they can see the tag and choose not to train on that content.

Sen. Coons: Should we require that?

Mr. Rao: I do think that there's an opportunity for Congress to mandate the carrying of a tag like that, a credential like that, wherever the content goes. Right now, it's a voluntary decision to choose to do that.
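As a rough illustration of the two opt-out signals described in this exchange, robots.txt and a machine-readable do-not-train tag, the sketch below shows how a training-data crawler might check both before adding an image to a dataset. It is a minimal sketch only: the crawler name and the metadata key are hypothetical assumptions, and it is not how Stability AI's pipeline or Adobe's Content Credentials actually work.

# A minimal, hypothetical sketch of honoring opt-out signals at crawl time.
# The crawler name and the "do_not_train" key are illustrative assumptions.
import urllib.robotparser
from urllib.parse import urlparse

CRAWLER_NAME = "example-training-bot"  # hypothetical user agent

def allowed_by_robots(image_url: str) -> bool:
    """Check the site's robots.txt before fetching the image at all."""
    parts = urlparse(image_url)
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()
    return parser.can_fetch(CRAWLER_NAME, image_url)

def allowed_by_content_tag(metadata: dict) -> bool:
    """Respect a do-not-train tag carried in the content's metadata (hypothetical key)."""
    return str(metadata.get("do_not_train", "false")).lower() != "true"

def include_in_training_set(image_url: str, metadata: dict) -> bool:
    """Only include an image if neither opt-out signal objects."""
    return allowed_by_robots(image_url) and allowed_by_content_tag(metadata)

In practice, the metadata check only works if the tag travels with the content wherever it goes, which is the point both witnesses make about standardized, machine-readable labels.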
Sen. Coons: Should we require that?

Mr. Brooks: There's some very interesting precedent internationally for this. The European Union has introduced certain kinds of text and data mining exceptions, and part of that is to say that you can use this for commercial and noncommercial purposes. There is an opt-out requirement, but the opt-out has to be machine readable. As I say, as a matter of practicality, when you're dealing with trillions of words of content, for example, or billions of images in this case, machine readability is important, and that's where these tags become an important part of how you implement it in practice.

Sen. Coons: I'll keep exploring this further.

Sen. Tillis: Thank you, Chair. I'm going to have Senator Blackburn go, and then I'll follow Senator Hirono.

Sen. Blackburn: Excellent. Thank you, Senator Tillis and Mr. Chairman. Thank you for the hearing today. It's so appropriate that we have this. I'm from Tennessee. We have thousands of artists and songwriters and musicians, but we also have actors and actresses, and we have authors and publishers. And everywhere I go, people are talking about the impact of AI, to the positive or the negative. You look at health care, you look at logistics, you look at autos, you look at entertainment, and there are pros and cons. But the one point of agreement is you've got to do something about this so that it is going to be fair and it's going to be level. Mr. Harleston, I want to come to you right off the bat because you mentioned the NIL issue, which I think is an imperative for artists to be able to own, and you also mentioned right of publicity laws, and of course those are state-level laws. As you rightly said, we don't have a federally preemptive right of publicity law, and I think a lot of people came to realize this with the dust-up over Drake and The Weeknd and "Heart on My Sleeve." This is something that does have to be addressed. So for the record, give us about 30 seconds, and then your capable team behind you can submit something longer in writing if you'd like, on the reason state-level publicity laws are not enough. In 30 seconds.

Mr. Harleston: State-level right of publicity laws are inconsistent from state to state. A federal right of publicity that really elevates the right of publicity to an intellectual property right is critically important.

Sen. Blackburn: I'm going to help you out with this. A federally preemptive right of publicity law would provide more of that constitutional guarantee to works like those Ms. Ortiz has mentioned.

Mr. Harleston: Absolutely.

Sen. Blackburn: All right.

Sen. Coons: And we will follow up with Mr. Harleston for you, Senator.

Sen. Blackburn: Yeah. Excellent. I think something in writing would be very helpful there. Now, I think it was very appropriate that you had Spotify and Apple Music take down "Heart on My Sleeve." Important to do. Talk about the role that the streaming platforms should play. Should they be the arbiter when it comes to dealing with this generative AI content?

Mr. Harleston: The streaming platforms, we acknowledge, are in a challenging position, but certainly there are some instances when it's clear that the content that has been submitted to them for distribution...

Sen. Blackburn: So a knowing and willingness standard would be nice.

Mr. Harleston: That would be nice, yes.

Sen. Blackburn: Professor, I want to come to you. This spring, the Supreme Court issued what I thought was a very appropriate decision in Warhol versus Goldsmith, and I was very pleased to see them come down on the side of the artist. I filed an amicus brief in this case arguing for strong fair use protections for creators.
Now, we've been through this thing in the music industry where fair use became a fairly useful way to steal my property, and artists don't want to go through that again. Right, Ms. Ortiz? It didn't work the way it was supposed to. And I would like for you to talk for a moment: should unlicensed AI ingestion of copyrighted works be considered fair use when the output of the AI replaces or competes with the human-generated work? Now, Ms. Ortiz has laid this out fairly well in her comments, and the Supreme Court has sided with the artist in Warhol versus Goldsmith, but this fair use standard comes into play every time we talk about our fabulous creative community and keeping them compensated. So the floor is yours.

Mr. Sag: Senator Blackburn, commercial replacement should not be the test. The test should be exactly what the Supreme Court said in the Andy Warhol case: is it significantly transformative? What that means in relation to training AI models is: does the output of the model bear too much resemblance to the inputs? Now, that's a different question from whether it is competing with the inputs, or whether it could be used as a commercial substitute. If you look at some of the old cases on reverse engineering software, companies were allowed to crack open software, find the secret keys to interoperability and build new competing products that did not contain any copyrightable expression, and the courts said that was fair use. So I think on current law the answer is no; potential substitution in terms of a competing product is not the test. The test is: are you taking an inappropriate amount of an artist's original expression?

Sen. Blackburn: My time has expired. Thank you for that. We just don't want it to become a fairly useful way to steal an artist's product. Thank you, Mr. Chairman.

Sen. Coons: Thank you, Senator Blackburn, and thank you for the passionate engagement you've always brought to these issues on behalf of the creative community.

Sen. Hirono: Thank you, Mr. Chairman. Mr. Harleston, whenever the idea of negotiating licenses is raised, people express concerns about how complex it would be and how AI platform developers could never possibly negotiate with all rights holders. But in the music context, at least, you have a lot of experience negotiating rights. Could you tell us a little bit about your industry's history of negotiating rights with digital music services and the lessons that history could teach us about whether rights negotiations would be possible with AI platforms?

Mr. Harleston: Thank you, Senator. As you referenced, we have had a long history with the transition of our business from a physical business to a digital business, and with having to encounter digital platforms that were very quickly adopted by consumers and had lots of our content on them. What we found was that ingenuity does play a role. It is not easy, but we were able to find ways to identify our copyrights and to work out licensing schemes that allowed the platforms to carry and distribute the music in a commercial environment that was positive for them, while at the same time allowing the artist to be properly compensated. This is on the music side; we have two sets of rights, which makes it even more complicated, but we have done great work over the years to develop systems that allow identifying not only the sound recording but also the underlying composition.
So it could be done, but what we need is help to make sure that everyone understands that there are rights that are affected and that the activity that is happening now is violative. And once they understand that what they are doing is violative, that brings them to the table so we can negotiate a deal.

Sen. Hirono: I note that in your testimony you said that consent is the key. So is your position that, before every artist's work can be used to train AI models, the companies that want to use that information have got to get the consent of that originator?

Mr. Harleston: In a very short answer, yes.

Sen. Hirono: You think that we are able to do this, knowing that these platforms incorporate billions and billions of pieces of information to train their AI models?

Mr. Harleston: Understanding that, it actually can be done, as the digital platforms that exist today license millions and millions of songs every week. So it is not a problem in that respect. There is metadata that we could license; we could absolutely do that, but there has to be an initiative on the side of the companies to reach out.

Sen. Hirono: Mr. Brooks, what I heard you say in your response to the chairman's question is that, for all of the data that you input into your model, you do not get the consent of the artist or originator. Is that correct, Mr. Brooks?

Mr. Brooks: Senator, we believe that, yes, if that image data is out on the internet and robots.txt says it can be subject to aggregated data collection, and if it's not subject to an opt-out request in our upcoming models, then certainly we will use those images, or potentially use those images if they pass our filters.

Sen. Hirono: So basically you do not pay for the data that you put in to train your models?

Mr. Brooks: For the base, the kind of initial training or teaching of these models with those billions of images, there is no arrangement in place.

Sen. Hirono: So you have Ms. Ortiz, who says that is wrong. Is that correct?

Ms. Ortiz: Wrong, 100%.

Sen. Hirono: I think you mentioned that your work has been used to train AI models and you have gotten not one cent for that use.

Ms. Ortiz: I have never been asked. I have never been credited. I have never been compensated one penny, and that's for the use of almost the entirety of my work, both personal and commercial, Senator.

Sen. Hirono: If there were a law that required compensation, then that compensation negotiation should be left to you and an entity such as Mr. Brooks's?

Ms. Ortiz: I love what I do, so I would not outsource it to an AI, but that is not a choice for me to make right now. That is what it is all about: it's about being able to have that choice, and artists don't have that right now.

Sen. Hirono: OK. Thank you. Thank you, Mr. Chair.

Sen. Tillis: I was actually inspired by one of the opening statements, so I went in and generated a cat driving a 1960 Corvette with a surfboard in it, and I produced that picture. It actually gave me four options; this is the one I found the most interesting. But it raised a question that I wanted to ask you, Mr. Brooks. If an artist looked at that and said that image of a sixties Corvette in South Beach was developed in part from their work, how does that artist then go about opting out? I am trying to get an understanding of your current opt-out policy, and one of the issues we have had here, not unrelated, is that we have had notice-and-takedown or notice-and-stay-down discussions in the past around creative works. So I am just trying to understand, and I think it will be a lengthy answer; for the record, a specific letter from you would be very helpful to me to understand how that opt-out process works.
I think I heard that you could embed within the works certain things that already indicate that the work should not be used, but I want to drill down, and we don't have time to do that now. In a twist of irony, I was wondering if any of the witnesses would suggest any work by other governmental bodies that we should use as a baseline. In other words, what good policy seems to be being discussed or passed, and what is particularly problematic, at either end of the spectrum, because I am sympathetic to the issues. Maybe we start with you, Professor. Are you aware of any Western democracies, and I'm not interested in what China is doing, because whatever they agree to they are going to rebuff anyway, but any best practices that we should look at out there, or bad practices or trends that we should avoid or be concerned with as we move forward?

Mr. Sag: I think that the European Union's approach, where they have different rules for commercial and noncommercial use, and opt-outs have to be respected for commercial uses such as text and data mining, has something to recommend it. But by the same token, I would note that opt-outs do not apply to researchers working at proper research institutions in the EU, nor do contractual overrides, which is a position that I cannot see Congress adopting. It is certainly something to look at. That's really it.

Sen. Tillis: Anyone else? Ms. Ortiz? I should also add, I have seen all of your works. At 11:00 last night I was talking about Guardians of the Galaxy with my colleagues as we were coming back from Vilnius.

Ms. Ortiz: It was a really fun project to work on, Senator. Thank you. So, what the artist community has suggested is that models be built starting from scratch with public domain-only works, work that belongs to everyone. Any expansion upon that is to be done by licensing. And there are a couple of reasons for this. Current opt-out measures are inefficient. Machine learning models, once trained on data, cannot forget, and machine unlearning procedures are just dead in the water right now. This is not according to me; this is according to machine learning experts in the field. Things like safety filters and prompt filters are so easily bypassed by users. Unfortunately, when companies say, "hey, opt out," there is no real way to do that. But even further, what happens if someone does not know how to write a robots.txt? How does a person who may not know the language, may not know the internet, or may not even know that their work is in there recognize that they need to opt out? That is why the community in particular has suggested over and over that opt-in should be the basic foundation of consent, credit and compensation.

Sen. Tillis: And Mr. Brooks, I can understand the challenges with opt-in versus opt-out in terms of the task you would have ahead of you, but what is your view of the concerns that creatives have expressed in this light, and of the current opt-out process or procedures you all have in place, which I would like to get information on for the record?

Mr. Brooks: Thank you, Ranking Member. I will say at the start that we do need to think through what the future of the digital economy looks like. What do incentives look like? How do we make these technologies a win-win for everybody involved? These are very early days. We don't have all the answers. We are working to see through what that looks like.

Sen. Tillis: I'm going to stick around for a second round and we will get deeper into that, but I want to defer to my colleague from California.

Sen. Padilla: Thank you, Senator.

Sen. Tillis: I yield.

Sen. Padilla: Thank you, Mr.
Chairman, and I want to thank you all for your testimony and participation today. Speaking of California, I cannot help but observe that California is very well represented on this panel, which is not only a point of pride for me as a senator from California but, frankly, not a surprise, since we are the creative and tech hub of the nation. Generative AI tools, as we've been talking about, represent remarkable opportunities and challenges for the creator community and broader society. I couldn't help but observe in the testimony from each of you a common goal of seeking to leverage and develop AI tools to complement and encourage human creativity and artistry while also respecting the rights and dignity of the original creators. It's a tall order, a delicate balancing act in many ways, but that seems to be the shared objective here. So I want to thank you again for participating in this hearing as we work to determine what role we play in fostering the development of AI in a manner that is a net positive for innovation and creativity. My first question, and I'll keep it brief because it's piggybacking on Senator Tillis and just trying to expand upon that, is directed at Mr. Brooks, on this opt-in, opt-out issue. Whether or not it's easy and clear, I don't agree with you that we are in an early stage, because it is happening fast. Tell us how it is possible to have a system unlearn inputs that have already been taken if you get this after-the-fact opt-out from an artist. It is happening now. While you are trying to think long term, it is happening now. How does it work? It's not just checking a box or filling out a form.

Mr. Brooks: I want to make it clear that today it is a work-in-progress framework. You can go to this website and indicate that you want to opt out, and we take those requests as they come in. As we were talking about before, it is important that there is standardized metadata attached to these works as they go out into the wild, and with what the EU is requiring, there will be developments in that space with Adobe and others. In terms of what happens, you know, as I say, we filter that training data for a few reasons. We take out unsafe content and adjust for issues like bias, to correct for bias, and in addition to that, we start to incorporate the opt-out requests. Some of the models we release are trained from scratch with new datasets, and they take into account the lessons learned through previous development, both as an organization and a company, and the intensely technical things we have learned as well in that process. Some of the models that are released are just fine-tuned variations; those ones might have the same kind of basic knowledge from that original training process, and there has just been additional training to correct for certain behaviors or improve performance on specific tasks. So, in terms of the future of the space, there is a lot of work being done on unlearning in general: how do you interpret the relationship between the data in training and the performance of the model, and how do you potentially adjust that in different ways? But, as I say, at this stage we treat it as a process of incorporating those opt-out requests, retraining, and then releasing a new model trained on that new data.
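The retraining process Mr. Brooks describes, collecting opt-out requests and then training the next model on a filtered dataset, can be pictured as a simple filtering step before a from-scratch run. The sketch below is illustrative only; the record fields and the opt-out list format are assumptions, not Stability AI's actual pipeline.

# A minimal, hypothetical sketch of applying accumulated opt-out requests before
# a from-scratch retraining run. File formats and field names are assumptions.
import json

def load_opt_out_urls(path: str) -> set:
    """Opt-out requests collected since the last training run, one URL per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def filter_training_records(records_path: str, opt_out_urls: set) -> list:
    """Keep only records whose source URL has not been opted out."""
    kept = []
    with open(records_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # e.g. {"url": ..., "caption": ...} (illustrative)
            if record.get("url") not in opt_out_urls:
                kept.append(record)
    return kept

# The filtered records would feed the next from-scratch training run; as the
# testimony notes, fine-tuned variants of an older model may still reflect the
# original data, which is why opt-outs only fully apply to newly trained models.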
I hear you, and I just want to level-set a little bit, not just out of concern for the artists, but knowing that as you go from a handful of inputs a day to thousands of inputs per day, going in to relearn, unlearn, and comply with any consent or opt-out gets overwhelming and unfeasible real quick. And it's happening now.

I also wanted to follow up on a subject matter I think another senator touched on earlier. We know that models need to be fed large data sets to generate images based on user prompts, like the one Senator Tillis mentioned — which looked much more like Pacific Coast Highway than South Beach, by the way. Now, AI — and this is me talking to folks back home — can only understand what it is taught, making it critical for AI companies to train the model with data that captures the full range of the human experience. We want it to be inclusive and diverse if it's going to be accurate in representing users, representing the diverse backgrounds of all users. Mr. Rao, you have explained that Adobe's Firefly seeks to avoid copyright infringement by being trained only on licensed Adobe Stock images, openly licensed content and public domain content. How do you reconcile both? You want to be as inclusive as possible, which means as much data input as possible, but to avoid copyright infringement, you are being selective in those inputs. The diversity of input is important, I think, for the diversity of output. How do you reconcile that?

It is definitely a tension in the system. The more data you have, the less bias you see, so it's great to have more data. When you set the expectations that we set for ourselves of trying to design a model that was going to be commercially safe, we took on the challenge of saying, can we also do that and minimize harmful bias? The way we did that is that we have an AI ethics team. We started that four years ago, and one of the key things they did when we were developing Adobe Firefly was, not only do we have the data set and understand what it is, we also did a lot of testing on it. We have a series of prompts — hundreds and hundreds and hundreds of prompts — and we were testing against them to see what the distribution of the model is. Is there going to be a bias? If you type in "lawyer," are you only going to get men, or white men, and what does that mean, and how do you change that? You can change it by adding more data — making it more diverse, ethically sourcing more data to diversify the data set — or you can add filters on top of the model to force a distribution of what you expect to see when you type in certain search terms, making sure the bias is removed. You can either do it by adding more data or by adding filters on top of the model itself to ensure you're going to get the right result.

If you ask it for "senator," what comes out?

An amazingly handsome man and woman, just very intellectual.

Men and women?

Men and women, colors across the spectrum.

Across the spectrum.

The first time we did "lawyer," we only had white men, and as general counsel I was like, there should be some people who look like me as well.
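[A rough sketch of the kind of prompt-battery audit described in the Adobe testimony above, assuming hypothetical `generate_image` and `classify_attribute` functions as stand-ins for a model endpoint and an evaluation classifier — not Adobe's actual pipeline.]

```python
# Sketch of a prompt-battery bias audit: generate many samples per prompt,
# classify an attribute of interest, and inspect the resulting distribution.
# `generate_image` and `classify_attribute` are hypothetical stand-ins.
from collections import Counter

def audit_prompt(prompt, samples, generate_image, classify_attribute):
    """Tally a classified attribute over `samples` generations for one prompt."""
    tally = Counter()
    for _ in range(samples):
        image = generate_image(prompt)          # hypothetical model call
        tally[classify_attribute(image)] += 1   # hypothetical classifier
    return tally

# A heavily skewed tally for a prompt like "a lawyer" would argue for either
# adding more diverse training data or applying an output-side rebalancing filter,
# the two remedies the testimony describes.
```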
Thank you, Senator Padilla. Senator Klobuchar.

I was glad to be here for all of your testimony, and thank you for that. I will start with you, Mr. Harleston. I know that you talked about this with Senator Blackburn. Approximately half the states have laws that give individuals control over the use of their name, image and voice, but in the other half of the country, someone who is harmed by a fake recording purporting to be them has little recourse. In your testimony, you also talk about new laws and how they could protect musicians' names, likenesses and voices — the right of publicity, I think you called it. Can you talk about why creating this is important in the face of emerging AI, and how have statutes in states that have these protections helped artists?

Thank you for the question. It is critical in this environment, when we are talking about the creative expression that the artist has made, that the right of publicity also be extended at the federal level. There is inconsistency, but more importantly, the preemption element is critical, and raising it to the level of an intellectual property right is also critical. What we have seen — and this is really in the area of deepfakes — is, I think Ms. Ortiz referenced how many times her name was listed. We are finding with our artists, particularly the ones that are most established, that their names appear daily in hundreds and thousands of posts, and sometimes images are used as well. So it's critical to have this right to protect the artists. If I could just say one thing on that — I know this is not your question, but I have to say it.

There you go. I'll add it to my time.

On the opt-in, opt-out, there is an element beyond commerciality, and I want to make sure everybody understands. Ms. Ortiz did reference it, but she probably would not want to license to an AI, and we have artists who don't want to license to streaming services. It is not always about the commerciality. Artists just don't want their artistry used in certain ways. The Beatles did not come onto streaming platforms until seven or eight years ago. That was a decision that was very important to them. So I want to add that into the conversation. I know that was not your question.

OK, very good. And what do you see as the obligations of social media platforms on this?

With respect to AI? Fantastic question. We believe that social media platforms absolutely have an obligation. I will say this: we can help them by giving them a tool beyond copyright in terms of being able to take down some of the deepfakes. There are challenges with some of the platforms.

Right, exactly, and we are seeing the same thing. I would turn to you, Mr. Brooks. You talked about advocating for creative ways to help people identify AI-created content, and when we talk about deepfakes, we are already seeing this with political ads — and not even paid ads, just videos that are put out there. There is one of my colleague, Senator Warren, that was a total lie, purporting to show her saying that people from one political party should not be able to vote. We have seen it in the Republican presidential primary. A number of us on a bipartisan basis are working on this — as you might know, I chair the Rules Committee, so that is my other hat. Do you agree that without tools for people to determine whether an image or video was generated by AI, there would be a risk to our fair and free elections, if you cannot tell whether the candidate you are seeing is the candidate or not?

We absolutely believe that these transparency initiatives, like the CAI with Adobe, are really important to how we make the information ecosystem more robust.
This is not just an ad problem or a social media problem; it is going to require everyone to be accountable across that ecosystem. We have things in place like metadata and watermarking for content. There are more signals that social media platforms can use to decide whether to amplify that content.

And we had the REAL Political Advertisements Act with Senator Booker and Senator Bennet; there was a version that was also introduced in the House. So that's one solution, but we're also going to have to look at, I would say, banning some of this content, because even a label or a watermark is not going to help the artist or the candidate if everyone thinks it is them and it is not, even if at the end it says it was generated by AI.

It's a great question and a really important one, because I think there are a few things in there. There is a distinction in the use of likeness, critically when it is for improper purposes — when you are implying there is endorsement or affiliation between a particular person and a particular work or idea. That is different to free experimentation with style and some of these other issues that tend to get lumped together in AI outputs. So, in terms of the scenarios that you're talking about, there is this improper use — you imply that someone endorses or is immersed in a cause that they are not affiliated with — and there should be clear rules around how AI is used in that context, whether through right of publicity or through some bespoke deepfake legislation.

A recent study — I know you have worked on this, and I appreciate that — a study by Northwestern predicted that one-third of the U.S. newspapers that existed roughly two decades ago will be gone by 2025. The bill that Senator Kennedy and I have, the Journalism Competition and Preservation Act, would allow local news organizations to negotiate with online platforms, including generative AI platforms. This bill has passed through this committee now twice. Could you describe how Adobe approaches this issue, and in your stance, is it possible to train sophisticated generative AI models without using copyrighted materials absent consent?

Our current model that is out there is trained using the licensed content that I mentioned before — content that has no assertion on it and comes from the rights holders directly. We think it is possible; we have done it. It is out there on our website and in Photoshop, and people love it. Creative professionals are using that AI. It makes their day easier; it lets them start their creative work with just one click, and they finish it in the tool. It has really revolutionized how we think about things in terms of how we acquire data sets. And we have a group inside Adobe whose job is to think about where we need to go next. Do we need to get to different media types? Are we missing some sort of subject matter for it to be more accurate? That was a question we had before. The way we think about that context: maybe there is a newspaper that has the kind of content we need, and we would approach that organization and say we need to license that content so the AI is more accurate. We have a team that thinks about this, versus just taking it.

What impact could this have on local journalism if there are no rules put in place?
I think that, you know, both on the authenticity side and on this side, if people are able to create images and these newspapers are not able to license what they're doing, it could have a negative impact. On the authenticity side, the reason why so many media companies have joined the Content Authenticity Initiative — the AP, Reuters, The Wall Street Journal, The New York Times, The Washington Post — is because they know that when they are showing images, they need to be able to show that they are actually true. They need to be able to prove that the digital images people are seeing are real, because otherwise people are going to stop consuming newspapers and they're not going to believe it. You need to prove what you're showing is true.

There are a lot of less famous newspapers, including very small ones in my state, that you might not mention, right? And so I think that is part of it. Like the Ms. Ortiz of this story, they need to have some kind of power to protect their content too. They don't have a general counsel, and they're not going to be able to, on their own, start some major lawsuit. And so I think that is how we have to think about that as well, especially as we look at all of this.

That is why I would say, again, when we designed Firefly, we designed it that way — to be commercially safe first.

I'm saying it rhetorically, for the world — everybody needs to get this done, as opposed to just you. So thank you so much; I appreciate it. And I thank both of you for your continual bipartisan work in taking on this very important issue. Thanks.

Thank you, Senator Klobuchar. We're going to do a last round of questioning. We may be joined by other colleagues, but we are also in the middle of a vote. My hunch is that we will resolve this in 10 to 15 minutes at the most. I'm interested in the question of a federal statutory right of publicity, and perhaps it is motivated by a desire for there to be consistency, elevation in terms of process, and the access to justice and potential remedies that come with a federal right as opposed to a state right. Professor, if I can start with you, you testified earlier in response to a question from Senator Blackburn that commercial replacement is not the proper test under current fair use law in the United States. Should we adopt a federal right of publicity with commercial replacement as the test, or part of the test? And how would that play out? What other remedy might you suggest under a new federal right?

Senator, thank you for that question, because I was quite alarmed by some of the discourse here about the right of publicity. I think, as well as thinking about publicity rights for well-known artists, musicians, et cetera, Congress should be thinking about the right of publicity of ordinary people — people who are anonymous, who have no commercially valuable reputation. All of us deserve to be protected from deepfakes and synthetic reproductions of our name, image and likeness, regardless of whether we are a famous politician or a famous artist or an anonymous law professor.

So how would you focus the remedy in order to make that effective?

In terms of remedy, I think that right of publicity statutes traditionally had injunctive relief, usually incorporating equitable balancing tests. That is what I would go for.

Which would mean that models might have to be retrained. Injunctive relief only — not damages?

Potentially damages as well. Statutory damages, I don't think so. Statutory damages can be distorting.
They tend to be a honeypot for opportunistic lawyers as well as generally aggrieved plaintiffs. So I would steer clear of statutory damages, but actual damages and injunctions, absolutely.

I would be interested in your views, Mr. Rao, on the right of publicity and what it might do, and your thoughts on how we should be trying to balance respecting copyright, through this or other means, while incentivizing investment in AI and accelerating innovation.

Thank you. So we talked in our testimony about something similar to, but not exactly like, a right of publicity, which we have referred to as a federal anti-impersonation right. And the reason we have thought of it from an anti-impersonation perspective is because of the questions Professor Sag raises — we want to make sure that he does not have a deepfake made of him. If you think about an impersonation right, that would apply to everybody, and what we are targeting is the economic displacement we have been talking about here, where an AI is trained on an artist and creates an output that is exactly like the artist, and it is competing with their work. And copyright may not reach it — that has been the question. That is why we believe they do need this right. They can go after the people who are impersonating their work, whether that is likeness or style, and the test would be something that we would work out through six months of deliberation here in this body — exactly how you would decide that. But I think that is the right approach, because you want to focus on people who are intentionally impersonating someone in order to get some commercial benefit. I think that will help clarify what harm we are trying to address.

Mr. Brooks, how do you think of a federal anti-impersonation right?

By the way, that spells FAIR. I just want to make that clear. I know you love acronyms.

We are enthusiastic about acronyms. We are producing a Senate-only version of ChatGPT that only produces acronyms for bill names. Mr. Brooks, how do you think of a federal publicity right or anti-impersonation right — or a federal requirement that there be only an opt-in rather than an opt-out, which would impact the business models you are representing?

On the instrument and the content of the instrument, I think we are relatively agnostic at this stage. As I said to Senator Klobuchar, it is important that there are clear rules governing the use of likeness in a proper way. To some extent, we cannot escape the fact that the determination of whether it is proper or improper will depend on the application — what the user does or does not do with that content downstream — and so, as you say, from our perspective, the lines in the sand between improper use of likeness, free experimentation with style, or other kinds of bad or good uses of these tools aren't easy to draw. They are fact-sensitive, and it might be appropriate for courts to determine that. At a high level, I would say there is a core set of things around the improper use of likeness, especially voice likeness, where legislative intervention may make sense and may impose obligations across the supply chain, across the ecosystem.

The Copyright Office issued guidance about human authorship being critical to any copyright protection. Is their guidance accessible enough, relevant? Did they strike the right balance?
Should we be looking at a different policy in terms of how copyright protection should reach AI-assisted creativity as opposed to AI-generated works?

I think the Copyright Office did a pretty good job. One can debate whether an AI component in a broader work should also be afforded some form of copyright. You know, I think they landed in the right place — that it should not, and that copyright should only be afforded to human creation. So, for example, if you had a song that was created by artists and they used a piece of it that was generative AI, there could be a copyright on the entire work, but the AI-generated portion would not be protectable. If someone were to sample it or lift it out and use it in another context, it would not be subject to copyright. I think they did a good job trying to strike that balance.

Picking up on what the professor said previously, how do you feel about the scope of potential remedies under an anti-impersonation statute?

I'm glad you asked me that question. I think there should be a private right of action. I think commerciality is not always the proper standard here. In some instances we have had artists who have been the victim of deepfakes where the voice was appropriated and the lyrical content was something that the artist would never have said. That is something that can do irreparable harm to their career, and trying to explain that it was not them — this stuff is really good. Some of these AI-generated things are really good.

Last but not least, Ms. Ortiz: in producing some of the interesting, engaging, powerful, inspiring content that you generate, have you ever relied on an AI tool to help you expand or produce some of the works that you have worked on? And what is your hope about what we might be doing going forward here in Congress in response to what we have heard from you about your concerns today?

I am happy you asked that. I have never really — I was curious very early on, before I knew the extent of the exploitation of artists, and very briefly used an AI to generate references, and I didn't enjoy it at all. I love every step of the process of being an artist, and ever since I found out, you know, last August and September, what exactly went on behind the scenes, I just cannot use it. My peers refuse to use it. My industry has made clear that we do not want to exploit each other. And again, it's important to remember that these models basically compete in our own market, and this is not something that is hypothetical — it has happened now with our own work. One of the things that I would hope would be addressed here is that a lot of the solutions that have been proposed basically cannot be enacted unless you know what's in the data set. For this, we need to ensure that there is clear transparency built from the ground up — like, no offense to some of the companies here — but if you don't know what is in the data set, how do we know? How is the licensor going to know that my data is in the data set? I feel like that is one of the starting foundations for artists and other individuals as well to be able to gain consent, credit and compensation. Thank you.

Senator Tillis, before I hand it over to you and then we conclude, I just appreciate you all taking the time and effort and helping to educate us. You are literally training us as we try to produce some fidelity in our legislative work.
Trying to figure out what may or may not be in the large language model is a lot like taking roll call in a dark classroom. I just don't understand how you would do it. So, you know, I can see we have to work it out. But I want to start, Mr. Brooks, by thanking you for being here. I think that anyone who is watching this understands that this is not unique to Stability AI. This is a broader set of issues that we have to deal with. And I appreciate the fact that you would be willing to come here, because you should have expected some of the concerns that were going to be expressed, to begin with.

I have one question. The bad news for you all is that my staff are really excited about this; these are the questions we are going to submit for the record. But rather than expect you all to respond to every one of them — you're welcome to do that — let your area of expertise and your priorities guide you, and get that information back for the record. One of the ones I will not have to ask, because I'm asking now, is that most consumers believe that AI-generated content should be expressly disclosed. Now, in Vilnius, I happened to stay at a hotel called the Shakespeare Hotel, and every room was named after the greats. I don't see a day where those rooms are going to be named after great LLMs. And the reason for that is because there is a natural cultural bias for rewarding the human beings who are truly the creators and — excuse me — the lifeblood of our creative community. Does anyone here disagree that, for a work derived from even, let's say, licensed content, the consumer should be able to know that it was created by a machine versus an original creative work created by human beings? Anybody disagree with that? Or technical issues I should look at? I use Photoshop; I could create a corvette cat with a skateboard or surfboard really quickly — no different than, if I want that, which is, you know, based on prior creative work, it should have disclosures. Does that make sense to you?

The question that we thought about a little differently is that it is of interest to our creative customers to be able to show when something was AI-created versus human-created. In Adobe Firefly, it all comes out saying that something is AI-created; that is always on by default. You will always know that it is AI-created. We anticipate that our AI features will be the most popular features in Photoshop, so we expect that going forward most images are going to have a part that is AI and a part that is human, and then you have to think about what you are disclosing when you disclose that, right? The content credential that we mentioned before, which you could use for addressing deepfakes, will also record the human part versus the AI part. So you can think of that as a disclosure. I am not sure over time people are going to be as interested in knowing the identity of the artist who created the work as opposed to which part of it they did with AI.

That's fair. Professor, any comment?

You also have to think that you are not just talking visual works here; you can say the same thing about written works. Someone uses ChatGPT to help smooth over their writing, refine it, explain more clearly. There are some awkward line-drawing questions, but the spirit of the disclosure is correct. The implementation will be difficult.

I agree. I am checking the votes; it is probably time for us to wrap up the committee. I think you can see, just by the sheer number of members who came to the subcommittee, that this is an area of interest and priority for us.
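[A heavily hedged illustration of what "content credentials attached to the file" means in practice: the sketch below only looks for the byte markers that C2PA-style manifests typically leave in an image. It is a crude heuristic, not a validator — real verification requires a C2PA-aware library that checks the cryptographic signatures — and the file name is hypothetical.]

```python
# Crude heuristic sketch: does an image file appear to embed Content Credentials
# (C2PA/JUMBF) metadata? Not a validator; it neither parses nor verifies anything.
from pathlib import Path

def appears_to_have_content_credentials(path: str) -> bool:
    data = Path(path).read_bytes()
    # C2PA manifests are packaged in JUMBF boxes; these markers commonly appear
    # in files that embed one. Absence proves nothing: credentials may live in a
    # sidecar file or have been stripped by an intermediary platform.
    return b"c2pa" in data or b"jumb" in data

# print(appears_to_have_content_credentials("edited-photo.jpg"))  # hypothetical file
```

[The caveat in the comment is the point that matters for the disclosure debate: metadata travels with the file only as long as every platform in the chain preserves it.]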
Mr. Chairman, I have decided that maybe for the next hearing — it's going to take more than 20 minutes for me to get the answer, but I'm going to do it with, you know, my song, "Who Let the Dogs Out?" I was thinking we would set that to "Don't Steal My IP." I will see if I can get that done. If you think about it, it could be pretty good. I'll work on that — for those who may want to get out of the room, if people know about that in advance. We might do this as a duet. [laughter]

I think this committee has demonstrated that we are very thoughtful and that we are very diligent, and I, for one, could sit at that table and probably present the interests of either side of the spectrum, which is why I believe that we need legislative certainty. We need to learn from things like data privacy and data ownership in Europe — they don't always get it right the first time they try. We want to make sure that it is something that scales properly. But this is clearly an area where anyone would be hard-pressed to convince me that no action is required. And again, my bias on this committee from the beginning, having grown up in innovation and technology, is seeing the compelling numbers about how important this is to our economy and our culture. There is a lot of work to do. I'm confident that with the leadership of the chair we're going to get work done, and we look forward to your continued engagement.

Thank you, Senator. I think it was Mr. Brooks who said early on — perhaps it was you — that with other technological developments, word processing did not end authorship and smartphones did not end photography, but they impacted them. And we need to closely, and with some deliberation, realign what rights and protections there ought to be to deal with things like deepfakes — some argue that Shakespeare himself was a deepfake — to protect the rights of individuals, protect the rights of those who earn their living by being creative, to ensure that consumers understand what they are consuming, and to make sure that we are aligning with other countries that share our core values and our priority on a free market and the rights of individuals, in contrast to other countries with other systems. So I am grateful to all of you for testifying today and taking your time to contribute to these very challenging questions. Members can submit questions for the record for these witnesses if they were not able to attend; questions for the record are due by 5:00 one week from today. Thank you all for your input as we try to craft a good legislative solution. This hearing is adjourned.
