IAPP or “AI”-PP?: Generative AI, Games, and the Global Privacy Summit
April 26, 2023

Image generated by Canva Text-to-Image AI.

With over 5,000 attendees, seemingly hundreds of panels and speakers, and a brilliant opening talk by South African comedian and philanthropist Trevor Noah, the recent International Association of Privacy Professionals (IAPP) Global Privacy Summit (#GPS23) was a terrific opportunity for ESRB Privacy Certified’s videogame and toy-focused team to connect with the wider privacy world. Despite the vast array of privacy issues and resources on offer, there was one topic that topped everything – Artificial Intelligence. AI.

Especially generative AI, made famous by the viral chatbot ChatGPT. The incredible advances that have catapulted generative AI into games and everything else over the last few months were top of mind. Almost every panel, even when not directly about AI, touched on it. I found myself counting the minutes, even seconds, it took for someone to mention ChatGPT in a hallway conversation between sessions. (The average was just under three minutes.) The takeaway? Privacy practitioners must understand and plan for AI-related privacy issues.

That’s especially true for privacy pros at game companies. Videogame companies are increasingly embracing the technology’s potential to revolutionize the way we learn, work, and play. Already, they are using generative AI to speed up game development, reduce costs, and help players interact with characters in new interactive and immersive ways.

Generative AI’s use of gargantuan amounts of data – including personal data – raises complex privacy issues, however. For example, even if some of the underlying data is technically public (at least in the U.S.), generative AI models could combine and use this information in unknown ways. OpenAI, Inc., the company behind ChatGPT, acknowledges that it scoops up “publicly available personal information.” There are also privacy issues around transparency, bias, and consumers’ rights to access, correct, and delete information used by the models. And yes, ChatGPT records all of your “prompts.”

All this “underline[s] the urgent need for robust privacy and security measures in the development and deployment of generative AI technologies,” asserts IAPP Principal Technology Researcher Katharina Koerner. Many large videogame companies have already developed principles for what’s been variously called “trustworthy,” “ethical,” or “responsible” AI. Most address consumer privacy and data security at a high level. Still, as videogame companies increasingly embrace generative AI and roll out new products, privacy pros will need to drill down on regulatory enforcement and best practices in this area. So here, to get you started, are three top takeaways from IAPP GPS23, aka the “AI-PP”:

  1. Get Ready for Federal Trade Commission (FTC) Generative AI Action

    FTC Commissioner Alvaro Bedoya, in an entertaining, DALL-E-illustrated keynote speech titled “Early Thoughts on Generative AI,” emphasized that the FTC can regulate AI today. Taking on what he called a “powerful myth out there that ‘AI is unregulated,’” Commissioner Bedoya said:
    Unfair and deceptive trade practices laws apply to AI. At the FTC, our core section 5 jurisdiction extends to companies making, selling, or using AI. If a company makes a deceptive claim using (or about) AI, that company can be held accountable. If a company injures consumers in a way that satisfies our test for unfairness when using or releasing AI, that company can be held accountable. (Footnotes omitted.)

    Commissioner Bedoya also pointed to civil rights laws as well as tort and product liability laws. “Do I support stronger statutory protections?” he asked. “Absolutely. But AI does not, today, exist in a law-free environment.”

    A recent FTC Business Center blog post emphasizes Bedoya’s point. The agency explained that new AI tools present “serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools.” It warned that “Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.” And just yesterday, the FTC, along with the Department of Justice and several other federal agencies, released a joint statement announcing their “resolve to monitor the development and use of automated systems . . . [and] vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

    My reaction? Commissioner Bedoya’s “early thoughts” speech should be seen as a current heads-up, especially in light of the Center for AI and Digital Policy’s recent complaint to the FTC. The group urged the agency to investigate OpenAI and GPT-4 and prevent the release of further generative AI products before the “establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

  2. The Italian Data Protection Authority’s ChatGPT Injunction Is Just the Beginning of Worldwide Scrutiny
    Even though the FTC hasn’t yet acted on Commissioner Bedoya’s warning, other privacy authorities already have. GPS23 was filled with chatter about the action by the Italian Data Protection Authority (the Garante) against ChatGPT owner OpenAI under the General Data Protection Regulation (GDPR), which temporarily banned ChatGPT in Italy. Since then, the agency has required OpenAI to comply with specific privacy requirements before it will lift the ban. These include requiring the company to ask users for consent (or establish a legitimate interest) for using consumers’ data, to verify users’ ages to keep children off the platform, and to provide users with access, correction, and deletion rights. Whether and how OpenAI can do so is an open, high-stakes question.

    Meanwhile, more scrutiny is on the way. The U.K.’s Information Commissioner, John Edwards, and the President of the French CNIL (Commission Nationale Informatique & Libertés), Marie-Laure Denis, spent most of their session on Regulator Insights From Today to Help Design Privacy Rules for Tomorrow talking about the challenges of AI and the GDPR’s roadmap for compliance and enforcement. Last Thursday, the European Data Protection Board announced that it had launched a new task force to discuss a coordinated European approach to ChatGPT. And just this Monday, the Baden-Württemberg data protection authority announced it was seeking information from the company on behalf of Germany’s 16 state-run data protection authorities.

    In case you think only European agencies are investigating ChatGPT, Canadian Privacy Commissioner Philippe Dufresne announced his agency’s investigation into ChatGPT on the first morning of GPS23. There aren’t many details yet, but like the Italian Garante’s action, the Office of the Privacy Commissioner’s investigation appears to be focused on the product’s lack of transparency and its failure to obtain users’ consent for the data that powers the chatbot, which is trained on data collected from the open web.
  3. AI Governance and Risk Mitigation Are Key
    Although not as splashy as the main-stage presentation by author and generative AI expert Nina Schick, the panels that focused on the practical aspects of AI were invaluable. They provided pointers on how to build a sturdy foundation for AI use, including by:

    • Adopting documented principles, policies, and procedures;
    • Establishing cross-functional teams;
    • Inventorying models, data and use cases;
    • Updating procurement and vendor oversight processes;
    • Providing employee training and awareness; and
    • Assessing risks.

    (Sounds a lot like a “how to” for building a solid privacy program, no?) The panelists also discussed the slew of AI legislation currently underway (e.g., the EU’s AI Act, plus California and other state bills) that will ultimately clarify the compliance landscape.

    At another session, panelists emphasized that there’s no single silver bullet for privacy issues in AI. Instead, practitioners will need to use some combination of privacy-enhancing technologies (PETs), like differential privacy, and frameworks like the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and its Privacy Framework to help address the privacy challenges of generative AI.
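    To make differential privacy concrete: the core idea is to add calibrated random noise to aggregate query results so that no individual’s data can be inferred from the output. Below is a minimal, illustrative sketch of the classic Laplace mechanism in Python – my own example, not anything presented at the summit, with a hypothetical player-count scenario and made-up numbers:

    ```python
    import numpy as np

    def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
        """Return a differentially private version of true_value.

        Adds Laplace noise with scale = sensitivity / epsilon, the standard
        mechanism for epsilon-differential privacy. Smaller epsilon means
        more noise and a stronger privacy guarantee.
        """
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Hypothetical example: privately report how many players enabled an
    # AI chat feature. A counting query has sensitivity 1, because adding
    # or removing one player changes the count by at most 1.
    true_count = 1042
    private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(f"true count: {true_count}, private estimate: {private_count:.1f}")
    ```

    The snippet is the easy part, of course; the harder work is pairing techniques like this with the governance frameworks above so that privacy protection is systematic rather than one-off.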

*****
ChatGPT and other generative AI products can’t predict the future. Yet. Or, as ChatGPT itself told me, “[I]t is not capable of predicting the future with certainty.” But as IAPP GPS23 made clear, generative AI will certainly be part of the privacy discussion going forward.

• • •

If you have more questions about AI-related privacy issues or you want to learn more about our program, please reach out to us through our contact page. Be sure to follow us on LinkedIn for more privacy-related updates.

• • •

As senior vice president of ESRB Privacy Certified (EPC), Stacy Feuer ensures that member companies in the video game and toy industries adopt and maintain lawful, transparent, and responsible data collection and privacy policies and practices for their websites, mobile apps, and online services. She oversees compliance with ESRB’s privacy certifications, including its “Kids Certified” seal, which is an approved Safe Harbor program under the Federal Trade Commission’s Children’s Online Privacy Protection Act (COPPA) Rule.
