Privacy - ESRB Ratings
https://www.esrb.org/tag/privacy/

A New Season for Kids’ Privacy: Court Enjoins California’s Landmark Youth Privacy Law, But Protecting Children Online Remains a Prime Concern
https://www.esrb.org/privacy-certified-blog/a-new-season-for-kids-privacy-court-enjoins-californias-landmark-youth-privacy-law-but-protecting-children-online-remains-a-prime-concern/ (Tue, 19 Sep 2023 21:21:19 +0000)
Read our analysis of the NetChoice decision and tips about what it might mean for your kids’ privacy program.

Summer is definitely over. With the autumnal equinox just days away (Saturday, September 23, to be exact), there’s been a definite shift in the air – and in the children’s privacy world. Just as the fastest sunsets and sunrises of the year happen at the equinoxes, kids’ privacy developments are piling on rapidly right now.

Since the beginning of September, we’ve seen the Irish Data Protection Commission issue a huge, €345 million ($367 million) fine against TikTok for using unfair design practices that violate kids’ privacy. Delaware’s governor just signed a new privacy law that bans profiling and targeted advertising for users under the age of 18 unless they opt in. And the Dutch data protection authority, just this week, announced an investigation into businesses’ use of generative AI in apps directed at young children.

As I was catching up with these matters yesterday, news broke that a federal district court judge in California had granted a preliminary injunction (“PI”) prohibiting the landmark California Age Appropriate Design Code Act (“CAADCA”) from going into effect on July 1, 2024. The judge ruled that the law violates the First Amendment’s free speech guarantees.

As ESRB Privacy Certified blog readers might recall, in September 2022, California enacted the CAADCA, establishing a far-reaching privacy framework that requires businesses to prioritize the “best interests of the child” when designing, developing, and providing online services. At the time, I wrote that the California law had the “potential to transform data privacy protections for children and teens in the United States.”

In particular, I pointed to the law’s coverage of children under the age of 18, its applicability to all online services “likely to be accessed by a minor,” and its requirement that businesses set default privacy settings that offer a “high level” of privacy protection (e.g., turning off geolocation and app tracking settings) unless the business can present a “compelling reason” that different settings are in the best interests of children. I also noted the Act’s provisions on age estimation/verification, data protection impact assessments (“DPIAs”), and data minimization as significant features.

In December 2022, tech industry organization NetChoice filed a lawsuit challenging the CAADCA on a wide range of constitutional and other grounds. In addition to a cluster of First Amendment arguments, NetChoice asserted that the Children’s Online Privacy Protection Act (“COPPA”), which is enforced primarily by the Federal Trade Commission (“FTC”), preempts the California law. The State of California, represented by the Office of the Attorney General, defended the law, arguing that the “Act operates well within constitutional parameters.”

Yesterday’s PI shifts the “atmospherics” of the kids’ privacy landscape dramatically. But the injunction doesn’t mean that businesses and privacy practitioners can ignore the underlying reasons for the CAADCA (which was passed overwhelmingly by the California legislature) or the practices and provisions it contains. Here’s a very rough analysis of the decision and some tips about what it might mean for your kids’ privacy program.

The Court’s Holding: In her 45-page written opinion, Judge Beth Labson Freeman held that “NetChoice has shown that it is likely to succeed on the merits of its argument that the provisions of the CAADCA intended to achieve [the purpose of protecting children when they are online] likely violates the First Amendment.” The Court held that the CAADCA is a regulation of protected expression, and not simply a regulation of non-expressive conduct, i.e., activity without a significant expressive element. Because she viewed the statute as implicating “commercial speech,” the Court analyzed the CAADCA under an “intermediate scrutiny standard of review.”

The Relevant Test: Under that standard (often referred to as the Central Hudson test, after the Supreme Court case that formulated it), if the challenged regulation concerns lawful activity and speech that is not misleading, the government bears the burden of proving that (i) it has a “substantial interest” supporting the regulation, (ii) the regulation directly and materially advances that interest, and (iii) the regulation is “narrowly tailored” to achieve it.

The Court recognized that California would likely succeed in establishing a substantial interest in protecting minors from harms to their physical and psychological well-being caused by lax data and privacy protections online. Reviewing the CAADCA’s specific provisions, however, it found that many of the provisions challenged by NetChoice did not meet the remaining prongs of the intermediate scrutiny test.

The Court’s Central Hudson Analysis: The Court made findings on each of the specific provisions challenged by NetChoice keyed to the Central Hudson factors. I highlight a few here:

  • Data Protection Impact Assessments (DPIAs): The Court held that California did not meet its burden to demonstrate that the requirement for businesses to assess their practices in DPIAs would alleviate, to a material degree, any harms arising from the design of digital products, services, and features.
  • Age Estimation: Judge Freeman also found that the statutory requirement to estimate the age of child users with a “reasonable level of certainty” would likely fail the Central Hudson test: “[T]he CAADCA’s age estimation provision appears not only unlikely to materially alleviate the harm of insufficient data and privacy protections for children, but actually likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to divulge additional personal information.”
    • The Court also found that the age estimation provision would likely fail to meet the Central Hudson test because the effect of a business choosing not to estimate age, but instead to apply privacy and data protections broadly, would impermissibly shield adults from that same content. In reaching this conclusion, Judge Freeman rejected California’s argument that the “CAADCA does not prevent any specific content from being displayed to a consumer, even if the consumer is a minor; it only prohibits a business from profiling a minor and using that information to provide targeted content.”
    • Notably, later in the decision, Judge Freeman held that the age estimation provision is the “linchpin” of most of the CAADCA’s provisions and therefore determined it is not “functionally severable” from the remainder of the statute.
  • High Default Privacy Settings: The Court found that the CAADCA’s requirement for “high default privacy settings” would be likely to cause at least some businesses to prohibit children from accessing their services and products altogether.
  • Profiling by Default: Here, Judge Freeman held that the provision banning profiling of children by default could deprive certain categories of children, e.g., pregnant teenagers, of the “beneficial aspects” of targeted information.
  • Dark Patterns: The Judge held that California did not meet its burden to establish that prohibitions on the use of dark patterns to lead or encourage children to provide unnecessary personal information would ameliorate a causally connected harm.

COPPA Preemption: Although the Court granted the injunction based on First Amendment considerations alone, it did, briefly, address NetChoice’s argument that COPPA preempts the CAADCA. The Court rejected this argument at the PI stage, explaining: “In the Court’s view, it is not clear that the cited provisions of the CAADCA contradict, rather than supplement, those of COPPA. Nor is it clear that the cited provisions of the CAADCA would stand as an obstacle to enforcement of COPPA. An online provider might well be able to comply with the provisions of both the CAADCA and COPPA . . . .”

  • N.B. Judge Freeman’s decision to act cautiously on this claim makes sense. Recently, the Ninth Circuit Court of Appeals, in Jones v. Google, overturned her decision that COPPA preempted state law claims asserted in a class action alleging that Google/YouTube used persistent identifiers to collect data and track children’s online behavior surreptitiously and without their consent – conduct that also violates COPPA. Interestingly, in that case, the Ninth Circuit invited the FTC, which enforces COPPA, to express its views on the preemption issue. The FTC accepted, stating that “Congress did not intend to wholly foreclose state protection of children’s online privacy, and the panel properly rejected an interpretation of COPPA that would achieve that outcome.”


Takeaways:
The CAADCA litigation is far from over, and it is likely that the California Attorney General will seek an immediate interlocutory appeal. It is clear, though, that the district court’s decision will have consequences in the short term for state privacy laws that are scheduled to come into effect soon as well as for efforts underway in Congress on child-related online privacy and safety legislation. Here are a few takeaways:

  • Privacy Laws Can Still Pack a Punch: Regardless of whether the Court ultimately strikes down the CAADCA, many of the concepts in the design code are already embedded in other privacy laws that apply to game and toy companies’ activities, both inside and outside the United States. On the U.S. front, there are newly enacted child privacy provisions in state laws that should be able to withstand constitutional challenge. Plus, the NetChoice ruling might loosen the California Congressional delegation’s resistance to bipartisan federal legislation. Although some may view the Court’s ruling as a reprieve, companies still need to meet other legal obligations.
    • For example, Connecticut recently passed child privacy amendments (scheduled to go into effect on October 1, 2024) to its privacy law that skirt some of the elements Judge Freeman found provisionally unconstitutional. Unlike the CAADCA, the Connecticut law does not require that companies estimate the age of their users; it applies only to companies that have “actual knowledge” of or “willfully disregard” the presence of minor users, and it does not regulate “potentially harmful” (as opposed to illegal) content. Instead of using the CAADCA “best interest of the child” standard, the Connecticut law establishes a duty to avoid a “heightened risk of harm” to minors and delineates potential harms.
  • DPIAs are still a “Must Do”: Most of the new state privacy laws passed in the last year contain requirements for data protection impact assessments, similar to those already required by the European Union’s General Data Protection Regulation (GDPR). At the beginning of September, the California Privacy Protection Agency published draft regulations that contain practical examples of how DPIAs should work under California’s comprehensive privacy law. Regardless of what happens with the CAADCA, statutory requirements for more focused DPIAs such as those in the California Consumer Privacy Act will likely remain.
    • Judge Freeman’s skepticism about the CAADCA’s DPIA provision aside, DPIAs can be a useful accountability tool for identifying privacy risks, working out when, where, and how likely they are to occur, and assessing the impact of such risks on your customers and business. (A minimal sketch of a DPIA-style risk register follows this list.)
  • COPPA Continues to Be Relevant: It will probably take years for the court battle over the CAADCA to play out. In the meantime, if you know that children — or teenagers — are using your products, expect the FTC to enforce COPPA and other privacy protections aggressively. (For a quick review of the FTC’s recent COPPA cases, see my previous blog post COPPA Battlegrounds: The Quest to Uncover the Secrets of the FTC’s Kids’ Privacy Actions.)
    • Indeed, it’s likely the FTC will use both the substantive provisions of COPPA and the “unfairness” and “deception” prongs of Section 5 of the FTC Act to set requirements for child-friendly privacy disclosures, mandates for high privacy default settings, and prohibitions against manipulative dark patterns through its child-focused investigations and enforcement actions.
    • The NetChoice ruling – coupled with Congressional inaction – could also spur the FTC to complete its now four-year-old COPPA Rule review and act on (at least parts of) last year’s privacy rulemaking proposal.
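
To make that accountability function concrete, here is a minimal sketch of a DPIA-style risk register that scores each processing activity by likelihood and impact and surfaces the riskiest items first. The fields, scoring scale, and example entries are illustrative assumptions, not a format required by the CCPA, the GDPR, or any draft regulation.

```python
# Minimal sketch of a DPIA-style risk register: score each processing
# activity by likelihood x impact and review the highest-scoring items
# first. Scales and example entries are illustrative only.
from dataclasses import dataclass


@dataclass
class PrivacyRisk:
    feature: str       # product feature or data flow being assessed
    harm: str          # the privacy harm that could result
    likelihood: int    # 1 (remote) to 5 (almost certain)
    impact: int        # 1 (minimal) to 5 (severe)
    mitigation: str    # planned or existing control

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


register = [
    PrivacyRisk("voice chat on by default", "minors contacted by strangers", 4, 5,
                "turn chat off by default for users under 18"),
    PrivacyRisk("precise geolocation collection", "re-identification of child users", 3, 4,
                "collect coarse location only; delete after 30 days"),
]

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.feature}: {risk.harm} -> {risk.mitigation}")
```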

While this all unfolds, ESRB Privacy Certified will continue to help its program members comply with existing laws and adopt and implement best practices for children’s privacy. As privacy protections for kids and teens continue to evolve, we’ll be following closely and providing guidance to our program members on all of the moving parts of the complex children’s privacy landscape. To learn more about ESRB Privacy Certified’s compliance and certification program, please visit our website, find us on LinkedIn, or contact us at privacy@esrb.org.

• • •

As senior vice president of ESRB Privacy Certified (EPC), Stacy Feuer ensures that member companies in the video game and toy industries adopt and maintain lawful, transparent, and responsible data collection and privacy policies and practices for their websites, mobile apps, and online services. She oversees compliance with ESRB’s privacy certifications, including its “Kids Certified” seal, which is an approved Safe Harbor program under the Federal Trade Commission’s Children’s Online Privacy Protection Act (COPPA) Rule.

IAPP or “AI”-PP?: Generative AI, Games, and the Global Privacy Summit
https://www.esrb.org/privacy-certified-blog/iapp-or-ai-pp-generative-ai-games-and-the-global-privacy-summit/ (Wed, 26 Apr 2023 13:47:12 +0000)
As videogame companies increasingly embrace generative AI, privacy pros will need to drill down on regulatory enforcement and best practices.

Image generated by Canva Text-to-Image AI.

With over 5,000 attendees, seemingly hundreds of panels and speakers, and a brilliant opening talk by South African comedian and philanthropist, Trevor Noah, the recent International Association of Privacy Professionals (IAPP) Global Privacy Summit (#GPS23) was a terrific opportunity for ESRB Privacy Certified’s videogame and toy-focused team to connect with the wider privacy world. Despite the vast array of privacy issues and resources on offer, there was one topic that topped everything – Artificial Intelligence. AI.

Especially generative AI, made famous by viral chatbot, ChatGPT. The incredible advances in generative AI that have catapulted into games and everything else in the last few months were top of mind. Almost every panel, even when not directly about AI, touched on it. I found myself counting the minutes, even seconds, it took for someone to mention ChatGPT in a hallway conversation between sessions. (The average was just under three minutes.) The takeaway? Privacy practitioners must understand and plan for AI-related privacy issues.

That’s especially true for privacy pros at game companies. Videogame companies are increasingly embracing technology’s possibilities to revolutionize the way we learn, work, and play. Already, videogame companies are using generative AI to speed up game development, reduce costs, and help players interact with characters in new interactive and immersive ways.

Generative AI’s use of gargantuan amounts of data – including personal data – however, raises complex privacy issues. For example, even if some of the underlying data is technically public (at least in the U.S.), generative AI models could combine and use this information in unknown ways. OpenAI, Inc., the company behind ChatGPT, acknowledges that it scoops up “publicly available personal information.” There are also privacy issues around transparency, bias, and consumers’ rights to access, correct, and delete information used by the models. And yes, ChatGPT records all of your “prompts.”

All this “underline[s] the urgent need for robust privacy and security measures in the development and deployment of generative AI technologies,” asserts IAPP Principal Technology Researcher, Katharina Koerner. Many large videogame companies have already developed principles for what’s been variously called “trustworthy” or “ethical” or “responsible” AI. Most address consumer privacy and data security at a high level. Still, as videogame companies increasingly embrace generative AI and roll out new products, privacy pros will need to drill down on regulatory enforcement and best practices in this area. So here, to get you started, are three top takeaways from IAPP GPS23, aka the “AI-PP”:

  1. Get Ready for Federal Trade Commission (FTC) Generative AI Action: FTC Commissioner Alvaro Bedoya, in an entertaining, DALL-E-illustrated keynote speech titled “Early Thoughts on Generative AI,” emphasized that the FTC can regulate AI today. Taking on what he called a “powerful myth out there that ‘AI is unregulated,’” Commissioner Bedoya said:
    Unfair and deceptive trade practices laws apply to AI. At the FTC, our core section 5 jurisdiction extends to companies making, selling, or using AI. If a company makes a deceptive claim using (or about) AI, that company can be held accountable. If a company injures consumers in a way that satisfies our test for unfairness when using or releasing AI, that company can be held accountable. (Footnotes omitted.)

    Commissioner Bedoya also pointed to civil rights laws as well as tort and product liability laws. “Do I support stronger statutory protections?” he asked. “Absolutely. But AI does not, today, exist in a law-free environment.”

    A recent FTC Business Center blog emphasizes Bedoya’s point. The agency explained that new AI tools present “serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools.” It warned that, “Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.” And just yesterday, the FTC, along with the Department of Justice and several other federal agencies, released a joint statement announcing their “resolve to monitor the development and use of automated systems . . . [and] vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

    My reaction? Commissioner Bedoya’s “early thoughts speech” should be seen as a current heads-up. Especially in light of the Center for AI and Digital Policy’s recent complaint to the FTC. The group urged the agency to investigate OpenAI and GPT-4 and prevent the release of further generative AI products before the “establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

  2. The Italian Data Protection Authority’s ChatGPT Injunction Is Just the Beginning of Worldwide Scrutiny
    Even though the FTC hasn’t yet acted on Commissioner Bedoya’s warning, other privacy authorities have already done so. GPS23 was filled with chatter about the action by the Italian Data Protection Authority (the Garante) against ChatGPT owner OpenAI under the General Data Protection Regulation (GDPR), temporarily banning ChatGPT in Italy. Since then, the agency has required OpenAI to comply with specific privacy requirements before lifting the ban. These include requiring the company to ask users for consent or establish a legitimate interest for using consumers’ data, to verify users’ ages to keep children off the platform, and to provide users with access, correction, and deletion rights. Whether and how OpenAI can do so is an open, high-stakes question.

    Meanwhile, more scrutiny is on the way. The U.K.’s Information Commissioner, John Edwards, and the President of the French CNIL (Commission Nationale Informatique & Libertés), Marie-Laure Denis, spent most of their session on Regulator Insights From Today to Help Design Privacy Rules for Tomorrow talking about the challenges of AI and the GDPR’s roadmap for compliance and enforcement. Last Thursday, the European Data Protection Board announced that it had launched a new task force to discuss a coordinated European approach to ChatGPT. And just this Monday, the Baden-Württemberg data protection authority announced it was seeking information from the company on behalf of Germany’s 16 state-run data protection authorities.

    In case you think only European agencies are investigating ChatGPT, Canadian Privacy Commissioner Philippe Dufresne announced his agency’s investigation into ChatGPT on the first morning of GPS23. There aren’t many details yet, but like the Italian Garante’s action, the Office of the Privacy Commissioner’s investigation appears to be focused on the product’s lack of transparency and failure to obtain consent from users for the data that powers the chatbot, which is trained on data collected from the open web.
  3. AI Governance and Risk Mitigation Are Key
    Although not as splashy as the main stage presentation by author and generative AI expert, Nina Schick, the panels that focused on the practical aspects of AI were invaluable. They also provided pointers on how to build a sturdy foundation for AI use, including by:

    • Adopting documented principles, policies, and procedures;
    • Establishing cross-functional teams;
    • Inventorying models, data and use cases;
    • Updating procurement and vendor oversight processes;
    • Providing employee training and awareness; and
    • Assessing risks.

    (Sounds a lot like a “how to” for building a solid privacy program, no?) They also discussed the slew of AI legislation currently underway (e.g., the EU’s AI Act, California and other state bills) that will ultimately clarify the compliance landscape.

    At another session, panelists emphasized that there’s no one silver bullet for privacy issues in AI. Instead, practitioners will need to use some combination of privacy-enhancing technologies (PETs), like differential privacy, and frameworks like the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and its Privacy Framework to help address the privacy challenges of generative AI.
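
To make the PET reference slightly more concrete, here is a minimal sketch of one such technique, the Laplace mechanism used in differential privacy, which adds noise calibrated to a query’s sensitivity and a privacy budget (epsilon) before an aggregate statistic is released. The function names and numbers are illustrative assumptions, not part of the NIST frameworks mentioned above.

```python
# Minimal sketch of the Laplace mechanism from differential privacy:
# add noise scaled to sensitivity/epsilon before releasing an aggregate
# statistic. Names and numbers below are illustrative only.
import random


def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials with mean `scale`
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise; smaller epsilon means stronger privacy."""
    return true_count + laplace_noise(sensitivity / epsilon)


# Example: report roughly how many players enabled voice chat without
# revealing whether any single player's setting is reflected in the total.
print(round(private_count(true_count=1_234, epsilon=0.5)))
```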

*****
ChatGPT and other generative AI products can’t predict the future. Yet. Or, as ChatGPT itself told me, “[I]t is not capable of predicting the future with certainty.” But as IAPP GPS23 made clear, generative AI will certainly be part of the privacy discussion going forward.

• • •

If you have more questions about AI-related privacy issues or you want to learn more about our program, please reach out to us through our contact page. Be sure to follow us on LinkedIn for more privacy-related updates.

• • •

Ready Player Go: Getting Your Privacy Program Metaverse-Ready in 2023
https://www.esrb.org/privacy-certified-blog/ready-player-go-getting-your-privacy-program-metaverse-ready-in-2023/ (Thu, 26 Jan 2023 13:55:29 +0000)
Even though there’s no consensus on exactly what the metaverse is, or how, when, and whether it will transform our lives, now’s the time – from our privacy compliance perspective – for companies and consumers alike to get ready.

As ESRB Privacy Certified celebrates Data Privacy Week 2023, the hype about the metaverse – a single, immersive, persistent and three-dimensional space that enables people to socialize, play, shop, and work in ways that transcend the limits of the physical world – is reaching a crescendo. It’s easy to ignore some of it as baseless buzz. But, from a practical standpoint, the video game industry has long created metaverse-like experiences, building expansive virtual worlds with players using their custom-designed digital avatars to connect, socialize and play with one another. Some companies are experimenting with Web 3.0 features like extended reality (XR), an umbrella term for virtual (VR), augmented (AR), and mixed reality (MR). Others are considering the use of the blockchain to prove “ownership” of virtual goods and property and non-fungible tokens (NFTs) to enable purchases. So, even though there’s no consensus on exactly what the metaverse is, or how, when, and whether it will transform our lives, now’s the time – from our privacy compliance perspective – for companies and consumers alike to get ready.

How much time is up for debate. A recent study by Pew Research Center and Elon University’s Imagining the Internet Center surveyed over 600 experts about the trajectory and impact of the metaverse. More than half of the experts (54%) predicted that the metaverse will be part of daily life for a half billion people globally by 2040. Slightly less than half (46%) disagreed. They predicted that even though more people will embrace XR tools and experiences by then, the fully immersive world that people imagine as “the metaverse” will take more than 20 years to come to fruition.

Whichever group is right, it’s certain that privacy (and data security) issues will loom large. The array of XR technologies that enable the metaverse will create vast new troves of digital data and real-world privacy concerns. Companies will be able to collect enormous amounts of biometric data such as users’ eye movements and hand positions and bodily data such as blood pressure and respiration levels. Through the emerging area of inferential biometrics, XR technologies, combined with AI, could be used to make inferences about users’ emotions, mental and physical health, and personality traits, among other things.

Even if users create virtual life identities without providing real-world personal information or using their personal characteristics such as gender, race, or age in avatars, they will likely share information with other digital avatars. As with today’s smart phones and IoT devices, this may allow others to piece together users’ real-world identities or obtain sensitive information from them. Companies may sweep up data from bystanders who happen to be in the range of an XR user’s sensors. If companies choose to incorporate blockchain technologies and NFTs into their metaverse plans, they will present their own privacy and security challenges.

It’s critical for companies in the video game industry and beyond to start addressing these challenges now. In a KPMG survey of 1,000 U.S. adults conducted last fall, 80% of respondents said that privacy is their top concern in the metaverse, while 79% said that the security of their personal information is their biggest worry. So, while we’re waiting for the real metaverse to stand up, you can make sure your company is using today’s XR technologies in privacy-protective ways and getting ready for the next iteration(s) of the metaverse.

Photo credit: Julien Tromeur via Unsplash

Here are three ways to start:

  1. Incorporate global laws and “best practices” into your current privacy compliance strategy: The metaverse is likely to be more fully global than even today’s internet. This makes it unlikely that any one data privacy regime will apply clearly to metaverse platforms or companies that operate on those platforms. There aren’t any metaverse-specific privacy rules or standards, and there likely won’t be for a long time. Companies should therefore prepare by analyzing and adopting responsible and transparent “best practices” from existing data protection and privacy frameworks. Instead of complying only with the current law in any one jurisdiction or trying to avoid other laws through a “choice of law” clause in your terms of use, you should look to a variety of laws, international standards, and global best practices to provide a high level of privacy protection for your metaverse users. (You can’t just choose your favorite law, though: You’ll need to continue complying with privacy laws that do exist in your jurisdiction.)

    Informational privacy principles, contained in global guidelines such as the OECD (Organization for Economic Cooperation and Development) Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, can form the core of your metaverse data protection strategy. These concepts, such as data minimization, purpose limitation, use specification, data security, individual participation, and accountability, can guide your implementation in a specific metaverse application or environment. Indeed, most modern data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), California’s Consumer Privacy Act (CCPA), and the proposed bipartisan American Data Protection and Privacy Act (ADPPA) from the last Congress, all incorporate these concepts. Of course, there may be situations when laws conflict or these principles will simply be inadequate to deal with new technological developments. By considering how you can use laws, standards, and best practices in your privacy program now, though, you’ll have a head start on compliance.
  2. Do the “Ds” – Deploy Privacy by Design and Default Principles and Data Protection Impact Assessments: The concept of the metaverse as an open, multi-layered universe means that existing methods of privacy protection that rely on privacy disclosures and user consent may be difficult to deploy. But privacy by design – the idea that companies should design products, processes, and policies to proactively manage and avoid privacy risks – seems tailor-made for this new medium. (And it’s long been a core part of our program’s compliance approach.) Privacy by default, a closely related concept, may be even more salient. It requires companies to protect their users’ personal data automatically, embedding privacy into technology and systems from the beginning, not after-the-fact. (The UK Information Commissioner’s Office has helpful guidance and a checklist that address these principles in the context of the GDPR.)

    An important piece of privacy by design and default is assessment. Many modern data protection laws, such as the GDPR and California’s Age Appropriate Design Code Act, require companies to conduct data protection impact assessments (DPIAs) to identify, analyze, and minimize privacy risks. Even if you’re not required to conduct DPIAs now, you should start to do them (if you’re not doing so already) for technologies like XR and features like NFTs that may be part of your metaverse offerings. (The International Association of Privacy Professionals (IAPP) maintains an extremely useful resource page on DPIAs.)
  3. Don’t forget children and teens: As complex as data privacy in the metaverse will be for adults, the challenge of protecting the privacy of kids and teens in the metaverse will be even greater. Companies will need to follow a mélange of rules and laws such as the Children’s Online Privacy Protection Act (COPPA) in the U.S., and the newer Age-Appropriate Design Codes in the UK, Ireland, and California. They will also need to follow related laws and rules on safety, advertising and marketing, and digital wellness to protect children and teens from real and perceived risks in the metaverse. As the Federal Trade Commission’s recent settlement with Epic Games for COPPA and other privacy violations involving Fortnite makes clear, poor privacy practices in virtual worlds can lead to real-life harms. One of the FTC commissioners explained how the company’s alleged practices, such as opting children into voice and text communications with players around the world, exposed children to bullying, threats, and harassment, and even coerced or enticed them into sharing sexually explicit images and meeting offline for sexual activity.

Companies must double down on privacy by design and default for children and teens, build sophisticated privacy and parental controls, implement multi-layered age verification methods, and develop mechanisms to obtain parental consent (when required). Some companies may want to build out child-and-teen friendly metaverse spaces and experiences. Given the complexities of doing so, it’s a good thing that a Ready Player One-like universe that crosses over physical and digital realms doesn’t really exist. Yet.
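
As a rough illustration of what privacy by default can look like in code, here is a minimal sketch of an account-settings object whose most protective values are the starting point, with looser settings available only to adults who affirmatively opt in. The field names and the age threshold are assumptions for illustration, not requirements drawn from COPPA, the design codes, or any other specific law.

```python
# Minimal sketch of privacy-by-default settings: every new account starts
# with the most protective values, and a default is loosened only for an
# adult who explicitly opts in. Field names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    voice_chat_enabled: bool = False
    text_chat_enabled: bool = False
    precise_location_enabled: bool = False
    targeted_ads_enabled: bool = False
    profile_visibility: str = "friends_only"


def enable_voice_chat(settings: PrivacySettings, age: int, opted_in: bool) -> PrivacySettings:
    """Loosen a protective default only for an adult who affirmatively opted in."""
    if age >= 18 and opted_in:  # illustrative threshold
        settings.voice_chat_enabled = True
    return settings


# New accounts are fully locked down, regardless of the user's age.
print(PrivacySettings())
```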

• • •

If you have more questions about kids’ privacy in the metaverse or you want to learn more about our program, please reach out to us through our contact page. Be sure to follow us on LinkedIn for more privacy-related updates.

• • •

Featured image credit: BrianPenny on Pixabay.

Wrapping Up 2022 with A Huge (Epic) Fortnite Privacy Case
https://www.esrb.org/privacy-certified-blog/wrapping-up-2022-with-a-huge-epic-fortnite-privacy-case/ (Wed, 21 Dec 2022 20:58:11 +0000)
The Fortnite settlement gives insight into the FTC’s thinking on kids' and teens' privacy. Here are 7 takeaways from a case that will likely reverberate far past the New Year.

With 2022 almost behind us, we’d planned on easing out of work mode and into festive celebrations this week for the end of this hectic and challenging privacy year. But Stacy’s former employer, the Federal Trade Commission (FTC), had other ideas. So, instead of wrapping presents, we’re wrapping up the year with an analysis of the FTC’s record-breaking $520 million settlements with Epic Games (Epic) for privacy and consumer protection violations in its wildly popular Fortnite video game.

The “s” in settlements is not a typo: On Monday, the FTC announced two separate enforcement actions against Epic. Consistent with ESRB Privacy Certified’s focus on privacy compliance, though, we’ll limit our analysis to the FTC’s privacy-related case. In short, the FTC (represented by the Department of Justice) filed a two-count Complaint and a Stipulated Order in federal court alleging that Epic violated the Children’s Online Privacy Protection Act (COPPA) and the related COPPA Rule. COPPA protects the personal information of children under the age of 13. The FTC asserted that Epic knew that Fortnite was “directed to children” and unlawfully collected personal data from them without verifiable parental consent (VPC).

The FTC also charged Epic with violating the FTC Act, which prohibits unfair and deceptive practices, by using unfair “on by default” voice and text chat settings in Fortnite that led to children and teens being bullied, threatened, and harassed within the game, including sexually. It charged that Epic’s privacy and parental controls did not meaningfully alleviate these harms or empower players to avoid them. If approved, this settlement will require Epic to pay $275 million in civil penalties. (The other $245 million is for the other case and is allotted for consumer refunds.)

Apart from the epic fine, the Fortnite action provides insight into the FTC’s thinking on children’s and teens’ privacy. Here are seven takeaways from a case that will likely reverberate far past the New Year:

  1. Declaring that your services are not directed to children is not enough: The FTC’s action makes clear that you can’t disclaim COPPA. In a paragraph that appeared on the next-to-last page of Epic’s lengthy global privacy policy, the company stated that it does not direct its websites, games, game engines, or applications to children or intentionally collect personal information from them. Although many companies make this claim in their privacy policies, it won’t help you if the facts show that your product is, in fact, child directed. (Remember, a mixed-audience product is one that targets children but not as the primary audience.)
  2. COPPA’s “actual knowledge” standard doesn’t allow you to ignore evidence that children are using your services – especially internal and empirical evidence: While many advocates and lawmakers have criticized COPPA’s “actual knowledge” standard, seeking to replace it with “constructive knowledge,” the Fortnite action shows the FTC will construe the standard broadly. The agency cited several of the standard COPPA Rule factors – subject matter, use of animation, child-oriented activities and language, music content, evidence of intended audience, and empirical evidence about the game’s player demographics – to determine that Fortnite is directed to children. The key evidence, though, came from empirical data and Epic’s own internal documents, including:

    • Demographic data: The FTC provided examples of public survey data, which Epic had reviewed, to demonstrate it knew a considerable portion of Fortnite players were under the age of 13. It pointed to publicly available survey results from a 2019 report showing that 53% of U.S. children aged 10-12 played Fortnite weekly, compared to 33% of U.S. teens aged 13-17, and 19% of the U.S. population aged 18-24. The agency alleged that these results also matched Epic’s internal data.
    • Advertising and marketing: The FTC homed in on Epic’s product licensing deals with a wide variety of companies for Fortnite-branded costumes, toys, books, youth-sized apparel, and “back to school” merchandise, many of which were targeted to the under-13 crowd. As in the FTC’s previous record-breaking COPPA matter, Google/YouTube ($170 million fine), the agency cited numerous internal statements and documentation that Epic had generated to emphasize Fortnite’s appeal to children to potential advertising and marketing partners.
    • Internal statements and events: The FTC also cited “ordinary course of business” communications such as consumer complaints and conversations among Epic employees that acknowledged explicitly that many of its users skewed younger. The FTC strung a number of them together (perhaps unfairly) but the phrases – “a large portion of our player base” consists of “underage kids,” / “high penetration among tweens/teens,” / “Fortnite is enjoyed by a very young audience at home and abroad” – convey, unmistakably, that Epic knew that it had a large user base of tweens and younger kids.
  3. Implement VPC and age gates from the get-go or make sure you apply them retroactively: The FTC faulted Epic for failing to obtain VPC for the personal information it collected from child users. In addition to data like name and email, the agency pointed to Epic’s broadcast of “display names” that put children and teens in direct, real-time contact with others through voice and text communication, as personal information that required parental consent. It also charged that even after Epic deployed age gates, it failed to deploy them retroactively to most of the hundreds of millions of Fortnite players who already had accounts. This is pretty much the same conduct that got TikTok (then Musical.ly) in trouble in an earlier FTC COPPA case. (The $5.7 million civil penalty there was the largest ever fine at the time the case settled in 2019.) Like TikTok, Epic didn’t go back and request age information for people who already had accounts and adjust their default social features and privacy controls to comply with COPPA. (A rough sketch of a neutral age gate appears after this list.)
  4. Privacy by default is not just a catchphrase: Although the FTC has long emphasized privacy by design, the FTC hadn’t previously focused on “privacy-protective” default settings in games and other online services. Now it has. The FTC alleged that Epic’s default settings, which enabled live text and voice communications for all users – including children and teens – constituted an unfair practice that led kids and teens to be bullied, threatened, and harassed, including sexually, through Fortnite. Moreover, the agency, citing evidence from Epic’s own employees, alleged that Epic’s parental controls were insufficient. Even when Epic eventually added a button allowing users to turn voice chat off, the company made it difficult for users to find, according to the FTC.
  5. Injunctive relief can be tough – and retroactive: In addition to the whopping $275 million civil penalty, the proposed Stipulated Order sets out the standard injunctive relief the FTC has long obtained in privacy cases – requirements for FTC monitoring, reports, a comprehensive privacy plan, and regular, independent audits. The Order also requires Epic to implement privacy-protective default settings for children and teens. Following the agency’s newer trend of using injunctions to remedy past harms, the Order requires Epic to delete personal information previously collected from Fortnite users in violation of the COPPA Rule’s parental notice and consent requirements unless the company obtains parental consent to retain such data or the user identifies as 13 or older through a neutral age gate.
  6. Real-world harms matter a lot: Commissioner Christine Wilson, the only Republican currently on the Commission, issued a concurring statement supporting the agency’s action. Although she has cautioned the agency’s majority against overly-expansive uses of the FTC’s unfairness authority, Commissioner Wilson noted that the “elements of the unfairness test are clearly satisfied — because Epic Games allegedly opted children into voice and text communications with players around the world, children were exposed to bullying, threats, and harassment, and were enticed or coerced into sharing sexually explicit images and meeting offline for sexual activity.” Wilson also approved of the “novel injunctive mechanisms, which require Epic Games to implement heightened privacy default settings” for children and teens because they “directly address the privacy harms fostered by the company’s alleged business practices.”
  7. Failing to comply with COPPA can be expensive: There’s a clear upward trajectory from the $5.7 million civil penalty in the FTC’s TikTok/Musical.ly action to the $170 million fine in Google/YouTube to the $275 million civil penalty that Epic will pay to resolve the FTC’s charges. That’s definitely something to remember as you make your plans for the New Year!
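
To make lessons 3 and 4 concrete, here is a minimal sketch in Python (with entirely hypothetical names and settings; this is not Epic’s actual code or anything the Order prescribes) of a neutral age gate applied both at sign-up and retroactively to existing accounts, with privacy-protective communication defaults for players under 13:

  from datetime import date

  # Hypothetical privacy-protective defaults for players under 13. (The Order would
  # also require protective defaults for teens; one tier is shown for brevity.)
  UNDER_13_DEFAULTS = {"voice_chat": False, "text_chat": False, "show_display_name": False}
  STANDARD_DEFAULTS = {"voice_chat": True, "text_chat": True, "show_display_name": True}

  def age_from_birthdate(birthdate, today):
      years = today.year - birthdate.year
      if (today.month, today.day) < (birthdate.month, birthdate.day):
          years -= 1
      return years

  def apply_age_gate(account, birthdate, today=None):
      # Record the declared age neutrally and set communication defaults accordingly.
      # Under-13 accounts stay locked down until verifiable parental consent (VPC) arrives.
      today = today or date.today()
      age = age_from_birthdate(birthdate, today)
      account["age"] = age
      if age < 13:
          account["settings"] = dict(UNDER_13_DEFAULTS)
          account["needs_vpc"] = True
      else:
          account["settings"] = dict(STANDARD_DEFAULTS)
          account["needs_vpc"] = False
      return account

  def backfill_existing_accounts(accounts):
      # Retroactive pass: accounts created before the age gate existed have no age on
      # file, so treat them protectively until the player answers the gate.
      for account in accounts:
          if account.get("age") is None:
              account["settings"] = dict(UNDER_13_DEFAULTS)
              account["needs_age_gate"] = True
      return accounts

The design points to notice: the gate is “neutral” (it asks for a birthdate without hinting that certain answers unlock features), accounts created before the gate existed are treated protectively until the user answers it, and chat-style features stay off until parental consent is obtained.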

Following the FTC’s announcement, Epic explained that it had accepted the settlement agreements “because we want Epic to be at the forefront of consumer protection and provide the best experience for our players.” It set out – as a “helpful guide” to the industry – principles, policies, and recommendations that the company has instituted over the past few years to protect its players and meet regulators’ expectations globally. On the children’s privacy front, Epic recommended that game developers “proactively create age-appropriate ways for players to enjoy their games” – advice that mirrors our own. Maybe we can tie that up with a ribbon!

* * * * *

Wishing you and your loved ones a joyful and relaxing holiday season without any more blockbuster FTC announcements until 2023!


As senior vice president of ESRB Privacy Certified (EPC), Stacy Feuer ensures that member companies in the video game and toy industries adopt and maintain lawful, transparent, and responsible data collection and privacy policies and practices for their websites, mobile apps, and online services. She oversees compliance with ESRB’s privacy certifications, including its “Kids Certified” seal, which is an approved Safe Harbor program under the Federal Trade Commission’s Children’s Online Privacy Protection Act (COPPA) Rule.

What Parents Need to Know About Privacy in Mobile Games: Communicate with Your Kids https://www.esrb.org/privacy-certified-blog/what-parents-need-to-know-about-privacy-in-mobile-games-communicate-with-your-kids/ Fri, 28 Oct 2022 13:00:03 +0000 https://www.esrb.org/?p=4969 We’ve pulled together five tips to help protect your children’s privacy throughout this week. The final tip? Make sure you communicate with your kids about how they can protect their privacy online.

We’ve pulled together five tips to help protect your children’s privacy throughout this week. Catch up on the first four tips here. The final tip? Make sure you communicate with your kids about how they can protect their privacy online.

Our first four tips are privacy-specific while this last one applies to many parenting challenges: Communicate with your kids! Talk with them about what they should know and can do to protect their privacy online. If your kids are young, you can tell them to come to you or simply say no to all in-game requests for information. If your children are older, you can teach them how to use privacy settings and permissions.

You can also educate them in an age-appropriate way about the consequences of sharing too much personal information in a game. These can range from compromising the security of online accounts to attracting cyberbullies to damaging their personal reputation. Let them know that they can come talk to you if they’ve posted something online that they later realize is too personal (you can help them get it deleted) or if they’re receiving inappropriate advertisements, messages, or other communications. (You can report inappropriate ads to Apple and Google.)

Make sure your kids know they can turn to you for help in protecting their personal data and preferences, and that you know where to find answers and advice.

Sometimes, in a rush to play a game, your child might simply click “yes” on permissions or even falsify their age. But when they understand how their personal data and preferences may be used – or, more importantly, misused – most kids will become more interested in managing their own privacy online. Make sure they know they can turn to you for help, and that you know where to find answers and advice.

Protecting your kids’ privacy in mobile games may sound overwhelming, but the benefits of playing games far outweigh the risks. Our tips – together with ESRB’s Family Gaming Guide and our “What Parents Need to Know” blogs – can help you protect your kids’ privacy online.

• • •

If you have more questions about kids’ privacy in mobile apps or want to learn more about our program, please reach out to us through our contact page. Be sure to follow us on LinkedIn for more privacy-related updates.

• • •

As senior vice president of ESRB Privacy Certified (EPC), Stacy Feuer ensures that member companies in the video game and toy industries adopt and maintain lawful, transparent, and responsible data collection and privacy policies and practices for their websites, mobile apps, and online services. She oversees compliance with ESRB’s privacy certifications, including its “Kids Certified” seal, which is an approved Safe Harbor program under the Federal Trade Commission’s Children’s Online Privacy Protection Act (COPPA) Rule.

What Parents Need to Know About Privacy in Mobile Games: Don’t Let Your Children Lie About Their Ages https://www.esrb.org/privacy-certified-blog/what-parents-need-to-know-about-privacy-in-mobile-games-dont-let-your-children-lie-about-their-ages/ Thu, 27 Oct 2022 13:00:52 +0000 https://www.esrb.org/?p=4968 We’ve pulled together five tips to help protect your children’s privacy and are rolling one out each day. Tip #4 is to prevent your children from lying about their ages online. It’s important that your child uses an accurate birthdate or age when signing up for a new game or mobile app. Learn why in our fourth privacy tip.

We’ve pulled together five tips to help protect your children’s privacy and are rolling one out each day this week. Yesterday, we covered what the ESRB Privacy Certified seals mean and where you should look for them. Our fourth tip is to prevent your children from lying about their ages online.

It’s important that your child uses an accurate birthdate or age when signing up for a new game or mobile app. When companies know that children under the age of 13 are playing their games, they are required by law to follow the federal Children’s Online Privacy Protection Act (COPPA). COPPA and its associated Rule issued by the Federal Trade Commission (FTC) give parents control over what information companies can collect from kids under 13 years of age through their websites, apps, and other online services, including mobile games. Under COPPA, companies with games, apps, and other services “directed to children,” or that know kids under 13 are using their services, must:

  1. Notify you of how they use your kid’s information;
  2. Get your express consent (known as “verifiable parental consent”) before collecting, using, or disclosing your child’s personal information;
  3. Allow you to review and request deletion of your child’s information.

Under COPPA, game companies can’t condition participation in a game on a child disclosing more information than is necessary. They’re also prohibited from using information for commercial purposes, such as targeted marketing and advertising, that are unrelated to gameplay. This is part of why it’s so important to make sure you or your kid enters an accurate birthdate or age when signing up for a new game!

Make sure your children enter their ages accurately so they can benefit from legal protections tailored to protect kids’ personal information.

Beyond COPPA, recently enacted privacy laws in states like California, Colorado, Connecticut, Utah, and Virginia give kids and their parents additional privacy rights. Some extend certain privacy rights to teens. For example, several of these state laws prohibit companies from selling or sharing teenagers’ (typically ages 13-16) personal information without their consent or the consent of their parent or guardian. You can ask that a mobile game company not sell or share your child’s information by making a request using a form or email address available from the company’s app or website. Other laws, such as California’s recently passed Age Appropriate Design Code Act, require companies to set privacy controls in games and other products to the most-protective level for all users under the age of 18.

Companies that don’t follow these rules can get in a lot of trouble. The FTC and state law enforcers have slammed mobile game companies with large fines and other penalties for failing to comply with COPPA. And more enforcement is likely on the way. Along with our other tips, making sure that your children enter their ages accurately will help ensure that they benefit from legal privacy protections tailored for kids and teens.

Click here to continue to the final tip: Communicate with Your Kids.

• • •

If you have more questions about kids’ privacy in mobile apps or want to learn more about our program, please reach out to us through our contact page. Be sure to follow us on LinkedIn for more privacy-related updates.

• • •

As senior vice president of ESRB Privacy Certified (EPC), Stacy Feuer ensures that member companies in the video game and toy industries adopt and maintain lawful, transparent, and responsible data collection and privacy policies and practices for their websites, mobile apps, and online services. She oversees compliance with ESRB’s privacy certifications, including its “Kids Certified” seal, which is an approved Safe Harbor program under the Federal Trade Commission’s Children’s Online Privacy Protection Act (COPPA) Rule.

P.S.R. Reinforces Fundamental Privacy Principles in a Changing World https://www.esrb.org/privacy-certified-blog/psr-reinforces-fundamental-privacy-principles-in-a-changing-world/ Thu, 20 Oct 2022 12:00:44 +0000 https://www.esrb.org/?p=4979 Read our key takeaways from the IAPP's Privacy. Security. Risk conference: kids and teen privacy, sensitive data and data minimization, and deidentification.

After a busy few days in Austin, I’ve pulled together my key takeaways from last week’s International Association of Privacy Professionals’ (IAPP) Privacy. Security. Risk. 2022 conference (P.S.R.). P.S.R. is a one-of-a-kind conference that focuses on the intersection of privacy and technology. And there certainly was lots of tech, from content dealing with bias in AI to privacy engineering. But given the location in Texas, one of many states that now place significant restrictions on women’s reproductive rights, the effect of the U.S. Supreme Court’s recent decision in the Dobbs case on the constitutional right to privacy was a strong undercurrent throughout the event.

Starting with the keynote session (which you can watch here, if you’re an IAPP member) and going through sessions on geolocation, cybersecurity, and advertising, many speakers grappled with new privacy challenges arising from Dobbs. Much of the conversation, though, focused on applying privacy basics to new and emerging technologies. This year’s P.S.R. highlighted that it’s an important time for companies to be good and responsible stewards of data. Here are more details on three topics that came up repeatedly at the conference: (1) Kids and Teens; (2) Data Minimization; and (3) Deidentification.

Kids and Teens
It’s clear that the UK Children’s Code and its offshoot, the recently passed California Age Appropriate Design Code (CA AADC), are top of mind. Companies are looking for more guidance and best practices from regulators on how to best comply. Both the UK and California codes feature similar concepts, such as “the best interests of the child” and privacy by default, and both prohibit behavioral ads/profiling. There are some differences, of course, but they are more technical than conceptual. If you’re looking for further analysis, we recommend checking out our post on the CA AADC and reading through the Future of Privacy Forum’s excellent analysis here.

During the keynote session featuring Federal Trade Commission (FTC) Commissioner Rebecca Kelly Slaughter, the IAPP’s Chief Knowledge Officer, Caitlin Fennessy, asked her if there are questions from the FTC’s 95-question Advance Notice of Proposed Rulemaking (ANPR) on commercial surveillance and data security that people should focus on when submitting comments. Commissioner Slaughter mentioned issues of tech addiction and psychological harms to teens that traditionally aren’t thought of as privacy problems, but stem from the same data sets. While the Commissioner did not have any updates to share on the FTC’s review of the Children’s Online Privacy Protection Act (COPPA) Rule, she strongly encouraged the public to submit comments on the ANPR. Many attendees interpreted the Commissioner’s COPPA comment as yet another signal that the FTC has effectively abandoned the COPPA Rule Review in favor of the ANPR. The FTC just extended the comment period, so you have plenty of time to file your comment.

Sensitive Data and Data Minimization
With five new state privacy laws (California, Virginia, Colorado, Utah, Connecticut) coming into effect next year, there was a lot of discussion about privacy law basics. It’s no surprise, then, that the panels focused on defining personal data. In particular, sensitive data came up at nearly every session.

The state laws have similar definitions of sensitive data, but there are some key differences privacy professionals must pay attention to. For example, all states consider special category data like ethnic origin, religious beliefs, and sexual orientation to be sensitive data. Virginia, Colorado, and Connecticut all consider personal data collected from a known child to be sensitive information. Each of the state laws specifies precise geolocation as sensitive data, except for Colorado. Colorado, instead, is planning to cover geolocation information under its proposed rules for processing “sensitive data inferences.” Sensitive data inferences are “inferences made by a [c]ontroller based on [p]ersonal [d]ata, alone or in combination with other data, which indicate an individual’s racial or ethnic origin; religious beliefs; mental or physical health condition or diagnosis; sex life or sexual orientation; or citizenship or citizenship status.”

And just about every time someone spoke about sensitive data, they stressed the importance of data minimization. This concept goes back to the Fair Information Practice Principles (FIPPs), first developed in the 1970s, which include the collection limitation principle, designed to prevent overcollection of information. As many speakers made clear (referring in part to the Dobbs decision and fears about the use of reproductive data), data can’t be breached, hacked, or turned over to law enforcement if it’s not collected in the first place.

Deidentification
The issue of deidentification also came up frequently, often in relation to data minimization. Deidentification refers to actions that organizations can take to remove identifying characteristics from their data.

Where can you look for deidentification standards? P.S.R. panelists mentioned governmental sources, such as the Health Insurance Portability and Accountability Act’s (HIPAA) deidentification standards in the medical privacy context and the FTC’s three-part test for deidentified data (pasted below from page 10 of this report) as good starting points. The FTC standard states that deidentified data is not:

“reasonably linkable” to the extent that a company: (1) takes reasonable measures to ensure that the data is de-identified; (2) publicly commits not to try to reidentify the data; and (3) contractually prohibits downstream recipients from trying to re-identify the data.

(The California Privacy Rights Act, which comes into effect in January 2023, also uses a similar standard.) That said, deidentification may not last long as a privacy-enhancing tool. As one speaker noted, some data scientists predict that technological advances will allow most data sets to be re-identifiable within three to five years. Our takeaway: It’s best to err on the side of minimizing the data you collect, use, and share from the outset. This is a principle we’ve long preached to members of the ESRB Privacy Certified program.
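
For readers who want a concrete picture of the first, technical prong of that test, here is a minimal sketch in Python (hypothetical field names; not a substitute for a real deidentification review) that drops direct identifiers and coarsens quasi-identifiers. The public-commitment and contractual prongs are policy steps that code alone cannot satisfy.

  # Direct identifiers to drop entirely; all field names here are hypothetical.
  DIRECT_IDENTIFIERS = {"name", "email", "device_id", "ip_address"}

  def deidentify(record):
      cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
      # Coarsen quasi-identifiers that could be re-linked in combination.
      if "birthdate" in cleaned:                    # assumes an ISO "YYYY-MM-DD" string
          cleaned["birth_year"] = cleaned.pop("birthdate")[:4]
      if "zip_code" in cleaned:
          cleaned["zip3"] = cleaned.pop("zip_code")[:3]
      cleaned.pop("latitude", None)                 # drop precise geolocation outright
      cleaned.pop("longitude", None)
      return cleaned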

* * *

Although P.S.R. explored newer technologies from biometrics to data clean rooms, much of the conference focused on core privacy practices: Have you done your risk assessments and data protection impact assessments, and implemented mitigating factors? Do you apply best practices for cybersecurity and have documentation for how and why you might deviate from those best practices and standards? Are you keeping the FIPPs in mind? These, of course, are the types of questions we think about all of the time at ESRB Privacy Certified. Amidst all the changing laws and technologies, it’s reassuring to know that sticking to privacy fundamentals can boost your compliance efforts. And don’t forget, we’re here to help our members with the issues I summarized above – child and teen privacy, sensitive data and data minimization, deidentification – and more.

Photo credit: Meghan Ventura

Former FTC Regulator Stacy Feuer Joins ESRB as Senior Vice President, Privacy Certified https://www.esrb.org/blog/former-ftc-regulator-stacy-feuer-joins-esrb-as-senior-vice-president-privacy-certified/ Tue, 04 Jan 2022 16:14:47 +0000 https://www.esrb.org/?p=4628 NEW YORK, Jan. 4, 2022 – The Entertainment Software Rating Board (ESRB) today announced that Stacy Feuer has joined the organization as Senior Vice President, Privacy Certified, a leading online and mobile privacy compliance program. Established in 1999, the ESRB Privacy Certified program helps members navigate privacy protection laws in the U.S. and internationally, and […]

NEW YORK, Jan. 4, 2022 – The Entertainment Software Rating Board (ESRB) today announced that Stacy Feuer has joined the organization as Senior Vice President, Privacy Certified, a leading online and mobile privacy compliance program. Established in 1999, the ESRB Privacy Certified program helps members navigate privacy protection laws in the U.S. and internationally, and was one of the first of its kind to be authorized by the Federal Trade Commission as a Safe Harbor under the Children’s Online Privacy Protection Act (COPPA).

Feuer brings more than two decades of experience in consumer protection and privacy policy and enforcement to the ESRB. In her past role as the Assistant Director for International Consumer Protection at the Federal Trade Commission (FTC), she represented the U.S. and the FTC internationally on consumer-related advertising, marketing, and data privacy issues involving new and emerging digital technologies. She also investigated and litigated advertising cases and coordinated the FTC’s work on the U.S. SAFE WEB Act, legislation that enhances the FTC’s cross-border cooperation powers.

“The ESRB Privacy Certified program continues to set a high bar with its self-regulatory standards and commitment to best practices,” said Feuer. “As a result, consumers, parents, and caregivers can be assured that their and their children’s personal data will be protected whenever they see Privacy Certified seals displayed. I am thrilled to join ESRB at this pivotal moment for data privacy to help Privacy Certified members meet ongoing and future compliance challenges creatively.”

“Stacy’s deep expertise in navigating the domestic and global regulatory landscape for privacy, consumer protection and e-commerce makes her a perfect choice to lead the Privacy Certified program,” said ESRB President Patricia Vance. “Stacy will bring enormous value to our member companies, helping guide them on compliance with an ever-increasingly complex array of consumer privacy regulations on the state, federal and global levels.”

Before joining the FTC, Stacy practiced international law at a Washington, DC firm, and served as a law clerk for a federal district court judge. Stacy graduated from Cornell University and the New York University School of Law. She holds a CIPP-US accreditation from the International Association of Privacy Professionals.


About ESRB

The ESRB is a non-profit, self-regulatory body that independently assigns age and content ratings for video games and mobile apps so parents can make informed choices. It also enforces advertising guidelines adopted by the video game industry and helps companies implement responsible online, mobile and internet connected device privacy practices under its Privacy Certified program. Visit www.esrb.org for more information.

About Privacy Certified

ESRB’s Privacy Certified program, an authorized Safe Harbor under the Children’s Online Privacy Protection Act (COPPA), helps companies comply with online and mobile privacy protection laws in the United States and beyond. Privacy Certified protects consumer privacy and is consistent with ESRB’s mission to help interactive entertainment companies conduct business responsibly while assuring consumers, especially parents, that their personal data is collected and managed responsibly. Look for the Privacy Certified seal. For more information, visit esrb.org/privacy.

Contact:

Johner Riehl
858.220.5626
johner@zebrapartners.net

The UK Age Appropriate Design Code: Childproofing the Digital World https://www.esrb.org/privacy-certified-blog/the-uk-age-appropriate-design-code-childproofing-the-digital-world/ Thu, 21 Jan 2021 15:47:36 +0000 https://www.esrb.org/?p=4046 “A generation from now, I believe we will look back and find it peculiar that online services weren’t always designed with children in mind. When my grandchildren are grown and have children of their own, the need to keep children safer online will be as second nature as the need to ensure they eat healthy, […]


“A generation from now, I believe we will look back and find it peculiar that online services weren’t always designed with children in mind. When my grandchildren are grown and have children of their own, the need to keep children safer online will be as second nature as the need to ensure they eat healthy, get a good education or buckle up in the back of a car.”
– Information Commissioner Elizabeth Denham

In May 2018, the European Union’s General Data Protection Regulation (GDPR) went into effect, recognizing for the first time within the European Union (EU) that children’s personal data warrants special protection. The United Kingdom’s Data Protection Act 2018 adopted GDPR within the United Kingdom and, among other things, charged the Information Commissioner’s Office (ICO) with developing a code of practice to protect children’s personal data online. The result is the Age Appropriate Design Code (also referred to as the Children’s Code), an ambitious attempt to childproof the digital world.

The Internet was not built with children in mind, yet children are prolific users of the Internet. The Children’s Code, which is comprised of fifteen “Standards,” seeks to correct that incongruity by requiring online services that children are likely to use to be designed with their best interests in mind.

For over twenty years, the U.S. Children’s Online Privacy Protection Act (COPPA) has been the primary source of protection for children’s privacy online. COPPA protects the privacy of internet users under 13 years old, primarily by requiring informed, verifiable consent from a parent or guardian. The Children’s Code, however, has much grander aspirations. It protects all children under 18 years old, asking companies to reimagine their online services from the bottom up.

The foundational principle of the Children’s Code calls for online services likely to be accessed by children under 18 years old to be designed and developed with the best interests of the child as a primary consideration. The Children’s Code is grounded in the United Nations Convention on the Rights of the Child (UNCRC), which recognizes that children have several rights, including the rights to privacy and to be free from economic exploitation; to access information; to associate with others and play; and to have a voice in matters that affect them.

To meet the best interests of the child, online services must comply with each of the applicable fifteen Standards. Those Standards are distilled below.

1. Assessing and Mitigating Risks
Compliance with the Children’s Code begins with a Data Protection Impact Assessment (DPIA), a roadmap to compliance and a requirement for all online services that are likely to be accessed by children under 18 years old. The DPIA must identify the risks the online service poses to children, the ways in which the online service mitigates those risks, and how it balances the varying and sometimes competing rights and interests of children of different age groups. If the ICO conducts an audit of an online service or investigates a consumer complaint, the DPIA will be among the first documents requested.
The ICO suggests involving experts and consulting research to help with this process. This might not be feasible for all companies. At a minimum, however, small- and medium-sized companies with online services that create risks to children will be expected to keep up to date with resources that are publicly available. More will be expected of larger companies.
While the Internet cannot exist without commercial interests, the primary consideration must be the best interests of the child. If there is a conflict between the commercial interests of an online service and the child’s interests, the child’s interests must prevail.

2. Achieving Risk-Proportionate Age Assurance
To adequately assess and mitigate risk, an online service must have a level of confidence in the age range(s) of its users that is proportionate to the risks posed by the online service. The greater the risk, the more confidence the online service must have.
The ICO identifies several options to obtain what it calls “age assurance,” which can be used alone or in combination depending on the circumstances. Age assurance options include self-declaration by users (a/k/a age gates), artificial intelligence (AI), third-party verification services, and hard identifiers (e.g., government IDs). Less reliable options, like age gates, are only permitted in low-risk situations or when combined with other age assurance mechanisms.

Achieving an adequate level of confidence will be challenging. The Verification of Children Online (VoCO), a multi-stakeholder child online safety research project led by the U.K.’s Department for Digital, Culture, Media & Sport (DCMS), is attempting to address that challenge. The VoCO Phase 2 Report provided the following potential flow as an example:
[F]or a platform that needs a medium level of confidence, a user could initially declare their age as part of the onboarding process, and alongside this an automated age assurance method (such as using AI analysis) could be used to confirm the declared age. If this measure suggests a different age band than that stated, which reduces confidence in the initial assessment, a request could be made to validate the user’s age through a verified digital parent.

Ultimately, if an online service is unable to reach an appropriate level of confidence, it has two options: 1) take steps to adequately reduce the level of risk; or 2) apply the Children’s Code to all users, even adults.
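
As a rough illustration of that layered, risk-proportionate approach (the thresholds and return values below are purely hypothetical, not ICO or VoCO guidance), a decision flow along the lines of the VoCO example might look like this in Python:

  def assurance_decision(declared_age, estimated_age, risk):
      # Tolerance and risk tiers are purely illustrative.
      ages_agree = abs(declared_age - estimated_age) <= 2
      if risk == "low":
          return "accept self-declaration (an age gate alone is enough at low risk)"
      if ages_agree:
          return "accept declared age (two independent signals agree)"
      # Signals conflict and the service needs more confidence than an age gate provides.
      return "escalate: validate age through a verified digital parent, or apply the Code to this user"

For example, assurance_decision(15, 11, "medium") lands in the escalation branch, mirroring the report’s suggestion of validating the user’s age through a verified digital parent.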

3. Setting High Privacy by Default
For all children, high privacy must be the default setting. This means an online service may only collect the minimum amount of personal data needed to provide the core or most basic service. Additional, optional elements of the online service, for example to personalize offerings, would have to be individually selected and activated by the child. To illustrate this point, the ICO uses the example of a music download service.


High privacy by default also means that children’s personal information cannot be used in ways that have been shown to be detrimental. Based on specific Standards within the Children’s Code, this means the following must be turned off by default:

  • Profiling (for example, behavioral advertising);
  • Geolocation tracking;
  • Marketing and advertising that does not comply with The Committee of Advertising Practice (CAP) Code in the United Kingdom;
  • Sharing children’s personal data; and
  • Utilizing nudge techniques that lead children to make poor choices.

To turn these on, the online service must be able to demonstrate a compelling reason and adequate safeguards.
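
One way to picture these Standards in practice (a sketch with hypothetical setting names, not ICO-prescribed configuration) is a defaults object in which every detrimental use starts switched off and can be enabled only after a compelling reason and safeguards have been recorded, for example in the DPIA:

  # Hypothetical setting names mirroring the list above; everything detrimental is off.
  CHILD_DEFAULTS = {
      "profiling_and_behavioural_ads": False,
      "geolocation_tracking": False,
      "marketing_outside_cap_code": False,
      "personal_data_sharing": False,
      "nudge_techniques": False,
  }

  def enable_feature(settings, feature, compelling_reason, safeguards):
      # Refuse to flip a default without a recorded justification and safeguards
      # (in practice, both belong in the service's DPIA).
      if not compelling_reason or not safeguards:
          raise ValueError(f"cannot enable {feature!r} without a compelling reason and safeguards")
      updated = dict(settings)
      updated[feature] = True
      return updated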

4. Making Online Tools Available
Children must be given the tools to exercise their privacy rights, whether it be opting into optional parts of a service or asking to delete or get access to their personal information. The tools should be highlighted during the start-up process and must be prominently placed on the user’s screen. They must also be tailored to the age ranges of the users that access the online service. The ICO encourages using easily identifiable icons and other age-appropriate mechanisms.

5. Communicating Age-Appropriate Privacy Information
The Children’s Code requires all privacy-related information to be communicated to children in a way they can understand. This includes traditional privacy policies, as well as bite-sized, just-in-time notices. To help achieve this daunting task, the ICO provides age-based guidance. For example, for children 6 to 9 years old, the ICO recommends providing complete privacy disclosures for parents, while explaining the basic concepts to the children. If a child in this age range attempts to change a default setting, the ICO recommends using a prompt to get the child’s attention, explaining what will happen and instructing the child to get a trusted adult. The ICO also encourages the use of cartoons, videos and audio materials to help make the information understandable to children in different age groups and at different stages of development.

For connected toys and devices, the Children’s Code requires notice to be provided at the point of purchase, for example, a disclosure or easily identifiable icon on the packaging of the physical product. Disclosures about the collection and use of personal data should also be provided prior to setup (e.g., in the instructions or a special insert). Anytime a connected device is collecting information, it should be obvious to the user (e.g., a light goes on), and collection should always be avoided when in standby mode.

6. Being Fair
The Children’s Code expects online services to act fairly when processing children’s personal data. In essence, this means online services must say what they do, and do what they say. This edict applies not just to privacy disclosures, but to all published terms, policies and community standards. If, for example, an online service’s community standards prohibit bullying, the failure to enforce that standard could result in a finding that the online service unfairly collected a child’s personal data.

Initial implementation of the Children’s Code will be a challenge. User disruption is inevitable, as are increased compliance and engineering costs. The return on that initial investment, however, will hopefully make it all worthwhile. If Commissioner Denham’s vision is realized, the digital world will become a safe place for children to socialize, create, play, and learn.

This article has been published in PLI Chronicle, https://plus.pli.edu.

If you have more questions about the Age Appropriate Design Code or want to learn more about our program, please reach out to us through our Contact page. Be sure to follow us on Twitter and LinkedIn for more privacy-related updates.

The UK’s Age Appropriate Design Code: 5 Steps to Get You Started https://www.esrb.org/privacy-certified-blog/the-uks-age-appropriate-design-code-5-steps-to-get-you-started/ Wed, 14 Oct 2020 14:59:30 +0000 https://www.esrb.org/?p=3890 On September 2, 2020, the United Kingdom’s Age Appropriate Design Code (Code) went into effect. There is, however, a 12-month transition period to allow companies to bring their online services—including websites, mobile apps and connected toys and devices—into compliance. While 12 months might seem like a lot of time, it is not. There is much […]

On September 2, 2020, the United Kingdom’s Age Appropriate Design Code (Code) went into effect. There is, however, a 12-month transition period to allow companies to bring their online services—including websites, mobile apps and connected toys and devices—into compliance. While 12 months might seem like a lot of time, it is not. There is much work to be done. To get started, we recommend taking 5 steps.

1. Begin Conducting a Data Protection Impact Assessment.
The Code applies to all online services that children under 18 years old are likely to access—in other words, most online services. And if the Code applies, then a Data Protection Impact Assessment (DPIA) is required. If you have not done a DPIA, this is the time. If you have done one, update it with the Code in mind.

Not sure if the Age Appropriate Design Code applies to your online service? Read more here.

The DPIA should be your road map to compliance with the Code. It should identify risks your online service poses to children, and then ways in which you plan to mitigate those risks. It should memorialize the varying and sometimes competing rights and interests of children of different age groups, and how you have balanced those rights and interests. Ultimately, the best interests of the child must be your primary consideration, even trumping your own commercial interests. Familiarize yourself with the UN Convention on the Rights of the Child and the General Comment on Children’s Rights in Relation to the Digital Environment.

The DPIA will take time to complete. While it should be started early, it will be a living document that is updated as new risks are identified and new solutions implemented.

It should be a multi-departmental effort, pulling from the design and development, marketing, data security, and legal teams, at a minimum. However, your Data Protection Officer should head the project.

Keep in mind that if the UK’s Information Commissioner’s Office (ICO) conducts an audit of your online service or investigates a complaint, its first ask will likely include a copy of your DPIA. If you have never done one and you are not sure where to get started, the ICO provides a helpful template on its Children’s Code Hub.

2. Take Steps to Know Your Users.
To conduct a proper DPIA, you will need to determine the level of confidence you have in the age ranges of your users. Specifically, what children are using or are likely to use your online service?

If you do not plan to apply the Code to your online service because you do not believe children under 18 years old are likely to access it, you must be prepared to defend that decision to the ICO. The ICO will expect evidence to support your decision. Do you have empirical data of your users’ ages? Have you conducted a user survey or done consumer research? If not, you may have work to do to satisfy the ICO.

Ultimately, the greater your uncertainty, the greater the risk and, therefore, the greater the need to mitigate. This might include eliminating elements of your online service especially risky to children or taking steps to limit children’s access. Please keep in mind, however, that the ICO does not want to see an age-gated Internet. In fact, according to the ICO, the use of age gates—i.e., where a user declares his or her age—is only appropriate in low-risk situations or where additional safeguards are in place.

3. Plan for “High Privacy” by Default.
The ICO seems to want “high privacy” to be the default setting for all users, but it is only required for users under 18 years old. High privacy by default means:
• Only collecting personal data needed to provide your “core” service;
• Allowing children to opt into optional elements of your service that require the additional collection and use of personal data, and minimizing the personal data you collect for those additional elements; and
• Turning off “detrimental uses,” like profiling, data sharing, and geolocation tracking, by default and only allowing them to be turned on when there is a compelling need and adequate protections in place.
The ICO’s guidance illustrates this point with an example for a music download service.

4. Begin Developing Online Tools.
Children must be given tools within your online service to make choices and exercise rights. This should include, for example, the ability to opt into and opt out of optional elements of your service, request the deletion of their personal data, and obtain access to their personal data. These tools must be highlighted to the child during the start-up process, prominently placed on the screen, and age appropriate.
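
As a bare-bones illustration (hypothetical names, in-memory storage, no claim of completeness), such tools might boil down to something like the following Python sketch, which lets a user opt into or out of individual optional features, obtain a copy of their data, and request deletion:

  class PrivacyTools:
      # In-memory stand-in for whatever real storage the service uses.
      def __init__(self):
          self.users = {}

      def opt_in(self, user_id, feature):
          # Optional elements must be individually selected, never bundled.
          self.users.setdefault(user_id, {})[feature] = True

      def opt_out(self, user_id, feature):
          self.users.setdefault(user_id, {})[feature] = False

      def access_request(self, user_id):
          # Give the child a copy of the personal data held about them.
          return dict(self.users.get(user_id, {}))

      def deletion_request(self, user_id):
          # Honour erasure requests.
          self.users.pop(user_id, None)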

5. Work on Age Appropriate Privacy Notices.
In addition to your standard privacy policy intended for adults, the Code requires privacy disclosures that are understandable and accessible to your child users. If your online service is accessed by or likely to be accessed by children in different age groups, appropriate disclosures will need to be tailored to each of those age groups. For children 6 to 9 years old, for example, the ICO expects you to explain the basic concepts of your online service’s privacy practices and online tools. You are encouraged to use cartoons, videos, and audio materials to make the disclosures child friendly. Older teens, in contrast, should be given more detail and more choices.
Moreover, you are expected to do more than just post a privacy policy. If, for example, a child attempts to opt into a lower privacy setting, you are expected to display an age appropriate notice. Children should be encouraged to get a parent or other trusted adult involved. They should be told what personal data will be collected if the default setting is changed and how that information will be used. If the personal data will be shared with a third party, the child should be given a separate opt-in choice for the sharing or, at a minimum, a clear and age appropriate notice that the data will be shared. Any risks should be highlighted, and children should be encouraged to keep the default setting if they are unsure or do not understand. The ICO provides a sample notice illustrating this kind of disclosure.

If you have more questions about the Age Appropriate Design Code or want to learn more about our program, please reach out to us through our Contact page. Be sure to follow us on Twitter and LinkedIn for more privacy-related updates.
