From July 15 to 18, 2024, the Future of Privacy Forum (FPF) participated in Personal Data Protection Week 2024 (PDP Week), an event organized and hosted by the Personal Data Protection Commission of Singapore (PDPC) at the Marina Bay Sands Expo and Convention Centre in Singapore.
As with PDP Weeks of previous years, programming during PDP Week 2024 combined PDPC events with the International Association of Privacy Professionals (IAPP)’s annual Asia Privacy Forum. However, for the first time, the PDPC also scheduled its annual Summit on Privacy-Enhancing Technologies (PETs) in the Asia-Pacific (APAC) region during PDP Week.
Throughout the week’s events, FPF fostered robust discussions on data protection issues arising from new and emerging technologies, including generative AI. Below is a comprehensive summary of our participation and key takeaways from these significant engagements.
1. FPF, with the support of PDPC, hosted a hands-on workshop to equip regional privacy professionals with practical knowledge on the complexities of generative AI governance in the APAC region.
On July 15, 2024, with the support of PDPC, FPF hosted a hands-on workshop titled “Governance Frameworks for Generative AI: Navigating the Complexities in Practice.” This event aimed to equip members of the regional data protection community with practical knowledge on the operational and implementation complexities of generative AI governance. It drew upon the findings of FPF APAC’s year-long research project, “Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific” (FPF’s GenAI Report), which explored emerging governance frameworks for generative AI in APAC.
With a full house of 70 attendees, the workshop addressed rising concerns surrounding generative AI deployment risks, particularly in AI governance and data protection, highlighting guidelines and frameworks issued by data protection regulators across the APAC region. Participants engaged in dynamic discussions and took part in a practical exercise, gaining valuable insights into navigating the intricate landscape of generative AI governance.
Josh Lee Kok Thong, Managing Director of FPF APAC, hosted the event, which began with an introduction to FPF’s Center for AI by Anne J. Flanagan, FPF’s Vice President for AI. The event was structured in two parts: (1) an informational segment featuring presentations and a panel discussion, followed by (2) a practical, hands-on workshop.
1.1 The informational segment featured presentations by FPF and IMDA, as well as insights from industry and practice.
The informational segment included two presentations:
- Dominic Paulger, Policy Manager for APAC at FPF, shared key findings and takeaways from FPF’s GenAI Report.
- Darshini Ramiah, Manager (AI & Data Innovation) at the Infocomm Media Development Authority of Singapore (IMDA), provided an overview of Singapore’s Model AI Governance Framework for Generative AI, released in May 2024.
The industry sharing session that followed focused on key aspects of generative AI governance and deployment. The experts featured in this segment included:
- Barbara Cosgrove, Vice President, Chief Privacy Officer at Workday;
- David N. Alfred, Director and Co-Head of Data Protection, Privacy, and Cybersecurity at Drew & Napier; and
- Lee Matheson, Senior Counsel for Global Privacy at FPF.
The experts discussed strategies for selecting AI service providers, emphasizing the importance of internal policies and risk assessment. The panelists argued that while AI introduces new technologies and applications, it ultimately functions similarly to other systems and services, allowing companies to leverage existing frameworks for compliance and risk management. The panelists additionally noted that many existing laws and regulations will remain applicable to AI systems, including those governing the professional liabilities of users of AI systems.
- A key theme from the discussion was identifying red flags when engaging with AI service providers. One major red flag raised by a panelist is when a buyer or seller lacks a thorough understanding of the AI system under discussion. The panelists agreed that it is crucial for both sides to be well-informed about the technology and its implications, and to be wary of potential AI vendors that cannot provide in-depth explanations of their products.
- The discussion emphasized the need for transparency and communication between companies and their vendors. Companies should seek vendors willing to engage in open conversations about their practices, rather than those claiming 100% compliance without discussion. Instead of relying solely on standard certifications, companies should request detailed information, such as data sheets or labeling, to understand the specific practices of their AI service providers.
- Further, panelists considered transparency and communication crucial at multiple levels within the AI ecosystem. When AI service providers purchase hardware to run AI models, both buyer and provider need to be aware of the data sources and datasets involved, as these factors could impact their liability.
- For effective use of generative AI products, the panelists agreed on the importance of establishing a governance framework within an organization. This includes having clear guidelines for the responsible use of AI, such as for managing confidential and personal information. If a company has an acceptable use policy, it should ensure that its communication strategies are consistent with such a policy. Panelists also noted that managing vendor relationships can be complex, necessitating clear contractual agreements and governance structures.
- Panelists highlighted early-stage considerations for companies developing or deploying AI systems. They considered that security-by-design and privacy-by-design should be starting points for AI development and deployment. Engaging legal, regulatory, and compliance teams early in the process is essential for comprehensive risk management.
- The discussion highlighted the similarities between data protection principles and AI governance. Key data protection concepts, such as accuracy, minimization, and purpose limitation, are also relevant to AI data governance. Panelists emphasized that while data scientists and analysts may not always view their work through a legal lens, their activities often fall within data protection requirements.
The discussion concluded with insights on managing training data and model improvement while balancing innovation with ethical and regulatory compliance across international jurisdictions.
Photo: Industry sharing segment of the workshop on key aspects of generative AI governance and deployment, July 15, 2024. (L-R) Barbara Cosgrove, Lee Matheson and David N. Alfred.
1.2 The hands-on portion of the workshop engaged participants in a group exercise based on a realistic hypothetical scenario.
The final segment of the workshop engaged participants in a practical group exercise exploring the implementation of a hypothetical generative AI application modeled after ChatGPT by a fictitious private education services provider. Participants were divided into groups representing specific stakeholders relevant to the AI deployment lifecycle, such as the developer, deployer and user of the application, or a regulator, employee or in-house legal counsel. Each group was tasked with identifying and addressing potential concerns and risk areas from the perspective of their stakeholder. These discussions fostered a comprehensive understanding of the challenges posed by generative AI applications and provided valuable insights and a hands-on experience for organizations aiming to develop or deploy generative AI responsibly and in compliance with regulatory frameworks in the APAC region.
Photo: Participants presenting major takeaways from their table discussions, July 15, 2024.
Photo: Closing the workshop with a group photo of the FPF team, July 15, 2024. (L-R) First row: Bilal Mohamed, Anne J. Flanagan, Josh Lee, Sakshi Shivhare, Brendan Tan. (L-R) Second row: Lee Matheson and Dominic Paulger.
2. At the IAPP Asia Privacy Forum, FPF organized a panel to examine India’s landmark data protection legislation, and also participated in a panel on data sovereignty.
2.1. On July 18, FPF organized a panel titled “Demystifying India’s Digital Personal Data Protection Act”.
This panel was moderated by Bilal Mohamed, Policy Analyst for FPF’s Global Privacy Team, and featured as panelists:
- Rakesh Maheshwari, formerly Senior Director and Group Coordinator (Cyber Laws and Data Governance), Ministry of Electronics and IT of India (MeitY), providing a regulator’s perspective;
- Nehaa Chaudhari, Partner and head of the advisory and public policy practice at Ikigai Law, providing perspectives from the legal sector; and
- Ashish Aggarwal, Vice President, Public Policy at nasscom, providing industry perspectives.
The panelists examined India’s landmark legislation, the Digital Personal Data Protection Act 2023 (DPDPA), covering familiar concepts like notice and consent, data subject rights, data breaches, and cross-border data transfers, as well as new features of the law like significant data fiduciaries and consent managers.
Rakesh Maheshwari provided insights into MeitY’s thinking behind several key provisions of the DPDPA. On children’s privacy, he explained that the Government was concerned with ensuring the safety of children who access online platforms and so set the threshold for parental consent at 18 by default. However, he also highlighted that the DPDPA’s children’s privacy provisions are flexible: if platforms demonstrate that they process children’s personal data safely, then the age threshold could potentially be lowered. Rakesh also explained that consent managers are intended to centralize the management of consent across multiple, fragmented sources of data, such as health data held by labs, hospitals, and clinics, while ensuring data protection and providing data subjects with control over how their data is processed. He further addressed the relationship between MeitY and the Data Protection Board, clarifying that while the Government will establish subordinate rules to the DPDPA, the Board will act independently as an adjudicator. He emphasized the importance of close cooperation and harmonized operations between the Board and the Government.
Nehaa Chaudhari discussed the industry’s proactive approach to compliance, noting that many businesses in India have already started the compliance process, focusing on data mapping and obtaining consent from data subjects. She highlighted the industry’s hope for clarity on certain aspects of the DPDPA, particularly concerning children’s data and verifiable parental consent. She described two key aspects of verifying parental consent: obtaining the parent’s consent and establishing the parent-child relationship. Businesses are exploring various models and technological tools to address these requirements, including whether checkboxes are adequate for obtaining consent. She also pointed out that the DPDPA does not impose explicit duties on data processors and instead allows data fiduciaries and data processors to determine their respective responsibilities through contractual arrangements. While the DPDPA provides a baseline for compliance, Nehaa emphasized that sector-specific regulations might impose heightened obligations.
Ashish Aggarwal provided insights into how ready nasscom’s 3,000+ member companies are to comply with the DPDPA. He explained that business-to-business (B2B) companies that already comply with the GDPR could become DPDPA-compliant in around six months, as such companies should already have completed data mapping. However, he noted that for business-to-consumer (B2C) companies, GDPR compliance alone may not be sufficient, as there are significant differences between the GDPR and the DPDPA. He highlighted that some provisions of the DPDPA (especially breach notifications) still require clarification under forthcoming subordinate rules. However, he did not expect these rules to be as comprehensive as those under the GDPR.
Overall, the panel provided substantial insights into the challenges and opportunities presented by the DPDPA, offering actionable advice for navigating this new regulatory landscape.
Photo: FPF Panel on Demystifying India’s Digital Personal Data Protection Act, July 18, 2024. (L-R) Bilal Mohamed, Ashish Aggarwal, Rakesh Maheshwari, and Nehaa Chaudhari.
2.2 On July 17, FPF APAC Managing Director Josh Lee Kok Thong contributed to a panel on “Data Sovereignty: Nebulous and Evolving, But Here to Stay in 2024?”.
This panel delved into the complexities of data residency, data sovereignty, data localization, and cross-border data transfers within APAC’s evolving governance structures. The speakers explored the impact of data and privacy laws, noting the complexities added by data localization requirements and the diverse approaches of countries like China, Indonesia, India, and Vietnam.
Josh provided an overview of cross-border data flows in the APAC region, highlighting the concept of data sovereignty. He drew a distinction between “data sovereignty” – a conceptual framework for looking at data transfers – and “data localization” – a set of requirements rooted in laws or policies.
Photo: FPF APAC represented by Josh Lee on a panel on Data Sovereignty: Nebulous and Evolving, But Here to Stay in 2024? July 17, 2024. (L-R) Charmian Aw, Josh Lee, Darren Grayson Chng, Wei Loong Siow, and Denise Wong.
3. FPF was represented in two sessions at the PETs Summit held on July 16, 2024.
3.1. FPF Vice President for AI, Anne J. Flanagan, spoke on the panel “Architecting New Real-World Products and Solutions with PETs.”
The panel discussed how companies have leveraged PETs to innovate and create new products and solutions by participating in the IMDA’s PET Sandbox, a regulatory sandbox initiative set up by the PDPC that offers companies the opportunity to collaborate with PET digital solution providers to develop use cases and pilot PETs. Panelists offered valuable insights into the business case for integrating PETs and how doing so contributes to sustained success in an increasingly data-driven business environment.
Anne discussed the integration of PETs in AI product development, highlighting their potential to balance innovation with privacy protection. She emphasized that PETs are not a one-size-fits-all solution but rather a tool to address various privacy challenges. Anne stressed the importance of incorporating PETs within a comprehensive company framework to effectively tackle these issues. She also announced the launch of FPF’s recent report on Confidential Computing. This report offers an in-depth analysis of the technology’s role in data protection policy, detailing its fundamental aspects, applications across various sectors, and crucial policy considerations.
3.2. FPF APAC Managing Director Josh Lee Kok Thong chaired a roundtable titled “Unleashing The Data Economy: Identifying Challenges, Building Use Cases & How PETs Help Address Generative AI Concerns.”
This session focused on exploring privacy challenges in specific use cases and the application of PETs to mitigate these concerns. The roundtable delved into the data economy, individual use cases, privacy challenges, and the intersection of PETs with generative AI. Key highlights included building an AI toolbox, identifying challenges and use cases, choosing and implementing PETs, and using PETs to balance innovation with privacy.
4. FPF organized exclusive side events on July 18, 2024, to foster deeper engagement with key stakeholders.
4.1 FPF hosted an invite-only Privacy Leaders’ Luncheon at Marina One West Tower.
This closed-door event provided a platform for around 30 senior stakeholders of FPF APAC to discuss pressing challenges at the intersection of AI and privacy, with a particular focus on the APAC region. During the session, FPF Vice President for Artificial Intelligence Anne J. Flanagan introduced FPF’s new Center for AI to APAC stakeholders, highlighting our ongoing commitment to advancing AI governance.
4.2 FPF co-hosted a networking cocktail event with Rajah & Tann at Marina Bay Sands Expo and Convention Centre.
Later in the evening, on July 18, FPF APAC toasted with old and new friends and discussed the challenges and opportunities in AI and privacy. At the event, we were privileged to have the following distinguished speakers share brief remarks:
- Denise Wong, Deputy Commissioner, Personal Data Protection Commission of Singapore.
- Steve Tan, Deputy Head, Technology, Media & Telecommunications and Partner at Rajah & Tann.
- Anne J. Flanagan, Vice President for AI at FPF.
- Josh Lee Kok Thong, Managing Director of FPF APAC.
This event facilitated meaningful connections and discussions among the attendees, further strengthening FPF’s partnerships and friendships within the data protection community.
5. Conclusion
FPF is proud of its participation in PDP Week 2024, the IAPP Asia Privacy Forum 2024, and the PETs APAC Summit, which helped drive forward discussions on data protection and AI governance in the APAC region. FPF’s workshop on generative AI governance, insightful panel discussions, and exclusive networking events underscored our commitment to fostering collaboration and knowledge-sharing among industry, academia, regulators, and civil society.
As we look ahead, FPF remains dedicated to advancing the discourse on privacy and emerging technologies, ensuring that we continue to navigate the complexities of the digital age with a balanced and informed approach. We are grateful for the support of the PDPC, IAPP, and all our members, partners and participants who contributed to the success of these events.