AI Forward: FPF’s Annual DC Privacy Forum Explores Intersection of Privacy and AI

The Future of Privacy Forum (FPF) hosted its inaugural DC Privacy Forum: AI Forward on Wednesday, June 5th. Industry experts, policymakers, civil society, and academics explored the intersection of data, privacy, and AI. At the InterContinental on Washington, DC's Southwest Waterfront, participants joined in person for a full-day program of keynote panels, AI talks, and debates moderated and led by FPF experts.

AI and FPF Experts Take the Stage 

Keynote Panels and AI Talks

FPF CEO Jules Polonetsky kicked off the day with welcoming remarks and announced the launch of FPF’s new Center for Artificial Intelligence, which is headed by Anne J. Flanagan, FPF’s VP for AI, and focuses on AI policy and governance. The Center is supported by a Leadership Council of experts from around the globe, consisting of members from industry, academia, civil society, and current and former policymakers.

FPF Board Chair Alan Raul joined the stage to give opening remarks and introduce keynote speaker Adam Russell, Chief Vision Officer of NIST's AI Safety Institute. Russell presented an overview of what the US AI Safety Institute aims to achieve in AI safety, how, and why, including its work to help build collective intelligence.

FPF's Director of Youth and Education, David Sallay, kicked off the first AI Talk Session along with Colleen McClain, Research Associate at the Pew Research Center. Sallay discussed the recent FPF report, "Vetting Generative AI Tools for Use in Schools," which offers a checklist designed specifically for K-12 schools, outlining key considerations when incorporating generative AI into a school or district's edtech vetting process.

McClain presented a new Pew survey analysis on youth and AI that included thought-provoking views and experiences of teenagers aged 13-17 and their parents, as well as the views of K-12 teachers in the U.S. One key insight revealed that U.S. adults view privacy as a main concern when it comes to trusting, or not trusting, the use of AI.

FPF’s Anne J. Flanagan moderated a keynote panel, “Risk Assessments: Up to the Task?” with Ed Britan, Senior Vice President, Global Privacy & Marketing Legal, Salesforce; Barbara Cosgrove, Vice President and Chief Privacy Officer, Workday; and Katherine Fick, Associate General Counsel, IBM. These leading privacy experts explored how companies can evaluate risk factors when it comes to developing or deploying AI. This included what can be learned from previous privacy risk assessments, advice for those daunted by regulations and standards, guidance for those who are new to AI governance, and what makes these AI assessments different from those that have come before. 

Shifting into the afternoon, FPF Board Member Agnes Bundy Scanlan moderated the second AI Talk, "Is Algorithmic Fairness Even Possible?" with Arvind Narayanan, Professor of Computer Science at Princeton University. During his presentation, Prof. Narayanan argued that algorithmic fairness has not been particularly impactful: most AI products do not succeed, broken AI perpetuates broken institutions, and fixing algorithms will not solve systemic problems in our society. He also gave recommendations for policymakers and regulators, such as establishing standards for efficacy and managing explanation and contestability.

Next, FPF's Policy Counsel for Data, Mobility, and Location, Adonne Washington, led the panel, "AI & The Future of Work," featuring Keith Sonderling, Commissioner, U.S. Equal Employment Opportunity Commission (EEOC), and Lael Bellamy, Partner at DLA Piper, on concerns of bias and discrimination, as well as the potential of AI-driven tools for fostering inclusive workplaces. Commissioner Sonderling argued that AI technology can help us make better and more transparent employment decisions; however, he stressed that AI must be used properly. Bellamy added that AI tools could reinforce society's legacy of bias, referencing tools like ChatGPT, which draw their information from public sources such as Reddit and Wikipedia and can regurgitate skewed knowledge.

Global Convergence and Hyperlocal Regulation

FPF's Tatiana Rice moderated "AI Legislation: States to the Rescue?" with Delegate Michelle Maldonado (D-VA), of the 2024 Virginia House of Delegates Communications, Technology, and Innovation Committee, and Senator Robert Rodriguez, Majority Leader of the Colorado General Assembly; both discussed the importance of recent privacy laws passed in their respective states. "Technology moves at the speed of light, and legislation moves at the speed of molasses," stated Del. Maldonado on AI governance. Senator Rodriguez discussed the Colorado AI Act (CAIA) and how looking to previous legislation, such as the Colorado Privacy Act (CPA), was a helpful component in writing new privacy bills.

FPF’s Senior Counsel for Global Privacy, Lee Matheson, then moderated “Global Convergence or Competition for Regulatory Leadership” with Anupam Chander, Scott K. Ginsburg Professor of Law and Technology, Georgetown University Law Center. Prof. Chander outlined the main areas of convergence on AI regulation globally.

FPF AI Debates 

In one of the most energetic parts of the day, FPF's Director for U.S. Legislation, Keir Lamont, kicked off the AI Debates, moderating the first session, "Resolved: Data minimization is compatible with the development of artificial intelligence," featuring Omer Tene, Partner, Goodwin Procter LLP, opposing the position, and Samir Jain, Vice President of Policy, CDT, arguing in favor of it.

Tene argued that data minimization is antithetical to the development of AI. The essence of AI is the ability to discover new trends and correlations, Tene argued, and by definition, minimizing data limits AI's intelligence. Jain disagreed, explaining that more data is not necessarily better, nor is it necessary for the development of AI. He added that what AI derives depends on its data, and certain sites are not necessarily reliable sources. Audience members were given the opportunity to vote on which position they agreed with. In the end, the vote was split 50% opposed and 50% in favor, with Jain having swayed more attendees to his side over the course of the debate.

This was followed by the second debate, "Resolved: APRA Strikes the Right Balance For the Future," featuring Jennifer Huddleston, Senior Fellow in Technology Policy, Cato Institute, arguing against the resolution, and Cameron Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Brookings Institution, arguing in favor.

Huddleston expressed concerns with the American Privacy Rights Act (APRA) provisions, ranging from how data minimization can cause problems for AI to consequences for the next generation of innovators; she also questioned whether APRA would improve the underlying situation for consumers. Kerry argued that it is long past time to establish comprehensive privacy regulation and that APRA is an opportunity to address gaping holes in the current system, further noting that APRA would put in place civil rights protections, baseline protections for algorithms, and more. In this second AI debate, the audience vote once again ended at 50% opposed and 50% in favor.

Keynote Fireside Chat

In the late afternoon, FPF's Senior Director for U.S. Policy, Stacey Gray, sat down for a fireside chat with Samuel Levine, Director of the Federal Trade Commission's Bureau of Consumer Protection, who discussed how the Commission is prepared to hold companies accountable when it comes to protecting consumers' individual data, including deterring AI-enabled impersonation and working to understand how AI can be used to disguise advertising. "Trust is the foundation of any market economy," Levine explained, adding that it is critical for the government to do better and to collaborate with industry when it comes to keeping consumers safe.

FPF Workstream Lightning Talks 

To close out the day, FPF featured four Lightning Talks on the intersection of AI with various emerging technologies and/or regulations. 

On AI and XR, FPF's Senior Policy Analyst for Immersive Technologies, Jameson Spivack, delved into two parallel trends in technology development: 1) AI is being integrated into new products, and 2) AI technologies are becoming more immersive. Spivack noted that if this further integration of AI and XR is not done responsibly, applications of immersive technology can raise substantial risks.

On AI and Cybersecurity, FPF's Senior Technologist for Youth & Education Privacy, Jim Siegl, talked about cybersecurity as one of the foundations of AI trust and how AI can be subject to novel security vulnerabilities alongside standard ones. He focused on confidentiality, integrity, and availability, with potential confidentiality risks including generative AI enhancing phishing or malware development. AI tools also raise the prospect of attackers manipulating the behavior of Large Language Models (LLMs) both directly and indirectly, and each of these risks can be reduced but not eliminated, he continued.

Regarding Generative AI in the Asia Pacific region, FPF's APAC Managing Director, Josh Lee, explained why the APAC region is an emerging leader in AI regulation. Lee highlighted that AI is transnational, that AI tools are widely used across the region, and that most companies have a presence in the area. He noted how the APAC region is becoming a major international thought leader with respect to AI technology and AI governance, and highlighted the recent FPF report that provides a comprehensive overview of how generative AI systems work and of key governance frameworks across five jurisdictions: Australia, China, Japan, Singapore, and South Korea.

Moving over to the EU, FPF's Policy Counsel for Global Privacy, Christina Michelakaki, offered insights on initiatives from EU Data Protection Authorities (DPAs) and the UK Information Commissioner's Office (ICO) concerning the processing of personal data in the context of AI applications. She noted that while the GDPR does not explicitly mention AI, it is a technologically neutral law that applies to any technology involving the use of personal data, such as for training, testing, or deploying an AI system. Therefore, when personal data is used, all of the GDPR's principles apply, with fairness, transparency, and accountability being of particular relevance.

Evening Awards and 15th Anniversary Dinner Reception

After a full and engaging day of AI policy talks, debates, and discussions, FPF closed the inaugural DC Privacy Forum: AI Forward by presenting Christopher Wolf, FPF Founder and Founding Board President, with the Legacy of Excellence Award for his impactful 15-year tenure. FPF Board Chair Alan Raul, FPF Board Member Dale Skivington, and FPF CEO Jules Polonetsky presented Wolf with the award.

A big thank you to all of those who participated in our inaugural DC Privacy Forum: AI Forward! We hope to see you next year. For updates on FPF work, please visit FPF.org for all our reports, publications, and infographics, follow us on Twitter/X and LinkedIn, and subscribe to our newsletter for the latest.

