In March, the Federal Trade Commission hosted the first-ever Technology Forum for members of the International Competition Network (ICN) in Washington, D.C. The forum brought together 21 competition agencies from around the globe, from Brazil and Japan to South Africa and Sweden.
During the two-day event, representatives from these agencies engaged in discussions about tech-related topics including artificial intelligence (AI), commercial surveillance, privacy, and security, and shared best practices. Representatives also shared their experiences with building and expanding technical capacity within their agencies, including hiring in-house experts and developing tools that strengthen their agency missions. As an outcome, two dozen agencies signed a joint statement on increasing tech capacity to keep pace with the growing digitization of the economy.
This post reflects on some key topics that participants discussed during the roundtable sessions. Overall, the event highlighted that while ICN enforcers bring diverse perspectives and cultural nuances, their best practices overlap significantly, and they hold similar views on the benefits and harms they observe in the marketplace.
- Downstream risks from concentration and vertical integration. Participants noted that, throughout the layers of the technology stack that powers AI (including chips and cloud infrastructure, data and models, and front-end consumer applications), market concentration within layers and vertical integration across stacks pose downstream risks that can impact consumers. Some attendees mentioned that their consumers and businesses rely on AI data and models developed, trained, and optimized according to the norms of other countries. Some highlighted that their countries have budding startups at the model and consumer-application layers, but it is unclear how concentration and vertical integration across the AI stack might affect smaller players' ability to enter the space moving forward.
- Algorithms impact consumer prices. Various participants highlighted different methods and tools companies use to influence prices in ways that can adversely impact consumers and competition. These include algorithmic price setting and the use of expansive, highly personal, and sensitive information to set individualized prices. Some attendees outlined how the opacity of these algorithms can enable companies to collude, because it creates additional barriers to detecting such behavior.
- AI language gap across countries. AI developers scrape consumer content from the internet to power generative AI models, regardless of the content's location and language. However, participants from countries that are non-English speaking or have large populations of non-English speakers pointed out that while their data may be scraped to power these models, they may not reap the benefits of having their data used, because many AI models do not work as well (e.g., failing to comprehend requests or returning inaccurate results) when the inputs or outputs are not in English.
- Turbocharging fraud and scams. Many participants highlighted that AI-enabled consumer-facing applications can lead to an uptick in certain types of wide-ranging fraud and scams that already exist in their countries. They mentioned that generative AI exacerbates impersonation scams and deepfakes and enables the exploitation of users through deceptive advertising. The mix of frauds and scams varies by country, as some populations may be more resilient than others due to past experience with non-AI versions of these schemes.
- Consumer surveillance can harm not only privacy and security, but also competition. Participants highlighted that intermediary actors such as data brokers operate hidden from consumers, who are often unaware that such data collection exists. They also discussed how some companies collect large swaths of data to train their AI models, limit others' access to that data, and create hurdles so that users cannot easily switch providers.
Participants also discussed steps their agencies are taking to build digital capacity and adopt practices and structures to address the challenges of tech-related enforcement efforts.
- Agencies are sharpening their internal efficiency and collaboration tools. Participants showed interest in leveraging different types of technical expertise and in efficiently designing and deploying internal tools that allow their staff to be proactive in preventing, detecting, and monitoring harms.
- Digital capacity team structures reflect the diverse needs and capabilities of the agency. Participants illustrated a broad spectrum of resources, structures, and practices for integrating technologists in and across their agencies. Some teams are embedded within specific units; others are more centralized, with their work distributed across the agency.
- Hiring practices for tech talent differ. Participants expressed similar difficulties in hiring technologists, who have competitive alternatives across sectors including industry, academia, civil society, and research institutions. Whether seeking tech subject matter experts, data scientists and analysts, or engineers, agencies tailored their hiring efforts to their specific needs.
Agencies from jurisdictions around the world highlighted these technological promises and perils, which are grounded in the experiences of their consumers, workers, and small businesses. The gathering created a platform for participants to share best practices and methods to ensure that technologists can help their agencies, including the FTC, fulfill their missions.
Thank you to staff who contributed to this post: Jessica Colnago, Amritha Jayanti, Stephanie Nguyen, Paul O'Brien