The scientific community has a history of creating and sharing lists of important or interesting questions. The mathematician Paul Erdős, for example, famously doled out his questions to students and colleagues, some of which are unsolved to this day.[1] The Clay Mathematics Institute has its list of seven Millennium Problems,[2] six of which remain unsolved. Among these is the P = NP problem,[3] a fundamental question in computational complexity theory and obviously the Office of Technology’s favorite of the seven. Just as the scientific frontier “is still open and abounds in important problems,”[4] the understanding of how science and technology impact people will always be growing.
Fortunately, not all of these questions require “a solution of the deepest, most difficult problems,”[5] like P = NP. Indeed, from commercial surveillance to market dynamics across the layers of the AI tech stack, the FTC has a track record of fostering knowledge exchange on a broad array of topics related to changes in the tech ecosystem. But the Commission also encounters rapidly evolving issues that could benefit from further research and insight.[6]
At the FTC, the rapid development and deployment of technology is informing efforts across the agency as the Commission continues to work to promote fair competition and protect Americans from unfair or deceptive tactics.[7] The Office of Technology (OT), whose mandate[8] is in part to highlight market trends and emerging technologies that impact the agency’s work, regularly engages with external stakeholders through workshops, research conferences, and conversations to understand key trends and developing issues. OT surfaces and answers key emerging research questions that are relevant to agency work. For example, staff published[9] a deep dive into the technical side of the FTC’s cases on digital health platforms that explored third-party pixel tracking – and highlighted questions where continued research could prove useful. For decades, the agency has held workshops that put questions and areas of focus up for discussion with outside stakeholders and experts, including workshops on data portability,[10] connected cars,[11] facial recognition,[12] and dark patterns.[13] Staff have also hosted panels and conferences such as PrivacyCon[14] and the Tech Summit on AI[15] to highlight some of these areas of interest.
In this post, the team highlights some questions of interest to the Office of Technology. This list of questions for further research does not indicate a particular agency position on a given topic. The agency is not requesting submissions in response to the questions posed, nor is the agency commissioning research. OT is interested in learning more about these issues and recognizes that researchers can draw on a range of information, including expertise from professional disciplines, practical and lived experience, anecdotal evidence, and a variety of research methods.
OT welcomes continued dialogue with researchers, civil society organizations, practitioners, and consumers about pressing technological issues facing the public – whatever they may be. A few issues are of current interest:
AI-enabled Fraud and Scams – On AI-enabled technologies that may turbocharge fraud and scams:[16]
1a. In what ways is AI turbocharging fraud and scams? What safeguards, if any, do major platforms (e-commerce, social media, ad tech, etc.) have in place to prevent these harms, and to what extent are they effective?
AI and Competition – On generative AI impacts across layers of the tech stack:[17]
2a. When firms have prior access to large datasets that can be used to train generative AI models, in what ways are competition and innovation impacted? Are there notable examples?
2b. In what ways are firms with market power in markets adjacent to AI, such as productivity tools, social media, and search, leveraging that market power to gain an advantage in evolving AI markets?
2c. How does vertical business integration across the AI tech stack (chips, cloud, models, and consumer-facing products and services) impact competition and innovation? Are there notable examples?
Algorithmic Pricing – On price fixing by algorithms:[18]
3a. In what ways are companies adopting algorithms to determine prices and offerings (including business-to-business prices and bids)? What examples of these algorithmic systems and their designs exist in the market? More generally, how are companies developing and deploying both off-the-shelf and bespoke pricing algorithms? (A minimal illustrative sketch of one such rule follows these questions.)
3b. How do different third-party pricing platforms treat competitively sensitive information that enters algorithmic systems, and which platforms integrate publicly available competitor data?
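To make these questions concrete, here is a minimal, entirely hypothetical sketch of a bespoke repricing rule of the kind question 3a contemplates: it ingests observed competitor prices (the competitively sensitive inputs question 3b asks about) and mechanically sets a seller’s price. Every name and parameter below is an illustrative assumption, not a description of any real system.

```python
from statistics import mean

def reprice(own_cost: float, competitor_prices: list[float],
            floor_margin: float = 0.10, undercut: float = 0.01) -> float:
    """Toy repricing rule: undercut the average observed competitor
    price by 1%, but never price below cost plus a minimum margin."""
    floor = own_cost * (1 + floor_margin)              # cost-plus price floor
    target = mean(competitor_prices) * (1 - undercut)  # undercut the market average
    return round(max(floor, target), 2)

# Hypothetical inputs: competitor prices obtained from a third-party feed,
# exactly the kind of data flow question 3b is interested in.
print(reprice(own_cost=8.00, competitor_prices=[10.49, 10.99, 11.25]))
```

Even a rule this simple illustrates the dynamic behind question 3b: if many sellers feed the same observed prices into the same third-party rule, their prices can move together without any explicit agreement.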
Surveillance & Data Privacy – On tracking mechanisms,[19] mass data collectors,[20] and privacy-enhancing technologies:[21]
4a. In what ways do first-party data collectors engage in data sharing within and between their own services, and how does this sharing impact consumers and competition?
4b. How might commercial surveillance tracking mechanisms on traditional devices (e.g., mobile, tablet, desktop) and other devices (connected cars, TVs, point-of-sale systems, etc.) evolve in the next few years, particularly given current and potential moves toward privacy-protective defaults? What notable trends or examples exist in the collection of location data, particularly surreptitious techniques that may circumvent user preferences or that are difficult for a person to avoid?
4c. In what ways are firms engaging in commercial surveillance[22] to implement individualized and targeted pricing? What are notable examples?
4d. What are emerging ways in which individual data is being shared, aggregated, and sold between entities? What claims are being made about the privacy protection or consumer control of that data? How accurate are those claims (e.g., claims about “data clean rooms,” claims about the privacy of data in real-time bidding, use of alternative unique identifiers, roles and practices of “identity resolution providers,” use and misuse of industry consent technologies, use of Privacy Enhancing Technologies (PETs)[23])?
4e. Which PETs enhance – or do not enhance – the privacy of consumers in various use cases and implementations, and how? What best and worst practices have emerged in various use cases? (A toy sketch of one common PET follows these questions.)
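As one concrete illustration for question 4e, here is a toy sketch of a single PET: a differentially private count that adds Laplace noise to mask any one individual’s contribution. Whether an implementation like this actually protects privacy depends on the sensitivity analysis, the epsilon chosen, and everything around the code; this is a teaching sketch under those stated assumptions, not a production mechanism.

```python
import random

def dp_count(responses: list[bool], epsilon: float = 1.0) -> float:
    """Toy epsilon-differentially-private count. Each person changes the
    true count by at most 1 (sensitivity = 1), so Laplace noise with
    scale 1/epsilon masks any single individual's presence."""
    true_count = sum(responses)
    # A Laplace(0, 1/epsilon) draw: the difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical survey: which users opted out of tracking?
print(dp_count([True, False, True, True, False], epsilon=0.5))
```

The same few lines could be written with too loose a sensitivity bound or too large an epsilon and offer little real protection – which is precisely why question 4e asks about best and worst practices in implementation.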
Data Security – On addressing security vulnerabilities systemically:[24]
5a. What factors might encourage developers to adopt techniques for systematically addressing vulnerabilities? What layers of the tech stack would benefit from integrating these approaches? How can automated and human-facilitated tooling help accelerate the adoption of these approaches (e.g., migrating SQL queries to parameterized query APIs or migrating code to memory-safe languages)? (A before-and-after sketch of the SQL migration follows these questions.)
5b. How can the usability and accessibility of advanced cryptographic techniques (e.g., anonymous credentials, multi-party computation, homomorphic encryption) be improved to encourage greater adoption by developers when such adoption is appropriate?[25] (A toy secret-sharing sketch also follows these questions.)
5c. How common are security vulnerabilities in code generated by large language models (LLMs)? What factors contribute to these vulnerabilities? What techniques can reduce the likelihood that an LLM generates vulnerable code?
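To ground the parenthetical example in question 5a – and the vulnerability class question 5c contemplates, since injection-prone string-building has been reported in LLM-generated code – here is a minimal before-and-after sketch of migrating a concatenated SQL query to a parameterized query API. The schema and inputs are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Before: string concatenation lets the input rewrite the query itself.
rows = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated:", rows)   # every row comes back; the injection succeeded

# After: the parameterized API treats the input strictly as data.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized:", rows)  # empty; no user has that literal name
```

Because the before and after differ so mechanically, this is the kind of migration that automated or human-facilitated tooling could plausibly accelerate at scale, which is what question 5a asks about.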
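Question 5b names several advanced cryptographic techniques; as a usability illustration, here is a toy three-party additive secret-sharing sum, the arithmetic core of many multi-party computation protocols. Real deployments add networking, protections against misbehaving parties, and vetted libraries; this sketch shows only the underlying idea.

```python
import secrets

MODULUS = 2**61 - 1  # all arithmetic is done modulo a fixed prime

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a value into additive shares; any strict subset of shares
    looks uniformly random, so no subset short of all parties learns it."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

# Each party secret-shares a private input (hypothetically, a salary).
inputs = [52_000, 61_000, 48_000]
all_shares = [share(v) for v in inputs]

# Party i locally sums the i-th share of every input...
partial_sums = [sum(column) % MODULUS for column in zip(*all_shares)]

# ...and only the recombined total is revealed, never an individual input.
print(sum(partial_sums) % MODULUS)  # 161000
```

That the whole idea fits in twenty lines, while safe production use does not, is one way to frame the usability gap question 5b asks about.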
Open Models – On dynamics related to open models and large players:[26]
6a. In what ways do AI models with widely available model weights impact competition? Do aspects like license terms, business models, and the availability of (and information about) training data have an impact on their competitive effects? Are there notable examples of these impacts?
6b. Is it possible to train open-weight models in such a way that techniques for shaping model behavior cannot be removed by someone performing fine-tuning on the model? If so, how? To the extent such techniques exist or are developed, what implications do they have for the usefulness (e.g., speed, resources required, rate of hallucination, performance on different tasks, or any other attribute of the model) of fine-tuning open-weight models?
Platform Design – On default design choices:[27]
7a. In what ways are design choices by social media and gaming platforms exacerbating or mitigating problematic usage or other mental health harms, including to children and teens? In what ways can design choices at the device level exacerbate or mitigate problematic device usage or mental health harms, including to children and teens?
Digital Capacity – On building digital capacity in tech regulatory organizations:[28]
8a. What are short- and long-term ways to cultivate a pipeline of technical skills and expertise that can translate to tech policy and enforcement environments?
[3] https://www.claymath.org/millennium/p-vs-np/ “If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question. Typical of the NP problems is that of the Hamiltonian Path Problem: given N cities to visit, how can one do this without visiting a city twice? If you give me a solution, I can easily check that it is correct. But I cannot so easily find a solution.”
[4] Ibid.
[5] Ibid.
[6] To be clear, a solution of the P = NP problem would also be welcomed by our office – if the reader has one.
[7] https://www.ftc.gov/system/files/ftc_gov/pdf/2024.01.25-chair-khan-remarks-at-ot-tech-summit.pdf
[23] “Privacy enhancing technologies” (PETs), such as end-to-end encryption, are a broad set of tools and methods aimed at providing ways to build products and functionality while protecting the privacy of users' data. https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/keeping-your-privacy-enhancing-technology-pet-promises