
A common trope crossing the science fiction and mystery genres is a human detective paired with a robot. Think I, Robot, based on the stories of Isaac Asimov, or Mac and C.H.E.E.S.E., a show-within-a-show familiar to Friends fans. For our purposes, consider a short-lived TV series called Holmes & Yoyo, in which a detective and his android partner try to solve crimes despite Yoyo’s constant malfunctions. Let’s take from this example the principle – it’s elementary – that you can’t assume perfection from automated detection tools. Keep that principle in mind when making or evaluating claims that a tool can reliably detect whether content is AI-generated.

AI and Your Business

In previous posts, we’ve identified concerns about the deceptive use of generative AI tools that enable deepfakes, voice cloning, and manipulation by chatbot. Researchers and companies have been working for years on technological means to identify images, video, audio, or text as genuine, altered, or generated. This work includes developing tools that add something to content before it is disseminated, such as authentication tools for genuine content and ways to “watermark” generated content.
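To make the watermarking idea concrete, here is a minimal, purely illustrative Python sketch of one statistical approach explored in academic research: bias generation toward a hash-derived “green list” of words, then check what fraction of a text lands on that list. Everything here is a toy assumption of ours – the rule, the names, and the use of whole words – not any vendor’s actual method; real schemes operate on a model’s token probabilities, not words.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Toy watermark rule: a word pair is "green" when a hash of the pair is even.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0

def green_fraction(text: str) -> float:
    # Fraction of adjacent word pairs landing on the green list. A generator
    # that deliberately favors green words pushes this well above 0.5;
    # unmarked text should hover near 0.5. A high fraction is statistical
    # evidence of the watermark, never proof; light edits erode it.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

print(f"green fraction: {green_fraction('no detector can be assumed perfect'):.2f}")
```

Even in real schemes, the check is statistical: paraphrasing or light editing shifts the fraction back toward chance, which is one reason no watermark detector should be treated as definitive.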

Another method of separating the real from the fake is to use tools that apply to content after dissemination. In a 2022 report to Congress, we discussed some highly worthwhile research efforts to develop such detection tools for deepfakes, while also exploring their enduring limitations. These efforts are ongoing with respect to voice cloning and generated text as well, though, as we noted recently, detecting the latter is a particular challenge.

With the proliferation of widely available generative AI tools has come a commensurate rise in detection tools marketed as capable of identifying generated content. Some of these tools may work better than others. Some are free and some charge for the service. And some of the attendant marketing claims are stronger than others – in some cases perhaps too strong for the science behind them. These tools may have other flaws, too, such as failing to detect images or video that a generative AI tool has only lightly edited, or showing bias against non-English speakers when attempting to detect generated text.

Here's what to deduce:

  • If you’re selling a tool that purports to detect generative AI content, make sure that your claims accurately reflect the tool’s abilities and limitations. To go back to our trope, for Knight Rider fans, that means your claims should be more in line with KITT and less with its bad twin, KARR.
  • If you’re interested in tools to help detect if you’re getting the good turtle soup or merely the mock, take claims about those tools with a few megabytes of salt. Overconfidence that you’ve caught all the fakes and missed none can hurt both you and those who may be unfairly accused, including job applicants and students; the short sketch after this list shows why.
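To see why that overconfidence is costly, consider the base-rate arithmetic. The detector numbers below are hypothetical, but the Bayes’ rule math is not: a tool that sounds accurate can still be wrong about a large share of what it flags when most of the content it scans is genuine.

```python
# Toy false-accusation math with hypothetical detector numbers.
def flagged_precision(sensitivity: float, specificity: float, base_rate: float) -> float:
    """P(content really is AI-generated | the detector flags it), via Bayes' rule."""
    true_pos = sensitivity * base_rate               # AI content correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate)  # genuine content wrongly flagged
    return true_pos / (true_pos + false_pos)

# Suppose the tool catches 90% of AI text, wrongly flags 5% of human text,
# and 10% of the submissions it reviews are actually AI-generated.
print(f"{flagged_precision(0.90, 0.95, 0.10):.0%} of flagged items are really AI")
# Prints "67% ...": roughly one accusation in three would be false.
```

The rarer AI-generated content is in what you scan, the worse that ratio gets, which is exactly the situation facing anyone screening job applications or student essays.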

Wouldn’t it be nice to live in a techno-solutionist land in which a simple gadget could easily and effectively handle all the difficult AI issues of our day? No such luck. Our agency can address some of these issues using real-world laws already on the books, though, and those laws apply to marketing claims made for detection tools.

Looking for more posts in the AI and Your Business blog series?

