This week the Federal Trade Commission (FTC) approved an omnibus resolution expanding its investigative authority over products and services involving artificial intelligence (AI). While the possibilities of AI are vast, its increasing role across diverse industries has also led regulators to take a closer look.
According to a release issued Monday, the action signals that AI practices will face closer scrutiny going forward, giving the FTC streamlined tools to collect information through civil investigative demands (CIDs).
Notably, the FTC wielded CIDs previously in the technology sector to crack down on illegal robocalls. In 2022, it obtained federal court orders against VoIP providers XCast Labs and Deltracon for failure to fully comply with outstanding CIDs.
As Samuel Levine, director of the FTC’s Bureau of Consumer Protection, emphasized, CIDs carry the force of law: non-compliance can result in contempt charges. Those actions drove the point home, making an example of companies that fail to promptly provide all required documentation and data.
Given the FTC’s expanded AI resolution mirrors its authority over other industries, tech firms would be wise to take note. Proactively organizing internal records relating to AI claims, product development practices, third-party oversight and more prepares businesses to respond swiftly should scrutiny arise.
Substantiating Claims with Evidence
In its announcement, the FTC emphasized the resolution aims to allow “expedited gathering of facts” about AI uses implicating consumer protection and fair competition. One area of interest will undoubtedly be marketing claims. If an organization promotes an AI solution’s capabilities, it must have substantive proof to back performance characteristics presented to customers and partners.
Records of model training data, validation studies, case studies demonstrating real-world impact and ongoing monitoring reports are examples of information that may help corroborate AI solution disclosures. Peer reviews, oversight of third-party data sources and documentation of efforts to identify and mitigate risks can also lend credibility.
Without such materials in hand, vague or unsubstantiated statements about an AI system’s functions risk regulatory suspicion or enforcement if shown to be deceptive.
Addressing Fairness, Bias and Compliance by Design
Algorithmic fairness and mitigation of unintended biases will remain top concerns as more AI is integrated into consequential decision-making. The FTC will want assurances businesses proactively address such issues throughout the product development life cycle.
Documentation of design processes, impact assessments, risk logging mechanisms, oversight programs and response protocols can act as evidence that diligence was exercised.
For organizations already using AI operationally, compliance programs must demonstrate continued monitoring and a commitment to addressing emerging problems. While technical issues may arise despite best efforts, responsive correction and transparency tend to generate goodwill with regulators. Proactive rather than reactive stances bode well when scrutiny increases.
Handling Third-Party Relationships
Collaboration is integral to progress, yet the new FTC resolution indicates oversight responsibilities now span beyond internal teams. If an organization relies on third parties for data sources, model training, enhancements or other facets of AI solutions, access to information about those systems and activities will be expected.
Strong contractual protections requiring transparency, verification of claims and appropriate controls are crucial. Regular audits and documentation of third-party due diligence protect both the organization and end users, and reflect an awareness that regulatory accountability extends across organizational boundaries as technology partnerships become the norm. Outsourcing technical elements does not relinquish compliance obligations.
FTC Takes an Active Role in AI Regulation
In addition to establishing its investigatory authority, the FTC announced plans earlier in November to launch a Voice Cloning Challenge. This initiative aims to spur the development of technical and policy solutions that protect users from financial fraud or privacy violations involving synthetic voices.
The agency recognizes new techniques must be matched by safeguards, as impersonating another’s voice could enable scams. The challenge seeks multi-stakeholder cooperation to curb such abusive applications of emerging capabilities.
Separately, during the U.S. Copyright Office’s study of generative AI implications, the FTC submitted comments in October emphasizing its jurisdiction over related consumer protection and competition issues.
While copyright questions fall elsewhere, the FTC argued certain uses of AI-generated content could facilitate deception or unfair practices in violation of its statutes. This perspective has drawn criticism from some who view it as an overreach into established legal doctrines such as fair use.
Nonetheless, as new technologies continue to challenge old regulatory paradigms, continued advocacy by the FTC for its mandate should be expected. Overall, the agency aims to balance innovation with responsible oversight through a mix of advisory initiatives and traditional enforcement tools.