The Implications Of AI’s Capacity For Automated Deception
Many of the policy challenges that generative AI creates, from cybersecurity policy to deepfake election disinformation, share a common root: this technology enables deception to be automated for the first time in human history, a change that will have broad implications for society and law. Students will conduct desk research, culminating in a research memorandum that identifies gaps in select areas of federal statute and jurisprudence created by the automation of deception.
Students will also work with the CHT team to draft principles and identify best practices that can guide lawmakers and courts in addressing those gaps.
Project Teams
- Tommy Sowers
- Wanyi Chen
- Merritt Cahoon
- Mili Shah
- Taylor Reasin