The rapid advance of technology is bringing robots and artificial intelligence (AI) closer to us every day, in factories, hospitals, highways, schools, our offices, and our homes. But the technology is advancing so quickly that it is outpacing our ability to fully grasp its impact, and outpacing policymakers' ability to strike the difficult balance that reduces risk to the public without constraining the development of these potentially beneficial technologies.
2016 alone gave us multiple significant policy developments. In June, the Federal Aviation Administration (FAA) released amendments to its regulations addressing the operation of unmanned aircraft systems and pilot certification, in order to preserve safety in the National Airspace System. In September, the National Highway Traffic Safety Administration (NHTSA) released guidance for industry and regulators covering safe design, state policy recommendations, and regulatory tools for highly automated vehicles.
In October, the National Science and Technology Council Committee on Technology released a report with recommendations to U.S. Federal agencies and other actors to inform future AI policy. We can expect the U.S. government to implement even more robotics- and AI-specific regulations within the next few years, both to preserve jobs and to address concerns about security, safety, and privacy.
This is an exciting and dynamic area, with rapidly evolving developments in both policy and science. SciPol provides a single resource covering both, including policy updates and explanations of the relevant science on topics like drones, surgical robots, driverless cars, and artificial intelligence.
Michael Clamann oversees the development and publication of SciPol content related to robotics and AI. He is also a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within Duke Robotics and an Associate Director at the Collaborative Sciences Center for Road Safety.
For his scientific research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety. He presented technical remarks to the Department of Transportation on the current Federal Automated Vehicles Policy, and his research has appeared in major news outlets including NPR and the Atlantic.
He received a PhD and an MIE in Industrial and Systems Engineering and an MS in Experimental Psychology from North Carolina State University. He has worked in industry as a Human Factors Engineer since 2002, supporting government and private clients in domains including aerospace, defense, and telecommunications. He is also a Certified Human Factors Professional (CHFP).
Research in Duke University’s Humans and Autonomy Lab (HAL) focuses on the multifaceted interactions of human and computer decision-making in complex sociotechnical systems with embedded autonomy.
Given the explosion of autonomous technology in aviation, medicine, and even everyday environments like driving, the need for humans as supervisors of, and collaborators in, complex autonomous control systems has replaced the need for humans in direct manual control.
Instead of relying on humans for well-rehearsed skill execution and rule following (tasks that require significant practice and memorization and are subject to problems such as fatigue and boredom), autonomous systems need humans for their more abstract capacities of knowledge synthesis, judgment, and reasoning. Autonomous systems today, and even more so in the future, will require coordination and teamwork between humans and machines for mutual support, improving both system safety and performance.
Visit scipol.duke.edu for news, updates, and opportunities to engage in robotics and AI policy developments.