The absence of an incident reporting framework can allow novel problems to go unnoticed and become systemic if they are not addressed properly. For example, an AI system could harm the public by improperly revoking access to social security payments. Although CLTR's findings focus on the situation in the UK, they could apply to many other countries as well.
According to CLTR, the UK government's Department for Science, Innovation & Technology (DSIT) lacks a centralized, up-to-date system for monitoring incidents involving AI systems. While some regulators do collect incident reports, they may not be equipped to capture the unique harms posed by cutting-edge AI technologies. CLTR stressed the importance of recognizing the risks associated with highly capable generative AI models and called for a more comprehensive incident reporting framework to cover them.