Demand for regulating the use of artificial intelligence in high-risk applications, such as live facial recognition and social credit scoring, has gained increasing traction.
The European Commission is proposing transparency obligations for systems that interact with humans, detect emotions, assign people to (social) categories based on biometric data, or generate or manipulate content. The Commission has also signalled that risk managers, communicators and reputation managers, professionals accustomed to dealing with transparency and openness, should have a seat at the AI decision-making table.
The EU has set out the scope of each risk level, ranging from unacceptable risk through high risk down to minimal risk. Unacceptable risk covers AI systems that violate users' fundamental rights, including social credit scoring and real-time remote biometric identification systems.
The regulation also sets out several legal notification requirements. Although the current transparency requirements may change, brands should be prepared for further regulation to come.