The future impact of ‘Emotional Artificial Intelligence’ on right to privacy protection
Emotional Artificial Intelligence (EAI) is a rapidly growing multi-billion-dollar industry that combines AI and big data technology and aims to detect emotions through biometric signals such as facial expressions, voice, eye movements, skin conductance, and body temperature. Although it remains controversial whether emotions can currently be understood, or meaningfully interpreted, by such systems, EAI has already been applied across different sectors and for different purposes. This creates distinctive risks to the right to privacy that other technologies might not, as we cannot ‘turn off’ our faces to hide or protect our emotional states, an essential part of our inner lives. Furthermore, our inner life is under threat not only because emotions can be monitored, but also because an identity is assigned to us from the outside: the claim that “they [EAI] know subjects better than subjects know themselves, by directly accessing and revealing their unconscious” poses a different level of surveillance.
In my thesis, I argue for the necessity of embracing human emotions as potentially private subject matter within the regulatory framework. To reach this conclusion, I will evaluate privacy theories to answer the questions of why emotional privacy is important and which values, such as dignity, autonomy, and selfhood, can plausibly underpin emotion, and I will analyse how effective current regulation is in protecting the rights to emotional privacy and emotional data.
Pathway 13 Politics, Public Policy & Governance