Collective human action against deepfakes
Source: Freedom from Fear, Volume 2018, Issue 15, Jan 2020, p. 88 - 91
08 Jan 2020
Abstract
For Immanuel Kant, our senses are the gateway through which we perceive information from our environment and generate knowledge. Yet in the age of advanced technology, our senses are easily subject to manipulation. In this context, a fundamental question arises: can we, humans with manipulated senses, continue to rely on our own decision making? There has been unprecedented progress in the quality of techniques for human image synthesis based on Artificial Intelligence (AI), which can manipulate our sense of sight. Deepfakes are the most famous example. In just a few years, many alarming instances of fake content have involved politicians, governments, technology leaders, and media celebrities. What does this mean for our future, the future of our societies, and the future of our countries? What will this manipulation entail at the moment we exercise our rights as citizens and voters? Perhaps instead of jumping into the complexity of these questions, it is worth focusing on how our collective efforts can help us prevent technology from manipulating our senses. This consideration served as a guiding principle for the solution developed by the Open|DSE team in response to the UNICRI challenge at the Hackathon for Peace, Justice and Security (The Hague, June 2019). Before describing the solution, let us take a closer look at the AI technology behind the creation of this fake content.