This post solely expresses the author's own opinion.

Are you ready to let AI tell you if you are a good or bad person?

14 January 2020, 12:01

Facial recognition turns faces into numbers and data. When faces become numbers and data, facial recognition is capable of much more than just recognizing faces.

Facial recognition makes it possible to identify an offender before the criminal is convicted. In this way, data can foil a crime by apprehending and convicting the culprit even before the crime is committed. I call this use of data 'cyberphrenology'.

Søren Matz holds an MSc from the Department of Criminology, University of Leicester, and is an advisor in security management and risk management.

The concept of 'cyberphrenology' embraces an ambition to master the future. The precognitive element in the way we use data is motivated by an unrealistic political agenda.


Politicians demand that law enforcement must stop crime. This political prioritization requires a transition in which law enforcement resources become focused towards data-driven surveillance.

The Danish government's recently amended security legislation suggests that the transition has occurred. Privacy is traded in for data-driven security.

I. Magic mirror on the wall ...

Digital facial recognition is a somewhat fluffy term for technologies that recognize faces. It sounds simple and innocent. Technology transforms faces into numbers and data. Simsalabim.

But when faces are turned into numbers and data, facial recognition is capable of much more than just recognizing faces.


Since the dawn of time, we have acknowledged that beauty lies in the eye of the beholder. Those days are over. When technology transforms faces into data and numbers, algorithms can determine who is 'the fairest of them all' - and who is not. Just ask the numbers in the magic mirror. Then we don't waste time defining beauty. It's called efficiency.

But data about your face can do much more than determine if you are pretty or not.

II. Phrenology

Since antiquity, we have held the conviction that facial features reflect unique personal traits - a pseudoscientific belief that there is a connection between body and soul. That belief is called phrenology.

Phrenology was created by the Austrian-German physician Franz Joseph Gall (1758-1828). His thesis rests on the claim that different organs in the human brain represent different properties, and that these properties can be measured on the surface of the skull.

Gall believed he had invented a method by which he could measure character, personality, intelligence and psyche by examining the external shape of the skull. When facial recognition technologies translate faces into numbers, it becomes possible to measure and digitally reconstruct a relationship between personality and looks.

Facial recognition will recognize you and decide whether you are pretty or not. But from a phrenological perspective, data can also determine whether you have a good or bad character.

When numbers from facial recognition are used in correlation with other numbers, data can be used to establish whether you are worthy or a risk; whether you are privileged or vulnerable. Ultimately, data can be used to decide whether you are in or out. This use of data is what I call cyberphrenology.

III. Cyberphrenology

Cyberphrenology is phrenology in a digital disguise. Despite the prejudicial and racist conceptual heritage, cyberphrenology represents a new data-driven way of viewing the world. We change everything into numbers. Even faces. Numbers don't lie. They can't. Lying is a human trait.

Yet numbers don't always tell the whole truth. Still, many decisions are based on faith in numbers. The belief in data transforms any qualified guess or assumption into eternal, final truths. The cyberphrenological paradigm disrupts our cognitive biases, changing attitudes and opinions about each other.

Cyberphrenology is like looking at others through glasses where you see the person, but also see an interpretation of the data associated with that person. Facial recognition not only recognizes your face; it places your face on the dataset that is precisely attributable to you. Facial recognition will recognize you in a way you can't recognize yourself.

IV. The born criminal

Based on the principles of phrenology, 'the father of criminology', Cesare Lombroso (1835-1909), claimed that criminal behaviour was hereditary and innate. 'The born criminal' could be identified through studies and measurements of physical features, and Lombroso performed a number of anthropometric studies focusing on the measurement of skulls.

Lombroso's anthropometric profiling of criminals was intended to validate the claim that measuring skulls makes it possible to identify persons with criminal inclinations. From a criminological point of view, Lombroso's idea of 'the born criminal' has been rejected as pseudoscientific. Yet facial recognition technology breathes new life into Gall's and Lombroso's discriminatory and biased claims.

In 2016, two researchers from Shanghai Jiao Tong University, Xiaolin Wu and Xi Zhang, developed an AI based on photographic facial recognition. Their claim is that with their algorithm, they can teach a machine to read images of human faces and determine whether the depicted person has a criminal character.

Their research claims that their AI - with 89.5% accuracy - can determine whether a person is a convicted criminal or has criminal traits. Facial recognition, in other words, can supposedly determine whether you have a law-abiding or a criminal character.
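A back-of-the-envelope base-rate calculation helps put that 89.5% figure in perspective. The sketch below is hypothetical: it assumes 89.5% sensitivity and specificity and a 1% offender prevalence (numbers chosen for illustration, not taken from the study) and applies Bayes' theorem to ask what fraction of flagged people would actually be offenders.

```python
# Hypothetical illustration of the base-rate problem; the prevalence
# figure is an assumption, not a number from Wu and Zhang's study.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Bayes' theorem: probability that a flagged person is truly an offender."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# 89.5% sensitivity and specificity, 1% of the population are offenders.
ppv = positive_predictive_value(0.895, 0.895, 0.01)
print(f"Share of flagged people who are actually offenders: {ppv:.1%}")
# Roughly 8%: more than nine out of ten people flagged would be innocent.
```

The exact result depends entirely on the assumed prevalence; the point is only that raw accuracy on a balanced test set says very little about error rates when a classifier is turned loose on the general population.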

The argument is that facial recognition technology can identify a criminal even before the offender is convicted. In theory, facial recognition can foil a crime by apprehending and convicting the offender before the crime is committed.

V. Precognition

Cyberphrenology includes an ambition to master the future. Facial recognition is a data-driven crystal ball. The precognitive element of the way we use data is the main motivation and driving force of data-driven surveillance.

Data from facial recognition can document your past and examine your present. But the goal is to predict your future - with tangible mathematical certainty. Personal data from facial recognition can predict your future and give predictions a hint of evidence-based scientific objectivity.

The technology produces biases, prejudices or predictions about you that you may find difficult to get rid of. Data has no sell-by date. Data can confront you with your past and present at any point in your future.

The precognitive element of the way we use data is motivated by an unrealistic political agenda. Politicians demand that law enforcement must stop crime before it occurs. On that account, we must expect that the police will be required to scale down preventative and investigative efforts in order to shift resources into data-driven intelligence surveillance.

The Danish government's recently amended security legislation suggests that the transition has already taken place.

VI. Security package

Offending is not rooted in genetics. People commit crime as a result of situational or rational choices. Still, the precognitive element of cyberphrenology, which is the focal point of the Government's new "Security Package", suggests an uncritical return to the idea of the "born criminal": a belief that data from surveillance and facial recognition can predict human behaviour in the future.

Thus, the political intent of the "Security Package" dismisses a century of criminological theories on why people commit crime. More surveillance does not resolve the basic circumstances that contemporary criminological theories generally consider breeding grounds for crime.

However, from a policing perspective, facial recognition is a formidable tool for investigating crimes. The technology offers a promise of more effective policing, prosecution and control. Police and courts are increasingly using data-driven evidence derived from mass surveillance in which public and private data sources are interrelated.

Use of facial recognition data has an impact on established methods of evidence, changes legal norms, and makes it possible to punish crime that has not yet been committed.

The prevailing political assumption is that increased surveillance, control and harsher penalties provide increased reassurance against becoming victim of crime. Security has become data-driven.

VII. Security or privacy

In general, data-driven democracy involves an opaque exchange of rights where a number of ethical, legal and human rights issues become stretched beyond recognition. Data-driven security challenges citizens' freedoms, including the right to privacy.

Rights to privacy and personal freedom are exchanged in a trade-off where security concerns outweigh the values of democratic rights. Digital technology disrupts legislation and human rights, but we still claim to live in a democracy. A data-driven democracy. A democracy without democratic control.

Facial recognition is pushing the boundaries of privacy violations. A quick glance at any crystal ball suggests that future democratic rights will be under siege from exponential technological developments.

The challenge is that we are only just beginning to realize the transformation. Resolving a problem requires acknowledging that you have one.

Are you willing to trade your privacy for data-driven security?

12 comments
6 February 2020, 19:16

It's a shame that you are confused, Palle, but I fully understand. Publishing the piece in English was an experiment by a very positively minded editorial team at Version2.
Normally I write my pieces in English first and then translate them into Danish, partly because it reaches a larger audience, partly because I was educated in England. It comes naturally to me. When I write in Danish, I often find that readers in other countries run the pieces through Google Translate and publish them in English, German or French. To put it diplomatically, Google Translate does not do the pieces justice linguistically. I hope this explanation makes you less confused.

I am very grateful for the positive reception the experiment has received from other readers, and I would like to thank the editorial team at Version2 for having the courage to try something new. We may well try it again.

4 February 2020, 12:18

A Dane writing for a Danish outlet, to Danes, but in English - am I the only one who finds that silly?

15 January 2020, 13:41

Besides the classical non-sequitur - yes, you might recognize convicted criminals with 89.5% certainty, but that does not imply that a person with the same characteristics is or ever will be a criminal - the results are completely worthless.

The approach didn't work for people like Alfred Rosenberg and the Nazis, and it will not work for "predictive policing" either. We really, really tried that shit before.

15 January 2020, 13:13

I am willing to make the trade if I am thoroughly convinced that the AI is punished in proportion if it makes erroneous arrests or similarly breaks the law. (Fyodor Dostoevsky wrote a book in 1866, or if you don't want to read that, another one of interest is from 1869...)

Until that situation is dealt with to my moral satisfaction, forgive me, because I can't accept the trade-off compared to the current system! I might have to, but then we are talking about a Tocqueville "tyranny of the majority" situation from my perspective...

As soon as you give the AI rights, you damn well give it corresponding duties and hence assure punishment in proportion to its crime! Until that is assured, any AI used too intrinsically in societal spheres, without a user taking on full judicial responsibility for it, will wreak havoc on the most basic democratic pillars of society.

I am very sceptical about the ability to satisfy my offer, so I am not yet willing to grovel to the AI-based future as much as the WEF has done in the past. However, if a virtuous solution to the proclaimed issues exists, I would be remiss to deny it at least a chance for open discussion!

15 January 2020, 11:21

No people in their right minds would prefer the kind of AI surveillance dictatorship that we see used, for example, in China. But you never know. It will be interesting to see what the Danish government will come up with and allow in the upcoming law and the so-called "Tryghedspakken". I'm sure that a lot of the tech giants and other suppliers of digitization and surveillance tools are more than willing to help out for profit, without even thinking of the consequences. Money rules.

15 January 2020, 11:02

Do I want to live in a China-inspired technocratic dictatorship? No way.

15 January 2020, 10:33

The precognitive element of the way we use data is motivated by an unrealistic political agenda. Politicians demand that law enforcement must stop crime before it occurs.

People commit crime as a result of situational or rational choices.

Are you willing to trade your privacy for data-driven security? The short answer is yes and no.

The long answer is probably a no. A no because, in the world we live in today, it is pretty obvious that the really big crooks walk free, while it is the tiny fish who end up in prison. Can phrenology or cyberphrenology change that?? I can't imagine so. Only those who slip up get into trouble, such as Epstein or Morten M. from DF. I have yet to see or hear of, say, bank directors who must rattle the bars for at least a decade, or oil barons convicted of murder because they failed to order the shutdown of, say, an oil platform or the like, which then ended in a catastrophe.

14 January 2020, 14:33

If you state you can decide ANYTHING like a person's personality or social characteristics by looking at a person's face

But maybe one can say something about the probability of certain traits based on some obscure measurements of the face. The distance between the eyes might say something about the testosterone level and the probability of a person being aggressive, even though this particular individual, ecce homo, might be mild and saintly. Statistics and probabilities are also a great tool for prejudice. It is not necessary to have a picture of the face. Name, sex, address, age ... wonderful tools for being a bigot.

14 January 2020, 14:05

If you state you can decide ANYTHING like a person's personality or social characteristics by looking at a person's face (AI or "just a doctor, police officer, judge or whoever"), then you have made a judgement based on racist assumptions. Whatever reason AI should be better or worse than people does not matter!

It will be unfair judging - OK, I know, as humans we do it all the time, but it is still not fair!

14 January 2020, 12:18

"Are you willing to trade your privacy for data-driven security?"

No! To me, losing privacy doesn't particularly sound like a definition of security. Actually quite the contrary (or should that be "the opposite"?).

(sorry, rusty English)

Thanks for another good piece...