Posting photos and chatting online, so why do some people think I'm about to commit a crime?

Films and TV shows like "Minority Report" and "Person of Interest" could probably only have been made in the past. If they were released in recent years, audiences would be genuinely unsettled.

Crime prevention, predictive models, systems that speculate about what has not yet happened… all of this means that a great deal of information is collected, and that an opaque system decides whether you pose a threat. Few people enjoy the feeling of entrusting their fate to the unknown.

But disliking it may not matter, because before you know it, more and more companies are already collecting your public information and using your everyday social media posts to estimate how likely you are to commit a crime.

▲ Picture from: "Minority Report"

Social media has become the biggest informer. Is posting a form of self-incrimination?

Voyager, Kaseware, Aura, PredPol, Palantir: these are some of the companies trying to identify potential threats through social networks. Many of them also work with local police departments, putting them at the forefront of crime prediction.

Extracting the information they need from everyone's social media is the core of these companies' business. They use the content users post on social networks to estimate whether someone is likely to commit a crime. Their algorithms differ, but all of them essentially use artificial intelligence to read between the lines of a user's posts, judging from shared content whether the person has committed a crime, may commit one, or subscribes to some dangerous ideology.

▲ Kaseware official website page

This is not actually a new idea. As early as 2012, people were already treating social networks as the "pulse of the city." It is hard to find another space or product that holds so many users at once and makes them willing to share everything about themselves.

The nature of social networks makes it easy for people to find whatever they are looking for. Students look for interview subjects on social networks, polling agencies predict election results on social networks, and AI detectives want to find criminals on social networks.

Because nearly everyone uses social networks, they often reflect a person's real circumstances. Michigan State Representative Tyrone Carter believes that police searches of public social networks break no laws and infringe no user rights, which in his view makes this kind of prediction workable.

In his words, the moment you hit send on a public page, the post is no longer yours alone. People get into trouble over what they post, because social media is the biggest "self-informer" he has ever seen.

▲Picture from: Lifehacker

But secrets only come to light when someone draws them out, and Voyager Labs, which works with the Los Angeles Police Department, plays the role of guide in this process. It was the non-profit Brennan Center that, using public records provided by the Los Angeles Police Department, discovered that Voyager's work was also suspected of racial discrimination and privacy violations.

The way companies like Voyager work is not complicated. They collect all the public information on a person's social media accounts, including posts, contacts, and even frequently used emoji. In certain cases, they cross-reference public and non-public information for further analysis and indexing.

▲ Voyager makes judgments based on the topics a person engages with on social media

Through Voyager's service, the police can clearly see a person's social relationships: who they are connected to and how they interact on social platforms. Voyager can even detect whether two users are indirectly connected (sharing at least four mutual friends).
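The reporting describes this only as a mutual-friend threshold. As a rough sketch of what such an indirect-connection check could look like, with the data shape, function name, and example graph all invented for illustration rather than taken from Voyager:

```python
from typing import Dict, Set

# Hypothetical friend lists keyed by user ID, e.g. scraped from public profiles.
FriendGraph = Dict[str, Set[str]]

def indirectly_connected(graph: FriendGraph, a: str, b: str,
                         min_mutual: int = 4) -> bool:
    """Treat two users as indirectly connected if they share at least
    `min_mutual` friends (four, per the reporting on Voyager)."""
    mutual = graph.get(a, set()) & graph.get(b, set())
    return len(mutual) >= min_mutual

# Toy example: these two accounts share four friends, so the check fires.
graph: FriendGraph = {
    "user_a": {"f1", "f2", "f3", "f4", "f5"},
    "user_b": {"f2", "f3", "f4", "f5", "f9"},
}
print(indirectly_connected(graph, "user_a", "user_b"))  # True
```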

This sounds like it merely looks at what users do on social networks and serves as supplementary information for investigations. In fact, Voyager does not just collect and display information; it also makes judgments.

▲ Voyager makes judgments and predictions from chains of social network relationships

In a white paper submitted to the Los Angeles Police Department, Voyager described an assault case that demonstrated the platform's approach: without human intervention, the AI automatically reviews the content people post on social networks and classifies each user, marking them blue, orange, or red.
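The white paper describes the output only as a three-color flag assigned without human review. The sketch below is purely illustrative of that kind of automated tiering; the keyword lists, thresholds, and function are invented here and are not Voyager's actual model:

```python
from enum import Enum

class RiskTier(Enum):
    BLUE = "blue"      # no flags
    ORANGE = "orange"  # elevated attention
    RED = "red"        # highest-priority flag

# Hand-written term lists stand in for whatever signals the real system uses;
# a production system would rely on trained models rather than keywords.
RED_TERMS = {"weapon", "attack"}
ORANGE_TERMS = {"ideology_x", "group_y"}

def classify_user(posts: list[str]) -> RiskTier:
    """Assign a color tier from post text alone, with no human in the loop --
    the step critics argue simply encodes the biases of its inputs."""
    text = " ".join(posts).lower()
    if any(term in text for term in RED_TERMS):
        return RiskTier.RED
    if any(term in text for term in ORANGE_TERMS):
        return RiskTier.ORANGE
    return RiskTier.BLUE
```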

In one case, Adam Alsahli, the suspect in a shooting, was judged by the system to show "a strong sense of pride and identification with Arab traditions" because of the Islamic-themed photos he had posted on Facebook and Instagram. As a result, Voyager's AI tool had marked Adam Alsahli orange before he carried out the attack.

This can be read as a case of successfully predicting a potential offender, but also as a case built on "prejudice."

▲ The social media information of Adam Alsahli, the suspect in the shooting, was marked orange before he committed the crime

Can data predict crime? The data can't be fully trusted

Are these conclusions really credible? On what basis are the judgments made? And how is anyone supposed to prove their innocence under big data?

There is indeed plenty of data showing a correlation between the tenor of social network content and actual crime, but the correlation is far from 100%.

Research from New York University's Tandon School of Engineering and School of Global Public Health found that cities with more racial hate speech on Twitter have higher crime rates. A Finnish study based on two decades of data found that for every 1°C rise in temperature, criminal activity increases by 1.7%. American studies have shown that vehicle thefts spike on weekend nights, and that when the local football team suffers an unexpected loss, domestic violence incidents increase by 10%.

▲ AI can't make a ruling with 100% accuracy

But none of this proves anything, because probability is not the same as fact.

Even with data behind them, these correlations cannot prove that in the city with the most racial hate speech, a car will necessarily be stolen on a hot summer weekend, or that a domestic violence case is certain to occur whenever the home team loses.

Crime prediction systems like these work backwards from existing crime records and research findings, and that creates another problem: they are riddled with "stereotypes."

Turing Award winner Yann LeCun has said that when data is biased, machine learning systems become biased. To take a single example: if the data a machine learning system receives shows that Black men make up a disproportionate share of the prison population, the system may conclude that Black men are more likely to commit crimes.
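A minimal sketch of that mechanism, using an invented toy dataset and a deliberately naive "model" that just memorizes group-level base rates (this is not LeCun's example; it only shows how skew in the input becomes skew in the output):

```python
# Toy "historical" records of (group, was_arrested). The skew reflects who was
# policed and recorded, not any underlying difference between the groups.
records = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)

def predicted_risk(group: str) -> float:
    """A naive model that simply memorizes the arrest rate per group."""
    outcomes = [arrested for g, arrested in records if g == group]
    return sum(outcomes) / len(outcomes)

# The model faithfully reproduces the bias baked into its training data.
print(predicted_risk("group_a"))  # 0.6 -> flagged as "higher risk"
print(predicted_risk("group_b"))  # 0.2
```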

▲ Yann LeCun has said that the bias in machine learning systems comes from people

To a machine learning system, "Black men are more likely to commit crimes" may look like a statistical finding, but in the real world it amounts to racial prejudice and discriminatory treatment.

Secretly scoring users, sorting out which ones are more threatening, and then tracking and pre-emptively monitoring the higher-threat ones more closely: that is the operating logic of the entire system.

Startups like these point to their algorithms and artificial intelligence to explain how they process and analyze information and reach decisions. Although there is currently no evidence that the predictions work, and the public has plenty of doubts, police departments still want to work with such platforms.

▲ For the police, the platform's forecasting service is very valuable

For the police, tools like this are very attractive. What platforms such as Voyager uncover on social networks can genuinely help with profiling suspects without missing subtle online clues. As long as it only assists investigations, it is a very effective tool. But once it matures and begins to play a role in predicting crime, it can also become a weapon that does harm.

After the funding enthusiasm of previous years, many AI products have entered real-world use. But in some fields, they still play only a supporting role.

Medicine is a field that remains cautious about AI. Even in medical imaging, where AI has advanced fastest, today's technology still cannot guarantee 100% accuracy and still requires a human doctor's involvement, because everyone knows that medicine demands near-perfect accuracy: any deviation or error can have serious consequences.

▲ At this stage, we may need human police more

Policing likewise strives to be 100% correct, and conjecture or inference unsupported by evidence can have equally serious consequences. Someone who posts discriminatory and violent remarks on social media may be marked as a potential offender with, say, a 90% probability of committing a violent crime, but until he actually commits one, he is an ordinary person.

However large the data set, we can never ignore the fact that each person is an individual in their own right.

Not too interesting, not too optimistic.
