"We are unable to provide specific feedback on the results of your interview."
In the United States, more and more employers are using artificial intelligence (AI) to speed up recruitment. But job seekers rarely learn why an AI hiring tool rejected their resume, or how it analyzed their video interview. When they are turned down, they often receive nothing more than a cold email like this, with no explanation of why they were not selected.
This is disturbing. How much decision-making power should AI hold over our careers?
According to the Associated Press, in early November the New York City Council passed a bill by a vote of 38 to 4: AI hiring systems that fail an annual audit will be banned. The audit checks a system for racial or gender discrimination; the bill also lets job applicants opt for alternatives such as human review, and requires AI developers to disclose details that were previously opaque.
Interestingly, while AI developers are responsible for the annual audit, it is employers who face the penalties: using an AI hiring system that fails the audit carries a fine of up to $1,500 per violation.
Supporters believe the bill will open a window into complex algorithms. These algorithms often rank skills and personality based on how applicants speak or write, yet it is doubtful whether a machine can accurately and fairly judge personality traits and emotional signals. Under the bill, this process would become more transparent.
At the very least, it helps to know that "you were rejected because the algorithm is biased." As Sandra Wachter, a professor of technology law at the University of Oxford, once put it:
Anti-discrimination law is mainly complaint-driven. No one can complain about being denied a job opportunity if they don't know what happened to them.
The start-up Pymetrics strongly supports the bill. It advocates conducting AI interviews through games and similar methods, and believes its approach meets fairness requirements. Meanwhile, outdated AI interview tools are being swept into the trash: HireVue, an AI recruitment system provider, began phasing out its facial-scanning tool early this year. Academics have called the tool "pseudoscience," reminiscent of racist phrenology.
Most opposition to the bill amounts to "this is far from enough." Alexandra Givens, president of the Center for Democracy and Technology, pointed out that the proposal effectively only requires employers to meet the existing requirements of U.S. civil rights law, which prohibits hiring practices with disparate impact by race, ethnicity, or sex, while ignoring bias against disability or age.
Some artificial intelligence experts and digital rights activists also worry that the bill merely allows AI developers to self-certify compliance with basic requirements, sets only weak standards for federal regulators and legislators, and leaves the meaning of a "bias audit" vague.
It is worth noting that bias is not uncommon in hiring. The root problem lies in the samples fed to the algorithm, which usually sit inside a "black box" that ordinary job seekers can hardly inspect.
A few years ago, Amazon scrapped its resume-screening tool because it favored men for technical positions. Part of the reason was that it compared applicants against the company's own male-dominated technical workforce; likewise, if an algorithm draws its nourishment from industries where racial and gender disparities are already common, it merely entrenches that prejudice.
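The mechanism at work here, a model trained on biased historical outcomes simply reproducing them, can be sketched in a few lines. This is a deliberately naive illustration with invented numbers, not a description of any real hiring system:

```python
# Hypothetical historical hiring records as (gender, hired) pairs.
# The labels reflect a biased past workforce, not candidate ability.
history = [("M", True)] * 80 + [("M", False)] * 20 \
        + [("F", True)] * 30 + [("F", False)] * 70

def hire_rate(records, gender):
    """Fraction of past candidates of this gender who were hired."""
    hired = sum(1 for g, h in records if g == gender and h)
    total = sum(1 for g, h in records if g == gender)
    return hired / total

# A naive "model" that scores candidates by their group's historical
# hire rate faithfully reproduces the bias baked into its training data.
score = {g: hire_rate(history, g) for g in ("M", "F")}
print(score)  # {'M': 0.8, 'F': 0.3}
```

The point is that nothing in the code is malicious: the skew comes entirely from the data, which is why audits focus on training samples and outcomes rather than intent.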
This prejudice extends beyond hiring. In April of this year, a new study from the University of Southern California showed that Facebook delivers ads in ways that may violate anti-discrimination law: men were more likely to see ads recruiting pizza delivery drivers, while women were more likely to see shopping ads.
In essence, AI discrimination is cultivated by human society, and individual biased behavior can even be subconscious, so we may not notice it ourselves. Compare a company with little prejudice to one with serious prejudice, and there are generally two explanations: the former is better at deliberately rooting bias out, while the latter is better at absorbing an unreasonable status quo and perpetuating it.
Therefore, a more neutral view holds that the best part of the New York City bill is its disclosure requirement: letting people know that they are being evaluated by AI, how they are being evaluated, and where their data goes.