While reading Katharine Schwab’s article, “AI Has a Big Tech Problem,” in the current issue of Fast Company, I began to think about how so many HR departments have become dependent upon technologies, especially Applicant Tracking Systems, and that HR, like AI, may have a big tech problem itself.
If, as reported, there are racial, gender and other biases in English-language Google search results, the same must be true for HR systems based on Artificial Intelligence (AI). (For those of you who read her article, she also mentions the "outsize carbon emissions required to compute" the search results. That aside should not detract from the seriousness of the subject matter.)
This is important to job seekers since, as Ms. Schwab clearly states, “At stake is the equitable development of a technology that already underpins many of our most important automated systems. From credit scoring and criminal sentencing to healthcare and whether you get a job interview or not [emphasis added], AI algorithms are making life-altering decisions for people, with no oversight or transparency.” The harms the systems can cause include “discriminatory hiring systems,” among others.
This is a problem of technology meeting ethics. The people who can be most negatively impacted, marginalized communities, need a seat at the table. Now Lord knows that I like a good laugh, but naming the organizations trying to tackle this problem (and I am quoting from the article, not making this up) "Algorithmic Justice League (AJL), Data for Black Lives, Stop LAPD Spying Coalition, and the Our Data Bodies Project," detracts from the seriousness of the issue. It also shows that while Big Tech has too much presence in the room, there are not enough adults in it!
Case in point: These groups, or at least some of them, got Amazon to stop selling its facial recognition software because it does not work well with Blacks. I don't doubt it. But stopping the use of the software is stupid. If it works for Whites, then use it to catch White criminals. Blacks are always complaining that they are being racially profiled and disproportionately arrested and convicted for crimes. Well, if the software can help to locate White criminals, the percentage of Black criminals will drop. In the meantime, the developers can continue to work on the technical problems and, once solved, criminals, Black, White, Brown, Yellow, Red, Green, and Purple, will be caught, tried and convicted, and our streets will be safer. Isn't that what we all want? Hence the need for some adults in the room.
Luckily, there is one: Steven Shwartz, author of Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, from whose e-book edition I shall now "steal" shamelessly.
The AI we all know and love, albeit to varying degrees, is the AI that powers the robots that stack shelves and do mundane, repetitive and dangerous work, allowing humans to fully realize their potential, contribute to their employer's profits and, more importantly, be safe. The AI systems that we fear, the killer robots which will enslave us, are based on Artificial General Intelligence or AGI. Like humans, these systems have the "ability to reason; to process visual, auditory and other inputs; and to use it to adapt to their environments in a wide variety of settings. These systems are as knowledgeable and communicative as humans about a wide range of human events and topics. They're also complete fiction [emphasis added]." (p. 18)
In other words, AI science fiction is just that: fiction. The problem is, most people don't appreciate that IBM's Watson winning at Jeopardy!, or a program beating a master at chess or Go, may be impressive but is basically a very complicated trick. These systems run on AI yet "have little or no commonsense knowledge of the world and they cannot reason based on knowledge." (p. 18) They are what Ray Kurzweil called "narrow AI systems," defined as "machines that can perform only one specific task." They are not intelligent. (p. 18)
It is narrow AI that may have an impact on employment. And there are a few things of which you need to be aware.
First, perfection. "A missing hyphen in the software ruined the 1962 Mariner space launch. Faulty software was also the cause of the 1979 Three Mile Island nuclear disaster, the 2003 New York City blackout, 2010's high-speed trading outage on Wall Street…and the Boeing 737 Max crashes." (p. 27) If there is a mistake in software, people can die. No one will die from a mistake in HR software, but they might be unemployed, which, in some sense, could be worse.
Second, liability. The manufacturers of the software, and the users, must be held liable for anything that goes wrong. Just as you can't blame an autonomous vehicle for a poor decision because it lacks "commonsense reasoning capabilities" (p. 34), you can't blame the software if it rejects minorities as a matter of practice. It's not the software's fault; it's the fault of the programmers (the manufacturer) and the user (the company whose HR department convinced it to make the purchase).
Third, hysteria. In 2018, “Newsweek reported ‘Robots Can Now Read Better Than Humans, Putting Millions of Jobs at Risk’.” If you believe that, please contact me. I have a bridge in Brooklyn I’m trying to sell! The truth is, AI systems may be able to read, but they cannot read and understand and “researchers do not know how to make them do so.” (p.47)
Our jobs are safe. Every time automation is introduced, some people lose their jobs. Some of those people retire. Some of them are retrained. Some move on to other things. But always, always, more jobs are created than are lost. This is nothing new. It’s been going on for centuries.
In the 1800s, "Luddites destroyed automation equipment to protest [textile] job-destroying machines." Warren Buffett researched the topic for Time magazine. He discovered that in 1776 80% of workers were employed on farms. Today, the number is two percent. Why? Tractors, planters, cotton gins, combines, fertilizer, irrigation or, in a word, technology. (p.50)
Ray Kurzweil, whom I mentioned earlier, discovered that "half of all the jobs available in 1900 no longer exist today." Those "new jobs are paying eleven times higher wages than the jobs in 1900, even after adjusting for inflation." (p.50)
Think about what (almost) everyone reading this article can remember: "Word processors have replaced many secretaries [Remember typewriters?], tax preparation software has reduced the need for accountants, automated toll booths have replaced human toll collectors, internet travel sites have displaced many travel agents, e-commerce (especially Amazon) is taking a toll on brick-and-mortar retail, and self-checkout technology is threatening the 3.6 million US cashier jobs." (p. 50)
But, and I am sorry to get political, unlike blindly shutting down fossil fuel production and claiming the workers can make solar panels for a very small national industry (see below), cashiers won’t be fired. They will become salespeople (if they want) and probably earn more money as they will be able to get commissions. These people will walk the floors, sales tablets in hand, talk to customers admiring products, and close the sale right there on the spot, not giving the customer time to change their minds as they walk to the now non-existent cashier lines. That’s the way it is supposed to work.
Think about ATMs. They marked the doom of bank tellers. Or, at least, that’s what was predicted. What really happened? “While ATMs reduced the numbers of tellers per branch, more branches opened because ATMs reduced the operating costs in each branch.” (p. 51) And don’t forget the jobs that were created manufacturing, installing and repairing ATMs, not to mention those related to the construction of those new bank branches.
One more thing to note: While AI can handle repetitive functions, “only 10 percent of occupations are composed of more than 90 percent automatable tasks. Although parts of a certain job might be replaced by AI, the other 90 percent of the job will still need to be done by a human being; that means that, although your job duties might change, your job is likely safe (at least from AI).” (p.53)
Up until now I have not been writing about hiring because I felt it important to make certain that the relationship between AI and job destruction/creation was properly understood. Always think ATMs, never think solar panels. (For the record, according to its own industry association, in 2019 there were only a quarter of a million solar panel jobs in the US. https://www.thesolarfoundation.org/national/. On the other hand, according to its industry, there were 6.7 million fossil fuel related jobs in the US in 2018. https://www.usenergyjobs.org/2019-report. You do the math.)
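Doing that math, using the industry figures cited above, takes one line:

```python
# Industry-reported US job figures cited above.
solar_jobs = 250_000          # solar panel jobs, 2019
fossil_fuel_jobs = 6_700_000  # fossil fuel related jobs, 2018

ratio = fossil_fuel_jobs / solar_jobs
print(f"roughly {ratio:.0f} fossil fuel jobs per solar job")
# prints: roughly 27 fossil fuel jobs per solar job
```

In other words, the solar industry would have to grow more than twentyfold before "make solar panels" is an answer for displaced fossil fuel workers.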
Shwartz also makes it clear (p.54), “The biggest technology driver of job loss today is not AI. Conventional software that uses explicit coding of instructions and rules, such as e-commerce, rideshare software, and robotics, destroys far more jobs than AI systems.” As has been said many times, it is a tragedy for the person who loses their job to technology, but let’s put the blame where it belongs. Focusing on AI allows conventional software to slip through under the radar.
Which (finally?) brings me to hiring. There is a new acronym, ADS, which stands for "automated decision system." These are the systems that job seekers, employers (because they purchase them – the ADSes, not the job seekers!), and manufacturers have to worry about. They make "decisions and recommendations that previously were made by people," including employment decisions. (p. 186)
The problem is that ADSes, like the people who program them, can be discriminatory even with the best of intentions. I can't summarize this better than Shwartz wrote it (pp. 187-188):
In 2003, economists Marianne Bertrand and Sendhil Mullainathan responded to help-wanted ads in Boston and Chicago with fake resumes. The researchers gave the resumes random names that sounded African American (e.g., Lakisha and Jamal) or Caucasian (e.g., Emily and Greg). The Caucasian resumes received 50 percent more callbacks. Social media such as LinkedIn facilitates intentional discrimination by providing a place where biased hiring managers can view an applicant’s picture.
Let me chime in: Years ago there was a lot of debate over using LinkedIn to vet candidates. By going to LinkedIn, you can see the applicant’s photo. You therefore have a good idea, in fact near certainty, as to their gender, race and, possibly, religion. Lawyers, therefore, recommended that only after receiving a resume, speaking with the applicant, and inviting them in for an interview, should recruiters/employers look at their LinkedIn profile. Then it was not for discriminatory purposes but to spot differences between their resume and profile so as to be able to ask probing questions. That debate, probably because it is nearly impossible to know when a recruiter/employer looks at the LinkedIn profile (before or after inviting the applicant to be interviewed), has vanished. Now to return to Mr. Shwartz:
Job screening ADSes use data that incorporates the hiring preferences and experience of previous hiring managers. Amazon built an ADS to predict which job applicants would be the best employees. However, because most software engineers were historically male, the ADS inadvertently learned a bias against female applicants. Amazon discontinued the system when they discovered this issue.
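A toy sketch of the mechanism Shwartz describes, using hypothetical data and a deliberately naive model (not Amazon's actual system): anything that learns from historically skewed hiring records will reproduce the skew.

```python
# Hypothetical historical hiring records (toy data): mostly male engineers
# were hired in the past, so "male" correlates with the label "hired".
history = [
    ("male", "hired"), ("male", "hired"), ("male", "hired"),
    ("male", "hired"), ("male", "rejected"),
    ("female", "hired"), ("female", "rejected"), ("female", "rejected"),
]

# A naive "model" that scores applicants by their group's past hire rate
# faithfully learns the historical bias, no malice required.
def hire_rate(group):
    outcomes = [label for g, label in history if g == group]
    return outcomes.count("hired") / len(outcomes)

print(f"male score:   {hire_rate('male'):.2f}")    # 0.80
print(f"female score: {hire_rate('female'):.2f}")  # 0.33
```

A real ADS uses far more features and a far more sophisticated model, but the failure mode is the same: the training data encodes yesterday's preferences.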
So now that we know the problem, what’s the solution?
Shwartz identified nine steps employers should take (pp.191-2):
- Hire a diverse workforce to reduce intentional discrimination.
- Use only ADS systems that use interpretable algorithms.
- When building ADS systems, preprocess the data to remove bias.
- Run tests on ADS systems to determine whether they are biased.
- Use only ADS systems that are certified as bias-free by independent third parties.
- Check to see whether they publish statistics showing a diverse hiring pattern.
- Determine whether they only use ADS systems that are explainable.
- Find out whether they test their ADS systems to ensure they are nondiscriminatory.
- Discover whether they have third-party nondiscrimination certifications for their ADS systems.
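One concrete way to perform the bias tests the list calls for is the four-fifths (80 percent) rule of thumb used in US adverse-impact analysis: each group's selection rate should be at least 80 percent of the highest group's rate. A minimal sketch with hypothetical numbers, not a substitute for legal or statistical review:

```python
def adverse_impact(selected, applicants):
    """Flag groups whose selection rate falls below 80% of the
    best-performing group's rate (the four-fifths rule of thumb)."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

# Hypothetical screening outcomes from an ADS:
applicants = {"group_a": 100, "group_b": 100}
selected   = {"group_a": 40,  "group_b": 25}

print(adverse_impact(selected, applicants))
# group_b's rate (0.25) is only 62.5% of group_a's (0.40), so it is flagged
```

Running a check like this on every batch of ADS decisions is cheap; defending a discrimination lawsuit is not.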
Granted, I am prejudiced, but I have a tenth, far simpler solution: Don't use them! They are not worth the cost of the inevitable lawsuits. Hire recruiters to choose candidates to be interviewed and give them strict instructions on the laws concerning discrimination.
Ironically, again quoting Shwartz (p.187): “Factors such as race, religion, color, gender, disability, and family status can be explicitly removed from training tables to prevent ADSes from making decisions based on these factors.” If they can be removed from the ADSes, they can be removed from the people, and cost a lot less in time, money and aggravation. So my tenth solution is really not all that self-serving.
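Stripping those explicit factors from a training table is mechanically simple. A minimal sketch with hypothetical field names; the hard part, as noted below, is that proxies like zip code can leak the same information:

```python
# Hypothetical protected fields to drop before records enter training data.
PROTECTED = {"race", "religion", "color", "gender",
             "disability", "family_status", "zip_code"}

def strip_protected(record):
    # Keep only fields that are not explicitly protected.
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {
    "years_experience": 7,
    "degree": "BS",
    "gender": "F",
    "zip_code": "10001",
}
print(strip_protected(applicant))
# {'years_experience': 7, 'degree': 'BS'}
```

The same instruction, written on one line of a recruiter's checklist, costs nothing at all.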
In any event, discriminatory criteria must be eliminated from the process. Even zip codes need to be removed from the calculus of hiring. This cannot be taken lightly, especially if you work in the EU. "The European Union General Data Protection Regulation now requires an individual to consent to the use of an ADS for a decision that has a consequential impact on that individual." (p.194) And if it's in the EU, it will eventually arrive in the US, probably through California. But, to be fair, there has already been one lawsuit filed, in Texas (p.195):
“A Texas teacher’s union won a 2017 court case in which teachers objected to the use of an automated scoring system as the primary method of identifying 221 teachers for termination. The issue was that the school system had no way to know if the scoring used biased data. … Although the parties settled the case out of court, the school system agreed to stop using the automated scoring system.”
It's not just self-interest that makes me advise against ADSes; it's genuine concern for both employers and job seekers. Tread carefully: you could fall into a huge money pit!