
The Wrong Way to Use AI for Recruiting

December 30, 2019. 7 mins read

In our recent piece on The Impact of No Brokerage Fees on Retail Investors, we talked about how zero-fee stock trading along with fractional share ownership means that there is no excuse not to engage in best practices like dollar cost averaging and diversification. We also talked about how our own preferred strategy of dividend growth investing (DGI) results in income streams that grow every single year, outpacing inflation, and resulting in a better quality of life as we age.

One of the thirty companies in our DGI portfolio is Automatic Data Processing, more commonly known as ADP, a $74 billion provider of human capital management software and solutions. ADP has quite the track record when it comes to increasing their dividend, having done so for 45 years in a row.

ADP Company Overview – Credit: ADP Investor Deck

Just last month, ADP announced a 15% dividend increase for investors along with a decision to repurchase $5 billion in stock (this increases earnings per share, which means there’s more room for dividend growth in the future, all else being equal).
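
To see why, here’s a back-of-the-envelope sketch of the buyback math. The earnings, share count, and share price below are hypothetical round numbers for illustration, not ADP’s actual financials:

```python
# Back-of-the-envelope buyback math. All figures below are hypothetical
# round numbers for illustration, not ADP's actual financials.
net_income = 2_500_000_000          # $2.5 billion in annual earnings
shares_outstanding = 430_000_000    # share count before the buyback
share_price = 170.00                # assumed average repurchase price

eps_before = net_income / shares_outstanding                     # ~$5.81

# A $5 billion buyback retires shares at the assumed price.
shares_retired = 5_000_000_000 / share_price                     # ~29.4 million shares
eps_after = net_income / (shares_outstanding - shares_retired)   # ~$6.24

print(f"EPS lift: {eps_after / eps_before - 1:.1%}")  # ~7.3% with zero earnings growth
```

Same earnings spread across fewer shares means higher EPS, so the dividend can keep growing without pushing up the payout ratio.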

Like most companies today, ADP has riddled their latest investor deck with mentions of artificial intelligence (AI). There’s a right way and a wrong way to use AI in human resources, and in today’s article we’re going to discuss the pitfalls of using AI for recruiting.

Why Human Resources Exists

If you’ve ever worked in a large corporation as a hiring manager, you’ll understand why the Human Resources (HR) function exists. It has nothing to do with protecting the employees and everything to do with protecting the employer’s best interests. And for some reason, it manages to attract some completely inept individuals who like to create more work for themselves instead of creating efficiencies. You can see the incompetence on full display when it comes to technology solutions that human resources will cram down the throats of hiring managers. (Early implementations of Concur were a great example of just how little HR vets what they adopt.)

ADP has built a very successful business around providing HR solutions that add value. Over the past ten years, they’ve increased their dividend by 10% per year on average. You don’t achieve track records like ADP has by selling rubbish solutions. With that said, there’s a startling new trend in HR technology solutions that needs to be called out for the garbage that it is. In recent years, some human resources departments have decided that they know better than hiring managers when it comes to identifying the right talent. Or as one Twitter user put it, “if you’re wondering why you can’t find the right people for that crucial position, it might be because Gwyneth in HR is excluding people using repurposed love quizzes from 1980s magazines.”

As we’ve talked about before, there’s a right way and a wrong way to use technology to make hiring decisions.

The Right Way to Use AI for Recruiting

In our last article on this topic, we looked at The Right Way to Use Recruitment Technology, which involves tracking key metrics like retention to determine whether better people are being hired. Contrast this with other firms we’ve looked at that judge the success of their screening process by looking at group photos, a process that involves playing games because the startup in question thinks women “don’t do well on standardized tests.”
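
As a rough sketch of what measuring retention might look like in practice, consider the toy example below. The data and column names are entirely hypothetical; the point is simply that retention is an observable outcome you can score a screening method against:

```python
import pandas as pd

# Hypothetical hiring records; the column names and values are made up for illustration.
hires = pd.DataFrame({
    "screening_method": ["manager", "manager", "manager",
                         "algorithm", "algorithm", "algorithm"],
    "retained_12_months": [True, True, False, True, False, False],
})

# Ground truth: did the hire stay at least a year? Compare screening methods on it.
retention_by_method = hires.groupby("screening_method")["retained_12_months"].mean()
print(retention_by_method)
# algorithm    0.333333
# manager      0.666667
```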

The situation is now reaching a tipping point where all sorts of “AI startups” are cropping up selling snake oil to gullible human resources departments everywhere. This practice was recently called out by Arvind Narayanan, a professor at Princeton University, who penned an aptly titled presentation, “How to recognize AI snake oil.” The only slide we’re going to take from his presentation can be seen below, presented alongside a caveat about why these startups have been named (note that we have spoken quite favorably about one of these startups in past articles, and those opinions haven’t changed):

Slide from Arvind Narayanan’s presentation on AI snake oil – Credit: Arvind Narayanan

“Why are HR departments apparently so gullible?” asks Mr. Narayanan in his next slide. If you’ve served hard time in a large corporation, then you already know the answer. What we want to better understand today is why artificial intelligence fails in this respect, and how we might identify similar failed implementations of AI going forward.

Predicting Social Outcomes

Mr. Narayanan proposes that the root cause of this problem is that AI has been built up to be something so grand that people are willing to believe it can accomplish just about anything. (This same level of hype is why there are now more than 3,000 AI startups around the globe, a number that continues to grow daily.) He goes on to talk about legitimate progress being made with AI in a category called perception – things like facial recognition, medical imaging, and natural language processing. The reason progress has been made so quickly in these areas is that the outcomes are binary. We know immediately if it worked or it didn’t, and we can quickly make the algorithms perform better by putting some “humans in the loop.”
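
Before moving on, here’s a minimal sketch of what that “humans in the loop” feedback looks like, using scikit-learn and synthetic data. The setup is simplified (a real pipeline would route cases to an actual labeling interface), but the mechanics are the same: send the model’s least-confident predictions to a person, then retrain on their labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary task standing in for a perception problem (e.g. spam vs. not spam).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:200], y[:200]    # small labeled set to start with
X_pool, y_pool = X[200:], y[200:]      # unlabeled pool; y_pool plays the human reviewer

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Human in the loop: route the least-confident predictions to a person for labeling,
# fold those ground-truth labels back into the training set, and retrain.
proba = model.predict_proba(X_pool)[:, 1]
uncertain = np.argsort(np.abs(proba - 0.5))[:50]   # 50 cases nearest the decision boundary
X_train = np.vstack([X_train, X_pool[uncertain]])
y_train = np.concatenate([y_train, y_pool[uncertain]])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```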

He then discusses a second category called “automated judgement” which looks at things like spam detection, automated essay grading, detection of copyrighted materials, or content recommendations. These aren’t binary outcomes because there is some element of subjectivity involved. Still, there is usefulness in these solutions. Finally, he talks about a category labeled “predicting social outcomes” where AI algorithms attempt to do things like predict job performance or recidivism. The remaining half of his presentation delves into why this isn’t possible, and he uses the Fragile Families and Child Wellbeing Study conducted by Princeton to demonstrate his point. More than 450 researchers tried using machine learning algorithms to predict social outcomes, and none fared better than good old regression analysis, a technique that’s over a century old.
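
You can replicate the spirit of that comparison in a few lines: fit a plain linear regression and a fancier model on the same noisy prediction task, then score both out of sample. The data below is synthetic, standing in for hard-to-predict social outcomes; it is not the Fragile Families data itself:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic, very noisy data standing in for hard-to-predict social outcomes.
# This is NOT the Fragile Families data; it just makes the comparison concrete.
X, y = make_regression(n_samples=500, n_features=30, noise=50.0, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("gradient boosting", GradientBoostingRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean out-of-sample R^2 = {r2:.3f}")
```

On data this noisy, the fancier model buys you little or nothing out of sample, which is the same pattern the Fragile Families challenge found at far larger scale.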

The Importance of Ground Truth

We’ve talked an awful lot about predictive analytics, which is now being used across all industries to make things more efficient. What all these sophisticated techniques have in common is some element of ground truth which can be used to determine success. Ask any hiring manager to come up with a complete set of metrics to quantify an employee’s performance and they’ll tell you it’s next to impossible. That’s because it’s not just about the person, but the impact they have on the people around them. In a high-performing team, one bad apple can spoil the entire bunch.

We’re now hearing about some human resources departments discouraging hiring managers from using “the airplane test” or rejecting a candidate based on “cultural fit.” Bad idea. The second you bring on someone who doesn’t mesh well with your team, morale plummets across the board, and you’re now tasked with getting rid of the bad apple. Good luck with that. In some locations around the globe, the paperwork involved with dismissing an employee can take up 30% or more of a hiring manager’s time, “documenting the bad performance.” It can take years to get rid of a bad apple. During the hiring process, the slightest red flag may be the only indication you’ll ever receive that you’re about to hire one. Just how exactly will human resources even begin to measure something as complex as employee performance when it crosses dozens of intangible dimensions like:

  • Breadth of usefulness as seen by hiring manager
  • Performance as seen by other team members
  • Interactions with other team members
  • Interactions with other departments
  • Interactions with senior leadership
  • Extent to which person is self-motivated
  • Extent to which person takes things personally
  • Written and verbal communication capabilities
  • Extent to which the person may drive their manager completely nuts

Make one mistake as a hiring manager and it can be a living nightmare for you and your team. Any attempt by human resources to refine the pool of candidates based on some black box technique is a bad idea. Essentially, they’re saying they know better than the hiring manager about what the needs of the job are. That sort of arrogance will quickly cripple a high-performing manager who will find somewhere else to work if they can’t hire the most competent people based on their experience. In other words, let hiring managers do their jobs and you do yours. Let’s hope ADP applies artificial intelligence in the correct manner – in situations where ground truth exists – and doesn’t fall for this snake oil.

Conclusion

Operating large multi-billion-dollar organizations requires competent managers. Generally speaking, competent managers are capable of hiring high-performing individuals who are often smarter than they are. When human resources dictates that hiring managers select from some pool of algorithm-selected candidates, it limits the capabilities of an organization.

Furthermore, when governments start to impose hiring quotas, as the State of California has decided to do, we’re no longer able to select the best person for the job. Even worse, it leaves the person who was recently promoted to the Board of Directors wondering if they’re in the boardroom because they earned it, or if they’re there to tick some box because they happen to sit down when they take a piss. As investors, we demand that the most competent person is hired for the job, always. And we believe those decisions need to remain with the hiring managers who know the job and the team fit best.
