Friends & Family Agree: Cronyism, Nepotism a Problem

hr.blr.com recently posted the results of a survey that asked managers whether they had ever been forced to hire someone they didn’t want to, and if so, why. To quote the article, the results were:

  • 34 percent said cronyism
  • 21 percent cited nepotism
  • 15 percent said race, ethnicity, or gender
  • 11 percent reported that the boss liked her for more than her job skills
  • Another 18 percent said other

Yeesh. The article doesn’t say how many total people were hired for these reasons; this is just the breakout from the subset of hires where the manager was forced to hire someone he or she didn’t want to. But still, it’s interesting to see good old-fashioned cronyism up there, along with its relative (har har) nepotism. I think many people can relate to experiences like this, especially in smaller businesses.
It also reminds me of a symposium at this year’s Society for Industrial/Organizational Psychology conference entitled “Genetic Density as a Predictor of Nepotism in the Family Firm.” The researcher actually studied nepotism in family-owned businesses using some interesting evolutionary psychology methods. Her conclusion: nepotism exists, particularly in family-owned businesses, and it really annoys the people who aren’t benefiting from it. Hardly an epiphany, but it’s nice to see someone studying it.
The reason for nepotism and cronyism isn’t that hard to figure out, though: managers want to hire people they know. They’ll go with the known over the unknown every time, unless they know the person is some kind of moron. And even then, better the moron you know than the one you don’t.


Employment Statistics Website

In response to my comments on calculating adverse impact, reader David posted a link in the comments section to this nifty online adverse impact calculator. I haven’t had time to test it thoroughly, so use it at your own risk (another reader commented on a bug when using selection rates of zero), but it looks pretty neat and seems useful for quick calculations.
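Incidentally, the zero-rate bug is easy to understand once you look at the arithmetic: the four-fifths comparison divides the focal group’s selection rate by the highest group’s rate, so a highest rate of zero means dividing by zero. Here’s a minimal sketch in Python of how a calculator might guard against that edge case (impact_ratio is my own hypothetical helper, not the site’s actual code):

    def impact_ratio(focal_rate, reference_rate):
        """Four-fifths impact ratio: the focal group's selection rate divided
        by the reference (highest) group's rate. Returns None when the
        reference rate is zero, since the ratio is undefined there."""
        if reference_rate == 0:
            return None  # nobody in the reference group was selected
        return focal_rate / reference_rate

    print(impact_ratio(0.15, 0.20))  # 0.75 -> below 0.80, evidence of adverse impact
    print(impact_ratio(0.00, 0.20))  # 0.0  -> a zero focal rate is fine
    print(impact_ratio(0.15, 0.00))  # None -> the edge case that breaks naive calculators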
Interestingly, the website also has other employment statistic calculators, for figuring the following:

  • Turnover
  • Job evaluation points (e.g., for compensation)
  • Availability Analysis
  • Utilization Analysis
  • Eight Factor Analysis (whatever that is)
  • Wage/Hour Calculator
  • Federal Tax Withholding Calculator
  • On-Line Cost of Living Comparisons
  • Consumer Price Index
  • Develop your very own Internet Based Salary Survey

Pretty neat.


Selection Alphabet Soup

A is for Adverse Impact
B is for Bona Fide Occupational Qualification
C is for Cut Scores
D is for Discrimination
E is for Equal Employment Opportunity Commission
F is for Four-Fifths Rule
G is for g
H is for Halo Error
I is for Industrial/Organizational Psychology
J is for Job Analysis
K is for KSAOs
L is for Legal Exposure
M is for Meta-Analysis
N is for Nonverbal Cues
O is for Office of Federal Contract Compliance Programs
P is for Personality Testing
Q is for Questionnaires
R is for r
S is for Subject Matter Experts
T is for Title VII
U is for Uniform Guidelines on Employee Selection Procedures
V is for Validity
W is for Work Sample
X is for X̄, the Mean Score
Y is for Your qualifications
Z is for z-score


Why have job descriptions?

I was skimming over this article on job descriptions from About.com’s HR section and generally nodding my head at all the criticisms of typical job descriptions. I could probably count on one hand the number of really good job descriptions I’ve read and still have enough fingers left over to make an obscene gesture at the rest. Most of them are vapid and overly general. I once saw one whose first bullet point was “Performs job duties in accordance with company policies.” The second was “Performs duties safely,” while the third and last was “Performs other duties as assigned.”
The About.com author goes on to make some recommendations that seem pretty crazy to me, stuff like negotiating and updating tasks and responsibilities on a monthly basis and making sure they keep up with constant changes in technology and business goals. Which, you know, sounds an awful lot like what managers should be doing anyway, without the benefit of company letterhead and a file drawer. And on top of that, job descriptions are often used as ammunition in court cases alleging discrimination (think reasonable accommodations for those with disabilities) or wrongful termination.
So why have job descriptions in the majority of cases? I can think of several reasons not to:

  1. They tend to be outdated quickly
  2. The temptation to make them vague is too great because specifics are hard and scary
  3. They can be used as hard evidence in court cases
  4. They killed my dog
  5. They can serve as a substitute for good management
  6. They often contradict the “real” requirements of the job
  7. It’s easy to put in requirements (e.g., education level) that aren’t really backed up by research or job analysis and thus place you at legal risk
  8. They’re not legally required
  9. They killed my other dog, too

There are probably others, but that’s a pretty good start. Of course, there will be some situations where you want a job description. For use in a job posting, for example. There’s nothing like a poorly written job posting to get you the wrong applicants or no applicants at all. And even then, it should be based on at least some cursory job analysis or input from subject matter experts. But outside of that specific use, I’m not sure why so many companies insist on them. Of course, any decision-making on a job in terms of selection, job design, compensation, or training requirements should be made with a thorough understanding of the job, but that’s not the same as having Skippy the Intern jot down a job description based on a quick “Uh, so what do you do?” interview over the coffee pot.


The many ways to calculate adverse impact

Yesterday I attended a pretty good workshop put on by the Personnel Testing Council of Southern California in which Dennis Doverspike talked about assessing adverse impact, i.e., when a test or other hiring system screens out one group at a substantially higher rate than another. (He also spoke on hiring based on a public service work ethic, which I’ll probably write about next week.)
Adverse impact analyses had always seemed pretty straightforward to me. I was certainly aware that other methods existed, but I had always used the “Four-Fifths (or 80%) Rule” to determine whether a hiring system had adverse impact against minorities or women. Quoth the Uniform Guidelines on Employee Selection Procedures:

A selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate will generally be regarded by the Federal enforcement agencies as evidence of adverse impact, while a greater than four-fifths rate will generally not be regarded by the Federal enforcement agencies as evidence of adverse impact.

So here’s an example:

    Group     Took Test   Passed   Pass Rate
    Males         80         16       20%
    Females       20          3       15%

In this example 80 males took a test and 16 passed, while 20 women took the test and 3 passed. So the passing rates were 20% for males and 15% for females. Is the 5-percentage-point difference enough to signal adverse impact?
The answer is yes: 15% / 20% = 75%, or three-quarters. The Four-Fifths rule says that if the ratio is less than 80% (i.e., four-fifths), then you’ve got evidence of adverse impact. Pretty cut and dried, right?
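If you’d rather script it than eyeball it, the whole rule fits in a few lines of Python. Here’s a minimal sketch using the example numbers above (nothing official, just the arithmetic):

    # Four-fifths rule on the example above.
    passed = {'males': 16, 'females': 3}
    took = {'males': 80, 'females': 20}

    rates = {group: passed[group] / took[group] for group in took}
    highest = max(rates.values())  # 0.20, the males' rate

    for group, rate in rates.items():
        ratio = rate / highest
        verdict = 'evidence of adverse impact' if ratio < 0.8 else 'no evidence'
        print(f'{group}: pass rate {rate:.0%}, ratio {ratio:.0%} -> {verdict}')

Running it prints a 75% ratio for females, the same red flag as the hand calculation.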
Well, as the PTC-SC workshop pointed out, no. There’s also language in the Uniform Guidelines that allows for more rigorous statistical tests like the chi-square or Fisher’s exact test, and there’s a history of court cases that use other quasi-statistical rules of thumb, like saying that the pass rate for the protected group must be within roughly two standard deviations (the familiar 1.96 cutoff) of the dominant group’s passing rate. And the thing is, depending on the distribution of your data, one method may raise a red flag while another may not. There are also different assumptions about the population of interest: is it all the people who applied for the job, or all the people in your labor market who could have applied? And don’t even get me started about setting different levels of alpha (i.e., accepting a 5% or 10% or 1% chance of saying there’s a difference between the groups when there isn’t). Seriously, don’t. We’ll be here all day.
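To see how the fancier tests can disagree with the rule of thumb, here’s a sketch that runs SciPy’s stock contingency-table tests, plus a hand-rolled two-proportion z-test, on the same example data (this assumes SciPy is installed; it’s my own illustration, not anything handed out at the workshop):

    from math import sqrt
    from scipy.stats import chi2_contingency, fisher_exact

    # Rows: males, females; columns: passed, failed (counts from the example above).
    table = [[16, 64],
             [3, 17]]

    chi2, p_chi2, dof, expected = chi2_contingency(table)
    odds_ratio, p_fisher = fisher_exact(table)

    # Pooled two-proportion z-test: the "within about two standard deviations" rule.
    n1, n2 = 80, 20
    p1, p2 = 16 / n1, 3 / n2
    pooled = (16 + 3) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

    print(f'chi-square p = {p_chi2:.2f}, Fisher exact p = {p_fisher:.2f}, z = {z:.2f}')

With samples this small, none of these comes anywhere near significance at the usual 5% level, even though the four-fifths rule flagged adverse impact on the very same numbers. Same data, different method, different answer.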
Dr. Doverspike’s presentation provided a long list of helpful formulas and procedures, but the thread that ran through them all was this: there’s more than one way to skin a cat and then not hire it based on discriminatory hiring practices against skinless cats. In other words, the Four-Fifths rule isn’t the final word, and whether your hiring procedure has adverse impact may depend as much on your data as on your lawyer.
In the end, though, it’s almost all a moot point. My own rule of thumb would be this: unless you’re actively trying to increase the diversity of your workforce, assume you have adverse impact and move on to looking at validity and utility. If you use your favorite method and find that you don’t have adverse impact, assume that some other lawyer or expert witness could come along and uncover some just by slicing your data differently or making a couple of different assumptions. If you want to maximize the usefulness of your test, you should be more worried about whether it’s valid and what kind of utility you’re getting out of it.