There’s an article on PersonnelToday.com that urges employers to “tighten up their recruitment procedures to avoid legal claims from the growing number of ‘serial saboteur’ job applicants.”
Okay, wait. Stop. Right there, that sounds like a manufactured crisis in the same league as “SHARK ATTACK!” It’s almost like a Fox special: When Rogue Applicants Go Bad!
But let’s continue. Says the article:
Under the scam, one person makes multiple applications for an advertised position under different identities, in an attempt to prove that an employer discriminates against applicants of a particular race, gender or disability.
…If the genuine application receives less favorable treatment than the falsified applications, they then have evidence to suggest that the employer discriminates against applicants of a particular race, gender or disability.
I’m not sure how this would work beyond blackmail on the part of the applicant. Let’s say “Chris” is female. She sends in two applications for the same position, reporting her sex as female in one and male in another. Then if the male Chris gets asked for an interview and the female one doesn’t, she threatens to sue? According to the PersonnelToday.com article, that’s sufficient evidence “to suggest that the employer discriminates against applicants of a particular race, gender or disability.” Really? So much so that Chris can now shake down the employer for a cash settlement?
Maybe it’s because PersonnelToday.com is a U.K. publication and the laws there are substantially different from the U.S. discrimination laws I’m familiar with. But in the U.S. there’s a distinction between adverse treatment and adverse impact. The former deals with blatant, prima facie discrimination like putting out a sign that says “Sorry, Catholics need not apply” or “We only hire White folks.” It’s understandably almost impossible to defend from a legal standpoint. Adverse impact deals with cases where there’s no overt intention to discriminate, but your system or decisions result in discrimination nonetheless, like screening out more women because you require someone to be able to lift 100 pounds in order to qualify for a job as a Truck Loader at a warehouse.
But courts and mediators (in the U.S., anyway, and maybe in the U.K. despite this article) don’t look at one case to determine the presence of adverse impact. They look at statistics over large groups of people. So that’s out. So unless Chris gets a letter that says “Boy-howdy I’m sure glad you’re not a skirt. Want the job?” and can thus prove blatant adverse treatment, this whole thing strikes me as very unlikely at best and easy to defend against at worst.
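To make the “statistics over large groups” point concrete: in the U.S., the EEOC’s Uniform Guidelines use the “four-fifths rule” as a rough preliminary screen for adverse impact. Here’s a minimal sketch of that calculation with made-up numbers (the group labels and counts are purely illustrative):

```python
# Illustrative four-fifths-rule check for adverse impact (made-up numbers).
# Under the EEOC Uniform Guidelines, a selection rate for any group that is
# less than 80% of the highest group's rate is treated as preliminary
# evidence of adverse impact. Note it's a screen applied to GROUP
# statistics -- never to a single applicant like "Chris."

def selection_rates(applicants, hires):
    """Selection rate per group: hires / applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_check(applicants, hires):
    rates = selection_rates(applicants, hires)
    highest = max(rates.values())
    # Ratio of each group's rate to the highest rate; < 0.8 flags impact.
    return {g: (r / highest, r / highest < 0.8) for g, r in rates.items()}

applicants = {"men": 100, "women": 100}
hires = {"men": 30, "women": 20}
result = four_fifths_check(applicants, hires)
# Women's rate (0.20) is two-thirds of the men's rate (0.30), which is
# below the 0.8 threshold, so that group gets flagged.
print(result)
```

One applicant sending in two applications simply can’t produce this kind of evidence; you need the group-level counts.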
This suspicion is kind of buoyed by the fact that PersonnelToday.com’s only source of information on this terrible threat comes from one person at a law firm who –surprise– apparently sells a service where he’ll help you prevent this kind of thing. I wonder if they sell shark repellent, too.
Sage Publications has put up the new edition of the Review of Public Personnel Administration. In the table of contents I noticed an article entitled A Review of Court Decisions on Cognitive Ability Testing, 1992-2004, with the following abstract:
General cognitive ability is likely the single best predictor of job performance, although it typically results in race-based adverse impact. The majority of the 22 cognitive ability testing cases in appellate and district courts from 1992 to 2004 involved class action plaintiffs and civil service jobs. Organizations that used professionally developed tests that were validated and that set cutoff scores supported by the validity study fared well in court. The validation study must be conducted according to professional standards, and the results should be used properly when setting cutoff scores. Plaintiffs in cognitive ability test-related lawsuits were likely to be members of minority groups; the majority of cases were race-based claims. Utilizing a consultant to develop and validate selection tests may ensure the appropriate expertise and a professionally developed and validated test; however, it does not alleviate the responsibility of the employer for adverse impact.
This particular journal is published online, but you have to pay $15 to access this one article or more for the whole issue. Which I haven’t. What, do I look like I’m made out of money? But if you or your library has a subscription, it might be a good article.
George’s Employment Blawg has a short post about the accuracy of criminal record databases. In it, the author points to a study finding that checking names against a criminal record database produced 11.7% false negatives and 5.5% false positives. In other words, 11.7% of people were reported not to have a record when they really did, and 5.5% of people who did NOT really have a record were reported as having one. We could go to more sophisticated technology like fingerprinting, but that’s really expensive and raises a lot of legitimate privacy concerns. Do you want every prospective employer you talk to to have your fingerprint on file?
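To keep those two error types straight, here’s a tiny sketch of how such rates are defined. The sample sizes are invented for illustration; only the definitions matter:

```python
# Illustrative only: invented counts showing how false-negative and
# false-positive rates like the 11.7% / 5.5% figures are defined.

def error_rates(true_pos, false_neg, true_neg, false_pos):
    # False-negative rate: share of people WITH a record whom the
    # database check missed.
    fnr = false_neg / (true_pos + false_neg)
    # False-positive rate: share of people WITHOUT a record whom the
    # check wrongly flagged as having one.
    fpr = false_pos / (true_neg + false_pos)
    return fnr, fpr

# Hypothetical sample: 1,000 people with records, 1,000 without.
fnr, fpr = error_rates(true_pos=883, false_neg=117, true_neg=945, false_pos=55)
print(f"{fnr:.1%} false negatives, {fpr:.1%} false positives")
# -> 11.7% false negatives, 5.5% false positives
```

The two rates come from different denominators (people with records vs. people without), which is why they can’t just be added or averaged.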
Taking a step back, though, I have to wonder if this is as big a problem as it first appears. No selection system is perfect. They’ll all have some number of false positives and false negatives. So long as the employer doesn’t publish a billboard with the names of everyone it finds to have a criminal record, you can almost just chalk this up to random error variance as long as the utility of the overall system –warts and all- is positive. I’m not sure how that’s different from anything else that doesn’t yield perfect predictions.
It’s just that the injustice of being punished for something you didn’t do is a bitter pill to watch other people swallow.
Two quick stories today on Employment Law Information Network related to selection and the Americans with Disabilities Act. Both stories are contained in a .pdf document here.
First, a U.S. District Court found that inability to drive is not “a major life activity” and thus not covered by the ADA:
A computer programmer whose vertigo prevented her from driving to work had no discrimination claim under the Americans with Disabilities Act, the U.S. District Court for the Northern District of Illinois recently ruled in Yindee v. Commerce Clearing House Inc., No. 04 C 0730. Granting summary judgment to her employer, the court found that plaintiff’s vertigo did not substantially limit her in a major life activity since the sole activity affected by her condition was driving.
This is important because it suggests that employers don’t have to provide reasonable accommodations to employees who can’t drive to either a job or a pre-employment testing session. No working from home or remote test administration for you, Dizzy!
The second story is on the good ole’ MMPI:
The Minnesota Multiphasic Personality Inventory (“MMPI”) is a test that determines where a person falls on scales measuring traits such as depression, paranoia and mania. On June 14, 2005, the U.S. Court of Appeals for the Seventh Circuit held that the MMPI is a medical examination and that its use by an employer in making personnel decisions violated the Americans with Disabilities Act.
In short, the Court considers the MMPI to be a medical exam, and the ADA doesn’t allow employers to require medical exams prior to a job offer if they’d screen out people with disabilities. I know that the events relevant to this case happened a while ago, but I’m really, really shocked that anyone would still be trying to use the MMPI –a clinical test meant to measure psychopathology– for selection or any other personnel decision. There are so many other tests available.
What’s interesting, though, is that Pearson Assessments, the publisher of the MMPI, has a web page up that spins the court decision to say that the test CAN be used for selection under the right circumstances. Which I guess is technically true but kind of misses the point.
There’s an interesting post on About Human Resources regarding Malcolm Gladwell’s Blink: The Power of Thinking Without Thinking. Says the author of the About.com piece:
Whenever we have to make sense of complicated situations or deal with lots of information quickly, we bring to bear all of our beliefs, attitudes, values, experiences, education and more on the situation. Then, we thin-slice the situation to comprehend it quickly. The implications of this concept have astonishing significance for our personal reactions to most situations.
It seems to me that this ability to think without thinking, to make snap decisions about situations and people in a “blink”, has significant implications for how we interview and hire staff.
She’s right, of course. But “the Blink effect” and “thin slicing” are new terms for an old phenomenon that’s been studied out the kazoo by psychologists for years now. One of my favorite classes in grad school was a seminar on Judgement and Decision-Making, and we called this phenomenon “heuristics” or, more generally, “decision-making under uncertainty.” The gist of it is this: puny humans are limited in their information-processing capabilities, so they have to use various mental shortcuts or heuristics lest their brains pop, fizzle, and burst into flame before the end of a typical morning. There’s no way you can accumulate (and, more to the point, use) all the information needed to achieve complete rationality for every decision you have to make. None of us has the mental horsepower, time, or other resources needed for that.
So we do the best we can. We rely on those mental shortcuts and make snap judgements because it gets us through the day and keeps things going, no matter how bone-headed and irrational such reliance is. Our brains are hard-wired for it and it’s the root of a variety of innate human foibles, including:
- Liking people who are similar to ourselves
- Thinking an event is more frequent than it is if we’re familiar with it
- Attributing the behaviors of others to their nature rather than external influences
- Ignoring information that contradicts our previously-held beliefs…
- …and putting too much weight on information that confirms them
And tons more. In the end, though, we have to acknowledge, as Gladwell apparently does, that these biases exist and we can’t completely expunge them from our daily lives. The trick is to be aware of them in decision-making and use the tools and techniques needed to avoid them when making really important decisions. It’s fine to be irrational when deciding what movie to see or what menu item to order, but not so much when deciding whether or not to refinance your mortgage or whether or not to hire a particular person. So that’s why we do things like standardize employment testing procedures or structure interviews so that everyone is asked the same questions and has their answers evaluated in the same way.
And indeed, the author of the About.com piece makes the same point, though with hipper terminology:
The key take away from the book is the necessity for each of us to be aware of and control our thin-slicing. After reading Blink, I’m more convinced than ever that we make snap decisions about situations and people, unconsciously, that bring into play all of our biases. All candidates for positions deserve the same treatment and the same attention to factors other than race, religion, appearance and size.
Any decisions that we make based on our thin-slicing must be accompanied by the recognition that we do make important decisions using this process – unconsciously. Take the time to gather a larger pool of data before going with your initial gut reaction. While you may be right, you can be wrong. And, there is the constant opportunity to unconsciously discriminate, make poor hiring and networking choices and to trust or distrust employee stories for all of the wrong reasons.
So there you go: old lesson, new lexicon.
Read “Why “Blink” Matters: The Power of First Impressions” on About.com.
I was poking around the ‘net looking for more blogs on I/O psychology and employment testing when I found my way into The Assessment Council News Newsletter for June, 2005. In it was an article entitled “Effective Interview Practices for Accurately Predicting Job Candidates’ Counterproductive Traits.” It was a promising title, but the article underneath it had me cringing from almost the get-go with things like this:
Ideally, to accurately assess a job applicant’s personality characteristics, interviewers should use an unstructured interview format instead of the popular structured format that utilizes standardized questions. The unstructured interview consists of free-flowing conversation between the interviewer and the applicant with no standardized questions. This interview type is usually conducted in a very casual atmosphere, such as over coffee or lunch, and in which case many follow-up questions are asked of the applicant. Research has shown that the unstructured format is far superior to the structured format when predicting a job candidate’s personality…
Wait, what? Sure enough, the author goes on to defend the use of the unstructured, unguided, unstandardized interview that relies entirely on the demonstrably bad judgement of puny humans when working under conditions of uncertainty and incomplete information. She doesn’t even list recommended questions –the implied message is that you should just wing it. She is, in fact, going against the grain of established research and wisdom in the selection and assessment field, and not in a James Dean “I play by my own rules” kind of way. It’s more like a Mr. Magoo kind of way. It’s exactly the kind of thing that I try to talk people out of all the time, and this woman is saying that it’s the cat’s pajamas. Silk pajamas, even!
She goes on to talk about the “Good Judge, Good Trait, Good Target, and Good Information” moderators of personality assessment, saying:
The “Good Target” variable suggests that some targets (or job applicants) are easier to judge than others and it is these individuals for whom you will be able to make more accurate judgments (Colvin, 1993). For example, upon meeting a candidate, if he opens up to you, tells you his life story, and exhibits consistent behavior throughout the interview process, you can consider this individual a “Good Target” and know that you will probably be correct in your judgement of him.
I had to lie down for a bit after reading that. You feel free to do the same. The most striking thing about that blurb is that it demonstrates that the author doesn’t really understand what a “moderator” is in this context. It’s a variable whose magnitude affects the strength or direction of a correlation between two other variables (say personality and performance, or personality and counterproductive work behaviors). So a person could be at the top of the scale for “Good Target” by telling you how his Aunt Ruth used to come visit every summer ’till the gout got her, but if you’re not measuring personality, you aren’t going to find any kind of relationship. Because it’s not going to be there to be moderated.
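Here’s a quick simulation of what moderation actually means in this setting. The numbers are made up: I’m treating “judgability” as a grouping variable that changes the strength of the personality-to-performance correlation, which is the moderator role the article’s author misses.

```python
# Illustrative sketch (simulated data): a moderator like "Good Target"
# changes the STRENGTH of the personality -> judgment correlation.
# It is not itself evidence that the correlation exists.
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
personality = [random.gauss(0, 1) for _ in range(500)]

# Good targets: interviewer judgments track the true trait closely.
good = [p + random.gauss(0, 0.5) for p in personality]
# Poor targets: the same trait, judged through much more noise.
poor = [p + random.gauss(0, 3.0) for p in personality]

print(f"good targets: r = {pearson(personality, good):.2f}")
print(f"poor targets: r = {pearson(personality, poor):.2f}")
# The correlation is stronger in the "good target" group -- but if
# personality were never measured at all, there would be nothing
# for "Good Target" to moderate.
```

The point of the sketch: the moderator only scales a relationship between two measured variables. A chatty applicant doesn’t conjure that relationship out of an interview that never measures the trait.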
You want to measure personality traits relevant to work behaviors? Use a well-researched paper and pencil test accompanied by some kind of validation research to make sure they’ll relate to important outcomes. There are, of course, observational methods for measuring personality that I/O Psychology has habitually ignored, but I’m pretty sure going to Starbucks and saying “So, tell me about yourself” isn’t one of them.
Some survey makers polled 425 senior execs and found that at the tip top of their list of concerns is “Attracting and retaining skilled staff.” So presumably, that means attracting the right people, then separating them from the wrong people and bringing them on board. Spiffy! Number 2 on the list was “Changing organizational culture and employee attitudes,” which again often involves hiring the right people (among other things). So until our top scienceticians can churn out gold-plated super robots who can do the work of 1.3 men (or 0.9 women), people continue to matter.
Good to hear, but isn’t this at odds with HR’s not getting a seat at the strategic table? Or is HR not the one doing it? Or is it just lip service?
I recently finished reading Personality Psychology in the Workplace, edited by Brent Roberts and Robert Hogan (fun fact: I studied under Joyce and Robert Hogan at the University of Tulsa). I don’t know why I keep buying books like this, because my experience with them is always the same. It’s just kind of this loose glob of papers related to some aspect of personality in the workplace, tied together by nothing stronger than the general topic. None of the chapters relate to each other or build on each other, and some of them are so esoteric and rigidly written that they’re a massive chore to get through.
Actually, a few of the chapters were good and a few might be great to come back to if I needed to research a particular sub-topic (e.g., measuring personality through item response theory), and there are a couple of decent summary chapters that I could use as an easy citation when discussing the value of conscientiousness and emotional stability for a variety of jobs. But on balance I don’t feel like I got much out of it. I’m just going to have to be more careful and pick books that read more like textbooks or self-contained technical books instead of an outlet for researchers to increase their publication count. I recently bought one on theories of multiple intelligence that looks a lot more hopeful. It’s good that it’s more coherent, but it has the drawback of presenting only data from one school of thought. Still, I guess that’s what other books are for.
Here’s an interesting story that relates, in a way, to employment and selection. It’s about two girls who did a bit of a social experiment for a high school class. Both girls looked pretty similar to start with: tall, thin, blonde. The hook is that one girl dressed up in preppy clothes that presented a clean-cut and generally “all American” look. Her friend went goth, wearing heavy eyeliner, black clothes, black hair dye, and a bared midriff. See the picture there on the right if you need a visual aid:
Then both girls went to apply for jobs as register zombies at Abercrombie & Fitch, an almost overbearingly trendy and preppy clothing store. I think you can probably see where this is going.
The A&F manager practically stumbled over himself trying to hire Ms. Preppy, despite the fact that the girl said she had no previous retailing experience and no references. I think she may have even said she was mildly retarded and was always being blamed for stealing stuff. Ms. Goth, on the other hand, was treated like a pariah by the (presumably) same A&F manager, despite the fact that this girl said she had worked two retail jobs before and had great references. The girls then repeated the experiment at Hot Topic, a much sluttier vendor of midriffs and miniskirts. The results were reversed, though not quite as drastically.
Of course, this is all utterly unscientific. It’s two girls doing clumsy manipulations on an ill-defined variable and running only two uncontrolled trials. So you aren’t going to see their stunt in the next issue of the Journal of Applied Psychology (much to their despair, I’m sure). Thing is, it doesn’t need to be scientific. There’s already plenty of scientific research showing that you’re more likely to get interviewed or hired the taller you are, the thinner you are, and the more professionally dressed you are. Same for the “like me” effect that makes an interviewer like an interviewee more the more similar they are in appearance. So saying that looks really do matter shouldn’t really elicit much more than a resounding “Duh!” from the audience.
But while that may be true, it’s still incredibly easy (and a lot safer) to instead focus on a handful of simple measures to find the occasional diamond in the goth, even for low-level jobs like this. Previous work experience is a no-brainer, and I’d like to slap the A&F manager for completely overlooking this. I don’t put any stock in references (research shows there’s hardly any variance and they have almost no predictive validity), but a few simple interview questions could screen out obvious misfits. If you want to do even better you could add biodata. Even better, cognitive ability and personality tests could be used.
Sure, the retail managers in this story may think that people who look like Ms. Preppy work out better than Ms. Goth, just because that’s the way it is, and they may even have some examples to back this up. But that’s a clumsy hiring practice –measure what you need to measure and nothing else. It’s so easy to do so much better.
In fact, given all that, the funniest part of this story is the “insta-poll” the reporting website was running:
So, after illustrating the dumb generalizations made by retail managers about skin-deep (heck, not even that; clothes deep) features, which dumb generalization would YOU make, dear reader? Heh.
At any rate, I deal with this kind of “I just KNOW a good employee when I see one” fallacy all the time in the professional world, and it’s alarming to see that it has seeped into our shopping malls. Won’t someone think of the children!
I was listening to the news one morning and they had a piece about how many medical researchers are beholden to the makers of the products they’re testing. Furthermore, the major medical journals who publish this research are sometimes unable to deal with or know about this potential bias. If the makers of Lipitor, for example, are paying researchers to study its effectiveness at reducing cholesterol relative to a competing drug, then that raises all kinds of questions about objectivity. Those questions can be dealt with, of course, and they need not mean that the research is worthless. You’ve just got to have safeguards and full disclosure to everyone, including the readers.
Interesting as all that was, I was more interested in the question of why we don’t do tests of specific products in I/O Psychology. If the medical field can conduct scientific research on name brand drugs and get them published in top-tier journals, why don’t we study off the shelf products used in the area of executive development, selection, and training?
I’m not talking about measuring So-And-So’s Five Factor Model of Implicit Leadership or a meta analysis of studies looking at conscientiousness. I want a team of crack psychologists to study Stephen Covey’s 7 Habits of Highly Effective People training and tell the world if it really does do what it says it does. Let them use lab rats if they need to. I want those same, objective scientists to study the jaunty Impact Hiring system or the use of the Myers-Briggs Type Indicator.
These kinds of studies are being done (well, some of them; I’m pretty sure nothing scientific has come within a hundred yards of a Covey seminar, but I’d love to be corrected if I’m wrong), but they’re being done by the test vendors and the consulting firms that sell them. Let me ask you: would you sooner trust a study on the effectiveness of St. John’s Wort put out by Walgreens or one put out by the Journal of the American Medical Association?