- Rocket-Hire.com’s May newsletter came out. Find some nifty articles (and sales pitches) there.
- U.S. Justice Department sues New York City for discriminating against Black and Hispanic firefighter applicants. Ouch.
- Psychology in the mainstream press: Study: Your Personality Can Change (and Probably Should).
- The 2007 IPMA-HR Assessment Council (IPMAAC) annual conference is in St. Louis on June 10-13. Registration is open, and there are some fantastic speakers: Wayne Cascio, Robert Hogan, and Nancy Tippens.
Someone on the IPMAAC listserv noted that the Equal Employment Opportunity Commission (EEOC) recently had a public meeting where they invited people to listen to presentations from experts on “emerging trends in workplace testing and selection procedures.” These included some of the EEOC attorneys, those affected by recent cases, and other huge brains in the organizational psychology world. Too bad it was in Washington, D.C.; I’d love to have gone.
One person who did go, however, was kind enough to write up a summary and post it to the IPMAAC list. That summary can be found by performing clicking motions here. It sounds like there were various retrospectives and case studies (literally) involving recent claims behind which the EEOC threw its considerable weight. Some of the more interesting parts of the discussion involved companies getting in trouble when they didn’t follow through on the Uniform Guidelines’ recommendation to search for alternative predictors with lower adverse impact. I think this bit of the document usually gets ignored, since a lot of people think “Eh, that’s just in there to sound good,” or “Nobody really does that.” Well, apparently they do.
I also found this part on the use of credit scores for selection interesting:
Adam Klein brought up the problem of employers using credit
scores to make selection decisions. Employers like using credit scores
because the info is cheap and easy to obtain. However, there are
several problems with using credit scores: credit scores were developed
for use in credit decisions, not employment decisions; there are several
valid reasons why someone would have no credit score at all; and there
are no studies showing a correlation between credit scores and job
performance. Klein notes that the adverse impact involving credit
scores is 2:1. Klein suggested that the EEOC issue guidance on the use
of credit scores in making employment decisions.
It still surprises me every time someone mentions seriously using credit reports as a selection tool. I always counter by asking exactly what they think they’re measuring with that, and the retort is usually something along the lines of “Well, how can we expect them to manage our company’s assets if they can’t manage their own?” This is a hard argument to get past sometimes, because it’s like hiring a slob to be a housekeeper or a morbidly obese person to sell gym memberships. But there’s just so much measurement error in these reports, and so much adverse impact, and you really don’t need to look any further than the fact that none of the research (that I’ve seen, anyway) shows any validity for them for the kinds of jobs managers want to use them for.
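For context on that 2:1 figure, the usual yardstick is the Uniform Guidelines’ four-fifths (80%) rule: adverse impact is generally flagged when one group’s selection rate falls below 80% of the highest group’s rate. Here’s a minimal sketch of that arithmetic; all of the applicant and hire counts below are hypothetical, made up purely for illustration:

```python
# Sketch of the four-fifths (80%) rule from the Uniform Guidelines.
# All counts are hypothetical, chosen to mirror the 2:1 ratio Klein cites.

def selection_rate(hired, applicants):
    """Fraction of a group's applicants who pass the screen."""
    return hired / applicants

def adverse_impact_ratio(focal_rate, reference_rate):
    """Ratio of the focal group's selection rate to the reference group's."""
    return focal_rate / reference_rate

# Hypothetical credit-score screen that passes the majority group
# at twice the rate of the minority group.
majority_rate = selection_rate(60, 100)   # 0.60
minority_rate = selection_rate(30, 100)   # 0.30

ratio = adverse_impact_ratio(minority_rate, majority_rate)
print(ratio)          # 0.5 -- well below the 0.8 threshold
print(ratio < 0.8)    # True: evidence of adverse impact under the rule
```

A 2:1 disparity works out to a ratio of 0.5, which is far under the 0.8 threshold, so a screen with that kind of impact would need solid validity evidence behind it.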
Anyway, the summary of the EEOC meeting is worth a read, and hopefully they’ll put out some kind of white paper or summary of their own in the near future.
So, here’s a story that’s been going around in certain circles. Shipping superstar FedEx settles a discrimination lawsuit for $54.9 million. The suit was brought in part because they used a “Basic Skills Test” that looked valid on the face of things, but which (supposedly) had adverse impact and for which the company could not produce (or decided not to bother producing) any kind of validation evidence. Oops.
I think this is an interesting and possibly influential case because it involves such a recognizable company that everybody is familiar with. I think a lot of decision-makers may shrug their shoulders and turn away when you point out a lawsuit involving a small police department in Hootinvill, Illinois or some other organization that doesn’t seem relevant. But FedEx? Man, we use them all the time! I’ve shared this story with some people not normally involved with employment testing, and it seems to get them to stop and take notice.
What’s also interesting is that if you look at a sample of the test, it appears to be fairly straightforward, clean, and possessed of face validity. And those are all nice things, but apparently they couldn’t produce any evidence that the test actually resulted in better hires. I say “apparently” since the article linked to above is kind of vague, saying just that they caved in to the accusations without raising evidence to contradict them. Maybe they decided the evidence of other discriminatory acts was too overwhelming to bother.
Regardless, this case makes for a nice example of the dangers of letting subject matter experts develop their own test and run with it. Sure, they can make a test that looks good, and maybe it is valid. But without the help of I-O psychologists or similarly trained experts, you can’t prove it. And when the lawyers or judges or EEOC come a’knocking and ask you to respond to accusations of discrimination, you’d better have something more substantial than “Well, Ted thought it looked good.”
A couple of months ago I heard a story on NPR about what they called a new trend in job seeking: the video resume. To see what I’m talking about, just go to YouTube and do a search for “video resume” and you’ll get plenty. If you’re at work, try not to waste too much time at this point. The idea is that these folks are taking advantage of falling prices on basic filmmaking equipment to make movies of themselves in which they review their qualifications and ask for jobs. I think it’s fair to say that production quality varies widely, but some of them are decent.
I’m of two minds on this. On the one hand, I’m a mildly geeky guy who loves technology, especially when it’s used to solve old problems. And let’s be honest: this blog isn’t too many steps removed from the idea of a video resume. The main difference is that I’m not proactively sending it out to prospective employers. And therein lies one of the problems my other hand has with this: it potentially circumvents whatever hiring and selection system you may have in place. Most resumes go through some kind of screening and cataloging so that the employer can prepare for government audits and identify those who report having the right skills. Video resumes, however, are precisely designed to make candidates stand out from the sea of traditional resumes, and people probably aren’t mailing them to the HR department or attaching them to their electronic applications on your website. At least not primarily. You’re more likely to encounter video resumes from a hiring manager who sends you an e-mail saying “I saw this kid who put this slick thing together. She’s even got a talking dog! I want to make her a Vice President.” In other words, they’re going to be a pain for whoever is supposed to be tracking all this stuff.
And what’s more, video resumes bring up a lot of concerns from the I-O Psychologist in me. First, they introduce all kinds of opportunities for bias based on appearance. Of course, one could argue that that’s going to happen in the face-to-face interview stage anyway, but I’d counter by saying that video resumes have the potential to exacerbate these biases and bring them out much sooner in the selection process. (There are some interesting potential research questions here, by the way.) And unless you’re hiring a film director or editor, how slick and professional a person’s video looks is probably going to be irrelevant to the job, and making hiring decisions on such factors is inviting trouble. Of course, proponents of video resumes could then say that we deal with these same factors in resumes and dress. We’ve probably all seen poorly formatted resumes and inappropriate interview attire. True, but again it’s a matter of degree, and at the end of the day that’s what’s at the heart of my unease: lack of standardization. Wonderful, wonderful standardization.
The bottom line is that to make accurate and legally defensible hiring decisions you should focus only on job-related information and ignore just about everything else. Standardizing the application process to elicit the most job-related information and give everyone the same fair shake is the way to go, be it through resume screening, testing, structured interviews, or what have you. And besides, video resumes are only attractive right now because they’re novel. Imagine a world where they were more standardized and every one of the two hundred applicants for a single job sends in a seven-minute (or longer) video for you to watch and evaluate. Your staffing department would erupt into flames and shrill cries for mutiny. Nobody wants that, even if it does mean I get to surf YouTube as part of my job.
Comments are once again open for your commenting pleasure. Go crazy.
At the 2007 SIOP convention in New York last week, I did manage to drag myself into several symposia, despite the allure of forty-foot neon signs and cheap “Romex Brand watches” being sold by a shifty-looking guy at the corner of 7th and Broadway. And while not quite as visually stunning as The Lion King production going on a block away, several of them were pretty interesting.
For example, on Saturday I made sure to go to a panel discussion entitled “Validity Generalization in the Workplace.” This was of interest to me because the description promised to discuss the use of alternative test validation techniques, such as job component validity, validity transportability, and meta-analysis. In fact, I have worked directly with three of the four panelists (John Weiner from PSI, Wanda Campbell from the Edison Electric Institute, and Ryan Ross from Hogan Assessments) and used products of theirs whose use had been validated with one of these alternative approaches. And the fourth panelist was Nancy Tippins from Valtera, whose work I was also familiar with.
So there was a lot of expertise sitting up there, and it unfolded along the lines you’d kind of expect. The panelists talked about when they’d want to use these validation techniques (when you can’t do a traditional criterion-related design), what the legal risks are (nobody knows for sure yet), and what you need to take into consideration (equivalence of job analysis tools, the quality of the original studies you’re trying to generalize from). What was kind of disappointing, though, was their response to a question about how close is close enough when determining equivalence. For example, if you’re trying to transport the validity of a test from one location to another for what should be the same job, you should go through some research to determine that the jobs are actually the same in terms of what they require and involve. You might, for example, do a dual job analysis and compare the end results in terms of how similar the KSAO lists are and how closely they match in terms of frequency and importance. But how close is close enough? And if you’re looking at something like distance scores across profiles on a standardized job analysis instrument and arranging jobs into families, how do you know when a job’s profile is close enough to a family’s profile to warrant bringing it into the familial fold? What’s the RULE, man?
The panel’s answer, in a nutshell: “We don’t know. That’s for you to figure out in your professional judgment.”
That’s kind of an unsatisfying answer. In fact, I did this myself a couple of years ago. In my case I was lucky in that the KSAO lists were almost identical and the average frequency/importance ratings only differed by a few tenths of a point on a 5-point scale. So I looked around, shrugged, and felt confident in calling them equivalent. But where’s the line? As with most things, there’s probably not a line per se, just your professional, experienced judgment and your ability to back up your decisions and convince a judge or auditor that you didn’t act arbitrarily. I think the benefits of these alternative approaches are too attractive to pass up and that case law will eventually catch up and provide some more concrete standards, but in the meantime I can still see why criterion-related validation is seen as the gold standard.
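For the curious, the profile-comparison step I described can be sketched in a few lines. The KSAO importance ratings and the “close enough” reading below are entirely hypothetical; as the panel said, there is no official threshold, so the numbers just stand in for the professional judgment call:

```python
import math

# Sketch: comparing two job-analysis rating profiles by distance score.
# Ratings are hypothetical importance ratings (5-point scale) for the
# same KSAO list collected at two locations of "the same" job.

def profile_distance(profile_a, profile_b):
    """Euclidean distance between two equal-length rating profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile_a, profile_b)))

def mean_abs_difference(profile_a, profile_b):
    """Average per-KSAO rating gap between the two profiles."""
    return sum(abs(a - b) for a, b in zip(profile_a, profile_b)) / len(profile_a)

site_1 = [4.8, 4.2, 3.9, 4.5, 2.7]   # importance ratings, location 1
site_2 = [4.6, 4.3, 3.8, 4.4, 2.9]   # importance ratings, location 2

print(round(profile_distance(site_1, site_2), 3))
print(round(mean_abs_difference(site_1, site_2), 2))
```

With made-up ratings like these, the average gap is around a tenth or two of a point, which is roughly the situation I faced; whether that distance is small enough to call the jobs equivalent is exactly the judgment call the panel left to the practitioner.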
Well, that was nice. After almost a year off, I’m reviving SelectionMatters.com and posting on a lighter schedule. The reason for the hiatus was that about a year ago I started a new job in a new city. This new job required a lot more time, and to top things off I had a new baby last December. Short version: something had to give, and this site was one of the casualties (along with a couple of other hobbies and TV shows and a fair bit of sleep).
I decided to start posting again because last week at the annual SIOP convention I had about three people tell me that they either recognized my name from this site or that they used to read it and were disappointed that I had quit. So I thought it would be good to start up again. I’m going to start off with only posting once a week, on Fridays, and see where things lead from there.
Also, one of the things at the top of my to-do list is to upgrade the software so that I can install some new stuff to prevent comment spam. Until then, comments will remain off. They should be up within a week, though. Enjoy!