When right is wrong: hiring people quickly

I recently came across this story on Software By Rob, which gives tips for hiring technical people very quickly. It’s easy to get in a state of mind where you cite academic research, best practices, and the “right way” to hire people. This usually involves intensive and time-consuming job analysis, building and polishing technical interviews, creation or purchase of aptitude or ability tests, validation of those tests, training for test administrators, and four or five other things. It’s by the book.
In many situations, though, that kind of approach will lead to ruination. For example, I spent a few years working in an Internet-based startup company where we grew very quickly. Er… Then we laid a bunch of people off and grew quickly again. Laid off some more. But in those growth periods, we needed people now and many of them were in newly minted jobs where there were no current incumbents or even specific job duties. And if we didn’t get those people, it meant we were probably going to fail and the organization as a whole would die off.
According to the article on Software By Rob, hiring fast in these kinds of situations involves five things:

  • Write the Shortest Job Description Ever
  • Skim Resumes Like Crazy
  • Use the Numbers
  • Hold Phone Interviews
  • Finish In-Person

There’s a lot of reasonable advice there if you don’t have time or resources to do things the “right” way. And in fact, looking back over the way I handled those frantic hires at that Internet startup, I ended up doing a lot of the same things. I wrote terse job descriptions, I skimmed resumes looking for basic experience and qualifications, I did a lot of brief but structured phone interviews, I sorted people into “buckets” based on their quality as candidates, and I invited a few people in for face-to-face interviews just to make sure they weren’t some kind of total freaks.
It wasn’t a perfect system, and I could have probably kicked up the utility of the whole thing with a simple cognitive ability test or more rigorous screening, but it worked as well as it had to.
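If you ever wanted to put a rough number on that kind of “utility,” the textbook Brogden-Cronbach-Gleser model is the usual starting point: multiply the number of hires, their expected tenure, the test’s validity, the dollar value of a standard deviation of performance, and the average standardized test score of the people you hired, then subtract the cost of testing everyone. Here’s a minimal sketch of that arithmetic; all of the figures are invented for illustration.

    def selection_utility(n_selected, avg_tenure_years, validity,
                          sd_performance_dollars, mean_z_selected,
                          n_applicants, cost_per_applicant):
        """Rough Brogden-Cronbach-Gleser estimate of the dollar gain
        from adding a valid predictor to a selection process."""
        gain = (n_selected * avg_tenure_years * validity
                * sd_performance_dollars * mean_z_selected)
        cost = n_applicants * cost_per_applicant
        return gain - cost

    # Invented numbers: hire 10 of 100 applicants with a test of validity .30,
    # a $10,000 SD of yearly performance, 2-year average tenure, an average
    # standardized test score of 1.0 among those hired, and $25 per applicant.
    print(selection_utility(10, 2, 0.30, 10_000, 1.0, 100, 25))  # -> 57500.0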

The History of American Psychology Museum

Quick, I need a place to stay in Akron, Ohio. Why? Because of this article (use BugMeNot if you don’t have an account), which describes the grossly overlooked History of American Psychology Museum at the University of Akron. Apparently it’s a treasure trove of psychological paraphernalia used in some of the most (in)famous psychology experiments EVAR. How cool would it be to see a shock simulator used by Stanley Milgram in his experiments on obedience to authority? Or home movies of Sigmund Freud? Or the prison uniforms and billy clubs used by Philip G. Zimbardo in his prison simulation experiment? Maybe I’m a huge dork, but I think it’d be very cool.
And check this passage out about a phrenology machine that would measure your head and give you detailed feedback on your personality and what occupations you’d be suited for:

Testing was, in fact, the raison d’etre of psychology until just after World War II, he noted, when psychologists were first permitted to offer clinical care in response to the needs of returning veterans.

Even the bizarre psychograph, Dr. Baker said, was predicated on a theory that remains a bedrock of modern research, that different regions of the brain have differing functions that can be measured and described. Although psychologists and neurologists of today measure those regions using magnetic resonance imaging, 19th-century phrenologists believed that those regions could be calculated from outside.

The device is one of three remaining in the United States. With its 1,954 parts housed in a walnut case, it sits in a corner of the reading room, its crown of calipers ready to measure every nook and node of the skull.

“You’ll have to remove your glasses,” John Bean, an undergraduate in psychology who works as an assistant at the archives, said as he put on latex gloves to place the sharp, heavy calipers on a visiting reporter’s skull. In less than two minutes, it cranked out a kind of ticker tape giving a five-point rating, from “poor” to “excellent,” on 28 personality variables like benevolence, suavity, caution, conscientiousness, acquisitiveness and conjugal love. The device automatically combined the variables to predict suitability for various professions, a process that Mr. Bean modernized with a computer spreadsheet.

“Do you want to see your results?” Dr. Baker asked. “Your highest score, you’ve got 70 percent on mechanic, followed by pugilist, at 60 percent. How did you do on journalist? Forty-five percent. You have a higher score as a zeppelin attendant.”

That’s five kinds of awesome. I’d love to have one of those readouts, especially if I rocked out the top of the scale for “zeppelin attendant.” We’ve come a long way, eh?

Is preferential treatment for hurricane survivors fair?

Soon after hurricane Katrina demolished New Orleans and other parts of Louisiana, many recruiters, employment agencies, and employers stood up straight and started offering succor to displaced workers in need of new employment. It was widely seen as a downright wonderful thing to do. I’ll bet that if you asked your mom, she’d agree.
I noticed on The Employment Law Bulletin, though, that The Wall Street Journal recently published an article (found online here) about a “backlash” against employers in Texas who give preferential treatment to Hurricane Katrina survivors. The piece raises some interesting and rather difficult questions. Scribbled the Quote Monkey:

In places such as Houston, Baton Rouge, La., and San Antonio, where evacuees have arrived en masse, employers have blended hiring needs with a groundswell of compassion. Local outlets of McDonald’s Corp., Exxon Mobil Corp., PetSmart Inc. and others have visited evacuee sites to pursue Katrina victims. Flyers at one shelter last week read, “San Antonio Jobs for Katrina Evacuees,” and listed more than 60 employers with contact names and phone numbers. Each had called a local radio station vowing to offer jobs to hurricane victims in the city.

…The Houston office of WorkSource, a nonprofit organization that helps people find jobs, has received so many faxed job forms from companies offering to hire Katrina evacuees that staffers have had to change the fax machine’s toner cartridge several times a day. There have been hundreds of requests for a range of jobs including barbers, truckers and technology workers. About 60 companies have asked to hire Katrina evacuees only, said Leonard Torres, a senior business consultant with the organization. Mr. Torres said it appears that a number of the jobs are being created just for evacuees.

While this seems indisputably noble at first glance, if you think about it, it does bring up the nagging question of whether or not it’s fair. The WSJ article quotes one unemployed San Antonio native as saying, “It’s not right. I can understand they need to work, but a lot of people [who were already] in San Antonio need jobs, too.” Is it fair to give someone preferential treatment because they’re a hurricane survivor? That’s not an easy question.
One other thing that quickly occurred to me, though, was not only the question of fairness, but legality. If this preferential treatment resulted in the disproportionate rejection of people from a protected class, then it would run afoul of what’s known as “disparate impact.” And according to the law, intention, noble or ignoble, doesn’t enter the equation.
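(For anyone who hasn’t run into it, the usual rough screen for disparate impact is the “four-fifths rule” from the Uniform Guidelines: compare the selection rate of the group you’re worried about to the group with the highest selection rate, and a ratio under 80% is a red flag. Here’s a minimal sketch with invented numbers; a low ratio is evidence worth investigating, not proof of discrimination.)

    def four_fifths_check(hired_focal, applied_focal, hired_reference, applied_reference):
        """Four-fifths (80%) rule screen: compare the focal group's selection
        rate to the reference group's (the group with the highest rate)."""
        rate_focal = hired_focal / applied_focal
        rate_reference = hired_reference / applied_reference
        ratio = rate_focal / rate_reference
        return ratio, ratio < 0.8  # under 0.8 is a red flag, not proof

    # Invented numbers: 10 of 50 local applicants hired vs. 30 of 60 evacuees.
    print(four_fifths_check(10, 50, 30, 60))  # -> (0.4, True)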
This question also occurred to a representative from the Texas Workforce Commission:

…The commission checked to make sure there wasn’t a statute that bars employers from focusing their hiring on Katrina evacuees. “What we have to look for is whether or not a protected class is being discriminated against,” she said. “No one is discriminating against a protected class.” She adds that the job fairs are available to everyone, and that some nonevacuee Texas residents have attended.

I’m not sure that the spokesperson is completely right there, but I also doubt that anyone is going to be sued over this in the short term. And hopefully the whole practice will be short-lived as people get back on their feet and rebuild.
Like I said, I’m not sure I know the answer to the whole fairness question, and I think that one could make a case for the lesser of two evils here: an unemployed person in San Antonio is probably better off than a refugee from Katrina who has lost EVERYTHING. And it’s the kind of thing that probably bears consideration on a case-by-case basis. I somehow suspect that many of these jobs being offered to Katrina survivors aren’t exactly the kind where you wear a power suit and tie combo while lighting cigars with $100 bills.
Still, though, my instinct here would be to do what a lot of employers are apparently doing: reach out to hurricane survivors with targeted recruiting. Set up recruiting drives at shelters, for example, or build relationships with recruiters and job placement services doing the same. But once the application is across your desk, keep the playing field level with other job seekers. I’ll bet the end result would be almost exactly the same.

Churches can discriminate on religion and still get paid by the government

Those kooky lawmakers. This article on Yahoo! News is kind of skimpy on the details, but even what’s there is pretty intriguing. For example:

Churches and other religious groups are allowed to receive federal money to provide preschool to poor children. Now, the House says, they should be allowed to hire based on religion.

In a broad update of the Head Start program, the House voted Thursday to let preschool providers consider a person’s faith when hiring workers — and still be eligible for federal grants. The Republican-led House said the move protects the rights of religious groups, but Democrats blasted it as discriminatory.

…The Republican plan would, for example, allow a Catholic church that provides Head Start services to employ only Catholic child-care workers, and to reject equally qualified workers of other religions.

Now, I know there are specific exceptions for churches and other religious organizations when it comes to discrimination on the basis of religion. I directed the Quote Monkey at the actual text of the Civil Rights Act of 1964 (shock!), and it came back with this:

This subchapter shall not apply to an employer with respect to the employment of aliens outside any State, or to a religious corporation, association, educational institution, or society with respect to the employment of individuals of a particular religion to perform work connected with the carrying on by such corporation, association, educational institution, or society of its activities.

Assuming my legalese translator is working, that amounts to “Churches can discriminate on the basis of religion when hiring for jobs directly related to carrying out their religious work.” Makes sense, right? I mean, the Catholic Church should require that its priests be Catholic. But it doesn’t seem to say that non-religious activities like publicly funded preschooling would be covered.
A bit further in the document, though, the Quote Monkey found this:

This title shall not apply to an employer with respect to the employment of aliens outside any State, or to a religious corporation, association, or society with respect to the employment of individuals of a particular religion to perform work connected with the carrying on by such corporation, association, or society of its religious activities or to an educational institution with respect to the employment of individuals to perform work connected with the educational activities of such institution.

That paragraph is even harder to parse, but it seems to say that the whole law doesn’t apply to schools owned by churches or other religious organizations. That’s obviously farther-reaching than just positions related to a church’s religious mission.
The main change discussed in the Yahoo! News article, though, and the main point of contention, is that religious organizations can now practice religious discrimination (under the conditions described by the CRA of 1964) AND receive federal money. If someone more familiar with the laws in question wants to comment and correct me or expand on this, I’d love to hear it.
All this aside, though, the Human Resources professional in me has to wonder why any church would engage in religious discrimination when hiring preschool teachers or administrators. I can think of at least two reasons not to:

  1. It drastically cuts down on your applicant pool, making it much harder to fill the position at all, much less get the best people.
  2. Assuming that part of a church’s mission is proselytizing, wouldn’t it want to bring in people who aren’t already sitting in the choir every Sunday?

Intelligence, IQ, and g

Bell Curve IQ Scores

I love the Internet. Sure, it gives us spam and smut, but it also gives us websites dedicated to dogs in bee costumes and Wikipedia. The latter is like the Encyclopedia Britannica, except it’s online, free, and written entirely by Internet nerds. Every now and again I like to look up topics close to my own heart and see what they’ve written. Today, let’s take a look at “intelligence.”
For sure, Wikipedia isn’t going to replace a good textbook in anyone’s classroom, but it is interesting to see what’s there. And in this case, it’s actually a constellation of three related entries:

  • Intelligence
  • Intelligence Quotient
  • General Intelligence Factor

None of those entries is particularly long or technical, but if you were to read just one, I’d go with the one for Intelligence Quotient. As you might expect from a document created by a community with no strong editorial oversight, the entries are kind of jumbled and don’t flow well in many places, but it’s good to see that they do include intelligence testing for employment purposes. The section on “Practical Validity” discusses the relationship between g and job performance, and they even cite Hunter and Hunter’s seminal 1984 meta-analysis.
There are, unfortunately, a couple of things that are flat out wrong, the most egregious of which is this bit:

“Legal barriers, most prominently the 1971 United States Supreme Court decision Griggs v. Duke Power Co., have prevented American employers from directly using cognitive ability tests to select employees, despite the tests’ high validity. Using cognitive ability scores in selection adversely affects some minority groups, because different groups have different mean scores on tests of cognitive ability.”

(While it’s true that cognitive ability tests are all but guaranteed to create adverse impact, employers can use them if they document the job-relatedness and validity of the tests, a feat made easier by recent meta-analytic data and better tests. Griggs v. Duke didn’t go to Griggs because Duke Power used intelligence tests per se; it went to Griggs because the particular tests and other requirements that Duke used hadn’t been shown to be job-related. The case is most often cited for sparking the whole notion of “adverse impact.”)
The entry on General Intelligence Factor, though, contained a great passage that explains the relationship between g and wider intelligence testing:

The relationship of g to intelligence tests may be understood by an example. Irregular objects, such as the human body, are said to vary in “size”. Yet no single measurement of a human body is obviously preferred to measure its “size”. Instead, many and various measurements, such as those taken by a tailor, may be made. Each of these measurements will be positively correlated, and if one were to “add up” or combine all of the measurements, the aggregate would give a better description of an individual’s size than any single measurement. The method of factor analysis allows this. The process is intuitively similar to taking the average of a sample of measurements of a single variable, but instead “size” is a summary measure of a sample of variables. g is like size, in that it is abstracted from various measures (of cognitive ability). Of course, variation in “size” does not fully account for all variation in the measurements of a human body. Factor analysis techniques are not limited to producing single factors, and an analysis of human bodies might produce (for example) two major factors, such as height and girth. However, the scores of tests of cognitive ability do in fact produce a primary dominant factor, g.
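
If you want to see that analogy in numbers, here’s a quick toy simulation of my own (not from the Wikipedia entry): fake six test scores that all lean on a single underlying ability, and the correlation matrix coughs up one big dominant factor.

    import numpy as np

    # Simulate six "tests" that all draw on one latent ability (g) plus
    # test-specific noise, then look at the correlation matrix's eigenvalues.
    rng = np.random.default_rng(42)
    n_people = 2000
    g = rng.normal(size=n_people)                    # latent general ability
    loadings = [0.80, 0.70, 0.75, 0.60, 0.65, 0.70]  # how g-loaded each test is
    scores = np.column_stack([
        w * g + np.sqrt(1 - w**2) * rng.normal(size=n_people)
        for w in loadings
    ])

    # The first eigenvalue shows how much of the total variance a single
    # common factor can account for; here it's well over half.
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
    print(np.round(eigenvalues / eigenvalues.sum(), 2))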

Like I said, given their shortcomings, these entries aren’t going to be cited in any refereed journal, but they are good primers and make for handy links if you know someone who wants a jumping-off point for the subject.

National American Business Women’s Day

Apparently today is National American Business Women’s Day, at least in many places. So I’d like to extend my congratulations to at least four fifths (80%) of working women everywhere.
I also encourage everyone out there to send obnoxious electronic greeting cards to your favorite working woman. Because, you know, she needs more spam. It’s not like she’s trying to work or anything.

September 2005 Issue of JAP

Journal of Applied Psychology

The latest issue of the Journal of Applied Psychology arrived in my mailbox recently. There are a few articles and research reports of interest to those doing good deeds (or bad, I guess) in the employment testing field.
The first is a meta-analysis of one of my own favorite subjects: Applicant Reactions. It’s entitled “Applicant Attraction to Organizations and Job Choice: A Meta-Analytic Review of the Correlates of Recruiting Outcomes.” Here’s the abstract:

Attracting high-performing applicants is a critical component of personnel selection and overall organizational success. In this study, the authors meta-analyzed 667 coefficients from 71 studies examining relationships between various predictors and job-organization attraction, job pursuit intentions, acceptance intentions, and job choice. The moderating effects of applicant gender, race, and applicant versus nonapplicant status were also examined. Results showed that applicant attraction outcomes were predicted by job-organization characteristics, recruiter behaviors, perceptions of the recruiting process, perceived fit, and hiring expectancies, but not recruiter demographics or perceived alternatives. Path analysis showed that applicant attitudes and intentions mediated the predictor-job choice relationships.

I was kind of hoping that they had used the data I collected in my dissertation (which was on how test administration procedures can affect applicant reactions), but it appears they didn’t.
Next is “Proactive Personality and Job Performance: A Social Capital Perspective” with this abstract:

This study of 126 employee-supervisor dyads examined a mediated model of the relationship between proactive personality and job performance. The model, informed by the social capital perspective, suggests that proactive employees reap performance benefits by means of developing social networks that provide the resources and latitude to pursue high-level initiatives. Structural equation modeling suggested that the relationship between proactive personality and job performance is mediated by network building and initiative taking on the part of the employee.

So in other words, “Go-Getters who schmooze a lot are successful.” Hardly shocking, but it’s nice to see that someone has finally quantified and scientifically studied that. It still leaves the question, though: is a trait like “proactive personality” something you want to test for?
Finally we have “Risk-Taking Orientation and Injury Among Youth Workers: Examining the Social Influence of Supervisors, Coworkers, and Parents” with this abstract:

Despite youths’ susceptibility to social influence, little research has examined the extent to which social factors impact youths’ risk-taking orientation and injury at work. Drawing on social influence and behavioral intention theories, the study hypothesized that perceived supervisory influence, coworker risk taking, and parental risk taking serve as key exogenous variables predicting risk-taking orientation at work. These influences were further hypothesized to be mediated through global risk taking, which in turn was posited to predict risk-taking orientation at work. Longitudinal results from 2,542 adolescents working across a wide spectrum of jobs supported hypothesized linkages, although there was some evidence of partially mediated mechanisms. Coworker risk taking was a relatively strong predictor of youths’ risk-taking orientation at work.

This kind of thing is interesting to me as I’ve been looking into safety and risk-taking in general, and I think it’s a criterion and outcome that is often overlooked in favor of that good old standard, job performance. Many companies would be just as happy to reduce safety violations as they would be to raise performance; indeed, they may consider the two inextricably tied. Finding out that people emulate the risk-taking behavior of others like parents or coworkers makes sense, and it gives us additional levers to use against the problem of work injuries.

Bad interview questions and good training

Susan Heathfield’s columns on About Human Resources are quickly becoming one of my favorite HR-related reads. A recent column covers how to be careful about interview questions.
The article’s first page goes over that same old boilerplate stuff about illegal questions covering race, sex, religion, disabilities, age, family, et cetera. Well, the questions aren’t illegal per se, but it IS illegal to base any hiring or promotion decisions on that litany of landmines. It’s best to steer clear of them entirely, since none of them are likely to be job-related, and just asking them could give someone ammunition to load into a big gun that has “Lawsuit” written down its barrel.
This is, unfortunately, the point where I think a lot of so-called “Interview Training” stops dead in its tracks if HR is more concerned about compliance than actually improving the workforce. I’ve known people who say they’re trained interviewers, but all they’ve really received is a list of naughty questions. People who have sat in the crosshairs of a really good and experienced interviewer, though, will tell you that there’s a world of difference. Good interviewers know how to probe, dig, and elicit information while sticking to the relevant questions. When you’re sitting across from one of these people, you don’t have a chance of embellishing your role in that big project or claiming that being the treasurer for your fraternity counts as “extensive experience managing budgets.” They also know how to capture information and use structured interview guides so that they can be used for their intended purpose: comparing numerous candidates against a set of standardized criteria.
But even the best interviewer can’t do much if all she has is a list of questions she can’t ask. Fortunately, the second page of the About HR piece gives some good advice about how to create effective interview questions. There’s not much meat on the bones, but the author does link to some more detailed pieces in the archives about how to create behavioral questions based on the specific requirements of the job. It’s not the end-all-be-all of structured interview building, but it’s a good primer.
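Just to make the “structured guide” idea concrete, here’s a bare-bones sketch of what I mean; the competencies, questions, and ratings are all invented for illustration.

    # A stripped-down structured interview guide: each competency gets a
    # behavioral question and a 1-5 rating, and every candidate is scored
    # against the same criteria so the totals are actually comparable.
    GUIDE = {
        "planning": "Tell me about a time you had to juggle several deadlines.",
        "coaching": "Describe a time you had to manage an employee with performance issues.",
        "customer_focus": "Walk me through a time you turned around an unhappy customer.",
    }

    def total_score(ratings):
        """Sum the 1-5 ratings across competencies; unrated competencies count as 0."""
        return sum(ratings.get(competency, 0) for competency in GUIDE)

    # Invented ratings for two candidates interviewed with the same guide.
    candidates = {
        "Candidate A": {"planning": 4, "coaching": 3, "customer_focus": 5},
        "Candidate B": {"planning": 5, "coaching": 2, "customer_focus": 3},
    }
    for name, ratings in candidates.items():
        print(name, total_score(ratings))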
As an aside, the most memorable interview I ever conducted was when I was working as a corporate recruiter for Pizza Hut in St. Louis, recruiting and screening people for restaurant manager positions. One applicant’s experience was limited to seven years as the Assistant Manager at a Dunkin’ Donuts. He had a thick Eastern European accent (or so it seemed to me; having received my list of naughty questions, I didn’t ask him where he was from) and his answer to every question was “I make the doughnuts.” And I mean EVERY question:

“Can you tell me about your roles and responsibilities at Dunkin’ Donuts?”

“I make the doughnuts.”
“Okay, what else?”
“I make the doughnuts.”
“…All right. Can you tell me about a time when you had to manage an employee with performance issues?”

“I make the doughnuts.”

You get the idea. I don’t believe we hired him, but I’m sure he made darn good doughnuts.

Online resources for cheating on interviews and employment tests

George’s Employment Blawg put up a story about interviewat.com, a website where you can research interview questions used by various companies. Apparently people who go through interviews at those companies go to interviewat.com and report what questions they were asked, and I think some of the reports even include (from memory, presumably) actual employment test questions.
One might think that interview questions are a pretty gray or even white area where the “no harm, no foul” rule would apply, because when most people think of interview questions they think of banal and predictable (not to mention “worthless”) stuff like “Tell me about your greatest strength” or “Where do you see yourself in 5 years?” A cursory glance at interviewat.com, however, reveals a wealth of technical questions and word problems designed to measure math or logic skills. There are objectively “right” answers to these questions, and receiving them ahead of time is pretty clearly cheating the system. Fortunately this particular website is pretty darn broken, to the point of not even rendering correctly.
This kind of thing also makes computerized testing more attractive, because (a) it’s easier to push out new test content when old content gets compromised, and (b) some systems can avoid the problem in the first place by introducing random elements or choosing from a big bank of equivalent items.
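To make point (b) a little more concrete, here’s a rough sketch of the item-banking idea; the bank, the construct tags, and the form length are all made up, and a real system would also balance item difficulty across forms.

    import random

    # Hypothetical item bank: items are tagged by the construct they measure
    # so randomly assembled forms stay roughly equivalent in content.
    ITEM_BANK = {
        "math": [f"math_item_{i}" for i in range(1, 41)],
        "logic": [f"logic_item_{i}" for i in range(1, 41)],
        "verbal": [f"verbal_item_{i}" for i in range(1, 41)],
    }

    def assemble_form(candidate_id, items_per_construct=5):
        """Draw a per-candidate random form; leaking one form exposes only
        a small slice of the bank."""
        rng = random.Random(candidate_id)  # seeded so the form is reproducible
        return {
            construct: rng.sample(items, items_per_construct)
            for construct, items in ITEM_BANK.items()
        }

    print(assemble_form("applicant-00123"))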
Sometimes, though, it’s not that straightforward. Earlier this year at the SIOP convention I attended a session on test security. Two speakers from ePredix flabbergasted the audience (including me) by showing screenshots of sites that sold answers to various pre-employment tests. But they then topped all that by putting up a screenshot of a site that will sell you a kit designed to help you pass a urine test, including a bottle of clean urine! A simple Google search for “clean urine” brings up a site that has a variety of products for sale, including one with this description (certain words blanked out so search engines won’t index me under them):

This product is the perfect solution for anyone seeking an ultra realistic look as well as experience. Our improved Whizzinator and Urinator design consists of a 5 inch fake ***** with detail, look and feel that are so realistic you won’t believe your eyes. …By implementing a hidden internal check-valve, the act of urination is so realistic that even a direct observer will not be able to detect that it is in-fact simulated.

Like I said, flabbergasted. This doesn’t seem like a very gray area at all.

New IPMA-HR newsletter focuses on technology and recruitment

Looking for something to read that doesn’t include the words “Kojo needs a new kidney?” The September 2005 newsletter for the International Public Management Association for Human Resources (IPMA-HR to its friends) is out and available online. There are a few articles that touch on selection and assessment.
First, From Then to Now – How Technology Has Changed Human Resources and What the Future Holds touches on technology and recruitment/selection. This part resonated with me:

“But a more interesting example is modular testing,” [Jenna Berg, the CEO of some web-based recruitment/selection system] said. That is, to have an applicant apply for several jobs, each of which may share some testing requirements, and for that applicant to then schedule themselves for all the tests to be taken in one session and to only take tests or subtests once.

Long ago, my uncanny ability to see the blindingly obvious told me that this kind of thing would eventually happen given the rapid rise of information technology and the Web. Scores on some pre-employment tests have the potential to become as common a requirement as a professional license or a driver’s license. It’d be keen if your scores on various tests were maintained, in a trustworthy fashion, by job placement services, so that if two separate jobs require a cognitive ability test, you only have to take it once. I think the main problem now is standardization: there are a lot of tests out there that measure the same thing, and a lot of tests that measure things that are similar, but not quite the same thing (cf. personality testing).
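Here’s roughly how I picture the bookkeeping for that working; this is purely a hypothetical sketch, and the jobs, test names, and records are all invented.

    # Hypothetical modular testing: figure out which assessments an applicant
    # still needs when applying to several jobs with overlapping requirements.
    JOB_REQUIREMENTS = {
        "police officer": {"cognitive_ability", "situational_judgment", "physical_ability"},
        "dispatcher": {"cognitive_ability", "situational_judgment", "typing"},
        "records clerk": {"cognitive_ability", "typing"},
    }

    def tests_still_needed(applied_jobs, completed_tests):
        """Union of the tests required across jobs, minus anything already on file."""
        required = set().union(*(JOB_REQUIREMENTS[job] for job in applied_jobs))
        return required - completed_tests

    # An applicant applying to all three jobs who already has a cognitive ability
    # score on file only sits for the three remaining assessments.
    print(tests_still_needed(
        ["police officer", "dispatcher", "records clerk"],
        completed_tests={"cognitive_ability"},
    ))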
Another article of interest in the newsletter, “Recruitment Technologies and Services for Government Today,” touches on the idea of using technology to administer assessments as part of the screening process. So does the article after that one, “Fighting the War for Talent with Technology: The Federal Experience.” Both articles describe how the line between “application” and “employment test” blurs, especially with more innocuous measurements like biodata.
I’ve been on the receiving end of the resume flood that comes with posting a job on Monster.com or HotJobs, so nobody has to sell me on the idea of doing pre-screens (or “pre-qualifications” as the services usually spin them). There’s a lot happening in this area, but I have to wonder how much of it is being driven by people who know all the ins and outs of not only tests and measurements, but employment law as well.