There’s an old adage along the lines of “The more you know, the more you know what you don’t know.” I think it might have originally come from a Peanuts strip where Charlie Brown was going through his Nietzsche phase and staring into the abyss. Be that as it may, this concept came to mind recently when I was helping my employer go through a changeover to a new version of our recruiting website. I-O Psychologists and our professional neighbors know an awful lot about recruitment, but as I tried to help define how our jobs website would work, it occurred to me that we may not know the answer to one of the most important questions: how the heck do people use job boards?
I’m thinking about basic stuff here. How do people search for jobs? By keyword, by location, by job title, by salary range, by educational requirements? What kinds of factors make people more likely to come back to a website and look for newly posted jobs, or to sign up for e-mail or text alerts? Do they search or browse? How many clicks in the application process before someone decides it’s not worth it? What are the effects of different kinds of information in job postings and the presentation thereof? What are the effects of noting (or not noting) selection systems like drug screens, physicals, pre-employment tests, or background checks on applicant reactions or perceptions of the company?
Companies who roll out new internet job boards have a LOT of questions about these basics, and the answers dictate how things are going to be configured and the quality of experience job-seekers will have when they come looking for new opportunities. If you do it better than your competition, you’re probably going to have a real advantage in the marketplace, just like any other superior recruiting activity. On the other hand, if you do things haphazardly, only the really determined (or desperate) will make it through the gauntlet.
As I said, some of these questions could probably be answered by synthesizing information from the recruiting literature and company culture literature. We know, for example, something about how people react to drug screens or to diversity statements in job postings. That’s research that can be put to use. But I think there are big chunks of the solution missing, in that what we don’t know are the nuts and bolts of how people use websites like these. What do people like, dislike, want, never use, et cetera? There is no shortage of experts on web design and web usability – indeed, it’s grown into an entire industry. And while I’m sure someone could offer to tell you what color palette to use, what size to make your font, and where on the page to put your logo, I’m not sure anyone has sat down to tell you the best way to get people to search for jobs that match their qualifications or how long an online application can be before casual job seekers wander off.
Some collaboration is needed here. Of course, maybe despite the considerable time I spent with Google looking into this issue, there is a body of research out there and I just don’t know about it. If you know that to be the case, please let me know!
I’ve been thinking a bit about retesting policies lately. You know, if someone takes your employment test, do you let them take it again? When? How many times? Do you poke them with a stick first?
Based on what I’ve seen and heard from talking to other colleagues, the only thing that people seem to agree on is that they’re needed. After that, recommendations get either vague or militantly specific. There do seem to be a few things that most of the experts agree you need to keep in mind.
First, how long do you make people wait before retesting? I call this “the cooldown timer” but that’s just the World of Warcraft geek in me. The main concerns here are drains on company resources (in terms of how expensive it is to give a test) and a practice effect for test-takers. If your test is a hands-on work sample that takes 6 hours to complete and can only be administered one-on-one, then you may want to keep people from retesting as often as you might if you’re talking about a 40-minute, paper-based test that can be given to dozens of people at a time.
The practice effect is a thornier problem. If a person is allowed to take the same test over and over again, her score may have too much to do with practice and not enough to do with the validity of the test. Problem. This may be especially true of timed tests, or with things like tests of reading comprehension where the test taker would benefit from repeated exposure to the material. The nature of the test will have to inform your decision, but generally you can combat this by either having alternate forms of your test (expensive!) and/or having them wait a month or more between attempts so that they have a chance to forget (cheap!).
On the other hand, sometimes practice isn’t a bad thing. Some skills (say, data entry or physical abilities) may be expected to change with practice, and if they improve that’s a GOOD thing and you may not want to discourage people. Again, the nature of the test and what constructs it measures should inform your decision.
On the third hand, there are arguably some constructs that are highly unlikely to change over time. Personality is stable by definition. General mental ability doesn’t change much in adults. In these cases allowing retesting may have more to do with controlling perceptions of fairness than with validity.
The second thing you want to address is which test results are a person’s “official” ones. If you just work with a pass/fail result where a person has to pass a certain cut score in order to be put in an applicant pool, this is easy. Once a person passes the test, no retesting is needed or allowed. But if you use top-down selection or banding, things get trickier. Candidates may want to retest to move to the top of the pile and better their chances of getting the job. This is going to be particularly true if you always let their highest score be their official one.
My suggestion? Always make the most recent score the one that’s used for any selection decision. People may try to improve their score through retesting, but if they backslide it’s just part of the risk inherent to the process. Life is like a game of Chutes and Ladders that way.
Finally, you need to consider how long test results are good for. In other words, do they curdle like milk and expire if given long enough? Again, this is something where the nature of the test is going to have to be your guide. In general, aptitude, personality, and general mental ability tests aren’t going to change, but tests of physical strength, skill, or even job knowledge are susceptible to the ravages of time and may call for quicker expiration dates.
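If it helps to see the moving parts together, here’s a toy sketch in Python of a retesting policy combining a cooldown timer, the most-recent-score rule, and an expiration date. The specific windows (30 days, two years) are made-up placeholders, not recommendations – the nature of your test should set those numbers.

```python
from datetime import date, timedelta

# Placeholder policy windows -- tune these to the test, don't copy them.
COOLDOWN = timedelta(days=30)      # minimum wait between attempts
EXPIRATION = timedelta(days=730)   # scores go stale after ~2 years

def may_retest(attempt_dates, today):
    """A candidate may retest once the cooldown since their last attempt has passed."""
    return not attempt_dates or today - max(attempt_dates) >= COOLDOWN

def official_score(attempts, today):
    """Most recent non-expired score wins (the 'Chutes and Ladders' rule)."""
    live = [(d, s) for d, s in attempts if today - d < EXPIRATION]
    if not live:
        return None  # everything on file has expired
    return max(live)[1]  # max by date -> the latest attempt's score

attempts = [(date(2024, 1, 10), 85), (date(2024, 3, 1), 78)]
print(official_score(attempts, date(2024, 6, 1)))   # 78: newer but lower score counts
print(may_retest([d for d, _ in attempts], date(2024, 3, 15)))  # False: still cooling down
```

Note that under this rule the candidate’s official score dropped from 85 to 78 after retesting, which is exactly the backsliding risk described above.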
Thumbing through a recent edition of the Journal of Applied Psychology I came across an article dealing with a niche of the recruiting scene that a lot of us don’t often think about, but which probably has its own set of rules: recruiting volunteers. It’s no surprise that when you drop that whole paycheck thing from the equation that the rules change and other factors come into play when motivating people to just give you their time and effort.
Perhaps the most surprising and possibly counter-intuitive finding of this research was that telling potential volunteers how totally mind-blowingly awesome you and your charity are may work against you. Specifically, volunteers who were told that the organization was doing a super job at completing whatever goals it had were less likely to volunteer for them, possibly because they felt their services might be put to better use elsewhere. So, don’t oversell yourself.
So what does make a big difference? For one, support. Potential volunteers were interested in signing up to the extent that they thought that the organization would provide them with the support they needed to do the job. I imagine this translates to an “Am I going to be wasting my time here?” sentiment. If you’re giving up your weekends or evenings, you want to feel like someone is benefitting from it instead of just sitting and saying “Tsk, tsk. Someone should really DO something.”
Interesting stuff. The full title is “Volunteer Recruitment: The Role of Organizational Support and Anticipated Respect in Non-Volunteers’ Attraction to Charitable Volunteer Organizations” by Edwin J. Boezeman and Naomi Ellemers. It’s in volume 93 of the Journal of Applied Psychology.
Here’s an interesting little news brief about how the “name letter effect” can supposedly influence our choice of employer. In short, we humans seem to give preference to things that begin with the letter of our first name. And now somebody has studied this in relation to choosing an employer.
In a new study published in Psychological Science, a journal of the Association for Psychological Science, the psychologists found that there is indeed a name-letter effect between employee names and the company they work for. There were 12% more matches than was expected based on the probability estimate. The researchers noted that “hence, for about one in nine people whose initials matched their company’s initial, choice of employer seems to have been influenced by the fact that the letters matched.”
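If you’re wondering how “12% more matches” turns into “about one in nine,” the arithmetic is just the excess matches divided by the total matches observed:

```python
# The study reports 12% more name-letter matches than chance alone would predict.
# Of the people whose initials DO match their employer's, the share attributable
# to the effect is excess / observed = 0.12 / 1.12 -- roughly one in nine.
expected = 1.0                 # normalize chance-level matches to 1
observed = 1.12 * expected     # 12% more matches than expected
attributable = (observed - expected) / observed
print(round(attributable, 3))  # 0.107, i.e. roughly 1/9
```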
I KNEW there must have been a reason I compulsively engage in resume blasts to J.C. Penney, J.P. Morgan, and Jethro’s House of Chicken & Waffles.
Here’s another one to file in the “Why Aren’t More People Doing This?” cabinet. One of the things that Internet capitalists have figured out is that people want to use the Web to meet people. You’ve got your fan sites and social networking sites like LinkedIn and Facebook, but I’m thinking here more along the lines of dating sites like match.com or eharmony.com, which facilitate your meeting potential partners for everything from a long-term romance to, well, you know… I’ve never had occasion to use these kinds of dating sites, but it’s not hard to find stories – even ones told first-hand – about people who have experienced great success with them. And I have used other websites of a slightly different bent, like meetup.com, to find groups of people interested in getting together to participate in our shared hobbies like photography and gaming.
Bells and whistles aside, at their core what these sites do is ask you about what you like and what you’re interested in, then they show you people who have matching or complementary interests who you might like to meet, then they facilitate your getting together. This raises the question of why academic researchers and practitioners aren’t doing this to seek out collaboration opportunities, especially those working in the area of Industrial-Organizational psychology.
This is actually an idea that a guy by the name of Alan Walker mentioned in an issue of Journal of Occupational and Organizational Psychology a while back, and it’s been rattling around in the back of my head, given how frequently I’m called upon to think of the scientist/practitioner model. Researchers (including graduate students, especially graduate students) need real-world data to test their theories and get their publications. There’s only so much you can accomplish by offering college sophomores extra credit to participate in yet another lab study. Practitioners have the data, or at least access to it. Practitioners also have problems that the researchers can help with. How do I reduce turnover in my call center? How do I best select people for my line crews? What sort of executive education curriculum would work best for my industry? Would adding biodata questions to my online applications help me select better candidates?
Yet researchers and practitioners are so often like ships passing in the night. They each WANT to hook up and play a few rounds of “show me your correlation coefficient” if you know what I mean, but it’s a big world and unless you really know how to network you’re just whistling in the dark. So wouldn’t it be great if there were a website to play matchmaker? Say you were a researcher with a list of interests and you could go onto a site and see a list of decision-makers in organizations that have problems that line up with those interests? Or even just one that would be willing to let you include a few experimental items for that scale you’re working on in exchange for measuring some other stuff while you’re at it? Or what if you were a graduate student in need of data for your dissertation on a certain job taxonomy and you’d be willing to conduct job analysis as long as you could use the data for your own research?
Or heck, I’m a practitioner who would love to do more research but just doesn’t have the time and hasn’t kept on the bleeding edge of research like people whose job it is to do just that. But I’ve got access to some data, some applicants, some statistics and wouldn’t mind working with someone to get a publication or presentation out of that if the circumstances were right. Or maybe two researchers working at different institutions want to get together to collaborate.
Granted, there are practical problems (reliability and timeliness and ownership of data come immediately to my mind), but there could be a lot of missed opportunities here as well. This is a niche that organizations like SIOP, SHRM, or the Academy of Management could really do us a service to fill.
Okay, if there are any grad students out there looking for a thesis/dissertation idea, I’m going to give you a freebie here. In the recent issue of SIOP’s new journal, Industrial and Organizational Psychology: Perspectives on Science and Practice, Scott Highhouse has a nifty little article entitled “Stubborn Reliance on Intuition and Subjectivity in Employee Selection.” This article could just as easily have been titled “What? No! Why Are the Hiring Managers Doing That? Make Them Stop!” because it basically looks at what he sees as two of the root causes for organizational decision-makers to reject or circumvent scientifically derived selection systems like employment testing.
First is the belief that it should be possible to explain 100% (or close to it) of the variance in human behavior within an organizational context. Someone holding this belief may scoff at your puny validities — 11% of variance explained? Pshaw! Humans are just squishy machines, right? We should be able to predict their performance perfectly. Your expectancy tables and realistic discussions of false positives are powerless in the face of this belief.
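For the record, that 11% figure isn’t an insult to testing; it’s simply what you get by squaring a perfectly respectable validity coefficient of about .33 (the .33 here is just an illustrative value):

```python
# A selection test with validity coefficient r explains r**2 of the variance
# in job performance. Even a strong, well-validated test with r around .33
# accounts for only about 11% -- the "puny" figure the true believer scoffs at.
r = 0.33
variance_explained = r ** 2
print(round(variance_explained, 2))  # 0.11
```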
The second common reason for objecting to selection systems is the belief that experience makes people better at figuring someone out. This comes through intuition, hunches, reading between the lines, and other nebulous decision-making. Your test results may not mean much if the interviewers like the cut of the candidate’s jib. Just the fact that the guy actually brought in a jib to show them how he had cut it won them over.
Anyway, Highhouse’s article discusses the origins of these troublesome beliefs, and several of the follow-up articles in the same issue discuss how to combat them. This, though, made me realize that there’s a whole nascent line of research that’s just waiting to be expanded: stakeholder reactions to selection systems.
Think about it. There’s a great and thriving body of research on applicant reactions to testing, drug screens, and other selection systems. I should know –it was the topic of my Ph.D. dissertation. We know how to study this kind of thing, so why hasn’t anyone turned their attention to building and testing theories about the reactions of other stakeholders, like hiring managers and other decision makers?
We could do it. Heck, you could use the applicant reactions literature as a template to get started. Do hiring managers dislike aptitude tests because of their lack of face validity or because they rob them of control over the decision making process? What kinds of biases and kinks of the human mind come into play when trying to understand probability and utility in a selection testing context? Are hiring managers more likely to support testing if they get to interview candidates before or after testing? If we knew more about what kinds of test characteristics drive what kinds of reactions among internal stakeholders, we testing professionals would be better equipped to address, assuage, and prevent those concerns without sacrificing the validity and utility of our tools.
So, there you go. Somebody get on that. If I can find time, I certainly will.
I got a piece of advertising-slash-content in my inbox the other day that actually turned out to be noteworthy. And so I shall make note of it. A company called Peopleclick (a name we can only assume they settled on after rejecting “Peopledrag” and “Peoplerightclick”) put together a white paper entitled “Questions the Government is Asking about your Employment Tests: Do you have the Answers?” You can get it by performing clicking motions here. They’ll ask you to input contact information, but if you’re so disposed you don’t have to make it accurate.
This isn’t going to score you any continuing education credits, but the white paper actually does make for a decent primer on adverse impact, validity, and federal requirements on documentation of all the above. It’s something that you might give to hiring managers who want a little more information about the regulations and legal landmines around testing. Or as a take-away from a meeting on the same kind of topic. I often have meetings with clients who say “I want to do testing!” and while I’m always (well, almost always) happy to hear this sentiment, sometimes a little education and scene-setting is necessary. This paper tells them just enough to let them know that they need the help of an expert in things like this.
My only slight complaint is that there is relatively little consideration given to more cutting-edge validation techniques, such as job component validity, validity generalization, and validity transportation. This is probably because not only do those techniques quickly bog down in jargon, statistics, and other detailed considerations, but also documents like the Uniform Guidelines and even SIOP’s Principles are a bit out of date in those areas, not to mention case law. Still, that’s a whole different debate.
100 Things You Need to Know: Best People Practices for Managers & HR and 50 More Things You Need to Know: The Science Behind Best People Practices for Managers & HR Professionals (whew!), are curious and different from most books that I’ve seen on similar topics. As you might guess from the titles, they contain 150 chapters between them, covering subtopics like selection, Human Resources law, leadership, HR metrics, corporate culture, training, recruiting, HR technology systems, compensation, benefits, motivation, organizational development, job design, teams, performance management, surveys, and more.
Each of these 150 chapters is dedicated to a single “fact,” which is framed as a multiple-choice question on the opening page. Do applicants have preferences among various selection techniques? Is there still a bias against African Americans in the workplace? Do people differ in how they learn from experience? How many points should your survey question response scales have on them? How skilled are managers, typically, at being good coaches?
You’re supposed to try and answer the question without peeking, and there are even places to keep track of your answers so that you can get “scores” for the books that reflect your knowledge of these 100 and 50 things. (Me, I always just peeked.)
After the opening question in each chapter, the correct answer is given, along with a 1-5 ranking of how solid the current state of the research is on this answer, from suggestive to absolutely sure. Then there’s a discussion of the factoid, then citations of research that back up the claim, then a discussion of what it means to HR practitioners, then finally a bibliography for further research. This all happens in the space of 3-5 pages each, so it’s nice and easy to digest. I would typically read a chapter or two over lunch at work or when I needed to take a little break but still wanted to feel like I was doing something work related.
What I like about these books is that they are very research oriented, with each of the 150 assertions backed up by scientific research, usually taken from refereed journals in various branches of psychology and management. It’s not, in short, armchair punditry or bland platitudes. And while I found myself disagreeing with their reading of the current literature on some topics – such as the importance of emotional intelligence for job performance or the nature of employee engagement as a construct distinct from others – they were mostly spot on from what I could tell.
My only substantial complaint about the books is that I wish the authors had organized all the similar chapters together. I would have liked, for example, to have read through a chunk of chapters dealing with leadership development all at once, rather than having those same chapters sprinkled randomly throughout the book. Still, with the use of a good index and some skimming, these are going to make pretty good reference books, especially with the bibliographies in each section serving as jumping off points for more in-depth reading.
Do you like these reviews? Check out my profile on Goodreads.com.
Say you’ve got a nice little Master’s thesis on the importance of handshakes in job interview evaluations and you want to “dress it up” for submission to a refereed journal. Step 1: replace all instances of “handshake” with the phrase “tactile nonverbal communication.”
One of the lines on a typical “Is your website Web2.0?” Cosmo Quiz seems to be “Does it let you rate stuff?” Because of this, there is no shortage of things to slap 1-5 stars on. This includes things like books, movies, amateur videos, photographs, and games, but it also includes people like teachers, politicians, online vendors, businesses, and plumbers. I was not therefore too surprised to come across this article on Businesspundet.com dealing with websites that let you review companies as employers. Examples cited include jobvent.com, vault.com, and glassdoor.com though I’m sure there are lots more.
This seems to be the natural flipside of employERS searching the web for dirt on prospective employEES (which I’ve written about, too), but honestly it strikes me as just as bad an idea. There are many problems with these kinds of sites, not the least of which is about as extreme a case of selection bias as you could imagine. I doubt many people, corporate shills aside, go on these sites to sing the praises of their medical insurance or flexible work hours or whatever else it is that they like. Instead, you’re going to get mostly disgruntled folks with a bone to pick. A casual perusal of the reviews does seem to show that there are a lot more negative ones than positive ones.
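To see how hard lopsided posting rates can drag down a site’s average rating, here’s a toy simulation. The satisfaction distribution and posting rates below are completely invented for illustration; the point is just the mechanism.

```python
import random

random.seed(0)

# Toy model: true employee satisfaction is uniform on 1-5 stars, but the
# disgruntled (1-2 stars) are five times as likely to post a review.
POST_RATE = {1: 0.50, 2: 0.50, 3: 0.10, 4: 0.10, 5: 0.10}

population = [random.randint(1, 5) for _ in range(100_000)]
posted = [s for s in population if random.random() < POST_RATE[s]]

true_mean = sum(population) / len(population)
posted_mean = sum(posted) / len(posted)
print(round(true_mean, 2))    # close to 3.0
print(round(posted_mean, 2))  # noticeably lower -- around 2.1 under these rates
```

Same workforce, very different picture, purely because of who bothers to write a review.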
Furthermore, these sites seem like they would be susceptible to acts of systematic sabotage and character assassination on the part of less scrupulous union employees during a contract negotiation, non-survivors of layoffs, or even disgruntled customers displeased with their level of customer service. I’m not sure anybody is establishing the veracity of these reviews (indeed, the whole allure of this kind of business model is free or nearly free content from users), and it’s not so much that the crowd is bereft of wisdom as it is that the crowd is quite wise but it’s purposely trying to push its own agenda.
It strikes me that there are better ways to go about getting information on a company. Research professional surveys about working conditions, look at salary surveys, read available articles about them, and if all else fails talk to the people who work there. Ask the interview team for references of people you can talk to. Do your own research.