Psychological Tests - Sense or Non-sense?
by Patrick Merlevede, jobEQ's leading researcher

As long as the number of potential job applicants is greater than the number of job openings, selection of some kind will be required. A poorly designed test or improper test use, however, may keep you from hiring the best candidates. The aim of this article is to help ensure that the assessment instruments used during the selection process reduce, rather than add to, such hiring errors.

Most test developers and test users have good intentions, yet inadequate training leads to considerable misuse of test data. This article has been written to warn against poor use of tests in general and against "nonsense" claims that some commercially oriented people may make about the tests they put on the market. Most of these comments apply to any test, so anyone publishing a test should at least have a reasonable answer to them. Their answers should explain what they do to address these specific issues.

Test Design Problems

Test design problems can be classified into two groups. The first group has to do with the construction of the test itself and the theoretical basis that underlies it: a test without a solid theoretical foundation cannot deliver good results. The second group has to do with psychometric quality: when a test does rest on a solid theoretical foundation, psychometric analysis will confirm these foundations and point out the problem areas in the use of the test. Let us go a bit deeper into these two issues:

  1. Test Constructs - What theory underlies the test? How are the test questions formulated? How do these questions relate to the theory? Which questions were rejected during test development? Are the questions context independent, or do they exactly match the context in which the test is meant to predict performance? How do language and cultural factors influence test performance, even within one language area or one geographical region? What other environmental factors play a role, and how are these issues addressed?
  2. Psychometric Quality - A task force of the American Psychological Association examined honesty tests and found that it could not obtain research-based information, not even on basic properties such as test reliability (e.g. internal consistency) and test validity (correlations with various criteria), let alone test-retest reliability, factor analysis, or other forms of internal structure validation. Test designers should be able to provide information on these important topics.
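The internal consistency mentioned above can be computed directly from raw item scores. Below is a minimal sketch of one common internal-consistency statistic, Cronbach's alpha; the formula is standard, but the function name and the respondent data are purely illustrative:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability of a scale.

    item_scores: 2-D array-like, rows = respondents, columns = test items.
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical answers of 6 respondents to a 4-item scale (1-5 ratings).
scores = [[4, 5, 4, 5],
          [2, 3, 2, 2],
          [3, 3, 4, 3],
          [5, 4, 5, 5],
          [1, 2, 1, 2],
          [3, 4, 3, 3]]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # -> 0.96; values above ~0.7 are conventionally acceptable
```

A test publisher should be able to report figures like this, together with validity correlations, for a documented norm sample.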

Test Use Problems

There are numerous ways to misuse test data. In their book, Eyde et al. (1993) indicate 86 specific elements and 7 broad factors that represent common problems in test use. This is a summary of the problems mentioned:

  1. Comprehensive Assessment - In part, this issue is about test follow-up. "How do you ensure that the test score is an accurate description? With what other instruments is this test combined to deliver extra predictor information?" One must also consider the emotional state the tested person was in when the test was filled out, as well as the test's research evidence and limitations.
  2. Proper Test Use - What training have the persons using the test received in this particular test? How is the quality of test use controlled?
  3. Psychometric Knowledge - Are correct basic statistical principles respected (concerning standard error of measurement, reliability and validity)? Are these principles applied for interpreting the test results? How does the tool limit the number of false positives and false negatives (and the use of cut-offs)?
  4. Accuracy of Scoring - Is the test filled out and tabulated by a person or by a computer? This can make a major difference in the number of errors that are produced:

    Input process   | Tabulation process      | Possible scoring problems
    Pen-and-paper   | Tabulated by a person   | Data entry errors and tabulation errors
    Pen-and-paper   | Tabulated by a computer | Data entry errors
    Online          | Tabulated by a computer | Virtually none
  5. Appropriate Use of Norms - When using a standardized test, one can run into problems in practical application. Is this test measuring the traits that matter? How do you objectively know whether proactivity or teamwork is important for success in this particular position? A good test will only use criteria important to job performance, and test designers must be able to prove that the information directly relates to the job.
  6. Feedback to Test-Takers - Test designers must provide correct interpretations of the results, and must be staffed to give such feedback.
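The statistical principles behind point 3 are easy to illustrate. Below is a minimal sketch, with entirely hypothetical numbers, of the standard error of measurement (SEM = SD × √(1 − reliability)) and of counting false positives and false negatives around a hiring cut-off; the function and variable names are illustrative, not any vendor's actual method:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM: the expected spread of a person's observed scores
    around their 'true' score, given the test's reliability."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical test: standard deviation 15, reliability 0.84.
sem = standard_error_of_measurement(15, 0.84)    # 15 * sqrt(0.16) = 6.0
# A 95% confidence band around an observed score of 110:
low, high = 110 - 1.96 * sem, 110 + 1.96 * sem   # roughly 98.2 .. 121.8

# Classification errors around a hiring cut-off.
# Each pair: (test score, was the hire later judged a good performer?)
candidates = [(118, True), (95, True), (104, False),
              (88, False), (112, True), (101, False)]
cutoff = 100
false_positives = sum(1 for s, good in candidates if s >= cutoff and not good)
false_negatives = sum(1 for s, good in candidates if s < cutoff and good)
print(sem, false_positives, false_negatives)  # -> 6.0 2 1
```

The confidence band shows why a rigid cut-off is risky: two candidates whose observed scores differ by less than one SEM may well have the same true score.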

Problems with Test-Takers

Lack of Self-Knowledge - Test-takers might not know enough about themselves to accurately answer the questions. Overestimating or underestimating one's own abilities is a common challenge.

Falsification - Psychological tests are often quite transparent, and it seems obvious to many observers that job applicants will not willingly report undesirable behaviors that would ruin their chances for employment (e.g. Goldberg et al. 1991). You should always carefully examine a test before you use it in your organization. Are there right and wrong answers? Will an educated candidate be able to guess what you want to hear? For example, some tests ask: At work, are you:

  1. Always late
  2. Sometimes late
  3. As punctual as the next guy
  4. Rarely late

Of course everyone will pick answer 4! Test designers should be able to prove to you that they can detect or limit falsification. This is a major problem, because falsification inflates the number of "false positives" in test results (people who pass the test but then do not perform as expected).
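One generic countermeasure against falsification (a sketch of a common technique, not any particular test's actual method) is a "lie scale": items describing rare but socially desirable behaviors. A respondent who endorses nearly all of them is probably managing impressions rather than answering honestly, and can be flagged for follow-up:

```python
# Hypothetical lie-scale items: few people can honestly endorse all of these.
LIE_SCALE_ITEMS = [
    "I have never been late for an appointment.",
    "I have never told even a small lie.",
    "I always admit my mistakes immediately.",
    "I have never taken anything that was not mine.",
]

def lie_scale_score(answers):
    """answers: list of booleans, True = item endorsed by the respondent."""
    return sum(answers)

def flag_for_review(answers, threshold=0.75):
    """Flag respondents who endorse an implausibly high share of lie items."""
    return lie_scale_score(answers) / len(answers) >= threshold

print(flag_for_review([True, True, True, False]))   # 3/4 endorsed -> True
print(flag_for_review([True, False, False, False])) # 1/4 endorsed -> False
```

A flag like this does not prove dishonesty; it only signals that the rest of the score should be interpreted with extra care, for example in a follow-up interview.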

Conclusion

This article should raise some questions regarding responsible and accurate test use. There are good tests out there, but they must address these issues. The answers to these questions have to come from the test developers. If you can't get those answers right away, at least get a commitment that they will be available by the time you end a pilot project.

 

References

  • Eyde, L.D. et al. (1993), Responsible Test Use: Case Studies for Assessing Human Behavior, American Psychological Association, Washington, DC
  • Goldberg, L.R. et al. (1991), Questionnaires Used in the Prediction of Trustworthiness in Pre-employment Selection Decisions: An APA Task Force Report
  • Sudman, S. & Bradburn, N.M. (1982), Asking Questions: A Practical Guide to Questionnaire Design, Jossey-Bass, San Francisco
  • Sudman, S., Bradburn, N.M. & Schwarz, N. (1996), Thinking About Answers: The Application of Cognitive Processes to Survey Methodology, Jossey-Bass, San Francisco

 
last modified: 2006/Aug/07 15:56 UTC