
Reliability of Selection Measures

Employee Selection

Selection Ratio
How selective you can be
Selection ratio = number to be hired / number of applicants (ranges from 1.0 to 0.0)
The higher the ratio (closer to 1.0), the LESS selective you can be


Employee Selection

Selection Ratio
The lower the ratio (closer to 0.0), the more selection techniques will help in choosing among applicants

EX:

25 needed / 150 applicants = .167 (closer to 0.0), about a 1:6 ratio
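A minimal Python sketch of this calculation (Python is not part of the original slides; the helper name is hypothetical):

```python
# Hypothetical sketch: computing the selection ratio from the slide's example.
def selection_ratio(n_to_hire, n_applicants):
    """Selection ratio = number to be hired / number of applicants (0.0 to 1.0)."""
    return n_to_hire / n_applicants

print(round(selection_ratio(25, 150), 3))  # 0.167 -> roughly a 1:6 ratio, closer to 0.0 (very selective)
```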


Criteria of a selection procedure

Reliable
Valid
Fair
Effective

Reliability Defined
The degree of dependability, consistency, or stability of scores on measures used in selection

Factors affecting reliability


Test length (longer = better)
Homogeneity of items (higher homogeneity = higher r)
Adherence to standard procedures

Error of Measurement
Obtained Score = True Score + Error Score
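As a hedged illustration of this model (a Python simulation with invented score distributions, not part of the original slides), reliability can be seen as the share of obtained-score variance that comes from true-score differences rather than error:

```python
# Hypothetical illustration of the classical model: Obtained = True + Error.
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(50, 10, size=10_000)   # assumed "true" ability
error = rng.normal(0, 5, size=10_000)           # random measurement error
obtained = true_scores + error                  # what the measure actually yields

# Reliability viewed as the proportion of obtained-score variance
# that reflects true-score differences rather than error.
reliability = true_scores.var() / obtained.var()
print(round(reliability, 2))  # about 0.80 with these assumed variances
```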

Methods of Estimating Reliability


Examines two sets of measurements
If scores from the two measurements are similar, the reliability estimate will be higher
If scores from the two measurements are dissimilar, the reliability estimate will be lower

Methods of Estimating Reliability


Reliability coefficients, calculated with statistical procedures, range from 0.00 to 1.00
The higher the coefficient, the less the measurement error and the higher the reliability estimate
The lower the coefficient, the greater the measurement error and the lower the reliability estimate

Methods of Estimating Reliability


Selecting a method depends on the question being asked:
Will data collected today still describe the same person in the future?
To what degree do evaluations vary from one another?
How accurately do scores measure true ability?
How dependable is an assessment of a measure at a given moment?

Methods of Estimating Reliability


Test-Retest
The same measure is used to collect data from the same person at two different times (see the sketch below)
Drawbacks: maturation, memory, practice
The higher the coefficient (closer to 1.00), the better the approximation to the true score
The lower the coefficient (closer to 0.00), the greater the likelihood of error
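A minimal sketch, assuming invented scores from two administrations of the same measure; the test-retest reliability estimate is simply their correlation:

```python
# Hedged sketch: test-retest reliability as the correlation between scores
# from the same people on the same measure at two points in time.
# The data below are invented for illustration only.
import numpy as np

time_1 = np.array([82, 75, 90, 68, 77, 85, 71, 93])
time_2 = np.array([80, 78, 88, 70, 75, 87, 69, 91])

r_test_retest = np.corrcoef(time_1, time_2)[0, 1]
print(round(r_test_retest, 2))  # closer to 1.00 = less error; closer to 0.00 = more error
```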

Methods of Estimating Reliability


Test-Retest
Sources of error: memory, learning, true change

Methods of Estimating Reliability


Parallel or Equivalent Forms
Does not use the same measure twice; instead uses equivalent versions of the measure with equal means, standard deviations, and difficulty levels
Each form has the same number of questions
Each form has the same level of difficulty
Average scores on the forms are the same

Drawback: two different versions mean double the work and cost

Methods of Estimating Reliability


Internal Consistency Estimates
How similar are different parts of the measure (e.g., different questions on a test) in what they measure?
Split-half reliability
Coefficient alpha (α), the most commonly reported

Factors Influencing The Reliability of a Measure


Method of estimating reliability
Individual differences among respondents
Length of a measure
Test question difficulty
Administration of a measure

Internal consistency
Split-half reliability
Obtained by correlating scores from two equal halves of a test
r must be adjusted statistically (see the sketch below)
Efficient and accurate
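A hedged sketch of the split-half procedure; the statistical adjustment shown here is the Spearman-Brown correction, and the item responses are invented for illustration:

```python
import numpy as np

# Invented item responses: rows = respondents, columns = 6 test items.
items = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
])

# Split the test into two halves (odd vs. even items) and score each half.
half_a = items[:, ::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)

# Correlation between the two half-test scores.
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction: the half-test correlation understates the
# reliability of the full-length test, so it is adjusted upward.
r_full = (2 * r_half) / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```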

Internal consistency
Coefficient alpha
Degree of correlation among all items of a scale, calculated from a single administration of a single form of the scale
Equivalent to averaging all possible split-half estimates
The most commonly reported reliability coefficient (see the sketch below)
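A minimal sketch of coefficient alpha computed from a single administration, reusing the invented item matrix from the split-half sketch above:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented item responses: rows = respondents, columns = items.
items = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
])
print(round(cronbach_alpha(items), 2))
```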

An Overview of Validity
Validity: A Definition
The degree to which available evidence supports inferences made from scores on selection measures.

Validation
The research process of discovering what and how well a selection procedure measures

Importance of Validity in Selection


Shows how well a predictor measure relates to the job success criterion

An Overview of Validity (contd)


The Relation between Reliability and Validity
High reliability without validity is possible
High validity without reliability is not possible
Quantitatively, the relationship between validity and reliability is
rxy ≤ √(rxx × ryy)
where rxy = maximum possible correlation between predictor X and criterion Y (the validity coefficient), rxx = reliability coefficient of predictor X, and ryy = reliability coefficient of criterion Y
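A small sketch of this ceiling, using assumed reliability values for illustration:

```python
import math

def max_validity(r_xx, r_yy):
    """Upper bound on the validity coefficient given the reliabilities
    of predictor X (r_xx) and criterion Y (r_yy)."""
    return math.sqrt(r_xx * r_yy)

# Assumed reliabilities: even with a fairly reliable predictor and criterion,
# the validity coefficient cannot exceed about .75 in this example.
print(round(max_validity(0.80, 0.70), 2))  # 0.75
```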

An Overview of Validity (contd)


Types of Validation Strategies

Content validation
Criterion-related validation (including concurrent strategies)
Construct validation
Validity generalization
Synthetic validity

Content Validation Strategy


Content Validity
Is shown when the content (items, questions, etc.) of a selection measure representatively samples the content of the job for which the measure will be used.

Why Content Validation?


Applicable to hiring situations involving a small number of applicants for a position
Focuses on job content, not job success criteria
Increases applicant perceptions of the fairness of selection procedures

Criterion-Related Validation Strategies


Concurrent Validation Strategy
Both predictor and criterion data are obtained on a current group of employees, and statistical procedures are used to test for a statistically significant relationship (correlation coefficient) between these two sources of data (see the sketch below)
Sometimes referred to as the present employee method because data are collected for a current group of employees
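A hedged sketch of the statistical step in a concurrent validation study, with invented predictor scores and performance ratings for current employees:

```python
# All numbers below are invented for illustration only.
import numpy as np
from scipy import stats

predictor = np.array([72, 85, 90, 60, 78, 95, 68, 82, 74, 88])            # e.g., test scores
criterion = np.array([3.1, 4.0, 4.3, 2.8, 3.5, 4.6, 3.0, 3.9, 3.2, 4.2])  # e.g., performance ratings

# Correlate predictor and criterion; a statistically significant r
# supports criterion-related (concurrent) validity.
r, p_value = stats.pearsonr(predictor, criterion)
print(round(r, 2), round(p_value, 4))
```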

Construct Validation Strategy


Construct
A postulated concept, attribute, characteristic, or quality thought to be assessed by a measure.

Construct Validation
A research process involving the collection of evidence used to test hypotheses about relationships between measures and their constructs.

Importance of Large Sample Sizes


1. A validity coefficient from a small sample must be higher in value to be considered statistically significant than a validity coefficient from a large sample (see the sketch below).

2. A validity coefficient from a small sample is less reliable (more subject to sampling error) than one based on a large sample.

3. The chance of finding that a predictor is valid, when the predictor is actually or truly valid, is lower for small samples than for large ones.
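A small sketch illustrating point 1: the smallest correlation that reaches statistical significance (two-tailed, α = .05) shrinks as the sample grows. The cutoffs come from the standard t-test for a correlation coefficient:

```python
import math
from scipy import stats

def critical_r(n, alpha=0.05):
    """Smallest correlation that is statistically significant for a sample of size n."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit / math.sqrt(t_crit ** 2 + df)

for n in (20, 50, 100, 400):
    print(n, round(critical_r(n), 2))  # roughly .44 at n=20 vs. about .10 at n=400
```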
