Evaluating Training

Learning evaluation is a widely researched area, and understandably so: the subject is fundamental to the existence and performance of education around the world.

Like anything else, evaluating customer satisfaction must first begin with a clear appreciation of customers’ expectations. Whether agreed, stated, published, or otherwise understood, expectations provide the basis for evaluating all types of customer satisfaction.

It is important to understand the purpose of a training course evaluation before planning the content and choosing a method of executing this task.

Evaluation feedback assists in improving the efficiency and effectiveness of:

  • Training content and methods
  • An organisation’s budget, personnel, and other resources
  • Employee performance
  • Organisational productivity

Through evaluation trainers can:

  • Recognise where they need to improve their teaching skills
  • Receive suggestions from trainees for improving future training
  • Determine if the training provided matches workplace needs

Planning Your Evaluation

Kirkpatrick’s four-level framework is one example that can be used. Its levels progress in difficulty from 1 (the easiest to conduct) to 4 (the hardest), and essentially measure:

  • Reaction and attitude – of the student; what they thought and felt about the training
  • Learning – the resulting increase in knowledge or capability
  • Behaviour – extent of behaviour and capability improvement and implementation/application
  • Results – the effects on the business or environment resulting from the trainee’s performance

All these measures are recommended for full and meaningful evaluation of learning in organisations.

Examples:

Level 1: One way to assess trainee reactions and attitudes is to use a questionnaire.

  • Questions can gather opinions about training methods, the instructor, the environment in which training took place, or other aspects of the training process.
  • Pencil-and-paper surveys are convenient to use for trainees and the evaluator.
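As an illustration, Level 1 questionnaire results can be summarised very simply. The sketch below (in Python, with hypothetical responses and question wording) tallies each answer as a percentage of trainees:

```python
from collections import Counter

# Hypothetical Level 1 reaction data: each trainee answered
# "Did you find the training useful?" with Yes / Somewhat / No.
responses = ["Yes", "Yes", "Somewhat", "No", "Yes", "Somewhat"]

def summarise_reactions(responses):
    """Count each answer and report the percentage of trainees giving it."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: round(100 * n / total, 1) for answer, n in counts.items()}

print(summarise_reactions(responses))
# {'Yes': 50.0, 'Somewhat': 33.3, 'No': 16.7}
```

The same tally can of course be produced in a spreadsheet; the point is that Level 1 data reduce to simple counts of opinions.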

Level 2: Written or performance tests can assess change in knowledge/skills.

  • The best way to measure changes in knowledge or skills is to test trainees before and after training.  If it is not possible to test trainees before training, their performance can be tested after training and they can be asked whether or not their understanding or skill came from the training session.

Note that even if a positive change is found, it is possible the trainees gained the new knowledge or skill from a source other than the training.
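The before-and-after comparison described above amounts to a simple gain calculation. A minimal sketch, assuming hypothetical trainee names and test scores out of 100:

```python
# Hypothetical Level 2 data: test scores (out of 100) for the same
# trainees before and after the training session.
pre_scores = {"Ana": 55, "Ben": 70, "Cal": 40}
post_scores = {"Ana": 80, "Ben": 85, "Cal": 75}

def knowledge_gain(pre, post):
    """Return each trainee's score change and the average gain."""
    gains = {name: post[name] - pre[name] for name in pre}
    average = sum(gains.values()) / len(gains)
    return gains, average

gains, average = knowledge_gain(pre_scores, post_scores)
print(gains)    # {'Ana': 25, 'Ben': 15, 'Cal': 35}
print(average)  # 25.0
```

A positive average gain suggests learning took place, but, as noted above, it does not by itself prove the training caused it.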

Level 3: Post-training testing or observations can assess use of skills on the job.

  • This level must be completed outside of the classroom, after trainees have had an opportunity to use what they have learned. It is more difficult because it requires trainers or some other evaluator to follow up months after training.

Level 4: Quantifiable measures are often used when assessing organisational impact.

  • Some examples of measures that can be used are figures for sales, injuries, or productivity. It can be difficult to determine the extent to which factors other than training (e.g., the regional economy) may have contributed to changes in organisational performance.

Since Kirkpatrick established his original model, other theorists (for example Jack Phillips), and indeed Kirkpatrick himself, have referred to a possible fifth level, namely ROI (Return On Investment). In my view ROI can easily be included in Kirkpatrick’s original fourth level, ‘Results’. A separate fifth level is therefore arguably necessary only if the assessment of Return On Investment might otherwise be ignored or forgotten when referring simply to the ‘Results’ level.
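Where ROI is assessed, whether as part of ‘Results’ or as a fifth level, it is conventionally calculated as net programme benefits divided by programme costs, expressed as a percentage. A minimal sketch with hypothetical figures:

```python
def training_roi(benefits, costs):
    """ROI (%) = (programme benefits - programme costs) / programme costs * 100."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: the training cost 10,000 and is estimated to have
# produced 14,000 of measurable benefit (e.g. productivity gains).
print(training_roi(14000, 10000))  # 40.0
```

The hard part in practice is not the arithmetic but attributing a monetary value to the benefits, which runs into the same confounding-factor problem noted under Level 4.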

Key Points to Remember:

Different aspects of training can be evaluated.

  • Level 1: Trainees’ perceptions
  • Level 2: Knowledge/skills gained
  • Level 3: Worksite implementation
  • Level 4: Impact on the organisation

Remember to direct your evaluation to specific questions.

  • How did the trainees react to the training?
  • Do people seem to have increased their skills?
  • Are new skills being used?
  • Has the organisation been impacted?

Ways to Gather Data:

One of the most important aspects of conducting an evaluation is choosing the right ways to find information. There are questions you can ask before starting your evaluation process to help you choose the methods that are best for your situation.

Questions:

Who is interested in the evaluation results?

  • Trainer?  Manager?  Organisation?  Government Agency?

What questions do they want answered?

  • Are skills/knowledge gained?  Did transfer occur?  Are there improvements?

What resources are available?

  • Financial?  Time?  Personnel?  Equipment?  Materials?

There are many ways to gather information. Each has advantages and disadvantages.

Data Gathering Techniques:

Questionnaires:

  • Advantage: Allows evaluators to quickly gather data from large groups.
  • Disadvantage: Not always accurate, because people may not respond honestly or accurately. Possible reasons for this include an uncomfortable testing environment, the desire to respond in a socially acceptable manner, and misunderstanding of the instructions or questions.

Interviews:

  • Advantage: Allows the evaluator to gather data that is more accurate than that from questionnaires since the interviewer can verbally address any misunderstandings or questions. Can also ask for more in-depth information than is practical in written surveys.
  • Disadvantage: Can be time-consuming and expensive if many questions are asked of many trainees. Analysis of in-depth data also takes a lot of time.

Facial expressions/Body language:

  • Advantage: Allows the evaluator to gather information without being intrusive.
  • Disadvantage: One person’s perception of an expression may not be the same as another’s, so the data are subjective.

Performance tests:

  • Advantage: Can measure the skills of a worker in a real or simulated work environment.
  • Disadvantage: Can be difficult to simulate a work environment. If the test is conducted in the actual work environment, then it must be scheduled with regard to production concerns.

Written tests:

  • Advantage: Often standardized and validated before use.  A reliable, valid test is able to consistently measure the same thing every time it is used.  Written tests are usually completed in a classroom setting where large groups can be evaluated at the same time.
  • Disadvantage: If there are problems with the testing environment (e.g., the room is too hot, the chairs are uncomfortable), the test-takers may become distracted and not respond accurately. Literacy or language problems can also be an issue.

Workplace observations:

  • Advantage: Gives the clearest data about whether or not training is being used in the workplace.
  • Disadvantage: Requires evaluation sometime after training in the work environment. This may interfere with production.

Team Games:

  • Advantage: Can be a creative way to engage individuals and keep their attention.
  • Disadvantage: Games make it difficult to “measure” or evaluate individual trainees.

Group discussion:

  • Advantage: Can be a great way to gather information about training or to answer questions by creating an open forum where individuals can interact and talk.
  • Disadvantage: Individual differences exist between those that participate in the discussion, and this factor may influence the type of information received.  If some individuals are quieter than others, feel pressure to conform to what others are saying, or are disinterested, they may not share information or report how they really feel about the training.

Analysis of statistics:

  • Advantage: Numbers and statistics are widely trusted as a way of presenting and understanding information, and can summarise large amounts of data objectively.
  • Disadvantage: The numbers can sometimes be manipulated in such a way that data can be misleading.  Misleading data can lead to incorrect or inaccurate beliefs about the information gathered regarding training.

Writing Better Questions:

Good questions are needed for an effective survey. Poorly worded questions can confuse people and cause them to provide inaccurate information that will not be useful.

Recognising Bad Questions:

Following are examples of poorly written questions. Read each one and think about why it is not a good question. After each question, the problem is explained and a better way to ask it is suggested.

Q1.  Did you think this class was informative and enjoyable?   Yes     Somewhat     No

Q1: This is an example of a double-barrelled question, where two items (informative AND enjoyable) are combined in one question.  The trainee cannot, for example, respond “Yes” to informative and “No” to enjoyable.

   Q1. Improved:

1.1 Did you think this class was informative? Yes Somewhat No
1.2 Did you think this class was enjoyable? Yes Somewhat No

Q2. Did this class meet Part 46 annual refresher requirements?       Yes      No

Q2: This question assumes everyone knows Part 46 requirements. Anyone who did not know these requirements would not be able to respond accurately.

    Q2 Improved:

2.1 Do you know what Part 46 requires for annual refresher training? Yes No
2.2 If you answered “Yes” above, do you think this class met those requirements? Yes No

Q3. Please give the following items a rating from 4 (most positive) to 1 (most negative)

The Instructor                                               4          3          2          1

The Room                                                      4          3          2          1

The course materials                                   4          3          2          1

Q3: This question does not give enough information: the ratings 4 to 1 are unlabelled, so respondents cannot tell what separates one level of ‘positive’ or ‘negative’ from another. The information this question is trying to obtain may be more easily and accurately obtained by asking open-ended questions.

    Q3 Improved:

3.1 List one thing you liked about the instructor and one thing that could be improved.
Liked: __________________ Improvement: __________________

3.2 List one thing you liked about the room and one thing that could be improved.
Liked: __________________ Improvement: __________________

3.3 List one thing you liked about the course materials and one thing that could be improved.
Liked: __________________ Improvement: __________________

While Kirkpatrick’s model is not the only one of its type, for most industrial and commercial applications it suffices; indeed most organisations would be absolutely thrilled if their training and learning evaluation, and thereby their ongoing people-development, were planned and managed according to Kirkpatrick’s model.

For reference, should you be keen to look at more ideas, there are many to choose from…

  • Jack Phillips’ Five Level ROI Model
  • Daniel Stufflebeam’s CIPP Model (Context, Input, Process, Product)
  • Robert Stake’s Responsive Evaluation Model
  • Robert Stake’s Congruence-Contingency Model
  • Kaufman’s Five Levels of Evaluation
  • CIRO (Context, Input, Reaction, Outcome)
  • PERT (Program Evaluation and Review Technique)
  • Alkins’ UCLA Model
  • Michael Scriven’s Goal-Free Evaluation Approach
  • Provus’s Discrepancy Model
  • Eisner’s Connoisseurship Evaluation Models
  • Illuminative Evaluation Model
  • Portraiture Model
  • and also the American Evaluation Association

 

If you’re interested in finding out more please call Rachel Chambers on 01920 460211 or email: rachelc@nowtraining.co.uk
