Workplace Based Assessments of Competence

This page summarises methods of assessment for trainees in Dermatology.
 
Most of what Dr Clive Archer said in the BAD Newsletter article on 'Assessment of Trainees - an Evolving Process' (2008) remains highly relevant. 
 
These methods are designed to measure a trainee’s performance objectively in a simple, fair and reproducible fashion. They complement the Specialty Certificate Examination, which is designed mainly to test knowledge, whereas workplace based assessments (WBAs) focus more on skills, attitudes and behaviours. 
 
As with other medical specialties, the specialty curriculum for Dermatology was developed by our Specialty Advisory Committee (SAC) at the RCP and initially received approval from the Postgraduate Medical Education and Training Board (PMETB) in 2007. The system of assessment was also approved by PMETB later that year. (PMETB was subsequently subsumed into the GMC.) The specialty curriculum runs in parallel with the continuing generic curriculum, which covers important aspects of medical education: good clinical care, maintaining good clinical practice, relationships with patients and communication, working with colleagues, teaching and training, professional behaviour, and management and NHS structure.
 
What is assessment?
 
Assessment is ‘measuring how much the learner has learned’ or, in the context of trainees, ‘a measure of progress or level of achievement of an individual against an agreed standard’. In the UK, evaluation is ‘measuring the quality of teaching’ (e.g. your own teaching, a course, or the whole institution) and is distinguished from assessment. However, in other countries such as the USA, the term ‘evaluation’ may be used as an alternative to ‘assessment’. 
 
Appraisal is ‘a review of progress against agreed objectives’. This should be a positive process, giving someone feedback on their performance, charting continuing progress, and identifying development needs. Reflection is an important part of a trainee’s transition towards becoming a consultant.
 
A good assessment should accurately determine the depth and breadth of learning that someone has achieved, and be seen as fair by the assessors and those being assessed. Other features of good assessment include reliability (consistency), validity (does it measure what it is supposed to measure?) and transparency (is the assessment system clear and understood by both the assessor and trainee?). In addition, trainee assessments should be objective, summative as well as formative, and use multiple assessments involving several assessors, leading to judgements at the time of the encounter. Another important consideration is feasibility. A system of assessment should not be too time-consuming, expensive or labour-intensive. 
 
Workplace Based Assessments (WBAs)
 
Trainers should now be familiar with these assessments, the formats of which are summarised below. For the number of assessments to be completed, it is helpful to consult the ARCP decision aid (section 5.5 of the current [2010] JRCPTB curriculum). In summary, each year trainees should complete at least 4 adequate mini-CEXs, 4 surgical and 2 non-surgical DOPS, 10 CbDs, 1 patient survey, 1 teaching observation and 1 audit assessment. MSF should be undertaken in years 1 and 3 of training. 
 
SLE versus AoP
 
Note that, as part of an ongoing pilot, DOPS and mini-CEX have been divided into supervised learning events (SLEs) and assessments of performance (AoPs). SLEs are mainly formative (“for learning”, with great emphasis on feedback) and AoPs summative (“of learning”, with a pass/fail judgement). The JRCPTB recommends that trainees undertake SLEs in a particular topic area before attempting an AoP. SLE documentation is not available to the ARCP panel, whereas AoP results are one of the key strands of evidence used in the ARCP decision process (see below). AoPs must be spread over the year rather than all being clumped together close to the ARCP. This policy is still under review and the latest information will be available on the JRCPTB website. 
 
The mini-Clinical Evaluation Exercise (mini-CEX) was devised by Norcini and colleagues in the USA to assess medical residents, because the ‘traditional’ clinical evaluation exercise (CEX), recommended by the American Board of Internal Medicine in 1994, had serious limitations. In a typical CEX the resident was observed with only one patient by only one examiner, over a period of about two hours!
In the mini-CEX, an assessor should observe a trainee doing a focused task with a patient (e.g. taking a history, performing an examination, discussing a treatment plan), over 15-20 minutes only, rating the performance on a scale and giving instant feedback to the trainee. Scoring should reflect the performance of the trainee against that which would reasonably be expected at their stage of training and level of experience. Immediate feedback from the assessor to the trainee is important, especially where deficiencies have been identified. 
 
Direct Observation of Procedural Skills (DOPS) was devised by the Colleges. The assessor observes a trainee undertaking a routine procedure for as long as the procedure takes. The period observed is often a matter of minutes only and may be part of a larger procedure (e.g. observing suturing technique after excision of a skin lesion). Again the assessor rates performance on a scale and gives instant feedback. 
 
Multisource Feedback (MSF) was initially referred to as 360° assessment and involves the collection of data on a doctor’s performance from a variety of colleagues, such as other doctors, allied health professionals, nurses, and secretarial and clerical staff. In the RCP pilot study 20 raters were used, and this number is recommended to maintain validity and a mix of assessors, although reliability was achieved with smaller numbers. 
 
Case-based Discussion (CbD) is a formal way of doing what many doctors have done for a long time. CbD is designed to assess clinical decision-making and the use of medical knowledge. It provides systematic assessment and structured feedback on regular discussion of suitably complex cases. The trainee selects three case records from patients they have recently seen, and the assessor selects one of these for the CbD session. CbD enables discussion of the ethical and legal framework of practice. 
 
As with other workplace based assessments, trainees are encouraged to drive the process and to select the assessors. However, the assessors should include the directly supervising consultant. 
 
The purpose of Teaching Observation Assessment (TOA) is to help trainees develop their teaching skills and to form part of the overall assessment process. Prior to the TOA, the assessor(s) should establish the trainee’s previous teaching experience, the subject matter to be covered, what teaching techniques will be used, and what they would like the learners to take away from the session. After observing the trainee teaching, there is an RCP assessment form to complete (with rather detailed descriptors of competencies), and the assessors should give immediate feedback, especially where deficiencies have been identified.
 
The Audit Assessment Tool (AAT) is designed to assess competence in completing an audit. It may be based on a written report or a presentation at an audit meeting, and when possible there should be more than one assessor for the same audit. Assessors should score the trainee on the appropriate RCP form, reflecting the performance of the trainee against that which would reasonably be expected at their stage of training and level of experience. Again, feedback should be given to the trainee there and then.
 
The RCP Patient Survey (Patient Satisfaction Questionnaire, PSQ) form, ‘What did you think of this doctor?’, emphasises that ‘the questionnaire is only about the doctor you have seen today’ and asks patients not to comment ‘about other members of staff, the NHS system, or waiting times’. Of relevance to discussions about acceptable consultation times for new, follow-up and complex patients in Dermatology, traditionally a high-volume specialty, we should all be aware of questions such as: Did the doctor listen to what you had to say? Did the doctor give you enough opportunity to ask questions? Did the doctor answer all your questions? Are you involved as much as you want to be in the decisions about your care and treatment? 
 
Educational Supervision and the ARCP Process
 
Many will have come across the distinction in hospital medicine between a Clinical Supervisor and an Educational Supervisor, the latter being similar to the well-developed system of ‘Trainers’ in the general practice setting. Educational Supervisors will be expected to undertake appraisals, conduct workplace based assessments and write the supervisor’s report as part of annual planning. Educational Supervisors should aim to meet regularly with their trainees for mutual feedback (e.g. at least 4 times a year), monitor trainee progress towards agreed objectives, identify shortcomings and arrange to address them, liaise with others involved with the trainee, and provide careers advice.
 
The ARCP (Annual Review of Competence Progression) is informed by evidence from assessments, as well as other information from the trainee’s e-portfolio and the Educational Supervisor’s report. The evidence provided should lead to an annual assessment of progress against curriculum targets. ARCP outcomes include: satisfactory progress; unsatisfactory or insufficient evidence (trainee required to meet panel); inadequate progress (trainee required to meet panel, additional training required); release from training programme (sustained lack of progress); incomplete evidence provided; out of programme time; and training completed.