Some might ask why I was given this task, and the answer is quite simple: I was the only member of the Board, and one of only a few people in Australia, who at the time had been trained to conduct competency-based assessments. I had been trained in the UK by a highly skilled and experienced assessor who had, herself, been trained by someone with a high degree of competence in the subject. (If I were to say that I held the D32/D33 NVQ award, and had been trained in the D34 Internal Verifier award, then those in the know would understand what I'm talking about.)
So,
in racing terms, I had a very good pedigree where CBA was concerned.
And like any (well, most) parents, I hoped that these elements of the system would grow and mature into what should be the healthy beating heart of a strong and vibrant process which drives individual, group and societal growth. But, sadly, my hopes have been dashed.
What I have seen instead is a reversion to an age-old approach to assessment that school teachers and university tutors have followed forever. And to make matters worse, this approach has been enshrined in the so-called competency standards (I'll explain why I call them 'so-called' in a later rant) used in the extant AQF qualification for trainers and workplace assessors, and pushed by some as the de facto standard for assessments carried out in the workplace.
I'm not
going to go into any great detail here about the problems that I see with this qualification. I have vented my dissatisfaction
to the industry body concerned so often that I am sure they are sick of hearing from me. Instead I will explain here how assessment
is supposed to work and the important role that RPL (recognition of prior learning) plays in it.
Competency-Based Assessment - The Facts
It is widely agreed that the two most important parts of any VET system are the standards against which on-the-job assessments are carried out and the way such assessments are conducted. More important still is their quality: if quality is lacking in either one, the system cannot be said to be rigorous in its processes and relevant in its outcomes.
By definition, the standards against which all COMPETENCY-BASED assessments (not education or training-based assessments) are carried out have got to accurately reflect what the individual or group must do, on the job and in pursuit of business and strategic objectives, and the environment in which they must do it. Any assessment of individual or collective competence against these standards has got to be reliable, in that others replicating such an assessment will come to the same conclusions, and valid, in that it achieves nothing other than the objectives it sets out to achieve. Coupled with this is the need for it to be transparent; in other words, everyone can see what is going on, how and why.
Some people have added 'fair' to these criteria, but this raises questions about what is meant by 'fair' - and fair to whom. For example, is a rigorous assessment fair because it aims to determine the competence necessary for performance in a complex and asymmetric environment, and as such helps those undergoing the assessment to be better prepared for survival there? Or is it fair because it is a simple enough process that anyone can pass it regardless of what the future holds for them? Personally I don't use the term 'fair' in describing a good assessment process because it just isn't, well, fair to do so. In fact, trying to make it fair actually dumbs down the system while purporting to make it more rigorous and reliable.
But I am digressing into another rant. Back to the subject.
In order for any assessment to be reliable, valid and transparent, it must contain as few rules as possible. Why? Because assessment is a very complex process. What we are asking assessors to do is apply the same rules to every assessment and every candidate, and no assessment or candidate will be the same as any other - even when two or more candidates work in the same area doing the same work for the same outcomes. None. All of them are different and must be treated as such. If we try to put rules around assessments which attempt to give every assessor guidance on how to conduct every assessment, we will end up with more rules than we could ever follow - just as the processes are now governed by so many rules that it is a surprise any assessments are carried out at all.
A rule of thumb is that the more complex the system or the environment, the fewer the rules we need to successfully traverse it. Competency-based assessment is an extremely complex (but NOT complicated) process; therefore there should be very few rules concerning how assessments are conducted. And there are.
Luckily for you I know what
these rules are and I will now share them with you.
Most guidelines, and all of the training conducted for assessors trying to obtain their Certificate IV in TAA (Training and Assessment), suggest that there are many ways to conduct an assessment. There is discussion about Formative Assessment and Summative Assessment, and about assessing candidates using written and oral tests and so on. Some guidance actually goes so far as to tell the assessor exactly what he or she should be looking for by way of evidence - a dumb idea but a very common practice. (Dumb because [a] with such guidance this will be all that the assessor looks for, and [b] it will be all the candidate provides. Bang goes reliability and validity in the assessment.)
The truth is that there is only one way to conduct an assessment: a candidate presents evidence that he or she believes supports his or her claim for competence against a given set of standards, and the assessor asks a number of questions about that evidence. These questions are:
- Is the evidence valid? In other words, does it demonstrate what the candidate says it demonstrates?
- Is the evidence reliable? In other words, would other assessors come to the same conclusion about it, or would similar evidence result in the same or similar conclusions?
- Is the evidence sufficient? In other words, does it cover the whole range of competence that the candidate is seeking assessment against?
- Is the evidence authentic? In other words, does it show something that the candidate actually did or does?
- Is the evidence current? In other words, does it show that the candidate (still) has the required skills and knowledge and can replicate them in the future, regardless of how old the evidence is?
If the assessor can answer YES to all of these questions then the evidence does support the candidate's claim. If the answer to any of these questions is NO, then the candidate needs to provide further evidence in support of his or her claim. This is known as Supporting Evidence, and the more Direct Evidence the candidate produces, the less Supporting Evidence is required - and vice versa.
It is as simple as that. There is absolutely no need to further complicate CBA by putting in rules and guidance as
to what evidence an individual should be supplying (because sure as eggs such directed evidence won't give a positive answer
to all of the above questions) or implying that if the assessor follows the rules then all assessments will be valid, reliable
and transparent. By providing these rules the assessment automatically starts to lose its reliability.
This process is extremely simple, but why the powers-that-be have made it so complicated (as opposed to complex)
is beyond me.
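If you like to see logic written down as code, here is a minimal sketch of the process in Python. The names are my own invention, not drawn from any standard - it simply shows that the whole 'rulebook' fits in a dozen lines:

```python
# A minimal sketch of the assessment logic described above.
# Illustrative only: all names here are my own, not part of any standard.

RULES_OF_EVIDENCE = ("valid", "reliable", "sufficient", "authentic", "current")

def assess(answers):
    """Apply the five yes/no questions to the evidence presented.

    The claim is supported only if EVERY question is answered YES;
    any NO means the candidate must produce further Supporting
    Evidence. There are no other rules to apply.
    """
    failed = [rule for rule in RULES_OF_EVIDENCE if not answers.get(rule, False)]
    if not failed:
        return "Claim supported"
    return "Further Supporting Evidence needed: " + ", ".join(failed)

# Example: evidence that is authentic and current but not sufficient.
print(assess({"valid": True, "reliable": True, "sufficient": False,
              "authentic": True, "current": True}))
# -> Further Supporting Evidence needed: sufficient
```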
Recognition of Prior Learning
Another aspect of CBA that is in almost every instance totally misunderstood, and therefore poorly applied, is RPL. I have seen a couple of instances where RPL has been applied well, but in the main such applications have been by those whose training has included a standard of practice that is accepted right around the world, and not just that which has been devised here.
If CBA is, despite the rhetoric and so-called guidance, a very simple process, then RPL is no sweat at all. In fact, no assessment can ever be carried out without including, in small or large part, recognition of prior learning - and I'll tell you why.
All of the official definitions of RPL in this country have been a pure guess.
One only has to look at the definitions of RPL and RCC (recognition of current competence) given in the literature concerning
the TAA qualifications to see that the definition of one basically says that it is not the other. How definitive is that?
RPL, it is said, is recognising learning that has occurred in any form and in any environment. This is true, but the emphasis in this definition, and by those applying it, is on LEARNING - in other words, that which has been given by educators, teachers or trainers. I will admit that definitions of RPL do emphasise that the learning may have occurred anywhere, but when it comes to defining the differences between RPL and RCC it is only the latter which suggests that what is being 'recognised' is competence - in other words, the on-the-job performance as opposed to the learning which has been gained. As a result, organisations such as the Defence Forces (who really should know better) give us 'RPL/RCC' as a category of assessment. They do this to ensure that if the evidence being presented isn't one then it must be the other - that is, either learning or competence.
The truth of the matter is that competence cannot be fully demonstrated, and in turn assessed, unless it includes learning (ie, underpinning knowledge) which, as opposed to knowledge gained for its own sake, informs the way in which skills are applied on the job and in pursuit of on-the-job business and strategic objectives. One cannot be applied without the other, and while skills and knowledge are assessed separately (in a good CBA system), their application is assessed as a whole. The reason for this is that SKILLS + KNOWLEDGE + APPLICATION = COMPETENCE. There is no other formula for it. Skills or knowledge, by themselves, do not demonstrate competence (no matter how rigorously they are assessed in a training environment). Only their application in the workplace will do that, and here is where the learning comes in.
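For what it's worth, the same point can be made in the illustrative style of the earlier sketch (again, the names are mine alone):

```python
# SKILLS + KNOWLEDGE + APPLICATION = COMPETENCE - nothing less will do.
# Illustrative only: the names are my own, not drawn from any standard.

def is_competent(skills, knowledge, application):
    """Skills or knowledge alone never demonstrate competence;
    only their application in the workplace completes the formula."""
    return skills and knowledge and application

# A candidate who excelled in training but has never applied it on the job:
print(is_competent(skills=True, knowledge=True, application=False))  # False
```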
Skills and knowledge are learned, but so too is the way they complement each other and the way(s) in which they are applied on-the-job in stable, complex and very chaotic environments. Therefore, if we are to concentrate only on assessing the learning that individuals have achieved, then we are miles away from assessing competence. Learning is an essential element of competence and, while it may not be brightly highlighted in the standards against which the assessment is being carried out, it can be inferred from the way in which individual and collective skills and knowledge are applied in the workplace.
Now - finally getting to the point - where and how do people 'learn' how to apply their skills and knowledge in the workplace? Generally through experience, previous jobs or past assessments. Or they may just figure it out for themselves, or watch others as they apply their skills and knowledge. They may even learn it through special courses which cover unique and innovative ways of approaching work. This gets us into the realm of 'learning to learn' and by itself would take a whole new page, so I'll not go any further down this path. (You'll have to buy my new book for further discussion on this point.) What I do want to consider, however, is that, like 'learning', there are many aspects of competence which cannot be observed during an assessment session.
Take, for example, the application of one's full range of skills and knowledge while employed at a particular function. As assessors, we can determine that an individual or group is competent at their task or function, but we cannot observe every single thing they do against all of the standards relevant to that task or function. Some of it has got to be inferred - sometimes simply because the person is still employed and therefore performing to a standard satisfactory to their employer. (If the individual is not performing to the desired standard, and is not being pulled up for this by the employer, then this is an issue of competence on the part of the employer, not the employee. He or she may well be performing to the desired - or, given the employer's inaction to correct it, inferred - level of competence, only against the wrong standards. This then becomes a performance problem, and not one of competence.)
This is an essential element of competence, one not always assessed during training but one which will have a bearing on whether or not the skills and knowledge individuals possess are applicable to their workplace.
It is true that the purest form of assessment is that which is observed by the assessor. This is Direct Evidence and, when the performance is repeated a number of times to the level required in the standards, it demonstrates to the assessor that the candidate does indeed possess the desired level of competence. But it is also impossible to observe everything that a candidate does; time and opportunity are just two reasons why this is so. The answer, therefore, is to assess evidence which does not come from direct observation, and here is where RPL comes in.
Assessment of evidence that does not arise from direct observation involves reviewing Indirect or Supporting Evidence and asking the above questions of it. This evidence supports the candidate's claim that he or she is competent, and it can come from anywhere - volunteer work, past jobs or experiences, home activities, hobbies, other areas of specialisation or professional practice, and so on.
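Put in the same illustrative terms as before (the example items and labels are invented), the distinction is a simple classification - anything not gathered by direct observation is the RPL component of the assessment:

```python
# Illustrative only: classify evidence by source. Anything the assessor
# did not observe first-hand is Indirect/Supporting Evidence - the RPL part.

evidence = [
    ("observed the candidate chairing a team meeting", "direct"),
    ("reference letter from a previous employer", "indirect"),
    ("portfolio of rosters from volunteer work", "indirect"),
]

rpl_items = [item for item, source in evidence if source != "direct"]
print(f"{len(rpl_items)} of {len(evidence)} items rest on RPL:", rpl_items)
```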
An example: In the UK my team was involved in a project aimed at recognising the skills and knowledge possessed by women, most of whom had never held a full-time job but all of whom were parents or carers. They were being assessed against management standards of competence and, while it was a long and hard job, those who remained at the end of the project were found to possess a high degree of skills and knowledge relevant to good management. All that they needed was an opportunity to contextualise their skills and knowledge and apply them in the workplace (ie, add the 'learning' and 'application' to become competent). One of these candidates later came to work for me and said of the experience that she had raised five boys, so there was nobody who could tell her that she didn't understand man-management.
These assessments were predominantly carried out on the basis of Indirect or Supporting Evidence, and therefore they were RPL. Other assessments, even those carried out on individuals or teams at the conclusion of a competency-based training course, will involve to a greater or lesser degree the same forms of evidence; therefore RPL makes up an essential element of all assessments, regardless of how experienced the candidates are or the context in which their assessments are being conducted.
To
talk about CBA without including RPL is to deny candidates the opportunity to demonstrate their full range of competence and,
even though I pooh-poohed the idea above, is very unfair.
In summary, RPL is practised in every
country in which CBA is used. It is called many things - Accreditation of Prior Learning, Accreditation of Current Competence,
Crediting Current Competence, and so on. In each and every case it is exactly the same thing - an assessment of evidence that
did not come from direct observation on the part of the assessor. Hopefully one day our system will realise that it is alone
in trying to define it any other way.