The SRA has recently posted an update on its Training for Tomorrow blog, in which it tries to justify the central place it intends to give multiple-choice testing within the proposed Solicitors Qualifying Examination. As most readers will be aware, the proposal came in for a lot of criticism in the responses to the consultation. Whilst it’s encouraging to see the SRA engaging actively with the academic literature, its post suggests that it still misses the point of the objections. Specifically, although the SRA cites the US Multistate Bar Exam (MBE) as a precedent, the place of multiple-choice testing in the MBE has been quite controversial in the US and has generated a critical literature. Three key points emerge from the US debate which are of profound relevance to the SRA’s proposals. As this will be a long post, I’ll summarise the points before I get to the details.
Firstly, multiple-choice testing has known limitations in its ability to assess legal knowledge (not skills, not writing, but legal knowledge itself). For this reason, even in the US most states that use multiple-choice testing in their bar exams (including California and New York) do not rely exclusively on it to assess legal knowledge, contrary to what the SRA’s blog post asserts. Instead, they assess legal knowledge partly through the Multistate Bar Exam and partly through an essay-based exam. There are very good reasons for this, as I explain in detail below.
Secondly, the SRA’s proposals are predicated on a distinction between the assessment of legal knowledge and legal skills that simply has no mainstream parallel. We have never drawn this distinction in England and Wales (the LLB/LPC separation is not analogous, as I explain below), and the jurisdictions the SRA cites as precedents do not draw it either – ironically, given the tenor of the SRA’s proposals, US bar exams actually use essay questions to bring together the testing of knowledge and skills.
Thirdly, the case in favour of the dominant approach to multiple-choice testing is largely based on research into its suitability for assessing the ability to practise medicine. However, there are material differences between the practice of law and the practice of medicine, which suggest that we cannot simply extrapolate from one field to the other. Again, the details are below.
On to the details.
Contrary to the SRA’s claim, it is not ‘commonplace’ to assess legal knowledge purely through multiple-choice testing. There may well be some jurisdictions which do so, but the ones cited by the SRA aren’t among them. This is largely because their examinations do not draw the sharp divide between legal knowledge and legal skills that underpins the SRA’s proposals. This makes the SRA’s proposals quite unlike the precedents it cites.
The Multistate Bar Exam (MBE), on which the SRA places quite a bit of weight as a precedent, is part of a broader suite of assessments which together make up the Uniform Bar Examination (UBE). The UBE has three parts: the Multistate Bar Examination, the Multistate Essay Examination (see this example), and the Multistate Performance Test. New York uses the full suite. California doesn’t use the UBE, but it follows precisely the same model: it simply prefers to set its own California-specific equivalents of the MEE and the MPT to complement the MBE.
The MBE, like the SQE’s proposed Part 1, is focused on legal knowledge. Likewise, the MPT, like the SQE’s proposed Part 2, is focused on legal skills. The SQE, however, has no equivalent to the MEE. The model the SRA is proposing is, in effect, the UBE without the MEE. This will make England and Wales quite exceptional among major common law jurisdictions. More fundamentally, though, it is problematic because the MEE performs a very valuable function. It expressly seeks to bring together the assessment of knowledge and skills, and it does so because the American bar is well aware of certain critical limitations which multiple-choice testing has when it comes to assessing legal knowledge. The key limitation, identified cogently and concisely in the very article cited by the SRA, is that multiple-choice tests do not assess “in-depth knowledge of a particular topic” (Case and Donahue 2008, 373). The article is vague about what this means, as is the material put out by the NCBE, but state bars are more candid about what they’re looking for. Here’s what California has to say in its explanation of what is expected of candidates on the essay questions:
The answer must show knowledge and understanding of the pertinent principles and theories of law, their qualifications and limitations, and their relationships to each other.
The last of these – the relationships of principles to each other – is precisely what multiple-choice tests are weak at assessing; yet it is absolutely critical to the practice of law. The statistical literature on testing models a candidate’s answers as depending on three parameters: the candidate’s ability, the difficulty of the question, and the question’s ‘discrimination’ – that is to say, how good the question is at separating strong candidates from weak ones. I’m not aware of any empirical research on this point, but in theoretical terms one would expect multiple-choice questions about the relationship of principles to each other to have very low discrimination. This is because the only way to test this through a multiple-choice question is to give students four alternative ways in which the principles could relate and ask them to select one. That necessarily means you can’t test whether they actually know what the relationship is; you can only test whether their general awareness of the law is good enough to let them spot which sort of relationship is more plausible.
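For readers who want the formal version, the framework I have in mind here is the standard one from item response theory; the gloss connecting it to four-option questions and to the SRA’s proposals is mine, not anything in the SRA’s or Case and Donahue’s materials. A sketch:

```latex
% Two-parameter logistic (2PL) model: the probability that a candidate
% of ability \theta answers item i correctly, given the item's
% difficulty b_i and discrimination a_i.
\[
  P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
\]
% With a four-option multiple-choice item, blind guessing succeeds
% roughly a quarter of the time; the 3PL model captures this with a
% floor parameter c_i (here c_i \approx 0.25):
\[
  P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
\]
% A small a_i flattens the curve: weak and strong candidates answer
% correctly at similar rates, so the item barely separates them.
```

On this picture, a question with low discrimination and a guessing floor of 0.25 tells the examiner very little about whether a candidate genuinely knows how two principles relate – which is exactly the worry set out above.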
My own experience of using multiple-choice tests diagnostically (an approach I strongly recommend) is that this is a significant issue. Consider the relationship between the equitable duty of an investment trustee not to make an unauthorised profit and the trustee’s common law duty to use reasonable care and skill in making investment decisions. Getting the relationship between the two right, and figuring out how it affects the parties’ remedies, is not easy. In my experience, if you give students a choice between four possible ways in which the two relate, they will get the answer right more frequently than they will if they are given no information about how the two relate. Long-form answer questions do the latter, and that is what makes them not just valuable but irreplaceable as a way of testing legal knowledge. It is for this precise reason that US jurisdictions have not, in fact, sought to get rid of essay questions in their approach to assessing legal knowledge.
This matters for practice. Frederick Pollock once half-jokingly called lawyers “knights of our lady of the common law”. Stripped of the chivalric hyperbole, that isn’t a bad description. We are warriors with words: our task is to use our knowledge of the law to shield our clients from those who might seek to do them harm. Whether we’re drafting contracts, preparing for a settlement negotiation against an insurance company, or formulating arguments for a hearing, our main aim is to protect our clients against those who may ruthlessly pursue interests at variance with theirs. There are plenty of examples of solicitors failing to do this precisely because they adopted the sort of objective, diagnostic approach to problems that a multiple-choice test will further encourage. The facts that led to the Supreme Court’s decision in Scott v Southern Pacific Mortgages Ltd [2014] UKSC 52 are a particularly troubling example, and they arose purely because the solicitors failed to appreciate the significance of the difference between equitable proprietary interests and personal interests.
It is, of course, possible to test for that specific item of knowledge, but it speaks to a broader underlying issue – one which law schools increasingly refer to as ‘creativity’. Unlike much of medicine, legal practice is not reducible to objective, criteria-based diagnosis. Creativity matters in the practice of law, and it implicates both legal knowledge and legal skills. It matters obviously in high-level commercial practice, as anyone who’s ever got bogged down in structuring a project finance transaction will attest; but as Scott v Southern Pacific Mortgages illustrates, it also matters in ordinary conveyancing and other such transactions. At the consultation event in York, the SRA’s representative said that the SRA took the view that creativity was not relevant to practice, but that was in all likelihood based on a misunderstanding of what academics mean by the word. By creativity, we mean the ability to think across and beyond standard legal categories, and to use that thinking to formulate solutions to particular factual situations. These are attributes which are sorely needed in the practice of law, and they simply cannot be reduced to a matter of ‘legal skills’ to be tested in Part 2 of the proposed SQE. They fundamentally implicate both knowledge and skills.
This is directly relevant to the sharp distinction the SQE will draw between the assessment of knowledge (Part 1) and the assessment of skills (Part 2). The SRA’s intention of separating out the assessment of knowledge and skills is puzzling, and it is hard to offer an informed critique of the proposal, as I have no idea where it has come from and the SRA’s documents give little background. As the discussion of the US should show, other jurisdictions don’t do this – yes, they have assessments purely focused on skills and assessments purely focused on knowledge, but they also have the essay-based assessments whose express purpose is to bring the two together. The Californian bar, for example, explains in the document quoted above that the essay questions test a combination of candidates’ knowledge of the law, their ability to apply that knowledge to the given facts, and their ability to reason in a logical, lawyer-like manner from the premises they identify to a sound conclusion.
This makes perfect sense, and it is also how legal education has long been approached in England and Wales. We’ve long separated the LLB and the LPC, sure, but both have always been about teaching knowledge and skills. The LLB teaches and assesses, and has always taught and assessed, a whole range of skills that are of critical importance to the practice of law (legal reasoning, argumentation, legal research, abstracting principles from cases, using legal knowledge to design transactional solutions, and so on). Likewise, the LPC teaches and assesses, and has always taught and assessed, a substantial body of substantive, vocationally directed legal knowledge (for example, the rules of civil and criminal procedure) in addition to vocational skills. I see no reason at all to erect a wall of separation between the assessment of knowledge and the assessment of skills. It is quite contrary to how legal education has traditionally worked here, and I suspect much of the deep discomfort the SQE proposals have created springs from this.
This also leads to the question of whether too much reliance is being placed on results obtained in the context of medical testing. As noted above, there are important differences between the practice of medicine and the practice of law, involving what I’ve called ‘creativity’ (although nothing hinges on that particular label). There are also some peculiar features of the US experience of the Multistate Bar Exam. One which stands out is the high degree of correlation between LSAT results and MBE results (r² = 0.94) (Day 2004, 328–331). This is puzzling, because the scores should not be so highly correlated: aptitude is one factor in achievement, but it is not the only one. Although this has been noted in the literature, it hasn’t, as far as I am aware, been adequately explained. Day’s article, which is very supportive of the bar exam (in its present format, which includes the MEE), seems to suggest it reflects a problem with law schools; but given the extent of the correlation, that would mean that students learn essentially nothing at law school that the MBE can detect, which appears unlikely to be correct. A second peculiarity: exploratory research done in 1998 on multiple-choice tests in accounting found that the difficulty and discrimination parameters of questions on those exams did not appear to depend on whether the questions actually followed the sort of question-writing guidance that Case and Donahue (and the SRA) discuss, with the guidance on K-type questions being the sole exception. I do not know to what extent this applies to law, but it does suggest that taking the experience of one discipline (medicine) and applying it to another (law) may not be as straightforward as it appears. To be clear: I’m not saying Case and Donahue are wrong – their advice is broadly sensible, and I follow much of it myself when writing multiple-choice questions. But I do not use such questions summatively. More research, specific to the practice of law in England and Wales, is needed before fundamental changes to assessment are brought in.
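To put a number on why that figure is so striking: Day reports only the headline statistic, so the arithmetic below is my own gloss, on the standard interpretation of the coefficient of determination.

```latex
% A coefficient of determination of 0.94 corresponds to a raw
% correlation of about 0.97:
\[
  r^2 = 0.94 \quad\Longrightarrow\quad r = \sqrt{0.94} \approx 0.97
\]
% On the usual reading, LSAT scores alone would account for 94% of
% the variance in MBE scores, leaving at most 6% for everything
% else -- including three years of law school.
```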
One final, and somewhat broader, point. Legal assessment has moved on a lot since the days when your typical exam question consisted of a random quote followed by a carriage return and the word “Discuss”. There’s been a fair bit of engagement with the pedagogy of testing, and many of us (including myself) use a range of ways of testing our students’ ability to work with the law. I personally use multiple-choice testing of the very type the SRA has in mind in diagnostic and formative assessment, and I’ll be spending a considerable portion of this summer drafting very advanced multiple-choice questions for a companion website to my forthcoming textbook on contract law. The SRA isn’t facing a situation where academics and practitioners are reacting in a blind, knee-jerk fashion to the horrifying notion that we may have to do something differently from the way it was done in the days of Coke and Hale. The concerns we have with the proposals are genuine, and are based on decades of experience in teaching, mentoring, and assessing young lawyers. This makes it a bit irksome when the SRA’s response to us starts off as if we need to be educated in modern pedagogy. I believe I speak for most academics and many practitioners when I say that we would appreciate it if our concerns were taken somewhat more seriously and given an appropriate hearing.
Finally, I want to make it clear that this post is offered in a spirit of constructive engagement. We would all like to see a more responsive and equal legal profession which does more to promote access to justice and to legal assistance for those who need it. To the extent that is the SRA’s aim, it will have the fullest support of the vast majority of legal academics. But we must ensure that we get any reforms right, because if we don’t it is the weakest in society who will be worst hit. The best-off, most sophisticated transactors will not be the ones who suffer.
That challenge is reflected in the criticisms made of the proposed SQE multiple-choice test: that it will neither test to the required standard nor assess candidates’ ability to analyse facts or to build an argument.