Wednesday, August 19, 2015

CLE Sucks (When Training Is Terrible)

Technology training is important. For a long time, I simply thought it was my job to push lawyers into technology training. And, for just as long, I thought anyone who resisted the idea of technology training for lawyers was being myopic. Again, the comic I repeatedly use to make this point:

[comic image]

I had my simplistic worldview punctured by a friend who, I hate to admit, has a stellar track record of pointing out deficiencies in my thinking. He observed that we already mandate training for lawyers. It is called Minimum Continuing Legal Education (MCLE), and it has a well-deserved reputation for, too often, being an annoying timesuck. CLE audiences are notoriously checked out as they wait for the clock to tick down. Presenters deliver the 101 version of their message because they have no way to gauge their audiences’ pre-existing knowledge and, even if they did, would still have to choose between catering to the least or most informed constituencies. It’s awful, and I hate it both as a lecturee and as a lecturer.

When you are dealing with individuals who have a high degree of variance in their pre-existing knowledge base, traditional training methods are terrible. Gathering everyone in a room and talking at them for a prescribed period of time is a recipe for disengagement. Even if the audience seems engaged, you have no way of knowing whether they are absorbing the content. Because it’s easy to measure, we have developed a very unfortunate habit of using time as a proxy for learning. The deficiencies of traditional training methods are even more evident when you are trying to teach skills. But I will address that in my next post.

Let’s start with a simple knowledge-centric example. With the State Bar of California finalizing its formal opinion that insufficient understanding of electronic discovery can violate the rules of professional conduct, there is a strong impetus for California litigators to enroll in ediscovery CLE. They will sit in rooms or watch videos in which bona fide experts tell them what they need to know and provide a compendium of useful reference material. But what evidence do we have that the audience listened, let alone learned anything? How do we know they weren’t responding to client emails or playing Angry Birds?

What if we gave them access to the same experts and compendium of reference materials but no credit for the time spent with either? What if, instead, they got credit for successfully completing a competence-based assessment of their ediscovery knowledge? Every assessment would be computer-generated from a large corpus of pertinent questions so that gaming the assessment would be far harder than, say, tuning out while the video plays in the background.
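To make the mechanics concrete, here is a minimal sketch in Python of how such an assessment might be assembled. Everything in it, from the sample question bank to the function name, is a hypothetical illustration of the idea, not the workings of any actual CLE platform:

```python
import random

# Hypothetical question bank: in practice, each topic would hold many
# vetted questions, so any one attempt sees only a small random slice.
QUESTION_BANK = {
    "preservation": [
        "When does the duty to preserve ESI attach?",
        "What must a defensible litigation hold include?",
        "How should the scope of preservation be documented?",
    ],
    "collection": [
        "What are the risks of custodian self-collection?",
        "When is forensic imaging of a device warranted?",
        "How is chain of custody maintained during collection?",
    ],
}

def generate_assessment(bank, per_topic=2, seed=None):
    """Draw a fresh random sample from each topic, then shuffle,
    so that every attempt presents a different assessment."""
    rng = random.Random(seed)
    exam = []
    for topic, questions in bank.items():
        exam.extend(rng.sample(questions, min(per_topic, len(questions))))
    rng.shuffle(exam)  # vary question order across attempts as well
    return exam

# Two attempts will, with high probability, see different questions.
print(generate_assessment(QUESTION_BANK))
```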

Our fear might be that they would simply look up the answers to the questions. Fair enough, and we could probably address this fear by putting a time limit on each question. But even if we didn’t, consider what looking up the answers actually involves. They are analyzing a question, researching an answer, and coming to a correct conclusion. In short, they are engaging with the subject material in the precise manner expected of a competent lawyer. And while our confidence in their knowledge would be far from complete, it would be a marked improvement over the status quo.
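Purely as an illustration of how a per-question time limit might be enforced in scoring, here is a short sketch; the Response record, the two-minute limit, and the passing threshold are all assumptions of mine rather than anything proposed by a bar regulator:

```python
from dataclasses import dataclass

@dataclass
class Response:
    question_id: str
    answer: str
    elapsed_seconds: float  # time the taker spent on this question

def grade(responses, answer_key, time_limit_seconds=120, passing_score=0.8):
    """Score an attempt: an answer submitted after the per-question
    time limit counts as wrong, which blunts leisurely lookups while
    still rewarding quick, well-directed research."""
    if not responses:
        return 0.0, False
    correct = sum(
        1 for r in responses
        if r.elapsed_seconds <= time_limit_seconds
        and r.answer == answer_key.get(r.question_id)
    )
    score = correct / len(responses)
    return score, score >= passing_score
```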

To the extent we are primarily concerned with time, the new approach is a problem. Some lawyers will already possess the requisite knowledge and get through the assessment in short order on the first attempt. Other lawyers will struggle with the material, requiring considerable study and multiple attempts to pass the assessment. Both types of lawyers, and those in between, will earn the same credits. For me, this is a feature, not a bug. The singular focus on time is misguided. The point of CLE is to ensure that lawyers are keeping current. Lawyers who keep current as part of their regular practice should benefit from this fact and not be forced to sit through remedial lectures just because those lectures may help some of their peers.

We should not use time as a surrogate for knowledge or skill when we can measure knowledge and skill directly. Validation that training has been successful is only one of the advantages of competence-based assessments. My next post will provide more details on why and how competence-based assessments should augment our traditional approaches to technology skills training.

[Before the trainer community excoriates me for knocking down a straw man, let me concede that I use the term “traditional” to refer to training methods that are familiar, not necessarily ubiquitous. Sit In Room/Be Talked At is my impressionistic sense of what most lawyers think of when I recommend technology training, which I often do. There are superior methods long employed by many trainers in many different settings. But a large contingent of lawyers wouldn’t know because they refuse to go.]

+++++++++++++++++++++++++++++++++++++++++++

Casey Flaherty is a lawyer, consultant, writer, and speaker. He believes that there is a better way to deliver legal services. Better for the clients. Better for the legal professionals. Better for the bottom line. Casey is creator of the Legal Technology Assessment, an integrated Basic Technology Benchmarking and training platform. Follow Casey on LinkedIn and on Twitter @DCaseyF.

See also:
Introduction
Strategic Sourcing in Legal: The Service Delivery Review
Deep Supplier Relationships in Legal
Law Firm Realizations
Structured Dialogue in the Law Dept/Firm Relationship
The Role of Nontraditional Stakeholders in Deepening the Law Dept/Firm Relationship
