Thursday, May 27, 2004

Volokh evidences the existence of “merit”:

Eugene Volokh statistically demonstrates that, in all likelihood, students who get better grades on law school essay exams (or at least on his exams) actually write better exams. In other words, he demonstrates that there is likely a great deal of objectivity in the grading of law school essay exams.

When I was in law school, students, and even many professors, argued that the grading process was “subjective,” meaning that grades in law school really didn’t mean much of anything other than whether the professor was in a good mood on the day that he or she graded your exam (and what does it say about the professors that they would argue that their own grades don’t mean much?).

All of this, of course, was a big attack on the notion of meritocracy. Not everyone graduates with honors, and yes, some folks have to graduate at the bottom of the class. Moreover (and this is another important reason why meritocracy is attacked), grades aren’t distributed proportionally among all “groups”; some “groups,” as a whole, get better grades, on average, than others.

Attacking pretty much any notion of meritocracy is just part & parcel of what it means to be a politically correct bleeding heart EGALITARIAN law student or professor.

I myself am skeptical of these attacks on meritocracy. I tend to believe that the students who got the better grades actually wrote better exams (not that they were necessarily smarter), and that if you wrote enough good exams to attain a good GPA, then you are probably a better student and likewise would probably (for good reason, I italicized that term both times I wrote it) make a better lawyer than someone with a lower average.

Now, one problem with these generalizations is that, while I think they are accurate, they are nonetheless generalizations, meaning that they tell you, more or less, what is likely rather than what will be to an absolute moral certainty. But absent a crystal ball, we will never know what will be.

In other words, it’s entirely possible (and I’m sure there are many anecdotes that could be given to me to illustrate this) that someone who graduates law school at the bottom of his class, doesn’t serve on a journal or do anything to distinguish himself, and fails the bar one or more times may turn out to be a spectacular attorney. And likewise it’s entirely possible that someone who graduates at the top of her class, makes Law Review, and passes the Bar exam on the first try may turn out to be a total disaster in the real world. But this doesn’t mean that such merit criteria are meaningless. Such criteria are useful insofar as they help to PREDICT (that’s the magic word) who the more competent lawyer will be. And unless we get a perfect correlation of “one” between our evaluative merit criteria and success on the job (which never happens; that’s our crystal ball standard), there will always be exceptions to the rule.

But we err when we conclude that the existence of the exception, or of many exceptions, destroys the rule. In other words, if I were to make the claim that grades, honors status, Law Review, etc. do mean something, I will likely get the egalitarian response, “well, I knew X, and she was Ms. Perfect Student and turned out to be a disaster,” or “Y failed the bar twice and then she became a superstar….” That these exceptions exist does not prove the absence of predictive validity in evaluative “merit” criteria. Critics of merit criteria like Lani Guinier would have to demonstrate that there is a correlation of “zero” between traditional merit criteria and success as an attorney.
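To make the statistical point concrete, here is a minimal sketch using made-up numbers (not real data about lawyers or law students): a predictor can fall well short of a perfect correlation of 1.0 and still be useful, and exceptions in both directions are exactly what an imperfect correlation guarantees.

```python
# A sketch with hypothetical numbers: an imperfect but positive correlation
# still predicts on average, even though individual exceptions exist.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical standardized "merit" score (think: law school GPA)
gpa = rng.normal(0, 1, n)
# Hypothetical later success, only partly explained by the merit score (r ~ 0.5)
success = 0.5 * gpa + np.sqrt(1 - 0.5**2) * rng.normal(0, 1, n)

r = np.corrcoef(gpa, success)[0, 1]
print(f"correlation: {r:.2f}")  # well below a perfect 1.0

top = success[gpa > 1.0]      # students well above average
bottom = success[gpa < -1.0]  # students well below average
print(f"average success, top group:    {top.mean():+.2f}")
print(f"average success, bottom group: {bottom.mean():+.2f}")

# Exceptions run in both directions, even though the generalization holds on average
print("share of top students who flopped:     ", np.mean(top < 0).round(2))
print("share of bottom students who excelled: ", np.mean(bottom > 0).round(2))
```

The generalization (“higher merit scores predict better outcomes”) survives every one of those exceptions; only a correlation of zero would kill it.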

Let me close with an actual example from my law school. When I attended Temple University's School of Law, we had a sub-par Bar passage rate compared to other schools of similar reputation. (We no longer do. On an interesting side note, as law school admissions have gotten more competitive since I graduated in 1999, Temple’s average LSAT score for admitted students has risen, and our bar passage rate has increased along with it, so a correlation between LSAT scores and Bar passage rates exists for our school.) So the Dean, in his concern, studied the issue. He found that those graduating in the top third of the class had roughly a 90% passage rate (which is very good), that there was a significant drop for the middle third, and that the bottom third of the class had an extremely poor passage rate. In other words, there was a correlation between GPA and passing the bar. But it wasn’t a perfect correlation: you can point to examples from the 10% of our top third who FAILED the bar, and there were many folks in the bottom third who passed the bar as well. But these exceptions don’t disprove the accuracy of the generalization, “the students who do better in law school tend to do better on the bar exam as well.”
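The Dean’s finding can be illustrated with a short sketch. Only the roughly 90% top-third figure comes from the study as I recall it; the middle- and bottom-third rates below are assumptions plugged in purely for illustration. The point is just that the class-rank/bar-passage correlation is real but imperfect, so the exceptions (top-third failures, bottom-third passers) appear automatically.

```python
# Illustrative only: the 0.90 top-third rate echoes the study; the other two
# rates are assumed. Shows a positive but imperfect rank/passage correlation.
import numpy as np

rates = {"top third": 0.90, "middle third": 0.70, "bottom third": 0.45}  # last two assumed
n_per_third = 100
rng = np.random.default_rng(1)

thirds, passed = [], []
for rank, (label, rate) in enumerate(rates.items()):
    thirds += [2 - rank] * n_per_third            # 2 = top, 1 = middle, 0 = bottom
    passed += list(rng.random(n_per_third) < rate)  # simulate pass/fail at that rate

thirds = np.array(thirds)
passed = np.array(passed, dtype=float)

r = np.corrcoef(thirds, passed)[0, 1]
print(f"class-third vs. bar-passage correlation: {r:.2f}")  # positive, well below 1

# The exceptions mentioned above: top-third failures and bottom-third passers
print("top-third failures:  ", int((passed[thirds == 2] == 0).sum()))
print("bottom-third passers:", int((passed[thirds == 0] == 1).sum()))
```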
