When I arrived at the National Institute of Justice to begin my Visiting Research Fellowship in 1995, my program manager Dick Rau mentioned that he had a job for me. I was expected to provide some technical support to NIJ staff, but I certainly did not expect what this job would turn into. I think if I had known what a hornet's nest I was blithely walking into, I would have run away as fast as I could.

The job was to “help the questioned documents community with their troubles. After all, you work in documents too, right? I know, language, not handwriting, but close enough…”

Just a few months before I arrived at NIJ, United States v. Starzecpyzel, 880 F. Supp. 1027 (S.D.N.Y. 1995) had been published. With this case, the first attack on a forensic science, handwriting identification, had been launched, based on the United States Supreme Court's ruling about expert scientific evidence and the need for error rates: the famous Daubert ruling. This attack had shaken the questioned documents (QD) community because it was so unexpected. Handwriting identification by visual inspection had been admitted as solid testimony for decades, including in some famous cases. Handwriting examiners had never had to state how well the visual inspection method worked, or how proficiently they could use it; they just did their casework and frankly didn't have time for any basic research, which wasn't previously needed by the courts anyway. Under the old Frye standard, they had no problems, because the group they belonged to, other forensic document examiners, all agreed that visual inspection worked extremely well (and, some have said to me, even perfectly) at identifying the writer of a text.

When I read the Starzecpyzel case at NIJ, I could see that not only handwriting but other forensic sciences were heading in the same direction: the need for error rates!

I had come to forensic science from linguistics and psycholinguistic experimentation, and the goal of my fellowship proposal at NIJ was to determine what, if any, language-based methods could reliably determine authorship. I knew the research agenda I was working under: develop a database of known authors so I had "ground truth data" to work with, then start testing techniques that had been proposed, come up with some of my own, and start counting up the results: how many misses, how many hits. What I was doing really did not seem far off from what the courts and law professors were asking the QD community to do.

Oooops. Dick set up some meetings with QD examiners in the Federal agencies. It was clear that the community was torn to shreds at the thought of a research agenda, with some staunchly opposed to it and others just overwhelmed by how to even get started. In other words, no one was jumping for joy and writing a grant to do the research; clearly I was a stranger in a strange land.

Over the course of the next three years, I arranged small meetings, then a workshop for academics and Federal and State QD examiners, and finally the establishment of the Technical Working Group on Questioned Document Examination (TWGDOC). I'm proud to wear the many tomato-stained shirts I got from bearing the brunt of obstructionism, anger, and fear at the meetings I scheduled to "help them with their troubles…."

Those few who wanted to get started, who saw the need to change their traditional way of doing QD examination, are doing some good work these days, as shown by the 2012 AFDE conference agenda.

And even those who so rabidly opposed my ideas about computerized measurement, statistical procedures, and database development, those very same examiners, are now in the "new paradigm," as a symposium sponsored by NIJ and the FBI attests. No tomatoes allowed inside, this time around.

I’ll be reporting on this good work in days to come.