Jacob dropped me a line saying that there is light at the end of the tunnel, a roadmap out of genebank database hell. He’s right, of course. What they’re doing at IRRI ((Some of it under the coordination of the System-wide Genetic Resources Programme (SGRP) of the CGIAR.)) and elsewhere will definitely help users of genebanks avoid the digital quagmire I described in my earlier post. But how did we ever let ourselves get into this mess in the first place?
So, you want to know the answer?
Some of us have too long memories!
Who remembers Dave Rogers in Colorado and the TAXIR and EXIR systems? Or Lothar Seidewitz in Braunschweig, who tried to pioneer the standardisation of descriptors? They both go back to the 1970s.
My perception was that it became almost politically incorrect to try to make everyone conform in terms of data management systems or standardised descriptive terminology for genetic resources back in the 80s and 90s, and hence the quagmire that Luigi refers to. Doing your own thing ruled then, but history is a great teacher, and we now need to move forward quickly. GIGA would certainly seem to be the way forward.
I think the answer is not too little or too much system or standards. Those are important, for sure, but I think the real reason for genetic resources database hell is a lack of attention to detail and of enthusiasm (or rewards) for the painstaking grunt work of maintaining good databases. Most of that work is pretty straightforward. While you need some skills, above all you need perseverance, and a long view. Perhaps you also need to be of the opinion that the value of an accession depends greatly on the quality of the associated data. Take GRIN, the USDA germplasm database system. Very good system. Does that mean the data are necessarily good? No, because that depends on individual curators and on the interest of people with an overarching role in ensuring, or enforcing, quality control.