
Effective Bioinformatics Programming - Part 5

First, a little irony. In the late ’90s I interviewed with BMC Software in Houston. At that time, BMC was a supporter of big iron, providing reporting facilities and the like.

When asked what software I currently used, I replied with “GNU software”. The interviewer asked, “What is GNU? I’ve never heard of it.”

I explained that it was free software that you could download from the web, etc. But they weren’t really interested.

Anyway, eWEEK.com had a feature this week, “MindTouch Names 20 Most Powerful Open-Source Voices of 2010.” The first name mentioned was William Hurley, chief architect of open-source strategy at BMC (http://www.eweek.com/c/a/IT-Management/OSBC-Names-20-Most-Powerful-Open-Source-Voices-of-2010-758420/?kc=EWKNLEDP03232010A).

I guess they’re interested now.

Data Standards

There are any number of sequence data formats. This link at EBI – http://www.ebi.ac.uk/2can/tutorials/formats.html describes several.

What is really astounding is that most of these formats have remained the same over the years. The tab-delimited and CSV (comma-separated values) formats are as prolific as ever, as is the GenBank report.

And equally astonishing is the fact that manipulating the data (e.g. parsing GenBank reports) is still the same.

True, the Bio libraries such as BioPerl, BioJava, and BioRuby now provide modules that make this easier (if you can install them), but it is still the same old download and parse.
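For instance, here is a minimal sketch of that download-and-parse cycle written with Biopython, the Python member of the Bio* family (the BioPerl idiom is nearly identical). The file name is hypothetical; assume a GenBank report has already been downloaded.

```python
# Minimal sketch: walk the records in a downloaded GenBank report
# and pull out the CDS features. "sequences.gb" is a placeholder name.
from Bio import SeqIO

for record in SeqIO.parse("sequences.gb", "genbank"):
    print(record.id, len(record.seq))
    for feature in record.features:
        if feature.type == "CDS":
            print("  CDS at", feature.location)
```

Easier than hand-rolling a parser, yes, but the workflow is unchanged: fetch the flat file, then parse it.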

There are also several groups trying to standardize sequence data. The SO (Sequence Ontology) group (http://www.sequenceontology.org) is trying to do for sequence annotations what GO (Gene Ontology - http://www.geneontology.org) did for genes and gene product attributes.

MIGS (the Minimum Information About a Genome Sequence spec, at http://nora.nerc.ac.uk/5548/) is following the course of the MAGE/MIAME standard (Minimum Information About a Microarray Experiment, at http://www.mged.org/Workgroups/MIAME/miame.html). Good luck with that, as many scientists have openly voiced objections to that standard.

XML and the Web

XML (eXtensible Markup Language) and WSDL (Web Services Description Language) are one method of easing the interchange of data. Links at – http://en.wikipedia.org/wiki/XML and http://en.wikipedia.org/wiki/Web_Services_Description_Language.

There are a number of drawbacks to this setup.

Not all of the sequence data is available in XML or well-formed XML.

Some XML, such as NCBI XML, needs further interpretation. For example, the sequence feature (annotation) locations must be “translated” for further use.
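As a toy illustration of what that “translation” can involve (the location string is made up, and this handles only the simple cases), turning a GenBank-style feature location into zero-based intervals might look like this in Python:

```python
import re

# Toy translation of a GenBank-style location string such as
# "complement(join(10..50,60..100))" into 0-based, half-open intervals.
# Real locations have many more cases (partial bounds, remote accessions).
def parse_location(loc):
    strand = -1 if loc.startswith("complement") else 1
    return [(int(start) - 1, int(end), strand)
            for start, end in re.findall(r"(\d+)\.\.(\d+)", loc)]

print(parse_location("complement(join(10..50,60..100))"))
# [(9, 50, -1), (59, 100, -1)]
```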

XSLT has performance issues and is size-limited. We tried processing LARTS-converted NCBI ASN.1 GenBank XML data with XSLT and found there were definite size limitations.

Using WSDL means exposing yourself to the world via the web.

JavaScript has too many open security questions to be considered seriously.

Software Development

Software development takes time and the right people. True, there is a lot of open source software out there, but I’ve mentioned the perils of that method in a previous blog.

A scientist with a grant to produce results dependent on computer analysis is only going to write code that is just good enough to back up those findings, or find someone (read: a post-doc) who can write that code very cheaply.

Has the code been extensively tested? Are the results produced by the code valid? Can the code be used by future projects? Is the software portable? Is it robust? Can it be ported to different hardware environments?

There is a great article, “Are we taking supercomputing code seriously?”, at http://www.zdnet.co.uk/news/it-strategy/2010/01/28/are-we-taking-supercomputing-code-seriously-40004192/. This article, in turn, has links to other articles on methods, algorithms, and error behavior. One of these, on scientific software, considers how multi-processing has influenced algorithm development, and the problem of different multi-processors co-existing on the same machine (http://www.scientific-computing.com/features/feature.php?feature_id=262).

Its author states that in the rush to do science, scientists fail to spot software for what it is: the analogue of the experimental instrument. The software must therefore be treated with the same respect that a physical experiment would receive.

When I started my career, I worked on a totally integrated database system for hospitals. It was one of those systems that was so far ahead of its time (the mid-’80s) that a corporation bought the product and squashed it.

Anyway, our Systems and Extensions group supported the six compilers that made up the system software. The tailoring group wrote the code that created the screens that drove the system.

At the inception of the system, a decision had to be made over the makeup of the tailoring group: should they be programmers who would be taught medical jargon, terms, etc., or should they be medical personnel (doctors, nurses, techs) who would be taught programming?

The decision was to go with medical personnel, as it was surmised they would understand hospitals better.

At the same time, a limit of 500 on the number of screens a hospital could request (this customization was called tailoring) was discussed. The decision was to let the hospital have however many screens it wanted.

The tailoring group got their training and set in to programming. After a period of time, it was realized that the group had, in essence, created one bad program and copied it thousands of times.

It was so bad, we did two things. First, we created a program profiler that produced a performance summary of a given program. (We were immediately asked by the tailoring group to remove it, as it was too confusing.) Second, we created an automated programming module that would generate the code from the display widgets on the screen designed by the tailoring group.

This approach was helping, but people were abandoning ship as talk of an acquisition was surfacing. Our junior programmer went from new-hire to senior team member in 30 days.

I think we would have done a lot better with programmers learning medical terms.

As for the hospital screen limit, we had hospitals with 10,000 individual screens. We should have stuck with 500.

One last thing. When looking at any piece of scientific programming, please realize that the author accreditation usually starts with the PI. The people who did the actual work are generally listed at the end of the line. The PI may have had the idea but, likely as not, could not code it.

Interpreting Standards

I found an article in the December 2008 issue of Nature Methods to be of particular interest, not least because I personally know the authors.

The article, under CORRESPONDENCE on page 991, surveyed a series of papers from the 2007 issues of 20 journals. The purpose was to refute the Nature Methods editorial of March 2008, which asserted that the deposition of supporting raw microarray datasets is “routine”. The authors compared the data cited in the articles against what was actually available in public databases and found that the rate of deposition of datasets was less than 50%. Only half of the discovery data on which the articles were based was available to the public.

They further laid part of the fault on the MIAME (Minimum Information About a Microarray Experiment) standard itself, asserting that microarray data, “owing to their highly contextual nature, have a more complex metadata structure than sequence data.”

The MIAME standard was forged by the MGED (Microarray Gene Expression Data) Society and published in Nature Genetics 29, 365-371 (2001). MGED also houses the MicroArray Gene Expression (MAGE) Object Model, which defines the entire environment of the experiment (e.g. organism, array design, etc.). MIAME is the standard; MAGE adheres to the MIAME standard and suggests formats for the representation and submission of microarray data.

The premier microarray data repository is ArrayExpress located at http://www.ebi.ac.uk/microarray-as/ae/.

ArrayExpress is a public repository for transcriptomics data. It is aimed at storing data that is compliant with MIAME, in accordance with the MGED recommendations (http://www.mged.org/recommendations), and with MINSEQE for high-throughput data (http://www.mged.org/minseqe/). The ArrayExpress Warehouse stores gene-indexed expression profiles from a curated subset of experiments in the repository.

Other sites are GEO (Gene Expression Omnibus - http://www.ncbi.nlm.nih.gov/geo/) and CIBEX (Center for Information Biology gene EXpression database - http://cibex.nig.ac.jp/index.jsp).

MIAMExpress (a MIAME-compliant microarray data submission tool) is currently available at http://sourceforge.net/projects/miamexpress/ and is the submission tool for microarray experiments. It is downloaded to your local system and must be built (compiled) there in order to use it. A local installation of the MySQL database is required, as is the Perl programming language.

The MIAME/MAGE meta-data model is described in UML (Unified Modeling Language). The group suggests mark-up languages for data submissions; they provide MAGE-ML, which is defined by an XML DTD. In addition, there is a spreadsheet-like tabular format, MAGE-TAB, that has just been announced.

This data model is difficult to interpret. Fitting your data to this model can be a real trick. I know, I’ve tried, and I’ve got years of work with formal data specifications behind me. For the average lab tech it is almost impossible to interpret. A bioinformatics programmer whose exposure is limited to MS Word and MS Excel (which I have read are the two most important requirements for success in bioinformatics!) would be in the same boat.

I have nothing against models and standards.  Standards bring order to chaos — if they are simple enough to interpret and implement.

The article goes on to call for microarray data to be submitted in the GenBank format.

Just about everybody in the biosciences field is familiar with this format. Most important, they know how to submit data that will be interpreted as GenBank data.

GenBank data is stored internally at NCBI in ASN.1.  The ASN.1 format is extensively used in telecommunications and other areas.  After years of working with ASN.1 and especially NCBI ASN.1, I have to say that it is ideal for the storage of sequence and other data.

ASN.1 is infinitely extensible through its recursive abilities.   This is great in that it can encompass all the data for a particular data object.  However, the nesting nature of the ASN.1 construct can cause one to literally pull out one’s hair. 
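A toy sketch of the problem (the structure below is a simplified stand-in for a real Bioseq-set, not actual NCBI ASN.1): because a Bioseq-set can contain further Bioseq-sets, any consumer has to recurse.

```python
# Simplified stand-in for a nested Bioseq-set; the real thing nests
# much deeper. Any code that consumes it must walk recursively.
def walk(node, depth=0):
    if isinstance(node, dict):
        for key, value in node.items():
            print("  " * depth + key)
            walk(value, depth + 1)
    elif isinstance(node, list):
        for item in node:
            walk(item, depth)
    else:
        print("  " * depth + repr(node))  # leaf value

bioseq_set = {"Bioseq-set": {"seq-set": [
    {"Bioseq": {"id": "NM_000059"}},
    {"Bioseq-set": {"seq-set": [{"Bioseq": {"id": "NP_000050"}}]}},
]}}
walk(bioseq_set)
```

Flattening that recursion into relational tables is exactly where the trouble in the next paragraph comes from.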

ASN.1 doesn’t gracefully translate into SQL. It is possible, but it is not very pretty, and the queries are ridiculously complex.

Using NCBI Toolkit code to access ASN.1 data works if one knows C/C++ and has lots of experience working with large, complex software suites.

Our product (LARTS) was developed to make working with NCBI ASN.1 data a little easier and to create a new paradigm of searching NCBI ASN.1 data.

NCBI ASN.1 was distilled into a grammar that is parsed much like a programming language, or the way a sentence is parsed for English class. That grammar translates the ASN.1 into XML Schema. The resulting XML can then be filtered for specific values or formatted for specific output, such as a GenBank-like report.
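Once the data is XML, filtering becomes ordinary XML work. A hedged sketch (the element and file names here are hypothetical, not the actual LARTS output):

```python
import xml.etree.ElementTree as ET

# Hypothetical example: pull every title descriptor out of a
# LARTS-converted file. Element and file names are illustrative only.
tree = ET.parse("bioseq-set.xml")
for elem in tree.iter("Seqdesc_title"):
    print(elem.text)
```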

The new paradigm means that the serious user should become somewhat familiar with the NCBI ASN.1 data structures. By serious, I mean someone who wants to go beyond the currently offered output formats.

Our ncbixref link (http://www.lifeformulae.com/lartsonline/docs/ncbixref/NCBI-Seqset.html#Bioseq-set) provides a way to traverse these structures, starting with the top-level Bioseq-set.

In some instances, the ASN.1 data structure names don’t really describe the data they define.  For example, the ASN.1 data structure for dbSNP is ExchangeSet (http://www.lifeformulae.com/lartsonline/docs/ncbixref/Docsum-3-0.html#ExchangeSet).

Yet Another Standard

The Genomics Standards Consortium has a suggested format for next-generation sequencing experiments called MIGS (http://gensc.org/gc_wiki/index.php/Main_Page), or Minimum Information About a Genome Sequence. Its extension is MIMS - Minimum Information about a Metagenomic Sequence. The MIGS/MIMS data models are expressed in GCDML — Genomic Contextual Data Markup Language (http://gensc.org/gc_wiki/index.php/GCDML). GCDML is implemented using XML Schema.

Let’s hope the meta-data is kept to that “minimum”, but looking at http://www.nature.com/nbt/journal/v26/n5/box/nbt1360_BX1.html, it doesn’t seem so.

At any rate, the move toward XML Schema is a good thing and fits in well with our thinking.
