
Open Access Research

OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS

Siu-wai Leung 1,2,3*, Xueping Quan 1,4*, Paolo Besana 1, Qian Li 1, Mark Collins 1, Dietlind Gerloff 5 and Dave Robertson 1*

Author Affiliations

1 School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, UK

2 State Key Laboratory of Quality Research in Chinese Medicine, University of Macau, Macao SAR, China

3 Institute of Chinese Medical Sciences, University of Macau, Macao SAR, China

4 Division of Ecology and Evolution, Imperial College London, London SW7 2AZ, UK

5 Department of Biomolecular Engineering, University of California, Santa Cruz, CA 95064, USA


Automated Experimentation 2011, 3:3  doi:10.1186/1759-4499-3-3

Published: 22 December 2011



Background

Traditional scientific workflow platforms usually run individual experiments with little evaluation or analysis of performance, yet such evaluation is required by automated experimentation, in which scientists can access numerous applicable workflows rather than commit to a single one. Experimental protocols and data in a peer-to-peer environment could potentially be shared freely, without any single point of authority dictating how experiments should be run. In such an environment, mechanisms are needed by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure through both simulated and real-world bioinformatics experiments involving multi-agent interactions.


Methods

A simulated experiment environment with a peer ranking capability was specified in the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. Peers, such as MS/MS protein identification services (including both web-enabled and standalone programs), were made accessible as OpenKnowledge Components (OKCs) for automated execution in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms.
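The kind of simulated peer-selection loop used in such experiments can be sketched in Python. This is a minimal illustration, not the paper's actual LCC interaction models or OKC interfaces; the peer counts, query counts, and the preferential-selection rule are all assumptions chosen to show how repeated selection can concentrate usage on a few peers:

```python
import random

def run_simulation(num_peers=50, num_queries=5000, seed=42):
    """Simulate repeated peer selection with a rich-get-richer rule:
    a peer that has served more queries in the past is proportionally
    more likely to be chosen again, a simple mechanism that tends to
    produce a power-law-like usage distribution."""
    rng = random.Random(seed)
    usage = [1] * num_peers  # start each peer with one nominal use
    for _ in range(num_queries):
        # choose a peer with probability proportional to its past usage
        pick = rng.choices(range(num_peers), weights=usage)[0]
        usage[pick] += 1
    # return usage counts sorted best-first
    return sorted(usage, reverse=True)

counts = run_simulation()
```

Inspecting `counts` after a run typically shows a small number of peers accounting for a large share of all queries, the characteristic long-tailed shape referred to in the results below.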


Results

Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., a power-law effect (a few peers dominate), similar to those observed on the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing, with another peer ranking algorithm based simply on counting successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic peers and found MASCOT to be a dominant peer as judged by peer ranking.
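A ranking rule of the success/failure-counting kind described here can be illustrated with a short Python sketch. The class and the example outcome numbers are illustrative only (MASCOT is named in this study; "PeerB" is a hypothetical second peer), and the success-rate score is one plausible way to turn run counts into a ranking, not the paper's exact formula:

```python
from dataclasses import dataclass

@dataclass
class PeerRecord:
    """Tracks one peer's observed run outcomes."""
    successes: int = 0
    failures: int = 0

    @property
    def score(self) -> float:
        # success rate; 0.0 for a peer with no recorded runs
        total = self.successes + self.failures
        return self.successes / total if total else 0.0

def rank_peers(records: dict) -> list:
    """Return peer names ordered best-first by success rate,
    breaking ties in favour of peers with more successful runs."""
    return sorted(records,
                  key=lambda p: (records[p].score, records[p].successes),
                  reverse=True)

# Illustrative outcomes for two identification peers
records = {
    "MASCOT": PeerRecord(successes=9, failures=1),
    "PeerB": PeerRecord(successes=5, failures=5),
}
print(rank_peers(records))  # ['MASCOT', 'PeerB']
```

Counting successes and failures per peer keeps the evaluation entirely local to each scientist's node, which fits the peer-to-peer setting: no central authority needs to hold the run history.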


Conclusions

The simulated and real-world experiments in the present study demonstrated that the OpenKnowledge infrastructure with peer ranking capability can serve as an evaluative environment for automated experimentation.