---
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - en
tags:
  - biology
  - plant
  - molecular
  - gene function
  - gene regulation
---

# Dataset Card for MoBiPlant

## Table of Contents

1. Dataset Summary
2. Dataset Details
3. Supported Tasks and Leaderboards
4. Languages
5. Dataset Structure
6. Data Fields
7. Usage
8. Citation

## Dataset Summary

MoBiPlant is a multiple-choice question-answering dataset curated by plant molecular biologists worldwide. It comprises two merged versions:

- **Expert MoBiPlant**: 565 expert-level questions authored by leading researchers.
- **Synthetic MoBiPlant**: 1,075 questions generated by large language models from papers in top plant science journals.

Each example consists of a question about plant molecular biology, a set of answer options, and the index of the correct answer. The dataset benchmarks models' multiple-choice knowledge of plant molecular biology.

## Dataset Details

## Supported Tasks and Leaderboards

The primary task is:

- **Multiple-Choice Question Answering**: Given a question and a list of answer choices, predict the index of the correct option.
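
Scoring this task reduces to index-match accuracy. A minimal sketch (the function name is illustrative, not part of the dataset's tooling):

```python
def mcq_accuracy(preds, golds):
    """Fraction of questions where the predicted 0-based option index
    matches the gold index."""
    assert len(preds) == len(golds)
    correct = sum(p == g for p, g in zip(preds, golds))
    return correct / len(golds)

print(mcq_accuracy([0, 2, 1, 3], [0, 2, 2, 3]))  # 3 of 4 correct -> 0.75
```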

### Leaderboard

Benchmark on Expert MoBiPlant (565 questions):

| Model | CoT Answer Accuracy (%) |
|---|---|
| LLaMA 3.1 405B | 77.6 |
| GPT-4o | 81.2 |
| o1-mini | 81.1 |
| DeepSeek-V3 | 84.3 |
| DeepSeek-R1 | 86.4 |
| Claude 3.5 Sonnet | 88.1 |
| Gemini 1.5 Pro | 76.8 |

For full results on both versions, see the associated paper.

## Languages

- English

## Dataset Structure

- **Versions**:
  - **Expert**: 565 expert-authored questions.
  - **Synthetic**: 1,075 LLM-generated questions.
- **Splits**: the `train` split contains all examples. To access each version separately, see Usage.
- **Number of Examples**: 1,640 total across the expert and synthetic sets.

## Data Fields

Each entry in the `train` split contains:

| Field | Type | Description |
|---|---|---|
| `question` | `string` | The MCQ question text. |
| `options` | `list[string]` | A list of possible answer strings. |
| `answer` | `int` | 0-based index of the correct option in `options`. |
| `area` | `string` | General research area (e.g., `GENE REGULATION - TRANSLATION`). |
| `normalized_area` | `string` | Normalized research area category (e.g., `GENE REGULATION`). |
| `plant_species` | `list[string]` | Original plant species labels (e.g., `["Arabidopsis thaliana", "Zea mays"]`). |
| `normalized_plant_species` | `string` | Normalized plant species label (e.g., `Non-specific`). |
| `doi` | `string` | DOI of the primary source publication. |
| `source` | `string` | URL or citation of the source article. |
| `source_journal` | `string` | Journal in which the source article was published. |
| `Year` | `int` | Publication year of the source. |
| `Citations` | `int` | Number of citations the source article has received. |
| `is_expert` | `bool` | `True` if the example belongs to the Expert MoBiPlant subset; `False` otherwise. |
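
For orientation, here is an invented record matching this layout (every value is a placeholder, not an actual dataset entry):

```python
# Hypothetical example illustrating the field layout; values are invented.
example = {
    "question": "Which hormone primarily mediates stomatal closure under drought?",
    "options": ["Abscisic acid", "Auxin", "Gibberellin", "Cytokinin"],
    "answer": 0,  # 0-based index into `options`
    "area": "GENE REGULATION - TRANSLATION",
    "normalized_area": "GENE REGULATION",
    "plant_species": ["Arabidopsis thaliana"],
    "normalized_plant_species": "Arabidopsis thaliana",
    "doi": "10.0000/placeholder",
    "source": "https://example.org/article",
    "source_journal": "Example Journal",
    "Year": 2024,
    "Citations": 10,
    "is_expert": True,
}

print(example["options"][example["answer"]])  # -> Abscisic acid
```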

## Usage

```python
from datasets import load_dataset

# Load from the Hugging Face Hub
mobiplant = load_dataset("manufernandezbur/MoBiPlant")["train"]

# Split into the expert and synthetic subsets (optional)
expert_mobiplant = mobiplant.filter(lambda example: example["is_expert"])
synth_mobiplant = mobiplant.filter(lambda example: not example["is_expert"])

# Example iteration
for example in expert_mobiplant:
    question = example["question"]
    options = example["options"]
    label = example["answer"]
    print(f"Question: {question}")
    print("Options:\n" + "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)))
    print("Correct Answer:", options[label])
```
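
Because the associated paper reports that most models showed a bias toward option A, the position of the gold answer is worth inspecting before evaluation. A minimal sketch (the function name is illustrative, not part of the dataset's tooling):

```python
from collections import Counter

def answer_position_distribution(dataset):
    """Count how often each 0-based answer index holds the correct option."""
    return Counter(ex["answer"] for ex in dataset)

# e.g., with the dataset loaded as above:
#   dist = answer_position_distribution(expert_mobiplant)
#   always_a_accuracy = dist[0] / sum(dist.values())
```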

## Citation

```bibtex
@article{FernandezBurda2025.08.31.672925,
    author = {Fernandez Burda, Manuel and Ferrero, Lucia and Gaggion, Nicol{\'a}s and Fonouni-Farde, Camille and The MoBiPlant Consortium and Crespi, Mart{\'\i}n and Ariel, Federico and Ferrante, Enzo},
    title = {What Large Language Models Know About Plant Molecular Biology},
    elocation-id = {2025.08.31.672925},
    year = {2025},
    doi = {10.1101/2025.08.31.672925},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {Large language models (LLMs) are rapidly permeating scientific research, yet their capabilities in plant molecular biology remain largely uncharacterized. Here, we present MoBiPlant, the first comprehensive benchmark for evaluating LLMs in this domain, developed by a consortium of 112 plant scientists across 19 countries. MoBiPlant comprises 565 expert-curated multiple-choice questions and 1,075 synthetically generated questions, spanning core topics from gene regulation to plant-environment interactions. We benchmarked seven leading chat-based LLMs using both automated scoring and human evaluation of open-ended answers. Models performed well on multiple-choice tasks (exceeding 75\% accuracy), although most of them exhibited a consistent bias towards option A. In contrast, expert reviews exposed persistent limitations, including factual misalignment, hallucinations, and low self-awareness. Critically, we found that model performance strongly correlated with the citation frequency of source literature, suggesting that LLMs do not simply encode plant biology knowledge uniformly, but are instead shaped by the visibility and frequency of information in their training corpora. This understanding is key to guiding both the development of next-generation models and the informed use of current tools in the everyday work of plant researchers. MoBiPlant is publicly available online in this link. Competing Interest Statement: The authors have declared no competing interest.},
    URL = {https://www.biorxiv.org/content/early/2025/09/04/2025.08.31.672925},
    eprint = {https://www.biorxiv.org/content/early/2025/09/04/2025.08.31.672925.full.pdf},
    journal = {bioRxiv}
}
```