---
license: other
task_categories:
  - text-generation
  - question-answering
  - text-classification
  - text-retrieval
  - text-ranking
language:
  - en
tags:
  - security
  - cve
  - nvd
  - vulnerabilities
  - cybersecurity
  - cyber-security
  - cwe
  - cvss
  - jsonl
  - slimpajama
  - text
  - dataset
  - rag
  - retrieval
  - question-answering
pretty_name: TanDev CVE Dataset
size_categories:
  - 100K<n<1M
dataset_info:
  config_name: cve
  features:
    - name: text
      dtype: string
    - name: meta
      struct:
        - name: source
          dtype: string
        - name: source_url
          dtype: string
        - name: license
          dtype: string
        - name: cve_id
          dtype: string
        - name: published
          dtype: string
        - name: last_modified
          dtype: string
        - name: cvss
          struct:
            - name: severity
              dtype: string
            - name: baseScore
              dtype: float64
            - name: vectorString
              dtype: string
        - name: cwes
          list: string
        - name: num_cpes
          dtype: int64
        - name: redpajama_set_name
          dtype: string
  splits:
    - name: train
      num_bytes: 379034186
      num_examples: 316780
  download_size: 90048495
  dataset_size: 379034186
configs:
  - config_name: cve
    data_files:
      - split: train
        path: cve/train-*
---

# TanDev CVE Dataset (NVD SlimPajama Corpus)

A SlimPajama‑style corpus of CVE entries derived from the NIST NVD (CVE 2.0) data feeds (2002→present). Each row is a cleaned, single‑document text representation of a CVE with structured metadata for CVSS, CWE(s), timestamps, and a canonical NVD link—ready for pretraining/continued‑pretraining, RAG, retrieval/evaluation, and downstream classifiers.

Token count: ~108.2M.

License: "TanDev Proprietary License — All Rights Reserved"

⚠️ Ethical and responsible use. This dataset summarizes publicly available vulnerability records. Use responsibly for research, education, and defensive security; always validate against vendor advisories before operational use.


## What’s in this release (Parquet)

- Primary delivery: Parquet shards under `data/<config>/…/train-*.parquet` for fast streaming with `datasets`.
- Raw JSON is kept alongside in `raw/` for transparency and reproducibility.
- One named config: `cve` (covers all available CVE rows).

If you previously downloaded `raw/cve.json[l]`, you can keep using it. The Hub will serve Parquet for `load_dataset(..., name="cve")` automatically.
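
If you only need to iterate (e.g. for tokenization or filtering pipelines), the split can also be streamed so the full Parquet download is deferred; a minimal sketch using `datasets` streaming mode:

```python
from datasets import load_dataset

# Stream records lazily instead of materializing the full split locally.
stream = load_dataset("tandevllc/cve_dataset", name="cve", split="train", streaming=True)
for i, row in enumerate(stream):
    meta = row["meta"]
    print(meta["cve_id"], (meta.get("cvss") or {}).get("severity"))
    if i >= 4:  # peek at the first five records
        break
```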


## Directory layout

```
/                             # dataset root (this card lives here as README.md)
  raw/
    cve.json | cve.jsonl[.gz|.zst]   # original export retained
  data/
    cve/
      default/1.0.0/train/
        train-00000-of-XXXXX.parquet
        train-00001-of-XXXXX.parquet
        ...
```
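
If you mirror the repository locally (for example with `huggingface_hub.snapshot_download`), individual shards can also be read directly; a sketch assuming the layout above and a hypothetical local copy under `./cve_dataset`:

```python
import glob

import pandas as pd  # requires pyarrow or fastparquet for Parquet support

# Hypothetical local checkout path; adjust to wherever the snapshot was downloaded.
shards = sorted(glob.glob("cve_dataset/data/cve/default/1.0.0/train/train-*.parquet"))
df = pd.read_parquet(shards[0])  # one shard is enough for a quick look
print(df.shape)
print(df.columns.tolist())
```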

## Schema

Each record follows exactly this structure:

```json
{
  "text": "<single-document CVE text with header, dates, CWEs, affected summary, description, references>",
  "meta": {
    "source": "nvd",
    "source_url": "https://nvd.nist.gov/vuln/detail/CVE-YYYY-XXXXX",
    "license": "Public Domain (US Gov / NIST NVD)",
    "cve_id": "CVE-YYYY-XXXXX",
    "published": "YYYY-MM-DDTHH:MM:SSZ",
    "last_modified": "YYYY-MM-DDTHH:MM:SSZ",
    "cvss": {
      "severity": "CRITICAL|HIGH|MEDIUM|LOW|NONE|UNSPECIFIED",
      "baseScore": 9.8,
      "vectorString": "CVSS:3.1/AV:N/AC:L/..."
    },
    "cwes": ["CWE-79", "CWE-89"],
    "num_cpes": 0,
    "redpajama_set_name": "SecurityNVD"
  }
}
```
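
The raw export in `raw/` mirrors the same structure, so a quick sanity check against one line of the JSONL file looks like this (assuming a local, uncompressed `raw/cve.jsonl`; adjust for the `.json`/`.gz`/`.zst` variants):

```python
import json

# Parse the first record of the raw export and compare its keys to the schema above.
with open("raw/cve.jsonl", "r", encoding="utf-8") as fh:
    record = json.loads(fh.readline())

print(record["meta"]["cve_id"])
print(sorted(record["meta"].keys()))
```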

### Field notes

- `text` — plain UTF‑8 prose; no HTML; newlines preserved; boilerplate reduced.
- `meta.cvss.severity` — string label (e.g., CRITICAL, HIGH, …).
- `meta.cwes` — deduplicated CWE identifiers/names when available.
- `meta.num_cpes` — count of affected CPE matches; only the count is retained for compactness.
- `meta.source_url` — canonical NVD detail page for the specific CVE.
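
Because CVSS and CWE metadata can be missing for some records (see Limitations & caveats), downstream code should treat those fields as optional; a small defensive helper, as a sketch:

```python
def cvss_summary(record):
    """Return (severity, base_score), tolerating records without CVSS metadata."""
    meta = record.get("meta") or {}
    cvss = meta.get("cvss") or {}
    severity = cvss.get("severity") or "UNSPECIFIED"
    base_score = cvss.get("baseScore")  # None when no score is recorded
    return severity, base_score

# e.g. cvss_summary(ds[0]) once the dataset is loaded as shown below.
```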

## Loading

### Load the Parquet config (recommended)

```python
from datasets import load_dataset

REPO = "tandevllc/cve_dataset"
ds = load_dataset(REPO, name="cve", split="train")
print(len(ds), ds.column_names)
print(ds[0]["text"].split("\n", 4)[0])  # header line
print(ds[0]["meta"]["cve_id"], ds[0]["meta"]["cvss"]["severity"])  # e.g., CVE id + severity
```
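
For quick tabular inspection, the nested `meta` struct can be flattened into dotted column names; a sketch on a small slice, continuing from `ds` above:

```python
# Flatten meta.* into top-level columns and inspect a sample as a DataFrame.
sample = ds.select(range(1000)).flatten()
df = sample.to_pandas()
print(df[["meta.cve_id", "meta.cvss.severity", "meta.cvss.baseScore"]].head())
```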

### Typical filters

```python
# Severity slice (guard against records with missing CVSS metadata)
critical = ds.filter(lambda r: ((r.get("meta") or {}).get("cvss") or {}).get("severity") == "CRITICAL")

# Year slice by published timestamp
recent_2024 = ds.filter(lambda r: ((r.get("meta") or {}).get("published") or "").startswith("2024-"))

# CWE presence
has_xss = ds.filter(lambda r: any("CWE-79" in c for c in ((r.get("meta") or {}).get("cwes") or [])))
```
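
The same pattern supports quick corpus audits, e.g. the severity distribution over the whole split (continuing from `ds` above):

```python
from collections import Counter

# Count records per CVSS severity label; missing CVSS is bucketed as UNSPECIFIED.
severity_counts = Counter(
    ((m or {}).get("cvss") or {}).get("severity") or "UNSPECIFIED"
    for m in ds["meta"]
)
print(severity_counts.most_common())
```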

## RAG / retrieval quickstart

```python
# Build a tiny vector index over the text field
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

repo = "tandevllc/cve_dataset"
corpus = load_dataset(repo, name="cve", split="train")
texts = corpus["text"]
vec = TfidfVectorizer(min_df=3).fit(texts)
X = vec.transform(texts)
knn = NearestNeighbors(n_neighbors=10, metric="cosine").fit(X)

# Query
q = "unauthenticated RCE in Apache HTTP Server"
qv = vec.transform([q])
_, idx = knn.kneighbors(qv, n_neighbors=10)
results = corpus.select(idx[0].tolist())
```
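
To inspect the hits, each selected record can be printed with its CVE id, severity, and header line:

```python
# Show the top matches alongside the first line of each document.
for hit in results:
    meta = hit["meta"]
    severity = (meta.get("cvss") or {}).get("severity")
    print(meta["cve_id"], severity, "-", hit["text"].splitlines()[0])
```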

## Intended uses

- Security research: trend analysis, CWE/technology clustering, severity drift.
- Pretraining / continued pretraining of security‑aware LLMs.
- RAG over vulnerability text for look‑ups and enrichment.
- Classification: severity, CWE family, vendor/product (with external joins), exploitability proxies; a baseline sketch follows below.
- Summarization & QA: human‑readable notes from CVE bulletins.

Not a substitute for vendor advisories or patches. Always confirm details with original sources.
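
As a concrete example of the classification use case listed above, here is a hedged TF-IDF plus logistic-regression baseline that predicts the CVSS severity label from the CVE text (a sketch continuing from `ds` in the Loading section; the label set and split sizes are illustrative, not a published benchmark):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Keep only records that carry a concrete severity label.
labeled = ds.filter(
    lambda r: ((r.get("meta") or {}).get("cvss") or {}).get("severity")
    in {"LOW", "MEDIUM", "HIGH", "CRITICAL"}
)
texts = labeled["text"]
labels = [m["cvss"]["severity"] for m in labeled["meta"]]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)
vec = TfidfVectorizer(min_df=5, max_features=50_000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```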


## Limitations & caveats

- Abstraction: Some vendor‑specific nuances are simplified in the textual rendering.
- Coverage: Mirrors what is present in NVD; if a CVE lacks an English description, it may be omitted.
- Metadata sparsity: CWEs and CVSS may be missing for certain records.
- CPEs: Only the count (`num_cpes`) is preserved to keep records compact.

## Citation

```bibtex
@dataset{tandevllc_2025_cve_dataset,
  author = {Gupta, Smridh},
  title  = {TanDev CVE Dataset},
  year   = {2025},
  url    = {https://huggingface.co/datasets/tandevllc/cve_dataset}
}
```

## Maintainer

Smridh Gupta ([email protected])