pakkinlau committed
Commit 503c107 · 1 Parent(s): 4528f22

updating interface

Files changed (3)
  1. 000_preview.parquet +0 -3
  2. README.md +45 -12
  3. dataset_infos.json +2 -2
000_preview.parquet DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:90bcb110dcc8d1b54b2ec4ab071a3059e5b235959fb3c6fafb41a2f7c047c1cd
-size 6180
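For reference, the deleted content above is a Git LFS pointer, not the parquet bytes themselves (LFS stores the real payload out of band). A minimal sketch, assuming a plain checkout, for spotting such pointers; the helper name is hypothetical:

```python
from pathlib import Path

# Hypothetical helper: detect whether a file is a Git LFS pointer stub
# rather than the real payload (pointers start with the spec line above).
def is_lfs_pointer(path: str) -> bool:
    head = Path(path).read_bytes()[:64]
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")
```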
README.md CHANGED
@@ -53,6 +53,8 @@ print("Timeseries:", None if X is None else X.shape, " Connectivity:", None if
 
 ## Use with `datasets` (viewer‑ready, no scripts)
 
+Note: modern `datasets` (>= 3.x) does not execute local Python dataset scripts. Use `data_files=` with Parquet/JSONL as shown below.
+
 You can explore a tiny, fast preview split directly via the `datasets` library. The preview embeds a small 8×8 top‑left
 slice of the correlation matrix so the Hugging Face viewer renders rows/columns quickly. Paths to the full on‑disk
 arrays are included for downstream loading.
@@ -60,28 +62,31 @@ arrays are included for downstream loading.
 ```python
 from datasets import load_dataset
 
-# Tiny viewer-ready preview (embeds small 8×8 matrices). One split named "train".
-ds = load_dataset(
-    "pakkinlau/multi-modal-derived-brain-network",
-    data_files="manifests/preview.parquet",
-    split="train",
-)
+# Root-level Viewer splits (recommended on the Hub):
+#   train.parquet      — tiny preview with embedded 8×8 matrices
+#   validation.parquet — metadata-only dev slice
+
+ds = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="train.parquet", split="train")
 row = ds[0]
 print(row["parcellation"], row["subject"])  # e.g., 'AAL116', 'sub-control3351'
 print(row["corr_shape"], row["ts_shape"])   # e.g., [116, 116], [116]
 corr8 = row["correlation_matrix"]           # 8×8 nested list (for display)
 
 # Light dev slice (metadata+paths only). Stream to avoid downloads in CI.
-dev = load_dataset(
-    "pakkinlau/multi-modal-derived-brain-network",
-    data_files="manifests/dev.parquet",
-    split="train",
-    streaming=True,
-)
+dev = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="validation.parquet", split="train", streaming=True)
 for ex in dev.take(3):
     _ = (ex["parcellation"], ex["subject"], ex["corr_path"])  # no embedded arrays
 ```
 
+You can also use the manifest entrypoints under `manifests/`:
+
+```python
+from datasets import load_dataset
+
+preview = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="manifests/preview.parquet", split="train")
+dev = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="manifests/dev.parquet", split="train", streaming=True)
+```
+
 To access the full arrays, load from the returned `corr_path` / `ts_path` using SciPy or `mat73` with variable name fallbacks:
 
 ```python
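The hunk above is truncated just as the loader snippet opens; a minimal sketch of the SciPy/`mat73` variable-name-fallback pattern the README describes (the candidate names `data`, `corr`, `ts` are assumptions, not confirmed field names):

```python
import scipy.io

def load_mat_array(path, candidates=("data", "corr", "ts")):
    """Load an array from a .mat file, falling back to mat73 for v7.3 files."""
    try:
        mat = scipy.io.loadmat(path)
    except NotImplementedError:  # scipy raises this for MATLAB v7.3 (HDF5) files
        import mat73
        mat = mat73.loadmat(path)
    for name in candidates:  # try each candidate variable name in turn
        if name in mat:
            return mat[name]
    raise KeyError(f"none of {candidates} found in {path}")
```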
@@ -135,11 +140,39 @@ manifests/
   dev.parquet      # Parquet version (fast viewer)
 ```
 
+### Data files
+
+- Root (used by the Viewer):
+  - `train.parquet` — tiny viewer‑ready preview with embedded 8×8 correlation matrices
+  - `validation.parquet` — dev metadata‑only slice (no embedded arrays)
+- Manifests (secondary entrypoints):
+  - `manifests/preview.parquet` — same content as `train.parquet` (if duplicated)
+  - `manifests/dev.parquet` — same as `validation.parquet` (if duplicated)
+
 ### Integrity & Checksums
 
 Rows in the preview/dev manifests include `*_sha256` and `*_bytes` for both `corr_path` and `ts_path`, derived from `manifests/manifest.jsonl`.
 You can verify a local copy by recomputing SHA‑256 and matching the values.
 
+Example (verify a correlation .mat):
+
+```python
+import hashlib
+from pathlib import Path
+
+def sha256(path: Path, buf=131072):
+    h = hashlib.sha256()
+    with open(path, 'rb') as f:
+        while True:
+            b = f.read(buf)
+            if not b:
+                break
+            h.update(b)
+    return h.hexdigest()
+
+# compare with row['corr_sha256']
+```
+
 ### Scripts (optional)
 
 - `scripts/enrich_manifests.py`: Enrich preview/dev JSONL with shapes (from sidecars), embedded 8×8 tiles (from `preview/`), and checksums (from `manifests/manifest.jsonl`).
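The 8×8 tiles mentioned in the script description are top-left slices stored as nested lists; a hedged sketch of that step (not the actual `scripts/enrich_manifests.py` code):

```python
import numpy as np

def topleft_tile(mat: np.ndarray, k: int = 8) -> list:
    # Slice the k×k top-left corner and convert to nested lists,
    # which serialize cleanly into JSONL/Parquet preview rows.
    return mat[:k, :k].tolist()
```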
 
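Putting the new README pieces together, a hedged end-to-end sketch (it assumes a local clone so `corr_path` resolves on disk, and reuses the hypothetical `load_mat_array` helper sketched earlier):

```python
from datasets import load_dataset

# Stream the metadata-only validation split, then load one full matrix from disk.
dev = load_dataset(
    "pakkinlau/multi-modal-derived-brain-network",
    data_files="validation.parquet",
    split="train",
    streaming=True,
)
for ex in dev.take(1):
    corr = load_mat_array(ex["corr_path"])  # full matrix, e.g. 116×116
    print(ex["subject"], corr.shape)
```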
dataset_infos.json CHANGED
@@ -23,8 +23,8 @@
   "builder_name": "parquet",
   "config_name": "default",
   "version": {"version_str": "1.0.0", "major": 1, "minor": 0, "patch": 0},
-  "splits": {"train": {"name": "train", "num_examples": 5}},
-  "data_files": {"train": ["train.parquet"]},
+  "splits": {"train": {"name": "train", "num_examples": 5}, "validation": {"name": "validation", "num_examples": 20}},
+  "data_files": {"train": ["train.parquet"], "validation": ["validation.parquet"]},
   "download_checksums": {},
   "download_size": 0,
   "dataset_size": 0,