---
license: mit
language:
- en
tags:
- conversations
- tagging
- embeddings
- bittensor
pretty_name: Bittensor Conversational Tagging and Embedding
size_categories:
- 10M<n<100M
---
## ReadyAI - Bittensor Conversational Tagging and Embedding Dataset
ReadyAI is an open-source initiative focused on low-cost, resource-minimal pipelines for structuring raw data for AI applications.
This dataset is part of the ReadyAI Conversational Genome Project, leveraging the Bittensor decentralized network.
AI runs on structured data, and this dataset bridges the gap between raw conversation transcripts and structured, vectorized semantic tags.
You can find more about our subnet on GitHub [here](https://github.com/afterpartyai/bittensor-conversation-genome-project).
---
## Dataset Overview
This dataset contains **annotated conversation transcripts** with:
- Human-readable semantic tags
- **Embedding vectors** contextualized to each conversation
- Participant metadata
It is ideal for:
- Conversational AI training
- Dialogue understanding research
- Retrieval-augmented generation (RAG)
- Semantic search
- Fine-tuning large language models (LLMs)
---
## Dataset Structure
The dataset consists of four main components:
### 1. **data/bittensor-conversational-tags-and-embeddings-part-*.parquet** – Tag Embeddings and Metadata
Each Parquet file contains rows with:
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Unique conversation group ID |
| tag_id | int64 | Unique identifier for the tag |
| tag | string | Semantic tag (e.g., "climate change") |
| vector | list of float32 | Embedding vector representing the tag's meaning **in the conversation's context** |
Files are split into ~1 GB chunks for efficient loading and streaming.
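Since each `vector` is an embedding contextualized to its conversation, a natural first experiment is comparing two tag vectors with cosine similarity. A minimal sketch with fabricated rows (the `c_guid`, `tag_id`, tag text, and 3-dimensional vectors below are toy values, not taken from the dataset):

```python
import numpy as np
import pandas as pd

# Toy rows mirroring the schema above; the values are fabricated,
# and real vectors have far more than 3 dimensions.
df = pd.DataFrame({
    "c_guid": [101, 101],
    "tag_id": [7, 12],
    "tag": ["climate change", "renewable energy"],
    "vector": [np.array([0.1, 0.9, 0.2], dtype=np.float32),
               np.array([0.2, 0.8, 0.1], dtype=np.float32)],
})

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(df.loc[0, "vector"], df.loc[1, "vector"])
print(f"cosine({df.loc[0, 'tag']!r}, {df.loc[1, 'tag']!r}) = {sim:.3f}")
```

The same function applies unchanged to vectors loaded from the real Parquet splits.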
---
### 2. **tag_to_id.parquet** – Tag Mapping
Mapping between tag IDs and human-readable tags.
| Column | Type | Description |
|:-------|:-----|:------------|
| tag_id | int64 | Unique tag ID |
| tag | string | Semantic tag text |
Useful for reverse-mapping tag IDs from model outputs back to human-readable tags.
---
### 3. **conversations_to_tags.parquet** – Conversation-to-Tag Mappings
Links conversations to their associated semantic tags.
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Conversation group ID |
| tag_ids | list of int64 | List of tag IDs relevant to the conversation |
Useful for supervised training, retrieval tasks, or semantic labeling.
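For supervised training, the `tag_ids` lists can be turned into multi-hot label rows. A minimal sketch with toy values (the `c_guid`s, tag IDs, and `num_tags` below are fabricated for illustration):

```python
import numpy as np
import pandas as pd

# Toy stand-in for conversations_to_tags.parquet (fabricated values)
df_mapping = pd.DataFrame({"c_guid": [100, 200],
                           "tag_ids": [[0, 2], [1]]})

num_tags = 3  # in practice: the number of rows in tag_to_id.parquet
labels = np.zeros((len(df_mapping), num_tags), dtype=np.float32)
for row, tag_ids in enumerate(df_mapping["tag_ids"]):
    labels[row, tag_ids] = 1.0  # one multi-hot target row per conversation

print(labels)
```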
---
### 4. **conversations.parquet** – Full Conversation Text and Participants
Contains the raw multi-turn dialogue and metadata.
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Conversation group ID |
| transcript | string | Full conversation text |
| participants | list of strings | List of speaker identifiers |
Useful for dialogue modeling, multi-speaker AI, or fine-tuning.
---
## How to Use
**Install dependencies**
```bash
pip install pandas pyarrow
```
**Load a single Parquet split**
```python
import pandas as pd
df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.head())
```
**Load all tag splits**
```python
import pandas as pd
import glob
files = sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet"))
df_tags = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
print(f"Loaded {len(df_tags)} tag records.")
```
**Load tag dictionary**
```python
tag_dict = pd.read_parquet("tag_to_id.parquet")
print(tag_dict.head())
```
**Load conversation to tags mapping**
```python
df_mapping = pd.read_parquet("conversations_to_tags.parquet")
print(df_mapping.head())
```
**Load full conversations dialog and metadata**
```python
df_conversations = pd.read_parquet("conversations.parquet")
print(df_conversations.head())
```
---
## Example: Reconstruct Tags for a Conversation
```python
# Build tag lookup
tag_lookup = dict(zip(tag_dict['tag_id'], tag_dict['tag']))
# Pick a conversation
sample = df_mapping.iloc[0]
c_guid = sample['c_guid']
tag_ids = sample['tag_ids']
# Translate tag IDs to human-readable tags
tags = [tag_lookup.get(tid, "Unknown") for tid in tag_ids]
print(f"Conversation {c_guid} has tags: {tags}")
```
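The three mapping files can also be joined into one labeled table per conversation. A self-contained sketch using toy stand-ins for the Parquet files (same columns as above, fabricated values):

```python
import pandas as pd

# Toy stand-ins for tag_to_id.parquet, conversations_to_tags.parquet,
# and conversations.parquet; values are fabricated for illustration.
tag_dict = pd.DataFrame({"tag_id": [1, 2, 3],
                         "tag": ["travel", "food", "music"]})
df_mapping = pd.DataFrame({"c_guid": [100, 200],
                           "tag_ids": [[1, 3], [2]]})
df_conversations = pd.DataFrame({"c_guid": [100, 200],
                                 "transcript": ["A: Any trip ideas?",
                                                "A: Lunch spots nearby?"]})

# One row per (conversation, tag), then join tag text and transcript
exploded = (df_mapping.explode("tag_ids")
                      .rename(columns={"tag_ids": "tag_id"})
                      .astype({"tag_id": "int64"}))
labeled = (exploded.merge(tag_dict, on="tag_id")
                   .merge(df_conversations, on="c_guid"))
print(labeled[["c_guid", "tag", "transcript"]])
```

With the real files, replace the toy frames with the `pd.read_parquet(...)` calls shown earlier.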
---
## Handling Split Files
| Situation | Strategy |
|:----------|:---------|
| Enough RAM | Use `pd.concat()` to merge splits |
| Low memory | Process each split one-by-one |
| Hugging Face datasets | Use streaming mode |
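The low-memory strategy can be sketched by aggregating per split so that only one chunk is ever in memory. The two small frames below stand in for per-file DataFrames (fabricated values); with the real files, each would come from `pd.read_parquet(path)` inside the loop:

```python
import pandas as pd

# Stand-ins for the per-split DataFrames; in practice:
#   for path in sorted(glob.glob(
#           "data/bittensor-conversational-tags-and-embeddings-part-*.parquet")):
#       df = pd.read_parquet(path)
splits = [
    pd.DataFrame({"c_guid": [1, 1, 2], "tag_id": [10, 11, 10]}),
    pd.DataFrame({"c_guid": [3], "tag_id": [11]}),
]

tag_counts = {}
for df in splits:
    # Aggregate per split, then discard it, keeping peak memory low
    for tid, n in df["tag_id"].value_counts().items():
        tag_counts[int(tid)] = tag_counts.get(int(tid), 0) + int(n)

print(tag_counts)
```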
**Example (streaming with Hugging Face `datasets`)**
```python
from datasets import load_dataset
# Stream the dataset directly
dataset = load_dataset(
"ReadyAi/bittensor-conversational-tags-and-embeddings",
split="train",
streaming=True
)
for example in dataset:
print(example)
break
```
---
## License
MIT License.

Free to use and modify. Commercial redistribution without permission is prohibited.
---
## Credits
Built using contributions from Bittensor conversational miners and the ReadyAI open-source community.
---
## Summary
| Component | Description |
|:----------|:------------|
| data/bittensor-conversational-tags-and-embeddings-part-*.parquet | Semantic tags and their contextual embeddings |
| tag_to_id.parquet | Dictionary mapping of tag IDs to text |
| conversations_to_tags.parquet | Links conversations to tags |
| conversations.parquet | Full multi-turn dialogue with participant metadata |