
Tama

Tama is a conversational AI model designed for immersive roleplay (RP) and natural chatting.
It focuses on expressive dialogue, creative storytelling, and character-driven conversations for users seeking engaging, lifelike interactions.

⚠️ Note: This model can generate mature or NSFW content.


Model Details

| Property | Description |
|---|---|
| Base Model | Qwen/Qwen3-4B-Instruct-2507 |
| Architecture | Transformer (causal language model) |
| Parameters | ~4B |
| Precision | BF16 |
| Training Objective | Supervised Fine-Tuning (SFT) |
| Primary Use Cases | Chatting, Roleplay, Creative Writing, Storytelling |

Dataset

  • Curated from high-quality roleplay, dialogue, and storytelling datasets.
  • Emphasis on emotional depth, character consistency, and realistic conversational flow.
  • Cleaned, filtered, and formatted for instruction-style fine-tuning.

Training Methods

  • SFT (Supervised Fine-Tuning): Trained on multi-turn dialogue data for contextual continuity.
  • Prompt Engineering: Enhanced for character-driven, personality-rich responses.
  • Roleplay Optimization: Additional tuning for an immersive, dynamic RP experience.
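The multi-turn SFT setup described above implies training examples shaped as alternating role-tagged turns, with a system turn carrying the character persona. A minimal sketch of one such record (the field names and helper are illustrative, not the actual dataset schema):

```python
# Illustrative multi-turn roleplay record for SFT.
# The "system" turn carries the character persona; user/assistant
# turns alternate so the model learns contextual continuity.
def make_record(persona, turns):
    """Build a messages-style record from a persona and (user, assistant) pairs."""
    messages = [{"role": "system", "content": persona}]
    for user_msg, assistant_msg in turns:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    return {"messages": messages}

record = make_record(
    "You are Tama, a warm and playful companion.",
    [("Hi, who are you?", "*smiles* I'm Tama! Nice to meet you.")],
)
```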

Intended Use

Tama is ideal for:

  • Engaging roleplay and fictional character conversations.
  • Writing assistance for stories, dialogue, and creative fiction.
  • Companion-style chatting for inspiration and entertainment.

Not intended for: factual question answering, critical decision-making, or professional advice.
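Since the base model is Qwen3-4B-Instruct, prompts presumably follow Qwen's ChatML-style chat template (`<|im_start|>` / `<|im_end|>` markers). In practice `tokenizer.apply_chat_template` handles this for you; the hand-rolled sketch below just shows the shape, under the assumption that Tama keeps the base model's template:

```python
# Hand-rolled ChatML-style prompt as used by Qwen instruct models.
# In practice, prefer tokenizer.apply_chat_template(messages, add_generation_prompt=True).
def build_prompt(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are Tama, an in-character roleplay partner."},
    {"role": "user", "content": "Describe the tavern we just entered."},
])
```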


🚫 Limitations & Disclaimer

  • May generate inaccurate or inappropriate content.
  • The author and contributors are not responsible for misuse of the model or for the content it generates.
  • Users must comply with Hugging Face policies and applicable laws.

💛 Support

If you like this project, consider supporting me:

Buy Me A Coffee


Citation

@misc{Tama,
  title        = {Tama},
  author       = {Beyoru},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/beyoru/Tama}}
}