---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
license: llama3.1
datasets:
  - tohur/natsumura-rp-identity-sharegpt
  - tohur/ultrachat_uncensored_sharegpt
  - Nopm/Opus_WritingStruct
  - ResplendentAI/bluemoon
  - tohur/Internal-Knowledge-Map-sharegpt
  - felix-ha/tiny-stories
  - tdh87/Stories
  - tdh87/Just-stories
  - tdh87/Just-stories-2
---

# natsumura-storytelling-rp-1.0-llama-3.1-8b (fixed)

This is the storytelling/RP model in my Natsumura series of 8B models. It is fine-tuned on storytelling and roleplaying datasets, which makes it a strong choice for character chatbots in applications such as SillyTavern, Agnai, and RisuAI, as well as for fictional writing. It supports up to 128k of context.

- **Developed by:** Tohur
- **License:** llama3.1
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B-Instruct

This model is based on meta-llama/Meta-Llama-3.1-8B-Instruct and is governed by the Llama 3.1 Community License. Natsumura is uncensored, which makes it highly compliant: it will go along with any request, even unethical ones. You are responsible for any content you create using this model. Please use it responsibly.

## Quantized GGUF

All GGUF models are available here: natsumura-storytelling-rp-1.0-llama-3.1-8b-GGUF

### Use in ollama

```sh
ollama pull Tohur/natsumura-storytelling-rp-llama-3.1
```
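Once pulled, you can chat with the model straight from the terminal, e.g.:

```sh
ollama run Tohur/natsumura-storytelling-rp-llama-3.1
```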

## Datasets used

- tohur/natsumura-rp-identity-sharegpt
- tohur/ultrachat_uncensored_sharegpt
- Nopm/Opus_WritingStruct
- ResplendentAI/bluemoon
- tohur/Internal-Knowledge-Map-sharegpt
- felix-ha/tiny-stories
- tdh87/Stories
- tdh87/Just-stories
- tdh87/Just-stories-2

The following parameters were used in LLaMA Factory during training (a config sketch matching them follows the list):

- per_device_train_batch_size=2
- gradient_accumulation_steps=4
- lr_scheduler_type="cosine"
- logging_steps=10
- warmup_ratio=0.1
- save_steps=1000
- learning_rate=2e-5
- num_train_epochs=3.0
- max_samples=500
- max_grad_norm=1.0
- quantization_bit=4
- loraplus_lr_ratio=16.0
- fp16=True
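These map directly onto a LLaMA Factory YAML config. Below is a minimal sketch; any value not in the list above (model path, dataset names, LoRA target, cutoff length, output directory) is an assumption, not necessarily what was actually used:

```yaml
### model
model_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct
quantization_bit: 4        # QLoRA: load the base model in 4-bit

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all           # assumption: not stated in this card
loraplus_lr_ratio: 16.0

### dataset
dataset: natsumura_rp_identity,ultrachat_uncensored   # assumption: names as registered in dataset_info.json
template: llama3
cutoff_len: 2048           # assumption
max_samples: 500

### output
output_dir: saves/natsumura-8b/lora/sft               # assumption
logging_steps: 10
save_steps: 1000

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 2.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
max_grad_norm: 1.0
fp16: true
```

A config like this would be launched with `llamafactory-cli train natsumura_sft.yaml` (filename hypothetical).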

## Inference

I use the following settings for inference:

"temperature": 1.0,
"repetition_penalty": 1.05,
"top_p": 0.95
"top_k": 40
"min_p": 0.05

## Prompt template: llama3

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
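Rather than building this string by hand, the tokenizer's built-in chat template can render it; a quick sketch (repo id assumed as above, and note that the stock Llama 3.1 template may also prepend date headers to the system message):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b"  # assumed repo id
)
# Render the prompt as text instead of token ids
prompt = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "{system_prompt}"},
        {"role": "user", "content": "{input}"},
    ],
    tokenize=False,
    add_generation_prompt=True,  # ends with the assistant header, where {output} begins
)
print(prompt)
```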