---
language: en
tags:
  - t5
  - text-to-text
  - nlp
  - sharded
  - large-model
license: apache-2.0
model_name: T5-11B-SSM-NQ Sharded
model_id: iarroyof/t5-11b-ssm-nq-sharded
base_model: google/t5-11b-ssm-nq
size: 11B
downloads: null
datasets:
  - natural_questions
pipeline_tag: text2text-generation
library_name: transformers
widget:
  - text: What is the capital of France?
  - text: 'Translate English to French: How are you?'
metrics:
  - rouge
  - bleu
---

## Model Description

This is a sharded version of the T5-11B-SSM-NQ model, fine-tuned on the Natural Questions dataset for text-to-text generation tasks. The checkpoint is split into multiple shards so that its 11 billion parameters can be downloaded and loaded without handling one very large file.
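
In practice, a sharded checkpoint is a set of weight files plus an index that maps each tensor to its shard. The sketch below downloads only that index and lists the shard files it references; the index filename (`pytorch_model.bin.index.json`) is an assumption and would differ if the repository stores safetensors shards.

```python
import json
from huggingface_hub import hf_hub_download

# Fetch only the shard index (a few KB), not the multi-GB weight files.
# NOTE: the filename below assumes PyTorch .bin shards; a safetensors
# checkpoint would use model.safetensors.index.json instead.
index_path = hf_hub_download(
    repo_id='iarroyof/t5-11b-ssm-nq-sharded',
    filename='pytorch_model.bin.index.json',
)

with open(index_path) as f:
    index = json.load(f)

shard_files = sorted(set(index['weight_map'].values()))
print(f"{len(shard_files)} shard files, "
      f"{index['metadata']['total_size'] / 1e9:.1f} GB of weights in total")
```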

## Usage

This model can be used for text-to-text generation tasks such as question answering and summarization. The snippet below loads the sharded checkpoint across the available devices and runs a short generation example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained('iarroyof/t5-11b-ssm-nq-sharded')

# device_map='auto' spreads the shards across the available GPUs (and CPU if needed);
# low_cpu_mem_usage avoids materialising the full 11B-parameter model in RAM first.
model = AutoModelForSeq2SeqLM.from_pretrained(
    'iarroyof/t5-11b-ssm-nq-sharded',
    device_map='auto',
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
)

# Place the inputs on the same device as the model's first parameters.
inputs = tokenizer('What is and how to deal with insomnia?', return_tensors='pt').input_ids.to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
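
If the model does not fit on the available GPUs, the weights can be offloaded to CPU RAM or disk via `accelerate`. A minimal sketch; the memory limits and the offload folder name below are illustrative assumptions, not recommendations:

```python
from transformers import AutoModelForSeq2SeqLM
import torch

# Cap GPU 0 and CPU RAM usage (illustrative values); whatever does not fit
# is offloaded to the 'offload' directory on disk.
model = AutoModelForSeq2SeqLM.from_pretrained(
    'iarroyof/t5-11b-ssm-nq-sharded',
    device_map='auto',
    torch_dtype=torch.float16,
    max_memory={0: '20GiB', 'cpu': '60GiB'},
    offload_folder='offload',
)
```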