
Is it possible to fine-tune this model?

#1
by invhun - opened

Dear zer0int,

I'm writing to you as a researcher highly impressed by your Hugging Face model, zer0int/LongCLIP-KO-LITE-TypoAttack-Attn-ViT-L-14.

I am very interested in fine-tuning your model for a project focused on Korean long-text to long-video retrieval. For this, would it be possible to make the model's source code available?

Thank you for your excellent work.

Sincerely,

@invhun

Thank you for your interest! My model is based on the Long-CLIP model (https://github.com/beichenzbc/Long-CLIP), which is in turn based on OpenAI's pre-trained CLIP (https://github.com/openai/CLIP), whose original training code and dataset have not been made available.

However, the code for fine-tuning my LongCLIP-KO model - along with all datasets (the adversarial dataset and the COCO-SPRIGHT training dataset) - is publicly available; you can find the training code on my GitHub: https://github.com/zer0int/CLIP-fine-tune. The script ko-long-1-fine-tune-clip-ko-head-dropout.py uses the modified, custom 'import clip' package; running it produces the very model I am offering here on Hugging Face. So you can use that as a starting point for your project!
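
In case it helps as a quick orientation: below is a minimal sketch of a single fine-tuning step, assuming the checkpoint in this repo can be loaded with the standard `transformers` CLIPModel class and that the text tower uses Long-CLIP's 248-token context. The caption, learning rate, and one-pair batch are placeholders for illustration only; for the actual recipe (custom 'import clip' package, KO head dropout, etc.), follow the script in the CLIP-fine-tune repo.

```python
# Hedged sketch of one fine-tuning step via Hugging Face transformers.
# Assumptions (not confirmed here): the repo ships transformers-format
# weights loadable via CLIPModel, and the text encoder uses Long-CLIP's
# 248-token context window.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "zer0int/LongCLIP-KO-LITE-TypoAttack-Attn-ViT-L-14"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

model.train()  # enable dropout for fine-tuning
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

# Dummy single-pair batch; replace with your Korean text / video frames.
image = Image.new("RGB", (224, 224))
text_inputs = processor.tokenizer(
    ["a placeholder caption"], padding="max_length",
    max_length=248, truncation=True, return_tensors="pt",
)
image_inputs = processor.image_processor(images=image, return_tensors="pt")

outputs = model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
    return_loss=True,  # built-in CLIP contrastive loss
)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

For video retrieval you would typically sample frames, encode them with the vision tower, and pool the frame embeddings before the contrastive loss; the one-image step above is just the simplest runnable shape.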

Kind regards!
