---
title: Drug-Target Interaction Predictor
emoji: 🧬
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 4.0.0
app_file: app.py
pinned: false
license: mit
---

# Drug-Target Interaction Predictor

An interactive application for predicting drug-target interactions using deep learning. The model uses a cross-attention architecture to model the interactions between drug molecules (represented as SMILES strings) and target RNA sequences.

## Features

  • 🔮 Prediction Interface: Input RNA sequences and drug SMILES to get binding affinity predictions
  • ⚙️ Model Management: Load and configure different model checkpoints
  • 📊 Interpretability: Visualize attention weights to understand model decisions
  • 🧬 Scientific Accuracy: Based on state-of-the-art deep learning architectures

## How to Use

  1. Load Model: Go to the "Model Settings" tab and specify the path to your trained model
  2. Make Predictions:
    • Enter a target RNA sequence
    • Enter a drug SMILES string
    • Click "Predict Interaction" to get a binding affinity score
  3. Explore Examples: Try the provided examples to see the model in action

## Model Architecture

The model combines:

  • Target RNA encoder for processing nucleotide sequences
  • Drug encoder for processing molecular SMILES representations
  • Cross-attention mechanism to capture drug-target interactions
  • Regression head for binding affinity prediction
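The components listed above can be sketched in PyTorch. Everything here, including the module names, dimensions, vocabulary sizes, and the use of `nn.MultiheadAttention`, is an illustrative assumption about the described design, not the Space's actual implementation:

```python
import torch
import torch.nn as nn

class CrossAttentionDTI(nn.Module):
    """Sketch of the described architecture: two encoders, cross-attention, regression head."""
    def __init__(self, d_model=128, n_heads=4, rna_vocab=8, smiles_vocab=64):
        super().__init__()
        # Separate encoders for the target RNA tokens and the drug SMILES tokens
        self.rna_embed = nn.Embedding(rna_vocab, d_model)
        self.smiles_embed = nn.Embedding(smiles_vocab, d_model)
        self.rna_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.smiles_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Cross-attention: drug tokens (queries) attend over RNA tokens (keys/values)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Regression head: mean-pool fused drug tokens to one binding-affinity score
        self.head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, rna_ids, smiles_ids):
        rna = self.rna_encoder(self.rna_embed(rna_ids))
        drug = self.smiles_encoder(self.smiles_embed(smiles_ids))
        fused, attn_weights = self.cross_attn(drug, rna, rna)  # query=drug, key/value=RNA
        score = self.head(fused.mean(dim=1)).squeeze(-1)       # shape: (batch,)
        return score, attn_weights

model = CrossAttentionDTI()
rna = torch.randint(0, 8, (2, 50))   # batch of 2 RNA sequences, length 50
smi = torch.randint(0, 64, (2, 30))  # batch of 2 SMILES token sequences, length 30
scores, attn = model(rna, smi)
print(scores.shape, attn.shape)      # torch.Size([2]) torch.Size([2, 30, 50])
```

The attention weights returned by the cross-attention layer (one row per drug token, one column per RNA position) are exactly the kind of tensor the interpretability tab can visualize.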

## Input Format

  • Target Sequence: Standard RNA nucleotide single-letter codes A, U, G, C (e.g., "AUGCUAGCUAGUACGUA...")
  • Drug SMILES: Simplified Molecular Input Line Entry System notation (e.g., "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O")
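A minimal sanity check for both input formats might look like the following. The character sets are assumptions derived from the formats described above (real SMILES grammar is far richer than a character check), and `validate_inputs` is a hypothetical helper, not part of the app:

```python
import re

RNA_RE = re.compile(r"^[AUGC]+$")  # RNA single-letter codes only
SMILES_RE = re.compile(r"^[A-Za-z0-9@+\-\[\]\(\)=#%/\\.]+$")  # coarse SMILES charset

def validate_inputs(target_seq: str, drug_smiles: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means inputs look OK."""
    problems = []
    if not RNA_RE.fullmatch(target_seq.strip().upper()):
        problems.append("Target must use RNA codes A, U, G, C only.")
    if not SMILES_RE.fullmatch(drug_smiles.strip()):
        problems.append("Drug must be a SMILES string.")
    return problems

print(validate_inputs("AUGCUAGCUAGUACGUA", "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O"))  # []
print(validate_inputs("MKTLLV", "CCO"))  # reports the RNA problem
```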

## Example Usage

Try these example inputs:

  • Target: AUGCGAUCGACGUACGUUAGCCGUAGCGUAGCUAGUGUAGCUAGUAGCU
  • Drug: C1=CC=C(C=C1)NC(=O)C2=CC=CC=N2
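Before prediction, the drug string is typically split into chemically meaningful tokens. Below is a common regex-based SMILES tokenizer (the pattern popularized by the Molecular Transformer line of work), shown purely as an illustration of how the example drug above decomposes; this Space may tokenize differently:

```python
import re

# Regex-based SMILES tokenizer: bracket atoms, two-letter elements (Br, Cl),
# single atoms, bonds, branches, and ring-closure digits
SMILES_TOKEN_RE = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles: str) -> list[str]:
    return SMILES_TOKEN_RE.findall(smiles)

print(tokenize_smiles("C1=CC=C(C=C1)NC(=O)C2=CC=CC=N2"))
```

A useful property of this tokenizer is that it is lossless: joining the tokens reproduces the original string, and two-letter elements such as `Cl` stay as single tokens.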

## Technical Details

  • Built with Transformers and PyTorch
  • Uses Gradio for the interactive interface
  • Supports GPU acceleration when available
  • Includes attention visualization for model interpretability
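The "GPU acceleration when available" point usually follows the standard PyTorch device-selection pattern, sketched below (how this app actually selects its device is not shown in this README):

```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors (and models, via .to(device)) are placed on the chosen device
x = torch.zeros(2, 3, device=device)
print(device.type, x.device)
```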

For more details, see the model documentation.