---
title: AI Assistant
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
sdk_version: "0.0.1"
app_file: main.py
pinned: true
---
# Python AI Assistant
This is a web-based AI assistant built with FastAPI and the Hugging Face Inference API. The app lets users interact with multiple AI models: chat with text models, or generate images from prompts with an image model.
Fig. 1: Screenshot of the Python AI Assistant interface.
## Features
- Chat with various text-based AI models.
- Generate images using a text prompt.
- Dynamic model selection from a dropdown menu.
- Inline chat interface showing user questions and AI responses.
- Supports both text and image output.
- Base64-encoded images are displayed directly in the chat window.
## Available Models
- **DeepSeek-V3-0324**: Text-based AI model for general-purpose conversation and responses.
- **Llama-3.2-3B-Instruct**: Instruction-tuned text model suitable for chat and instruction tasks.
- **GLM-4.5-Air**: Multilingual text model optimized for instruction following and efficiency.
- **FLUX.1-dev**: Generates images based on text prompts.
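The README lists the models but not how they are registered in `models/request_models.py`. Below is a minimal sketch of one possible registry and request schema; the Hub repo IDs, the `output` field, and the `ChatRequest` fields are illustrative assumptions, not the project's actual code.
```python
# models/request_models.py -- illustrative sketch, not the project's actual code
from pydantic import BaseModel

# Hypothetical registry mapping dropdown names to Hub repo IDs and output type.
AVAILABLE_MODELS = {
    "DeepSeek-V3-0324":      {"repo_id": "deepseek-ai/DeepSeek-V3-0324", "output": "text"},
    "Llama-3.2-3B-Instruct": {"repo_id": "meta-llama/Llama-3.2-3B-Instruct", "output": "text"},
    "GLM-4.5-Air":           {"repo_id": "zai-org/GLM-4.5-Air", "output": "text"},
    "FLUX.1-dev":            {"repo_id": "black-forest-labs/FLUX.1-dev", "output": "image"},
}

class ChatRequest(BaseModel):
    """Payload sent by the frontend for each message."""
    model: str   # key into AVAILABLE_MODELS
    prompt: str  # user question or image prompt
```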
## Structure
```text
python-ai-assistant/
├── main.py                    # FastAPI app entry point with routes and endpoints
├── models/                    # Contains model definitions
│   ├── __init__.py            # Marks 'models' as a Python package
│   └── request_models.py      # Defines available AI models and their metadata
├── services/                  # Contains service logic
│   ├── __init__.py            # Marks 'services' as a Python package
│   └── ai_assistant.py        # Core logic for interacting with Hugging Face models
├── static/                    # Frontend assets
│   ├── css/
│   │   └── styles.css         # All CSS styles for the chat interface
│   ├── images/
│   │   └── models/            # Icons for each AI model in the dropdown
│   └── index.html             # Main frontend page for the chat interface
├── README.md                  # Project documentation, usage, and models list
└── requirements.txt           # Python dependencies for the project
```
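To make the layout concrete, here is a hedged sketch of how `main.py` could wire the frontend and a chat endpoint to the service layer. The route paths, the `AIAssistant` interface, and the response shape are assumptions for illustration; the actual file may differ.
```python
# main.py -- simplified sketch; actual routes and names may differ
from fastapi import FastAPI
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles

from models.request_models import AVAILABLE_MODELS, ChatRequest
from services.ai_assistant import AIAssistant

app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
assistant = AIAssistant()

@app.get("/")
async def index() -> FileResponse:
    # Serve the chat frontend.
    return FileResponse("static/index.html")

@app.get("/models")
async def list_models() -> dict:
    # Populate the model dropdown in the frontend.
    return {"models": list(AVAILABLE_MODELS.keys())}

@app.post("/chat")
async def chat(request: ChatRequest) -> dict:
    # Delegate to the service layer, which calls the Hugging Face Inference API.
    # Returns either {"type": "text", ...} or {"type": "image", ...} (base64 PNG).
    return assistant.run(request.model, request.prompt)
```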
## Installation
1. Clone the repository:
```bash
git clone https://github.com/yauheniya-ai/python-ai-assistant.git
cd python-ai-assistant
```
2. Create a virtual environment and install dependencies:
```bash
uv venv --python 3.12
source .venv/bin/activate
uv pip install -r requirements.txt
```
3. Create a `.env` file with your Hugging Face API token (a sketch of how the app can read this token follows the installation steps):
```text
HUGGINGFACE_API_TOKEN=your_token_here
```
4. Start the app:
```bash
uvicorn main:app --reload
```
5. Open your browser at http://127.0.0.1:8000.
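Step 3 assumes the app reads the token from `.env` at startup. Here is a minimal sketch of that loading step, assuming `python-dotenv` is among the dependencies in requirements.txt (an assumption, not confirmed by the README):
```python
# Hypothetical startup snippet; the project may load the token elsewhere.
import os

from dotenv import load_dotenv

load_dotenv()  # copies HUGGINGFACE_API_TOKEN from .env into the process environment
if not os.getenv("HUGGINGFACE_API_TOKEN"):
    raise RuntimeError("Set HUGGINGFACE_API_TOKEN in your .env file")
```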
## Usage
- Select a model from the dropdown menu at the top.
- Type your question or prompt in the input box.
- Click the send button or press Enter to submit.
- Text responses appear directly in the chat; image responses are displayed inline and can be saved.
## Notes
The app differentiates between text models and image models for proper rendering.
Only models available via the Hugging Face Inference API are supported.
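The dispatch between the two output types is not spelled out in this README, so below is a minimal sketch of how a service method in `services/ai_assistant.py` could branch on a model's output type using `huggingface_hub.InferenceClient`. The class name, method name, and response shape are assumptions made for illustration, consistent with the sketches above.
```python
# services/ai_assistant.py -- illustrative sketch; names and response shape are assumptions
import base64
import io
import os

from huggingface_hub import InferenceClient

from models.request_models import AVAILABLE_MODELS

class AIAssistant:
    def __init__(self) -> None:
        # Token loading via .env is shown in the Installation section.
        self.client = InferenceClient(token=os.environ["HUGGINGFACE_API_TOKEN"])

    def run(self, model_name: str, prompt: str) -> dict:
        model = AVAILABLE_MODELS[model_name]
        if model["output"] == "image":
            # text_to_image returns a PIL image; base64-encode it so the
            # frontend can embed it inline in the chat window.
            image = self.client.text_to_image(prompt, model=model["repo_id"])
            buffer = io.BytesIO()
            image.save(buffer, format="PNG")
            encoded = base64.b64encode(buffer.getvalue()).decode("utf-8")
            return {"type": "image", "content": encoded}
        # Text models go through the chat-completion endpoint.
        completion = self.client.chat_completion(
            messages=[{"role": "user", "content": prompt}],
            model=model["repo_id"],
        )
        return {"type": "text", "content": completion.choices[0].message.content}
```
On the frontend, the base64 string can then be embedded as `<img src="data:image/png;base64,...">`, which is what enables the inline image display mentioned in the Features list.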