---
title: AI Assistant
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
sdk_version: 0.0.1
app_file: main.py
pinned: true
---
# Python AI Assistant
This is a web-based AI assistant built with FastAPI and the Hugging Face Inference API. It lets users interact with multiple AI models, either by chatting with text models or by generating images with image models.
Fig. 1: Screenshot of the Python AI Assistant interface.
## Features
- Chat with various text-based AI models.
- Generate images using a text prompt.
- Dynamic model selection from a dropdown menu.
- Inline chat interface showing user questions and AI responses.
- Supports both text and image output.
- Base64-encoded images are displayed directly in the chat window.
## Available Models
- **DeepSeek-V3-0324**: Text-based AI model for general-purpose conversation and responses.
- **Llama-3.2-3B-Instruct**: Instruction-tuned text model suitable for chat and instruction tasks.
- **GLM-4.5-Air**: Multilingual text model optimized for instruction following and efficiency.
- **FLUX.1-dev**: Generates images from text prompts.
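The exact contents of `models/request_models.py` are not reproduced in this README. The sketch below shows one way the registry could look, assuming a Pydantic-style metadata class; the class name, field names, repo IDs, and icon paths are all assumptions for illustration:

```python
# models/request_models.py — illustrative sketch only; names and paths are assumptions
from pydantic import BaseModel


class ModelInfo(BaseModel):
    """Metadata for one entry in the model dropdown."""
    repo_id: str       # Hugging Face repo id used with the Inference API
    display_name: str  # label shown in the UI
    kind: str          # "text" for chat models, "image" for text-to-image models
    icon: str          # icon path under static/images/models/


AVAILABLE_MODELS = [
    ModelInfo(repo_id="deepseek-ai/DeepSeek-V3-0324", display_name="DeepSeek-V3-0324",
              kind="text", icon="static/images/models/deepseek.png"),
    ModelInfo(repo_id="meta-llama/Llama-3.2-3B-Instruct", display_name="Llama-3.2-3B-Instruct",
              kind="text", icon="static/images/models/llama.png"),
    ModelInfo(repo_id="zai-org/GLM-4.5-Air", display_name="GLM-4.5-Air",
              kind="text", icon="static/images/models/glm.png"),
    ModelInfo(repo_id="black-forest-labs/FLUX.1-dev", display_name="FLUX.1-dev",
              kind="image", icon="static/images/models/flux.png"),
]
```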
## Structure
```
python-ai-assistant/
├── main.py               # FastAPI app entry point with routes and endpoints
├── models/               # Contains model definitions
│   ├── __init__.py       # Marks 'models' as a Python package
│   └── request_models.py # Defines available AI models and their metadata
├── services/             # Contains service logic
│   ├── __init__.py       # Marks 'services' as a Python package
│   └── ai_assistant.py   # Core logic for interacting with Hugging Face models
├── static/               # Frontend assets
│   ├── css/
│   │   └── styles.css    # All CSS styles for the chat interface
│   ├── images/
│   │   └── models/       # Icons for each AI model in the dropdown
│   └── index.html        # Main frontend page for the chat interface
├── README.md             # Project documentation, usage, and models list
└── requirements.txt      # Python dependencies for the project
```
## Installation
1. Clone the repository:

   ```bash
   git clone https://github.com/yauheniya-ai/python-ai-assistant.git
   cd python-ai-assistant
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   uv venv --python 3.12
   source .venv/bin/activate
   uv pip install -r requirements.txt
   ```

3. Create a `.env` file with your Hugging Face API token:

   ```
   HUGGINGFACE_API_TOKEN=your_token_here
   ```

4. Start the app:

   ```bash
   uvicorn main:app --reload
   ```

5. Open your browser at http://127.0.0.1:8000.
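The README does not show how the token is consumed at runtime; a minimal sketch, assuming the project loads it with python-dotenv (an assumption, check `requirements.txt`), looks like this:

```python
# Sketch: loading the token from .env (assumes python-dotenv is among the dependencies)
import os

from dotenv import load_dotenv

load_dotenv()  # copies HUGGINGFACE_API_TOKEN from .env into the process environment
HF_TOKEN = os.environ["HUGGINGFACE_API_TOKEN"]
```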
## Usage
- Select a model from the dropdown menu at the top.
- Type your question or prompt in the input box.
- Click the send button or press Enter to submit.
- Text responses appear directly in the chat; image responses are displayed inline and can be saved.
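The browser UI is the intended way to use the app. For quick testing you can also call the backend directly; the route path and JSON payload below are assumptions for illustration only, so check `main.py` for the real endpoint:

```python
# Hypothetical request against an assumed /api/chat route; verify the actual route in main.py
import requests

response = requests.post(
    "http://127.0.0.1:8000/api/chat",  # assumed endpoint path
    json={
        "model": "DeepSeek-V3-0324",   # assumed payload fields
        "prompt": "Explain FastAPI in one sentence.",
    },
    timeout=60,
)
print(response.json())
```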
## Notes
The app differentiates between text models and image models so that each response is rendered correctly. Only models available via the Hugging Face Inference API are supported.
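The dispatch logic in `services/ai_assistant.py` is not shown here. The sketch below outlines one way it could work, assuming the `huggingface_hub` `InferenceClient` is used and that each model carries a text/image `kind` flag as in the earlier sketch; the function name and return shape are assumptions:

```python
# services/ai_assistant.py — illustrative sketch; real function names and return shape may differ
import base64
import os
from io import BytesIO

from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ["HUGGINGFACE_API_TOKEN"])


def ask(repo_id: str, kind: str, prompt: str) -> dict:
    """Return something the frontend can render: plain text or a base64 data URI."""
    if kind == "image":
        image = client.text_to_image(prompt, model=repo_id)  # returns a PIL image
        buffer = BytesIO()
        image.save(buffer, format="PNG")
        encoded = base64.b64encode(buffer.getvalue()).decode()
        return {"type": "image", "content": f"data:image/png;base64,{encoded}"}

    completion = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        model=repo_id,
        max_tokens=512,
    )
    return {"type": "text", "content": completion.choices[0].message.content}
```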