---
title: AI Assistant
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
sdk_version: "0.0.1"
app_file: main.py
pinned: true
---
# Python AI Assistant

This is a web-based AI assistant built with FastAPI and the Hugging Face Inference API. Users can interact with multiple AI models, either by chatting with text models or by generating images with image models.
<div style="text-align:center;">
  <img src="./Screenshot.png" style="width:100%; max-width:1200px;">
  <p><em>Fig. 1: Screenshot of the Python AI Assistant interface.</em></p>
</div>
## Features

- Chat with various text-based AI models.
- Generate images from a text prompt.
- Dynamic model selection from a dropdown menu.
- Inline chat interface showing user questions and AI responses.
- Supports both text and image output.
- Base64-encoded images are displayed directly in the chat window.
## Available Models

- **DeepSeek-V3-0324**: Text-based AI model for general-purpose conversation and responses.
- **Llama-3.2-3B-Instruct**: Instruction-tuned text model suitable for chat and instruction tasks.
- **GLM-4.5-Air**: Multilingual text model optimized for instruction following and efficiency.
- **FLUX.1-dev**: Generates images based on text prompts.
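The models above are defined in `models/request_models.py`. As a minimal sketch of how such a registry might look (the exact repository paths, field names, and the `is_image_model` helper here are illustrative assumptions, not taken from the project):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelInfo:
    model_id: str  # Hugging Face repository identifier
    kind: str      # "text" or "image" -- controls how the output is rendered

# Hypothetical registry mirroring the models listed above;
# the repo paths are assumptions.
AVAILABLE_MODELS = {
    "DeepSeek-V3-0324": ModelInfo("deepseek-ai/DeepSeek-V3-0324", "text"),
    "Llama-3.2-3B-Instruct": ModelInfo("meta-llama/Llama-3.2-3B-Instruct", "text"),
    "GLM-4.5-Air": ModelInfo("zai-org/GLM-4.5-Air", "text"),
    "FLUX.1-dev": ModelInfo("black-forest-labs/FLUX.1-dev", "image"),
}


def is_image_model(name: str) -> bool:
    """Return True when the selected model produces images."""
    return AVAILABLE_MODELS[name].kind == "image"
```

Keeping the text/image distinction in the metadata lets the frontend decide how to render each response without hard-coding model names.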
## Structure

```text
python-ai-assistant/
├── main.py                   # FastAPI app entry point with routes and endpoints
├── models/                   # Contains model definitions
│   ├── __init__.py           # Marks 'models' as a Python package
│   └── request_models.py     # Defines available AI models and their metadata
├── services/                 # Contains service logic
│   ├── __init__.py           # Marks 'services' as a Python package
│   └── ai_assistant.py       # Core logic for interacting with Hugging Face models
├── static/                   # Frontend assets
│   ├── css/
│   │   └── styles.css        # All CSS styles for the chat interface
│   ├── images/
│   │   └── models/           # Icons for each AI model in the dropdown
│   └── index.html            # Main frontend page for the chat interface
├── README.md                 # Project documentation, usage, and models list
└── requirements.txt          # Python dependencies for the project
```
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yauheniya-ai/python-ai-assistant.git
   cd python-ai-assistant
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   uv venv --python 3.12
   source .venv/bin/activate
   uv pip install -r requirements.txt
   ```

3. Create a `.env` file with your Hugging Face API token:

   ```text
   HUGGINGFACE_API_TOKEN=your_token_here
   ```

4. Start the app:

   ```bash
   uvicorn main:app --reload
   ```

5. Open your browser at http://127.0.0.1:8000.
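At startup the app reads the token created in step 3 from the environment. The project likely uses a helper library such as python-dotenv for this; as an illustration only, a stdlib-only loader with equivalent behaviour could look like the following (`load_env` is a hypothetical name, not part of the project):

```python
import os


def load_env(path: str = ".env") -> None:
    """Minimal .env loader: reads KEY=value lines, skips blanks and
    '#' comments, and does not overwrite variables already set."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, the token is available as `os.environ["HUGGINGFACE_API_TOKEN"]`.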
## Usage

- Select a model from the dropdown menu at the top.
- Type your question or prompt in the input box.
- Click the send button or press Enter to submit.
- Text responses appear directly in the chat; image responses are displayed inline and can be saved.
| ## Notes | |
| The app differentiates between text models and image models for proper rendering. | |
| Only models available via Hugging Face Inference API are supported. | |
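The inline base64 rendering mentioned in the features can be sketched as follows; `to_data_uri` is a hypothetical helper (not the project's actual code) showing how raw image bytes become an `<img>` source the chat window can display directly:

```python
import base64


def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes in a data URI so the frontend can place
    them straight into an <img src="..."> tag without saving a file."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

For example, `to_data_uri(png_bytes)` returns a string starting with `data:image/png;base64,` that works as an image source in any modern browser.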