maomao88 committed
Commit b014f93 · 1 Parent(s): e572672

remove flash_attn from requirements

Files changed (3):
  1. Dockerfile +0 -1
  2. README.md +6 -1
  3. install_gpu_packages.py +10 -0
Dockerfile CHANGED
@@ -18,7 +18,6 @@ WORKDIR /app
 # Copy backend requirements
 COPY --chown=user backend/requirements.txt /app/requirements.txt
 RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
-RUN pip install --no-cache-dir einops flash_attn
 
 # Copy backend and frontend
 COPY --chown=user backend/ /app/backend/
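
With the unconditional einops/flash_attn install removed, the image builds without needing to compile flash_attn; those GPU-only packages are now handled by the new install_gpu_packages.py script instead. A minimal sketch of how the two steps could be chained at container startup; the wrapper file and its name are assumptions for illustration, not part of this commit:

```python
# entrypoint_sketch.py: hypothetical startup wrapper, not part of this commit.
# It runs the GPU package installer first and then launches the backend the
# same way the README does, so flash_attn is only installed when CUDA is
# actually visible inside the running container.
import subprocess
import sys

# Installs einops, plus flash_attn when a GPU is detected, via the new script.
subprocess.check_call([sys.executable, "install_gpu_packages.py"])

# Start the FastAPI app with uvicorn, matching the README command.
subprocess.check_call([
    sys.executable, "-m", "uvicorn",
    "app:app", "--host", "0.0.0.0", "--port", "8000",
])
```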
README.md CHANGED
@@ -33,7 +33,12 @@ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-
 
 3. Install dependencies
 
-4. Run the app
+4. If you're running on a GPU device, which enables more model options, install the GPU-related packages:
+```commandline
+python install_gpu_packages.py
+```
+
+5. Run the app
 ```commandline
 uvicorn app:app --reload --host 0.0.0.0 --port 8000
 ```
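
One quick way to confirm step 4 took effect is to check whether the optional packages are importable. The snippet below is a sanity check for illustration only; it is not part of this commit and assumes nothing beyond the two package names:

```python
# check_gpu_packages.py: hypothetical sanity check, not part of this commit.
# Reports whether the optional packages handled by install_gpu_packages.py
# can be found in the current environment.
import importlib.util

for name in ("einops", "flash_attn"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'not installed'}")
```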
install_gpu_packages.py ADDED
@@ -0,0 +1,10 @@
+import subprocess
+import sys
+import torch
+
+if torch.cuda.is_available():
+    print("GPU detected. Installing GPU packages...")
+    subprocess.check_call([sys.executable, "-m", "pip", "install", "flash_attn", "einops"])
+else:
+    print("No GPU detected. Installing CPU-only packages if needed...")
+    subprocess.check_call([sys.executable, "-m", "pip", "install", "einops"])
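
Because flash_attn is only installed when CUDA is available, model-loading code still has to fall back gracefully on CPU-only machines. The sketch below shows one common way to do that with Hugging Face transformers; the backend's actual loading code is not in this diff, and the model id is a placeholder:

```python
# load_model_sketch.py: illustrative only; the backend's real code is not shown here.
import importlib.util

import torch
from transformers import AutoModelForCausalLM

# Ask for flash attention only if flash_attn is importable,
# i.e. install_gpu_packages.py found a GPU and installed it.
has_flash_attn = importlib.util.find_spec("flash_attn") is not None

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-0.5B-Instruct",  # placeholder model id, not from this repo
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    attn_implementation="flash_attention_2" if has_flash_attn else "eager",
)
```

Gating the attention implementation like this is presumably what lets the Space offer more model options on GPU hardware while still starting on CPU-only machines.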