kabudadada committed on
Commit
7eb1167
·
1 Parent(s): 53a71e8

Add Foam-Agent MCP service with conda environment support

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .env +16 -0
  2. Dockerfile +25 -0
  3. Foam-Agent/mcp_output/README_MCP.md +205 -0
  4. Foam-Agent/mcp_output/analysis.json +153 -0
  5. Foam-Agent/mcp_output/env_info.json +15 -0
  6. Foam-Agent/mcp_output/mcp_logs/llm_statistics.json +11 -0
  7. Foam-Agent/mcp_output/mcp_logs/run_log.json +79 -0
  8. Foam-Agent/mcp_output/mcp_plugin/__init__.py +0 -0
  9. Foam-Agent/mcp_output/mcp_plugin/__pycache__/adapter.cpython-310.pyc +0 -0
  10. Foam-Agent/mcp_output/mcp_plugin/__pycache__/mcp_service.cpython-310.pyc +0 -0
  11. Foam-Agent/mcp_output/mcp_plugin/main.py +13 -0
  12. Foam-Agent/mcp_output/mcp_plugin/mcp_service.py +180 -0
  13. Foam-Agent/mcp_output/requirements.txt +7 -0
  14. Foam-Agent/mcp_output/start_mcp.py +55 -0
  15. Foam-Agent/mcp_output/tests_mcp/test_mcp_basic.py +49 -0
  16. Foam-Agent/mcp_output/tests_smoke/test_smoke.py +29 -0
  17. Foam-Agent/source/.gitignore +181 -0
  18. Foam-Agent/source/LICENSE +21 -0
  19. Foam-Agent/source/README.md +151 -0
  20. Foam-Agent/source/__init__.py +4 -0
  21. Foam-Agent/source/database/foamgpt/foamgpt_data.py +80 -0
  22. Foam-Agent/source/database/foamgpt/foamgpt_gen.py +211 -0
  23. Foam-Agent/source/database/foamgpt/foamgpt_huggingface.py +72 -0
  24. Foam-Agent/source/database/foamgpt/foamgpt_openai.py +108 -0
  25. Foam-Agent/source/database/foamgpt/foamgpt_parser.py +174 -0
  26. Foam-Agent/source/database/script/__test_faiss.py +36 -0
  27. Foam-Agent/source/database/script/faiss_allrun_scripts.py +106 -0
  28. Foam-Agent/source/database/script/faiss_command_help.py +77 -0
  29. Foam-Agent/source/database/script/faiss_tutorials_details.py +96 -0
  30. Foam-Agent/source/database/script/faiss_tutorials_structure.py +95 -0
  31. Foam-Agent/source/database/script/tutorial_parser.py +376 -0
  32. Foam-Agent/source/environment.yml +104 -0
  33. Foam-Agent/source/foambench_main.py +119 -0
  34. Foam-Agent/source/src/__init__.py +4 -0
  35. Foam-Agent/source/src/config.py +20 -0
  36. Foam-Agent/source/src/main.py +160 -0
  37. Foam-Agent/source/src/nodes/__init__.py +1 -0
  38. Foam-Agent/source/src/nodes/architect_node.py +4 -0
  39. Foam-Agent/source/src/nodes/hpc_runner_node.py +463 -0
  40. Foam-Agent/source/src/nodes/input_writer_node.py +272 -0
  41. Foam-Agent/source/src/nodes/local_runner_node.py +49 -0
  42. Foam-Agent/source/src/nodes/meshing_node.py +926 -0
  43. Foam-Agent/source/src/nodes/reviewer_node.py +75 -0
  44. Foam-Agent/source/src/nodes/visualization_node.py +300 -0
  45. Foam-Agent/source/src/router_func.py +158 -0
  46. Foam-Agent/source/src/tracking_aws.py +152 -0
  47. Foam-Agent/source/src/utils.py +535 -0
  48. README.md +146 -10
  49. app.py +13 -0
  50. docker-compose.yml +16 -0
.env ADDED
@@ -0,0 +1,16 @@
+ # OpenAI API Configuration
+ OPENAI_API_KEY=<your-api-key>
+ OPENAI_BASE_URL=https://api.gptsapi.net/v1
+ # Alternative API providers (uncomment to use)
+ # OPENAI_BASE_URL=https://api.gptsapi.net/v1
+ # OPENAI_BASE_URL=https://your-custom-api-endpoint.com/v1
+
+ # Foam-Agent Configuration
+ WM_PROJECT_DIR=/opt/openfoam10
+ MODEL_PROVIDER=openai
+ MODEL_VERSION=gpt-4o
+ TEMPERATURE=0.6
+
+ # MCP Service Configuration
+ MCP_TRANSPORT=http
+ MCP_PORT=7860
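The variables above configure both the LLM client and the MCP transport. As a minimal sketch (standard library only; the helper name `load_mcp_settings` and the set of accepted transport values are assumptions, not part of the commit), a service could read and validate the MCP settings like this:

```python
import os

# Hypothetical helper: collect the MCP-related settings from the
# environment, falling back to the defaults used in the .env above.
def load_mcp_settings(env=os.environ):
    transport = env.get("MCP_TRANSPORT", "http")
    port = int(env.get("MCP_PORT", "7860"))
    if transport not in ("http", "stdio"):  # assumed transport set
        raise ValueError(f"unsupported MCP_TRANSPORT: {transport}")
    if not (0 < port < 65536):
        raise ValueError(f"invalid MCP_PORT: {port}")
    return {"transport": transport, "port": port}

settings = load_mcp_settings({"MCP_TRANSPORT": "http", "MCP_PORT": "7860"})
print(settings)  # {'transport': 'http', 'port': 7860}
```

Failing fast on a malformed port or transport gives a clearer error than letting the server die later at bind time.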
Dockerfile ADDED
@@ -0,0 +1,25 @@
+ FROM continuumio/miniconda3:latest
+
+ RUN useradd -m -u 1000 user
+ USER user
+ ENV PATH="/home/user/.local/bin:$PATH"
+ WORKDIR /app
+
+ # Copy environment.yml and install dependencies
+ COPY --chown=user ./environment.yml environment.yml
+ RUN conda env create -f environment.yml && \
+     conda clean -afy
+
+ # Copy application files
+ COPY --chown=user . /app
+ ENV PYTHONPATH=/app/Foam-Agent/source:$PYTHONPATH
+
+ # Install additional MCP dependencies
+ RUN /opt/conda/envs/openfoamAgent/bin/pip install fastapi uvicorn[standard] fastmcp
+
+ EXPOSE 7860
+ ENV MCP_TRANSPORT=http
+ ENV MCP_PORT=7860
+
+ # Activate conda environment and run
+ CMD ["/opt/conda/envs/openfoamAgent/bin/python", "Foam-Agent/mcp_output/start_mcp.py"]
Foam-Agent/mcp_output/README_MCP.md ADDED
@@ -0,0 +1,205 @@
+ # Foam-Agent
+
+ Foam-Agent is a multi-agent system designed to handle complex tasks through collaboration between specialized agents. The project integrates a range of tools and technology stacks, supporting task automation, data processing, visualization, and integration with GPT models. Foam-Agent targets researchers, engineers, and developers, and is particularly suited to complex scenarios that require multi-step collaboration.
+
+ ---
+
+ ## Project Overview
+
+ ### Features and Characteristics
+ - **Multi-Agent Collaboration**: The system consists of multiple specialized agents (such as the Architect Agent, Meshing Agent, and Reviewer Agent), each responsible for a specific subtask.
+ - **Knowledge Base System**: Provides a knowledge base for storing and retrieving task-related data.
+ - **Data Processing and Visualization**: Supports data extraction, processing, and visualization, helping users understand and analyze complex datasets.
+ - **Automated Workflows**: Automates task processing through multi-agent collaboration, reducing manual intervention.
+ - **FoamGPT Data Pipeline**: Uses GPT models for data processing and generation, supporting natural language processing (NLP) tasks.
+
+ ### Core Modules
+ - **Architect Agent**: Responsible for overall task planning and allocation.
+ - **Meshing Agent**: Handles mesh generation and data-structure-related tasks.
+ - **Input Writer Agent**: Responsible for input data preparation and formatting.
+ - **Local Runner Agent**: Executes local tasks.
+ - **Reviewer Agent**: Checks the quality of task results.
+ - **Visualization Agent**: Generates visualization outputs for task results.
+ - **FoamGPT Integration**: Supports OpenAI and Hugging Face GPT models for data generation and processing.
+
+ ---
+
+ ## Installation Instructions
+
+ ### System Requirements
+ - **Operating System**: Linux, macOS, or Windows
+ - **Python Version**: 3.8 or higher
+ - **Dependencies**:
+   - Required: `numpy`, `pandas`, `matplotlib`, `faiss`, `openai`, `transformers`, `pyyaml`
+   - Optional: `plotly`, `seaborn`, `docker`
+
+ ### Installation Steps
+ 1. **Clone the Repository**
+    ```bash
+    git clone https://github.com/csml-rpi/Foam-Agent.git
+    cd Foam-Agent
+    ```
+
+ 2. **Create a Virtual Environment**
+    ```bash
+    python -m venv venv
+    source venv/bin/activate  # Linux/macOS
+    venv\Scripts\activate     # Windows
+    ```
+
+ 3. **Install Dependencies**
+    If using `conda`:
+    ```bash
+    conda env create -f environment.yml
+    conda activate foam-agent
+    ```
+    If using `pip`:
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 4. **Verify the Installation**
+    ```bash
+    python foambench_main.py --help
+    ```
+
+ ---
+
+ ## Usage Instructions
+
+ ### Starting the Main Program
+ Run the following command to start the Foam-Agent system:
+ ```bash
+ python foambench_main.py
+ ```
+
+ ### Example Use Cases
+ 1. **Run a Single Task**
+    ```bash
+    python foambench_main.py --task single --config config.yaml
+    ```
+ 2. **Run Batch Tasks**
+    ```bash
+    python foambench_main.py --task batch --config batch_config.yaml
+    ```
+
+ ### Configuration Files
+ Foam-Agent uses YAML files for configuration. Here is an example configuration file:
+ ```yaml
+ task:
+   name: "example_task"
+   agents:
+     - architect
+     - meshing
+     - reviewer
+   output:
+     directory: "./results"
+     format: "json"
+ ```
+
+ Save the configuration file as `config.yaml` and pass it via the `--config` parameter at runtime.
+
+ ---
+
+ ## Available Tool Endpoints
+
+ The following are the main tool endpoints provided by Foam-Agent:
+
+ ### Core Modules
+ - **Architect Agent**: Responsible for overall task planning and allocation.
+ - **Meshing Agent**: Handles mesh generation and data-structure-related tasks.
+ - **Input Writer Agent**: Responsible for input data preparation and formatting.
+ - **Local Runner Agent**: Executes local tasks.
+ - **Reviewer Agent**: Checks the quality of task results.
+ - **Visualization Agent**: Generates visualization outputs for task results.
+
+ ### Data Processing Modules
+ - **FoamGPT Data Pipeline**:
+   - `foamgpt_data`: Manages data loading and saving related to FoamGPT.
+   - `foamgpt_gen`: Uses GPT models to generate Foam-related outputs.
+   - `foamgpt_huggingface`: Integrates Hugging Face models.
+   - `foamgpt_openai`: Integrates OpenAI GPT models.
+   - `foamgpt_parser`: Parses the output generated by FoamGPT.
+
+ ### FAISS Tools
+ - `faiss_allrun_scripts`: Runs scripts related to FAISS indexing.
+ - `faiss_command_help`: Provides help information for FAISS commands.
+ - `faiss_tutorials_details`: Retrieves detailed information about FAISS tutorials.
+ - `faiss_tutorials_structure`: Retrieves structural information about FAISS tutorials.
+ - `tutorial_parser`: Parses tutorial content.
+
+ ---
+
+ ## Notes
+
+ 1. **Environment Configuration**:
+    - Ensure that all required dependencies are installed in the Python environment.
+    - If using a GPU, ensure that CUDA and the related drivers are installed correctly.
+
+ 2. **Configuration Files**:
+    - Paths in configuration files should be absolute or relative to the project root directory.
+    - Ensure that the configuration file format is correct; an invalid format may cause parsing errors.
+
+ 3. **Network Connection**:
+    - If using OpenAI or Hugging Face GPT models, ensure that you have a working network connection and that the API key is configured.
+
+ ---
+
+ ## Troubleshooting
+
+ ### Common Issues and Solutions
+
+ #### 1. **Dependency Installation Failed**
+ - **Issue**: An error occurs during dependency installation.
+ - **Solution**:
+   - Ensure that the Python version is 3.8 or higher.
+   - Run `pip install --upgrade pip` to update pip.
+   - If the problem persists, try installing the dependencies with `conda`.
+
+ #### 2. **Foam-Agent Cannot Start**
+ - **Issue**: An error occurs when running `foambench_main.py`.
+ - **Solution**:
+   - Check that the configuration file is correct.
+   - Use the `--help` flag to view the available options.
+   - Ensure that all dependencies are installed correctly.
+
+ #### 3. **Task Execution Failed**
+ - **Issue**: Task execution is interrupted or the results are incorrect.
+ - **Solution**:
+   - Check that the input data format is correct.
+   - Check the log file for detailed error information.
+   - Ensure that the agent modules are configured correctly.
+
+ #### 4. **FoamGPT Model Loading Failed**
+ - **Issue**: The GPT model cannot be loaded.
+ - **Solution**:
+   - Check that the network connection is working.
+   - Ensure that the OpenAI or Hugging Face API keys are configured.
+   - Ensure that the `transformers` and `openai` libraries are installed.
+
+ ---
+
+ ## Contributing
+
+ Contributions to the Foam-Agent project are welcome! You can participate in the following ways:
+ - Submit issue reports or feature requests.
+ - Submit code improvements or new features.
+ - Submit documentation improvements.
+
+ Please read [CONTRIBUTING.md](CONTRIBUTING.md) for more contribution guidelines.
+
+ ---
+
+ ## License
+
+ Foam-Agent is released under the [MIT License](LICENSE). You are free to use, modify, and distribute this project, provided that you retain the original license statement.
+
+ ---
+
+ ## Contact
+
+ If you have any questions or suggestions, please contact us via the following channels:
+ - **GitHub Issues**: [Submit Issues](https://github.com/csml-rpi/Foam-Agent/issues)
+ - **Email**: [email protected]
+
+ Thank you for using Foam-Agent!
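The README's example `config.yaml` has a small, fixed structure (`task.name`, `task.agents`, `task.output`). A sketch of validating that structure once the YAML has been parsed into a Python dict (the `validate_config` helper and the agent whitelist are illustrative assumptions, not part of the repository):

```python
# Hypothetical validator for the config structure shown in the README.
# The `cfg` dict below is what yaml.safe_load would return for config.yaml.
KNOWN_AGENTS = {"architect", "meshing", "input_writer",
                "local_runner", "reviewer", "visualization"}

def validate_config(cfg):
    task = cfg.get("task", {})
    missing = [k for k in ("name", "agents", "output") if k not in task]
    if missing:
        raise ValueError(f"missing task keys: {missing}")
    unknown = [a for a in task["agents"] if a not in KNOWN_AGENTS]
    if unknown:
        raise ValueError(f"unknown agents: {unknown}")
    return True

cfg = {"task": {"name": "example_task",
                "agents": ["architect", "meshing", "reviewer"],
                "output": {"directory": "./results", "format": "json"}}}
print(validate_config(cfg))  # True
```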
Foam-Agent/mcp_output/analysis.json ADDED
@@ -0,0 +1,153 @@
+ {
+   "summary": {
+     "repository_url": "https://github.com/csml-rpi/Foam-Agent",
+     "summary_text": "Repository: csml-rpi/Foam-Agent\nCommit: main\nFiles analyzed: 50+\n\nEstimated tokens: 150k+",
+     "file_tree": "...",
+     "content": {},
+     "processed_by": "gitingest",
+     "success": true
+   },
+   "structure": {
+     "packages": [
+       "source.src",
+       "source.src.nodes",
+       "source.database.foamgpt",
+       "source.database.script"
+     ]
+   },
+   "dependencies": {
+     "has_environment_yml": true,
+     "has_requirements_txt": false,
+     "pyproject": false,
+     "setup_cfg": false,
+     "setup_py": false
+   },
+   "entry_points": {
+     "imports": [],
+     "cli": [
+       {
+         "command": "foambench_main.py",
+         "description": "Main entry point for starting the Foam-Agent system."
+       }
+     ],
+     "modules": []
+   },
+   "llm_analysis": {
+     "core_modules": [
+       {
+         "package": "source.src.nodes",
+         "module": "architect_node",
+         "functions": ["save_file"],
+         "description": "Defines a placeholder for a file-saving function."
+       },
+       {
+         "package": "source.src.nodes",
+         "module": "meshing_node",
+         "functions": ["handle_custom_mesh", "handle_gmsh_mesh", "meshing_node"],
+         "description": "Handles different mesh generation scenarios based on user requirements, including custom meshes and GMSH-based generation."
+       },
+       {
+         "package": "source.src.nodes",
+         "module": "input_writer_node",
+         "functions": ["input_writer_node"],
+         "description": "Generates and rewrites OpenFOAM input files based on the specified mode (initial write or rewrite)."
+       },
+       {
+         "package": "source.src.nodes",
+         "module": "local_runner_node",
+         "functions": ["local_runner_node"],
+         "description": "Executes the Allrun script for the simulation case and checks for errors."
+       },
+       {
+         "package": "source.src.nodes",
+         "module": "reviewer_node",
+         "functions": ["reviewer_node"],
+         "description": "Reviews error logs from simulation runs and provides analysis and suggestions for fixes."
+       },
+       {
+         "package": "source.src.nodes",
+         "module": "visualization_node",
+         "functions": ["visualization_node"],
+         "description": "Generates PyVista visualizations from successfully completed OpenFOAM cases."
+       },
+       {
+         "package": "source.src",
+         "module": "utils",
+         "classes": ["LLMService", "GraphState"],
+         "functions": ["retrieve_faiss", "run_command", "check_foam_errors"],
+         "description": "Provides core utility functions and classes for the system, including the LLM service wrapper, state management, and file/command operations."
+       }
+     ],
+     "cli_commands": [
+       {
+         "command": "foambench_main.py",
+         "description": "Main entry point for starting the Foam-Agent system."
+       }
+     ],
+     "import_strategy": {
+       "primary": "import",
+       "fallback": "cli",
+       "confidence": 0.8
+     },
+     "dependencies": {
+       "required": [
+         "langchain",
+         "faiss-cpu",
+         "langchain-openai",
+         "langchain-aws",
+         "langchain-anthropic",
+         "pydantic",
+         "boto3",
+         "requests",
+         "pyyaml",
+         "numpy",
+         "matplotlib",
+         "pyvista"
+       ],
+       "optional": []
+     },
+     "risk_assessment": {
+       "import_feasibility": 0.8,
+       "intrusiveness_risk": "low",
+       "complexity": "high"
+     }
+   },
+   "deepwiki_analysis": {
+     "repo_url": "https://github.com/csml-rpi/Foam-Agent",
+     "repo_name": "Foam-Agent",
+     "analysis": "### Analysis Report: Foam-Agent GitHub Repository\n\n#### 1. What are the main functions and purposes of this repository?\n\nFoam-Agent is an automated system designed to generate and run OpenFOAM simulations based on user requirements. It operates as a stateful pipeline, where a series of functional nodes process data sequentially to set up, execute, and visualize a simulation case. Its main functions are:\n\n- **Automated Case Generation**: Takes a natural language description of a simulation and generates the complete directory structure and input files for OpenFOAM.\n- **Mesh Generation**: Includes logic for generating meshes using GMSH based on user requirements.\n- **Simulation Execution**: Runs the generated OpenFOAM case using an `Allrun` script.\n- **Error Correction Loop**: If a simulation fails, it reviews the error logs, uses an LLM to propose corrections, rewrites the input files, and re-runs the simulation.\n- **Data Retrieval**: Utilizes a FAISS vector database to retrieve information about similar tutorials, command help, and existing scripts to inform the generation process.\n- **Visualization**: Automatically generates visualizations of the simulation results using PyVista.\n\nThe target users are engineers and researchers who need to run OpenFOAM simulations but want to automate the process of case setup and execution.\n\n---\n\n#### 2. What are the core modules and entry points of this repository?\n\nThe system is not a multi-agent system in the traditional sense but rather a graph-based pipeline orchestrated by a main script. The core components are:\n\n- **Core Modules (Functional Nodes)**:\n - `meshing_node`: Generates the computational mesh.\n - `input_writer_node`: Creates and modifies the OpenFOAM input dictionaries.\n - `local_runner_node`: Executes the simulation script.\n - `reviewer_node`: Analyzes errors if the simulation fails.\n - `visualization_node`: Creates plots and images from the results.\n\n- **Core Utilities (`utils.py`)**:\n - `LLMService`: A wrapper for interacting with various large language models (OpenAI, Bedrock, Anthropic).\n - `GraphState`: A TypedDict that defines the shared state passed between nodes in the pipeline.\n - `retrieve_faiss`: A function to query the FAISS vector database for relevant information.\n\n- **Main Entry Points**:\n - `foambench_main.py`: The primary script that initializes the configuration, sets up the graph, and starts the simulation generation process.\n - `source/src/main.py`: Contains the logic for defining the graph structure and the flow between the different nodes.\n\n---\n\n#### 3. What are the main technology stacks and dependencies used by this repository?\n\n- **Language**: Python\n- **Orchestration**: LangChain (specifically for its graph/state machine capabilities and LLM integrations).\n- **AI/ML**: OpenAI, Anthropic, or AWS Bedrock models for generation and analysis.\n- **Vector Database**: FAISS for similarity search on tutorials and commands.\n- **Core Libraries**: Pydantic (for data validation), PyYAML (for configuration), Boto3 (for AWS).\n- **Visualization**: PyVista.\n\n#### 4. Is this project suitable for conversion to an MCP (Model Context Protocol) service? Why?\n\n**Suitability Analysis:**\nFoam-Agent is highly suitable for conversion to an MCP service. The reasons are:\n\n- **Clear Entry Point**: The entire workflow is triggered by a single entry point (`foambench_main.py`) that takes a user requirement, making it easy to expose as a single service endpoint.\n- **Stateful, Long-Running Task**: The process is a long-running, stateful task, which aligns well with the MCP model of managing complex, multi-step jobs.\n- **Modular Logic**: The graph-based node structure means the internal logic is already modular. While the entire pipeline would likely be a single MCP tool, the underlying design is robust.\n- **High Value, Complex Task**: Automating OpenFOAM simulations is a complex and valuable task. Encapsulating it as a service makes it accessible to a wider range of users without requiring them to manage the complex environment and dependencies.\n\n**Recommendations:**\n- **Service Endpoint**: Expose the functionality of `foambench_main.py` as a single tool endpoint that accepts the user requirement as input.\n- **State Management**: The existing `GraphState` can be used internally to manage the state of the task.\n- **Output**: The service should return the path to the final case directory and any generated visualizations as its output.",
+     "model": "gpt-4o",
+     "source": "llm_direct_analysis",
+     "success": true
+   },
+   "deepwiki_options": {
+     "enabled": true,
+     "model": "gpt-4o"
+   },
+   "risk": {
+     "import_feasibility": 0.8,
+     "intrusiveness_risk": "low",
+     "complexity": "high"
+   }
+ }
Foam-Agent/mcp_output/env_info.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "environment": {
+     "type": "conda",
+     "name": "Foam-Agent_847955_env",
+     "files": {},
+     "python": "3.10",
+     "exec_prefix": []
+   },
+   "original_tests": {
+     "passed": true,
+     "report_path": null
+   },
+   "timestamp": 1755848930.0114202,
+   "conda_available": true
+ }
Foam-Agent/mcp_output/mcp_logs/llm_statistics.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "total_calls": 4,
+   "failed_calls": 0,
+   "retry_count": 0,
+   "total_prompt_tokens": 13683,
+   "total_completion_tokens": 8017,
+   "total_tokens": 21700,
+   "average_prompt_tokens": 3420.75,
+   "average_completion_tokens": 2004.25,
+   "average_tokens": 5425.0
+ }
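The `average_*` fields in `llm_statistics.json` are simply the totals divided by `total_calls`; a quick recomputation confirms the logged values:

```python
# Recompute the averages from the logged totals in llm_statistics.json.
stats = {"total_calls": 4, "total_prompt_tokens": 13683,
         "total_completion_tokens": 8017, "total_tokens": 21700}

avg_prompt = stats["total_prompt_tokens"] / stats["total_calls"]          # 3420.75
avg_completion = stats["total_completion_tokens"] / stats["total_calls"]  # 2004.25
avg_total = stats["total_tokens"] / stats["total_calls"]                  # 5425.0
print(avg_prompt, avg_completion, avg_total)
```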
Foam-Agent/mcp_output/mcp_logs/run_log.json ADDED
@@ -0,0 +1,79 @@
+ {
+   "timestamp": 1755849059.289963,
+   "node": "RunNode",
+   "test_result": {
+     "passed": false,
+     "report_path": null,
+     "stdout": "",
+     "stderr": "Traceback (most recent call last):\n\n File \"E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\start_mcp.py\", line 17, in <module>\n\n from mcp_service import create_app\n\n File \"E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin\\mcp_service.py\", line 8, in <module>\n\n from src.config import load_config, save_config\n\nImportError: cannot import name 'load_config' from 'src.config' (E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\source\\src\\config.py)\n\n\nERROR conda.cli.main_run:execute(49): `conda run python mcp_output\\start_mcp.py` failed. (See above for error)\n"
+   },
+   "run_result": {
+     "success": false,
+     "test_passed": false,
+     "exit_code": 1,
+     "stdout": "",
+     "stderr": "Traceback (most recent call last):\n\n File \"E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\start_mcp.py\", line 17, in <module>\n\n from mcp_service import create_app\n\n File \"E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin\\mcp_service.py\", line 8, in <module>\n\n from src.config import load_config, save_config\n\nImportError: cannot import name 'load_config' from 'src.config' (E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\source\\src\\config.py)\n\n\nERROR conda.cli.main_run:execute(49): `conda run python mcp_output\\start_mcp.py` failed. (See above for error)\n",
+     "timestamp": 1755849059.289963,
+     "details": {
+       "command": "D:\\download\\Anaconda\\Scripts\\conda.exe run -n Foam-Agent_847955_env --cwd E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent python mcp_output\\start_mcp.py",
+       "working_directory": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent",
+       "environment_type": "conda"
+     }
+   },
+   "environment": {
+     "type": "conda",
+     "name": "Foam-Agent_847955_env",
+     "files": {},
+     "python": "3.10",
+     "exec_prefix": []
+   },
+   "plugin_info": {
+     "files": {
+       "mcp_output/start_mcp.py": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\start_mcp.py",
+       "mcp_output/mcp_plugin/__init__.py": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin\\__init__.py",
+       "mcp_output/mcp_plugin/mcp_service.py": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin\\mcp_service.py",
+       "mcp_output/mcp_plugin/adapter.py": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin\\adapter.py",
+       "mcp_output/mcp_plugin/main.py": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin\\main.py",
+       "mcp_output/requirements.txt": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\requirements.txt",
+       "mcp_output/README_MCP.md": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\README_MCP.md",
+       "mcp_output/tests_mcp/test_mcp_basic.py": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\tests_mcp\\test_mcp_basic.py"
+     },
+     "adapter_mode": "import",
+     "endpoints": [
+       "health",
+       "version",
+       "architectnode*",
+       "meshingnode*",
+       "inputwriternode*",
+       "localrunnernode*",
+       "reviewernode*",
+       "visualizationnode*",
+       "load_config",
+       "save_config",
+       "route_task",
+       "track_aws_event",
+       "parse_data",
+       "validate_input",
+       "load_foam_data",
+       "save_foam_data",
+       "generate_foam_output",
+       "huggingface_integration",
+       "openai_integration",
+       "parse_foam_output",
+       "run_faiss_index",
+       "get_faiss_help",
+       "get_tutorial_details",
+       "get_tutorial_structure",
+       "parse_tutorial"
+     ],
+     "mcp_dir": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\mcp_plugin",
+     "tests_dir": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\tests_mcp",
+     "main_entry": "start_mcp.py",
+     "readme_path": "E:\\code\\fastMCP\\fastMCP\\mcp-repo-output\\workspace\\Foam-Agent\\mcp_output\\README_MCP.md",
+     "requirements": [
+       "fastmcp>=0.1.0",
+       "pydantic>=2.0.0"
+     ]
+   },
+   "fastmcp_installed": false
+ }
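The run log above records a failed startup (an `ImportError` on `load_config`). A small sketch of how such a record can be reduced to a failure summary (the `summarize_run` helper is hypothetical, not part of the commit):

```python
# Hypothetical helper: summarize a run_log.json record like the one above,
# pulling out pass/fail, the exit code, and the last line of stderr.
def summarize_run(record):
    run = record.get("run_result", {})
    stderr = run.get("stderr", "").strip()
    return {
        "success": run.get("success", False),
        "exit_code": run.get("exit_code"),
        "last_error": stderr.splitlines()[-1] if stderr else None,
    }

# Abbreviated stand-in for the logged record:
record = {"run_result": {"success": False, "exit_code": 1,
                         "stderr": "Traceback (most recent call last):\nImportError: cannot import name 'load_config'"}}
print(summarize_run(record))
```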
Foam-Agent/mcp_output/mcp_plugin/__init__.py ADDED
File without changes
Foam-Agent/mcp_output/mcp_plugin/__pycache__/adapter.cpython-310.pyc ADDED
Binary file (9.89 kB).
Foam-Agent/mcp_output/mcp_plugin/__pycache__/mcp_service.cpython-310.pyc ADDED
Binary file (7.35 kB).
Foam-Agent/mcp_output/mcp_plugin/main.py ADDED
@@ -0,0 +1,13 @@
+ """
+ MCP Service Auto-Wrapper - Auto-generated
+ """
+ from mcp_service import create_app
+
+ def main():
+     """Main entry function"""
+     app = create_app()
+     return app
+
+ if __name__ == "__main__":
+     app = main()
+     app.run()
Foam-Agent/mcp_output/mcp_plugin/mcp_service.py ADDED
@@ -0,0 +1,180 @@
+ import os
+ import sys
+ import tempfile
+ import subprocess
+ from typing import Dict, Any
+
+ project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+ mcp_plugin_dir = os.path.join(project_root, "mcp_plugin")
+ if mcp_plugin_dir not in sys.path:
+     sys.path.insert(0, mcp_plugin_dir)
+
+ source_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "source")
+ if source_path not in sys.path:
+     sys.path.insert(0, source_path)
+
+ from fastmcp import FastMCP
+
+ mcp = FastMCP("FoamAgentService")
+
+ @mcp.tool(name="run_foam_agent", description="Run Foam-Agent workflow with natural language requirements")
+ def run_foam_agent(requirements: str, output_dir: str = "./output", custom_mesh: str = None) -> Dict[str, Any]:
+     try:
+         with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
+             f.write(requirements)
+             prompt_path = f.name
+
+         cmd = [
+             "python", "src/main.py",
+             "--prompt_path", prompt_path,
+             "--output_dir", output_dir
+         ]
+         if custom_mesh:
+             cmd.extend(["--custom_mesh_path", custom_mesh])
+
+         result = subprocess.run(cmd, cwd=source_path, capture_output=True, text=True, timeout=3600)
+         os.unlink(prompt_path)
+
+         if result.returncode == 0:
+             return {"status": "success", "output": result.stdout, "case_dir": output_dir}
+         else:
+             return {"status": "error", "message": result.stderr}
+     except subprocess.TimeoutExpired:
+         return {"status": "error", "message": "Foam-Agent execution timed out after 1 hour"}
+     except Exception as e:
+         return {"status": "error", "message": str(e)}
+
+ @mcp.tool(name="run_foam_benchmark", description="Run complete Foam-Agent benchmark with OpenFOAM preprocessing")
+ def run_foam_benchmark(openfoam_path: str, requirements: str, output_dir: str = "./output", custom_mesh: str = None) -> Dict[str, Any]:
+     try:
+         with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
+             f.write(requirements)
+             prompt_path = f.name
+
+         cmd = [
+             "python", "foambench_main.py",
+             "--openfoam_path", openfoam_path,
+             "--output", output_dir,
+             "--prompt_path", prompt_path
+         ]
+         if custom_mesh:
+             cmd.extend(["--custom_mesh_path", custom_mesh])
+
+         result = subprocess.run(cmd, cwd=source_path, capture_output=True, text=True, timeout=7200)
+         os.unlink(prompt_path)
+
+         if result.returncode == 0:
+             return {"status": "success", "output": result.stdout, "case_dir": output_dir}
+         else:
+             return {"status": "error", "message": result.stderr}
+     except subprocess.TimeoutExpired:
+         return {"status": "error", "message": "Foam-Agent benchmark execution timed out after 2 hours"}
+     except Exception as e:
+         return {"status": "error", "message": str(e)}
+
+ @mcp.tool(name="check_foam_agent_status", description="Check if Foam-Agent is properly configured")
+ def check_foam_agent_status() -> Dict[str, Any]:
+     status = {
+         "openfoam_installed": False,
+         "database_initialized": False,
+         "api_key_configured": False,
+         "api_url_configured": False,
+         "dependencies_installed": False
+     }
+
+     # Check OpenFOAM
+     openfoam_dir = os.environ.get('WM_PROJECT_DIR')
+     if openfoam_dir and os.path.exists(openfoam_dir):
+         status["openfoam_installed"] = True
+
+     # Check database
+     database_path = os.path.join(source_path, "database", "faiss")
+     if os.path.exists(database_path):
+         required_dbs = [
+             "openfoam_allrun_scripts",
+             "openfoam_tutorials_structure",
+             "openfoam_tutorials_details",
+             "openfoam_command_help"
+         ]
+         status["database_initialized"] = all(
+             os.path.exists(os.path.join(database_path, db)) for db in required_dbs
+         )
+
+     # Check API key
+     api_key = os.environ.get('OPENAI_API_KEY')
+     if api_key:
+         status["api_key_configured"] = True
+
+     # Check API URL
+     api_url = os.environ.get('OPENAI_BASE_URL')
+     if api_url:
+         status["api_url_configured"] = True
+
+     # Check dependencies
+     try:
+         import langchain
+         import faiss
+         import openai
+         status["dependencies_installed"] = True
+     except ImportError:
+         status["dependencies_installed"] = False
+
+     return {
+         "status": "success",
+         "checks": status,
+         "api_info": {
+             "api_key": api_key[:10] + "..." if api_key else "Not set",
+             "api_url": api_url if api_url else "Not set"
+         }
+     }
+
+ @mcp.tool(name="get_foam_agent_info", description="Get Foam-Agent system information")
+ def get_foam_agent_info() -> Dict[str, Any]:
+     return {
+         "status": "success",
+         "info": {
+             "name": "Foam-Agent",
+             "description": "Multi-agent framework for OpenFOAM CFD simulations",
+             "version": "1.0.0",
+             "capabilities": [
+                 "Natural language to OpenFOAM workflow",
+                 "Multi-agent architecture with LangGraph",
+                 "FAISS vector database for knowledge retrieval",
+                 "Automatic error correction and iteration",
+                 "Custom mesh support (GMSH .msh files)",
+                 "Multiple LLM provider support"
+             ],
+             "requirements": [
+                 "OpenFOAM v10 installation",
+                 "OpenAI API key or other LLM provider",
+                 "Preprocessed OpenFOAM database",
+                 "Python dependencies (langchain, faiss, etc.)"
+             ]
+         }
+     }
+
+ @mcp.tool(name="validate_requirements", description="Validate user requirements for CFD simulation")
157
+ def validate_requirements(requirements: str) -> Dict[str, Any]:
158
+ required_keywords = ["flow", "velocity", "pressure", "boundary", "mesh"]
159
+ found_keywords = [kw for kw in required_keywords if kw in requirements.lower()]
160
+
161
+ return {
162
+ "status": "success",
163
+ "validation": {
164
+ "has_flow_info": "flow" in requirements.lower(),
165
+ "has_boundary_conditions": "boundary" in requirements.lower(),
166
+ "has_geometry": "mesh" in requirements.lower() or "geometry" in requirements.lower(),
167
+ "found_keywords": found_keywords,
168
+ "completeness": len(found_keywords) / len(required_keywords),
169
+ "suggestions": [
170
+ "Include flow type (incompressible/compressible)",
171
+ "Specify boundary conditions clearly",
172
+ "Define geometry or mesh requirements",
173
+ "Set physical parameters (viscosity, density, etc.)",
174
+ "Specify solver preferences if any"
175
+ ]
176
+ }
177
+ }
178
+
179
+ def create_app():
180
+ return mcp
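
The `validate_requirements` tool above scores a requirement by keyword coverage. A standalone sketch of that scoring (mirroring the keyword list in the tool; the sample requirement text is a hypothetical example, not from the repository):

```python
# Sketch of the keyword-coverage check used by validate_requirements.
required_keywords = ["flow", "velocity", "pressure", "boundary", "mesh"]

requirement = (
    "Incompressible flow with a fixed velocity inlet, zero gradient "
    "pressure outlet, and no-slip boundary conditions on the walls."
)

# Case-insensitive substring match, same as the tool's implementation.
found_keywords = [kw for kw in required_keywords if kw in requirement.lower()]
completeness = len(found_keywords) / len(required_keywords)

print(found_keywords)  # "mesh" is absent from the sample text
print(completeness)
```

A completeness of 1.0 means every keyword category is mentioned; the tool returns the ratio alongside per-category flags.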
Foam-Agent/mcp_output/requirements.txt ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ fastmcp>=0.1.0
2
+ pydantic>=2.0.0
3
+ faiss-cpu
4
+ PyYAML
5
+ requests
6
+ boto3
7
+
Foam-Agent/mcp_output/start_mcp.py ADDED
@@ -0,0 +1,55 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ """
3
+ MCP Service Startup Entry Point
4
+ """
5
+ import sys
6
+ import os
7
+ from pathlib import Path
8
+
9
+ # Load environment variables from .env file if it exists
10
+ def load_env_file():
11
+ env_file = Path(__file__).parent.parent.parent / '.env' # mcp_output -> Foam-Agent -> repo root
12
+ if env_file.exists():
13
+ with open(env_file, 'r') as f:
14
+ for line in f:
15
+ line = line.strip()
16
+ if line and not line.startswith('#') and '=' in line:
17
+ key, value = line.split('=', 1)
18
+ os.environ[key] = value
19
+
20
+ # Load environment variables
21
+ load_env_file()
22
+
23
+ # Set default values if not provided
24
+ if not os.environ.get('OPENAI_API_KEY'):
25
+ print("Warning: OPENAI_API_KEY not set. Please set it in .env file or environment variables.")
26
+ if not os.environ.get('OPENAI_BASE_URL'):
27
+ os.environ['OPENAI_BASE_URL'] = 'https://api.openai.com/v1'
28
+
29
+ project_root = os.path.dirname(os.path.abspath(__file__))
30
+ mcp_plugin_dir = os.path.join(project_root, "mcp_plugin")
31
+ if mcp_plugin_dir not in sys.path:
32
+ sys.path.insert(0, mcp_plugin_dir)
33
+
34
+ # Set path to point to source directory
35
+ source_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "source") # Foam-Agent/source
36
+ sys.path.insert(0, source_path)
37
+
38
+ from mcp_service import create_app
39
+
40
+ def main():
41
+ """Start FastMCP Service"""
42
+ app = create_app()
43
+ # Use environment variable to configure port, default 8000
44
+ port = int(os.environ.get("MCP_PORT", "8000"))
45
+
46
+ # Select transport mode based on environment variable
47
+ transport = os.environ.get("MCP_TRANSPORT", "stdio")
48
+ if transport == "http":
49
+ app.run(transport="http", host="0.0.0.0", port=port)
50
+ else:
51
+ # Default to STDIO mode
52
+ app.run()
53
+
54
+ if __name__ == "__main__":
55
+ main()
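
`load_env_file` above is a minimal `KEY=VALUE` parser: it skips blank lines and `#` comments and splits on the first `=` only, with no quoting or `export` handling. Its behavior can be checked in isolation with a temporary file (the file contents below are hypothetical placeholders):

```python
import os
import tempfile

# Re-statement of the load_env_file parsing rules for a quick check.
env_text = """# comment lines are skipped
OPENAI_BASE_URL=https://api.openai.com/v1
MCP_PORT=9000

not_a_key_value_pair
"""

with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write(env_text)
    path = f.name

parsed = {}
with open(path, "r") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)  # split on the first '=' only
            parsed[key] = value

os.unlink(path)
print(parsed)
```

Note that values containing `=` (e.g. base64 keys) survive intact because only the first `=` delimits the pair.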
Foam-Agent/mcp_output/tests_mcp/test_mcp_basic.py ADDED
@@ -0,0 +1,49 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ MCP Service Basic Tests
3
+ """
4
+ import sys
5
+ import os
6
+
7
+ project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
8
+ mcp_plugin_dir = os.path.join(project_root, "mcp_plugin")
9
+ if mcp_plugin_dir not in sys.path:
10
+ sys.path.insert(0, mcp_plugin_dir)
11
+
12
+ source_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "source")
13
+ sys.path.insert(0, source_path)
14
+
15
+ def test_import_mcp_service():
16
+ """Test that the MCP service can be imported correctly"""
17
+ try:
18
+ from mcp_service import create_app
19
+ app = create_app()
20
+ assert app is not None
21
+ print("MCP service imported successfully")
22
+ return True
23
+ except Exception as e:
24
+ print(f"Failed to import MCP service: {e}")
25
+ return False
26
+
27
+ def test_adapter_init():
28
+ """Test that the adapter can be initialized correctly"""
29
+ try:
30
+ from adapter import Adapter
31
+ adapter = Adapter()
32
+ assert adapter is not None
33
+ print("Adapter initialized successfully")
34
+ return True
35
+ except Exception as e:
36
+ print(f"Failed to initialize adapter: {e}")
37
+ return False
38
+
39
+ if __name__ == "__main__":
40
+ print("Running MCP service basic tests...")
41
+ test1 = test_import_mcp_service()
42
+ test2 = test_adapter_init()
43
+
44
+ if test1 and test2:
45
+ print("All basic tests passed")
46
+ sys.exit(0)
47
+ else:
48
+ print("Some tests failed")
49
+ sys.exit(1)
Foam-Agent/mcp_output/tests_smoke/test_smoke.py ADDED
@@ -0,0 +1,29 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import importlib, sys
2
+ import os
3
+
4
+ # Add current directory to Python path
5
+ sys.path.insert(0, os.getcwd())
6
+
7
+ source_dir = os.path.join(os.getcwd(), "source")
8
+ if os.path.exists(source_dir):
9
+ sys.path.insert(0, source_dir)
10
+
11
+
12
+ try:
13
+ importlib.import_module("src.nodes")
14
+ print("OK - Successfully imported src.nodes")
15
+ except ImportError as e:
16
+ print(f"Failed to import src.nodes: {e}")
17
+ fallback_packages = ['nodes']
20
+
21
+ for pkg in fallback_packages:
22
+ try:
23
+ importlib.import_module(pkg)
24
+ print(f"OK - Successfully imported {pkg}")
25
+ break
26
+ except ImportError:
27
+ continue
28
+ else:
29
+ print("All import attempts failed")
Foam-Agent/source/.gitignore ADDED
@@ -0,0 +1,181 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Project
2
+ *.pkl
3
+ *.txt
4
+ runs/
5
+ database/faiss
6
+ database/raw
7
+ output/
8
+ benchmark/
9
+ database/foamgpt/data
10
+
11
+ # Byte-compiled / optimized / DLL files
12
+ __pycache__/
13
+ *.py[cod]
14
+ *$py.class
15
+
16
+ # C extensions
17
+ *.so
18
+
19
+ # Distribution / packaging
20
+ .Python
21
+ build/
22
+ develop-eggs/
23
+ dist/
24
+ downloads/
25
+ eggs/
26
+ .eggs/
27
+ lib/
28
+ lib64/
29
+ parts/
30
+ sdist/
31
+ var/
32
+ wheels/
33
+ share/python-wheels/
34
+ *.egg-info/
35
+ .installed.cfg
36
+ *.egg
37
+ MANIFEST
38
+
39
+ # PyInstaller
40
+ # Usually these files are written by a python script from a template
41
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
42
+ *.manifest
43
+ *.spec
44
+
45
+ # Installer logs
46
+ pip-log.txt
47
+ pip-delete-this-directory.txt
48
+
49
+ # Unit test / coverage reports
50
+ htmlcov/
51
+ .tox/
52
+ .nox/
53
+ .coverage
54
+ .coverage.*
55
+ .cache
56
+ nosetests.xml
57
+ coverage.xml
58
+ *.cover
59
+ *.py,cover
60
+ .hypothesis/
61
+ .pytest_cache/
62
+ cover/
63
+
64
+ # Translations
65
+ *.mo
66
+ *.pot
67
+
68
+ # Django stuff:
69
+ *.log
70
+ local_settings.py
71
+ db.sqlite3
72
+ db.sqlite3-journal
73
+
74
+ # Flask stuff:
75
+ instance/
76
+ .webassets-cache
77
+
78
+ # Scrapy stuff:
79
+ .scrapy
80
+
81
+ # Sphinx documentation
82
+ docs/_build/
83
+
84
+ # PyBuilder
85
+ .pybuilder/
86
+ target/
87
+
88
+ # Jupyter Notebook
89
+ .ipynb_checkpoints
90
+
91
+ # IPython
92
+ profile_default/
93
+ ipython_config.py
94
+
95
+ # pyenv
96
+ # For a library or package, you might want to ignore these files since the code is
97
+ # intended to run in multiple environments; otherwise, check them in:
98
+ # .python-version
99
+
100
+ # pipenv
101
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
102
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
103
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
104
+ # install all needed dependencies.
105
+ #Pipfile.lock
106
+
107
+ # UV
108
+ # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
109
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
110
+ # commonly ignored for libraries.
111
+ #uv.lock
112
+
113
+ # poetry
114
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
115
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
116
+ # commonly ignored for libraries.
117
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
118
+ #poetry.lock
119
+
120
+ # pdm
121
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
122
+ #pdm.lock
123
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
124
+ # in version control.
125
+ # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
126
+ .pdm.toml
127
+ .pdm-python
128
+ .pdm-build/
129
+
130
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
131
+ __pypackages__/
132
+
133
+ # Celery stuff
134
+ celerybeat-schedule
135
+ celerybeat.pid
136
+
137
+ # SageMath parsed files
138
+ *.sage.py
139
+
140
+ # Environments
141
+ .env
142
+ .venv
143
+ env/
144
+ venv/
145
+ ENV/
146
+ env.bak/
147
+ venv.bak/
148
+
149
+ # Spyder project settings
150
+ .spyderproject
151
+ .spyproject
152
+
153
+ # Rope project settings
154
+ .ropeproject
155
+
156
+ # mkdocs documentation
157
+ /site
158
+
159
+ # mypy
160
+ .mypy_cache/
161
+ .dmypy.json
162
+ dmypy.json
163
+
164
+ # Pyre type checker
165
+ .pyre/
166
+
167
+ # pytype static type analyzer
168
+ .pytype/
169
+
170
+ # Cython debug symbols
171
+ cython_debug/
172
+
173
+ # PyCharm
174
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
175
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
176
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
177
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
178
+ #.idea/
179
+
180
+ # PyPI configuration file
181
+ .pypirc
Foam-Agent/source/LICENSE ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ MIT License
2
+
3
+ Copyright (c) 2025 Ling Yue
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
Foam-Agent/source/README.md ADDED
@@ -0,0 +1,151 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Foam-Agent
2
+
3
+ <p align="center">
4
+ <img src="overview.png" alt="Foam-Agent System Architecture" width="600">
5
+ </p>
6
+
7
+ You can visit https://deepwiki.com/csml-rpi/Foam-Agent for a comprehensive introduction and to ask any questions interactively.
8
+
9
+ ## Introduction
10
+ **Foam-Agent** is a multi-agent framework that automates complex OpenFOAM-based CFD simulation workflows from natural language inputs. By leveraging advanced AI techniques, Foam-Agent significantly lowers the expertise barrier for Computational Fluid Dynamics while maintaining modeling accuracy.
11
+
12
+ Our framework offers three key innovations:
13
+ - **Hierarchical multi-index retrieval system** with specialized indices for different simulation aspects
14
+ - **Dependency-aware file generation system** ensuring consistency across configuration files
15
+ - **Iterative error correction mechanism** that diagnoses and resolves simulation failures without human intervention
16
+
17
+ ## Features
18
+ ### 🔍 **Enhanced Retrieval System**
19
+ - **Hierarchical retrieval** covering case files, directory structures, and dependencies
20
+ - **Specialized vector index architecture** for improved information retrieval
21
+ - **Context-specific knowledge retrieval** at different simulation stages
22
+
23
+ ### 🤖 **Multi-Agent Workflow Optimization**
24
+ - **Architect Agent** interprets requirements and plans file structures
25
+ - **Input Writer Agent** generates configuration files with consistency management
26
+ - **Runner Agent** executes simulations and captures outputs
27
+ - **Reviewer Agent** analyzes errors and proposes corrections
28
+
29
+ ### 🛠️ **Intelligent Error Correction**
30
+ - **Error pattern recognition** for common simulation failures
31
+ - **Automatic diagnosis and resolution** of configuration issues
32
+ - **Iterative refinement process** that progressively improves simulation configurations
33
+
34
+ ### 📐 **External Mesh File Support**
35
+ - **Custom mesh integration** with GMSH `.msh` files
36
+ - **Boundary condition specification** through natural language requirements
37
+ - **Currently supports** GMSH ASCII 2.2 format mesh files
38
+ - **Seamless workflow** from mesh import to simulation execution
39
+
40
+ **Example Usage:**
41
+ ```bash
42
+ python foambench_main.py --openfoam_path $WM_PROJECT_DIR --output ./output --prompt_path ./user_requirement.txt --custom_mesh_path ./tandem_wing.msh
43
+ ```
44
+
45
+ **Example Mesh File:** The `geometry.msh` file in this repository is taken from the [tandem wing tutorial](https://github.com/openfoamtutorials/tandem_wing) and demonstrates a 3D tandem wing simulation with NACA 0012 airfoils.
46
+
47
+ **Requirements Format:** In your `user_req_tandem_wing.txt`, describe the boundary conditions and physical parameters for your custom mesh. The agent will automatically detect the mesh type and generate appropriate OpenFOAM configuration files.
48
+
49
+ ## Getting Started
50
+
51
+ ### 1. Clone the repository and install dependencies
52
+
53
+ ```bash
54
+ git clone https://github.com/csml-rpi/Foam-Agent.git
55
+ cd Foam-Agent
56
+ git checkout v1.0.0
57
+ conda env create -f environment.yml
58
+ conda activate openfoamAgent
59
+ ```
60
+
61
+ ### 2. Install and configure OpenFOAM v10
62
+
63
+ Foam-Agent requires OpenFOAM v10. Please follow the official installation guide for your operating system:
64
+
65
+ - Official installation: [https://openfoam.org/version/10/](https://openfoam.org/version/10/)
66
+
67
+ Verify your installation with:
68
+
69
+ ```bash
70
+ echo $WM_PROJECT_DIR
71
+ ```
72
+ and the result should be
73
+ ```
74
+ /opt/openfoam10
75
+ ```
76
+ or something similar.
77
+
78
+ `WM_PROJECT_DIR` is an environment variable that comes with your OpenFOAM installation, indicating the location of OpenFOAM on your computer.
79
+
80
+ ### 3. Database preprocessing (first-time setup)
81
+
82
+ Before running any workflow, you must preprocess the OpenFOAM tutorial and command database. This can be done automatically or manually.
83
+
84
+ #### Recommended: Automatic preprocessing
85
+
86
+ ```bash
87
+ python foambench_main.py --openfoam_path $WM_PROJECT_DIR --output ./output --prompt_path ./user_requirement.txt
88
+ ```
89
+
90
+ This script will automatically run all necessary preprocessing scripts in `database/script/` and then launch the main workflow.
91
+
92
+ #### Manual preprocessing (advanced)
93
+
94
+ If you prefer to run preprocessing scripts manually, execute the following:
95
+
96
+ ```bash
97
+ python database/script/tutorial_parser.py --output_dir=./database/raw --wm_project_dir=$WM_PROJECT_DIR
98
+ python database/script/faiss_command_help.py --database_path=./database
99
+ python database/script/faiss_allrun_scripts.py --database_path=./database
100
+ python database/script/faiss_tutorials_structure.py --database_path=./database
101
+ python database/script/faiss_tutorials_details.py --database_path=./database
102
+ ```
103
+
104
+ ### 4. Run a demo workflow
105
+
106
+ #### Option 1: Automated benchmark (recommended)
107
+
108
+ ```bash
109
+ python foambench_main.py --openfoam_path $WM_PROJECT_DIR --output ./output --prompt_path ./user_requirement.txt
110
+ ```
111
+
112
+ #### Option 2: Directly run the main agent
113
+
114
+ ```bash
115
+ python src/main.py --prompt_path ./user_requirement.txt --output_dir ./output
116
+ ```
117
+
118
+ - You can also specify a custom mesh:
119
+
120
+ ```bash
121
+ python src/main.py --prompt_path ./user_requirement.txt --output_dir ./output --custom_mesh_path ./my_mesh.msh
122
+ ```
123
+
124
+ #### Example user_requirement.txt
125
+
126
+ ```
127
+ do a Reynolds-Averaged Simulation (RAS) pitzdaily simulation. Use PIMPLE algorithm. The domain is a 2D millimeter-scale channel geometry. Boundary conditions specify a fixed velocity of 10m/s at the inlet (left), zero gradient pressure at the outlet (right), and no-slip conditions for walls. Use timestep of 0.0001 and output every 0.01. Finaltime is 0.3. use nu value of 1e-5.
128
+ ```
129
+
130
+ ### 5. Configuration and environment variables
131
+
132
+ - Default configuration is in `src/config.py`. You can modify model provider, database path, and other parameters there.
133
+ - You must set the `OPENAI_API_KEY` environment variable if using OpenAI/Bedrock models.
134
+ For example, to set them for the current shell session:
+
+ ```bash
+ export OPENAI_API_KEY="sk-your-key-here" # placeholder, not a real credential
+ export OPENAI_BASE_URL="https://api.openai.com/v1"
+ ```
+
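
As a minimal illustration (the key value below is a placeholder, not a real credential), the variables can be exported before launching the workflow:

```bash
# Placeholder credentials; replace with your own provider settings.
export OPENAI_API_KEY="sk-your-key-here"
export OPENAI_BASE_URL="https://api.openai.com/v1"
```

The default configuration in `src/config.py` picks these up from the environment at startup.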
135
+ ### 6. Troubleshooting
136
+
137
+ - **OpenFOAM environment not found**: Ensure you have sourced the OpenFOAM bashrc and restarted your terminal.
138
+ - **Database not initialized**: Make sure you have run `foambench_main.py` or all scripts in `database/script/`.
139
+ - **Missing dependencies**: After activating the environment, run `pip install -r requirements.txt` if needed.
140
+ - **API key errors**: Ensure `OPENAI_API_KEY` is set in your environment.
141
+
142
+ ## Citation
143
+ If you use Foam-Agent in your research, please cite our paper:
144
+ ```bibtex
145
+ @article{yue2025foam,
146
+ title={Foam-Agent: Towards Automated Intelligent CFD Workflows},
147
+ author={Yue, Ling and Somasekharan, Nithin and Cao, Yadi and Pan, Shaowu},
148
+ journal={arXiv preprint arXiv:2505.04997},
149
+ year={2025}
150
+ }
151
+ ```
Foam-Agent/source/__init__.py ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ # -*- coding: utf-8 -*-
2
+ """
3
+ Foam-Agent package initialization file
4
+ """
Foam-Agent/source/database/foamgpt/foamgpt_data.py ADDED
@@ -0,0 +1,80 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import argparse
2
+ import json
3
+ from pathlib import Path
4
+ from typing import Dict, List
5
+ from collections import defaultdict
6
+ from tqdm import tqdm
7
+
8
+
9
+ def load_jsonl_data(file_path: Path) -> List[Dict]:
10
+ """Load data from JSONL file"""
11
+ print(f"Loading jsonl data from {file_path}")
12
+ data = []
13
+ with open(file_path, 'r', encoding='utf-8') as f:
14
+ for line in f:
15
+ if line.strip():
16
+ data.append(json.loads(line))
17
+ return data
18
+
19
+
20
+ def main():
21
+ user_req_jsonl = load_jsonl_data(Path(__file__).parent / 'data' / 'foamgpt_user_requirements.jsonl')
22
+ user_req_dict = {req['case_name']: req['user_requirement'] for req in user_req_jsonl}
23
+ print(f"Loaded {len(user_req_jsonl)} user requirements")
24
+
25
+ foamgpt_input_data = load_jsonl_data(Path(__file__).parent / 'data' / 'parsed_openfoam_cases.jsonl')
26
+ print(f"Loaded {len(foamgpt_input_data)} input data")
27
+ print(f"{foamgpt_input_data[0].keys()}")
28
+
29
+ output_data = []
30
+
31
+ for case_file_data in foamgpt_input_data:
32
+ case_name = case_file_data['case_name']
33
+ file_name = case_file_data['file_name']
34
+ folder_name = case_file_data['folder_name']
35
+ case_solver = case_file_data['case_solver']
36
+ case_domain = case_file_data['case_domain']
37
+ case_category = case_file_data['case_category']
38
+
39
+ case_user_requirement = user_req_dict[case_name]
40
+
41
+
42
+ system_prompt = (
43
+ "You are an expert in OpenFOAM simulation and numerical modeling. "
44
+ f"Your task is to generate a complete and functional file named: <file_name>{file_name}</file_name> within the <folder_name>{folder_name}</folder_name> directory. "
45
+ "Before finalizing the output, ensure:\n"
46
+ "- Ensure units and dimensions are correct for all physical variables.\n"
47
+ f"- Ensure case solver settings are consistent with the user's requirements. Available solvers are: {case_solver}.\n"
48
+ "Provide only the code—no explanations, comments, or additional text."
49
+ )
50
+
51
+ user_prompt = (
52
+ f"User requirement: {case_user_requirement}\n"
53
+ "Please ensure that the generated file is complete, functional, and logically sound. "
54
+ "Additionally, apply your domain expertise to verify that all numerical values are consistent with the user's requirements, maintaining accuracy and coherence. "
55
+ "When generating controlDict, do not include anything to perform post-processing. Just include the necessary settings to run the simulation."
56
+ )
57
+
58
+ output_data.append({
59
+ "case_name": case_name,
60
+ "file_name": file_name,
61
+ "folder_name": folder_name,
62
+ "case_solver": case_solver,
63
+ "case_domain": case_domain,
64
+ "case_category": case_category,
65
+ "system_prompt": system_prompt,
66
+ "user_prompt": user_prompt,
67
+ "file_content": case_file_data['file_content'],
68
+ "user_requirement": case_user_requirement
69
+ })
70
+
71
+ with open(Path(__file__).parent / 'data' / 'foamgpt_all.jsonl', 'w', encoding='utf-8') as f:
72
+ for data in output_data:
73
+ json.dump(data, f, ensure_ascii=False)
74
+ f.write('\n')
75
+
76
+ print(f"Saved {len(output_data)} data to {Path(__file__).parent / 'data' / 'foamgpt_all.jsonl'}")
77
+
78
+ if __name__ == "__main__":
79
+ main()
80
+
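
`load_jsonl_data` above expects one JSON object per line and skips blank lines. A self-contained round-trip check using made-up records (not the real case data) with the same parsing logic:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical records shaped like the JSONL files this script consumes.
records = [
    {"case_name": "cavity", "user_requirement": "lid-driven cavity flow"},
    {"case_name": "pitzDaily", "user_requirement": "RAS channel flow"},
]

with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for rec in records:
        json.dump(rec, f, ensure_ascii=False)
        f.write("\n")
    f.write("\n")  # trailing blank line: the loader must skip it
    path = Path(f.name)

# Same parsing logic as load_jsonl_data.
loaded = []
with open(path, "r", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            loaded.append(json.loads(line))

path.unlink()
print(len(loaded))
```

Writing with `json.dump(..., ensure_ascii=False)` and reading line by line keeps the two sides of the pipeline symmetric.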
Foam-Agent/source/database/foamgpt/foamgpt_gen.py ADDED
@@ -0,0 +1,211 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import argparse
2
+ import json
3
+ from pathlib import Path
4
+ from typing import Dict, List
5
+ from collections import defaultdict
6
+ from tqdm import tqdm
7
+
8
+ import sys
9
+ sys.path.append(str(Path(__file__).parent.parent.parent / "src"))
10
+
11
+ from utils import LLMService
12
+ from config import Config
13
+
14
+
15
+ def load_jsonl_data(file_path: Path) -> List[Dict]:
16
+ """Load data from JSONL file"""
17
+ print(f"Loading jsonl data from {file_path}")
18
+ data = []
19
+ with open(file_path, 'r', encoding='utf-8') as f:
20
+ for line in f:
21
+ if line.strip():
22
+ data.append(json.loads(line))
23
+ return data
24
+
25
+
26
+ def group_by_case_name(data: List[Dict]) -> Dict[str, List[Dict]]:
27
+ """Group records by case_name"""
28
+ grouped = defaultdict(list)
29
+ for record in data:
30
+ case_name = record.get('case_name', 'unknown')
31
+ grouped[case_name].append(record)
32
+ return dict(grouped)
33
+
34
+
35
+ def create_system_prompt() -> str:
36
+ """Create the system prompt for generating user requirements"""
37
+ return """You are an expert OpenFOAM simulation engineer. Your task is to analyze OpenFOAM case files and generate a realistic user requirement that a simulation engineer would specify when requesting such a simulation.
38
+
39
+ Based on the provided OpenFOAM case files, generate a user_requirement that follows these patterns:
40
+
41
+ STRUCTURE REQUIREMENTS:
42
+ 1. Start with "do a [simulation type]" or "Perform a [simulation type]" or "Conduct a [simulation type]"
43
+ 2. Include solver specification: "using [solver name] solver" or "Use [solver name] solver"
44
+ 3. Specify geometric details with precise dimensions.
45
+ 4. When reporting dimensions, report values as they appear in the geometry file, without scaling by the convertToMeters parameter. Report the convertToMeters value separately.
46
+ 5. Define all boundary conditions for different patches/surfaces
47
+ 6. Include time parameters (start time, end time, timestep, output frequency)
48
+ 7. Specify physical properties (viscosity, density, temperature, pressure, etc.)
49
+ 8. Mention grid/mesh details when relevant
50
+ 9. Include algorithm details (PIMPLE, SIMPLE, etc.) when applicable
51
+ 10. When reporting the initial location of a fluid, give its location in x, y, z coordinates. For example: water occupies the region 0<=x<=1, 0<=y<=1, 0<=z<=1.
52
+ 11. Detail the geometry of the domain as much as possible in a concise manner.
53
+
54
+ TECHNICAL ACCURACY:
55
+ - Use correct OpenFOAM terminology and solver names
56
+ - Include realistic engineering values with proper units
57
+ - Specify boundary condition types accurately (fixedValue, zeroGradient, noSlip, etc.)
58
+ - Include material properties relevant to the simulation type
59
+ - Mention turbulence models when applicable (k-epsilon, RAS, etc.)
60
+
61
+ FORMAT REQUIREMENTS:
62
+ - Generate a single, comprehensive sentence or paragraph
63
+ - Use technical language appropriate for CFD engineers
64
+ - Include specific numerical values extracted from the case files
65
+ - Maintain consistency with OpenFOAM naming conventions
66
+
67
+ EXAMPLES OF GOOD PATTERNS:
68
+ - "do a Reynolds-Averaged Simulation (RAS) pitzdaily simulation. Use PIMPLE algorithm. The domain is a 2D millimeter-scale channel geometry. Boundary conditions specify a fixed velocity of 10m/s at the inlet (left), zero gradient pressure at the outlet (right), and no-slip conditions for walls. Use timestep of 0.0001 and output every 0.01. Finaltime is 0.3. use nu value of 1e-5."
69
+
70
+ - "do an incompressible lid driven cavity flow. The cavity is a square with dimensions normalized to 1 unit on both the x and y axes and very thin in the z-direction (0.1 unit scaled down by a factor of 0.1, making it effectively 2D). Use a grid of 20X20 in x and y direction and 1 cell in z-direction(due to the expected 2D flow characteristics). The top wall ('movingWall') moves in the x-direction with a uniform velocity of 1 m/s. The 'fixedWalls' have a no-slip boundary condition (velocity equal to zero at the wall). The front and back faces are designated as 'empty'. The simulation runs from time 0 to 10 with a time step of 0.005 units, and results are output every 100 time steps. The viscosity (nu) is set as constant with a value of 1e-05 m^2/s."
71
+
72
+ Generate ONLY the user_requirement text as a single comprehensive statement, with no additional explanation, formatting, or metadata."""
73
+
74
+
75
+ def create_user_prompt(case_data: List[Dict]) -> str:
+     """Create the user prompt with case file information."""
+     case_info = case_data[0]  # Get case metadata from first record
+ 
+     prompt = f"""Analyze this OpenFOAM case and generate a realistic user requirement:
+ 
+ CASE METADATA:
+ - Case Name: {case_info['case_name']}
+ - Domain: {case_info['case_domain']}
+ - Category: {case_info['case_category']}
+ - Solver: {case_info['case_solver']}
+ 
+ CASE FILES ANALYSIS:
+ """
+ 
+     # Group files by folder for better organization
+     files_by_folder = defaultdict(list)
+     for record in case_data:
+         files_by_folder[record['folder_name']].append(record)
+ 
+     # Add file contents organized by folder
+     for folder_name, files in files_by_folder.items():
+         prompt += f"\n=== {folder_name}/ ===\n"
+         for record in files:
+             file_name = record['file_name']
+             file_content = record['file_content']
+             prompt += f"\n--- {file_name} ---\n{file_content}\n"
+ 
+     prompt += """
+ 
+ TASK:
+ Based on the case files above, extract the key simulation parameters and generate a realistic user_requirement that an engineer would specify when requesting this simulation. Focus on:
+ 
+ 1. Simulation type and solver
+ 2. Domain geometry and dimensions
+ 3. Boundary conditions for all patches
+ 4. Time settings (timestep, end time, output frequency)
+ 5. Physical properties (viscosity, density, temperature, etc.)
+ 6. Grid/mesh specifications
+ 7. Algorithm settings (PIMPLE, SIMPLE, turbulence models, etc.)
+ 
+ Generate a single, comprehensive user_requirement statement that captures all essential simulation parameters."""
+ 
+     return prompt
+ 
+ 
+ def process_cases(grouped_data: Dict[str, List[Dict]], llm_service: LLMService, output_path: Path):
+     """Process each case and generate user requirements."""
+     results = []
+     total_cases = len(grouped_data)
+ 
+     print(f"Processing {total_cases} cases...")
+ 
+     for i, (case_name, case_data) in tqdm(enumerate(grouped_data.items(), 1), total=total_cases):
+         print(f"Processing case {i}/{total_cases}: {case_name}")
+ 
+         try:
+             # Create prompts
+             system_prompt = create_system_prompt()
+             user_prompt = create_user_prompt(case_data)
+ 
+             # Invoke LLM
+             user_requirement = llm_service.invoke(
+                 user_prompt=user_prompt,
+                 system_prompt=system_prompt
+             )
+ 
+             # Create result record
+             case_info = case_data[0]  # Get metadata from first record
+             result = {
+                 'case_name': case_name,
+                 'case_domain': case_info['case_domain'],
+                 'case_category': case_info['case_category'],
+                 'case_solver': case_info['case_solver'],
+                 'user_requirement': str(user_requirement).strip(),
+                 'file_count': len(case_data),
+                 'files': [{'folder_name': r['folder_name'], 'file_name': r['file_name']} for r in case_data]
+             }
+ 
+             results.append(result)
+ 
+             # Save progress after each case (rewrites the output file with all results so far)
+             with open(output_path, 'w', encoding='utf-8') as f:
+                 for result_record in results:
+                     json.dump(result_record, f, ensure_ascii=False)
+                     f.write('\n')
+ 
+             print(f"  ✓ Generated user requirement for {case_name}")
+ 
+         except Exception as e:
+             print(f"  ❌ Error processing {case_name}: {e}")
+             continue
+ 
+     return results
+ 
+ 
+ def main():
+     input_path = Path(__file__).parent / 'data' / 'parsed_openfoam_cases.jsonl'
+     output_path = Path(__file__).parent / 'data' / 'foamgpt_user_requirements.jsonl'
+ 
+     print(f"📂 Input: {input_path}")
+     print(f"💾 Output: {output_path}")
+     print()
+ 
+     print("Initializing LLM service...")
+     config = Config()
+     print(f"Config: {config}")
+     llm_service = LLMService(config)
+ 
+     # Load and process data
+     print("Loading data...")
+     data = load_jsonl_data(input_path)
+     print(f"Loaded {len(data)} records")
+ 
+     print("Grouping by case name...")
+     grouped_data = group_by_case_name(data)
+     print(f"Found {len(grouped_data)} unique cases")
+ 
+     # Process cases
+     results = process_cases(grouped_data, llm_service, output_path)
+ 
+     # Print statistics
+     stats = llm_service.get_stats()
+     print(f"\n📊 Processing Summary:")
+     print(f"  Total cases processed: {len(results)}")
+     print(f"  LLM calls made: {stats['total_calls']}")
+     print(f"  Failed calls: {stats['failed_calls']}")
+     print(f"  Total tokens used: {stats['total_tokens']}")
+     print(f"  Prompt tokens: {stats['total_prompt_tokens']}")
+     print(f"  Completion tokens: {stats['total_completion_tokens']}")
+ 
+     print(f"\n✅ Done! User requirements saved to: {output_path}")
+ 
+ 
+ if __name__ == "__main__":
+     main()
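`main()` above relies on `group_by_case_name`, which is presumably defined earlier in this file (outside this hunk) alongside `load_jsonl_data`. A minimal standalone sketch of that grouping step, using hypothetical records in the shape of `parsed_openfoam_cases.jsonl` entries:

```python
from collections import defaultdict

# Hypothetical records mimicking parsed_openfoam_cases.jsonl entries
records = [
    {'case_name': 'cavity', 'folder_name': 'system', 'file_name': 'controlDict'},
    {'case_name': 'cavity', 'folder_name': '0', 'file_name': 'U'},
    {'case_name': 'pitzDaily', 'folder_name': 'system', 'file_name': 'fvSchemes'},
]

def group_by_case_name(data):
    """Bucket records by their 'case_name' field (one bucket per case)."""
    grouped = defaultdict(list)
    for record in data:
        grouped[record['case_name']].append(record)
    return grouped

grouped = group_by_case_name(records)
print(len(grouped))            # 2 unique cases
print(len(grouped['cavity']))  # 2 files for 'cavity'
```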
Foam-Agent/source/database/foamgpt/foamgpt_huggingface.py ADDED
@@ -0,0 +1,72 @@
+ import json
+ import random
+ from huggingface_hub import upload_file
+ from pathlib import Path
+ 
+ # Data splitting configuration
+ input_file = Path(__file__).parent / 'data' / 'foamgpt_all.jsonl'
+ train_file = Path(__file__).parent / 'data' / 'foamgpt_train.jsonl'
+ test_file = Path(__file__).parent / 'data' / 'foamgpt_test.jsonl'
+ test_ratio = 0.1
+ 
+ # Hugging Face configuration
+ repo_id = "LeoYML/FoamGPT"
+ 
+ def split_data():
+     """Split data into training and test sets."""
+     print("Starting data splitting...")
+ 
+     # Set random seed for reproducibility
+     random.seed(0)
+ 
+     with open(input_file, "r", encoding="utf-8") as f:
+         lines = [json.loads(line) for line in f]
+ 
+     random.shuffle(lines)
+     split_idx = int(len(lines) * (1 - test_ratio))
+     train_data = lines[:split_idx]
+     test_data = lines[split_idx:]
+ 
+     # Write files
+     with open(train_file, "w", encoding="utf-8") as f:
+         for item in train_data:
+             f.write(json.dumps(item) + "\n")
+ 
+     with open(test_file, "w", encoding="utf-8") as f:
+         for item in test_data:
+             f.write(json.dumps(item) + "\n")
+ 
+     print(f"Data splitting completed: Train {len(train_data)} samples, Test {len(test_data)} samples")
+     return train_file, test_file
+ 
+ def upload_to_huggingface(train_file, test_file):
+     """Upload files to Hugging Face."""
+     print("Starting upload to Hugging Face...")
+ 
+     upload_file(
+         path_or_fileobj=train_file,
+         path_in_repo=train_file.name,
+         repo_id=repo_id,
+         repo_type="dataset"
+     )
+     print(f"Uploaded training file: {train_file}")
+ 
+     upload_file(
+         path_or_fileobj=test_file,
+         path_in_repo=test_file.name,
+         repo_id=repo_id,
+         repo_type="dataset"
+     )
+     print(f"Uploaded test file: {test_file}")
+ 
+     print("All files uploaded successfully!")
+ 
+ if __name__ == "__main__":
+     # Execute data splitting
+     print("Splitting data...")
+     train_file, test_file = split_data()
+ 
+     # Upload to Hugging Face
+     print("Uploading to Hugging Face...")
+     upload_to_huggingface(train_file, test_file)
+ 
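The 90/10 split is driven by `split_idx = int(len(lines) * (1 - test_ratio))`, which floors toward the training side when the count does not divide evenly. A quick standalone check of that arithmetic for a few dataset sizes:

```python
# Mirrors the split configuration above
test_ratio = 0.1

def split_sizes(n: int):
    """Return (train, test) counts for n shuffled records."""
    split_idx = int(n * (1 - test_ratio))
    return split_idx, n - split_idx

print(split_sizes(100))  # (90, 10)
print(split_sizes(7))    # (6, 1)
```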
Foam-Agent/source/database/foamgpt/foamgpt_openai.py ADDED
@@ -0,0 +1,108 @@
+ #!/usr/bin/env python3
+ """
+ Convert FoamGPT fine-tune data to OpenAI format for supervised fine-tuning.
+ """
+ 
+ import json
+ import os
+ from pathlib import Path
+ 
+ def convert_to_openai_format(input_file, output_file):
+     """
+     Convert FoamGPT fine-tune data to OpenAI format.
+ 
+     Args:
+         input_file (str): Path to input JSONL file
+         output_file (str): Path to output JSONL file
+     """
+     # Create output directory if it doesn't exist
+     output_path = Path(output_file)
+     output_path.parent.mkdir(parents=True, exist_ok=True)
+ 
+     converted_count = 0
+     error_count = 0
+ 
+     with open(input_file, 'r', encoding='utf-8') as infile, \
+          open(output_file, 'w', encoding='utf-8') as outfile:
+ 
+         for line_num, line in enumerate(infile, 1):
+             try:
+                 # Parse the original data
+                 data = json.loads(line.strip())
+ 
+                 # Create OpenAI chat format
+                 openai_format = {
+                     "messages": [
+                         {
+                             "role": "system",
+                             "content": data['system_prompt']
+                         },
+                         {
+                             "role": "user",
+                             "content": data['user_prompt']
+                         },
+                         {
+                             "role": "assistant",
+                             "content": data['file_content']
+                         }
+                     ]
+                 }
+ 
+                 # Write to output file
+                 outfile.write(json.dumps(openai_format, ensure_ascii=False) + '\n')
+                 converted_count += 1
+ 
+                 # Progress indicator
+                 if converted_count % 100 == 0:
+                     print(f"Converted {converted_count} records...")
+ 
+             except json.JSONDecodeError as e:
+                 print(f"Error parsing line {line_num}: {e}")
+                 error_count += 1
+                 continue
+             except Exception as e:
+                 print(f"Unexpected error on line {line_num}: {e}")
+                 error_count += 1
+                 continue
+ 
+     print("\nConversion completed!")
+     print(f"Successfully converted: {converted_count} records")
+     print(f"Errors encountered: {error_count} records")
+     print(f"Output saved to: {output_file}")
+ 
+ def main():
+     """Convert the train and test splits to OpenAI format."""
+     # Define input and output paths for the training split
+     input_file = f"{Path(__file__).parent}/data/foamgpt_train.jsonl"
+     output_file = f"{Path(__file__).parent}/data/foamgpt_openai_train.jsonl"
+ 
+     # Check if input file exists
+     if not os.path.exists(input_file):
+         print(f"Error: Input file '{input_file}' not found!")
+         return
+ 
+     print(f"Converting {input_file} to OpenAI format...")
+     print(f"Output will be saved to: {output_file}")
+ 
+     # Perform conversion
+     convert_to_openai_format(input_file, output_file)
+ 
+     # Define input and output paths for the test split
+     input_file = f"{Path(__file__).parent}/data/foamgpt_test.jsonl"
+     output_file = f"{Path(__file__).parent}/data/foamgpt_openai_test.jsonl"
+ 
+     if not os.path.exists(input_file):
+         print(f"Error: Input file '{input_file}' not found!")
+         return
+ 
+     print(f"Converting {input_file} to OpenAI format...")
+     print(f"Output will be saved to: {output_file}")
+ 
+     convert_to_openai_format(input_file, output_file)
+ 
+ if __name__ == "__main__":
+     main()
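Each output line is a chat-style fine-tuning record with a fixed system/user/assistant triple. A standalone sketch of one conversion, using a hypothetical input record, that round-trips through JSON exactly as the converter writes it:

```python
import json

# Hypothetical FoamGPT record in the shape the converter expects
data = {
    'system_prompt': 'You are an OpenFOAM expert.',
    'user_prompt': 'Write system/controlDict for the cavity case.',
    'file_content': 'FoamFile\n{\n    object controlDict;\n}\n',
}

openai_format = {
    "messages": [
        {"role": "system", "content": data['system_prompt']},
        {"role": "user", "content": data['user_prompt']},
        {"role": "assistant", "content": data['file_content']},
    ]
}

# Round-trip through JSON the same way the converter serializes each line
line = json.dumps(openai_format, ensure_ascii=False)
roundtrip = json.loads(line)
print([m["role"] for m in roundtrip["messages"]])  # ['system', 'user', 'assistant']
```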
Foam-Agent/source/database/foamgpt/foamgpt_parser.py ADDED
@@ -0,0 +1,174 @@
+ import argparse
+ import json
+ import re
+ from pathlib import Path
+ from typing import Dict, List
+ 
+ 
+ def parse_case_content(case_content: str) -> Dict:
+     """Parse a single case from the content."""
+     case_data = {}
+ 
+     # Extract index information
+     index_match = re.search(r'<index>(.*?)</index>', case_content, re.DOTALL)
+     if index_match:
+         index_content = index_match.group(1)
+         case_data['case_name'] = re.search(r'case name:\s*(.+)', index_content).group(1).strip()
+         case_data['case_domain'] = re.search(r'case domain:\s*(.+)', index_content).group(1).strip()
+         case_data['case_category'] = re.search(r'case category:\s*(.+)', index_content).group(1).strip()
+         case_data['case_solver'] = re.search(r'case solver:\s*(.+)', index_content).group(1).strip()
+ 
+     # Extract tutorials section
+     tutorials_match = re.search(r'<tutorials>(.*?)</tutorials>', case_content, re.DOTALL)
+     if tutorials_match:
+         case_data['files'] = parse_tutorials(tutorials_match.group(1))
+ 
+     return case_data
+ 
+ 
+ def parse_tutorials(tutorials_content: str) -> List[Dict]:
+     """Parse the tutorials section to extract file information."""
+     files = []
+ 
+     # Find all directories
+     dir_pattern = r'<directory_begin>directory name:\s*(.+?)\n(.*?)</directory_end>'
+     for dir_match in re.finditer(dir_pattern, tutorials_content, re.DOTALL):
+         folder_name = dir_match.group(1).strip()
+         dir_content = dir_match.group(2)
+ 
+         # Find all files in this directory
+         file_pattern = r'<file_begin>file name:\s*(.+?)\n<file_content>(.*?)</file_content>'
+         for file_match in re.finditer(file_pattern, dir_content, re.DOTALL):
+             file_name = file_match.group(1).strip()
+             file_content = file_match.group(2)
+ 
+             files.append({
+                 'file_name': file_name,
+                 'folder_name': folder_name,
+                 'file_content': file_content
+             })
+ 
+     return files
+ 
+ 
+ def process_file(input_path: Path, output_path: Path, char_limit: int):
+     """Process the OpenFOAM tutorials file and convert to JSONL format."""
+     with open(input_path, 'r', encoding='utf-8') as f:
+         content = f.read()
+ 
+     # Split content by cases
+     case_pattern = r'<case_begin>(.*?)</case_end>'
+     cases = re.findall(case_pattern, content, re.DOTALL)
+ 
+     skipped_files = []
+     processed_records = []
+ 
+     print(f"Found {len(cases)} cases to process")
+ 
+     for i, case_content in enumerate(cases):
+         if (i + 1) % 10 == 0:
+             print(f"Processing case {i + 1}/{len(cases)}")
+ 
+         case_data = parse_case_content(case_content)
+ 
+         if 'files' not in case_data:
+             continue
+ 
+         for file_info in case_data['files']:
+             file_name = file_info['file_name'].strip()
+             folder_name = file_info['folder_name'].strip()
+             case_name = case_data['case_name'].strip()
+             case_domain = case_data['case_domain'].strip()
+             case_category = case_data['case_category'].strip()
+             case_solver = case_data['case_solver'].strip()
+             file_content = file_info['file_content'].strip()
+ 
+             # Skip files whose content exceeds the character limit
+             if len(file_content) > char_limit:
+                 full_path = f"{case_name}/{folder_name}/{file_name}"
+                 print(f"\n⚠️ WARNING: Skipping file due to length > {char_limit} characters")
+                 print(f"  Length: {len(file_content)} characters")
+                 print(f"  Content preview (first 500 chars):")
+                 print("  " + "-" * 60)
+                 print(file_content[:500] + "...")
+                 print("  " + "-" * 60 + "\n")
+ 
+                 skipped_files.append({
+                     'path': full_path,
+                     'length': len(file_content)
+                 })
+                 continue
+ 
+             # Skip files whose content does not begin with the "FoamFile" header
+             if not file_content.startswith("FoamFile"):
+                 print(f"\n⚠️ WARNING: Skipping file due to missing 'FoamFile' header")
+                 print(f"  Content preview (first 500 chars):")
+                 print("  " + "-" * 60)
+                 print(file_content[:500] + "...")
+                 print("  " + "-" * 60 + "\n")
+                 continue
+ 
+             record = {
+                 'file_name': file_name,
+                 'folder_name': folder_name,
+                 'case_name': case_name,
+                 'case_domain': case_domain,
+                 'case_category': case_category,
+                 'case_solver': case_solver,
+                 'file_content': file_content
+             }
+             processed_records.append(record)
+ 
+     # Write output
+     print(f"\nWriting {len(processed_records)} records to {output_path}")
+ 
+     with open(output_path, 'w', encoding='utf-8') as f:
+         for record in processed_records:
+             json.dump(record, f, ensure_ascii=False)
+             f.write('\n')
+ 
+     # Summary
+     print(f"\n📊 Processing Summary:")
+     print(f"  Total cases processed: {len(cases)}")
+     print(f"  Total files written: {len(processed_records)}")
+     print(f"  Files skipped (too long): {len(skipped_files)}")
+ 
+     if skipped_files:
+         skipped_files = sorted(skipped_files, key=lambda x: x['length'])
+         print(f"\n📋 Skipped files summary:")
+         for skip in skipped_files[:10]:  # Show first 10
+             print(f"  - {skip['path']} ({skip['length']} chars)")
+         if len(skipped_files) > 10:
+             print(f"  ... and {len(skipped_files) - 10} more")
+ 
+ 
+ def main():
+     parser = argparse.ArgumentParser(description='Convert OpenFOAM tutorials to JSONL format for HuggingFace')
+     parser.add_argument('--char-limit', type=int, default=1500,
+                         help='Character limit for file content (default: 1500)')
+ 
+     args = parser.parse_args()
+ 
+     input_openfoam_file_path = Path(__file__).parent.parent / 'raw' / 'openfoam_tutorials_details.txt'
+     output_parsed_file_path = Path(__file__).parent / 'data' / 'parsed_openfoam_cases.jsonl'
+ 
+     print(f"📂 Input: {input_openfoam_file_path}")
+     print(f"📂 Output: {output_parsed_file_path}")
+     print(f"📏 Character limit: {args.char_limit}")
+     print()
+ 
+     process_file(input_openfoam_file_path, output_parsed_file_path, args.char_limit)
+ 
+     print(f"\n✅ Done! Output saved to: {output_parsed_file_path}")
+ 
+ 
+ if __name__ == "__main__":
+     main()
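The parser's markup handling can be exercised standalone on a tiny synthetic input (hypothetical content, same tag grammar as `openfoam_tutorials_details.txt`):

```python
import re

# Tiny synthetic input in the markup the parser expects (hypothetical content)
sample = """<case_begin><index>case name: cavity
case domain: incompressible
case category: basic
case solver: icoFoam
</index>
<tutorials><directory_begin>directory name: system
<file_begin>file name: controlDict
<file_content>FoamFile { }</file_content>
</directory_end></tutorials></case_end>"""

# Same regexes as process_file/parse_case_content
cases = re.findall(r'<case_begin>(.*?)</case_end>', sample, re.DOTALL)
index = re.search(r'<index>(.*?)</index>', cases[0], re.DOTALL).group(1)
case_name = re.search(r'case name:\s*(.+)', index).group(1).strip()
print(len(cases), case_name)  # 1 cavity
```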
Foam-Agent/source/database/script/__test_faiss.py ADDED
@@ -0,0 +1,36 @@
+ from langchain_community.vectorstores import FAISS
+ import argparse
+ from pathlib import Path
+ from langchain_openai.embeddings import OpenAIEmbeddings
+ 
+ # Step 1: Parse command-line arguments
+ parser = argparse.ArgumentParser(description="Inspect the contents of an existing FAISS database of OpenFOAM case data.")
+ parser.add_argument("--db_name", type=str, required=True, help="Name of the FAISS database to retrieve from")
+ parser.add_argument("--db_path", type=str, default=str(Path(__file__).resolve().parent.parent),
+                     help="Path to the database directory (default: the 'database' directory)")
+ 
+ args = parser.parse_args()
+ 
+ database_path = args.db_path  # Get the database path from arguments
+ 
+ # Step 2: Define the path to the FAISS database
+ persist_directory = f"{database_path}/faiss/{args.db_name}"
+ 
+ # Step 3: Load the FAISS database
+ embedding_model = OpenAIEmbeddings(model="text-embedding-3-small")
+ vectordb = FAISS.load_local(persist_directory, embedding_model, allow_dangerous_deserialization=True)
+ 
+ # Step 4: Retrieve all stored documents
+ documents = vectordb.docstore._dict.values()  # Extract stored documents
+ 
+ # Step 5: Print the contents of the first few documents
+ print(f"📂 Loaded {len(documents)} documents from the FAISS database.\n")
+ 
+ for i, doc in enumerate(documents):
+     if i > 10:
+         break
+     print(f"Document {i + 1}:")
+     print(f"Page Content: {doc.page_content}")
+     print(f"Metadata: {doc.metadata}")
+     print("-" * 80)
Foam-Agent/source/database/script/faiss_allrun_scripts.py ADDED
@@ -0,0 +1,106 @@
+ #!/usr/bin/env python
+ import os
+ import re
+ import argparse
+ from pathlib import Path
+ 
+ from langchain_community.vectorstores import FAISS
+ from langchain_openai.embeddings import OpenAIEmbeddings
+ from langchain_core.documents import Document
+ 
+ 
+ def extract_field(field_name: str, text: str) -> str:
+     """Extract the specified field from the given text."""
+     match = re.search(fr"{field_name}:\s*(.*)", text)
+     return match.group(1).strip() if match else "Unknown"
+ 
+ def tokenize(text: str) -> str:
+     # Replace underscores with spaces
+     text = text.replace('_', ' ')
+     # Insert a space between a lowercase letter and an uppercase letter (global match)
+     text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
+     return text.lower()
+ 
+ def main():
+     # Step 1: Parse command-line arguments
+     parser = argparse.ArgumentParser(
+         description="Process OpenFOAM case data and store embeddings in FAISS."
+     )
+     parser.add_argument(
+         "--database_path",
+         type=str,
+         default=str(Path(__file__).resolve().parent.parent),
+         help="Path to the database directory (default: the 'database' directory)",
+     )
+ 
+     args = parser.parse_args()
+     database_path = args.database_path
+     print(f"Database path: {database_path}")
+ 
+     # Step 2: Read the input file
+     allrun_scripts_path = os.path.join(database_path, "raw/openfoam_allrun_scripts.txt")
+     if not os.path.exists(allrun_scripts_path):
+         raise FileNotFoundError(f"File not found: {allrun_scripts_path}")
+ 
+     with open(allrun_scripts_path, "r", encoding="utf-8") as file:
+         file_content = file.read()
+ 
+     # Step 3: Extract `<case_begin> ... </case_end>` segments using regex
+     pattern = re.compile(r"<case_begin>(.*?)</case_end>", re.DOTALL)
+     matches = pattern.findall(file_content)
+     if not matches:
+         raise ValueError("No cases found in the input file. Please check the file content.")
+ 
+     documents = []
+     for match in matches:
+         # Extract <index> content
+         index_match = re.search(r"<index>(.*?)</index>", match, re.DOTALL)
+         if not index_match:
+             continue
+         index_content = index_match.group(0).strip()
+         full_content = match.strip()
+ 
+         # Extract directory structure
+         dir_match = re.search(r"<directory_structure>(.*?)</directory_structure>", match, re.DOTALL)
+         dir_structure = dir_match.group(0).strip() if dir_match else "Unknown"
+ 
+         # Extract metadata fields from index_content
+         case_name = extract_field("case name", index_content)
+         case_domain = extract_field("case domain", index_content)
+         case_category = extract_field("case category", index_content)
+         case_solver = extract_field("case solver", index_content)
+ 
+         # Allrun script content is not sensitive to case domain and category
+         index_content = f"<index>\ncase name: {case_name}\ncase solver: {case_solver}</index>"
+ 
+         # Extract allrun script content from full_content
+         script_match = re.search(r"<allrun_script>([\s\S]*?)</allrun_script>", full_content)
+         case_allrun_script = script_match.group(1).strip() if script_match else "Unknown"
+ 
+         doc = Document(
+             page_content=tokenize(index_content + dir_structure),
+             metadata={
+                 "full_content": full_content,
+                 "case_name": case_name,
+                 "case_domain": case_domain,
+                 "case_category": case_category,
+                 "case_solver": case_solver,
+                 "dir_structure": dir_structure,
+                 "allrun_script": case_allrun_script,
+             },
+         )
+         documents.append(doc)
+ 
+     # Step 4: Compute embeddings and store in FAISS
+     embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
+     vectordb = FAISS.from_documents(documents, embeddings)
+ 
+     # Step 5: Save the FAISS index locally
+     persist_directory = os.path.join(database_path, "faiss/openfoam_allrun_scripts")
+     vectordb.save_local(persist_directory)
+ 
+     print(f"{len(documents)} cases indexed successfully with metadata! Saved at: {persist_directory}")
+ 
+ 
+ if __name__ == "__main__":
+     main()
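`tokenize` normalizes snake_case and camelCase identifiers into lowercase words so the embedding text reads as plain language. Its behavior, reproduced standalone on a couple of typical OpenFOAM names:

```python
import re

def tokenize(text: str) -> str:
    """Split snake_case and camelCase into lowercase words (same logic as above)."""
    text = text.replace('_', ' ')
    text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
    return text.lower()

print(tokenize("pitzDaily"))                # pitz daily
print(tokenize("openfoam_allrun_scripts"))  # openfoam allrun scripts
```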
Foam-Agent/source/database/script/faiss_command_help.py ADDED
@@ -0,0 +1,77 @@
+ import os
+ import re
+ import argparse
+ from pathlib import Path
+ 
+ from langchain_community.vectorstores import FAISS
+ from langchain_openai.embeddings import OpenAIEmbeddings
+ from langchain_core.documents import Document
+ 
+ def tokenize(text: str) -> str:
+     # Replace underscores with spaces
+     text = text.replace('_', ' ')
+     # Insert a space between a lowercase letter and an uppercase letter (global match)
+     text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
+     return text.lower()
+ 
+ def main():
+     # Step 1: Parse command-line arguments
+     parser = argparse.ArgumentParser(
+         description="Process OpenFOAM command help text and store embeddings in FAISS."
+     )
+     parser.add_argument(
+         "--database_path",
+         type=str,
+         default=str(Path(__file__).resolve().parent.parent),
+         help="Path to the database directory (default: the 'database' directory)",
+     )
+ 
+     args = parser.parse_args()
+     database_path = args.database_path
+     print(f"Database path: {database_path}")
+ 
+     # Step 2: Read the input file
+     command_help_path = os.path.join(database_path, "raw/openfoam_command_help.txt")
+     if not os.path.exists(command_help_path):
+         raise FileNotFoundError(f"File not found: {command_help_path}")
+ 
+     with open(command_help_path, "r", encoding="utf-8") as file:
+         file_content = file.read()
+ 
+     # Step 3: Extract `<command_begin> ... </command_end>` segments using regex
+     pattern = re.compile(r"<command_begin>(.*?)</command_end>", re.DOTALL)
+     matches = pattern.findall(file_content)
+ 
+     if not matches:
+         raise ValueError("No commands found in the input file. Please check the file content.")
+ 
+     documents = []
+ 
+     for match in matches:
+         command_match = re.search(r"<command>(.*?)</command>", match, re.DOTALL)
+         help_match = re.search(r"<help_text>(.*?)</help_text>", match, re.DOTALL)
+         if not command_match or not help_match:
+             continue
+         command = command_match.group(1).strip()
+         help_text = help_match.group(1).strip()
+         full_content = match.strip()  # Store the complete entry
+ 
+         # Create a Document instance
+         documents.append(Document(
+             page_content=tokenize(command),
+             metadata={
+                 "full_content": full_content,
+                 "command": command,
+                 "help_text": help_text
+             }
+         ))
+ 
+     # Step 4: Compute embeddings and store them in FAISS
+     embedding_model = OpenAIEmbeddings(model="text-embedding-3-small")
+     vectordb = FAISS.from_documents(documents, embedding_model)
+ 
+     # Step 5: Save FAISS index locally
+     persist_directory = os.path.join(database_path, "faiss/openfoam_command_help")
+     vectordb.save_local(persist_directory)
+ 
+     print(f"{len(documents)} commands indexed successfully with metadata! Saved at: {persist_directory}")
+ 
+ if __name__ == "__main__":
+     main()
Foam-Agent/source/database/script/faiss_tutorials_details.py ADDED
@@ -0,0 +1,96 @@
+ import os
+ import re
+ import argparse
+ from pathlib import Path
+ 
+ from langchain_community.vectorstores import FAISS
+ from langchain_openai.embeddings import OpenAIEmbeddings
+ from langchain_core.documents import Document
+ 
+ # Function to extract specific fields from text
+ def extract_field(field_name: str, text: str) -> str:
+     """Extracts the specified field from the given text."""
+     match = re.search(fr"{field_name}:\s*(.*)", text)
+     return match.group(1).strip() if match else "Unknown"
+ 
+ def tokenize(text: str) -> str:
+     # Replace underscores with spaces
+     text = text.replace('_', ' ')
+     # Insert a space between a lowercase letter and an uppercase letter (global match)
+     text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
+     return text.lower()
+ 
+ def main():
+     # Step 1: Parse command-line arguments
+     parser = argparse.ArgumentParser(
+         description="Process OpenFOAM case data and store embeddings in FAISS."
+     )
+     parser.add_argument(
+         "--database_path",
+         type=str,
+         default=str(Path(__file__).resolve().parent.parent),
+         help="Path to the database directory (default: the 'database' directory)",
+     )
+ 
+     args = parser.parse_args()
+     database_path = args.database_path
+     print(f"Database path: {database_path}")
+ 
+     # Step 2: Read the input file
+     tutorials_details_path = os.path.join(database_path, "raw/openfoam_tutorials_details.txt")
+     if not os.path.exists(tutorials_details_path):
+         raise FileNotFoundError(f"File not found: {tutorials_details_path}")
+ 
+     with open(tutorials_details_path, "r", encoding="utf-8") as file:
+         file_content = file.read()
+ 
+     # Step 3: Extract `<case_begin> ... </case_end>` segments using regex
+     pattern = re.compile(r"<case_begin>(.*?)</case_end>", re.DOTALL)
+     matches = pattern.findall(file_content)
+ 
+     if not matches:
+         raise ValueError("No cases found in the input file. Please check the file content.")
+ 
+     documents = []
+ 
+     for match in matches:
+         full_content = match.strip()  # Store the complete case
+ 
+         index_match = re.search(r"<index>(.*?)</index>", match, re.DOTALL)
+         if not index_match:
+             continue
+         index_content = index_match.group(1).strip()  # Extract `<index>` content
+ 
+         # Extract metadata fields
+         case_name = extract_field("case name", index_content)
+         case_domain = extract_field("case domain", index_content)
+         case_category = extract_field("case category", index_content)
+         case_solver = extract_field("case solver", index_content)
+         case_directory_structure = re.search(r"<directory_structure>([\s\S]*?)</directory_structure>", full_content).group(1)
+         detailed_tutorial = re.search(r"<tutorials>([\s\S]*?)</tutorials>", full_content).group(1)
+ 
+         # Create a Document instance
+         documents.append(Document(
+             page_content=tokenize(index_content + case_directory_structure),
+             metadata={
+                 "full_content": full_content,  # Store full `<case_begin> ... </case_end>`
+                 "case_name": case_name,
+                 "case_domain": case_domain,
+                 "case_category": case_category,
+                 "case_solver": case_solver,
+                 "dir_structure": case_directory_structure,
+                 "tutorials": detailed_tutorial
+             }
+         ))
+ 
+     # Step 4: Compute embeddings and store them in FAISS
+     embedding_model = OpenAIEmbeddings(model="text-embedding-3-small")
+     vectordb = FAISS.from_documents(documents, embedding_model)
+ 
+     # Step 5: Save FAISS index locally
+     persist_directory = os.path.join(database_path, "faiss/openfoam_tutorials_details")
+     vectordb.save_local(persist_directory)
+ 
+     print(f"{len(documents)} cases indexed successfully with metadata! Saved at: {persist_directory}")
+ 
+ if __name__ == "__main__":
+     main()
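`extract_field` falls back to "Unknown" when a field is absent from the index block, which is what keeps the metadata extraction robust to incomplete cases. Reproduced standalone on a minimal index snippet:

```python
import re

def extract_field(field_name: str, text: str) -> str:
    """Return the value after 'field_name:' or 'Unknown' if absent (same logic as above)."""
    match = re.search(fr"{field_name}:\s*(.*)", text)
    return match.group(1).strip() if match else "Unknown"

index_content = "case name: cavity\ncase solver: icoFoam"
print(extract_field("case name", index_content))    # cavity
print(extract_field("case domain", index_content))  # Unknown
```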
Foam-Agent/source/database/script/faiss_tutorials_structure.py ADDED
@@ -0,0 +1,95 @@
1
+ import os
2
+ import re
3
+ import argparse
4
+ from pathlib import Path
5
+
6
+ from langchain_community.vectorstores import FAISS
7
+ from langchain_openai.embeddings import OpenAIEmbeddings
8
+ from langchain_core.documents import Document
9
+
10
+ # Function to extract specific fields from text
11
+ def extract_field(field_name: str, text: str) -> str:
12
+ """Extracts the specified field from the given text."""
13
+ match = re.search(fr"{field_name}:\s*(.*)", text)
14
+ return match.group(1).strip() if match else "Unknown"
15
+
16
+ def tokenize(text: str) -> str:
17
+ # Replace underscores with spaces
18
+ text = text.replace('_', ' ')
19
+ # Insert a space between a lowercase letter and an uppercase letter (global match)
20
+ text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
21
+ return text.lower()
22
+
23
+ def main():
24
+ # Step 1: Parse command-line arguments
25
+ parser = argparse.ArgumentParser(
26
+ description="Process OpenFOAM case data and store embeddings in FAISS."
27
+ )
28
+ parser.add_argument(
29
+ "--database_path",
30
+ type=str,
31
+ default=Path(__file__).resolve().parent.parent,
32
+ help="Path to the database directory (default: '../../')",
33
+ )
34
+
35
+ args = parser.parse_args()
36
+ database_path = args.database_path
37
+ print(f"Database path: {database_path}")
38
+
+     # Step 2: Read the input file
+     database_allrun_path = os.path.join(database_path, "raw/openfoam_tutorials_structure.txt")
+     if not os.path.exists(database_allrun_path):
+         raise FileNotFoundError(f"File not found: {database_allrun_path}")
+
+     with open(database_allrun_path, "r", encoding="utf-8") as file:
+         file_content = file.read()
+
+     # Step 3: Extract `<case_begin> ... </case_end>` segments using regex
+     pattern = re.compile(r"<case_begin>(.*?)</case_end>", re.DOTALL)
+     matches = pattern.findall(file_content)
+
+     if not matches:
+         raise ValueError("No cases found in the input file. Please check the file content.")
+
+     documents = []
+
+     for match in matches:
+         full_content = match.strip()  # Store the complete case
+
+         index_match = re.search(r"<index>(.*?)</index>", match, re.DOTALL)
+         index_content = index_match.group(1).strip()  # Extract `<index>` content
+
+         # Extract metadata fields
+         case_name = extract_field("case name", index_content)
+         case_domain = extract_field("case domain", index_content)
+         case_category = extract_field("case category", index_content)
+         case_solver = extract_field("case solver", index_content)
+         case_directory_structure = re.search(r"<directory_structure>([\s\S]*?)</directory_structure>", full_content).group(1)
+
+         # Create a Document instance
+         documents.append(Document(
+             page_content=tokenize(index_content),  # Use `<index>` content for embedding
+             metadata={
+                 "full_content": full_content,  # Store full `<case_begin> ... </case_end>`
+                 "case_name": case_name,
+                 "case_domain": case_domain,
+                 "case_category": case_category,
+                 "case_solver": case_solver,
+                 'dir_structure': case_directory_structure
+             }
+         ))
+
+     # Step 4: Compute embeddings and store them in FAISS
+     embedding_model = OpenAIEmbeddings(model="text-embedding-3-small")
+     vectordb = FAISS.from_documents(documents, embedding_model)
+
+     # Step 5: Save FAISS index locally
+     persist_directory = os.path.join(database_path, "faiss/openfoam_tutorials_structure")
+     vectordb.save_local(persist_directory)
+
+     print(f"{len(documents)} cases indexed successfully with metadata! Saved at: {persist_directory}")
+
+ if __name__ == "__main__":
+     main()
+
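For reference, the `<case_begin> ... </case_end>` extraction used above can be exercised in isolation. The sample text below is made up for illustration, mirroring the tag format that `tutorial_parser.py` writes out:

```python
import re

# Minimal stand-in for the database file the script reads (made-up content)
sample = (
    "<case_begin>\n<index>\ncase name: cavity\n</index>\n</case_end>\n\n"
    "<case_begin>\n<index>\ncase name: pitzDaily\n</index>\n</case_end>\n"
)

# Same two regexes as in the indexing script
pattern = re.compile(r"<case_begin>(.*?)</case_end>", re.DOTALL)
matches = pattern.findall(sample)
print(len(matches))  # 2

index_content = re.search(r"<index>(.*?)</index>", matches[0], re.DOTALL).group(1).strip()
print(index_content)  # case name: cavity
```

Note that `re.DOTALL` is what lets `.*?` span the newlines inside each case block.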
Foam-Agent/source/database/script/tutorial_parser.py ADDED
@@ -0,0 +1,376 @@
+ import os
+ import subprocess
+ import argparse
+ import concurrent.futures
+ from pathlib import Path
+ import re
+ import json
+
+ def read_files_into_dict(base_path, stats=None):
+     """
+     Reads files from the given base_path directory and stores their content in a dictionary.
+     """
+     if stats is None:
+         stats = {
+             "files_total_scanned": 0,
+             "files_skipped_encoding": 0,
+             "files_skipped_large": 0,
+             "files_read_success": 0,
+             "allrun_read_success": 0,
+             "allrun_read_fail": 0
+         }
+
+     file_contents, file_names, folder_names = {}, [], {}
+     base_depth = base_path.rstrip(os.sep).count(os.sep)
+
+     # Read 'Allrun' file
+     allrun_path = os.path.join(base_path, "Allrun")
+     allrun_content = "None"
+
+     # Check if "Allrun" exists and attempt to read it
+     if os.path.isfile(allrun_path):
+         stats["files_total_scanned"] += 1  # We are scanning the Allrun file
+
+         try:
+             with open(allrun_path, "r") as file_handle:
+                 allrun_content = file_handle.read()
+             stats["allrun_read_success"] += 1
+         except UnicodeDecodeError:
+             print(f"Skipping file due to encoding error: {allrun_path}")
+             stats["files_skipped_encoding"] += 1
+             stats["allrun_read_fail"] += 1
+         except Exception as e:
+             print(f"Error reading file {allrun_path}: {e}")
+             stats["allrun_read_fail"] += 1
+
+     # Traverse the base_path directory to read files
+     for root, _, files in os.walk(base_path):
+         # Only read files one level below the base_path
+         if root.rstrip(os.sep).count(os.sep) == base_depth + 1:
+             for file in files:
+                 file_path = os.path.join(root, file)
+
+                 stats["files_total_scanned"] += 1  # We are scanning this file
+
+                 try:
+                     with open(file_path, "r") as file_handle:
+                         lines = file_handle.readlines()
+
+                     file_contents[file] = "".join(lines)
+                     stats["files_read_success"] += 1
+
+                     folder_names[file] = os.path.relpath(root, base_path)
+                     file_names.append(file)
+                 except UnicodeDecodeError:
+                     print(f"Skipping file due to encoding error: {file_path}")
+                     stats["files_skipped_encoding"] += 1
+                 except Exception as e:
+                     print(f"Error reading file {file_path}: {e}")
+
+     return allrun_content, file_contents, file_names, folder_names, stats
+
+
+ def find_cases(root_dir):
+     """
+     Traverse the directory tree under 'root_dir' and look for cases containing a 'system' folder.
+     For each case found, extract metadata such as case name, solver, category, and domain.
+
+     Additionally, collect statistics in a "funnel-like" manner to see how many directories
+     and files are processed, skipped due to encoding issues, skipped due to large size, etc.
+     """
+     cases = []
+
+     # Initialize statistics dictionary
+     stats = {
+         "directories_scanned": 0,
+         "directories_with_system": 0,
+         "files_total_scanned": 0,
+         "files_skipped_encoding": 0,
+         "files_skipped_large": 0,
+         "files_read_success": 0,
+         "allrun_read_success": 0,
+         "allrun_read_fail": 0
+     }
+
+     # Get FOAM_TUTORIALS from environment or fallback
+     FOAM_TUTORIALS = os.environ.get("FOAM_TUTORIALS", "/home/somasn/Documents/LLM/OpenFOAM-10/tutorials")
+     blockmesh_resource_dir = os.path.join(FOAM_TUTORIALS, "resources", "blockMesh")
+
+     for root, dirs, files in os.walk(root_dir):
+         stats["directories_scanned"] += 1  # Scanning this directory
+
+         # Check if the current directory contains a 'system' folder
+         if "system" in dirs:
+             stats["directories_with_system"] += 1
+
+             # Read files in the current directory (root)
+             allrun_content, file_contents, file_names, folder_names, file_stats = read_files_into_dict(root, stats={
+                 "files_total_scanned": 0,
+                 "files_skipped_encoding": 0,
+                 "files_skipped_large": 0,
+                 "files_read_success": 0,
+                 "allrun_read_success": 0,
+                 "allrun_read_fail": 0
+             })
+
+             # Merge file_stats into the global stats
+             stats["files_total_scanned"] += file_stats["files_total_scanned"]
+             stats["files_skipped_encoding"] += file_stats["files_skipped_encoding"]
+             stats["files_skipped_large"] += file_stats["files_skipped_large"]
+             stats["files_read_success"] += file_stats["files_read_success"]
+             stats["allrun_read_success"] += file_stats["allrun_read_success"]
+             stats["allrun_read_fail"] += file_stats["allrun_read_fail"]
+
+             # The case name is the name of the current directory
+             case_name = os.path.basename(root)
+
+             # Initialize solver, category, and domain
+             solver, category, domain = None, None, None
+
+             # Move up to the parent directory and search up to 3 levels
+             current_path = os.path.dirname(root)
+             found_foam = False
+
+             for level in range(3):
+                 # Stop if the path is empty or if we have reached the root_dir
+                 if (not current_path) or (os.path.basename(current_path) == os.path.basename(root_dir)):
+                     break
+
+                 dir_name = os.path.basename(current_path)
+
+                 # If the directory name ends with 'Foam', treat it as the solver
+                 if dir_name.endswith("Foam"):
+                     solver = dir_name
+                     # The parent of the solver directory is considered the domain
+                     domain = os.path.basename(os.path.dirname(current_path))
+                     found_foam = True
+                     break
+                 elif level == 0:
+                     category = dir_name
+
+                 # Move one level up
+                 current_path = os.path.dirname(current_path)
+
+             # If no solver directory ending with 'Foam' was found, use the relative path logic
+             if not found_foam:
+                 category = None  # Reset category in case it was partially set above
+                 relative_path = os.path.relpath(root, root_dir)
+                 path_components = relative_path.split(os.sep)
+
+                 # If the relative path has exactly 3 components: domain/solver/caseName
+                 if len(path_components) == 3:
+                     domain, solver = path_components[0], path_components[1]
+                 # If the relative path has exactly 4 components: domain/solver/category/caseName
+                 elif len(path_components) == 4:
+                     domain, solver, category = path_components[0], path_components[1], path_components[2]
+
+             # --- NEW LOGIC: Check for missing blockMeshDict and copy if referenced in Allrun ---
+             system_dir = os.path.join(root, "system")
+             blockmeshdict_path = os.path.join(system_dir, "blockMeshDict")
+             if not os.path.isfile(blockmeshdict_path):
+                 # Only try if Allrun exists and was read
+                 if allrun_content != "None":
+                     # Look for blockMesh -dict $FOAM_TUTORIALS/resources/blockMesh/<name>
+                     pattern = r"blockMesh\s+-dict\s+\$FOAM_TUTORIALS/resources/blockMesh/([\w\d_]+)"
+                     match = re.search(pattern, allrun_content)
+                     if match:
+                         referenced_file = match.group(1)
+                         src_blockmeshdict = os.path.join(blockmesh_resource_dir, referenced_file)
+                         if os.path.isfile(src_blockmeshdict):
+                             # Copy to system/blockMeshDict
+                             try:
+                                 with open(src_blockmeshdict, "r") as src_f:
+                                     blockmesh_content = src_f.read()
+                                 # Save to the case's system dir
+                                 os.makedirs(system_dir, exist_ok=True)
+                                 with open(blockmeshdict_path, "w") as dst_f:
+                                     dst_f.write(blockmesh_content)
+                                 # Add to in-memory structures for output
+                                 file_contents["blockMeshDict"] = blockmesh_content
+                                 file_names.append("blockMeshDict")
+                                 folder_names["blockMeshDict"] = "system"
+                                 print(f"[INFO] Copied {src_blockmeshdict} to {blockmeshdict_path} for case {case_name}")
+                             except Exception as e:
+                                 print(f"[WARNING] Failed to copy {src_blockmeshdict} to {blockmeshdict_path}: {e}")
+                         else:
+                             print(f"[WARNING] Referenced blockMeshDict {src_blockmeshdict} not found for case {case_name}")
+                     else:
+                         print(f"[INFO] No blockMesh -dict reference found in Allrun for case {case_name}")
+                 else:
+                     print(f"[INFO] No Allrun file to check for blockMeshDict reference in case {case_name}")
+             # --- END NEW LOGIC ---
+
+             # Append the extracted metadata to the 'cases' list
+             cases.append({
+                 "case_name": case_name,
+                 "solver": solver,
+                 "category": category,
+                 "domain": domain,
+                 "folder_names": folder_names,
+                 "file_names": file_names,
+                 "file_contents": file_contents,
+                 "allrun": allrun_content
+             })
+
+     return cases, stats
+
+
+ def save_cases_to_file(cases, output_dir):
+     """
+     Saves case details, summary, or Allrun content to a file.
+     """
+     allrun_filepath = f"{output_dir}/openfoam_allrun_scripts.txt"
+     tutorials_summary_filepath = f"{output_dir}/openfoam_tutorials_structure.txt"
+     tutorial_filepath = f"{output_dir}/openfoam_tutorials_details.txt"
+     case_stats_filepath = f"{output_dir}/openfoam_case_stats.json"
+
+     allrun_text = ''
+     tutorials_summary_text = ''
+     tutorials_text = ''
+
+     case_stats = {
+         'case_domain': set(),
+         'case_category': set(),
+         'case_solver': set()
+     }
+
+     for case in cases:
+         case_name, case_domain, case_category, case_solver = (
+             case["case_name"], case["domain"], case["category"], case["solver"]
+         )
+
+         if case_domain:
+             case_stats['case_domain'].add(case_domain)
+         if case_category:
+             case_stats['case_category'].add(case_category)
+         if case_solver:
+             case_stats['case_solver'].add(case_solver)
+
+         # Save the case index
+         case_index_text = "<index>\n"
+         case_index_text += f"case name: {case_name}\n"
+         case_index_text += f"case domain: {case_domain}\n"
+         case_index_text += f"case category: {case_category}\n"
+         case_index_text += f"case solver: {case_solver}\n"
+         case_index_text += "</index>\n\n"
+
+         # Save the directory structure
+         folder_file_dict = {}
+         for file_name, folder_name in case["folder_names"].items():
+             if folder_name not in folder_file_dict:
+                 folder_file_dict[folder_name] = []
+             folder_file_dict[folder_name].append(file_name)
+
+         dir_structure_text = "<directory_structure>\n"
+         for folder_name, file_names in folder_file_dict.items():
+             dir_structure_text += f"<dir>directory name: {folder_name}. "
+             dir_structure_text += f"File names in this directory: [{', '.join(file_names)}]</dir>\n"
+         dir_structure_text += "</directory_structure>\n\n"
+
+         if case["allrun"] != "None":
+             # Save the Allrun content
+             allrun_text += f'''
+ <case_begin>
+ {case_index_text}
+ {dir_structure_text}
+ <allrun_script>
+ {case["allrun"]}
+ </allrun_script>
+ </case_end>\n\n\n
+ '''
+
+         # Save the tutorials summary
+         tutorials_summary_text += f"<case_begin>\n{case_index_text}\n{dir_structure_text}\n</case_end>\n\n"
+
+         # Save the detailed tutorials
+         tutorials_text += f"<case_begin>\n{case_index_text}\n{dir_structure_text}\n<tutorials>\n"
+
+         for folder_name, file_names in folder_file_dict.items():
+             tutorials_text += f"<directory_begin>directory name: {folder_name}\n"
+             for file_name in file_names:
+                 tutorials_text += f"<file_begin>file name: {file_name}\n"
+
+                 # Delete comments, such as license information, from the file contents
+                 cleaned_text = re.sub(r'/\*.*?\*/', '', case['file_contents'][file_name], flags=re.DOTALL)
+                 cleaned_text = re.sub(r'//.*', '', cleaned_text)
+
+                 tutorials_text += f"<file_content>{cleaned_text}</file_content>\n"
+                 tutorials_text += f"</file_end>\n\n"
+
+             tutorials_text += f"</directory_end>\n\n"
+
+         tutorials_text += "</tutorials>\n</case_end>\n\n\n"
+
+     with open(allrun_filepath, "w", encoding="utf-8") as file:
+         file.write(allrun_text)
+
+     with open(tutorials_summary_filepath, "w", encoding="utf-8") as file:
+         file.write(tutorials_summary_text)
+
+     with open(tutorial_filepath, "w", encoding="utf-8") as file:
+         file.write(tutorials_text)
+
+     case_stats['case_category'].add("None")
+     case_stats['case_category'] = list(case_stats['case_category'])
+     case_stats['case_domain'] = list(case_stats['case_domain'])
+     case_stats['case_solver'] = list(case_stats['case_solver'])
+
+     with open(case_stats_filepath, "w", encoding="utf-8") as file:
+         json.dump(case_stats, file, ensure_ascii=False, indent=4)
+
+
+ def get_commands_from_directory(directory_path):
+     """Retrieves all command file names from a specified directory using os.scandir."""
+     if not os.path.exists(directory_path):
+         raise FileNotFoundError(f"The directory {directory_path} does not exist.")
+     return [entry.name for entry in os.scandir(directory_path) if entry.is_file()]
+
+ def get_command_help(command, directory_path):
+     """Retrieves the help message for a given command."""
+     try:
+         result = subprocess.run(
+             f"{os.path.join(directory_path, command)} -help", shell=True, capture_output=True, text=True
+         )
+         return result.stdout if result.returncode == 0 else result.stderr
+     except Exception as e:
+         return str(e)
+
+ def fetch_command_helps(commands, directory_path):
+     """Fetch help messages in parallel."""
+     with concurrent.futures.ThreadPoolExecutor() as executor:
+         return dict(zip(commands, executor.map(lambda cmd: get_command_help(cmd, directory_path), commands)))
+
+ if __name__ == "__main__":
+     # python ./database/script/tutorial_parser.py --output_dir=./database/raw --wm_project_dir=$WM_PROJECT_DIR
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--wm_project_dir", required=True, help="Path to WM_PROJECT_DIR")
+     parser.add_argument("--output_dir", default='./database', help="Directory to save output files")
+     args = parser.parse_args()
+
+     print(args)
+
+     tutorial_path = os.path.join(args.wm_project_dir, "tutorials")
+     cases_info, case_stats = find_cases(tutorial_path)
+     print(f"Statistics: {case_stats}")
+     print(f"Found {len(cases_info)} cases in {tutorial_path}")
+
+     output_dir = Path(args.output_dir)
+     output_dir.mkdir(parents=True, exist_ok=True)
+
+     save_cases_to_file(cases_info, output_dir)
+
+     commands_path = Path(args.wm_project_dir) / "platforms/linux64GccDPInt32Opt/bin"
+     commands = get_commands_from_directory(commands_path)
+     command_help_data = fetch_command_helps(commands, commands_path)
+
+     with open(output_dir / "openfoam_commands.txt", "w", encoding="utf-8") as f:
+         f.write("\n".join(commands) + "\n")
+
+     with open(output_dir / "openfoam_command_help.txt", "w", encoding="utf-8") as f:
+         for cmd, help_text in command_help_data.items():
+             f.write(f"<command_begin><command>{cmd}</command><help_text>{help_text}</help_text></command_end>\n\n")
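The comment-stripping step in `save_cases_to_file` (the two `re.sub` calls) can be checked in isolation on a made-up OpenFOAM-style dictionary snippet:

```python
import re

# Made-up dictionary snippet with a block comment and a line comment
raw = "/* header banner */\nFoamFile\n{\n    version 2.0; // format version\n}\n"

# Same two substitutions as in save_cases_to_file
cleaned = re.sub(r'/\*.*?\*/', '', raw, flags=re.DOTALL)  # strip /* ... */ blocks
cleaned = re.sub(r'//.*', '', cleaned)                    # strip // line comments
print(cleaned)
```

The non-greedy `.*?` with `re.DOTALL` keeps a run of several `/* ... */` banners from being merged into one match.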
Foam-Agent/source/environment.yml ADDED
@@ -0,0 +1,104 @@
+ name: openfoamAgent
+ channels:
+   - conda-forge
+   - defaults
+ dependencies:
+   - _libgcc_mutex=0.1
+   - _openmp_mutex=4.5
+   - bzip2=1.0.8
+   - ca-certificates=2025.1.31
+   - expat=2.6.4
+   - ld_impl_linux-64=2.43
+   - libexpat=2.6.4
+   - libffi=3.4.6
+   - libgcc=14.2.0
+   - libgcc-ng=14.2.0
+   - libgomp=14.2.0
+   - liblzma=5.6.4
+   - liblzma-devel=5.6.4
+   - libnsl=2.0.1
+   - libsqlite=3.46.0
+   - libuuid=1.41.5
+   - libzlib=1.2.13
+   - ncurses=6.5
+   - openssl=3.4.1
+   - pip=25.0.1
+   - python=3.12.9
+   - readline=8.2
+   - setuptools=75.8.0
+   - sqlite=3.46.0
+   - tk=8.6.14
+   - tzdata=2025a
+   - wheel=0.45.1
+   - xz=5.6.4
+   - xz-gpl-tools=5.6.4
+   - xz-tools=5.6.4
+   - zlib=1.2.13
+   - pip:
+       - aiohappyeyeballs==2.4.6
+       - aiohttp==3.11.12
+       - aiosignal==1.3.2
+       - annotated-types==0.7.0
+       - anthropic==0.45.2
+       - anyio==4.8.0
+       - attrs==25.1.0
+       - certifi==2025.1.31
+       - charset-normalizer==3.4.1
+       - click==8.1.8
+       - dataclasses-json==0.6.7
+       - defusedxml==0.7.1
+       - distro==1.9.0
+       - faiss-cpu==1.10.0
+       - frozenlist==1.5.0
+       - gitingest==0.1.2
+       - greenlet==3.1.1
+       - h11==0.14.0
+       - httpcore==1.0.7
+       - httpx==0.28.1
+       - httpx-sse==0.4.0
+       - idna==3.10
+       - jiter==0.8.2
+       - jsonpatch==1.33
+       - jsonpointer==3.0.0
+       - langchain==0.3.18
+       - langchain-anthropic==0.2.4
+       - langchain-community==0.3.17
+       - langchain-core==0.3.34
+       - langchain-experimental==0.3.4
+       - langchain-openai==0.2.14
+       - langchain-text-splitters==0.3.6
+       - langchain-ollama==0.1.1
+       - langgraph==0.2.71
+       - langgraph-checkpoint==2.0.12
+       - langgraph-sdk==0.1.51
+       - langserve==0.3.1
+       - langsmith==0.3.8
+       - marshmallow==3.26.1
+       - msgpack==1.1.0
+       - multidict==6.1.0
+       - mypy-extensions==1.0.0
+       - numpy==2.2.2
+       - openai==1.61.1
+       - orjson==3.10.15
+       - packaging==24.2
+       - propcache==0.2.1
+       - pydantic==2.10.6
+       - pydantic-core==2.27.2
+       - pydantic-settings==2.7.1
+       - python-dotenv==1.0.1
+       - pyyaml==6.0.2
+       - regex==2024.11.6
+       - requests==2.32.3
+       - requests-toolbelt==1.0.0
+       - sniffio==1.3.1
+       - sqlalchemy==2.0.38
+       - tenacity==9.0.0
+       - tiktoken==0.8.0
+       - tqdm==4.67.1
+       - typing-extensions==4.12.2
+       - typing-inspect==0.9.0
+       - urllib3==2.3.0
+       - yarl==1.18.3
+       - zstandard==0.23.0
+       - boto3==1.36.26
+       - langchain_aws==0.2.13
Foam-Agent/source/foambench_main.py ADDED
@@ -0,0 +1,119 @@
+ import os
+ import subprocess
+ import sys
+ import argparse
+ import shlex
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Benchmark Workflow Interface")
+     parser.add_argument(
+         '--openfoam_path',
+         type=str,
+         required=True,
+         help="Path to OpenFOAM installation (WM_PROJECT_DIR)"
+     )
+     parser.add_argument(
+         '--output',
+         type=str,
+         required=True,
+         help="Base output directory for benchmark results"
+     )
+     parser.add_argument(
+         '--prompt_path',
+         type=str,
+         required=True,
+         help="User requirement file path for the benchmark"
+     )
+     parser.add_argument(
+         '--custom_mesh_path',
+         type=str,
+         default=None,
+         help="Path to custom mesh file (e.g., .msh, .stl, .obj). If not provided, no custom mesh will be used."
+     )
+     return parser.parse_args()
+
+ def run_command(command_str):
+     """
+     Execute a command string using the current terminal's input/output,
+     with the working directory set to the directory of the current file.
+
+     Parameters:
+         command_str (str): The command to execute, e.g. "python main.py --output_dir xxxx"
+                            or "bash xxxxx.sh".
+     """
+     # Split the command string into a list of arguments
+     args = shlex.split(command_str)
+     # Set the working directory to the directory of the current file
+     cwd = os.path.dirname(os.path.abspath(__file__))
+
+     try:
+         result = subprocess.run(
+             args,
+             cwd=cwd,
+             check=True,
+             stdout=sys.stdout,
+             stderr=sys.stderr,
+             stdin=sys.stdin
+         )
+         print(f"Finished command: Return Code {result.returncode}")
+     except subprocess.CalledProcessError as e:
+         print(f"Error running command: {e}")
+         sys.exit(e.returncode)
+
+ def main():
+     args = parse_args()
+     print(args)
+
+     # Set environment variables
+     WM_PROJECT_DIR = args.openfoam_path
+     # Check if OPENAI_API_KEY is available in the environment
+     openai_api_key = os.getenv("OPENAI_API_KEY")
+     if not openai_api_key:
+         print("Error: OPENAI_API_KEY is not set in the environment.")
+         sys.exit(1)
+
+     # Create the output folder
+     os.makedirs(args.output, exist_ok=True)
+
+     # Define the list of scripts to be executed.
+     # Each tuple consists of (script_path, list_of_arguments).
+     # Scripts can be Python or shell scripts.
+
+     # Get the directory where this script is located
+     script_dir = os.path.dirname(os.path.abspath(__file__))
+     print(f"script_dir: {script_dir}")
+
+     SCRIPTS = []
+
+     # Preprocess the OpenFOAM tutorials
+     if not os.path.exists(f"{script_dir}/database/raw/openfoam_tutorials_details.txt"):
+         SCRIPTS.append(f"python database/script/tutorial_parser.py --output_dir=./database/raw --wm_project_dir={WM_PROJECT_DIR}")
+     if not os.path.exists(f"{script_dir}/database/faiss/openfoam_command_help"):
+         SCRIPTS.append(f"python database/script/faiss_command_help.py --database_path=./database")
+     if not os.path.exists(f"{script_dir}/database/faiss/openfoam_allrun_scripts"):
+         SCRIPTS.append(f"python database/script/faiss_allrun_scripts.py --database_path=./database")
+     if not os.path.exists(f"{script_dir}/database/faiss/openfoam_tutorials_structure"):
+         SCRIPTS.append(f"python database/script/faiss_tutorials_structure.py --database_path=./database")
+     if not os.path.exists(f"{script_dir}/database/faiss/openfoam_tutorials_details"):
+         SCRIPTS.append(f"python database/script/faiss_tutorials_details.py --database_path=./database")
+
+     # Build main workflow command with optional custom mesh path
+     main_cmd = f"python src/main.py --prompt_path='{args.prompt_path}' --output_dir='{args.output}'"
+     if args.custom_mesh_path:
+         main_cmd += f" --custom_mesh_path='{args.custom_mesh_path}'"
+
+     print(f"Main workflow command: {main_cmd}")
+     # Main workflow
+     SCRIPTS.extend([
+         main_cmd
+     ])
+
+     print("Starting workflow...")
+     for script in SCRIPTS:
+         run_command(script)
+     print("Workflow completed successfully.")
+
+ if __name__ == "__main__":
+     ## python foambench_main.py --openfoam_path $WM_PROJECT_DIR --output ./output --prompt_path "./user_requirement.txt"
+     ## python foambench_main.py --openfoam_path $WM_PROJECT_DIR --output ./output --prompt_path "./user_requirement.txt" --custom_mesh_path "./my_mesh.msh"
+     main()
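`run_command` above relies on `shlex.split` to turn a command string into an argv list for `subprocess.run`. Its handling of the single-quoted arguments that `main_cmd` builds (quotes stripped, embedded spaces preserved) can be checked directly; the paths below are made up:

```python
import shlex

# Mirrors the shape of main_cmd built in foambench_main.py (made-up paths)
cmd = "python src/main.py --prompt_path='./user requirement.txt' --output_dir='./output'"
args = shlex.split(cmd)
print(args)
# ['python', 'src/main.py', '--prompt_path=./user requirement.txt', '--output_dir=./output']
```

This is why the quoting in `main_cmd` matters: without it, a prompt path containing a space would be split into two separate arguments.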
Foam-Agent/source/src/__init__.py ADDED
@@ -0,0 +1,4 @@
+ # -*- coding: utf-8 -*-
+ """
+ Package initializer for the src package.
+ """
Foam-Agent/source/src/config.py ADDED
@@ -0,0 +1,20 @@
+ # config.py
+ from dataclasses import dataclass
+ from pathlib import Path
+
+ @dataclass
+ class Config:
+     max_loop: int = 10
+
+     batchsize: int = 10
+     searchdocs: int = 2
+     run_times: int = 1  # current run number (for directory naming)
+     database_path: str = Path(__file__).resolve().parent.parent / "database"
+     run_directory: str = Path(__file__).resolve().parent.parent / "runs"
+     case_dir: str = ""
+     max_time_limit: int = 36000  # Max time limit after which the openfoam run will be terminated
+     model_provider: str = "openai"  # [openai, ollama, bedrock]
+     # model_version should be in ["gpt-4o", "deepseek-r1:32b-qwen-distill-fp16", "qwen2.5:32b-instruct"]
+     model_version: str = "gpt-4o"
+     temperature: float = 0.6
+
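The dataclass pattern used by `Config` can be exercised with a stripped-down, stdlib-only stand-in (`MiniConfig` is hypothetical; the field names mirror those in `config.py`):

```python
from dataclasses import dataclass

@dataclass
class MiniConfig:
    # Subset of the fields defined in src/config.py
    max_loop: int = 10
    model_provider: str = "openai"
    case_dir: str = ""

cfg = MiniConfig()
cfg.case_dir = "./output/run1"  # mirrors how main.py overrides case_dir from --output_dir
print(cfg.max_loop, cfg.model_provider)  # 10 openai
```

Note that every field in a `@dataclass` body needs a type annotation; a bare `name = value` line would become a class attribute rather than an instance field.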
Foam-Agent/source/src/main.py ADDED
@@ -0,0 +1,160 @@
+ from dataclasses import dataclass, field
+ from typing import List, Optional, TypedDict, Literal
+ from langgraph.graph import StateGraph, START, END
+ from langgraph.types import Command
+ import argparse
+ from pathlib import Path
+ from utils import LLMService, GraphState
+
+ from config import Config
+ from nodes.architect_node import architect_node
+ from nodes.meshing_node import meshing_node
+ from nodes.input_writer_node import input_writer_node
+ from nodes.local_runner_node import local_runner_node
+ from nodes.reviewer_node import reviewer_node
+ from nodes.visualization_node import visualization_node
+ from nodes.hpc_runner_node import hpc_runner_node
+ from router_func import (
+     route_after_architect,
+     route_after_input_writer,
+     route_after_runner,
+     route_after_reviewer
+ )
+ import json
+
+ def create_foam_agent_graph() -> StateGraph:
+     """Create the OpenFOAM agent workflow graph."""
+     # Create the graph
+     workflow = StateGraph(GraphState)
+
+     # Add nodes
+     workflow.add_node("architect", architect_node)
+     workflow.add_node("meshing", meshing_node)
+     workflow.add_node("input_writer", input_writer_node)
+     workflow.add_node("local_runner", local_runner_node)
+     workflow.add_node("hpc_runner", hpc_runner_node)
+     workflow.add_node("reviewer", reviewer_node)
+     workflow.add_node("visualization", visualization_node)
+
+     # Add edges
+     workflow.add_edge(START, "architect")
+     workflow.add_conditional_edges("architect", route_after_architect)
+     workflow.add_edge("meshing", "input_writer")
+     workflow.add_conditional_edges("input_writer", route_after_input_writer)
+     workflow.add_conditional_edges("hpc_runner", route_after_runner)
+     workflow.add_conditional_edges("local_runner", route_after_runner)
+     workflow.add_conditional_edges("reviewer", route_after_reviewer)
+     workflow.add_edge("visualization", END)
+
+     return workflow
+
+ def initialize_state(user_requirement: str, config: Config, custom_mesh_path: Optional[str] = None) -> GraphState:
+     case_stats = json.load(open(f"{config.database_path}/raw/openfoam_case_stats.json", "r"))
+     # mesh_type = "custom_mesh" if custom_mesh_path else "standard_mesh"
+     state = GraphState(
+         user_requirement=user_requirement,
+         config=config,
+         case_dir="",
+         tutorial="",
+         case_name="",
+         subtasks=[],
+         current_subtask_index=0,
+         error_command=None,
+         error_content=None,
+         loop_count=0,
+         llm_service=LLMService(config),
+         case_stats=case_stats,
+         tutorial_reference=None,
+         case_path_reference=None,
+         dir_structure_reference=None,
+         case_info=None,
+         allrun_reference=None,
+         dir_structure=None,
+         commands=None,
+         foamfiles=None,
+         error_logs=None,
+         history_text=None,
+         case_domain=None,
+         case_category=None,
+         case_solver=None,
+         mesh_info=None,
+         mesh_commands=None,
+         custom_mesh_used=None,
+         mesh_type=None,
+         custom_mesh_path=custom_mesh_path,
+         review_analysis=None,
+         input_writer_mode="initial",
+         job_id=None,
+         cluster_info=None,
+         slurm_script_path=None
+     )
+     if custom_mesh_path:
+         print(f"Custom mesh path: {custom_mesh_path}")
+     else:
+         print("No custom mesh path provided.")
+     return state
+
+ def main(user_requirement: str, config: Config, custom_mesh_path: Optional[str] = None):
+     """Main function to run the OpenFOAM workflow."""
+     # Create and compile the graph
+     workflow = create_foam_agent_graph()
+     app = workflow.compile()
+
+     # Initialize the state
+     initial_state = initialize_state(user_requirement, config, custom_mesh_path)
+
+     print("Starting Foam-Agent...")
+
+     # Invoke the graph
+     try:
+         result = app.invoke(initial_state)
+         print("Workflow completed successfully!")
+
+         # Print final statistics
+         if result.get("llm_service"):
+             result["llm_service"].print_statistics()
+
+         # print(f"Final state: {result}")
+
+     except Exception as e:
+         print(f"Workflow failed with error: {e}")
+         raise
+
+ if __name__ == "__main__":
+     # python main.py
+     parser = argparse.ArgumentParser(
+         description="Run the OpenFOAM workflow"
+     )
+     parser.add_argument(
+         "--prompt_path",
+         type=str,
+         default=f"{Path(__file__).parent.parent}/user_requirement.txt",
+         help="User requirement file path for the workflow.",
+     )
+     parser.add_argument(
+         "--output_dir",
+         type=str,
+         default="",
+         help="Output directory for the workflow.",
+     )
+     parser.add_argument(
+         "--custom_mesh_path",
+         type=str,
+         default=None,
+         help="Path to custom mesh file (e.g., .msh, .stl, .obj). If not provided, no custom mesh will be used.",
+     )
+
+     args = parser.parse_args()
+     print(args)
+
+     # Initialize configuration.
+     config = Config()
+     if args.output_dir != "":
+         config.case_dir = args.output_dir
+
+     with open(args.prompt_path, 'r') as f:
+         user_requirement = f.read()
+
+     main(user_requirement, config, args.custom_mesh_path)
Foam-Agent/source/src/nodes/__init__.py ADDED
@@ -0,0 +1 @@
+ # nodes package
Foam-Agent/source/src/nodes/architect_node.py ADDED
@@ -0,0 +1,4 @@
+ from src.utils import retrieve_faiss, parse_directory_structure
+
+ def save_file(*args, **kwargs):
+     raise NotImplementedError("save_file function is not implemented yet.")
Foam-Agent/source/src/nodes/hpc_runner_node.py ADDED
@@ -0,0 +1,463 @@
+ # hpc_runner_node.py
+ from typing import List
+ import os
+ import subprocess
+ import json
+ from pydantic import BaseModel, Field
+ import re
+ from utils import (
+     save_file, remove_files, remove_file,
+     run_command, check_foam_errors, retrieve_faiss, remove_numeric_folders
+ )
+
+
+ def extract_cluster_info(state) -> dict:
+     """
+     Extract cluster information from user requirement using LLM.
+
+     Args:
+         state: Current graph state containing user requirement and LLM service
+
+     Returns:
+         dict: Dictionary containing cluster_name, account_number, and other cluster details
+     """
+     user_requirement = state["user_requirement"]
+     case_dir = state["case_dir"]
+
+     # Check if decomposeParDict exists and read its content
+     decompose_par_dict_content = ""
+     decompose_par_dict_path = os.path.join(case_dir, "system", "decomposeParDict")
+     if os.path.exists(decompose_par_dict_path):
+         try:
+             with open(decompose_par_dict_path, 'r') as f:
+                 decompose_par_dict_content = f.read()
+         except Exception as e:
+             print(f"Warning: Could not read decomposeParDict: {e}")
+
+     system_prompt = (
+         "You are an expert in HPC cluster analysis. "
+         "Analyze the user requirement to extract cluster information. "
+         "Look for keywords like: cluster name, account number, partition, queue, "
+         "specific cluster names (e.g., Stampede2, Frontera, Summit, etc.), "
+         "account numbers, project codes, or any mention of specific HPC systems. "
+         ""
+         "IMPORTANT: If a decomposeParDict file is provided, analyze it to determine "
+         "the appropriate number of tasks per node (ntasks_per_node) based on the "
+         "decomposition settings. The number of tasks should match the total number "
+         "of subdomains or processes specified in the decomposeParDict."
+         ""
+         "Return a JSON object with the following structure: "
+         "{"
+         " 'cluster_name': 'name of the cluster or HPC system', "
+         " 'account_number': 'account number or project code', "
+         " 'partition': 'partition name (e.g., normal, debug, gpu)', "
+         " 'nodes': 'number of nodes (default: 1)', "
+         " 'ntasks_per_node': 'number of tasks per node (determine from decomposeParDict if available)', "
+         " 'time_limit': 'time limit in hours (default: 24)', "
+         " 'memory': 'memory per node in GB (default: 64)'"
+         "}"
+         "If any information is not specified, use reasonable defaults based on your expertise. "
60
+ "Only return valid JSON. Don't include any other text."
61
+ )
62
+
63
+ user_prompt = (
64
+ f"User requirement: {user_requirement}\n\n"
65
+ )
66
+
67
+ if decompose_par_dict_content:
68
+ user_prompt += (
69
+ f"decomposeParDict content:\n{decompose_par_dict_content}\n\n"
70
+ "Analyze the decomposeParDict to determine the appropriate number of tasks per node "
71
+ "based on the decomposition settings. "
72
+ )
73
+
74
+ user_prompt += "Extract cluster information and return as JSON object."
75
+
76
+ response = state["llm_service"].invoke(user_prompt, system_prompt)
77
+
78
+ # Try to parse the JSON response
79
+ try:
80
+ # Clean up the response to extract JSON
81
+ response = response.strip()
82
+ if response.startswith('```json'):
83
+ response = response[7:]
84
+ if response.endswith('```'):
85
+ response = response[:-3]
86
+ response = response.strip()
87
+
88
+ cluster_info = json.loads(response)
89
+
90
+ # Set defaults for missing values
91
+ defaults = {
92
+ 'cluster_name': 'default_cluster',
93
+ 'account_number': 'default_account',
94
+ 'partition': 'normal',
95
+ 'nodes': 1,
96
+ 'ntasks_per_node': 1,
97
+ 'time_limit': 24,
98
+ 'memory': 64
99
+ }
100
+
101
+ for key, default_value in defaults.items():
102
+ if key not in cluster_info or cluster_info[key] is None:
103
+ cluster_info[key] = default_value
104
+
105
+ return cluster_info
106
+
107
+ except (json.JSONDecodeError, KeyError) as e:
108
+ print(f"Error parsing cluster info from LLM response: {e}")
109
+ print(f"LLM response: {response}")
110
+ # Return default values if parsing fails
111
+ return {
112
+ 'cluster_name': 'default_cluster',
113
+ 'account_number': 'default_account',
114
+ 'partition': 'normal',
115
+ 'nodes': 1,
116
+ 'ntasks_per_node': 1,
117
+ 'time_limit': 24,
118
+ 'memory': 64
119
+ }
120
+
121
+
122
+ def create_slurm_script(case_dir: str, cluster_info: dict, state) -> str:
123
+ """
124
+ Create a SLURM script for OpenFOAM simulation using LLM.
125
+
126
+ Args:
127
+ case_dir: Directory containing the OpenFOAM case
128
+ cluster_info: Dictionary containing cluster configuration
129
+ state: Current graph state containing LLM service
130
+
131
+ Returns:
132
+ str: Path to the created SLURM script
133
+ """
134
+ system_prompt = (
135
+ "You are an expert in HPC cluster job submission and SLURM scripting. "
136
+ "Create a complete SLURM script for running OpenFOAM simulations. "
137
+ "The script should include:"
138
+ "1. Proper SLURM directives (#SBATCH) based on the cluster information provided"
139
+ "2. Do not load openfoam"
140
+ "3. Load libaraies for openfoam for run in parallel"
141
+ "4. Directory navigation and execution of the Allrun script"
142
+ "5. Error handling and status reporting"
143
+ "6. Any cluster-specific optimizations or requirements"
144
+ "7. Use your understanding of the documentation of the cluster and figure out the syntax of their jobscript."
145
+ ""
146
+ "Return ONLY the complete SLURM script content. Do not include any explanations or markdown formatting."
147
+ "Make sure the script is executable and follows best practices for the specified cluster."
148
+ )
149
+
150
+ user_prompt = (
151
+ f"Create a SLURM script for OpenFOAM simulation with the following parameters:\n"
152
+ f"Cluster: {cluster_info['cluster_name']}\n"
153
+ f"Account: {cluster_info['account_number']}\n"
154
+ f"Partition: {cluster_info['partition']}\n"
155
+ f"Nodes: {cluster_info['nodes']}\n"
156
+ f"Tasks per node: {cluster_info['ntasks_per_node']}\n"
157
+ f"Time limit: {cluster_info['time_limit']} hours\n"
158
+ f"Memory: {cluster_info['memory']} GB per node\n"
159
+ f"Case directory: {case_dir}\n"
160
+ f""
161
+ f"Generate a complete SLURM script that will run the OpenFOAM simulation using the Allrun script."
162
+ )
163
+
164
+ response = state["llm_service"].invoke(user_prompt, system_prompt)
165
+
166
+ # Clean up the response to extract just the script content
167
+ script_content = response.strip()
168
+ if script_content.startswith('```bash'):
169
+ script_content = script_content[7:]
170
+ elif script_content.startswith('```'):
171
+ script_content = script_content[3:]
172
+ if script_content.endswith('```'):
173
+ script_content = script_content[:-3]
174
+ script_content = script_content.strip()
175
+
176
+ # Ensure the script starts with shebang
177
+ if not script_content.startswith('#!/bin/bash'):
178
+ script_content = '#!/bin/bash\n' + script_content
179
+
180
+ script_path = os.path.join(case_dir, "submit_job.slurm")
181
+ save_file(script_path, script_content)
182
+ return script_path
183
+
184
+
185
+ def create_slurm_script_with_error_context(case_dir: str, cluster_info: dict, state, error_message: str = "", previous_script_content: str = "") -> str:
186
+ """
187
+ Create a SLURM script for OpenFOAM simulation using LLM, with error context for retries.
188
+
189
+ Args:
190
+ case_dir: Directory containing the OpenFOAM case
191
+ cluster_info: Dictionary containing cluster configuration
192
+ state: Current graph state containing LLM service
193
+ error_message: Error message from previous submission attempt
194
+ previous_script_content: Content of the previous failed SLURM script
195
+
196
+ Returns:
197
+ str: Path to the created SLURM script
198
+ """
199
+ system_prompt = (
200
+ "You are an expert in HPC cluster job submission and SLURM scripting. "
201
+ "Create a complete SLURM script for running OpenFOAM simulations. "
202
+ "The script should include:"
203
+ "1. Proper SLURM directives (#SBATCH) based on the cluster information provided"
204
+ "2. Do not load OpenFOAM"
205
+ "3. Load libaraies for openfoam for run in parallel"
206
+ "4. Directory navigation and execution of the Allrun script"
207
+ "5. Error handling and status reporting"
208
+ "6. Any cluster-specific optimizations or requirements"
209
+ "7. Use your understanding of the documentation of the cluster and figure out the syntax of their jobscript."
210
+ ""
211
+ "If a previous script and error message are provided, analyze the error and the script "
212
+ "to identify what went wrong and fix it. Common issues to consider:"
213
+ "- Invalid account numbers or partitions"
214
+ "- Insufficient resources (memory, time, nodes)"
215
+ "- Missing modules or environment variables"
216
+ "- Incorrect file paths or permissions"
217
+ "- Cluster-specific requirements or restrictions"
218
+ "- Syntax errors in SLURM directives"
219
+ "- Incorrect module names or versions"
220
+ ""
221
+ "Compare the previous script with the error message to identify the specific issue "
222
+ "and create a corrected version."
223
+ ""
224
+ "Return ONLY the complete SLURM script content. Do not include any explanations or markdown formatting."
225
+ "Make sure the script is executable and follows best practices for the specified cluster."
226
+ )
227
+
228
+ user_prompt = (
229
+ f"Create a SLURM script for OpenFOAM simulation with the following parameters:\n"
230
+ f"Cluster: {cluster_info['cluster_name']}\n"
231
+ f"Account: {cluster_info['account_number']}\n"
232
+ f"Partition: {cluster_info['partition']}\n"
233
+ f"Nodes: {cluster_info['nodes']}\n"
234
+ f"Tasks per node: {cluster_info['ntasks_per_node']}\n"
235
+ f"Time limit: {cluster_info['time_limit']} hours\n"
236
+ f"Memory: {cluster_info['memory']} GB per node\n"
237
+ f"Case directory: {case_dir}\n"
238
+ )
239
+
240
+ if error_message and previous_script_content:
241
+ user_prompt += f"\nPrevious submission failed with error: {error_message}\n"
242
+ user_prompt += f"Previous SLURM script that failed:\n```bash\n{previous_script_content}\n```\n"
243
+ user_prompt += "Please analyze this error and the previous script to identify the issue and create a corrected version."
244
+
245
+ user_prompt += f"\nGenerate a complete SLURM script that will run the OpenFOAM simulation using the Allrun script. Return ONLY the complete SLURM script content. Do not include any explanations or markdown formatting."
246
+
247
+ response = state["llm_service"].invoke(user_prompt, system_prompt)
248
+
249
+ # Clean up the response to extract just the script content
250
+ script_content = response.strip()
251
+ if script_content.startswith('```bash'):
252
+ script_content = script_content[7:]
253
+ elif script_content.startswith('```'):
254
+ script_content = script_content[3:]
255
+ if script_content.endswith('```'):
256
+ script_content = script_content[:-3]
257
+ script_content = script_content.strip()
258
+
259
+ # Ensure the script starts with shebang
260
+ if not script_content.startswith('#!/bin/bash'):
261
+ script_content = '#!/bin/bash\n' + script_content
262
+
263
+ script_path = os.path.join(case_dir, "submit_job.slurm")
264
+ save_file(script_path, script_content)
265
+ return script_path
266
+
267
+
268
+ def submit_slurm_job(script_path: str) -> tuple:
269
+ """
270
+ Submit a SLURM job and return job ID.
271
+
272
+ Args:
273
+ script_path: Path to the SLURM script
274
+
275
+ Returns:
276
+ tuple: (job_id, success, error_message)
277
+ """
278
+ try:
279
+ # Submit the job
280
+ result = subprocess.run(
281
+ ["sbatch", script_path],
282
+ capture_output=True,
283
+ text=True,
284
+ check=True
285
+ )
286
+
287
+ # Extract job ID from output
288
+ output = result.stdout.strip()
289
+ job_id_match = re.search(r'Submitted batch job (\d+)', output)
290
+
291
+ if job_id_match:
292
+ job_id = job_id_match.group(1)
293
+ return job_id, True, ""
294
+ else:
295
+ return None, False, f"Could not extract job ID from output: {output}"
296
+
297
+ except subprocess.CalledProcessError as e:
298
+ return None, False, f"Failed to submit job: {e.stderr}"
299
+ except Exception as e:
300
+ return None, False, f"Unexpected error: {str(e)}"
301
+
302
+
303
+ def check_job_status(job_id: str) -> tuple:
304
+ """
305
+ Check the status of a SLURM job.
306
+
307
+ Args:
308
+ job_id: SLURM job ID
309
+
310
+ Returns:
311
+ tuple: (status, success, error_message)
312
+ """
313
+ try:
314
+ result = subprocess.run(
315
+ ["squeue", "-j", job_id, "--noheader", "-o", "%T"],
316
+ capture_output=True,
317
+ text=True,
318
+ check=True
319
+ )
320
+
321
+ status = result.stdout.strip()
322
+ if status:
323
+ return status, True, ""
324
+ else:
325
+ return "COMPLETED", True, "" # Job not in queue, likely completed
326
+
327
+ except subprocess.CalledProcessError as e:
328
+ return None, False, f"Failed to check job status: {e.stderr}"
329
+ except Exception as e:
330
+ return None, False, f"Unexpected error: {str(e)}"
331
+
332
+
333
+ def hpc_runner_node(state):
334
+ """
335
+ HPC Runner node: Extract cluster info from user requirement, create SLURM script,
336
+ submit job to cluster, wait for completion, and check for errors.
337
+ Retries submission on failure up to max_loop times, regenerating script based on errors.
338
+ """
339
+ config = state["config"]
340
+ case_dir = state["case_dir"]
341
+ allrun_file_path = os.path.join(case_dir, "Allrun")
342
+ max_loop = config.max_loop
343
+ current_attempt = 0
344
+
345
+ print(f"============================== HPC Runner ==============================")
346
+
347
+ # Clean up any previous log and error files.
348
+ out_file = os.path.join(case_dir, "Allrun.out")
349
+ err_file = os.path.join(case_dir, "Allrun.err")
350
+ remove_files(case_dir, prefix="log")
351
+ remove_file(err_file)
352
+ remove_file(out_file)
353
+ remove_numeric_folders(case_dir)
354
+
355
+ # Extract cluster information from user requirement
356
+ print("Extracting cluster information from user requirement...")
357
+ cluster_info = extract_cluster_info(state)
358
+ print(f"Cluster info extracted: {cluster_info}")
359
+
360
+ # Submit the job with retry logic
361
+ while current_attempt < max_loop:
362
+ current_attempt += 1
363
+ print(f"Attempt {current_attempt}/{max_loop}: Creating and submitting SLURM job...")
364
+
365
+ # Create SLURM script (regenerate on retry with error context)
366
+ if current_attempt == 1:
367
+ print("Creating initial SLURM script...")
368
+ script_path = create_slurm_script(case_dir, cluster_info, state)
369
+ else:
370
+ print(f"Regenerating SLURM script based on previous error...")
371
+ # Read the previous failed script content
372
+ previous_script_content = ""
373
+ try:
374
+ with open(script_path, 'r') as f:
375
+ previous_script_content = f.read()
376
+ except Exception as e:
377
+ print(f"Warning: Could not read previous script: {e}")
378
+
379
+ script_path = create_slurm_script_with_error_context(case_dir, cluster_info, state, last_error_msg, previous_script_content)
380
+
381
+ print(f"SLURM script created at: {script_path}")
382
+
383
+ job_id, success, error_msg = submit_slurm_job(script_path)
384
+
385
+ if success:
386
+ print(f"Job submitted successfully with ID: {job_id}")
387
+ break
388
+ else:
389
+ print(f"Attempt {current_attempt} failed: {error_msg}")
390
+ last_error_msg = error_msg # Store error for next iteration
391
+ if current_attempt < max_loop:
392
+ print(f"Retrying in 5 seconds...")
393
+ import time
394
+ time.sleep(5)
395
+ else:
396
+ print(f"Maximum attempts ({max_loop}) reached. Job submission failed.")
397
+ error_logs = [f"Job submission failed after {max_loop} attempts. Last error: {error_msg}"]
398
+ return {
399
+ **state,
400
+ "error_logs": error_logs,
401
+ "job_id": None,
402
+ "cluster_info": cluster_info,
403
+ "slurm_script_path": script_path
404
+ }
405
+
406
+ # Wait for job completion
407
+ print("Waiting for job completion...")
408
+ import time
409
+ max_wait_time = 3600 # 1 hour timeout
410
+ wait_interval = 30 # Check every 30 seconds
411
+ elapsed_time = 0
412
+
413
+ while elapsed_time < max_wait_time:
414
+ status, status_success, status_error = check_job_status(job_id)
415
+
416
+ if not status_success:
417
+ print(f"Failed to check job status: {status_error}")
418
+ error_logs = [f"Status check failed: {status_error}"]
419
+ return {
420
+ **state,
421
+ "error_logs": error_logs,
422
+ "job_id": job_id,
423
+ "cluster_info": cluster_info,
424
+ "slurm_script_path": script_path
425
+ }
426
+
427
+ print(f"Job status: {status}")
428
+
429
+ # Check if job is completed (either successfully or with error)
430
+ if status in ["COMPLETED", "FAILED", "CANCELLED", "TIMEOUT"]:
431
+ print(f"Job finished with status: {status}")
432
+ break
433
+
434
+ # Wait before checking again
435
+ time.sleep(wait_interval)
436
+ elapsed_time += wait_interval
437
+
438
+ if elapsed_time % 300 == 0: # Print progress every 5 minutes
439
+ print(f"Still waiting... ({elapsed_time//60} minutes elapsed)")
440
+
441
+ if elapsed_time >= max_wait_time:
442
+ print("Job timeout reached. Assuming job completed.")
443
+
444
+ # Check for errors in log files (similar to local_runner)
445
+ print("Checking for errors in log files...")
446
+ error_logs = check_foam_errors(case_dir)
447
+
448
+ if len(error_logs) > 0:
449
+ print("Errors detected in the HPC Allrun execution.")
450
+ print(error_logs)
451
+ else:
452
+ print("HPC Allrun executed successfully without errors.")
453
+
454
+ state['loop_count'] += 1
455
+
456
+ # Return updated state
457
+ return {
458
+ **state,
459
+ "error_logs": error_logs,
460
+ "job_id": job_id,
461
+ "cluster_info": cluster_info,
462
+ "slurm_script_path": script_path
463
+ }
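The JSON clean-up path in `extract_cluster_info` above (strip optional ```` ```json ```` fences, parse, back-fill defaults, fall back entirely on a parse failure) can be exercised in isolation. This is a minimal sketch; `parse_llm_json` is a hypothetical helper name, not part of the module:

```python
import json

def parse_llm_json(response: str, defaults: dict) -> dict:
    """Strip optional ```json fences, parse the payload, and back-fill
    defaults, mirroring the clean-up steps in extract_cluster_info."""
    response = response.strip()
    if response.startswith('```json'):
        response = response[7:]
    if response.endswith('```'):
        response = response[:-3]
    response = response.strip()
    try:
        info = json.loads(response)
    except json.JSONDecodeError:
        # Parsing failed: fall back to the defaults wholesale.
        return dict(defaults)
    for key, value in defaults.items():
        if key not in info or info[key] is None:
            info[key] = value
    return info

defaults = {'cluster_name': 'default_cluster', 'nodes': 1}
raw = '```json\n{"cluster_name": "Frontera", "nodes": null}\n```'
print(parse_llm_json(raw, defaults))
# → {'cluster_name': 'Frontera', 'nodes': 1}
```

Note that `null` values are treated the same as missing keys, which is why `nodes` is restored to its default.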
Foam-Agent/source/src/nodes/input_writer_node.py ADDED
@@ -0,0 +1,272 @@
1
+ # input_writer_node.py
2
+ import os
3
+ from utils import save_file, parse_context, retrieve_faiss, FoamPydantic, FoamfilePydantic
4
+ import re
5
+ from typing import List
6
+ from pydantic import BaseModel, Field
7
+
8
+ # System prompts for different modes
9
+ INITIAL_WRITE_SYSTEM_PROMPT = (
10
+ "You are an expert in OpenFOAM simulation and numerical modeling."
11
+ f"Your task is to generate a complete and functional file named: <file_name>{{file_name}}</file_name> within the <folder_name>{{folder_name}}</folder_name> directory. "
12
+ "Ensure all required values are present and match with the files content already generated."
13
+ "Before finalizing the output, ensure:\n"
14
+ "- All necessary fields exist (e.g., if `nu` is defined in `constant/transportProperties`, it must be used correctly in `0/U`).\n"
15
+ "- Cross-check field names between different files to avoid mismatches.\n"
16
+ "- Ensure units and dimensions are correct** for all physical variables.\n"
17
+ f"- Ensure case solver settings are consistent with the user's requirements. Available solvers are: {{case_solver}}.\n"
18
+ "Provide only the code—no explanations, comments, or additional text."
19
+ )
20
+
21
+ REWRITE_SYSTEM_PROMPT = (
22
+ "You are an expert in OpenFOAM simulation and numerical modeling. "
23
+ "Your task is to modify and rewrite the necessary OpenFOAM files to fix the reported error. "
24
+ "Please do not propose solutions that require modifying any parameters declared in the user requirement, try other approaches instead."
25
+ "The user will provide the error content, error command, reviewer's suggestions, and all relevant foam files. "
26
+ "Only return files that require rewriting, modification, or addition; do not include files that remain unchanged. "
27
+ "Return the complete, corrected file contents in the following JSON format: "
28
+ "list of foamfile: [{file_name: 'file_name', folder_name: 'folder_name', content: 'content'}]. "
29
+ "Ensure your response includes only the modified file content with no extra text, as it will be parsed using Pydantic."
30
+ )
31
+
32
+ def compute_priority(subtask):
33
+ if subtask["folder_name"] == "system":
34
+ return 0
35
+ elif subtask["folder_name"] == "constant":
36
+ return 1
37
+ elif subtask["folder_name"] == "0":
38
+ return 2
39
+ else:
40
+ return 3
41
+
42
+
43
+ def parse_allrun(text: str) -> str:
+     match = re.search(r'```(.*?)```', text, re.DOTALL)
+     if match is None:
+         raise ValueError("No fenced code block found in the Allrun response.")
+     return match.group(1).strip()
47
+
48
+ def retrieve_commands(command_path) -> str:
49
+ with open(command_path, 'r') as file:
50
+ commands = file.readlines()
51
+
52
+ return f"[{', '.join([command.strip() for command in commands])}]"
53
+
54
+ class CommandsPydantic(BaseModel):
55
+ commands: List[str] = Field(description="List of commands")
56
+
57
+ def input_writer_node(state):
58
+ """
59
+ InputWriter node: Generate the complete OpenFOAM foamfile.
60
+
61
+ Args:
62
+ state: The current state containing all necessary information
63
+ """
64
+
65
+ mode = state["input_writer_mode"]
66
+
67
+ if mode == "rewrite":
68
+ return _rewrite_mode(state)
69
+ else:
70
+ return _initial_write_mode(state)
71
+
72
+ def _rewrite_mode(state):
73
+ """
74
+ Rewrite mode: Fix errors based on reviewer analysis
75
+ """
76
+ print(f"============================== Rewrite Mode ==============================")
77
+
78
+ config = state["config"]
79
+
80
+ if not state.get("review_analysis"):
81
+ print("No review analysis available for rewrite mode.")
82
+ return state
83
+
84
+ rewrite_user_prompt = (
85
+ f"<foamfiles>{str(state['foamfiles'])}</foamfiles>\n"
86
+ f"<error_logs>{state['error_logs']}</error_logs>\n"
87
+ f"<reviewer_analysis>{state['review_analysis']}</reviewer_analysis>\n\n"
88
+ f"<user_requirement>{state['user_requirement']}</user_requirement>\n\n"
89
+ "Please update the relevant OpenFOAM files to resolve the reported errors, ensuring that all modifications strictly adhere to the specified formats. Ensure all modifications adhere to user requirement."
90
+ )
91
+ rewrite_response = state["llm_service"].invoke(rewrite_user_prompt, REWRITE_SYSTEM_PROMPT, pydantic_obj=FoamPydantic)
92
+
93
+ print(f"============================== Rewrite ==============================")
94
+ # Prepare updated dir_structure and foamfiles without mutating state
95
+ dir_structure = dict(state["dir_structure"]) if state.get("dir_structure") else {}
96
+ foamfiles_list = list(state["foamfiles"].list_foamfile) if state.get("foamfiles") and hasattr(state["foamfiles"], "list_foamfile") else []
97
+
98
+ for foamfile in rewrite_response.list_foamfile:
99
+ print(f"Modified the file: {foamfile.file_name} in folder: {foamfile.folder_name}")
100
+ file_path = os.path.join(state["case_dir"], foamfile.folder_name, foamfile.file_name)
101
+ save_file(file_path, foamfile.content)
102
+
103
+ if foamfile.folder_name not in dir_structure:
104
+ dir_structure[foamfile.folder_name] = []
105
+ if foamfile.file_name not in dir_structure[foamfile.folder_name]:
106
+ dir_structure[foamfile.folder_name].append(foamfile.file_name)
107
+
108
+ foamfiles_list = [f for f in foamfiles_list if not (f.folder_name == foamfile.folder_name and f.file_name == foamfile.file_name)]
109
+ foamfiles_list.append(foamfile)
110
+
111
+ foamfiles = FoamPydantic(list_foamfile=foamfiles_list)
112
+ return {
113
+ "dir_structure": dir_structure,
114
+ "foamfiles": foamfiles,
115
+ "error_logs": []
116
+ }
117
+
118
+ def _initial_write_mode(state):
119
+ """
120
+ Initial write mode: Generate files from scratch
121
+ """
122
+ print(f"============================== Initial Write Mode ==============================")
123
+
124
+ config = state["config"]
125
+ subtasks = state["subtasks"]
126
+ subtasks = sorted(subtasks, key=compute_priority)
127
+
128
+ writed_files = []
129
+ dir_structure = {}
130
+
131
+ for subtask in subtasks:
132
+ file_name = subtask["file_name"]
133
+ folder_name = subtask["folder_name"]
134
+
135
+ if folder_name not in dir_structure:
136
+ dir_structure[folder_name] = []
137
+ dir_structure[folder_name].append(file_name)
138
+
139
+ print(f"Generating file: {file_name} in folder: {folder_name}")
140
+
141
+ if not file_name or not folder_name:
142
+ raise ValueError(f"Invalid subtask format: {subtask}")
143
+
144
+ file_path = os.path.join(state["case_dir"], folder_name, file_name)
145
+ os.makedirs(os.path.dirname(file_path), exist_ok=True)
146
+
147
+ # Retrieve a similar reference foamfile from the tutorial.
148
+ similar_file_text = state["tutorial_reference"]
149
+
150
+ # Generate the complete foamfile.
151
+ code_system_prompt = INITIAL_WRITE_SYSTEM_PROMPT.format(
152
+ file_name=file_name,
153
+ folder_name=folder_name,
154
+ case_solver=state['case_stats']['case_solver']
155
+ )
156
+
157
+ code_user_prompt = (
158
+ f"User requirement: {state['user_requirement']}\n"
159
+ f"Refer to the following similar case file content to ensure the generated file aligns with the user requirement:\n<similar_case_reference>{similar_file_text}</similar_case_reference>\n"
160
+ f"Similar case reference is always correct. If you find the user requirement is very consistent with the similar case reference, you should use the similar case reference as the template to generate the file."
161
+ f"Just modify the necessary parts to make the file complete and functional."
162
+ "Please ensure that the generated file is complete, functional, and logically sound."
163
+ "Additionally, apply your domain expertise to verify that all numerical values are consistent with the user's requirements, maintaining accuracy and coherence."
164
+ "When generating controlDict, do not include anything to preform post processing. Just include the necessary settings to run the simulation."
165
+ )
166
+ if len(writed_files) > 0:
167
+ code_user_prompt += f"The following files have already been generated: {str(writed_files)}\n\n\nYou should ensure that the new file is consistent with the previous files, such as boundary conditions, mesh settings, etc."
168
+
169
+ generation_response = state["llm_service"].invoke(code_user_prompt, code_system_prompt)
170
+
171
+ code_context = parse_context(generation_response)
172
+ save_file(file_path, code_context)
173
+
174
+ writed_files.append(FoamfilePydantic(file_name=file_name, folder_name=folder_name, content=code_context))
175
+
176
+ # Write the Allrun script.
177
+ case_dir = state["case_dir"]
178
+ allrun_file_path = os.path.join(case_dir, "Allrun")
179
+ if os.path.exists(allrun_file_path):
180
+ print("Warning: Allrun file exists. Overwriting.")
181
+
182
+ # Retrieve available commands from the FAISS "Commands" database.
183
+ commands = retrieve_commands(f"{config.database_path}/raw/openfoam_commands.txt")
184
+
185
+ # Include mesh commands if custom mesh is used
186
+ mesh_commands_info = ""
187
+ if state.get("custom_mesh_used") and state.get("mesh_commands"):
188
+ mesh_commands_info = f"\nCustom mesh commands to include: {state['mesh_commands']}"
189
+ print(f"Including custom mesh commands: {state['mesh_commands']}")
190
+
191
+ command_system_prompt = (
192
+ "You are an expert in OpenFOAM. The user will provide a list of available commands. "
193
+ "Your task is to generate only the necessary OpenFOAM commands required to create an Allrun script for the given user case, based on the provided directory structure. "
194
+ "Return only the list of commands—no explanations, comments, or additional text."
195
+ )
196
+
197
+ if state.get("mesh_type") == "custom_mesh":
198
+ command_system_prompt += "If custom mesh commands are provided, include them in the appropriate order (typically after blockMesh or instead of blockMesh if custom mesh is used). "
199
+
200
+ command_user_prompt = (
201
+ f"Available OpenFOAM commands for the Allrun script: {commands}\n"
202
+ f"Case directory structure: {dir_structure}\n"
203
+ f"User case information: {state['case_info']}\n"
204
+ f"Reference Allrun scripts from similar cases: {state['allrun_reference']}\n"
205
+ "Generate only the required OpenFOAM command list—no extra text."
206
+ )
207
+
208
+ if state.get("mesh_type") == "custom_mesh":
209
+ command_user_prompt += f"{mesh_commands_info}\n"
210
+
211
+ command_response = state["llm_service"].invoke(command_user_prompt, command_system_prompt, pydantic_obj=CommandsPydantic)
212
+
213
+ if len(command_response.commands) == 0:
214
+ print("Failed to generate subtasks.")
215
+ raise ValueError("Failed to generate subtasks.")
216
+
217
+ print(f"Need {len(command_response.commands)} commands.")
218
+
219
+ commands_help = []
220
+ for command in command_response.commands:
221
+ command_help = retrieve_faiss("openfoam_command_help", command, topk=config.searchdocs)
222
+ commands_help.append(command_help[0]['full_content'])
223
+ commands_help = "\n".join(commands_help)
224
+
225
+
226
+ allrun_system_prompt = (
227
+ "You are an expert in OpenFOAM. Generate an Allrun script based on the provided details."
228
+ f"Available commands with descriptions: {commands_help}\n\n"
229
+ f"Reference Allrun scripts from similar cases: {state['allrun_reference']}\n\n"
230
+ "If custom mesh commands are provided, make sure to include them in the appropriate order in the Allrun script. "
231
+ "CRITICAL: Do not include any post processing commands in the Allrun script."
232
+ "CRITICAL: Do not include any commands to convert mesh to foam format like gmshToFoam or others."
233
+ )
234
+
235
+ if state.get("mesh_mode") == "custom":
236
+ allrun_system_prompt += "CRITICAL: Do not include any other mesh commands other than the custom mesh commands.\n"
237
+ allrun_system_prompt += "CRITICAL: Do not include any gmshToFoam commands in the Allrun script."
238
+
239
+ allrun_user_prompt = (
240
+ f"User requirement: {state['user_requirement']}\n"
241
+ f"Case directory structure: {dir_structure}\n"
242
+ f"User case infomation: {state['case_info']}\n"
243
+ f"{mesh_commands_info}\n"
244
+ "All run scripts for these similar cases are for reference only and may not be correct, as you might be a different case solver or have a different directory structure. "
245
+ "You need to rely on your OpenFOAM and physics knowledge to discern this, and pay more attention to user requirements, "
246
+ "as your ultimate goal is to fulfill the user's requirements and generate an allrun script that meets those requirements."
247
+ "CRITICAL: Do not include any post processing commands in the Allrun script."
248
+ "CRITICAL: Do not include any commands to convert mesh to foam format like gmshToFoam or others."
249
+ "CRITICAL: Do not include any commands that run gmsh to create the mesh."
250
+ "Generate the Allrun script strictly based on the above information. Do not include explanations, comments, or additional text. Put the code in ``` tags."
251
+ )
252
+
253
+ if state.get("mesh_mode") == "custom":
254
+ allrun_user_prompt += "CRITICAL: Do not include any other mesh commands other than the custom mesh commands.\n"
255
+ allrun_user_prompt += "CRITICAL: Do not include any gmshToFoam commands in the Allrun script."
256
+
257
+
258
+
259
+ allrun_response = state["llm_service"].invoke(allrun_user_prompt, allrun_system_prompt)
260
+
261
+ allrun_script = parse_allrun(allrun_response)
262
+ save_file(allrun_file_path, allrun_script)
263
+
264
+ writed_files.append(FoamfilePydantic(file_name="Allrun", folder_name="./", content=allrun_script))
265
+ foamfiles = FoamPydantic(list_foamfile=writed_files)
266
+
267
+ # Return updated state
268
+ return {
269
+ "dir_structure": dir_structure,
270
+ "commands": command_response.commands,
271
+ "foamfiles": foamfiles
272
+ }
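The write order enforced by `compute_priority` above (system files first, then `constant`, then the `0/` initial-condition directory, then everything else) can be shown with a condensed but equivalent rewrite of the function; the sample subtasks are illustrative, not from the repository:

```python
def compute_priority(subtask):
    # Same ordering as input_writer_node: system → constant → 0 → everything else.
    order = {"system": 0, "constant": 1, "0": 2}
    return order.get(subtask["folder_name"], 3)

subtasks = [
    {"folder_name": "0", "file_name": "U"},
    {"folder_name": "system", "file_name": "controlDict"},
    {"folder_name": "constant", "file_name": "transportProperties"},
]
ordered = sorted(subtasks, key=compute_priority)
print([s["file_name"] for s in ordered])
# → ['controlDict', 'transportProperties', 'U']
```

This ordering matters because later files (e.g. `0/U`) are generated with the already-written files in context, so solver and transport settings exist before the boundary conditions that reference them.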
Foam-Agent/source/src/nodes/local_runner_node.py ADDED
@@ -0,0 +1,49 @@
+ # runner_node.py
+ from typing import List
+ import os
+ from pydantic import BaseModel, Field
+ import re
+ from utils import (
+     save_file, remove_files, remove_file,
+     run_command, check_foam_errors, retrieve_faiss, remove_numeric_folders
+ )
+
+
+ def local_runner_node(state):
+     """
+     Runner node: execute the case's Allrun script and check the logs for errors.
+     On error, the returned state carries the collected errors in `error_logs`.
+     """
+     config = state["config"]
+     case_dir = state["case_dir"]
+     allrun_file_path = os.path.join(case_dir, "Allrun")
+
+     print("============================== Runner ==============================")
+
+     # Clean up any previous log and error files.
+     out_file = os.path.join(case_dir, "Allrun.out")
+     err_file = os.path.join(case_dir, "Allrun.err")
+     remove_files(case_dir, prefix="log")
+     remove_file(err_file)
+     remove_file(out_file)
+     remove_numeric_folders(case_dir)
+
+     # Execute the Allrun script.
+     run_command(allrun_file_path, out_file, err_file, case_dir, config)
+
+     # Check for errors.
+     error_logs = check_foam_errors(case_dir)
+
+     if len(error_logs) > 0:
+         print("Errors detected in the Allrun execution.")
+         print(error_logs)
+     else:
+         print("Allrun executed successfully without errors.")
+
+     state['loop_count'] += 1
+     # Return updated state
+     return {
+         **state,
+         "error_logs": error_logs
+     }
+
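`check_foam_errors` is imported from `utils` and its body is not part of this diff. A rough sketch of what such a scan could look like, assuming it flags OpenFOAM's `FOAM FATAL` marker in the `log.*` files a typical Allrun produces (the helper name and return shape here are illustrative assumptions, not the actual implementation):

```python
import os

def scan_foam_logs(case_dir: str):
    """Collect error excerpts from OpenFOAM log files in case_dir.

    Returns a list of {"file": ..., "error_content": ...} dicts, one per
    log file that contains a fatal-error marker.
    """
    errors = []
    for name in sorted(os.listdir(case_dir)):
        if not name.startswith("log"):
            continue
        with open(os.path.join(case_dir, name)) as fh:
            text = fh.read()
        if "FOAM FATAL" in text:
            errors.append({"file": name, "error_content": text})
    return errors
```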
Foam-Agent/source/src/nodes/meshing_node.py ADDED
@@ -0,0 +1,926 @@
+ import os
+ import shutil
+ import subprocess
+ import re
+ from typing import List, Optional
+ from pydantic import BaseModel, Field
+ from utils import save_file, remove_file
+
+ # System prompts for LLM interactions
+ BOUNDARY_SYSTEM_PROMPT = (
+     "You are an expert in OpenFOAM mesh processing and simulations. "
+     "Your role is to analyze and modify boundary conditions in the OpenFOAM polyMesh boundary file. "
+     "You understand both 2D and 3D simulations and know how to properly set boundary conditions. "
+     "For 2D simulations, you know which boundaries should be set to 'empty' type and 'empty' physicalType. "
+     "You are precise and only return the exact boundary file content without any additional text or explanations. "
+     "IMPORTANT: Only change the specified boundary to 'empty' type and leave all other boundaries exactly as they are."
+ )
+
+ CONTROLDICT_SYSTEM_PROMPT = (
+     "You are an expert in OpenFOAM simulation setup. "
+     "Your role is to create a basic controlDict file for mesh conversion. "
+     "You understand the minimal requirements needed for gmshToFoam to work. "
+     "You are precise and only return the exact controlDict file content without any additional text or explanations."
+ )
+
+ GMSH_PYTHON_SYSTEM_PROMPT = (
+     "You are an expert in GMSH Python API and OpenFOAM mesh generation. "
+     "Your role is to create Python code that uses the GMSH library to generate meshes based on user requirements. "
+     "You understand: "
+     "- GMSH Python API for geometry creation "
+     "- How to create points, lines, surfaces, and volumes programmatically "
+     "- How to assign physical groups for OpenFOAM compatibility "
+     "- How to control mesh sizing and refinement "
+     "- How to handle 2D and 3D geometries "
+     "You can: "
+     "- Create complex geometries using GMSH Python API "
+     "- Set up proper boundary conditions for OpenFOAM "
+     "- Implement mesh refinement strategies "
+     "- Generate 3D meshes with correct boundary assignments "
+     "CRITICAL REQUIREMENTS: "
+     "- Always generate 3D meshes for OpenFOAM simulations "
+     "- Set mesh sizes and generate 3D mesh: gmsh.model.mesh.generate(3) "
+     "- AFTER 3D mesh generation, identify surfaces using gmsh.model.getEntities(2) "
+     "- Use gmsh.model.getBoundingBox(dim, tag) to analyze surface positions and categorize them "
+     "- Do not use gmsh.model.getCenterOfMass(dim, tag) function to analyze surface positions "
+     "- Create 2D physical groups based on spatial analysis, not during geometry creation "
+     "- Use user-specified boundary names "
+     "- Create physical groups for all surfaces and the volume domain "
+     "- Set gmsh.option.setNumber('Mesh.MshFileVersion', 2.2) for OpenFOAM compatibility "
+     "- Save as 'geometry.msh' and finalize GMSH "
+     "- Use proper coordinate system - define z_min and z_max variables and use them consistently for boundary detection "
+     "- Use bounding box coordinates (x_min, y_min, z_min, x_max, y_max, z_max) directly for boundary detection, NOT center points "
+     "- Ensure all boundary types (example: inlet, outlet, top, bottom, cylinder, frontAndBack) are properly detected and created "
+     "CRITICAL ORDER: Create geometry then Extrude then Synchronize then Generate mesh then create physical groups "
+     "CRITICAL: Use bounding box coordinates consistently for ALL boundaries - do not mix center points and bounding box coordinates in the same boundary detection logic "
+     "MOST CRITICAL: NEVER create physical groups before mesh generation. Always create them AFTER gmsh.model.mesh.generate(3) "
+     "MOST CRITICAL: Physical groups created before mesh generation will reference wrong surface tags after extrusion and meshing "
+     "CRITICAL FACE DETECTION: "
+     "- For thin boundary surfaces: use abs(zmin - zmax) < tol AND (abs(zmin - z_min) < tol OR abs(zmin - z_max) < tol) "
+     "- Thin surfaces at z_min and z_max are boundary surfaces that need physical groups "
+     "- Use tolerance tol = 1e-6 for floating point comparisons "
+     "- Ensure ALL user-specified boundaries are detected and assigned to physical groups "
+     "IMPORTANT: Use your expertise to create robust, adaptable code that can handle various geometry types and boundary conditions."
+ )
+
+ BOUNDARY_EXTRACTION_SYSTEM_PROMPT = (
+     "You are an expert in OpenFOAM mesh generation and boundary condition analysis. "
+     "Your role is to extract boundary names from user requirements for mesh generation. "
+     "You understand: "
+     "- Common OpenFOAM boundary types (inlet, outlet, wall, cylinder, etc.) "
+     "- How to identify boundary names from natural language descriptions "
+     "- The importance of accurate boundary identification for mesh generation "
+     "You can: "
+     "- Parse user requirements to identify all mentioned boundaries "
+     "- Distinguish between boundary names and other geometric terms "
+     "- Handle variations in boundary naming conventions "
+     "- Return a clean list of boundary names "
+     "IMPORTANT: Return ONLY a comma-separated list of boundary names without any additional text, explanations, or formatting. "
+     "Example: inlet,outlet,wall,cylinder "
+     "If no boundaries are mentioned, return an empty string."
+ )
+
+ GMSH_PYTHON_ERROR_CORRECTION_SYSTEM_PROMPT = (
+     "You are an expert in debugging GMSH Python API code. "
+     "Your role is to analyze GMSH Python errors and fix the corresponding code. "
+     "You understand common GMSH Python API errors including: "
+     "- Geometry definition errors (invalid points, lines, surfaces, volumes) "
+     "- Physical group assignment issues "
+     "- Mesh generation problems "
+     "- API usage errors "
+     "- Missing boundary definitions that cause OpenFOAM conversion failures "
+     "- Mesh quality issues detected by checkMesh (skewness, aspect ratio, etc.) "
+     "You can identify the root cause of errors and provide corrected Python code. "
+     "CRITICAL REQUIREMENTS: "
+     "- Ensure 3D mesh generation for OpenFOAM compatibility "
+     "- Use proper spatial analysis for boundary identification "
+     "- Create complete physical group definitions for surfaces and volumes "
+     "- Handle various geometry types and boundary conditions "
+     "- When missing boundaries are mentioned, ensure they are properly defined "
+     "- Do not use gmsh.model.getCenterOfMass(dim, tag) function to analyze surface positions "
+     "- Address mesh quality issues by adjusting mesh sizing and refinement strategies "
+     "CRITICAL CORRECTIONS: "
+     "- Use proper coordinate system variables (z_min, z_max) for boundary detection "
+     "- Use bounding box coordinates directly for boundary detection, NOT center points "
+     "- Ensure all boundary types are detected: (example: inlet, outlet, top, bottom, cylinder, frontAndBack) "
+     "- Check boundary detection logic for coordinate system consistency "
+     "- Verify that extrusion creates proper 3D geometry with all expected surfaces "
+     "- For mesh quality issues: adjust mesh sizes, add refinement zones, improve geometry definition "
+     "CRITICAL ORDER: Create geometry then Extrude then Synchronize then Generate mesh then create physical groups "
+     "CRITICAL: Use bounding box coordinates consistently for ALL boundary types - do not mix center points and bounding box coordinates in the same boundary detection logic "
+     "MOST CRITICAL FIX: If boundaries are missing after gmshToFoam, move ALL physical group creation to AFTER gmsh.model.mesh.generate(3) "
+     "MOST CRITICAL FIX: Physical groups created before mesh generation will have wrong surface tag references "
+     "CRITICAL FACE DETECTION FIXES: "
+     "- Fix thin boundary detection: use abs(zmin - zmax) < tol AND (abs(zmin - z_min) < tol OR abs(zmin - z_max) < tol) "
+     "- Use tolerance tol = 1e-6 for all floating point comparisons "
+     "- Ensure thin surfaces at z_min and z_max are properly classified as boundary surfaces "
+     "- Check that ALL user-specified boundaries are detected and assigned to physical groups "
+     "MESH QUALITY FIXES: "
+     "- For high skewness: refine mesh in problematic areas, adjust element sizes "
+     "- For poor aspect ratio: use smaller mesh sizes, add refinement zones "
+     "- For non-orthogonality: improve geometry definition, use structured meshing where possible "
+     "- For negative volume elements: check geometry validity, ensure proper surface orientation "
+     "IMPORTANT: Use your expertise to diagnose and fix issues while maintaining code adaptability for different problems."
+ )
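The face-detection rule the prompts above insist on (`abs(zmin - zmax) < tol` combined with a check against the domain's z extremes) can be expressed directly. A small sketch, assuming bounding boxes in GMSH's `(xmin, ymin, zmin, xmax, ymax, zmax)` order as returned by `gmsh.model.getBoundingBox(2, tag)`; the function name is illustrative:

```python
TOL = 1e-6

def is_thin_boundary_surface(bbox, z_min, z_max, tol=TOL):
    """True if a surface is flat in z and sits at the domain's z_min or z_max.

    bbox is (xmin, ymin, zmin, xmax, ymax, zmax) for a 2D entity. Such
    surfaces are the front/back planes that need 'empty'-style physical
    groups in a quasi-2D OpenFOAM case.
    """
    _, _, zmin, _, _, zmax = bbox
    flat_in_z = abs(zmin - zmax) < tol
    at_domain_extreme = abs(zmin - z_min) < tol or abs(zmin - z_max) < tol
    return flat_in_z and at_domain_extreme
```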
+
+ class MeshInfoPydantic(BaseModel):
+     mesh_file_path: str = Field(description="Path to the custom mesh file (e.g., .msh, .stl, .obj)")
+     mesh_file_type: str = Field(description="Type of mesh file (gmsh, stl, obj, etc.)")
+     mesh_description: str = Field(description="Description of the mesh and any specific requirements")
+     requires_blockmesh_removal: bool = Field(description="Whether to remove blockMeshDict file", default=True)
+
+ class MeshCommandsPydantic(BaseModel):
+     mesh_commands: List[str] = Field(description="List of OpenFOAM commands needed to process the custom mesh")
+     mesh_file_destination: str = Field(description="Destination path for the mesh file in the case directory")
+
+ class GMSHPythonCode(BaseModel):
+     python_code: str = Field(description="Complete Python code using GMSH library")
+     mesh_type: str = Field(description="Type of mesh (2D or 3D)")
+     geometry_type: str = Field(description="Type of geometry being created")
+
+ class GMSHPythonCorrection(BaseModel):
+     corrected_code: str = Field(description="Corrected GMSH Python code")
+     error_analysis: str = Field(description="Analysis of the error and what was fixed")
+
+ def _correct_gmsh_python_code(state, current_code, error_output):
+     """
+     Attempt to correct GMSH Python code based on error output or missing boundary info.
+
+     Args:
+         state: State object containing LLM service
+         current_code: Current Python code content
+         error_output: GMSH Python error output or missing boundary error message
+
+     Returns:
+         Corrected Python code content or None if correction failed
+     """
+     try:
+         # Detect if this is a boundary mismatch error
+         is_boundary_mismatch = (
+             isinstance(error_output, str) and "Boundary mismatch after gmshToFoam" in error_output
+         )
+         found_boundaries = state['found_boundaries']
+         expected_boundaries = state['expected_boundaries']
+         boundary_info = ""
+         if is_boundary_mismatch and found_boundaries is not None and expected_boundaries is not None:
+             boundary_info = (
+                 f"\n<boundary_mismatch>Found boundaries in OpenFOAM: {found_boundaries}. "
+                 f"Expected boundaries: {expected_boundaries}. "
+                 "Please correct the mesh code so that the boundaries in the OpenFOAM boundary file match the expected boundaries exactly. "
+                 "Note that these boundaries might be present in the msh file, but not in the boundary file after running gmshToFoam to convert the msh file to OpenFOAM format. "
+                 "MOST LIKELY CAUSES: "
+                 "1. Physical groups were created before mesh generation. Move ALL physical group creation to AFTER gmsh.model.mesh.generate(3). "
+                 "2. Points created at z=0 instead of z=z_min. Create ALL points at z=z_min for proper boundary detection. "
+                 "3. Incorrect surface detection logic - surfaces not properly classified by position. "
+                 "4. If 'defaultFaces' appears, it means some surfaces weren't assigned to any physical group. "
+                 "5. Check that ALL surfaces are being classified and assigned to the correct user-specified boundary names."
+                 "</boundary_mismatch>"
+             )
+         if is_boundary_mismatch:
+             correction_prompt = (
+                 f"<user_requirements>{state['user_requirement']}</user_requirements>{boundary_info}\n"
+                 f"<current_python_code>{current_code}</current_python_code>\n"
+                 "Please analyze the current Python code and the boundary mismatch information. "
+                 "The mesh generation was successful, but the boundaries in the OpenFOAM conversion do not match the expected boundaries. "
+                 "Note that these boundaries might be present in the msh file, but not in the boundary file after running gmshToFoam to convert the msh file to OpenFOAM format. "
+                 "MOST LIKELY SOLUTIONS: "
+                 "1. Move ALL physical group creation to AFTER gmsh.model.mesh.generate(3). "
+                 "2. Use correct thin boundary detection: abs(zmin - zmax) < tol AND (abs(zmin - z_min) < tol OR abs(zmin - z_max) < tol). "
+                 "3. Use tolerance tol = 1e-6 for all floating point comparisons. "
+                 "4. Use exact boundary names from user requirements, do not hardcode specific names. "
+                 "Physical groups created before mesh generation will reference wrong surface tags after extrusion and meshing. "
+                 "Provide a corrected Python code that ensures the boundaries in the OpenFOAM boundary file match the expected boundaries exactly. "
+                 "IMPORTANT: Return ONLY the complete corrected Python code without any additional text."
+             )
+         else:
+             # Fallback to previous logic for other errors
+             missing_boundary_info = ""
+             if 'missing_boundaries' in state and state['missing_boundaries']:
+                 missing_boundary_info = (
+                     f"\n<missing_boundaries>Previous attempts were missing these boundaries: {state['missing_boundaries']}. "
+                     "Ensure these boundaries are properly defined in the GMSH physical groups.</missing_boundaries>"
+                 )
+             correction_prompt = (
+                 f"<user_requirements>{state['user_requirement']}</user_requirements>{missing_boundary_info}\n"
+                 f"<current_python_code>{current_code}</current_python_code>\n"
+                 f"<gmsh_python_error_output>{error_output}</gmsh_python_error_output>\n"
+                 "Please analyze the GMSH Python error output and the current Python code. "
+                 "Identify the specific error and provide a corrected Python code that fixes the issue. "
+                 "IMPORTANT: Return ONLY the complete corrected Python code without any additional text."
+             )
+         correction_response = state["llm_service"].invoke(
+             correction_prompt,
+             GMSH_PYTHON_ERROR_CORRECTION_SYSTEM_PROMPT,
+             pydantic_obj=GMSHPythonCorrection
+         )
+         if correction_response.corrected_code:
+             print(f"Error analysis: {correction_response.error_analysis}")
+             return correction_response.corrected_code
+     except Exception as e:
+         print(f"Error in Python code correction: {str(e)}")
+     return None
+
+ def extract_boundary_names_from_requirements(state, user_requirement):
+     """Extract boundary names mentioned in user requirements using LLM."""
+     try:
+         extraction_prompt = (
+             f"<user_requirements>{user_requirement}</user_requirements>\n"
+             "Please extract all boundary names mentioned in the user requirements. "
+             "Look for terms like inlet, outlet, wall, cylinder, top, bottom, front, back, side, etc. "
+             "Focus on boundaries that would need to be defined in the mesh for OpenFOAM simulation. "
+             "Return ONLY a comma-separated list of boundary names without any additional text."
+         )
+
+         boundary_response = state["llm_service"].invoke(extraction_prompt, BOUNDARY_EXTRACTION_SYSTEM_PROMPT).strip()
+
+         if boundary_response:
+             boundary_names = [name.strip() for name in boundary_response.split(',') if name.strip()]
+             print(f"LLM extracted boundary names: {boundary_names}")
+             return boundary_names
+         else:
+             print("No boundary names extracted by LLM")
+             return []
+
+     except Exception as e:
+         print(f"Error in LLM boundary extraction: {e}")
+         # Fallback to simple keyword matching
+         boundary_keywords = ['inlet', 'outlet', 'wall', 'cylinder', 'top', 'bottom', 'front', 'back', 'side']
+         found_boundaries = []
+
+         requirement_lower = user_requirement.lower()
+         for keyword in boundary_keywords:
+             if keyword in requirement_lower:
+                 found_boundaries.append(keyword)
+
+         print(f"Fallback boundary extraction: {found_boundaries}")
+         return found_boundaries
+
+ def check_boundary_file_for_missing_boundaries(boundary_file_path, expected_boundaries):
+     """Check if all expected boundaries are present in the boundary file."""
+     if not os.path.exists(boundary_file_path):
+         return False, expected_boundaries, []
+
+     try:
+         with open(boundary_file_path, 'r') as f:
+             content = f.read()
+
+         boundary_pattern = r'(\w+)\s*\{'
+         found_boundaries = re.findall(boundary_pattern, content)
+
+         boundary_keywords = ['type', 'physicalType', 'nFaces', 'startFace', 'FoamFile']
+         found_boundaries = [b for b in found_boundaries if b not in boundary_keywords]
+
+         missing_boundaries = [b for b in expected_boundaries if b not in found_boundaries]
+         all_present = len(missing_boundaries) == 0
+
+         return all_present, missing_boundaries, found_boundaries
+
+     except Exception as e:
+         print(f"Error reading boundary file: {e}")
+         return False, expected_boundaries, []
+
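The regex in `check_boundary_file_for_missing_boundaries` treats every `name {` occurrence as a patch name and then filters out known dictionary keywords. A quick illustration on a trimmed, hypothetical boundary-file snippet:

```python
import re

sample = """FoamFile { version 2.0; }
inlet
{
    type patch;
    nFaces 20;
}
frontAndBack
{
    type empty;
}
"""

# Same pattern and keyword filter as in the function above.
names = re.findall(r'(\w+)\s*\{', sample)
keywords = ['type', 'physicalType', 'nFaces', 'startFace', 'FoamFile']
names = [n for n in names if n not in keywords]
# names == ['inlet', 'frontAndBack']
```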
+ def handle_custom_mesh(state, case_dir):
+     print("============================== Custom Mesh Processing ==============================")
+     custom_mesh_path = state.get("custom_mesh_path")
+     error_logs = []
+     if not custom_mesh_path:
+         error_logs.append("No custom mesh path provided in state")
+         print("Error: No custom mesh path provided")
+         return {
+             "mesh_info": None,
+             "mesh_commands": [],
+             "error_logs": error_logs
+         }
+     if not os.path.exists(custom_mesh_path):
+         error_logs.append(f"Custom mesh file does not exist: {custom_mesh_path}")
+         print(f"Error: Custom mesh file not found at {custom_mesh_path}")
+         return {
+             "mesh_info": None,
+             "mesh_commands": [],
+             "error_logs": error_logs
+         }
+     mesh_in_case_dir = os.path.join(case_dir, "geometry.msh")
+     try:
+         shutil.copy2(custom_mesh_path, mesh_in_case_dir)
+         print(f"Copied custom mesh from {custom_mesh_path} to {mesh_in_case_dir}")
+     except Exception as e:
+         error_logs.append(f"Failed to copy custom mesh file: {str(e)}")
+         print(f"Error: Failed to copy custom mesh file: {str(e)}")
+         return {
+             "mesh_info": None,
+             "mesh_commands": [],
+             "error_logs": error_logs
+         }
+     print(f"Using mesh file: {mesh_in_case_dir}")
+     try:
+         constant_dir = os.path.join(case_dir, "constant")
+         os.makedirs(constant_dir, exist_ok=True)
+         system_dir = os.path.join(case_dir, "system")
+         os.makedirs(system_dir, exist_ok=True)
+         controldict_prompt = (
+             f"<user_requirements>{state['user_requirement']}</user_requirements>\n"
+             "Please create a basic controlDict file for mesh conversion. "
+             "The file should include only the essential settings needed for gmshToFoam to work. "
+             "Use the application name as mentioned in the user requirements. "
+             "IMPORTANT: Return ONLY the complete controlDict file content without any additional text."
+         )
+         controldict_content = state["llm_service"].invoke(controldict_prompt, CONTROLDICT_SYSTEM_PROMPT).strip()
+         if controldict_content:
+             controldict_file = os.path.join(system_dir, "controlDict")
+             save_file(controldict_file, controldict_content)
+             print("Created basic controlDict file for mesh conversion")
+         result = subprocess.run(
+             ["gmshToFoam", "geometry.msh"],
+             cwd=case_dir,
+             check=True,
+             stdout=subprocess.PIPE,
+             stderr=subprocess.PIPE,
+             text=True
+         )
+         print(f"gmshToFoam command output: {result.stdout}")
+         polyMesh_dir = os.path.join(constant_dir, "polyMesh")
+         if not os.path.exists(polyMesh_dir):
+             error_logs.append("Mesh conversion failed: polyMesh directory not created")
+             return {
+                 "mesh_info": None,
+                 "mesh_commands": [],
+                 "error_logs": error_logs
+             }
+         boundary_file = os.path.join(polyMesh_dir, "boundary")
+         if os.path.exists(boundary_file):
+             with open(boundary_file, 'r') as f:
+                 boundary_content = f.read()
+             boundary_prompt = (
+                 f"<user_requirements>{state['user_requirement']}</user_requirements>\n"
+                 f"<boundary_file_content>{boundary_content}</boundary_file_content>\n"
+                 "Please analyze the user requirements and boundary file content. "
+                 "Identify which boundary is to be modified based on the boundaries mentioned in the user requirements. "
+                 "If this is a 2D simulation, modify ONLY the appropriate boundary to 'empty' type and 'empty' physicalType. "
+                 "Based on the no-slip boundaries mentioned in the user requirements, modify the appropriate boundary/boundaries to type 'wall' and physicalType 'wall'. "
+                 "If this is a 3D simulation, only modify the appropriate boundary/boundaries to type 'wall' and physicalType 'wall'. "
+                 "IMPORTANT: Do not change any other boundaries - leave them exactly as they are. "
+                 "Return ONLY the complete boundary file content with any necessary modifications. No additional text."
+             )
+             updated_boundary_content = state["llm_service"].invoke(boundary_prompt, BOUNDARY_SYSTEM_PROMPT).strip()
+             if updated_boundary_content:
+                 save_file(boundary_file, updated_boundary_content)
+                 print("Boundary file updated based on simulation requirements")
+         foam_file = os.path.join(case_dir, f"{os.path.basename(case_dir)}.foam")
+         with open(foam_file, 'w') as f:
+             pass
+         mesh_commands = [
+             "checkMesh",
+         ]
+         return {
+             "mesh_info": {
+                 "mesh_file_path": mesh_in_case_dir,
+                 "mesh_file_type": "gmsh",
+                 "mesh_description": "Custom mesh processed by preprocessor",
+                 "requires_blockmesh_removal": True
+             },
+             "mesh_commands": mesh_commands,
+             "custom_mesh_used": True,
+             "error_logs": error_logs
+         }
+     except subprocess.CalledProcessError as e:
+         error_message = f"Error in mesh conversion: {str(e)}"
+         if e.stdout:
+             error_message += f"\nSTDOUT:\n{e.stdout}"
+         if e.stderr:
+             error_message += f"\nSTDERR:\n{e.stderr}"
+         error_logs.append(error_message)
+         return {
+             "mesh_info": None,
+             "mesh_commands": [],
+             "error_logs": error_logs
+         }
+
+ def run_checkmesh_and_correct(state, case_dir, python_file, max_loop, current_loop, corrected_code, error_logs):
+     """
+     Run checkMesh command and handle mesh quality issues.
+
+     Args:
+         state: State object containing LLM service and configuration
+         case_dir: Case directory path
+         python_file: Path to the Python mesh generation file
+         max_loop: Maximum number of retry attempts
+
+     Returns:
+         tuple: (success: bool, should_continue: bool)
+     """
+     print("Running checkMesh to verify mesh quality...")
+
+     try:
+         # Run checkMesh command
+         result = subprocess.run(
+             ["checkMesh"],
+             cwd=case_dir,
+             check=True,
+             stdout=subprocess.PIPE,
+             stderr=subprocess.PIPE,
+             text=True
+         )
+
+         checkmesh_output = result.stdout
+         print("checkMesh completed successfully")
+         print(f"checkMesh output:\n{checkmesh_output}")
+
+         # Check if checkMesh reported any failures
+         if "Failed" in checkmesh_output and "mesh checks" in checkmesh_output:
+             # Extract the number of failed checks
+             failed_match = re.search(r"Failed (\d+) mesh checks", checkmesh_output)
+             if failed_match:
+                 failed_count = int(failed_match.group(1))
+                 print(f"checkMesh detected {failed_count} mesh quality issues")
+
+                 if current_loop < max_loop:
+                     print("Attempting to correct mesh generation based on checkMesh results...")
+
+                     # Read current Python code
+                     with open(python_file, 'r') as f:
+                         current_code = f.read()
+
+                     # Create error message for correction
+                     checkmesh_error = (
+                         f"checkMesh output:\n{checkmesh_output}\n"
+                         "Please analyze the checkMesh output and correct the mesh generation code. "
+                         "Common issues include: "
+                         "- Poor mesh quality (skewness, aspect ratio, etc.) "
+                         "- Geometry issues affecting mesh generation "
+                         "- Boundary layer problems "
+                         "- Boundary naming overlap or mismatch "
+                         "Provide corrected Python code that addresses the specific mesh quality issues identified by checkMesh."
+                     )
+
+                     # Use the existing correction function
+                     corrected_code = _correct_gmsh_python_code(
+                         state,
+                         current_code,
+                         checkmesh_error
+                     )
+
+                     if corrected_code:
+                         state['corrected_python_code'] = corrected_code
+                         print("Generated corrected Python code for next attempt (checkMesh issues)")
+                         return False, True  # Not successful, but should continue
+                     else:
+                         print("Failed to generate corrected code for checkMesh issues")
+                         return False, False  # Not successful, should not continue
+                 else:
+                     print(f"Failed to resolve checkMesh issues after {max_loop} attempts")
+                     return False, False  # Not successful, should not continue
+             else:
+                 print("checkMesh output contains 'Failed' but couldn't parse failure count")
+                 return False, False
+         else:
+             print("checkMesh passed - no mesh quality issues detected")
+             return True, False  # Successful, no need to continue
+
+     except subprocess.CalledProcessError as e:
+         error_message = f"Error running checkMesh: {str(e)}"
+         if e.stdout:
+             error_message += f"\nSTDOUT:\n{e.stdout}"
+         if e.stderr:
+             error_message += f"\nSTDERR:\n{e.stderr}"
+         print(error_message)
+         state["error_logs"].append(error_message)
+
+         if current_loop < max_loop:
+             print("Retrying mesh generation due to checkMesh error...")
+             return False, True  # Not successful, but should continue
+         else:
+             print(f"Failed to run checkMesh after {max_loop} attempts")
+             return False, False  # Not successful, should not continue
+
+     except Exception as e:
+         print(f"Unexpected error in checkMesh: {str(e)}")
+         state["error_logs"].append(f"Unexpected error in checkMesh: {str(e)}")
+         return False, False
+
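The failure-count parsing in `run_checkmesh_and_correct` hinges on one regex over checkMesh's summary line. Isolated, assuming output of the form `Failed 2 mesh checks.` (the helper name is illustrative):

```python
import re

def failed_check_count(checkmesh_output: str) -> int:
    """Number of failed checks reported in a checkMesh summary, 0 if none found."""
    m = re.search(r"Failed (\d+) mesh checks", checkmesh_output)
    return int(m.group(1)) if m else 0
```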
500
+ def handle_gmsh_mesh(state, case_dir):
501
+ """Handle GMSH mesh generation using gmsh python logic."""
502
+ print("============================== GMSH Mesh Generation ==============================")
503
+
504
+ # Ensure case_dir exists
505
+ case_dir = os.path.abspath(case_dir)
506
+ error_logs = []
507
+ if os.path.exists(case_dir):
508
+ print(f"Warning: Case directory {case_dir} already exists. Overwriting.")
509
+ shutil.rmtree(case_dir)
510
+ os.makedirs(case_dir)
511
+ print(f"Created case directory: {case_dir}")
512
+
513
+ # Define file paths
514
+ python_file = os.path.join(case_dir, "generate_mesh.py")
515
+ msh_file = os.path.join(case_dir, "geometry.msh")
516
+
517
+ # Extract expected boundary names from user requirements
518
+ expected_boundaries = extract_boundary_names_from_requirements(state, state["user_requirement"])
519
+ print(f"Expected boundaries from user requirements: {expected_boundaries}")
520
+
521
+ # Initialize loop counter if not present
522
+ gmsh_python_current_loop = 0
523
+
524
+ # Initialize missing boundaries tracking
525
+ missing_boundaries = []
526
+
527
+ # Initialize corrected code flag
528
+ corrected_python_code = None
529
+
530
+ max_loop = state['config'].max_loop
531
+ while gmsh_python_current_loop < max_loop:
532
+ gmsh_python_current_loop += 1
533
+ print(f"GMSH Python attempt {gmsh_python_current_loop} of {max_loop}")
534
+
535
+ # Determine if we should generate new code or use corrected code
536
+ should_generate_new_code = True
537
+ if corrected_python_code:
538
+ should_generate_new_code = False
539
+ python_code_to_use = corrected_python_code
540
+ print("Using corrected Python code from previous attempt")
541
+
542
+ try:
543
+ if should_generate_new_code:
544
+ # Generate Python code based on user requirements
545
+ missing_boundary_info = ""
546
+ if missing_boundaries:
547
+ missing_boundary_info = f"\n<missing_boundaries>Previous attempts were missing these boundaries: {missing_boundaries} in boundary file after performing gmshToFoam. Ensure these boundaries are properly defined in the GMSH physical groups.</missing_boundaries>"
548
+
549
+ python_prompt = (
550
+ f"<user_requirements>{state['user_requirement']}</user_requirements>{missing_boundary_info}\n"
551
+ "Please create Python code using the GMSH library to generate a mesh based on the user requirements. "
552
+ "Use boundary names specified in user requirements (e.g., 'inlet', 'outlet', 'wall', 'cylinder', etc.). "
553
+ "Return ONLY the complete Python code without any additional text."
554
+ )
555
+
556
+ # Generate Python code using LLM
557
+ python_response = state["llm_service"].invoke(python_prompt, GMSH_PYTHON_SYSTEM_PROMPT, pydantic_obj=GMSHPythonCode)
558
+
559
+ if not python_response.python_code:
560
+ print("Failed to generate GMSH Python code")
561
+ if gmsh_python_current_loop >= max_loop:
562
+ return {
563
+ "mesh_info": None,
564
+ "mesh_commands": [],
565
+ "mesh_file_destination": None,
566
+ "error_logs": error_logs
567
+ }
568
+ continue
569
+
570
+ python_code_to_use = python_response.python_code
571
+ mesh_type = python_response.mesh_type
572
+ geometry_type = python_response.geometry_type
573
+ else:
574
+ # Use corrected code from previous attempt
575
+ mesh_type = "3D" # Default for corrected code
576
+ geometry_type = "corrected" # Indicate this is corrected code
577
+
578
+ # Save the Python file
579
+ save_file(python_file, python_code_to_use)
580
+ print(f"Created GMSH Python file: {python_file}")
581
+
582
+ # Clear the corrected code flag since we're using it
583
+ corrected_python_code = None
584
+
585
+ # Run the Python code to generate the mesh
586
+ print("Running GMSH Python code to generate mesh...")
587
+
588
+ # Use Popen to get real-time output while still capturing stderr for error correction
589
+ process = subprocess.Popen(
590
+ ["python", python_file],
591
+ cwd=case_dir,
592
+ stdout=subprocess.PIPE,
593
+ stderr=subprocess.PIPE,
594
+ text=True,
595
+ bufsize=1  # line-buffered; text=True already implies universal newlines
597
+ )
598
+
599
+ # Stream stdout in real-time
600
+ while True:
601
+ output = process.stdout.readline()
602
+ if output == '' and process.poll() is not None:
603
+ break
604
+ if output:
605
+ print(output.strip())
606
+
607
+ # Wait for process to complete and get return code
608
+ return_code = process.wait()
609
+
610
+ # Get stderr for potential error correction
611
+ stderr_output = process.stderr.read()
612
+
613
+ if return_code != 0:
614
+ raise subprocess.CalledProcessError(return_code, process.args, stderr=stderr_output)
615
+
616
+ print("GMSH Python mesh generation completed successfully")
617
+
618
+ # Verify the mesh file was created
619
+ if not os.path.exists(msh_file):
620
+ print("Error: Mesh file was not created by GMSH Python")
621
+ if stderr_output:
622
+ print(f"GMSH Python error output: {stderr_output}")
623
+ # Try to correct the Python code based on the error
624
+ if gmsh_python_current_loop < max_loop:
625
+ print("Attempting to correct Python code based on error...")
626
+ corrected_code = _correct_gmsh_python_code(
627
+ state,
628
+ python_code_to_use,
629
+ stderr_output
630
+ )
631
+ if corrected_code:
632
+ corrected_python_code = corrected_code
633
+ print("Generated corrected Python code for next attempt")
634
+ continue
635
+ if gmsh_python_current_loop >= max_loop:
636
+ return {
637
+ "mesh_info": None,
638
+ "mesh_commands": [],
639
+ "mesh_file_destination": None,
640
+ "error_logs": error_logs
641
+ }
642
+ continue
643
+
644
+ print(f"Successfully created mesh file: {msh_file}")
645
+
646
+ # ========== INTEGRATED PREPROCESSOR OPERATIONS ==========
647
+ print("Starting OpenFOAM conversion and boundary checking...")
648
+
649
+ try:
650
+ # Create constant and system directories
651
+ constant_dir = os.path.join(case_dir, "constant")
652
+ system_dir = os.path.join(case_dir, "system")
653
+ os.makedirs(constant_dir, exist_ok=True)
654
+ os.makedirs(system_dir, exist_ok=True)
655
+
656
+ # Create basic controlDict file
657
+ controldict_prompt = (
658
+ f"<user_requirements>{state['user_requirement']}</user_requirements>\n"
659
+ "Please create a basic controlDict file for mesh conversion. "
660
+ "The file should include only the essential settings needed for gmshToFoam to work. "
661
+ "IMPORTANT: Return ONLY the complete controlDict file content without any additional text."
662
+ )
663
+
664
+ controldict_content = state["llm_service"].invoke(controldict_prompt, CONTROLDICT_SYSTEM_PROMPT).strip()
665
+
666
+ if controldict_content:
667
+ controldict_file = os.path.join(system_dir, "controlDict")
668
+ save_file(controldict_file, controldict_content)
669
+ print("Created basic controlDict file for mesh conversion")
670
+
671
+ # Run gmshToFoam command
672
+ print("Running gmshToFoam conversion...")
673
+ result = subprocess.run(
674
+ ["gmshToFoam", "geometry.msh"],
675
+ cwd=case_dir,
676
+ check=True,
677
+ stdout=subprocess.PIPE,
678
+ stderr=subprocess.PIPE,
679
+ text=True
680
+ )
681
+ print(f"gmshToFoam command output: {result.stdout}")
682
+
683
+ # Check if the mesh was converted successfully
684
+ polyMesh_dir = os.path.join(constant_dir, "polyMesh")
685
+ if not os.path.exists(polyMesh_dir):
686
+ raise subprocess.CalledProcessError(1, "gmshToFoam", "polyMesh directory not created")
687
+
688
+ # Check boundary file for boundaries
689
+ boundary_file = os.path.join(polyMesh_dir, "boundary")
690
+ if os.path.exists(boundary_file):
691
+ all_present, missing_boundaries, found_boundaries = check_boundary_file_for_missing_boundaries(
692
+ boundary_file, expected_boundaries
693
+ )
694
+
695
+ print(f"Found boundaries in OpenFOAM: {found_boundaries}")
696
+ print(f"Expected boundaries: {expected_boundaries}")
697
+
698
+ # Simple: If sets don't match, ask LLM to fix
699
+ if set(found_boundaries) != set(expected_boundaries):
700
+ print(f"Boundary mismatch detected. Found: {found_boundaries}, Expected: {expected_boundaries}")
701
+ # Store for LLM context
702
+ state['found_boundaries'] = found_boundaries
703
+ state['expected_boundaries'] = expected_boundaries
704
+ # Read current generate_mesh.py code
705
+ with open(python_file, 'r') as f:
706
+ current_code = f.read()
707
+ if gmsh_python_current_loop < max_loop:
708
+ print("Retrying mesh generation due to boundary mismatch...")
709
+ boundary_error = (
710
+ f"Boundary mismatch after gmshToFoam. "
711
+ f"Found boundaries: {found_boundaries}. "
712
+ f"Expected boundaries: {expected_boundaries}. "
713
+ "Please correct the mesh code so that the boundaries in the OpenFOAM boundary file match the expected boundaries exactly."
714
+ )
715
+ error_logs.append(boundary_error)
716
+ corrected_code = _correct_gmsh_python_code(
717
+ state,
718
+ current_code,
719
+ boundary_error
720
+ )
721
+ if corrected_code:
722
+ corrected_python_code = corrected_code
723
+ print("Generated corrected Python code for next attempt (boundary mismatch)")
724
+ continue
725
+ else:
726
+ print(f"Failed to generate correct boundaries after {max_loop} attempts")
727
+ return {
728
+ "mesh_info": None,
729
+ "mesh_commands": [],
730
+ "mesh_file_destination": None,
731
+ "error_logs": error_logs
732
+ }
733
+ else:
734
+ print("All boundaries match expected in OpenFOAM boundary file")
735
+ # Clear any previous boundary issues
736
+ if 'found_boundaries' in state:
737
+ del state['found_boundaries']
738
+ if 'expected_boundaries' in state:
739
+ del state['expected_boundaries']
740
+ missing_boundaries = []
741
+
742
+ # Run checkMesh to verify mesh quality BEFORE boundary file update
743
+ checkmesh_success, should_continue = run_checkmesh_and_correct(
744
+ state, case_dir, python_file, max_loop, gmsh_python_current_loop, corrected_python_code, error_logs
745
+ )
746
+
747
+ if not checkmesh_success:
748
+ if should_continue:
749
+ continue # Continue to next iteration with corrected code
750
+ else:
751
+ print(f"Failed to resolve checkMesh issues after {max_loop} attempts")
752
+ return {
753
+ "mesh_info": None,
754
+ "mesh_commands": [],
755
+ "mesh_file_destination": None,
756
+ "error_logs": error_logs
757
+ }
758
+
759
+ # Handle boundary conditions based on user requirements
760
+ with open(boundary_file, 'r') as f:
761
+ boundary_content = f.read()
762
+
763
+ boundary_prompt = (
764
+ f"<user_requirements>{state['user_requirement']}</user_requirements>\n"
765
+ f"<boundary_file_content>{boundary_content}</boundary_file_content>\n"
766
+ "Please analyze the user requirements and boundary file content. "
767
+ "Identify which boundary is to be modified based on the boundaries mentioned in the user requirements. "
768
+ "If this is a 2D simulation, modify ONLY the appropriate boundary to 'empty' type and 'empty' physicalType. "
769
+ "Based on the no slip boundaries mentioned in the user requirements, modify the appropriate boundary/boundaries to type 'wall' and physicalType 'wall'. "
770
+ "If this is a 3D simulation, only modify the appropriate boundary/boundaries to type 'wall' and physicalType 'wall'. "
771
+ "IMPORTANT: Do not change any other boundaries - leave them exactly as they are. "
772
+ "Return ONLY the complete boundary file content with any necessary modifications. No additional text."
773
+ )
774
+
775
+ updated_boundary_content = state["llm_service"].invoke(boundary_prompt, BOUNDARY_SYSTEM_PROMPT).strip()
776
+
777
+ if updated_boundary_content:
778
+ save_file(boundary_file, updated_boundary_content)
779
+ print("Boundary file updated based on simulation requirements")
780
+
781
+ # Create .foam file
782
+ foam_file = os.path.join(case_dir, f"{os.path.basename(case_dir)}.foam")
783
+ with open(foam_file, 'w') as f:
784
+ pass
785
+
786
+ print("OpenFOAM conversion, boundary setup, and mesh quality check completed successfully")
787
+
788
+ # Generate mesh commands for the InputWriter node
789
+ mesh_commands = [
+     # Check mesh quality
+     # Renumber mesh for better performance
+ ]
791
+
792
+ # Update state with mesh information
793
+ return {
794
+ "mesh_info": {
795
+ "mesh_file_path": msh_file,
796
+ "mesh_file_type": "gmsh",
797
+ "mesh_description": f"GMSH generated {geometry_type} mesh",
798
+ "requires_blockmesh_removal": True
799
+ },
800
+ "mesh_commands": mesh_commands,
801
+ "mesh_file_destination": msh_file,
802
+ "custom_mesh_used": True,
803
+ "error_logs": error_logs
804
+ }
805
+
806
+ except subprocess.CalledProcessError as e:
807
+ error_message = f"Error in OpenFOAM conversion: {str(e)}"
808
+ if e.stdout:
809
+ error_message += f"\nSTDOUT:\n{e.stdout}"
810
+ if e.stderr:
811
+ error_message += f"\nSTDERR:\n{e.stderr}"
812
+ print(error_message)
813
+ error_logs.append(error_message)
814
+
815
+ # Retry mesh generation
816
+ if gmsh_python_current_loop < max_loop:
817
+ print("Retrying mesh generation due to OpenFOAM conversion error...")
818
+ continue
819
+ else:
820
+ print(f"Failed to convert mesh to OpenFOAM format after {max_loop} attempts")
821
+ return {
822
+ "mesh_info": None,
823
+ "mesh_commands": [],
824
+ "mesh_file_destination": None,
825
+ "error_logs": error_logs
826
+ }
827
+
828
+ except subprocess.CalledProcessError as e:
829
+ error_message = f"Error in GMSH Python mesh generation (attempt {gmsh_python_current_loop}): {str(e)}"
830
+ if e.stdout:
831
+ error_message += f"\nSTDOUT:\n{e.stdout}"
832
+ if e.stderr:
833
+ error_message += f"\nSTDERR:\n{e.stderr}"
834
+ print(error_message)
835
+ error_logs.append(error_message)
836
+
837
+ # Try to correct the Python code based on the error
838
+ if gmsh_python_current_loop < max_loop:
839
+ print("Attempting to correct Python code based on error...")
840
+ try:
841
+ # Read the current Python file
842
+ with open(python_file, 'r') as f:
843
+ current_code = f.read()
844
+
845
+ corrected_code = _correct_gmsh_python_code(
846
+ state,
847
+ current_code,
848
+ e.stderr
849
+ )
850
+ if corrected_code:
851
+ corrected_python_code = corrected_code
852
+ print("Generated corrected Python code for next attempt")
853
+ continue
854
+ except Exception as correction_error:
855
+ print(f"Error during Python code correction: {correction_error}")
856
+
857
+ if gmsh_python_current_loop >= max_loop:
858
+ print(f"Failed to generate mesh after {max_loop} attempts")
859
+ return {
860
+ "mesh_info": None,
861
+ "mesh_commands": [],
862
+ "mesh_file_destination": None,
863
+ "error_logs": error_logs
864
+ }
865
+
866
+ except Exception as e:
867
+ print(f"Error in GMSH Python node: {str(e)}")
868
+ if gmsh_python_current_loop >= max_loop:
869
+ return {
870
+ "mesh_info": None,
871
+ "mesh_commands": [],
872
+ "mesh_file_destination": None,
873
+ "error_logs": error_logs
874
+ }
875
+ continue
876
+
877
+ return {
878
+ "mesh_info": None,
879
+ "mesh_commands": [],
880
+ "mesh_file_destination": None,
881
+ "error_logs": error_logs
882
+ }
883
+
884
+ def handle_standard_mesh(state, case_dir):
885
+ """Handle standard OpenFOAM mesh generation."""
886
+ print("============================== Standard Mesh Generation ==============================")
887
+ print("Using standard OpenFOAM mesh generation (blockMesh, snappyHexMesh, etc.)")
888
+ return {
889
+ "mesh_info": None,
890
+ "mesh_commands": [],
891
+ "mesh_file_destination": None,
892
+ "custom_mesh_used": False,
893
+ "error_logs": []
894
+ }
895
+
896
+ def meshing_node(state):
897
+ """
898
+ Meshing node: Handle different mesh scenarios based on user requirements.
899
+
900
+ Three scenarios:
901
+ 1. Custom mesh: User provides existing mesh file (uses preprocessor logic)
902
+ 2. GMSH mesh: User wants mesh generated using GMSH (uses gmsh python logic)
903
+ 3. Standard mesh: User wants standard OpenFOAM mesh generation (returns None)
904
+
905
+ Updates state with:
906
+ - mesh_info: Information about the custom mesh
907
+ - mesh_commands: Commands needed for mesh processing
908
+ - mesh_file_destination: Where the mesh file should be placed
909
+ """
910
+ config = state["config"]
911
+ user_requirement = state["user_requirement"]
912
+ case_dir = state["case_dir"]
913
+
914
+ # Get mesh type from state (determined by router)
915
+ mesh_type = state.get("mesh_type", "standard_mesh")
916
+
917
+ # Handle mesh based on type determined by router
918
+ if mesh_type == "custom_mesh":
919
+ print("Router determined: Custom mesh requested.")
920
+ return handle_custom_mesh(state, case_dir)
921
+ elif mesh_type == "gmsh_mesh":
922
+ print("Router determined: GMSH mesh requested.")
923
+ return handle_gmsh_mesh(state, case_dir)
924
+ else:
925
+ print("Router determined: Standard mesh generation.")
926
+ return handle_standard_mesh(state, case_dir)
Foam-Agent/source/src/nodes/reviewer_node.py ADDED
@@ -0,0 +1,75 @@
1
+ # reviewer_node.py
2
+ from pydantic import BaseModel, Field
3
+ from typing import List
4
+
5
+ REVIEWER_SYSTEM_PROMPT = (
6
+ "You are an expert in OpenFOAM simulation and numerical modeling. "
7
+ "Your task is to review the provided error logs and diagnose the underlying issues. "
8
+ "You will be provided with a similar case reference, which is a list of similar cases ordered by similarity. You can use this reference to help you understand the user requirement and the error. "
9
+ "When an error indicates that a specific keyword is undefined (for example, 'div(phi,(p|rho)) is undefined'), your response must propose a solution that simply defines that exact keyword as shown in the error log. "
10
+ "Do not reinterpret or modify the keyword (e.g., do not treat '|' as 'or'); instead, assume it is meant to be taken literally. "
11
+ "Propose ideas on how to resolve the errors, but do not modify any files directly. "
12
+ "Please do not propose solutions that require modifying any parameters declared in the user requirement; try other approaches instead. Do not ask the user any questions. "
13
+ "The user will supply all relevant foam files along with the error logs, and within the logs, you will find both the error content and the corresponding error command indicated by the log file name."
14
+ )
15
+
16
+ def reviewer_node(state):
17
+ """
18
+ Reviewer node: Reviews the error logs and provides analysis and suggestions
19
+ for fixing the errors. This node only focuses on analysis, not file modification.
20
+ """
21
+ print(f"============================== Reviewer Analysis ==============================")
22
+ if len(state["error_logs"]) == 0:
23
+ print("No error to review.")
24
+ return state
25
+
26
+ # Analysis the reason and give the method to fix the error.
27
+ if state.get("history_text") and state["history_text"]:
28
+ reviewer_user_prompt = (
29
+ f"<similar_case_reference>{state['tutorial_reference']}</similar_case_reference>\n"
30
+ f"<foamfiles>{str(state['foamfiles'])}</foamfiles>\n"
31
+ f"<current_error_logs>{state['error_logs']}</current_error_logs>\n"
32
+ f"<history>\n"
33
+ f"{chr(10).join(state['history_text'])}\n"
34
+ f"</history>\n\n"
35
+ f"<user_requirement>{state['user_requirement']}</user_requirement>\n\n"
36
+ f"I have modified the files according to your previous suggestions. If the error persists, please provide further guidance. Make sure your suggestions adhere to user requirements and do not contradict it. Also, please consider the previous attempts and try a different approach."
37
+ )
38
+ else:
39
+ reviewer_user_prompt = (
40
+ f"<similar_case_reference>{state['tutorial_reference']}</similar_case_reference>\n"
41
+ f"<foamfiles>{str(state['foamfiles'])}</foamfiles>\n"
42
+ f"<error_logs>{state['error_logs']}</error_logs>\n"
43
+ f"<user_requirement>{state['user_requirement']}</user_requirement>\n"
44
+ "Please review the error logs and provide guidance on how to resolve the reported errors. Make sure your suggestions adhere to user requirements and do not contradict it."
45
+ )
46
+
47
+ review_response = state["llm_service"].invoke(reviewer_user_prompt, REVIEWER_SYSTEM_PROMPT)
48
+ review_content = review_response
49
+
50
+ # Initialize history_text if it doesn't exist
51
+ if not state.get("history_text"):
52
+ history_text = []
53
+ else:
54
+ history_text = state["history_text"]
55
+
56
+ # Add current attempt to history
57
+ current_attempt = [
58
+ f"<Attempt {len(history_text)//3 + 1}>\n"  # three history entries are appended per attempt
59
+ f"<Error_Logs>\n{state['error_logs']}\n</Error_Logs>",
60
+ f"<Review_Analysis>\n{review_content}\n</Review_Analysis>",
61
+ f"</Attempt>\n" # Closing tag for Attempt with empty line
62
+ ]
63
+ history_text.extend(current_attempt)
64
+
65
+ print(review_content)
66
+
69
+ # Return updated state with review analysis
70
+ return {
71
+ "history_text": history_text,
72
+ "review_analysis": review_content,
73
+ "loop_count": state.get("loop_count", 0) + 1,
74
+ "input_writer_mode": "rewrite",
75
+ }
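Because each reviewer pass appends exactly three entries to `history_text` (the error logs, the review analysis, and the closing tag), the attempt number can be recovered from the list length alone. A small sketch of that bookkeeping (function name hypothetical):

```python
def attempt_number(history_text):
    # Each reviewer attempt contributes three entries to the history:
    # the error-log block, the review analysis, and the closing </Attempt> tag.
    return len(history_text) // 3 + 1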
Foam-Agent/source/src/nodes/visualization_node.py ADDED
@@ -0,0 +1,300 @@
1
+ # visualization_node.py
2
+ import os
3
+ import subprocess
4
+ import sys
5
+ from typing import List, Optional
6
+ from pydantic import BaseModel, Field
7
+ from utils import save_file
8
+ import glob
9
+
10
+ # Helper to get the .foam file name
11
+ def get_foam_file(case_dir):
12
+ case_dir_name = os.path.basename(os.path.normpath(case_dir))
13
+ return f"{case_dir_name}.foam"
14
+
15
+ VISUALIZATION_SYSTEM_PROMPT = (
16
+ "You are an expert in OpenFOAM post-processing and PyVista Python scripting. "
17
+ "Your task is to generate a PyVista Python script that visualizes the specified data from the OpenFOAM case. "
18
+ "The script should load the OpenFOAM case data by reading the .foam file (e.g., 'runs.foam') in the case directory using PyVista, display the geometry, and color the surface by the specified field (e.g., 'U' for velocity). "
19
+ "Ensure the script shows the geometry, sets up the colorbar, and saves the visualization as a PNG file. "
20
+ "Use coolwarm colormap by default."
21
+ "The script must save the visualization as a PNG file, and the output image must contain the geometry and the colorbar, not just the colorbar. "
22
+ "IMPORTANT: Return ONLY the Python code without any markdown formatting, code block markers, or explanatory text. "
23
+ "The script should start with the necessary imports, read the .foam file using PyVista, and end with the screenshot saving."
24
+ )
25
+
26
+ ERROR_FIX_SYSTEM_PROMPT = (
27
+ "You are an expert in PyVista Python scripting and OpenFOAM visualization. "
28
+ "Your task is to fix the provided PyVista Python script that encountered an error. "
29
+ "Ensure the script loads the OpenFOAM case data by reading the .foam file (e.g., 'runs.foam') in the case directory using PyVista, displays the geometry, and colors the surface by the specified field. "
30
+ "Make sure the script shows the geometry, sets up the colorbar, and saves the visualization as a PNG file. "
31
+ "Use coolwarm colormap by default."
32
+ "The script must save the visualization as a PNG file, and the output image must contain the geometry and the colorbar, not just the colorbar. "
33
+ "IMPORTANT: Return ONLY the Python code without any markdown formatting, code block markers, or explanatory text. "
34
+ "The script should start with the necessary imports, read the .foam file using PyVista, and end with the screenshot saving."
35
+ )
36
+
37
+ class PlotConfigPydantic(BaseModel):
38
+ """Configuration for plotting parameters"""
39
+ plot_type: str = Field(description="Type of plot (e.g., 'contour', 'vector', 'streamline', 'time_series')")
40
+ field_name: str = Field(description="Field to plot (e.g., 'U', 'p', 'T', 'rho')")
41
+ time_step: Optional[str] = Field(default=None, description="Time step to plot (if None, use latest)")
42
+ output_format: str = Field(default="png", description="Output format for plots")
43
+ output_path: str = Field(description="Path to save the plot")
44
+
45
+ class VisualizationPlanPydantic(BaseModel):
46
+ """Plan for visualization tasks"""
47
+ plots: List[PlotConfigPydantic] = Field(description="List of plots to generate")
48
+
49
+ class VisualizationAnalysisPydantic(BaseModel):
50
+ """Analysis of user requirements for visualization needs"""
51
+ primary_field: str = Field(description="Primary field to visualize (e.g., 'U', 'p', 'T', 'rho')")
52
+ plot_type: str = Field(description="Type of plot requested (e.g., 'contour', 'vector', 'streamline', 'time_series')")
53
+ time_step: Optional[str] = Field(default=None, description="Specific time step to plot (if mentioned)")
54
+ plane_info: Optional[str] = Field(default=None, description="Plane information if 2D slice is requested (e.g., 'Z plane', 'X=0.5')")
55
+ additional_fields: List[str] = Field(default_factory=list, description="Additional fields that might be useful to visualize")
56
+ visualization_priority: str = Field(description="Priority of visualization (e.g., 'high', 'medium', 'low')")
57
+
58
+ def visualization_node(state):
59
+ """
60
+ Visualization node: Creates PyVista visualizations from the successfully generated OpenFOAM case.
61
+ This node uses the successfully generated code and user_requirement to create PyVista visualizations.
62
+
63
+ Updates state with:
64
+ - plot_configs: List of plot configurations
65
+ - plot_outputs: List of generated plot file paths
66
+ - visualization_summary: Summary of generated visualizations
67
+ - pyvista_visualization: PyVista visualization results
68
+ """
69
+ config = state["config"]
70
+ user_requirement = state["user_requirement"]
71
+ case_dir = state["case_dir"]
72
+
73
+ print(f"============================== Visualization (PyVista) ==============================")
74
+
75
+ # Ensure case_dir is absolute
76
+ case_dir = os.path.abspath(case_dir)
77
+ if not os.path.exists(case_dir):
78
+ print(f"Case directory does not exist: {case_dir}")
79
+ return {
80
+ **state,
81
+ "plot_configs": [],
82
+ "plot_outputs": [],
83
+ "visualization_summary": {"error": f"Case directory does not exist: {case_dir}"},
84
+ "pyvista_visualization": {"success": False, "error": f"Case directory does not exist: {case_dir}"}
85
+ }
86
+
87
+ # Touch the .foam file before generating the visualization script
88
+ foam_file = get_foam_file(case_dir)
89
+ foam_file_path = os.path.join(case_dir, foam_file)
90
+ with open(foam_file_path, 'a'):
91
+ os.utime(foam_file_path, None)
92
+
93
+ # Initialize loop counter
94
+ current_loop = 0
95
+ error_logs = []
96
+ max_loop = state['config'].max_loop
97
+
98
+ while current_loop < max_loop:
99
+ current_loop += 1
100
+ print(f"Attempt {current_loop} of {max_loop}")
101
+
102
+ # Create visualization script
103
+ viz_prompt = (
104
+ f"<case_directory>{case_dir}</case_directory>\n"
105
+ f"<foam_file>{foam_file}</foam_file>\n"
106
+ f"<visualization_requirements>{state['user_requirement']}</visualization_requirements>\n"
107
+ f"<previous_errors>{error_logs}</previous_errors>\n"
108
+ f"Please create a PyVista Python script that visualizes the specified data by reading the .foam file ('{foam_file}')."
109
+ "Save the visualization as PNG file named visualization.png if not specified otherwise in the user requirement."
110
+ )
111
+
112
+ viz_script = state["llm_service"].invoke(viz_prompt, VISUALIZATION_SYSTEM_PROMPT)
113
+
114
+ # Save the visualization script
115
+ script_path = os.path.join(case_dir, "visualization.py")
116
+ save_file(script_path, viz_script)
117
+
118
+ # Execute the script using Python
119
+ try:
120
+ result = subprocess.run(
121
+ [sys.executable, script_path],
122
+ cwd=case_dir,
123
+ check=True,
124
+ stdout=subprocess.PIPE,
125
+ stderr=subprocess.PIPE,
126
+ )
127
+ print(f"Finished command: Return Code {result.returncode}")
128
+ error_logs = []
129
+
130
+ # Check if any PNG output image was created
131
+ png_files = glob.glob(os.path.join(case_dir, "*.png"))
132
+ if png_files:
133
+ output_image = png_files[0] # Use the first PNG file found
134
+ print(f"PyVista visualization created successfully: {output_image}")
135
+
136
+ # Create plot configs and outputs in the expected format
137
+ plot_configs = [
138
+ {
139
+ "plot_type": "pyvista_2d",
140
+ "field_name": "U", # Default field, could be enhanced to detect from script
141
+ "time_step": "latest",
142
+ "output_format": "png",
143
+ "output_path": output_image
144
+ }
145
+ ]
146
+
147
+ plot_outputs = [output_image]
148
+
149
+ visualization_summary = {
150
+ "total_plots_generated": len(plot_outputs),
151
+ "plot_types": ["pyvista_2d"],
152
+ "fields_visualized": ["U"],
153
+ "output_directory": case_dir,
154
+ "pyvista_success": True
155
+ }
156
+
157
+ pyvista_result = {
158
+ "success": True,
159
+ "output_image": output_image,
160
+ "script": viz_script
161
+ }
162
+
163
+ print(f"Generated {len(plot_outputs)} plots")
164
+ print(f"PyVista visualization saved to: {output_image}")
165
+ print("============================== Visualization Complete ==============================")
166
+
167
+ return {
168
+ **state,
169
+ "plot_configs": plot_configs,
170
+ "plot_outputs": plot_outputs,
171
+ "visualization_summary": visualization_summary,
172
+ "pyvista_visualization": pyvista_result
173
+ }
174
+ else:
175
+ error_logs.append("Visualization script executed but no PNG output image was created")
176
+
177
+ except subprocess.CalledProcessError as e:
178
+ error_message = f"Error executing visualization script: {str(e)}"
179
+ if e.stdout:
180
+ error_message += f"\nSTDOUT:\n{e.stdout.decode() if isinstance(e.stdout, bytes) else e.stdout}"
181
+ if e.stderr:
182
+ error_message += f"\nSTDERR:\n{e.stderr.decode() if isinstance(e.stderr, bytes) else e.stderr}"
183
+ error_logs.append(error_message)
184
+
185
+ # If we have errors and haven't reached max loops, try to fix them
186
+ if error_logs and current_loop < max_loop:
187
+ error_fix_prompt = (
188
+ f"<error_logs>{error_logs}</error_logs>\n"
189
+ f"<foam_file>{foam_file}</foam_file>\n"
190
+ f"<original_script>{viz_script}</original_script>\n"
191
+ f"<attempt_number>{current_loop}</attempt_number>\n"
192
+ f"Please fix the PyVista Python script based on the error messages. The script should read the .foam file ('{foam_file}') in the case directory."
193
+ )
194
+
195
+ fixed_script = state["llm_service"].invoke(error_fix_prompt, ERROR_FIX_SYSTEM_PROMPT)
196
+
197
+ # Save the fixed script
198
+ save_file(script_path, fixed_script)
199
+
200
+ # Try executing the fixed script
201
+ try:
202
+ result = subprocess.run(
203
+ [sys.executable, script_path],
204
+ cwd=case_dir,
205
+ check=True,
206
+ stdout=subprocess.PIPE,
207
+ stderr=subprocess.PIPE,
208
+ )
209
+ print(f"Finished command: Return Code {result.returncode}")
210
+ error_logs = []
211
+
212
+ # Check if any PNG output image was created
213
+ png_files = glob.glob(os.path.join(case_dir, "*.png"))
214
+ if png_files:
215
+ output_image = png_files[0] # Use the first PNG file found
216
+ print(f"PyVista visualization created successfully: {output_image}")
217
+
218
+ # Create plot configs and outputs in the expected format
219
+ plot_configs = [
220
+ {
221
+ "plot_type": "pyvista_3d",
222
+ "field_name": "U", # Default field, could be enhanced to detect from script
223
+ "time_step": "latest",
224
+ "output_format": "png",
225
+ "output_path": output_image
226
+ }
227
+ ]
228
+
229
+ plot_outputs = [output_image]
230
+
231
+ visualization_summary = {
232
+ "total_plots_generated": len(plot_outputs),
233
+ "plot_types": ["pyvista_3d"],
234
+ "fields_visualized": ["U"],
235
+ "output_directory": case_dir,
236
+ "pyvista_success": True
237
+ }
238
+
239
+ pyvista_result = {
240
+ "success": True,
241
+ "output_image": output_image,
242
+ "script": fixed_script
243
+ }
244
+
245
+ print(f"Generated {len(plot_outputs)} plots")
246
+ print(f"PyVista visualization saved to: {output_image}")
247
+ print("============================== Visualization Complete ==============================")
248
+
249
+ return {
250
+ **state,
251
+ "plot_configs": plot_configs,
252
+ "plot_outputs": plot_outputs,
253
+ "visualization_summary": visualization_summary,
254
+ "pyvista_visualization": pyvista_result
255
+ }
256
+ else:
257
+ error_logs.append("Fixed visualization script executed but no PNG output image was created")
258
+
259
+ except subprocess.CalledProcessError as e:
260
+ error_message = f"Error executing fixed visualization script: {str(e)}"
261
+ if e.stdout:
262
+ error_message += f"\nSTDOUT:\n{e.stdout.decode() if isinstance(e.stdout, bytes) else e.stdout}"
263
+ if e.stderr:
264
+ error_message += f"\nSTDERR:\n{e.stderr.decode() if isinstance(e.stderr, bytes) else e.stderr}"
265
+ error_logs.append(error_message)
266
+
267
+ # If we've exhausted all attempts
268
+ if current_loop >= max_loop:
269
+ print(f"Failed to create visualization after {max_loop} attempts")
270
+ error_message = f"Maximum number of attempts ({max_loop}) reached without success"
271
+ error_logs.append(error_message)
272
+
273
+ # Return failure state in the expected format
274
+ plot_configs = []
275
+ plot_outputs = []
276
+
277
+ visualization_summary = {
278
+ "total_plots_generated": 0,
279
+ "plot_types": [],
280
+ "fields_visualized": [],
281
+ "output_directory": case_dir,
282
+ "pyvista_success": False,
283
+ "error": error_message if 'error_message' in locals() else "Unknown error"
284
+ }
285
+
286
+ pyvista_result = {
287
+ "success": False,
288
+ "error": error_message if 'error_message' in locals() else "Unknown error",
289
+ "error_logs": error_logs
290
+ }
291
+
292
+ print("============================== Visualization Failed ==============================")
293
+
294
+ return {
295
+ **state,
296
+ "plot_configs": plot_configs,
297
+ "plot_outputs": plot_outputs,
298
+ "visualization_summary": visualization_summary,
299
+ "pyvista_visualization": pyvista_result
300
+ }
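The `.foam` file that PyVista opens is named purely after the case directory's basename, as `get_foam_file` derives it. A quick standalone check of that convention (helper name hypothetical, mirroring the source logic):

```python
import os

def foam_file_name(case_dir):
    # Mirrors get_foam_file: normpath strips trailing slashes before taking
    # the basename, so "/tmp/runs/" still yields "runs.foam".
    return f"{os.path.basename(os.path.normpath(case_dir))}.foam"
```

Touching this empty `.foam` file is enough for PyVista's OpenFOAM reader to locate the case.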
Foam-Agent/source/src/router_func.py ADDED
@@ -0,0 +1,158 @@
1
+ from utils import GraphState
2
+ from langgraph.graph import END
6
+
7
+
8
+ def llm_requires_custom_mesh(state: GraphState) -> int:
9
+ """
10
+ Use LLM to determine if user requires custom mesh based on their requirement.
11
+
12
+ Args:
13
+ state: Current graph state containing user requirement and LLM service
14
+
15
+ Returns:
16
+ int: 1 if custom mesh is required, 2 if gmsh mesh is required, 0 otherwise
17
+ """
18
+ user_requirement = state["user_requirement"]
19
+
20
+ system_prompt = (
21
+ "You are an expert in OpenFOAM workflow analysis. "
22
+ "Analyze the user requirement to determine if they want to use a custom mesh file. "
23
+ "Look for keywords like: custom mesh, mesh file, .msh, .stl, .obj, gmsh, snappyHexMesh, "
24
+ "or any mention of importing/using external mesh files. "
25
+ "If the user explicitly mentions or implies they want to use a custom mesh file, return 'custom_mesh'. "
26
+ "If they want to use standard OpenFOAM mesh generation (blockMesh, snappyHexMesh with STL, etc.), return 'standard_mesh'. "
27
+ "Look for keywords like gmsh and determine if they want to create mesh using gmsh. If they want to create mesh using gmsh, return 'gmsh_mesh'. "
28
+ "Be conservative - if unsure, assume 'standard_mesh' unless clearly specified otherwise. "
29
+ "Only return 'custom_mesh' or 'standard_mesh' or 'gmsh_mesh'. Don't return anything else."
30
+ )
31
+
32
+ user_prompt = (
33
+ f"User requirement: {user_requirement}\n\n"
34
+ "Determine if the user wants to use a custom mesh file. "
35
+ "Return exactly 'custom_mesh' if they want to use a custom mesh file, "
36
+ "'standard_mesh' if they want standard OpenFOAM mesh generation or 'gmsh_mesh' if they want to create mesh using gmsh."
37
+ )
38
+
39
+ response = state["llm_service"].invoke(user_prompt, system_prompt)
40
+ if "custom_mesh" in response.lower():
41
+ return 1
42
+ elif "gmsh_mesh" in response.lower():
43
+ return 2
44
+ else:
45
+ return 0
46
+
47
+
48
+ def llm_requires_hpc(state: GraphState) -> bool:
49
+ """
50
+ Use LLM to determine if user requires HPC/cluster execution based on their requirement.
51
+
52
+ Args:
53
+ state: Current graph state containing user requirement and LLM service
54
+
55
+ Returns:
56
+ bool: True if HPC execution is required, False otherwise
57
+ """
58
+ user_requirement = state["user_requirement"]
59
+
60
+ system_prompt = (
61
+ "You are an expert in OpenFOAM workflow analysis. "
62
+ "Analyze the user requirement to determine if they want to run the simulation on HPC (High Performance Computing) or locally. "
63
+ "Look for keywords like: HPC, cluster, supercomputer, SLURM, PBS, job queue, "
64
+ "parallel computing, distributed computing, or any mention of running on remote systems. "
65
+ "If the user explicitly mentions or implies they want to run on HPC/cluster, return 'hpc_run'. "
66
+ "If they want to run locally or don't specify, return 'local_run'. "
67
+ "Be conservative - if unsure, assume local run unless clearly specified otherwise. "
68
+ "Only return 'hpc_run' or 'local_run'. Don't return anything else."
69
+ )
70
+
71
+ user_prompt = (
72
+ f"User requirement: {user_requirement}\n\n"
73
+ "return 'hpc_run' or 'local_run'"
74
+ )
75
+
76
+ response = state["llm_service"].invoke(user_prompt, system_prompt)
77
+ return "hpc_run" in response.lower()
78
+
79
+
80
+ def llm_requires_visualization(state: GraphState) -> bool:
81
+ """
82
+ Use LLM to determine if user requires visualization based on their requirement.
83
+
84
+ Args:
85
+ state: Current graph state containing user requirement and LLM service
86
+
87
+ Returns:
88
+ bool: True if visualization is required, False otherwise
89
+ """
90
+ user_requirement = state["user_requirement"]
91
+
92
+ system_prompt = (
93
+ "You are an expert in OpenFOAM workflow analysis. "
94
+ "Analyze the user requirement to determine if they want visualization of results. "
95
+ "Look for keywords like: plot, visualize, graph, chart, contour, streamlines, paraview, post-processing. "
96
+ "Only if the user explicitly mentions they want visualization, return 'yes_visualization'. "
97
+ "If they don't mention visualization or only want to run the simulation, return 'no_visualization'. "
98
+ "Be conservative - if unsure, assume 'no_visualization' unless visualization is clearly requested. "
99
+ "Only return 'yes_visualization' or 'no_visualization'. Don't return anything else."
100
+ )
101
+
102
+ user_prompt = (
103
+ f"User requirement: {user_requirement}\n\n"
104
+ "return 'yes_visualization' or 'no_visualization'"
105
+ )
106
+
107
+ response = state["llm_service"].invoke(user_prompt, system_prompt)
108
+ return "yes_visualization" in response.lower()
109
+
110
+
111
+ def route_after_architect(state: GraphState):
112
+ """
113
+ Route after architect node based on whether user wants custom mesh.
114
+ For current version, if user wants custom mesh, user should be able to provide a path to the mesh file.
115
+ """
116
+ mesh_type = state.get("mesh_type", "standard_mesh")
117
+ if mesh_type == "custom_mesh":
118
+ print("Router: Custom mesh requested. Routing to meshing node.")
119
+ return "meshing"
120
+ elif mesh_type == "gmsh_mesh":
121
+ print("Router: GMSH mesh requested. Routing to meshing node.")
122
+ return "meshing"
123
+ else:
124
+ print("Router: Standard mesh generation. Routing to input_writer node.")
125
+ return "input_writer"
126
+
127
+
128
+ def route_after_input_writer(state: GraphState):
129
+ """
130
+ Route after input_writer node based on whether user wants to run on HPC.
131
+ """
132
+ if llm_requires_hpc(state):
133
+ print("LLM determined: HPC run requested. Routing to hpc_runner node.")
134
+ return "hpc_runner"
135
+ else:
136
+ print("LLM determined: Local run requested. Routing to local_runner node.")
137
+ return "local_runner"
138
+
139
+ def route_after_runner(state: GraphState):
140
+ if state.get("error_logs") and len(state["error_logs"]) > 0:
141
+ return "reviewer"
142
+ elif llm_requires_visualization(state):
143
+ return "visualization"
144
+ else:
145
+ return END
146
+
147
+ def route_after_reviewer(state: GraphState):
148
+ loop_count = state.get("loop_count", 0)
149
+ max_loop = state["config"].max_loop
150
+ if loop_count >= max_loop:
151
+ print(f"Maximum loop count ({max_loop}) reached. Ending workflow.")
152
+ if llm_requires_visualization(state):
153
+ return "visualization"
154
+ else:
155
+ return END
156
+ print(f"Loop {loop_count}: Continuing to fix errors.")
157
+
158
+ return "input_writer"
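The three routers above reduce to simple branch tables once the LLM classification is stubbed out. A minimal sketch (pure Python; a hypothetical `wants_visualization` flag stands in for `llm_requires_visualization`, and a string stands in for `langgraph.graph.END`):

```python
# Sketch of the post-runner routing decisions. The LLM classifiers are
# replaced by a plain boolean flag so the branching is easy to trace.
END = "__end__"  # stand-in for langgraph.graph.END

def route_after_runner(state: dict) -> str:
    # errors take priority; otherwise visualize or finish
    if state.get("error_logs"):
        return "reviewer"
    return "visualization" if state["wants_visualization"] else END

def route_after_reviewer(state: dict) -> str:
    # give up after max_loop fix attempts, then optionally visualize
    if state["loop_count"] >= state["max_loop"]:
        return "visualization" if state["wants_visualization"] else END
    return "input_writer"

print(route_after_runner({"error_logs": ["divergence in p"], "wants_visualization": True}))   # reviewer
print(route_after_reviewer({"loop_count": 10, "max_loop": 10, "wants_visualization": False}))  # __end__
```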
Foam-Agent/source/src/tracking_aws.py ADDED
@@ -0,0 +1,152 @@
1
+ """
2
+ # Usage
3
+
4
+ Initialize your client as follows:
5
+
6
+ ```python
7
+
8
+ import tracking_aws
9
+
10
+ bedrock_runtime = tracking_aws.new_default_client()
11
+ ```
12
+
13
+ Your usage will be tracked in a local file called `usage_nrel_aws.json`.
14
+ """
15
+ from __future__ import annotations
16
+ from math import nan
17
+ import itertools
18
+ import os
19
+ import io
20
+ import json
21
+ import pathlib
22
+ from typing import Union, Dict
23
+ from typing import Any
25
+ from contextlib import contextmanager
26
+
27
+ import boto3
28
+
29
+ Usage = Dict[str, Union[int, float]] # TODO: could have used Counter class
30
+ default_usage_file = pathlib.Path("usage_nrel_aws.json")
31
+
32
+ CLAUDE_3_5_HAIKU = 'arn:aws:bedrock:us-west-2:991404956194:application-inference-profile/g47vfd2xvs5w'
33
+ CLAUDE_3_5_SONNET = 'arn:aws:bedrock:us-west-2:991404956194:application-inference-profile/56i8iq1vib3e'
34
+
35
+ # prices match https://aws.amazon.com/bedrock/pricing/ as of January 2025.
36
+ # but they don't consider discounts for caching or batching.
37
+ # Not all models are listed in this file, nor is the fine-tuning API.
38
+ pricing = { # $ per 1,000 tokens
39
+ CLAUDE_3_5_HAIKU: {'input': 0.0008, 'output': 0.004},
40
+ CLAUDE_3_5_SONNET: {'input': 0.003, 'output': 0.015},
41
+ }
42
+
43
+ # Default models. These variables can be imported from this module.
44
+ # Even if the system that is being evaluated uses a cheap default_model,
45
+ # one might want to evaluate it carefully using a more expensive default_eval_model.
46
+ default_model = CLAUDE_3_5_HAIKU
47
+ default_eval_model = CLAUDE_3_5_HAIKU
48
+
49
+
50
+ # A context manager that lets you temporarily change the default models
51
+ # during a block of code. You can write things like
52
+ # with use_model('arn:aws:bedrock:us-west-2:991404956194:application-inference-profile/g47vfd2xvs5w'):
53
+ # ...
54
+ #
55
+ # with use_model(eval_model='arn:aws:bedrock:us-west-2:991404956194:application-inference-profile/g47vfd2xvs5w'):
56
+ # ...
57
+ @contextmanager
58
+ def use_model(model: str = default_model, eval_model: str = default_eval_model):
59
+ global default_model, default_eval_model
60
+ save_model, save_eval_model = default_model, default_eval_model
61
+ default_model, default_eval_model = model, eval_model
62
+ try:
63
+ yield
64
+ finally:
65
+ default_model, default_eval_model = save_model, save_eval_model
66
+
67
+
68
+ def track_usage(client: boto3.client, path: pathlib.Path = default_usage_file) -> boto3.client:
69
+ """
70
+ This method modifies (and returns) `client` so that its API calls
71
+ will log token counts to `path`. If the file does not exist it
72
+ will be created after the first API call. If the file exists the new
73
+ counts will be added to it.
74
+
75
+ The `read_usage()` function gets a Usage object from the file, e.g.:
76
+ {
77
+ "cost": 0.0022136,
78
+ "input_tokens": 16,
79
+ "output_tokens": 272
80
+ }
81
+
82
+ >>> client = boto3.client('bedrock')
83
+ >>> client = track_usage(client, "example_usage_file.json")
84
+ >>> type(client)
85
+ <class 'botocore.client.BaseClient'>
86
+
87
+ """
88
+ old_invoke_model = client.invoke_model
89
+
90
+ def tracked_invoke_model(*args, **kwargs) -> Any:
91
+ response = old_invoke_model(*args, **kwargs)
92
+ old: Usage = read_usage(path)
93
+ new, response_body = get_usage(response, model=kwargs.get('modelId', None))
94
+ _write_usage(_merge_usage(old, new), path)
95
+ return response_body
96
+
97
+ client.invoke_model = tracked_invoke_model # type:ignore
98
+ return client
99
+
100
+
101
+ def get_usage(response, model=None) -> tuple[Usage, dict]:
103
+ """Extract usage info from an AWS Bedrock response; returns (usage, parsed response body)."""
103
+ response_body = json.loads(response['body'].read().decode())
104
+ usage: Usage = {'input_tokens': response_body['usage']['input_tokens'],
105
+ 'output_tokens': response_body['usage']['output_tokens']}
106
+
107
+ # add a cost field
108
+ try:
109
+ costs = pricing[model] # model name passed in request (may be alias)
110
+ except KeyError:
111
+ raise ValueError(f"Don't know prices for model {model}")
112
+
113
+ cost = (usage.get('input_tokens', 0) * costs['input']
114
+ + usage.get('output_tokens', 0) * costs['output']) / 1_000
115
+ usage['cost'] = cost
116
+ return usage, response_body
117
+
118
+
119
+ def read_usage(path: pathlib.Path = default_usage_file) -> Usage:
120
+ """Retrieve total usage logged in a file."""
121
+ if os.path.exists(path):
122
+ with open(path, "rt") as f:
123
+ return json.load(f)
124
+ else:
125
+ return {}
126
+
127
+
128
+ def _write_usage(u: Usage, path: pathlib.Path):
129
+ with open(path, "wt") as f:
130
+ json.dump(u, f, indent=4)
131
+
132
+
133
+ def _merge_usage(u1: Usage, u2: Usage) -> Usage:
134
+ return {k: u1.get(k, 0) + u2.get(k, 0) for k in itertools.chain(u1, u2)}
135
+
136
+
137
+ def new_default_client() -> boto3.client:
138
+ """Set the `default_client` to a new tracked client, based on the current
139
+ aws credentials. If your credentials change you should call this method again."""
140
+ global default_client
141
+ default_client = track_usage(
142
+ boto3.client('bedrock-runtime', region_name='us-west-2')) # create a client with default args, and modify it
143
+ # so that it will store its usage in a local file
144
+ return default_client
145
+
146
+
147
+ # new_default_client() # set `default_client` right away when importing this module
148
+
149
+ if __name__ == "__main__":
150
+ import doctest
151
+
152
+ doctest.testmod()
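`_merge_usage` and the cost arithmetic in `get_usage` are easy to sanity-check in isolation. A self-contained sketch (illustrative model name and per-1K-token prices, not the real inference-profile ARNs):

```python
import itertools

# hypothetical pricing table in the same $-per-1,000-tokens shape as the module's
pricing = {"claude-3-5-haiku": {"input": 0.0008, "output": 0.004}}

def merge_usage(u1: dict, u2: dict) -> dict:
    # same scheme as tracking_aws._merge_usage: sum counters key-wise
    return {k: u1.get(k, 0) + u2.get(k, 0) for k in itertools.chain(u1, u2)}

def cost(usage: dict, model: str) -> float:
    # same arithmetic as get_usage: price per 1,000 tokens
    p = pricing[model]
    return (usage["input_tokens"] * p["input"] + usage["output_tokens"] * p["output"]) / 1_000

total = merge_usage({"input_tokens": 16, "output_tokens": 272},
                    {"input_tokens": 10, "output_tokens": 8})
print(total)  # {'input_tokens': 26, 'output_tokens': 280}
print(cost(total, "claude-3-5-haiku"))
```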
Foam-Agent/source/src/utils.py ADDED
@@ -0,0 +1,535 @@
1
+ # utils.py
2
+ import re
3
+ import subprocess
4
+ import os
5
+ from typing import Optional, Any, Type, TypedDict, List
6
+ from pydantic import BaseModel, Field
7
+ from langchain.chat_models import init_chat_model
8
+ from langchain_community.vectorstores import FAISS
9
+ from langchain_openai.embeddings import OpenAIEmbeddings
10
+ from langchain_aws import ChatBedrock, ChatBedrockConverse
11
+ from langchain_anthropic import ChatAnthropic
12
+ from pathlib import Path
13
+ import tracking_aws
14
+ import requests
15
+ import time
16
+ import random
17
+ from botocore.exceptions import ClientError
18
+ import shutil
19
+ from config import Config
20
+ from langchain_ollama import ChatOllama
21
+
22
+ # Global dictionary to store loaded FAISS databases
23
+ DATABASE_DIR = f"{Path(__file__).resolve().parent.parent}/database/faiss"
25
+
26
+ FAISS_DB_CACHE = {
27
+ "openfoam_allrun_scripts": FAISS.load_local(f"{DATABASE_DIR}/openfoam_allrun_scripts", OpenAIEmbeddings(model="text-embedding-3-small"), allow_dangerous_deserialization=True),
28
+ "openfoam_tutorials_structure": FAISS.load_local(f"{DATABASE_DIR}/openfoam_tutorials_structure", OpenAIEmbeddings(model="text-embedding-3-small"), allow_dangerous_deserialization=True),
29
+ "openfoam_tutorials_details": FAISS.load_local(f"{DATABASE_DIR}/openfoam_tutorials_details", OpenAIEmbeddings(model="text-embedding-3-small"), allow_dangerous_deserialization=True),
30
+ "openfoam_command_help": FAISS.load_local(f"{DATABASE_DIR}/openfoam_command_help", OpenAIEmbeddings(model="text-embedding-3-small"), allow_dangerous_deserialization=True)
31
+ }
32
+
33
+ class FoamfilePydantic(BaseModel):
34
+ file_name: str = Field(description="Name of the OpenFOAM input file")
35
+ folder_name: str = Field(description="Folder where the foamfile should be stored")
36
+ content: str = Field(description="Content of the OpenFOAM file, written in OpenFOAM dictionary format")
37
+
38
+ class FoamPydantic(BaseModel):
39
+ list_foamfile: List[FoamfilePydantic] = Field(description="List of OpenFOAM configuration files")
40
+
41
+ class ResponseWithThinkPydantic(BaseModel):
42
+ think: str = Field(description="Thought process of the LLM")
43
+ response: str = Field(description="Response of the LLM")
44
+
45
+ class LLMService:
46
+ def __init__(self, config: object):
47
+ self.model_version = getattr(config, "model_version", "gpt-4o")
48
+ self.temperature = getattr(config, "temperature", 0)
49
+ self.model_provider = getattr(config, "model_provider", "openai")
50
+
51
+ # Initialize statistics
52
+ self.total_calls = 0
53
+ self.total_prompt_tokens = 0
54
+ self.total_completion_tokens = 0
55
+ self.total_tokens = 0
56
+ self.failed_calls = 0
57
+ self.retry_count = 0
58
+
59
+ # Initialize the LLM
60
+ if self.model_provider.lower() == "bedrock":
61
+ bedrock_runtime = tracking_aws.new_default_client()
62
+ self.llm = ChatBedrockConverse(
63
+ client=bedrock_runtime,
64
+ model_id=self.model_version,
65
+ temperature=self.temperature,
66
+ max_tokens=8192
67
+ )
68
+ elif self.model_provider.lower() == "anthropic":
69
+ self.llm = ChatAnthropic(
70
+ model=self.model_version,
71
+ temperature=self.temperature
72
+ )
73
+ elif self.model_provider.lower() == "openai":
74
+ self.llm = init_chat_model(
75
+ self.model_version,
76
+ model_provider=self.model_provider,
77
+ temperature=self.temperature
78
+ )
79
+ elif self.model_provider.lower() == "ollama":
80
+ try:
81
+ response = requests.get("http://localhost:11434/api/version", timeout=2)
82
+ # If request successful, service is running
83
+ except requests.exceptions.RequestException:
84
+ print("Ollama is not running, starting it...")
85
+ subprocess.Popen(["ollama", "serve"],
86
+ stdout=subprocess.PIPE,
87
+ stderr=subprocess.PIPE)
88
+ # Wait for service to start
89
+ time.sleep(5) # give the service a few seconds to initialize
90
+
91
+ self.llm = ChatOllama(
92
+ model=self.model_version,
93
+ temperature=self.temperature,
94
+ num_predict=-1,
95
+ num_ctx=131072,
96
+ base_url="http://localhost:11434"
97
+ )
98
+ else:
99
+ raise ValueError(f"{self.model_provider} is not a supported model_provider")
100
+
101
+ def invoke(self,
102
+ user_prompt: str,
103
+ system_prompt: Optional[str] = None,
104
+ pydantic_obj: Optional[Type[BaseModel]] = None,
105
+ max_retries: int = 10) -> Any:
106
+ """
107
+ Invoke the LLM with the given prompts and return the response.
108
+
109
+ Args:
110
+ user_prompt: The user's prompt
111
+ system_prompt: Optional system prompt
112
+ pydantic_obj: Optional Pydantic model for structured output
113
+ max_retries: Maximum number of retries for throttling errors
114
+
115
+ Returns:
116
+ The LLM response with token usage statistics
117
+ """
118
+ self.total_calls += 1
119
+
120
+ messages = []
121
+ if system_prompt:
122
+ messages.append({"role": "system", "content": system_prompt})
123
+ messages.append({"role": "user", "content": user_prompt})
124
+
125
+ # Calculate prompt tokens
126
+ prompt_tokens = 0
127
+ for message in messages:
128
+ prompt_tokens += self.llm.get_num_tokens(message["content"])
129
+
130
+ retry_count = 0
131
+ while True:
132
+ try:
133
+ if pydantic_obj:
134
+ structured_llm = self.llm.with_structured_output(pydantic_obj)
135
+ response = structured_llm.invoke(messages)
136
+ else:
137
+ if self.model_version.startswith("deepseek"):
138
+ structured_llm = self.llm.with_structured_output(ResponseWithThinkPydantic)
139
+ response = structured_llm.invoke(messages)
140
+
141
+ # Extract the response without the think
142
+ response = response.response
143
+ else:
144
+ response = self.llm.invoke(messages)
145
+ response = response.content
146
+
147
+ # Calculate completion tokens
148
+ response_content = str(response)
149
+ completion_tokens = self.llm.get_num_tokens(response_content)
150
+ total_tokens = prompt_tokens + completion_tokens
151
+
152
+ # Update statistics
153
+ self.total_prompt_tokens += prompt_tokens
154
+ self.total_completion_tokens += completion_tokens
155
+ self.total_tokens += total_tokens
156
+
157
+ return response
158
+
159
+ except ClientError as e:
160
+ if e.response['Error']['Code'] == 'Throttling' or e.response['Error']['Code'] == 'TooManyRequestsException':
161
+ retry_count += 1
162
+ self.retry_count += 1
163
+
164
+ if retry_count > max_retries:
165
+ self.failed_calls += 1
166
+ raise Exception(f"Maximum retries ({max_retries}) exceeded: {str(e)}")
167
+
168
+ base_delay = 1.0
169
+ max_delay = 60.0
170
+ delay = min(max_delay, base_delay * (2 ** (retry_count - 1)))
171
+ jitter = random.uniform(0, 0.1 * delay)
172
+ sleep_time = delay + jitter
173
+
174
+ print(f"ThrottlingException occurred: {str(e)}. Retrying in {sleep_time:.2f} seconds (attempt {retry_count}/{max_retries})")
175
+ time.sleep(sleep_time)
176
+ else:
177
+ self.failed_calls += 1
178
+ raise e
179
+ except Exception as e:
180
+ self.failed_calls += 1
181
+ raise e
182
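The throttling handler above uses capped exponential backoff with up to 10% jitter. The same schedule, isolated as a helper (a sketch for illustration, not part of the module):

```python
import random

def backoff_delay(retry_count: int, base: float = 1.0, cap: float = 60.0) -> float:
    # exponential growth, capped at `cap`, plus up to 10% jitter --
    # mirrors the retry loop in LLMService.invoke
    delay = min(cap, base * (2 ** (retry_count - 1)))
    return delay + random.uniform(0, 0.1 * delay)

# un-jittered schedule for attempts 1..7: 1, 2, 4, 8, 16, 32, 60 seconds
for attempt in range(1, 8):
    print(f"attempt {attempt}: ~{min(60.0, 2 ** (attempt - 1)):.0f}s base delay")
```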
+
183
+ def get_statistics(self) -> dict:
184
+ """
185
+ Get the current statistics of the LLM service.
186
+
187
+ Returns:
188
+ Dictionary containing various statistics
189
+ """
190
+ return {
191
+ "total_calls": self.total_calls,
192
+ "failed_calls": self.failed_calls,
193
+ "retry_count": self.retry_count,
194
+ "total_prompt_tokens": self.total_prompt_tokens,
195
+ "total_completion_tokens": self.total_completion_tokens,
196
+ "total_tokens": self.total_tokens,
197
+ "average_prompt_tokens": self.total_prompt_tokens / self.total_calls if self.total_calls > 0 else 0,
198
+ "average_completion_tokens": self.total_completion_tokens / self.total_calls if self.total_calls > 0 else 0,
199
+ "average_tokens": self.total_tokens / self.total_calls if self.total_calls > 0 else 0
200
+ }
201
+
202
+ def print_statistics(self) -> None:
203
+ """
204
+ Print the current statistics of the LLM service.
205
+ """
206
+ stats = self.get_statistics()
207
+ print("\n<LLM Service Statistics>")
208
+ print(f"Total calls: {stats['total_calls']}")
209
+ print(f"Failed calls: {stats['failed_calls']}")
210
+ print(f"Total retries: {stats['retry_count']}")
211
+ print(f"Total prompt tokens: {stats['total_prompt_tokens']}")
212
+ print(f"Total completion tokens: {stats['total_completion_tokens']}")
213
+ print(f"Total tokens: {stats['total_tokens']}")
214
+ print(f"Average prompt tokens per call: {stats['average_prompt_tokens']:.2f}")
215
+ print(f"Average completion tokens per call: {stats['average_completion_tokens']:.2f}")
216
+ print(f"Average tokens per call: {stats['average_tokens']:.2f}\n")
217
+ print("</LLM Service Statistics>")
218
+
219
+ class GraphState(TypedDict):
220
+ user_requirement: str
221
+ config: Config
222
+ case_dir: str
223
+ tutorial: str
224
+ case_name: str
225
+ subtasks: List[dict]
226
+ current_subtask_index: int
227
+ error_command: Optional[str]
228
+ error_content: Optional[str]
229
+ loop_count: int
230
+ # Additional state fields that will be added during execution
231
+ llm_service: Optional['LLMService']
232
+ case_stats: Optional[dict]
233
+ tutorial_reference: Optional[str]
234
+ case_path_reference: Optional[str]
235
+ dir_structure_reference: Optional[str]
236
+ case_info: Optional[str]
237
+ allrun_reference: Optional[str]
238
+ dir_structure: Optional[dict]
239
+ commands: Optional[List[str]]
240
+ foamfiles: Optional[dict]
241
+ error_logs: Optional[List[str]]
242
+ history_text: Optional[List[str]]
243
+ case_domain: Optional[str]
244
+ case_category: Optional[str]
245
+ case_solver: Optional[str]
246
+ # Mesh-related state fields
247
+ mesh_info: Optional[dict]
248
+ mesh_commands: Optional[List[str]]
249
+ custom_mesh_used: Optional[bool]
250
+ mesh_type: Optional[str]
251
+ custom_mesh_path: Optional[str]
252
+ # Review and rewrite related fields
253
+ review_analysis: Optional[str]
254
+ input_writer_mode: Optional[str]
255
+ # HPC-related fields
256
+ job_id: Optional[str]
257
+ cluster_info: Optional[dict]
258
+ slurm_script_path: Optional[str]
259
+
260
+ def tokenize(text: str) -> str:
261
+ # Replace underscores with spaces
262
+ text = text.replace('_', ' ')
263
+ # Insert a space between a lowercase letter and an uppercase letter (global match)
264
+ text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
265
+ return text.lower()
266
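`tokenize` normalizes identifiers for FAISS queries by splitting snake_case and camelCase before lowercasing. For example (a copy of the function for illustration):

```python
import re

def tokenize(text: str) -> str:
    # copy of utils.tokenize: underscores to spaces, split camelCase, lowercase
    text = text.replace('_', ' ')
    text = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', text)
    return text.lower()

print(tokenize("incompressible_simpleFoam"))  # incompressible simple foam
print(tokenize("blockMesh"))                  # block mesh
```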
+
267
+ def save_file(path: str, content: str) -> None:
268
+ os.makedirs(os.path.dirname(path), exist_ok=True)
269
+ with open(path, 'w') as f:
270
+ f.write(content)
271
+ print(f"Saved file at {path}")
272
+
273
+ def read_file(path: str) -> str:
274
+ if os.path.exists(path):
275
+ with open(path, 'r') as f:
276
+ return f.read()
277
+ return ""
278
+
279
+ def list_case_files(case_dir: str) -> str:
280
+ files = [f for f in os.listdir(case_dir) if os.path.isfile(os.path.join(case_dir, f))]
281
+ return ", ".join(files)
282
+
283
+ def remove_files(directory: str, prefix: str) -> None:
284
+ for file in os.listdir(directory):
285
+ if file.startswith(prefix):
286
+ os.remove(os.path.join(directory, file))
287
+ print(f"Removed files with prefix '{prefix}' in {directory}")
288
+
289
+ def remove_file(path: str) -> None:
290
+ if os.path.exists(path):
291
+ os.remove(path)
292
+ print(f"Removed file {path}")
293
+
294
+ def remove_numeric_folders(case_dir: str) -> None:
295
+ """
296
+ Remove all folders in case_dir that represent numeric values, including those with decimal points,
297
+ except for the "0" folder.
298
+
299
+ Args:
300
+ case_dir (str): The directory path to process
301
+ """
302
+ for item in os.listdir(case_dir):
303
+ item_path = os.path.join(case_dir, item)
304
+ if os.path.isdir(item_path) and item != "0":
305
+ try:
306
+ # Try to convert to float to check if it's a numeric value
307
+ float(item)
308
+ # If conversion succeeds, it's a numeric folder
309
+ try:
310
+ shutil.rmtree(item_path)
311
+ print(f"Removed numeric folder: {item_path}")
312
+ except Exception as e:
313
+ print(f"Error removing folder {item_path}: {str(e)}")
314
+ except ValueError:
315
+ # Not a numeric value, so we keep this folder
316
+ pass
317
+
318
+ def run_command(script_path: str, out_file: str, err_file: str, working_dir: str, config : Config) -> None:
319
+ print(f"Executing script {script_path} in {working_dir}")
320
+ os.chmod(script_path, 0o777)
321
+ openfoam_dir = os.getenv("WM_PROJECT_DIR")
322
+ command = f"source {openfoam_dir}/etc/bashrc && bash {os.path.abspath(script_path)}"
323
+ timeout_seconds = config.max_time_limit
324
+
325
+ with open(out_file, 'w') as out, open(err_file, 'w') as err:
326
+ process = subprocess.Popen(
327
+ ['bash', "-c", command],
328
+ cwd=working_dir,
329
+ stdout=subprocess.PIPE,
330
+ stderr=subprocess.PIPE,
331
+ stdin=subprocess.DEVNULL,
332
+ text=True
333
+ )
334
+
338
+ try:
339
+ stdout, stderr = process.communicate(timeout=timeout_seconds)
340
+ out.write(stdout)
341
+ err.write(stderr)
342
+ except subprocess.TimeoutExpired:
343
+ process.kill()
344
+ stdout, stderr = process.communicate()
345
+ timeout_message = (
346
+ "OpenFOAM execution took too long. "
347
+ "This case, if set up right, does not require such large execution times.\n"
348
+ )
349
+ out.write(timeout_message + stdout)
350
+ err.write(timeout_message + stderr)
351
+ print(f"Execution timed out: {script_path}")
352
+
353
+
354
+
355
+ print(f"Executed script {script_path}")
356
+
357
+ def check_foam_errors(directory: str) -> list:
358
+ error_logs = []
359
+ # DOTALL mode allows '.' to match newline characters
360
+ pattern = re.compile(r"ERROR:(.*)", re.DOTALL)
361
+
362
+ for file in os.listdir(directory):
363
+ if file.startswith("log"):
364
+ filepath = os.path.join(directory, file)
365
+ with open(filepath, 'r') as f:
366
+ content = f.read()
367
+
368
+ match = pattern.search(content)
369
+ if match:
370
+ error_content = match.group(0).strip()
371
+ error_logs.append({"file": file, "error_content": error_content})
372
+ elif "error" in content.lower():
373
+ print(f"Warning: file {file} contains 'error' but does not match expected format.")
374
+ return error_logs
375
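`check_foam_errors` keys on the first `ERROR:` marker and, because of `re.DOTALL`, captures everything from that marker to the end of the log. A small demonstration of the pattern on a made-up log snippet:

```python
import re

# same pattern as check_foam_errors; DOTALL lets '.' cross newlines
pattern = re.compile(r"ERROR:(.*)", re.DOTALL)

log = "Time = 1\nERROR:\n  divergence in pressure solve\n  at cell 42"
m = pattern.search(log)
print(m.group(0).strip())
```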
+
376
+ def extract_commands_from_allrun_out(out_file: str) -> list:
377
+ commands = []
378
+ if not os.path.exists(out_file):
379
+ return commands
380
+ with open(out_file, 'r') as f:
381
+ for line in f:
382
+ if line.startswith("Running "):
383
+ parts = line.split(" ")
384
+ if len(parts) > 1:
385
+ commands.append(parts[1].strip())
386
+ return commands
387
+
388
+ def parse_case_name(text: str) -> str:
389
+ match = re.search(r'case name:\s*(.+)', text, re.IGNORECASE)
390
+ return match.group(1).strip() if match else "default_case"
391
+
392
+ def split_subtasks(text: str) -> list:
393
+ header_match = re.search(r'splits into (\d+) subtasks:', text, re.IGNORECASE)
394
+ if not header_match:
395
+ print("Warning: No subtasks header found in the response.")
396
+ return []
397
+ num_subtasks = int(header_match.group(1))
398
+ subtasks = re.findall(r'subtask\d+:\s*(.*)', text, re.IGNORECASE)
399
+ if len(subtasks) != num_subtasks:
400
+ print(f"Warning: Expected {num_subtasks} subtasks but found {len(subtasks)}.")
401
+ return subtasks
402
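`split_subtasks` expects the architect's response in a fixed `splits into N subtasks:` layout. A stripped-down sketch of the same parsing (without the count-mismatch warnings), on a made-up plan:

```python
import re

def split_subtasks(text: str) -> list:
    # same regexes as utils.split_subtasks, minus the warning prints
    if not re.search(r'splits into (\d+) subtasks:', text, re.IGNORECASE):
        return []
    return re.findall(r'subtask\d+:\s*(.*)', text, re.IGNORECASE)

plan = ("splits into 2 subtasks:\n"
        "subtask1: write system/controlDict\n"
        "subtask2: write system/fvSchemes")
print(split_subtasks(plan))  # ['write system/controlDict', 'write system/fvSchemes']
```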
+
403
+ def parse_context(text: str) -> str:
404
+ match = re.search(r'FoamFile\s*\{.*?(?=```|$)', text, re.DOTALL | re.IGNORECASE)
405
+ if match:
406
+ return match.group(0).strip()
407
+
408
+ print("Warning: Could not parse context; returning original text.")
409
+ return text
410
+
411
+
412
+ def parse_file_name(subtask: str) -> str:
413
+ match = re.search(r'openfoam\s+(.*?)\s+foamfile', subtask, re.IGNORECASE)
414
+ return match.group(1).strip() if match else ""
415
+
416
+ def parse_folder_name(subtask: str) -> str:
417
+ match = re.search(r'foamfile in\s+(.*?)\s+folder', subtask, re.IGNORECASE)
418
+ return match.group(1).strip() if match else ""
419
+
420
+ def find_similar_file(description: str, tutorial: str) -> str:
421
+ start_pos = tutorial.find(description)
422
+ if start_pos == -1:
423
+ return "None"
424
+ end_marker = "input_file_end."
425
+ end_pos = tutorial.find(end_marker, start_pos)
426
+ if end_pos == -1:
427
+ return "None"
428
+ return tutorial[start_pos:end_pos + len(end_marker)]
429
+
430
+ def read_commands(file_path: str) -> str:
431
+ if not os.path.exists(file_path):
432
+ raise FileNotFoundError(f"Commands file not found: {file_path}")
433
+ with open(file_path, 'r') as f:
434
+ # join non-empty lines with a comma
435
+ return ", ".join(line.strip() for line in f if line.strip())
436
+
437
+ def find_input_file(case_dir: str, command: str) -> str:
438
+ for root, _, files in os.walk(case_dir):
439
+ for file in files:
440
+ if command in file:
441
+ return os.path.join(root, file)
442
+ return ""
443
+
444
+ def retrieve_faiss(database_name: str, query: str, topk: int = 1) -> list:
445
+ """
446
+ Retrieve similar entries from a FAISS database, returning a list of result dicts.
447
+ """
448
+
449
+ if database_name not in FAISS_DB_CACHE:
450
+ raise ValueError(f"Database '{database_name}' is not loaded.")
451
+
452
+ # Tokenize the query
453
+ query = tokenize(query)
454
+
455
+ vectordb = FAISS_DB_CACHE[database_name]
456
+ docs = vectordb.similarity_search(query, k=topk)
457
+ if not docs:
458
+ raise ValueError(f"No documents found for query: {query}")
459
+
460
+ formatted_results = []
461
+ for doc in docs:
462
+ metadata = doc.metadata or {}
463
+
464
+ if database_name == "openfoam_allrun_scripts":
465
+ formatted_results.append({
466
+ "index": doc.page_content,
467
+ "full_content": metadata.get("full_content", "unknown"),
468
+ "case_name": metadata.get("case_name", "unknown"),
469
+ "case_domain": metadata.get("case_domain", "unknown"),
470
+ "case_category": metadata.get("case_category", "unknown"),
471
+ "case_solver": metadata.get("case_solver", "unknown"),
472
+ "dir_structure": metadata.get("dir_structure", "unknown"),
473
+ "allrun_script": metadata.get("allrun_script", "N/A")
474
+ })
475
+ elif database_name == "openfoam_command_help":
476
+ formatted_results.append({
477
+ "index": doc.page_content,
478
+ "full_content": metadata.get("full_content", "unknown"),
479
+ "command": metadata.get("command", "unknown"),
480
+ "help_text": metadata.get("help_text", "unknown")
481
+ })
482
+ elif database_name == "openfoam_tutorials_structure":
483
+ formatted_results.append({
484
+ "index": doc.page_content,
485
+ "full_content": metadata.get("full_content", "unknown"),
486
+ "case_name": metadata.get("case_name", "unknown"),
487
+ "case_domain": metadata.get("case_domain", "unknown"),
488
+ "case_category": metadata.get("case_category", "unknown"),
489
+ "case_solver": metadata.get("case_solver", "unknown"),
490
+ "dir_structure": metadata.get("dir_structure", "unknown")
491
+ })
492
+ elif database_name == "openfoam_tutorials_details":
493
+ formatted_results.append({
494
+ "index": doc.page_content,
495
+ "full_content": metadata.get("full_content", "unknown"),
496
+ "case_name": metadata.get("case_name", "unknown"),
497
+ "case_domain": metadata.get("case_domain", "unknown"),
498
+ "case_category": metadata.get("case_category", "unknown"),
499
+ "case_solver": metadata.get("case_solver", "unknown"),
500
+ "dir_structure": metadata.get("dir_structure", "unknown"),
501
+ "tutorials": metadata.get("tutorials", "N/A")
502
+ })
503
+ else:
504
+ raise ValueError(f"Unknown database name: {database_name}")
505
+
506
+
507
+
508
+ return formatted_results
509
+
510
+
511
+ def parse_directory_structure(data: str) -> dict:
512
+ """
513
+ Parses the directory structure string and returns a dictionary where:
514
+ - Keys: directory names
515
+ - Values: count of files in that directory.
516
+ """
517
+ directory_file_counts = {}
518
+
519
+ # Find all <dir>...</dir> blocks in the input string.
520
+ dir_blocks = re.findall(r'<dir>(.*?)</dir>', data, re.DOTALL)
521
+
522
+ for block in dir_blocks:
523
+ # Extract the directory name (everything after "directory name:" until the first period)
524
+ dir_name_match = re.search(r'directory name:\s*(.*?)\.', block)
525
+ # Extract the list of file names within square brackets
526
+ files_match = re.search(r'File names in this directory:\s*\[(.*?)\]', block)
527
+
528
+ if dir_name_match and files_match:
529
+ dir_name = dir_name_match.group(1).strip()
530
+ files_str = files_match.group(1)
531
+ # Split the file names by comma, removing any surrounding whitespace
532
+ file_list = [filename.strip() for filename in files_str.split(',')]
533
+ directory_file_counts[dir_name] = len(file_list)
534
+
535
+ return directory_file_counts
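A minimal sketch of how `parse_directory_structure` behaves on the `<dir>...</dir>` format it expects; the sample string below is hypothetical but follows the pattern the regexes look for:

```python
import re

def parse_directory_structure(data: str) -> dict:
    """Count files per directory in a <dir>...</dir> annotated string."""
    directory_file_counts = {}
    for block in re.findall(r'<dir>(.*?)</dir>', data, re.DOTALL):
        # directory name runs up to the first period; file list sits in [...]
        dir_name_match = re.search(r'directory name:\s*(.*?)\.', block)
        files_match = re.search(r'File names in this directory:\s*\[(.*?)\]', block)
        if dir_name_match and files_match:
            dir_name = dir_name_match.group(1).strip()
            file_list = [f.strip() for f in files_match.group(1).split(',')]
            directory_file_counts[dir_name] = len(file_list)
    return directory_file_counts

sample = (
    "<dir>directory name: 0. "
    "File names in this directory: [p, U]</dir>"
    "<dir>directory name: system. "
    "File names in this directory: [controlDict, fvSchemes, fvSolution]</dir>"
)
print(parse_directory_structure(sample))  # {'0': 2, 'system': 3}
```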
README.md CHANGED
@@ -1,10 +1,146 @@
- ---
- title: Code2MCP FoamAgent
- emoji: 🐨
- colorFrom: red
- colorTo: indigo
- sdk: docker
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Code2MCP-FoamAgent
+
+ An MCP (Model Context Protocol) service wrapper for Foam-Agent, providing a natural-language-driven OpenFOAM CFD simulation workflow.
+
+ ## Features
+
+ - 🧪 CFD simulation requirements expressed in natural language
+ - 🤖 Automated multi-agent workflow
+ - 📊 Automatic error detection and correction
+ - 🔧 Custom mesh support (GMSH .msh files)
+ - 🌐 Support for multiple LLM providers
+
+ ## Quick Start
+
+ ### 1. Environment Configuration
+
+ Copy the environment variable template and configure it:
+
+ ```bash
+ cp env.example .env
+ ```
+
+ Edit the `.env` file and set your API key:
+
+ ```env
+ # OpenAI API Configuration
+ OPENAI_API_KEY=your_openai_api_key_here
+ OPENAI_BASE_URL=https://api.openai.com/v1
+
+ # Or use a custom API endpoint
+ # OPENAI_BASE_URL=https://api.gptsapi.net/v1
+ ```
+
+ ### 2. Install Dependencies
+
+ #### Using conda (recommended)
+
+ ```bash
+ # Create the conda environment
+ conda env create -f environment.yml
+
+ # Activate the environment
+ conda activate openfoamAgent
+
+ # Install additional MCP dependencies
+ pip install fastapi uvicorn[standard] fastmcp
+ ```
+
+ ### 3. Run the MCP Service
+
+ ```bash
+ python Foam-Agent/mcp_output/start_mcp.py
+ ```
+
+ ### 4. Docker Deployment
+
+ #### Option 1: Use a .env file (recommended)
+
+ ```bash
+ # 1. Create the .env file
+ cp env.example .env
+ # Edit the .env file and fill in your API key
+
+ # 2. Use docker-compose (recommended)
+ docker-compose up -d
+
+ # 3. Or use docker run
+ docker run -p 7860:7860 --env-file .env code2mcp-foamagent
+ ```
+
+ #### Option 2: Set environment variables directly
+
+ ```bash
+ # Build the image
+ docker build -t code2mcp-foamagent .
+
+ # Run the container (passing environment variables)
+ docker run -p 7860:7860 \
+     -e OPENAI_API_KEY=your_api_key_here \
+     -e OPENAI_BASE_URL=https://api.openai.com/v1 \
+     code2mcp-foamagent
+ ```
+
+ ## Usage
+
+ ### Basic CFD Simulation
+
+ ```python
+ # Run a simple CFD simulation
+ result = run_foam_agent(
+     requirements="Do an incompressible lid driven cavity flow...",
+     output_dir="./output"
+ )
+ ```
+
+ ### Using a Custom Mesh
+
+ ```python
+ # Use a custom GMSH mesh file
+ result = run_foam_agent(
+     requirements="Simulate flow over a tandem wing...",
+     output_dir="./output",
+     custom_mesh="./tandem_wing.msh"
+ )
+ ```
+
+ ### Full Benchmark
+
+ ```python
+ # Run the full Foam-Agent benchmark
+ result = run_foam_benchmark(
+     openfoam_path="/opt/openfoam10",
+     requirements="Your CFD requirements...",
+     output_dir="./output"
+ )
+ ```
+
+ ## Environment Variables
+
+ | Variable | Description | Default |
+ |----------|-------------|---------|
+ | `OPENAI_API_KEY` | OpenAI API key | required |
+ | `OPENAI_BASE_URL` | OpenAI API base URL | `https://api.openai.com/v1` |
+ | `WM_PROJECT_DIR` | OpenFOAM installation path | `/opt/openfoam10` |
+ | `MODEL_PROVIDER` | LLM provider | `openai` |
+ | `MODEL_VERSION` | Model version | `gpt-4o` |
+ | `MCP_TRANSPORT` | MCP transport mode | `http` |
+ | `MCP_PORT` | MCP service port | `7860` |
+ ## System Requirements
+
+ - Python 3.11+
+ - OpenFOAM v10
+ - An OpenAI API key or another LLM provider
+ - Preprocessed OpenFOAM databases
+
+ ## Troubleshooting
+
+ 1. **Check system status**: use the `check_foam_agent_status` tool
+ 2. **Verify API configuration**: make sure the API key and URL are set correctly
+ 3. **Check dependencies**: make sure all Python packages are installed correctly
+ 4. **OpenFOAM environment**: make sure OpenFOAM is installed and configured correctly
+
+ ## License
+
+ MIT License
app.py ADDED
@@ -0,0 +1,13 @@
+ from fastapi import FastAPI
+ import os
+
+ app = FastAPI()
+
+
+ @app.get("/")
+ async def root():
+     return {
+         "status": "ok",
+         "service": "Code2MCP-FoamAgent",
+         "transport": os.environ.get("MCP_TRANSPORT", "stdio"),
+     }
docker-compose.yml ADDED
@@ -0,0 +1,16 @@
+ version: '3.8'
+
+ services:
+   foam-agent-mcp:
+     build: .
+     ports:
+       - "7860:7860"
+     env_file:
+       - .env
+     environment:
+       - MCP_TRANSPORT=http
+       - MCP_PORT=7860
+       - CONDA_DEFAULT_ENV=openfoamAgent
+     volumes:
+       - ./output:/app/output
+     restart: unless-stopped