AI Assistant
AI-DRIVEN MODELING IN NextFEM Designer |
This guide illustrates how to use the AI Assistant plugin shipped with NextFEM Designer since v.2.6.
The plugin, free for everyone, lets you use AI APIs from within NextFEM Designer. Users chat with their favourite AI provider, while the plugin instructs the API to format its reply so that it can be read by the plugin and converted into commands. In that sense, the plugin acts as an AI agent inside NextFEM Designer.
Supported AI providers
You must supply your own API key and AI/LLM server address. The plugin natively supports all OpenAI-compatible LLM APIs, and it has been tested with:
- OpenAI
- Claude
- OpenRouter
- HuggingFace
- Groq
- LM Studio for running LLM models locally.
Each of the links above leads to the page where you can get your API key. Please also refer to each provider's documentation for the server address and the model name.
Please find some examples below:
| Provider | Server | Models |
| --- | --- | --- |
| OpenRouter | https://openrouter.ai/api/v1/chat/completions | deepseek/deepseek-chat-v3-0324:free, qwen/qwen2.5-vl-32b-instruct:free, google/gemini-2.5-pro-exp-03-25:free, mistralai/mistral-small-3.1-24b-instruct:free, open-r1/olympiccoder-32b:free, google/gemma-3-4b-it:free, deepseek/deepseek-v3-base:free |
| HuggingFace | https://router.huggingface.co/v1/chat/completions | Qwen/Qwen2.5-VL-7B-Instruct, google/gemma-2-2b-it, deepseek-ai/DeepSeek-V3-0324 |
| Groq | https://api.groq.com/openai/v1/chat/completions | llama-3.3-70b-versatile and others… |
| LM Studio | http://localhost:1234/v1/chat/completions | claude-3.7-sonnet-reasoning-gemma3-12b … (all LM Studio models supported) |
| Cohere | https://api.cohere.com/v2/chat | command-a-03-2025 |
You can also use APIs other than the ones listed above, provided they are compatible with the OpenAI SDK.
Paste your API key, server address and model name into the corresponding textboxes, and you're good to go!
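To make the three textboxes concrete, here is a minimal sketch, assuming the Python `requests` library, of the kind of OpenAI-compatible request that combines a server address, API key and model name. It uses the LM Studio row from the table above (a local server, so no real key is needed); the payload layout is the standard chat-completions format, and the prompt text is purely illustrative, not the plugin's internal wording.

```python
# Minimal sketch of an OpenAI-compatible chat-completions call, using the
# LM Studio row from the table above. Values mirror the plugin's textboxes.
import requests

SERVER = "http://localhost:1234/v1/chat/completions"  # server address textbox
API_KEY = "YOUR_API_KEY"  # API key textbox (ignored by a local LM Studio server)
MODEL = "claude-3.7-sonnet-reasoning-gemma3-12b"  # model name textbox

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Say hello in one word."},  # illustrative prompt
    ],
}

resp = requests.post(
    SERVER,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# In the chat-completions format, the reply text is in choices[0].message.content.
print(resp.json()["choices"][0]["message"]["content"])
```

Any provider from the table works the same way: only the server URL, key and model name change.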
Agent behaviour and commands
Your message is transmitted to the server without the previous context (in order to support even free API providers), together with a system message that constrains the response format. The model must be capable enough: suggested models have at least 3B parameters and a quantization of 4 bits or higher. The system message also asks the LLM not to give human-readable explanations, in order to reduce token usage.
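As an illustration of this stateless exchange, the sketch below sends each turn with only a system message plus the latest user prompt, never the accumulated chat history. The system-message text is a hypothetical stand-in: the plugin's actual wording is internal, and the Groq endpoint and model are just one pairing from the table above.

```python
# Illustrative stateless turn: each request carries only the system message and
# the newest user prompt; earlier chat turns are deliberately not resent.
import requests

SERVER = "https://api.groq.com/openai/v1/chat/completions"  # Groq row from the table
API_KEY = "YOUR_API_KEY"
SYSTEM = (  # hypothetical stand-in for the plugin's internal system message
    "Reply only with NextFEM commands, one per line. "
    "Do not add human-readable explanations."
)

def ask(user_prompt: str) -> str:
    """One stateless turn: no previous context is attached to the request."""
    payload = {
        "model": "llama-3.3-70b-versatile",
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_prompt},  # latest message only
        ],
    }
    r = requests.post(
        SERVER,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]
```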
The chat is always cleared whenever the API key, server or model changes. The server is asked to reply with NextFEM commands, which can be:
- reverted by undo
- executed even partially by the user, by selecting the rows to execute.
This helps the user keep control over what has been changed in the model, also by repeating modelling commands manually. Right-click in the chat box to access these commands.
If the LLM fails to provide valid nodes and/or elements, press Undo in the program and try again, describing your request more precisely in the prompt. Different LLMs behave differently, so a prompt that works with one model may not work with another.
If you’re sharing screenshots, remember to hide your API key and server address!