NextFEM AI Tools
AI-DRIVEN DESIGN IN NextFEM PROGRAMS
AI Assistant v2

NextFEM Designer v2.7 integrates the new AI Assistant v2, which supports the NextFEM MCP tools (now shipped with the program). It can be started from the Plugins tab.
This plugin retains support for the simple modelling AI calls of v1 (see the chapter below); however, the use of the MCP tools is strongly encouraged. MCP mode can be activated by ticking the “MCP tools” checkbox in the main window.

After ticking the checkbox, wait for the local MCP server to start. Once the number of available MCP tools is displayed, you’re ready to go.
The providers supported by v1 continue to be supported. In general, the AI client supports any provider exposing the OpenAI chat completions v1 API.
| Provider | Endpoint |
| OpenRouter | https://openrouter.ai/api/v1 |
| HuggingFace | https://router.huggingface.co/v1 |
| Groq | https://api.groq.com/openai/v1 |
| LM Studio | http://localhost:1234/v1 |
| Cohere | https://api.cohere.com/v1 |
Please note that the endpoint addresses to be provided differ from those required in v1: v2 expects the base URL only, without the /chat/completions suffix.
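All the endpoints above expose the standard OpenAI chat completions interface. For reference, a raw request to such an endpoint looks roughly like the sketch below, where YOUR_API_KEY and yourModel are placeholders to be replaced with your own values (the plugin appends the /chat/completions suffix and builds this call for you):
curl https://api.yourprovider.com/v1/chat/completions -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d "{\"model\": \"yourModel\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"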
Finally, v2 supports storing the endpoint address, API key and chat AI model specification in a CSV file called “LLMkeys.csv”, which can be placed in the installation folder of NextFEM Designer.
Sample content of the CSV file:
#Endpoint;API key;Modelname
https://api.yourprovider.com/v1;apikey;gpt-4.1
Please contact us for further information.
AI Assistant v1
This chapter illustrates how to use the AI Assistant v1 plugin shipped with NextFEM Designer v2.6.
The plugin, free for everyone, permits the use of AI APIs within NextFEM Designer. Users can interact via chat with their favourite AI providers, while the plugin instructs the API to format the reply in a way that can be read by the plugin and converted into commands. In that sense, the plugin acts as an AI agent inside NextFEM Designer.

Supported AI providers
The user must supply their own API key and AI/LLM server address. The plugin natively supports all OpenAI-like LLM APIs, and it has been tested with:
- OpenAI
- Claude
- OpenRouter
- HuggingFace
- Groq
- LM Studio for running LLM models locally.
Each of the links above leads to the page from which you can get your API key. Also, please refer to each provider’s documentation for the server address and the model name.
Please find some examples below:
| Provider | Endpoint | Tested models |
| OpenRouter | https://openrouter.ai/api/v1/chat/completions | deepseek/deepseek-chat-v3-0324:free qwen/qwen2.5-vl-32b-instruct:free google/gemini-2.5-pro-exp-03-25:free mistralai/mistral-small-3.1-24b-instruct:free open-r1/olympiccoder-32b:free google/gemma-3-4b-it:free deepseek/deepseek-v3-base:free |
| HuggingFace | https://router.huggingface.co/v1/chat/completions | Qwen/Qwen2.5-VL-7B-Instruct google/gemma-2-2b-it deepseek-ai/DeepSeek-V3-0324 |
| Groq | https://api.groq.com/openai/v1/chat/completions | llama-3.3-70b-versatile and others… |
| LM Studio | http://localhost:1234/v1/chat/completions | claude-3.7-sonnet-reasoning-gemma3-12b … (all LM Studio models supported) |
| Cohere | https://api.cohere.com/v2/chat | command-a-03-2025 |
You can also use APIs other than those listed above, provided they are compatible with the OpenAI SDK.
Paste the API key, server address and model name into the corresponding textboxes, and you’re good to go!
Agenting and commands
Your message is transmitted to the server without the previous context (in order to support even free API providers), together with a system message constraining the response format, which is honoured if the model is capable enough (suggested models should have at least 3B parameters and a quantization of 4 bits or higher). The system message also asks the LLM not to give human-readable explanations, in order to reduce token usage.
The chat is always cleared when the API key, server or model changes. The server is asked to reply with NextFEM commands, which can be:
- reverted by undo
- executed even partially by the user, by selecting the rows to execute.
This helps the user keep control of what has been changed in the model, for instance by repeating modelling commands manually. Right-click in the chat box to access these commands.

If the LLM fails to provide valid nodes and/or elements, press Undo in the program and try again, describing your request better in the prompt. Different LLMs behave differently, hence a prompt that works with one model may not work with another.
If you’re sharing screenshots, remember to hide your API key and server address!
NextFEM MCP server
This guide will show you how to use the local NextFEM MCP server for:
- Anthropic Claude AI
- GitHub Copilot AI in Visual Studio Code
- GitHub Copilot AI in Visual Studio 2022
- LM Studio (version >= 0.3.17)
- OpenAI ChatGPT Desktop
The MCP server is a simple interface that allows you to connect your local NextFEM Designer installation to your favourite AI provider. You don’t need a paid plan of Claude or GitHub Copilot: this server works with the free versions of both providers, as well as with the free version of NextFEM Designer.
DEPRECATED (see the note below) – Version 1.0.0.2 – Release date: 17 December 2025
DEPRECATED (see the note below) – Version 1.0.0.1 – Release date: 20 October 2025
DEPRECATED (see the note below) – Version 1.0.0.0 – Release date: 08 October 2025
Note: From version 2.7 onwards, the MCP tools server is supplied and updated with NextFEM Designer and can be found in the installation folder.
Prerequisites
NextFEM Designer is assumed to be already installed on your system. Be sure to activate the REST server at program startup, by enabling the option depicted below.

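A quick way to verify that the REST server is listening is to query it from a command prompt. A minimal check (the port below is an assumption – use the port shown next to the REST server option in your installation):
curl -s -o NUL -w "HTTP %{http_code}\n" http://localhost:5151/
Any HTTP status code in the output means the server is reachable; a connection error means it is not running.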
Installation in Claude Desktop (Windows)
Install Claude Desktop for Windows.
The MCP server consists of an executable exposing the tools to be connected with the AI; this is supplied by our MCPserver executable.
1. Find the NextFEM Designer installation folder (typically, it’s C:\Program Files\NextFEM\NextFEM Designer 64bit\);
2. Configure Claude Desktop to load the MCP server at startup. Open folder:
%appdata%\Claude
and double-click claude_desktop_config.json to edit it.
If the file does not exist, please do not create it by hand; instead, from inside Claude Desktop, select File / Settings / Developers and press the Change configuration button.
Then change the content of the file to:
{
  "mcpServers": {
    "NextFEM": {
      "command": "C:\\myPath\\NextFEMmcpServer.exe",
      "args": []
    }
  }
}
Remember to replace myPath with the actual MCP server path.
3. That’s all: restart NextFEM Designer and Claude Desktop. You’ll see a hammer icon with the number of NextFEM Designer tools available in Claude.

See it in action
Installation in GitHub Copilot for Visual Studio Code (Windows)
1. Find the NextFEM Designer installation folder (typically, it’s C:\Program Files\NextFEM\NextFEM Designer 64bit\);
2. Configure Visual Studio Code to load the MCP server at startup. Edit the file:
%appdata%\Code\User\mcp.json
with the following lines:
{
  "servers": {
    "NextFEM": {
      "type": "stdio",
      "command": "C:\\myPath\\NextFEMmcpServer.exe",
      "args": []
    }
  },
  "inputs": []
}


Installation in GitHub Copilot for Visual Studio 2022 (Windows)
1. Find the NextFEM Designer installation folder (typically, it’s C:\Program Files\NextFEM\NextFEM Designer 64bit\);
2. Configure Visual Studio 2022 by adding the MCP server from the tools icon in the Copilot chat.
3. After the addition, enable the NextFEM tools (a file-based alternative is sketched below).

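If you prefer a file-based setup, recent Visual Studio 2022 releases can also read MCP servers from a JSON configuration file using the same "servers" schema as Visual Studio Code. A minimal sketch, assuming your Visual Studio version supports the %USERPROFILE%\.mcp.json location:
{
  "servers": {
    "NextFEM": {
      "type": "stdio",
      "command": "C:\\myPath\\NextFEMmcpServer.exe",
      "args": []
    }
  }
}
As before, replace myPath with the actual MCP server path.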
Installation in LM Studio (Windows)
1. Download the NextFEM MCP server executable from here, then extract the .exe to a known and reachable folder;
2. Configure LM Studio to use the MCP server together with the LLM you’re using. In the chat window, click the “Program” button on the right sidebar, then click “Edit mcp.json”.

Put the following lines in the JSON file:
{
  "mcpServers": {
    "NextFEM": {
      "command": "C:\\myPath\\NextFEMmcpServer.exe"
    }
  }
}
Remember to replace myPath with the actual MCP server path. Finally, enable the tool that appears on the right sidebar (“mcp/next-fem”).
Installation in OpenAI ChatGPT (Windows)
The paid versions of the ChatGPT desktop client only support the use of remote MCP tools (i.e., those accessible from an internet server). The free version does not support the addition of MCP tools.
For the following procedure, you need to have a free account on GitHub:
1. Install Node.js with the command:
winget install --silent --accept-package-agreements --accept-source-agreements OpenJS.NodeJS
Also install Microsoft DevTunnel with the command:
winget install Microsoft.devtunnel
and log in with your GitHub account by issuing the command:
devtunnel user login -d -g
2. You can temporarily publish the local MCP server of NextFEM Designer with the command:
npx -y supergateway --stdio "C:\Program Files\NextFEM\NextFEM Designer 64bit\NextFEMmcpServer.exe" --outputTransport streamableHttp
leaving the prompt window active. Start a new command prompt and enter the command:
devtunnel host -p 8000 --allow-anonymous
leaving the prompt window active.
3. Configure ChatGPT with the address provided by the last command on the line “Connect via browser:”, which is usually in the format https://randomCode.devtunnels.ms
– Enable Developer Mode from Settings / Apps

– Select “Create app” and fill in the fields:
Name: NextFEM MCP
Authentication: no authentication
URL: https://randomCode.devtunnels.ms/mcp
NOTE: be sure to add the suffix /mcp

Then click Create. The NextFEM MCP tools are now available in your chats; add the tools from the menu in the message pane to use them.
Notes
- Use clear and specific prompts – e.g. always refer at least once to NextFEM Designer, in order to steer the AI towards the MCP tools (see the example after this list)
- Be aware that only a few selected commands of the NextFEM API are available as tools. Avoid making requests not covered by these commands.
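For instance, a prompt along the following lines explicitly mentions the program and stays within basic modelling operations (the geometry and load value are only an illustration):
In NextFEM Designer, create a 3 m cantilever beam along the X axis, fix the first node, and apply a 10 kN vertical load at the free end.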

