This is what most enterprise AI agent setups look like from a security standpoint right now.
The agent has broad access to internal systems. It can call external APIs, browse the web, and read and write files. Someone on the IT team added a prompt that says "do not do anything harmful," and everyone moved on.
That is not a security plan. That is a guess dressed up as a plan.
When an AI agent works on its own without a human approving every step, it has a level of trust that most security teams have not actually reviewed. The AI might be tricked by a harmful document it was asked to read. It might send data to an outside website that your rules would never allow. It might change a file it should never touch. None of these failures mean the AI is broken. They can happen with a normal, well-working AI making a simple mistake.
The question enterprise security teams need to ask is not "is this AI model safe?" The real question is "what can this agent actually do if something goes wrong, and who approved it?"
NVIDIA released NemoClaw on March 16, 2026 to answer that question. It is open source under the Apache 2.0 license, backed by NVIDIA, and currently in alpha testing. While it is not ready for full production yet, the security design it uses matters to every business thinking about using AI agents.
This is the security layer the industry has been missing.
What NemoClaw Actually Is
To understand why this matters for business, it helps to know what each piece does.
OpenClaw is the AI agent itself. It is the software that talks to AI models, uses tools, and does tasks on its own. Think of it as the brain and hands of the system.
NVIDIA OpenShell is the secure runtime environment. It is a locked container that OpenClaw runs inside. It provides basic security walls like sandboxing, process isolation, and network controls. Think of OpenShell as a locked room.
NemoClaw is the layer that ties everything together for business use. It installs OpenShell, sets up the security rules, manages the connection to the AI model, and gives your team a simple control panel. It adds guided onboarding to walk your team through setup. It adds state management so the agent remembers its rules even after a restart. It includes routed inference to check every AI request, and layered protection on top of the OpenShell runtime. Think of NemoClaw as the building manager.
The relationship is simple: OpenShell provides the locked room. OpenClaw is the worker inside. NemoClaw connects them, sets the rules, and gives your team the keys.
The Architecture: Built for Team Control

The main design rule is that every layer adds a hard wall that the layers above cannot break through. The agent cannot reach the internet without going through the network rule layer. The network layer cannot approve a new website without your team's review. Even if one security layer fails, the other layers are still standing.
This is a security method called "defense in depth." It is the same idea security teams use for other critical software. NemoClaw is the first serious attempt to use it for AI agents.
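The layered model can be sketched as a chain of independent gates, where a request proceeds only if every gate allows it. The gate names and rules below are hypothetical, shown only to illustrate the pattern, not NemoClaw's actual code:

```shell
#!/bin/sh
# Hypothetical sketch of defense in depth -- each gate can only deny,
# never widen access. Gate rules here are illustrative, not NemoClaw's.

gate_filesystem() {           # allow only paths under the sandbox workspace
  case "$1" in /workspace/*) return 0 ;; *) return 1 ;; esac
}

gate_network() {              # allow only approved hosts (or no host at all)
  case "$1" in api.example.com|"") return 0 ;; *) return 1 ;; esac
}

request() {                   # $1 = path, $2 = optional host
  if gate_filesystem "$1" && gate_network "$2"; then
    echo "ALLOW path=$1 host=${2:-none}"
  else
    echo "DENY path=$1 host=${2:-none}"
  fi
}

request /workspace/report.txt                  # in sandbox, no network
request /workspace/data.csv api.example.com    # approved host
request /etc/shadow                            # blocked by the filesystem gate
request /workspace/out.txt evil.example.net    # blocked by the network gate
```

Because each gate can only deny, a bug in one layer cannot grant access that another layer forbids.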
The Five Enterprise Features That Matter
1. Every External Connection Needs Your Approval
This is the most important feature for daily business operations.
By default, the system blocks all outbound internet traffic. Nothing leaves the agent unless your team says it is okay. When the agent needs to reach a new website or API, it sends an alert to your team. You review the request and approve or deny it.
Every approval is logged. Every denial is logged. Every connection is saved with a time stamp.
For compliance teams, this changes everything. Instead of saying "we trust the AI to only visit safe websites," you have a written record of every website the agent visited, when it visited, and who approved it. This is a real auditable control.
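As a rough sketch of the pattern (the allowlist file and log format here are made up, not NemoClaw's actual ones), default-deny egress with an audit trail might look like:

```shell
#!/bin/sh
# Hypothetical sketch of default-deny egress with an audit trail.
# The allowlist file and log format are illustrative, not NemoClaw's.

ALLOWLIST="approved_hosts.txt"
AUDIT_LOG="egress_audit.log"
printf 'api.example.com\ndocs.example.com\n' > "$ALLOWLIST"
: > "$AUDIT_LOG"

egress() {
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  if grep -qxF "$1" "$ALLOWLIST"; then
    echo "$ts ALLOW $1" | tee -a "$AUDIT_LOG"
  else
    # Denied by default -- a human must add the host to the allowlist.
    echo "$ts DENY $1" | tee -a "$AUDIT_LOG"
  fi
}

egress api.example.com       # on the approved list
egress tracker.example.net   # never approved: blocked and logged
```

Every decision lands in the log with a timestamp, so the audit trail exists even for connections that were refused.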
2. OS-Level Sandboxing Limits the Damage of Mistakes
NemoClaw uses three standard Linux security tools to lock down the agent. These tools work below the AI model itself.
Landlock restricts which folders the agent can open. If the AI is tricked into trying to read password files or private folders, Landlock blocks it at the base level. The AI does not get a chance to fail. The access is simply denied.
seccomp restricts the basic actions the agent is allowed to take. It uses a strict list of allowed actions. Anything outside that list is blocked, no matter what the AI decides to do.
Network isolation gives the agent its own separate network view. The agent cannot see the rest of your company network.
The business impact here is huge. "Prompt injection" is a real attack where bad actors hide instructions in a document to trick the AI. NemoClaw does not stop the AI from reading the trick, but it stops the AI from doing damage. If a trick tells the agent to delete everything it can, "everything it can" is only a few safe folders. It cannot touch your whole system.
The damage of an attack is limited by the architecture, not by hope.
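To make the idea concrete, here is a userspace sketch of the folder-allowlist concept. This is illustrative only: real Landlock rules are enforced by the kernel and cannot be bypassed by the confined process itself.

```shell
#!/bin/sh
# Userspace illustration of a folder allowlist. Real Landlock enforcement
# happens in the kernel, below the process being confined.

SANDBOX_ROOT="/tmp/agent-workspace"
mkdir -p "$SANDBOX_ROOT"

check_path() {
  resolved=$(realpath -m "$1")   # resolve ../ tricks before checking
  case "$resolved" in
    "$SANDBOX_ROOT"|"$SANDBOX_ROOT"/*) echo "allow $1" ;;
    *) echo "deny $1" ;;
  esac
}

check_path "$SANDBOX_ROOT/notes.txt"         # inside the sandbox
check_path /etc/shadow                       # outside: denied
check_path "$SANDBOX_ROOT/../../etc/passwd"  # traversal attempt: denied
```

Note the path is resolved before the check, so "../" tricks cannot escape the allowed root.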
3. Data Residency and Offline Support
For organizations that must keep data inside a certain country or inside their own private network, NemoClaw offers local options.
The local vLLM option runs AI models entirely on your own computer hardware. Prompts never leave your building. The same security rules apply to this local setup.
The local NIM option uses NVIDIA's container system for your own hardware.
Both options mean you do not have to choose between strong security and keeping data private. You get both at the same time.
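As a sketch of what the local path can look like (the model name and port are examples, and NemoClaw's exact integration may differ), vLLM serves models behind an OpenAI-compatible API on your own machine:

```shell
# Serve a model locally; requires a machine with a suitable GPU.
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

# From another terminal: this request never leaves the machine.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Everything in that exchange, prompt and answer, stays on localhost.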
4. The Blueprint System: Tracked Security Rules
Every business has a different risk level. A healthcare AI needs different rules than a software coding AI.
NemoClaw uses a "blueprint" system. A blueprint is a file that defines the exact security rules of the agent: which folders are open, which network rules apply, and what the AI cannot do. This file is the single source of truth for your agent's limits.
Blueprints are tracked over time. If the system restarts, it loads the exact same rules. Rules do not slowly change or break down over time.
For security teams, this creates something most AI tools lack: a formal, written document of the agent's security rules. You can put this blueprint in front of an auditor and say, "Here is exactly what this AI can and cannot do."
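Purely as a hypothetical illustration (the field names below are invented, not NemoClaw's actual schema), a blueprint might gather folder access, network rules, and hard limits in one tracked file:

```yaml
# Hypothetical blueprint sketch -- illustrative field names only.
name: finance-report-agent
filesystem:
  read_write:
    - /workspace/reports
  read_only:
    - /workspace/reference-data
network:
  default: deny
  allowed_hosts:
    - api.internal.example.com
inference:
  log_all_requests: true
limits:
  outbound_requires_approval: true
```

Because the file is tracked over time, a diff of the blueprint is a diff of the agent's privileges.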
5. AI Requests Are Logged and Checked
The agent never talks directly to the AI model. All requests go through NemoClaw's routing layer first. They are checked and logged before they ever reach the model.
This gives you two benefits. First, you have a complete log of every question sent to the AI and every answer it gave. This is a full audit trail of AI behavior.
Second, the routing layer checks for strange patterns. Unusual requests or known attack patterns can be caught before they ever reach the model.
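A minimal sketch of what such a log could look like (the JSON-lines format and screening rule are illustrative, not NemoClaw's real records):

```shell
#!/bin/sh
# Hypothetical sketch of a routing layer's audit log. The JSON-lines
# format and the naive pattern check are illustrative only.

AUDIT_LOG="inference_audit.log"
: > "$AUDIT_LOG"

log_exchange() {   # $1 = direction (request/response), $2 = content
  printf '{"ts":"%s","direction":"%s","content":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" >> "$AUDIT_LOG"
}

log_exchange request  "Summarize the Q3 security report"
log_exchange response "The Q3 report covers three incidents..."

# A naive screen for suspicious requests, in the spirit of pattern checks:
if grep '"direction":"request"' "$AUDIT_LOG" | grep -qi "password"; then
  echo "FLAG: request mentions credentials"
fi
```

Each exchange becomes one timestamped record, which is what makes the audit trail reviewable after the fact.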
The Compliance Conversation
For legal and compliance teams, NemoClaw changes five common worries into documented facts.
"What data is leaving our company?"
Without NemoClaw: You know what the AI is supposed to do, but not what it actually sent.
With NemoClaw: Every outbound connection is logged. Every new website needed your approval.
"What happens if the AI makes a bad choice?"
Without NemoClaw: The damage could be high because the agent has broad access.
With NemoClaw: The damage is limited. File access is restricted and network access is controlled.
"Can we prove we have control over this system?"
Without NemoClaw: It is hard to prove, because the AI's choices are unpredictable and there is no enforced boundary to point to.
With NemoClaw: Yes. The blueprint is a written, tracked rule set that you can audit.
"Does this meet our data privacy rules?"
Without NemoClaw on cloud AI: It depends entirely on the cloud provider's rules.
With NemoClaw on local AI: Data stays inside your building by design.
"Who is responsible when something goes wrong?"
Without NemoClaw: It is unclear who approved what.
With NemoClaw: The approval records exist. The logs exist. The chain of trust is documented.
Hardware, Setup, and IT Operations
For IT teams evaluating the tool, the setup process is designed to be simple, but it does require specific resources.
Hardware Needs
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
The sandbox image is about 2.4 GB in size. During setup, several background tools run at the same time. On computers with less than 8 GB of RAM, this combined usage can cause the system to run out of memory and crash. If you cannot add more RAM to the machine, you can set up at least 8 GB of swap space. This acts like extra memory on your hard drive. It makes the setup slower, but it stops the crashes.
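The swap setup itself uses standard Linux commands and requires root; nothing here is NemoClaw-specific:

```shell
# Create and enable an 8 GB swap file (root required):
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Confirm the swap space is active:
swapon --show

# Optional: add this line to /etc/fstab so it survives reboots:
# /swapfile none swap sw 0 0
```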
Supported Platforms
| Platform | Container Runtime | Status | Important Notes |
|---|---|---|---|
| Linux | Docker | Fully tested | This is the main path for enterprise use. |
| macOS (Apple Silicon) | Colima, Docker Desktop | Tested with limits | You must install Xcode Command Line Tools first. |
| NVIDIA DGX Spark | Docker | Fully tested | Great for local AI testing. |
| Windows WSL2 | Docker Desktop | Tested with limits | Must use the WSL2 backend for Docker. |
The Installation and Onboarding Process
The setup uses a guided wizard to make sure security is applied correctly from day one.
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
The installer runs as a normal user. You do not need root access or administrator passwords to run it. It installs Node.js and NemoClaw into your user folders. Note that Docker itself must be installed on the machine first, and installing Docker might require administrator rights on Linux.
When the install finishes, it creates a fresh, secure OpenClaw instance inside the sandbox. A summary screen confirms exactly what is running:
──────────────────────────────────────────────────
Sandbox my-assistant (Landlock + seccomp + netns)
Model nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoints)
──────────────────────────────────────────────────
Run: nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs: nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
Important IT Rule: For environments managed by NemoClaw, your team should use the nemoclaw onboard command whenever you need to create or update the secure sandbox. You should avoid using manual OpenShell update commands directly. If you change things manually outside of NemoClaw, you must rerun the onboard command to make sure the security rules are still applied correctly.
Clean Uninstall Process
Enterprise IT teams know that installing software is only half the battle. You also need a clean way to remove it without leaving leftover files. NemoClaw handles this safely with a built-in command:
nemoclaw uninstall
There are helpful flags for IT teams managing this at scale:
- --yes: This skips the confirmation prompt. It is useful for automated cleanup scripts.
- --keep-openshell: This leaves the base OpenShell software installed if other tools on the machine need it.
- --delete-models: This removes any local AI models that NemoClaw downloaded, which can free up a large amount of hard drive space.
The uninstall command runs entirely from files already on your computer. It does not need to download anything from the internet to remove itself.
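For example, a fully automated cleanup that also reclaims model storage combines two of the flags (shown as a usage sketch):

```shell
# Non-interactive removal that also deletes downloaded local models:
nemoclaw uninstall --yes --delete-models
```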
Behind the Scenes: How the Project is Built
For technical leaders reviewing the code before allowing it into your environment, the project is well organized. It is not a messy hobby project.
The code is split into clear folders with specific jobs. The basic commands that run the tool are kept in the "bin" folder. The core logic that enforces your security rules and checks for attacks has its own folder, and the blueprint files that define what the AI can do are stored separately from that logic.
Most importantly for enterprise trust, there is a dedicated "test" folder. This contains automated tests that run to make sure new updates do not break the security features. This shows that NVIDIA is treating this security tool with the same rigor as their other enterprise software.
Security Reporting and Responsibility
NVIDIA takes security flaws very seriously. If your security team finds a weakness in NemoClaw, they should never post it publicly on the internet.
Instead, NVIDIA provides standard private channels for reporting. You can submit a report through the official NVIDIA Vulnerability Disclosure Program. You can also send an encrypted email directly to their security team. For enterprise security teams, having a clear, private way to report flaws is a basic requirement for adopting any new software. NemoClaw meets that standard.
What Honest Evaluation Looks Like
NemoClaw is alpha software, released on March 16, 2026. NVIDIA states clearly that it is not ready for production use yet. The tool is still changing, and the way it works might change without warning.
For enterprise teams, this means two things.
What you should not do: Do not put this in charge of real production work or sensitive customer data yet. The alpha label is real, and there will be changes to how it works.
What you should do: Study the architecture now, before you desperately need it. The security model NemoClaw uses is not going away. Every AI agent your company uses will become more powerful and more independent over time. The security controls that feel optional today will be mandatory in a year or two.
Teams that understand this architecture now can ask the right questions about any AI tool they buy: What files can this AI access? Who approves new website connections? Are the AI requests logged? What is the damage limit if the AI makes a mistake?
NemoClaw is the first open-source project to give real, architectural answers to those questions. That matters, even if you do not install it today.
Resources and Community
GitHub Repository: NVIDIA/NemoClaw - Source code and Apache 2.0 license
Official Documentation: Architecture and Setup Guides - Full technical details for your IT team
NVIDIA OpenShell: OpenShell Runtime - The locked room that NemoClaw builds on
Security Best Practices: Enterprise Security Guide - Risk frameworks and security profiles
NVIDIA Security Portal: Vulnerability Disclosure - Private channel to report security issues


