Tech giants including Meta are clamping down on employee use of OpenClaw, the viral open-source agentic AI tool, citing serious security risks such as potential privacy breaches and unpredictable behavior that could compromise secure environments.
A Meta executive, speaking anonymously to Wired on February 18, 2026, said he had recently instructed his team not to install OpenClaw on standard work laptops, warning that violations could cost employees their jobs, over fears the tool could trigger a privacy breach in secure settings. The tool, formerly known as MoltBot or Clawdbot, has gained notoriety for enabling AI agents to act autonomously, performing tasks such as sending emails, but experts describe it as a “security nightmare,” citing its lack of disclosure, transparency, and basic security protocols.
Valere CEO Guy Pistone announced a ban on OpenClaw in a LinkedIn video, and other, unnamed firms have issued similar warnings to staff. Reported vulnerabilities include the potential for the tool to be exploited to hijack personal computers, as highlighted by ZDNet, prompting calls for caution even as its open-source nature fosters community contributions. An MIT study further underscores the risks of agentic AI, describing such systems as “fast, loose, and out of control.”
The bans reflect a broader corporate dilemma: balancing innovation with security in an era when powerful AI tools evolve rapidly but carry significant vulnerabilities. OpenClaw’s developer, who recently joined OpenAI, has committed to maintaining the tool’s open-source status through a foundation.
As agentic AI tools like OpenClaw push boundaries, corporate restrictions may slow adoption, but they underscore the need for robust safeguards and could shape how enterprises integrate emerging technologies amid escalating cybersecurity threats.