Last month, Jason Grad, co-founder of a tech startup, warned his team in a Slack message against using Clawdbot, a new agentic AI tool that was gaining popularity but had been flagged as high-risk, and urged his 20 employees to keep it off company devices. Grad's concerns reflect a broader apprehension echoing throughout the tech industry regarding the experimental tool, now known as OpenClaw, which has prompted many executives to impose strict usage restrictions.
An executive at Meta recently told his team that failing to keep OpenClaw off their work laptops could cost them their jobs, citing the software's unpredictability and the potential for privacy breaches if it were integrated into secure environments. OpenClaw was developed by Peter Steinberger as a free, open-source tool, but its rapid rise in popularity has alarmed cybersecurity professionals.
Despite the tool's appeal as a multifunctional assistant capable of organizing files, conducting research, and shopping online, security experts have advised companies to regulate its use strictly. Grad, for instance, described a "mitigate first, investigate second" approach to anything potentially harmful to his organization.
At another tech firm, Valere, a similar sentiment emerged after an employee highlighted OpenClaw's capabilities on an internal communication channel. The CEO swiftly enacted a ban, citing the risk that the tool could gain access to sensitive company information if it infiltrated their systems.
To better understand OpenClaw's vulnerabilities, Valere briefly permitted its use on an isolated machine. The company's research team sought to identify security flaws and subsequently recommended strict controls on the commands the tool is allowed to execute.
Some executives, while recognizing OpenClaw's innovative nature, have chosen to rely on existing cybersecurity measures rather than enact specific bans. One CEO said that roughly 15 programs are authorized on their company's devices, and anything outside that list is blocked automatically.
Grad's company, Massive, has taken a cautious approach, testing OpenClaw in isolated environments while exploring its commercial potential. The company recently launched ClawPod, a platform that allows OpenClaw agents to use its services in a controlled manner. Grad acknowledges that while OpenClaw presents security challenges, it could also signal a transformative direction for technology.
As organizations navigate the complexities of adopting novel AI technologies like OpenClaw, the focus remains on balancing innovation with stringent security measures.