AI is turning the vision of pervasively intelligent networks into reality. It’s becoming the driving force behind networks that are smarter, more adaptable, more reliable and more secure than ever. No longer limited to generating text, AI is evolving into an active operational agent within cable networks. It’s an evolution that unlocks unprecedented opportunities for innovation and new subscriber experiences.
However, this transformation also introduces new and complex security challenges.
Managing the Challenges of AI
Many of AI’s challenges stem from the relentless pace of innovation. New protocols, tools and methods are adopted soon after they emerge, leaving security as an afterthought. For instance, some AI agents can autonomously interact with data, use tools and call APIs. All of this adds new layers of complexity, dependencies and attack surfaces. These risks should be addressed as early as possible, in the design and development stages.
A key element in any risk management framework is grounding this innovation in a secure foundation that spans the entire AI stack, from the large language models (LLMs) at the core, to the AI agents and their prompts, to the tools and APIs they use, and finally to the AI protocols that connect and coordinate them.
CableLabs is investigating cable-specific elements where we can bring well-understood security principles into AI R&D, laying the foundation for protecting cable’s critical infrastructure from tampering and subversion. Just like any other software, the AI stack would benefit from provenance guarantees, integrity checks, identity assertions and transitive credentialing.
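As a concrete illustration of one of these principles, the sketch below shows how an integrity check might gate the loading of a model artifact: the file's SHA-256 digest is recomputed and compared against a published value before the artifact is trusted. This is a minimal, hypothetical example; the file names and digests are placeholders, not part of any CableLabs specification.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the artifact matches its published digest.

    In practice, the expected digest would come from a signed
    manifest distributed separately from the artifact itself.
    """
    return sha256_of(path) == expected_digest
```

A production deployment would go further, pairing digests with cryptographic signatures so that provenance (who published the artifact) can be verified along with integrity (that it was not altered in transit).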
The AI Security Working Group
Because AI architectures interact directly with critical systems and sensitive data, securing the entire ecosystem — from clients and servers to the protocols that link them — isn’t just a technical best practice; it’s a strategic imperative for the industry. A unified focus on AI security will enable trusted communications and accelerate innovation across both business and technical domains.
To foster collaborative input and discussion, CableLabs is launching the AI Security Working Group (AISWG), a CableLabs member initiative focused on developing and promoting best common practices and practical security guidance for AI deployment relevant to the cable broadband industry. By bringing our members together in this forum, we can help our industry harness the transformative power of AI confidently, responsibly and securely.
CableLabs is also actively engaged with two other groups, outside the cable industry, that are working on securing AI:
- The Messaging Malware Mobile Anti-Abuse Working Group (M3AAWG) is working on preventing abuse of and abuse from AI systems.
- The Open Worldwide Application Security Project (OWASP) is working on a broad range of threats to AI.
How You Can Help
There are a few ways CableLabs members can be part of this essential initiative:
- Maximize your membership: Register for a CableLabs account to explore exclusive, member-only resources and content.
- Join the working group: Request to join the AI Security Working Group, and help us define the future of AI security. (A CableLabs account is required to log in.)
- Connect with us: Introduce us to the people in your organization who are focused on AI security, governance and agent protocol implementation.
