Comcast, Charter Deploy AI Infrastructure at the Network Edge Using NVIDIA AI Technologies
Tests by the two largest cable operators in the U.S. are a notable development in the use of AI in networks for low-latency, compute-heavy applications like animated movie production, gaming and advertising
PHILADELPHIA & STAMFORD, Conn. — In an important example of how operators are looking to use AI to improve operations and develop new services, Comcast and Charter have issued separate announcements regarding tests and deployments of AI processing using technologies from NVIDIA.
The announcements illustrate how operators hope to test real-time AI applications running milliseconds from customers, unlock faster, more responsive experiences for the next wave of AI and improve the performance of low-latency, compute-heavy applications like animated movie production, gaming and advertising in their networks.
Such applications could also give them a competitive advantage in the growing competition for broadband customers between cable operators and 5G fixed wireless.
In its announcement, Charter's Spectrum also highlighted how the deployment of AI technologies in its network could greatly improve and speed up the creation of animated movies and special effects. Spectrum's network serves the Hollywood and Los Angeles areas, where many of those movies and special effects are produced.
Both announcements were made at NVIDIA’s GTC event in San Jose on March 17.
In its announcement, Comcast called its NVIDIA AI deployment “a groundbreaking initiative to bring AI processing, using NVIDIA GPUs, closer to customers than ever before to accelerate the development of next-generation AI applications across America. The first-of-its-kind collaboration will test the performance of AI workloads running directly at the edge of Comcast’s network – in regional facilities located just milliseconds from where customers live and work.”
“The industry is shifting towards a more distributed AI infrastructure and Comcast operates a network that supports it today,” said Elad Nafshi, chief network officer, Comcast. “The NVIDIA AI Grid vision requires intelligent infrastructure that reaches all the way to the customer’s doorstep. By bringing NVIDIA GPUs directly into our edge cloud, we can explore what becomes possible when AI inference happens only milliseconds from end users.”
“Distributed AI Grid is the next big opportunity for the telecommunications industry, and Comcast’s nationwide, deeply distributed network is a perfect match for building it,” said Ronnie Vasishta, senior vice president of AI and Telecoms, NVIDIA. “By bringing intelligent AI inference to the network edge, Comcast can unlock inherent cost efficiencies, while delivering deterministic, low‑latency experiences for customers at massively concurrent scale. This collaboration is powering the next era of hyper personalized experiences that run just milliseconds from users.”
The field trial takes advantage of Comcast’s nationwide, distributed architecture that reaches 65 million homes and businesses with the aim of showing how AI at the network edge can unlock faster, smarter, more responsive experiences. This, Comcast said, will translate into quicker apps, more relevant recommendations, smoother gaming, and AI-powered tools that respond instantly.
Comcast also noted that with advanced DOCSIS 4.0 FDX nodes, smart amplifiers, and intelligent gateways across its footprint, Comcast can support real-time AI inference at scale, something traditional centralized, fiber-only, or wireless networks cannot match.
As more AI workloads move from distant data centers to local edge locations, Comcast said that its architecture positions the company as a key contributor to the emerging AI Grid for the next generation of AI-driven services.
Comcast will initially focus on three use cases designed to showcase the benefits of running AI workloads at the network’s edge:
- Personalized Advertising Agent. This is an advanced ad-delivery engine powered by Decart real-time AI video models. Decart’s technology is capable of customizing video advertisements down to the household level using attributes such as language, content preferences, household size, or other non-sensitive demographic categories – enabling hyper-relevant experiences for viewers while improving efficiency for advertisers.
- Small Business Concierge Agent. Leveraging Personal AI’s small language model (SLM) and memory platform deployed on HPE ProLiant servers to deliver an AI-powered “front desk” service capable of greeting customers, managing appointments, answering questions, and supporting day-to-day operations for small businesses.
- Reducing Latency for Gaming. Delivering ultra-low latency streaming for online gaming, the AI Grid brings GPU resources physically closer to players. This can dramatically improve responsiveness and overall gameplay quality, building on the impact of the low-latency technology Comcast rolled out for NVIDIA GeForce NOW and other applications last year.
In a separate announcement by Charter’s Spectrum, the operator said it is deploying remote graphics processing units (GPUs) at the network edge to support latency-sensitive applications and compute-heavy use cases, using NVIDIA AI Grid reference design over Spectrum’s fiber broadband network.
More specifically, Spectrum is demonstrating enterprise-level, low-latency, remote NVIDIA AI infrastructure use cases built on NVIDIA RTX PRO 6000 Blackwell Server Edition technology and a distributed AI Grid.
Spectrum reported that the solution enables animation artists to render blockbuster-level CGI with GPU compute resources located nearby at the edge of Spectrum’s fiber-powered broadband network. The proximity of Spectrum’s edge compute infrastructure (ECI) to studios, coupled with a 100 Gbps low-latency fiber network, extends the power of the NVIDIA AI Grid to remote workstations.
Creating a movie means rendering hundreds of thousands of images or “frames,” which are stitched together to create the whole story. Developing each frame requires a huge amount of processing power. Coupled with the fact that centralized cloud environments can introduce latency that affects time-sensitive graphics processing and AI workloads, this can be a challenging technical problem to solve.
Spectrum said that the deployment of NVIDIA RTX PRO 6000 Blackwell GPUs at the edge of Spectrum’s network means that customers gain faster, more reliable access to NVIDIA AI infrastructure, empowering CGI artists to more efficiently create visual stories on a massive scale.
“Spectrum is supporting the next wave of enterprise workloads by providing connectivity and infrastructure for real-time applications,” said Rich DiGeronimo, president, Product and Technology, Spectrum. “Our advantage is our footprint, with more than 1 million miles of infrastructure delivering gig or greater speeds to tens of millions of residential, business and enterprise customers. We have the scale to deliver the speed, low latency and reliability that higher performance GPU and AI applications require. Our work with NVIDIA shows how connectivity companies can bring real-time graphics rendering performance closer to where it’s needed – not just in the entertainment industry, but across all industries.”
More specifically, Spectrum said that the deployment leveraged NVIDIA’s AI Grid reference design, which provides operators with a unified hardware and software platform to build, deploy and manage GPUs and AI across distributed sites.
The collaboration leverages Spectrum’s fiber broadband network and ECI, with the capacity to scale to hundreds of megawatts of power from more than 1,000 edge data centers and hubs located within 10 milliseconds, and in some cases less than five milliseconds, of 500 million devices in homes and businesses connected to Spectrum’s network.
This is important because new GPU and AI-native apps must operate with predictable latency, support high concurrency and deliver the best cost per token at scale. This initial deployment showcases an extremely low-latency application with a uniquely decentralized edge GPU solution.
“The shift to real-time, AI-native applications is driving the demand for distributed infrastructure that can deliver predictable low latency at scale,” said Chris Penrose, Global VP - Business Development - Telco at NVIDIA. “Spectrum’s fiber network and Edge Compute Infrastructure extends the power of the NVIDIA AI Grid to deliver performance where it is needed most—right where movies are made.”
George Winslow is the senior content producer for TV Tech. He has written about the television, media and technology industries for nearly 30 years for such publications as Broadcasting & Cable, Multichannel News and TV Tech. Over the years, he has edited a number of magazines, including Multichannel News International and World Screen, and moderated panels at such major industry events as NAB and MIP TV. He has published two books and dozens of encyclopedia articles on such subjects as the media, New York City history and economics.

