FCC Debates Evolution of Live Captioning for News

Forum urges better community outreach, staff training and more thorough oversight.

WASHINGTON—For broadcasters, closed captioning has moved beyond being an FCC requirement checkbox. As the number of viewing devices and screens multiplies, closed captioning has evolved into a service that both hearing and hearing-impaired viewers take for granted, and one that does more for them than just convey the spoken word.

Citing a recent study that found 80% of viewers who use captioning are not hearing impaired, Suzy Rosen Singleton, chief of the CGB Disability Rights Office for the FCC, noted that “captioning really has become ubiquitous and is a huge benefit for the general population.”

Singleton made her comments during an FCC forum on best practices for closed captioning of news produced with what the commission terms the “electronic newsroom technique” (ENT), a methodology for converting prompter text to captions. The forum included representatives from the FCC, broadcasters and the deaf and hearing-impaired community, who debated the latest developments and challenges in light of the commission’s release of its best practices document in 2015. That document outlined four main tenets for improving ENT:

  • Captions must be accurate, reflecting the dialogue and other sounds in the program;
  • Captions must be synced as closely as possible to the dialogue being spoken;
  • Captioning must run the full length of the program, ending only when commercials or the next program begins; and
  • Placement of captions on the screen must be appropriate and not block other visual information.

Compared to scripted programming, where captioning can be done in advance, the issue of captioning live news in real time presents its own set of challenges, which were first outlined by the FCC more than 20 years ago. In 1998, the FCC banned the four major broadcast networks and their affiliates in the top 25 markets from using ENT for captioning and that ban remains in place.


For stations outside the top 25, the FCC allows the use of ENT for captioning, but in 2014 additional requirements were added, including a definition of the types of programming that could use ENT—in-studio produced news, sports, weather and entertainment programming. For breaking news where live real-time captioning may not be immediately available, stations must provide news crawls and other textual information to viewers. Stations are also required to provide staff training and appoint an ENT coordinator in charge of compliance.

CONSTANTLY UPDATING

At the forum, FCC officials noted that the ENT best practices have needed further improvement since their release in 2015, including more accurate scripting for weather, a better understanding of local obligations toward emergency information, training that provides a consistent approach across all stations, and public information efforts.

R. Lantz Croft, news operations manager, WBRC-TV in Birmingham, Ala., noted the effect that social media has had on improving ENT.

“Because of digital properties and social media, we are constantly updating our material and content throughout the day, so the material that is moving for closed captioning more closely matches the conversation that is happening over traditional audio,” he said.

Using ENT for closed captioning has also resulted in fewer ad-libbed, off-the-cuff remarks among news, weather and sports anchors, according to Brett Jenkins, EVP/CTO for Nexstar Media Group.

“Each segment has a script attached,” he said. “Before ENT was in practice, there used to be more ad libbing. The reality is that those sections are not as ad libbed as they used to be because we now require the meteorologists and sportscasters to do more scripting of their segment in advance.”

When it comes to doing real-time captioning for live breaking events, Jenkins noted that some things haven’t changed, however.

“We simply pick up the phone and go to live captioning; there is no other way to do it,” he said.

Digital technology that allows reporters to use mobile devices and tablets to connect to prompting systems has had a significant impact on ENT, according to Kelly Williams, senior director of Engineering and Technology Policy for NAB.

“I’ve seen a real significant change in the workflow of how news has happened and how the workflow of news is produced,” Williams said, adding that he thinks that behavior is still an issue.

“When monitoring compliance, it’s a shared responsibility through the entire newsroom staff,” he said. “The line producer needs to make sure that when they go upstairs to put their show on television, that complete scripting is just as important as the video.”

Jenkins said Nexstar runs periodic audits to monitor compliance.

“We do all the things before and during production to try to make sure we get it right,” he said. “We go through training, and we have producers and even news directors checking scripts and approving them and making sure stories don’t run if they don’t have a script.”

AUTOMATED CAPTIONING: READY FOR PRIMETIME?

Although more vendors are now offering AI-enabled voice recognition software for live captioning (also known as automated speech recognition, or ASR), the consensus at the forum was that the technology is still not ready for primetime.

“At NAB, we saw vendors that are getting really close to having software that can do voice recognition and turn that into a script,” Jenkins said. “But the mistakes are still there and the software is not at a point now that it is going to be able to replace how we are doing it.”

Advocates for the hearing impaired community noted the importance of public outreach as well as a more deliberate approach when using automated technology for captioning. Claude Stout, executive director, Telecommunications for the Deaf and Hard of Hearing Inc., noted a “great hue and cry” over some stations’ use of ASR for live news captioning recently.

“We did have some deaf people who spoke up loudly on social media, who were furious that there were some local news, weather and sports programs that were starting to use automated speech recognition-generated captions,” he said, adding that “Deaf people recognize when they see bad captions.”

Christian Vogler, director, Technology Access Program, Gallaudet University, talked about a new project the D.C.-based university—which primarily serves the deaf community—is conducting to evaluate the effectiveness of closed captioning. The five-year research project, funded by the National Institute on Disability, Independent Living, and Rehabilitation Research, started in October 2018 and focuses on captioning quality metrics.

“It is my goal to investigate in great detail how people with hearing loss perceive and understand closed captioning, and how they perceive and understand the programming, and how that relates to coming up with some potential objective closed captioning quality metrics,” he said. “Basically, the project's goal is to determine how to measure the four FCC captioning quality categories—synchronicity, accuracy, completeness and placement.”

Along with close monitoring by station staff, the general consensus is that stations could do a better job communicating with their viewers about captioning.

“We want to see local TV stations do more outreach and more education with the community,” Stout said. “Not just the deaf and hard of hearing community, but the hearing community, because many of them are using captions in bars, restaurants, public places, and there are many people for whom English is a second language.”