CHEQ is pushing brand safety beyond keyword blacklists

The cybersecurity company's AI can now understand 200 highly detailed news categories.

Most brand safety tools rely on keyword blacklists, but CHEQ’s newly enhanced AI can now identify the context around specific content online, including 200 granular news categories across 14 languages.

The AI solution can decide in real time whether to serve or block an advertisement, which helps brands and agencies ensure they aren't incorrectly barring ads around legitimate content.

This investment to further enhance its AI technology comes on the heels of a report released by CHEQ that revealed 73 percent of safe stories on LGBT websites are incorrectly flagged as brand unsafe. The study also stated that 75 percent of safe history content is being blocked, with words like "shooter" — which could refer to a basketball game — being marked as brand unsafe.

CHEQ Founder and CEO Guy Tytunovich believes the keyword blacklist approach is outdated, adding that "for brands turning to AI, we have seen at least 20 percent more reach."

One example Tytunovich points to is how a fast-food restaurant chain may want to avoid advertising next to content about obesity.

"Engineers have trained the AI to define obesity as a category, but also trained it to understand sub-terms in context, such as heart disease and diabetes," he told Campaign US. "To uncover if a piece of content is about obesity or not, the technology does not just look at one specific keyword, but rather analyzes how many category sub terms are present in the article, and the relation between them."

When it comes to the news data category, Tytunovich said CHEQ's AI has been trained on millions of articles to recognize real-life conflict, torture and warzones, so that brands avoid serving ads against such articles. "It has been trained to know the difference between real-life war, and say, a TV show, such as Game of Thrones with war-like themes," he said.

CHEQ has been developing its AI brand safety solution for five years, and in the last year, the company says it has prevented at least $5.8 million in wasted ad spend for its clients.

"For instance, across 2.4 billion ad requests for Dentsu’s CCI in Japan, 24 percent of impressions were blocked in real-time on the grounds of brand safety, preventing household name brands from appearing next to potentially disastrous articles, including executions of women for affairs, incest, and animal cruelty," said Tytunovich.

Another agency tapping into CHEQ’s brand safety AI is independent media shop Noble People.

Scott Konopasek, media director at Noble People, told Campaign US that the "use of AI and enhancement of categories will be a significant step forward with many clients I work with."

"AI is only as good as the information we feed into it, and CHEQ’s methodology lends itself to gathering significant amounts of data and making smart decisions," added Konopasek. "I love the idea of AI and Machine learning (ML) playing a larger role in marketing efforts, because I need to sleep and eat and do other things. AI and ML are constantly learning and innovating. The quality of media and the quality of our control are only going to continue to improve as the technologies evolve."
