Livefyre Adds New Moderation Tools, Detects Inappropriate Images



Livefyre is fresh off acquiring social curation platform Storify, and today has announced new moderation tools available to all customers.

Automatic image filtering detects inappropriate images in real time, helping content creators easily remove photos containing nudity, competitors' products and more.

Another feature, which Livefyre calls "human moderation services," lets you customize automatic moderation depending on your specific needs.

ModQ is a new dashboard interface that displays content needing moderation in real time, ranked from least to most severe.

According to the official press release, these new moderation tools "complement" existing features such as Magic Moderation, which can detect posts containing hate speech or personally identifiable information, for example.

From Founder and CEO Jordan Kretchmer:

The engagement benefits of real-time social applications are clear, but companies need to feel secure that offensive user generated content won’t negatively impact their brand. We developed the most advanced suite of moderation capabilities on the market so that our customers can confidently integrate real-time user generated content into their websites, mobile apps, ads and TV broadcasts with peace of mind.

Back in April, Livefyre announced it was now serving one billion monthly page views, a 468 percent increase from a year prior.


Mike Stenger

