Valossa AI Delivers World-Leading Video Intelligence to Content Moderation and Compliance Teams
Valossa, a leading video recognition and content intelligence company, has announced a new video analysis solution that recognizes and summarizes inappropriate content in video files and streams for content compliance and moderation.
Built on Valossa's world-leading AI video recognition technology, the new solution recognizes a broad variety of concepts such as nudity, sexuality, violence, substance use, and bad language, along with other compliance-related visual objects and sounds. Valossa Reports provides interactive compliance summaries for instant inspection of results. The solution can also screen video streams in real time and flag inappropriate content as it appears.
Valossa’s tool helps broadcasting and digital distribution teams screen large volumes of content more efficiently.
Valossa CEO Mika Rautiainen says that making compliance decisions requires human consideration of every aspect governed by laws and regulations. “Our tool harnesses modern video recognition technology to support human judgment and effective oversight. Our AI reviews content faster than actual playback speed and creates a time-based summary of inappropriate scenes for rapid review. Teams become more productive because our solution cuts review completion times compared with linear viewing,” said Rautiainen. “We have been working with progressive media companies to develop the tool to meet their business needs.”
Rautiainen continues: “Our tool’s uniqueness stems from our AI’s ability to interpret sounds, speech, and visuals to assess whether content is appropriate for airing or streaming. Valossa AI also delivers other scene-level metadata alongside the compliance detections. If you need to monitor appearances of celebrities or brands, Valossa AI is truly a one-stop solution for all of your video analysis needs, with world-leading technology for content profiling, object labelling, detection of people, logos, and objects, and now inappropriate content review.”