Image Analyzer's Automated Visual Content Moderation Technology Wins Computing Magazine AI & Machine Learning Award

On June 10th, 2021, visual content moderation software company Image Analyzer announced that it has won the Best Emerging Technology in AI category of the Computing AI & Machine Learning Awards.

Judged by a 12-member panel of CIOs and IT professionals from the public and private sectors and academia, and Computing Magazine journalists, the awards identify the leading companies, projects, and professionals in the AI sector.

Cris Pikes, CEO and founder of Image Analyzer commented, “We are delighted to have won the Computing Award for Best Emerging Technology in AI. Online organizations are tackling a huge number of images and videos uploaded by more and more users. Human moderators can no longer cope with the sheer volume and the impending legislation is only adding to the pressure. Our technology was specifically developed to help digital platform providers to make their online communities and working environments safer. Automated content moderation allows organizations to scale their efforts and demonstrate to the relevant authorities that they have put systems and processes in place to protect their users and employees from illegal and harmful content posted to their sites.”


To maintain a positive online experience for all users, reduce their legal risk exposure, and protect their brand reputation and revenue, organizations are under increasing pressure to moderate the visual content that users upload to their digital platforms. Impending changes to UK and EU online safety laws will legally oblige platform operators to swiftly remove illegal or harmful content posted to their websites, or risk large fines. Companies that fail to comply with the new laws could ultimately have access to their services suspended in the UK or European countries in which their users reside.
Image Analyzer’s AI-powered visual risk moderation technology helps organizations to automatically remove more than 90% of manifestly illegal and harmful images, videos, and live-streamed footage, so that toxic content never reaches their websites or moderation queues.

Image Analyzer was selected as the category winner of Computing’s AI & Machine Learning Awards from a shortlist of six companies. Explaining their selection, the judges described the Image Analyzer Visual Intelligence System (IAVIS) as “a great use of AI to resolve a problem that affects all sectors and all organizations.”

Content moderation has traditionally been undertaken by human moderators, who manually review questionable content uploaded to their platforms. Manual review of toxic content risks creating an unsafe working environment, where harrowing images and videos harm human moderators’ mental health and huge backlogs of material cause employee stress and burnout. IAVIS helps organizations to combat these workplace and online harms by automatically categorizing and filtering out high-risk-scoring images, videos and live-streamed footage, leaving only the more nuanced content for human review. By applying advanced AI computer vision technology trained to identify specific visual threats, the solution gives each piece of content a risk probability score, speeds the review of posts, and reduces the moderation queue by 90% or more. The technology is designed to constantly improve the accuracy of core visual threat categories, with simple displays that allow moderators to easily interpret threat category labels and probability scores. It can scale to moderate increasing volumes of visual content without impacting performance or user experience.
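The triage workflow described above (score each upload per threat category, automatically drop manifestly harmful items, and route only borderline cases to human moderators) can be illustrated with a minimal, generic sketch. The category names, thresholds, and the `triage` function below are hypothetical illustrations of this kind of pipeline, not Image Analyzer's actual API or scoring model.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    AUTO_REMOVE = "auto_remove"    # manifestly harmful, never reaches the site or queue
    HUMAN_REVIEW = "human_review"  # nuanced content routed to moderators
    ALLOW = "allow"                # low risk, published without review


@dataclass
class ModerationResult:
    decision: Decision
    top_category: str
    top_score: float


def triage(category_scores: dict[str, float],
           remove_threshold: float = 0.90,
           review_threshold: float = 0.40) -> ModerationResult:
    """Route one piece of visual content based on per-category risk scores in [0.0, 1.0].

    Thresholds are illustrative: anything scoring at or above `remove_threshold`
    in any category is removed automatically; anything between the two thresholds
    goes to human review; everything else is allowed.
    """
    top_category, top_score = max(category_scores.items(), key=lambda kv: kv[1])
    if top_score >= remove_threshold:
        decision = Decision.AUTO_REMOVE
    elif top_score >= review_threshold:
        decision = Decision.HUMAN_REVIEW
    else:
        decision = Decision.ALLOW
    return ModerationResult(decision, top_category, top_score)


# Example: scores as they might come back from a visual-risk classifier
scores = {"weapons": 0.03, "nudity": 0.95, "violence": 0.12}
print(triage(scores))  # -> AUTO_REMOVE, driven by the "nudity" category
```

In practice the thresholds would be tuned per threat category so that only content the classifier is highly confident about is removed automatically, which is how such a pipeline can shrink the human review queue while keeping ambiguous material in front of moderators.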

Image Analyzer holds US and European patents for its AI-powered content moderation technology, IAVIS, which identifies visual risks in milliseconds, with near zero false positives.

Organizations use IAVIS to protect online community members from being harmed by visual content that contravenes existing and impending laws. It minimizes corporate legal risk exposure, aids digital forensics investigations, and helps safeguard children and educational communities. In HR applications, IAVIS reduces vicarious liability exposure by blocking content that is not safe for work, identifying high-risk users, and providing visibility of misuse.

