Company Recognized for Advanced Predictive Analytics Platform that Combines Deep Learning and AI Technologies to Verify Digital IDs, Prevent New Account Fraud, and Enhance KYC Compliance
Socure, a leading provider of predictive analytics for digital identity verification, announced it was named an IDC Innovator for identity proofing by research firm International Data Corporation (IDC). The Socure ID+ platform was cited for helping financial institutions reduce fraud, increase auto acceptance in the onboarding process, and cut costs associated with manual review.
Socure was profiled in the report, IDC Innovators: Identity Proofing Solutions to Prevent New Account Fraud and Enhance KYC Compliance, 2018.
According to the IDC analysts, verifying the legitimacy of new identities is a key step in the customer onboarding process. However, identity verification often involves considerable manual effort, which can produce high rates of false positives and fail to distinguish synthetic identities, exposing firms to greater compliance and fraud risk.
Socure was named an IDC Innovator for its advanced predictive analytics platform, which uses deep learning technologies to produce highly accurate assessments and validations of digital identities in online channels. The Socure ID+ platform was cited for its unique use of AI to simultaneously run hundreds of models in a "champion challenger" process, identifying those that are most efficient and accurate for each customer's requirements.
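In general terms, a champion-challenger process scores multiple candidate models against the same validation data and promotes the best performer. The sketch below is a hypothetical illustration of that general idea only, not Socure's actual implementation; the toy models, data, and the accuracy metric are all assumptions for demonstration.

```python
# Hypothetical sketch of a champion-challenger selection loop.
# Candidate models score the same validation set; the most accurate
# one is promoted to champion. Not Socure's actual implementation.

def accuracy(model, records):
    """Fraction of (features, label) records the model classifies correctly."""
    hits = sum(1 for features, label in records if model(features) == label)
    return hits / len(records)

def select_champion(models, records):
    """Return (name, accuracy) of the best-performing candidate model."""
    scored = {name: accuracy(fn, records) for name, fn in models.items()}
    return max(scored.items(), key=lambda item: item[1])

# Toy validation set: (risk_score, is_fraud) pairs -- invented data.
validation = [(0.9, 1), (0.2, 0), (0.7, 1), (0.1, 0), (0.85, 1)]

# Two trivial threshold "models" standing in for real classifiers.
candidates = {
    "threshold_0.5": lambda score: int(score > 0.5),
    "threshold_0.8": lambda score: int(score > 0.8),
}

champion, score = select_champion(candidates, validation)
```

In production systems the same pattern typically runs continuously, so a challenger that outperforms the current champion on fresh data can replace it.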
“Being named an IDC Innovator provides further validation that Socure has developed the most advanced AI platform for online identity proofing in the financial services industry,” said Tom Thimot, CEO of Socure. “In addition to helping banks increase their auto acceptance rates by up to 60%, the explainability of our models enables them to meet regulatory requirements.”