Cortical.io Semantic Folding Approach demonstrates a 2,800x acceleration and 4,300x increase in energy efficiency over BERT

Cortical.io

Substantial cost reduction for NLU implementations enables ubiquitous language intelligence

Cortical.io announced its breakthrough prototype for classifying high volumes of unstructured text. Classifying documents or messages is one of the most fundamental Natural Language Understanding (NLU) functions in business artificial intelligence (AI). The benchmark was carried out on two similar system setups using the same off-the-shelf, dual AMD Epyc server hardware. The “BERT” system, based on the transformer machine learning technique for natural language processing, was augmented with an NVIDIA GPU. The “Semantic Folding” system used a comparably priced set of Xilinx Alveo FPGA accelerator cards.

The goal of the benchmark was to compare the throughput performance of the classification-inference engines of both systems. To measure performance, Cortical.io classified sixteen data sets, including well-known corpora such as Enron (Kaminski, Farmer, and Lokay), DBPedia, IMDb, PubMed, Reuters (R8, R52), Ohsumed, Web of Science, and BBC News.
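A throughput comparison of this kind boils down to timing how many documents per second each inference engine can label. The sketch below shows the idea, with a hypothetical stand-in classifier (`label`) in place of either real engine; the helper name and the dummy corpus are assumptions, not part of Cortical.io's benchmark harness.

```python
import time

def measure_throughput(classify, documents):
    """Time the classification of a batch and return documents per second."""
    start = time.perf_counter()
    for doc in documents:
        classify(doc)
    elapsed = time.perf_counter() - start
    return len(documents) / elapsed

# Dummy classifier standing in for either inference engine.
label = lambda text: "business" if "business" in text else "other"
docs = ["sample business text"] * 10_000

rate = measure_throughput(label, docs)
print(f"{rate:,.0f} docs/sec")
```

Running the same harness against both systems on identical data sets yields the speed-up ratio the headline figures refer to.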

Staggering results were achieved by the simultaneous application of three distinct innovative steps:
1. Improving the machine learning approach by applying Semantic Folding.
2. Using tooling that enabled the concurrent implementation of software, hardware and networking aspects of the Semantic Folding approach.
3. Using the parallelism of large gate arrays, practically implemented with FPGA technology in the form of COTS data-center hardware from Xilinx.
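To illustrate the first and third steps together: Semantic Folding represents text as sparse binary "semantic fingerprints", and similarity reduces to counting shared active bits — a bitwise AND plus a popcount, which maps naturally onto FPGA gates. The sketch below is a minimal toy version under stated assumptions: the fingerprint size, the bit positions, and the class labels are all hypothetical, not Cortical.io's actual encoding.

```python
import numpy as np

FP_SIZE = 16_384  # fingerprint length; illustrative, not Cortical.io's actual size

def make_fingerprint(active_bits):
    """Sparse binary vector with the given set of active positions."""
    fp = np.zeros(FP_SIZE, dtype=bool)
    fp[list(active_bits)] = True
    return fp

def overlap(a, b):
    """Similarity = number of shared active bits (one AND + popcount)."""
    return int(np.count_nonzero(a & b))

def classify(doc_fp, class_fps):
    """Assign the class whose fingerprint overlaps the document's most."""
    return max(class_fps, key=lambda c: overlap(doc_fp, class_fps[c]))

# Toy class fingerprints with made-up bit positions.
classes = {
    "finance": make_fingerprint({1, 5, 9, 42, 77}),
    "medicine": make_fingerprint({2, 6, 10, 43, 78}),
}
doc = make_fingerprint({1, 5, 42, 100})
print(classify(doc, classes))  # shares 3 bits with "finance", 0 with "medicine"
```

Because every comparison is an independent AND/popcount over fixed-width bit vectors, many documents and classes can be scored in parallel — which is what the gate-array step exploits.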

Benchmark results show that with Semantic Folding, operating costs can be reduced from several dollars per classifier to a fraction of a cent, making large-scale classification use cases commercially viable for the first time. Example real-world workloads include hate-speech detection for nearly three billion Facebook users or content filtering of the Twitter firehose for hundreds of millions of users.

“Efficiency is the new precision in Artificial Intelligence,” said Francisco Webber, CEO at Cortical.io. “While large industries are determined to use less energy, the AI and ML industry is headed in the opposite direction: growing its carbon footprint exponentially. The future of green computing hangs by the thread of high efficiency AI capabilities.”
