Figure Eight Launches Machine Learning Assisted Video Object Tracking Solution to Accelerate the Creation of Training Data
Machine Learning Enhanced Platform Creates Training Data up to 100 Times Faster Than Human-Only Solutions
Figure Eight, the essential Human-in-the-Loop Machine Learning platform, launched its Machine Learning assisted Video Object Tracking solution to accelerate the creation of training data for customers in key industries such as automotive and transportation, consumer goods and retail, media and entertainment, and security and surveillance.
Figure Eight’s Machine Learning assisted Video Object Tracking solution allows customers to annotate an object within a video frame and then have machine learning predictions persist that annotation across subsequent frames of the video. Human annotators review the machine predictions and adjust them where necessary, delivering video annotations that are highly accurate yet up to a hundred times faster than human-only solutions in which every object in every frame must be hand-annotated. This machine learning assistance helps human annotation teams increase their efficiency and effectiveness. Previously, creating video training data was prohibitively expensive and time-consuming for customers whose AI applications need to understand objects moving through time and space. With the increasingly rapid adoption of AI, time to market is of the utmost importance for companies looking to stay competitive.
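The annotate-then-propagate workflow described above can be sketched in a few lines. This is a minimal illustration using simple linear interpolation between human-annotated keyframes as the "prediction" step; Figure Eight's actual deep-learning ensemble model is not public, and all names here are hypothetical.

```python
# Hypothetical sketch of ML-assisted video object tracking: a human annotates
# an object's bounding box in a few keyframes, a predictor fills in the frames
# between them, and a reviewer only corrects predictions that drift, instead
# of hand-annotating every frame. Linear interpolation stands in for the model.

from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left x
    y: float  # top-left y
    w: float  # width
    h: float  # height

def interpolate(a: Box, b: Box, t: float) -> Box:
    """Linearly interpolate between two annotated boxes (0 <= t <= 1)."""
    lerp = lambda p, q: p + (q - p) * t
    return Box(lerp(a.x, b.x), lerp(a.y, b.y), lerp(a.w, b.w), lerp(a.h, b.h))

def propagate(keyframes: dict[int, Box], n_frames: int) -> list[Box]:
    """Predict a box for every frame from sparse human annotations."""
    annotated = sorted(keyframes)
    out = []
    for f in range(n_frames):
        if f <= annotated[0]:          # before the first keyframe: clamp
            out.append(keyframes[annotated[0]])
        elif f >= annotated[-1]:       # after the last keyframe: clamp
            out.append(keyframes[annotated[-1]])
        else:                          # between keyframes: interpolate
            lo = max(k for k in annotated if k <= f)
            hi = min(k for k in annotated if k >= f)
            t = (f - lo) / (hi - lo)
            out.append(interpolate(keyframes[lo], keyframes[hi], t))
    return out

# A human annotates only 2 of 11 frames; the predictor fills the other 9.
annotations = {0: Box(10, 10, 40, 40), 10: Box(60, 30, 40, 40)}
tracked = propagate(annotations, n_frames=11)
print(tracked[5])  # Box(x=35.0, y=20.0, w=40.0, h=40.0), halfway between keyframes
```

In a production system the interpolation step would be replaced by a learned tracker, but the human-in-the-loop structure is the same: sparse human judgments anchor the sequence, and the machine fills in the rest for review.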
“Training data is the bottleneck to making AI work at scale in the real world today. Unfortunately, human-only annotation solutions cannot meet the market demands of training data volume and quality,” said Robert Munro, CTO of Figure Eight. “The only viable solution to creating high-quality training data at scale is to combine the best of machine learning and human intelligence. We’ve spent the last year integrating a deep learning ensemble model into the Figure Eight platform so we can apply billions of compute cycles to the billions of human judgments previously generated for computer vision and natural language processing use cases. The result today is that our customers can now create training data up to a hundred times faster than previously possible. By applying machine learning to the quality control process, we also annotate with more accuracy than purely manual processes, giving our customers the best of both worlds: scale and accuracy.”
The volume of data being created worldwide through the ubiquity of low-cost mobile devices, drones, satellites, and sensors is growing at a rapid pace. IDC estimates 30 zettabytes of data will be created globally this year, of which more than 85% is unstructured data such as raw text and video. “By 2020, 99% of enterprise-captured video/image content will be analyzed by machines rather than humans,” Gartner estimates. To achieve this, enterprises will need to create large-scale video training data sets so their machine learning models can identify and track objects in the real world.
“With Figure Eight’s new video tracking capabilities, we see huge potential for accelerating the creation of high-quality training data, which is a critical ingredient to our high-definition map development,” said Sanjay Sood, VP Highly Automated Driving, HERE.