In A World Where “Big Data is Video”, An AI Language for Machines Emerges

CDVA, the First Global AI Standard, Provides the “Words” of an Emerging Language Between Machines, Enabling Lower-Cost, Lower-Latency and More Energy-Efficient Automation Intelligence

Gyrfalcon Technology Inc. (GTI) sees interest surging around MPEG-7’s latest standards, which usher in a wave of capabilities leveraging AI and, more importantly, solve issues for AI that promise to accelerate deployment, reduce latency, lower cost and ease compatibility concerns. Such standards are timely: machines are becoming the largest and fastest-growing segment of video users, driven by the rapid growth of IoT and AI.

Mobilocity, an analyst firm, published a paper, “Solving the Machine-to-Machine (M2M) Transmission & Search Bottleneck,” highlighting the need for such a standard and the benefits and market impact anticipated as it addresses current issues.

With CDVA, approved as part of the standard in July 2019, a number of significant benefits become immediately clear. First, the industry gains a range of new capabilities and service delivery models that incorporate the edge and unburden the value chain for automated intelligence. Second, it gains the ability to produce the “words” of a universal vocabulary in a “language for machines,” compatible across devices and processors from different providers and ecosystems.


The New Capabilities, Starting with “Shoot & Search”

CDVA enables images and video to be encoded as they are captured, which unburdens network and storage resources for dramatic benefit. It supports machine-only as well as hybrid (machine and human) formats, accommodating various service models and a broad range of applications.

Machine-only encoding needs only basic, inexpensive camera sensors to extract features into feature maps that machines can use without the high resolution humans require, so content can be captured, used and stored in much smaller files. Files could be as much as 1,000 times smaller, reducing the load on networks and storage and thereby reducing latency and energy use.
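To make the size arithmetic concrete, here is a minimal sketch of machine-only encoding, assuming an off-the-shelf CNN backbone (torchvision’s MobileNetV3-Small) as a stand-in for the on-sensor AI processor; the normative CDVA descriptor pipeline in ISO/IEC 15938-15 is more elaborate, and the file name is illustrative:

```python
# Minimal machine-only encoding sketch: store a compact feature vector
# instead of the raw frame. MobileNetV3-Small is an assumed stand-in for
# the on-sensor AI processor, not the normative CDVA pipeline.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.mobilenet_v3_small(weights="DEFAULT")
backbone.classifier = torch.nn.Identity()  # keep the 576-d feature vector, drop the class head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def encode(path: str) -> torch.Tensor:
    """Return a compact descriptor for one captured frame."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0)

vec = encode("frame.jpg")            # hypothetical captured frame
raw_bytes = 1920 * 1080 * 3          # one uncompressed 1080p frame
desc_bytes = vec.numel() * 2         # descriptor stored as float16
print(f"raw {raw_bytes:,} B vs descriptor {desc_bytes:,} B "
      f"(~{raw_bytes // desc_bytes}x smaller)")
```

Measured against compressed video rather than raw frames the ratio shrinks, but a descriptor of a few hundred values per frame remains orders of magnitude smaller than the pixels it replaces.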

Hybrid encoding would let people use the video while it carries the embedded information machines need to share. This hybrid encoding embeds the metadata automatically using AI, making image and video files searchable with greater speed and precision than is currently possible. One way to think of hybrid encoding is as “closed captioning” for machines: only machines detect and use the encoded features, gaining optimized understanding efficiency without any impact on the human consumers of those files.
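One way to prototype that idea is a sidecar file that video players simply ignore; a minimal sketch, assuming a hypothetical JSON layout (the field names are invented for illustration, not MPEG-7 bitstream syntax):

```python
# "Closed captioning for machines" sketch: the human-viewable video is left
# untouched, while per-frame descriptors travel in a sidecar file that video
# players ignore. The layout and field names are hypothetical.
import json

def write_sidecar(video_path: str, frame_descriptors: dict[int, list[float]]) -> str:
    """Store machine-readable features next to the human-readable video."""
    sidecar = {
        "video": video_path,
        "descriptor_type": "cdva-like",  # hypothetical label
        "frames": {str(idx): vec for idx, vec in frame_descriptors.items()},
    }
    out = video_path + ".features.json"
    with open(out, "w") as f:
        json.dump(sidecar, f)
    return out

# A person watches holiday.mp4 as usual; a machine consumer reads only
# holiday.mp4.features.json and never touches the pixels.
write_sidecar("holiday.mp4", {0: [0.12, -0.53], 30: [0.08, -0.61]})
```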

CDVA opens the door to exciting new capabilities, such as accelerated, precise searches of content “on the device” or on “home servers,” where users want to keep personal data for security and privacy. Consumers currently lack an easy way to search the tens of thousands of image and video files stored on their computers, phones and electronics; with CDVA, users will be able to open their camera, gallery or browser, “shoot” an image, and recall the most closely matching files for their “search.” The same capability will allow faster, more precise searches of the libraries of service providers offering entertainment and education content that complies with the new MPEG-7 standards.
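A minimal sketch of how such a “Shoot & Search” could rank matches on the device, assuming every stored photo has already been encoded into a fixed-length descriptor; plain cosine similarity stands in here for CDVA’s normative matching stage:

```python
# On-device "Shoot & Search" sketch: rank stored descriptors against the
# descriptor of a freshly shot image. Cosine similarity is an assumed
# stand-in for the standard's matching procedure.
import numpy as np

def shoot_and_search(query: np.ndarray,
                     library: dict[str, np.ndarray],
                     top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k stored files whose descriptors best match the query."""
    q = query / np.linalg.norm(query)
    scores = {name: float(vec @ q / np.linalg.norm(vec))
              for name, vec in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy library of 10,000 photos with random 576-d descriptors.
rng = np.random.default_rng(0)
library = {f"photo_{i}.jpg": rng.normal(size=576) for i in range(10_000)}

# "Shoot": a new image similar to photo_42 yields a nearby descriptor.
query = library["photo_42.jpg"] + 0.1 * rng.normal(size=576)
print(shoot_and_search(query, library, top_k=3))  # photo_42.jpg ranks first
```

Because the search runs over kilobyte-scale descriptors rather than the media itself, it stays fast even on phones and home servers, and the personal files never leave the device.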


Producing the “Words” for the Emerging Machine “Language”

As a standard, CDVA extracts features from images using a camera sensor equipped with a local AI processor, producing a feature map that identifies the object, activity, location and so on. Because these feature maps are defined by the standard, they can be shared between devices and processors from different manufacturers. VCM (Video Coding for Machines), on the MPEG roadmap, will provide creative ways to aggregate feature maps to deliver “machine understanding” of images and videos, much as using different words in the right sequences allows people to communicate in a language. The ability for developers to share algorithms stemming from a shared global standard like CDVA and VCM addresses what has been a development challenge for new products and services integrating AI.
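The interoperability point can be made concrete with a toy serialization: if every vendor packs feature maps into the same byte layout, one maker’s camera can feed another maker’s processor. The format below is hypothetical, invented only to illustrate the argument; the actual layout is defined by the standard:

```python
# Toy shared feature-map format: Vendor A's camera packs, Vendor B's
# processor unpacks, and both recover the identical array. The byte layout
# here is hypothetical, not the CDVA/VCM syntax.
import struct
import numpy as np

MAGIC = b"FMAP"  # hypothetical format identifier

def pack_feature_map(fmap: np.ndarray) -> bytes:
    """Vendor A's camera: serialize a feature map into the shared layout."""
    fmap = fmap.astype(np.float16)
    header = struct.pack("<4sB", MAGIC, fmap.ndim)
    dims = struct.pack(f"<{fmap.ndim}I", *fmap.shape)
    return header + dims + fmap.tobytes()

def unpack_feature_map(blob: bytes) -> np.ndarray:
    """Vendor B's processor: recover the identical feature map."""
    magic, ndim = struct.unpack_from("<4sB", blob)
    assert magic == MAGIC, "not a shared-format feature map"
    shape = struct.unpack_from(f"<{ndim}I", blob, 5)
    return np.frombuffer(blob[5 + 4 * ndim:], dtype=np.float16).reshape(shape)

fmap = np.random.rand(7, 7, 576)  # a toy per-frame feature map
assert np.array_equal(unpack_feature_map(pack_feature_map(fmap)),
                      fmap.astype(np.float16))
```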

Many markets will benefit from the adoption of VCM, including Smart Home, Smart City, Autonomous Vehicle and Intelligent Transportation. Devices can include robots, drones, autonomous vehicles, traffic cameras, surveillance cameras, and all manner of camera-equipped appliances and equipment. All of the captured video becomes more usable because indexed features are embedded on the frames of the video. Much of the video will be needed only by machines, allowing sensors to be basic and lower cost, to use less energy, and to produce very small files. This will reduce network congestion by sending smaller files and lower the demand for archived video storage.

“This is what we are seeing in working with the new standard, and proving out the effectiveness of CDVA and VCM using AI processors and camera sensors,” said Dr. Menouchehr Rafie, Vice President of Advanced Technologies at Gyrfalcon Technology Inc. “Transporting compressed extracted feature vectors rather than compressed raw visual data will drastically reduce the data amount for video transmission or storage and achieve interoperability between various applications and devices, particularly in the emerging 5G IoT and V2X standards.”

