Nextech3D.ai Files Multiple Generative AI Patents Covering Breakthrough 3D-Model Creation for the $5.5 Trillion Ecommerce Industry


The Company is filing multiple pivotal patents for its game-changing Generative AI

Nextech3D.AI (formerly “Nextech AR Solutions Corp” or the “Company”), a Generative AI-powered 3D model supplier for Amazon, P&G, Kohl’s and other major e-commerce retailers, is pleased to announce that the Company has filed its second in a series of patents for converting 2D photos to 3D models. These patents position the Company as a leader in the rapidly growing 2D-photo-to-3D-model transformation underway in the $5.5 trillion global ecommerce industry, a transformation estimated to be worth $100 billion. Nextech3D.ai is using its newly developed AI to power its diversified 3D/AR businesses, including Arway.ai (OTC: ARWYF / CSE: ARWY), Toggle3D.ai and Nextech3D.ai.

Patent filing title: “Fixed-point diffusion for robust 2D to 3D conversion and other applications.”

A major contributor to Nextech3D.ai’s 3D modeling success and its ability to meet market demand is its Generative Artificial Intelligence (AI). This patent builds on the Company’s previously filed patents: earlier this month, a patent was filed titled “Generative AI for 3D Model Creation from 2D Photos using Stable Diffusion with Deformable Template Conditioning”, and late last year the Company filed a patent for creating complex 3D models by parts. The game-changing AI technology underpinning these patents places the Company in a leadership position in 3D modeling for ecommerce and positions it for significant revenue acceleration and cash flow in 2023 and beyond.



Building on the Company’s previous patents, Nextech3D.ai will use fixed-point diffusion to learn to construct 3D models from 2D reference photos, starting with simpler objects and individual parts before expanding to more complex, multi-part objects.

Nima Sarshar, Chief Technology Officer of Nextech3D.ai commented, “With the development of our fixed-point diffusion models, we are excited to offer a new reliable and innovative way to generate 3D models at scale from 2D reference photos. Our new patent application highlights our commitment to driving innovation in the field of generative AI, and we look forward to continued success and advancement.”

The patented approach prescribes a solution for creating 3D models from 2D reference photos, either as a whole or part by part, by evolving differentiable, deformable templates into 3D parts, conditioned on one or more reference photos of the part. As previously announced, over the last several years Nextech3D.ai has been building tens of thousands of high-quality, fully textured, photo-realistic 3D assets, with hundreds of thousands of individual parts. These parts are harvested into Nextech3D.ai’s “part library”, synthetically rendered from random views, and used to train new diffusion models that are able to reconstruct 3D mesh parts from reference photos. The Company’s first clean dataset, with 70,000+ 3D objects and more than 2.2M synthetically rendered reference photos, is now ready for training. This is still a tiny portion of all the parts and assets in its model library, and yet it is already larger than ShapeNet, the largest publicly available 3D dataset, with its 51K models of varying quality.
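To make the part-library pipeline described above concrete, the short Python sketch below shows how 3D parts could, in principle, be rendered from random camera views to produce (2D reference view, 3D part) training pairs. The loader and renderer here are hypothetical stand-ins for illustration only; they are not Nextech3D.ai’s actual tooling or data.

```python
# Illustrative sketch only: dummy point clouds stand in for real 3D part meshes,
# and an orthographic projection stands in for a photo-realistic renderer.
import numpy as np

def load_part_mesh(part_id: int) -> np.ndarray:
    """Hypothetical loader: returns a dummy point cloud standing in for a 3D part mesh."""
    rng = np.random.default_rng(part_id)
    return rng.normal(size=(1024, 3))  # 1024 surface points (x, y, z)

def render_view(points: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Hypothetical renderer: rotate the part to a random view and project it to 2D."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    rot_z = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    rot_x = np.array([[1.0, 0.0, 0.0], [0.0, ce, -se], [0.0, se, ce]])
    return (points @ (rot_x @ rot_z).T)[:, :2]  # drop depth -> 2D "reference photo"

def build_training_pairs(num_parts: int, views_per_part: int):
    """Yield (2D reference view, 3D part) pairs for training a 2D-to-3D diffusion model."""
    rng = np.random.default_rng(0)
    for part_id in range(num_parts):
        part = load_part_mesh(part_id)
        for _ in range(views_per_part):
            az = rng.uniform(0.0, 2.0 * np.pi)
            el = rng.uniform(-np.pi / 4, np.pi / 4)
            yield render_view(part, az, el), part

pairs = list(build_training_pairs(num_parts=3, views_per_part=2))
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 6 (1024, 2) (1024, 3)
```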

Technical Explanation

Diffusion deep-learning models have been successful in creating realistic images by adding noise to a training example and using a neural network to estimate and remove the noise at each step. The general idea is as follows: starting from a training example, say an image, noise is successively added to the example. A neural network, usually a U-Net, learns to estimate and remove the noise from the noisy sample at each step. To create novel images, one starts with a sample from a pure noise distribution, and the noise is successively estimated and removed using the same U-Net until one converges on a (hopefully) realistic image. “Conditioning” data, such as embeddings of textual prompts, is provided as side information during the training process. At sampling time, conditioning data provided by the user steers the backward diffusion process towards an image that is relevant to the user’s input.
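For readers who want to see the mechanics, the toy sketch below walks a simple numeric vector through the forward (noising) and backward (denoising) processes described above. The trivial denoiser is a stand-in for a trained, conditioned U-Net; this illustrates the general diffusion idea only and is not the Company’s model.

```python
# Toy DDPM-style diffusion on a 3-element vector. A perfect "denoiser" that knows the
# conditioning target stands in for a trained U-Net, so the reverse process converges.
import numpy as np

T = 50                                        # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)            # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noise(x0, t, rng):
    """Forward process: add t steps' worth of Gaussian noise to a clean sample x0."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps                            # a real U-Net would be trained to predict eps

def toy_denoiser(xt, t, condition):
    """Stand-in for a conditioned U-Net: estimates the noise in xt, assuming the clean
    signal equals the conditioning vector."""
    return (xt - np.sqrt(alpha_bars[t]) * condition) / np.sqrt(1.0 - alpha_bars[t])

def sample(condition, rng):
    """Backward process: start from pure noise and successively remove estimated noise."""
    x = rng.normal(size=condition.shape)
    for t in reversed(range(T)):
        eps_hat = toy_denoiser(x, t, condition)
        x0_hat = (x - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
        if t > 0:
            x, _ = forward_noise(x0_hat, t - 1, rng)  # re-noise to step t-1
        else:
            x = x0_hat
    return x

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])           # plays the role of the conditioning data
print(sample(target, rng))                    # recovers the conditioned target
```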

Each time a diffusion model is sampled to generate an image, by design, it will generate an independent image. This allows for generating a virtually infinite number of images. However, there is no ground truth for the validity of the generated image; the quality of the resulting image and its relevance to the prompt are rather subjective.

To use diffusion models to turn 2D reference photos into 3D models, one can treat the 2D reference images as conditioning prompts and hope to recover the 3D model the 2D photos correspond to. The issue is, among other things, that the backward diffusion process will end up generating a different 3D model each time it converges. Nextech3D.ai has filed a breakthrough provisional patent application that addresses this issue by prescribing a new variation of diffusion models we call fixed-point diffusion, which is capable of reliably generating 3D models from 2D photos where there is only a single ground truth corresponding to the conditioning data (i.e., the 2D reference images).
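As background on the terminology only: a fixed point of a function f is a value x* with f(x*) = x*, and iterating a suitable map converges to the same answer regardless of the starting point, in contrast to drawing a fresh independent sample each time. The generic sketch below illustrates that mathematical concept; it is not the patented fixed-point diffusion method.

```python
# Generic fixed-point iteration: x <- f(x) until convergence. Shown only to illustrate
# the notion of a single, well-defined answer, unlike independent diffusion samples.
import math

def fixed_point(f, x0, tol=1e-9, max_iter=1000):
    """Iterate x <- f(x) until the update stops changing (a contraction mapping converges)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: f(x) = cos(x) has a unique fixed point near 0.739, regardless of the start.
print(fixed_point(math.cos, 0.0))   # ~0.7390851332
print(fixed_point(math.cos, 3.0))   # same value: one ground truth, not a new sample
```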

With a new wave of generative AI systems, the world is entering a period of generational change where entire industries have the potential to be transformed. Due to its advances in AI, the Company believes it is perfectly positioned to be the supplier of choice for the global $5.5 trillion ecommerce industry as it pivots from 2D to 3D models, a shift estimated to be worth $100 billion.


