The Future of AI Will Take a Different, More General Approach


ORBAI aims to develop a human-like AI with fluent conversational speech

The California-based startup ORBAI has developed and patented a design for AGI that learns more like the human brain does: by interacting with the world, encoding and storing memories as narratives, dreaming about them to form connections, building a model of its world, and using that model to predict, plan, and function at a human level in human occupations.

With this technology, ORBAI aims to develop Human AI: fluent conversational speech layered on top of this AGI core, providing AI professional services ranging from customer service to legal, medical, and financial advice, delivered online and inexpensively to the whole world. The core of the Legal AI has already been tested in litigation, with great success.


Brent Oster, President and CEO of ORBAI, has helped Fortune 500 companies (and startups) looking to adopt ‘AI’, but consistently found that deep learning (DL) architectures and tools fell far short of their expectations. He started ORBAI to develop something better for them.

Today, if we browse the Internet for news on AI, we find that AI has just accomplished something humans already do, only far better. Still, it isn’t easy to develop artificial general intelligence (AGI) through human-created algorithms. Do you think AGI may require machines to create their own algorithms? In your view, what is the future of machines that learn to learn?

That is correct. Today, people design deep learning networks by hand, defining the layers and how they connect, and even after a lot of tinkering they can only get each network to do one specific task: CNNs for image recognition, RNNs for speech recognition, or reinforcement learning for simple problem solving such as games or mazes. All of these require a very well-defined, constrained problem, plus labelled data or human input to measure success and train, which limits the effectiveness and breadth of application of each method.
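To make the contrast concrete, here is what "designing a network by hand" looks like, as a minimal sketch in PyTorch (our choice of framework for illustration; the interview names none). Every layer, channel count, and kernel size is a human decision, and the result is tuned for a single narrow task:

```python
# Minimal hand-designed CNN: illustrative sizes for 10-class recognition
# of 28x28 grayscale images, not any network ORBAI uses.
import torch
import torch.nn as nn

hand_designed_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # a human picks the channels
    nn.ReLU(),
    nn.MaxPool2d(2),                             # ...and the pooling scheme
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # ...and computes this size
)

batch = torch.randn(8, 1, 28, 28)   # stand-in images; real training also
logits = hand_designed_cnn(batch)   # needs a labelled dataset and a loss
print(logits.shape)                 # torch.Size([8, 10])
```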

ORBAI has built a toolset called NeuroCAD that uses genetic algorithms to evolve more powerful, general-purpose spiking neural networks, shaping them to fill in the desired functionality, so yes, the tools are designing the AI. One example is our SNN autoencoder, which can learn to take in any type of 2D or 3D spatio-temporal input, encode it to a compressed latent format, and decode it again. The cool part is that you don’t have to format or label your data; it learns the encoding automatically. This combines the functionality of CNNs, RNNs, LSTMs, and GANs into one more powerful, general-purpose analog neural network that can do all of these tasks. By itself this is very useful, as the output can be clustered, the clusters labelled or associated with other modalities of input, or the codes used to train a conventional predictor pipeline.
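NeuroCAD and ORBAI's spiking networks are proprietary, so the following is only a toy illustration of the label-free principle behind any autoencoder, sketched with a conventional dense network in PyTorch rather than an SNN: the model is trained to reproduce its own input, so the compressed latent code is learned with no labels anywhere.

```python
# Toy dense autoencoder: the input is its own training target, so no
# labels are needed. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

LATENT = 32  # assumed size of the compressed latent code

class AutoEncoder(nn.Module):
    def __init__(self, n_inputs: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(),
                                     nn.Linear(128, LATENT))
        self.decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                     nn.Linear(128, n_inputs))

    def forward(self, x):
        code = self.encoder(x)            # compressed representation
        return self.decoder(code), code   # reconstruction + latent code

model = AutoEncoder(n_inputs=784)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(64, 784)               # unlabelled data

for step in range(100):
    recon, code = model(batch)
    loss = nn.functional.mse_loss(recon, batch)  # target is the input itself
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Afterwards, the latent `code` vectors can be clustered, the clusters labelled, or the codes fed to a downstream predictor, exactly the uses described above.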


But this is for designing components. There is a second level to NeuroCAD that allows these components to be assembled and connected into structures, and these composite structures can be evolved to do very general tasks. For example, to build a robot controller we might combine two vision autoencoders for stereo vision, a speech-recognition autoencoder for voice commands, and autoencoders for the sensors and motion controllers. In the middle we put an AI decision-making core that takes in the encoded inputs, stores them in memory, learns how sequences of these inputs evolve in time, and stores models of what responses are required. Again, each autoencoder is evolved for its specific area, how the components connect is evolved, and so is the decision core in the middle.
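As a structural sketch of that composition (all module names, sizes, and the recurrent decision core below are illustrative stand-ins, not NeuroCAD components), the controller might wire several encoders into a shared core like this:

```python
# Hypothetical composite robot controller: several encoder components feed
# one decision core. A GRU stands in for "learning how sequences of inputs
# evolve in time"; all sizes are assumptions.
import torch
import torch.nn as nn

LATENT = 32  # assumed per-component latent size

def make_encoder(n_inputs: int) -> nn.Module:
    """Stand-in for the encoder half of an evolved autoencoder."""
    return nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(),
                         nn.Linear(64, LATENT))

class RobotController(nn.Module):
    def __init__(self):
        super().__init__()
        self.left_eye  = make_encoder(784)   # stereo vision: two encoders
        self.right_eye = make_encoder(784)
        self.speech    = make_encoder(256)   # voice-command encoder
        self.sensors   = make_encoder(32)    # other sensor inputs
        self.core  = nn.GRU(4 * LATENT, 128, batch_first=True)  # decision core
        self.motor = nn.Linear(128, 8)       # assumed 8 motor outputs

    def forward(self, left, right, speech, sensors, hidden=None):
        z = torch.cat([self.left_eye(left), self.right_eye(right),
                       self.speech(speech), self.sensors(sensors)], dim=-1)
        out, hidden = self.core(z.unsqueeze(1), hidden)  # one timestep
        return self.motor(out.squeeze(1)), hidden

ctrl = RobotController()
action, h = ctrl(torch.rand(1, 784), torch.rand(1, 784),
                 torch.rand(1, 256), torch.rand(1, 32))
print(action.shape)  # torch.Size([1, 8])
```

In the approach described above, both the component weights and the wiring between components would be evolved rather than written by hand; the fixed wiring here only shows the shape of the result.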

To get this to work, we have to make some guesses about how to design the artificial decision core, the brain in the middle, and seed the genetic algorithms with a couple of decent designs. That seed brain processes the sensory input, stores it, builds relationships between the memories, and builds narratives from inputs and actions, with progressively more advanced models that make the robot better able to understand what to do given specific instructions and the state of its world. Once we have an initial guess, we start evolving the components, how they connect to each other, and the architecture of the decision-making core.
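The seeded evolutionary loop itself can be sketched in a few lines. Everything here is an illustrative assumption: the genome is reduced to a flat vector of numbers and the fitness function is a placeholder, whereas ORBAI's actual genome encoding and fitness measures are not public.

```python
# Toy genetic algorithm seeded with a couple of hand-made designs.
import random

def fitness(genome: list[float]) -> float:
    """Placeholder: in practice this would build the network the genome
    describes, run it on tasks, and score its behaviour."""
    return -sum((g - 0.5) ** 2 for g in genome)  # toy objective

def mutate(genome: list[float], sigma: float = 0.1) -> list[float]:
    return [g + random.gauss(0, sigma) if random.random() < 0.3 else g
            for g in genome]

def crossover(a: list[float], b: list[float]) -> list[float]:
    return [random.choice(pair) for pair in zip(a, b)]

# Seed the population with two "decent designs", as described above,
# then fill it out with mutated variants of those seeds.
seeds = [[0.4] * 8, [0.6] * 8]
population = seeds + [mutate(random.choice(seeds), 0.5) for _ in range(28)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]   # keep the fittest designs each generation
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best genome:", [round(g, 2) for g in max(population, key=fitness)])
```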

So the short answer is yes: we will have evolutionary genetic algorithms design our AI, from the components, to the way they connect, to how they solve problems, starting with small ‘brains’ and working up, just as biological evolution did.


