Three Realities of AI as We Approach 2020

Deep Learning has enabled a democratization of AI. It used to be that you needed one team to spend a long time hand-describing an AI system's features, and another team of qualified PhDs to deploy the algorithms. Nowadays, PhD students on internships produce genuinely valuable, viable, production-ready results.

There are also resources such as TensorFlow, an open-source Machine Learning library that anyone can experiment with, as well as hundreds of AI-focused online courses and summer schools. The democratization of AI has truly been a revolution, and something we should be proud of. However, even though it can feel as if AI is close to solving many of the world's problems, if we don't look carefully at how AI is actually being deployed, we may become complacent and miss breakthroughs and opportunities to achieve far greater things.
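To make the accessibility point concrete, here is a minimal sketch of what "playing around with" TensorFlow looks like today: a one-neuron model fit to toy data through the bundled Keras API. The data and hyperparameters are illustrative assumptions of mine, not drawn from any particular course or tutorial.

```python
# A minimal sketch of how approachable modern ML tooling is:
# fitting a single-neuron model to the toy relationship y = 2x + 1.
import numpy as np
import tensorflow as tf

# Toy training data (illustrative, not from any real dataset).
xs = np.arange(0.0, 10.0, 0.5, dtype=np.float32).reshape(-1, 1)
ys = 2.0 * xs + 1.0

# One dense layer, a standard loss, a stock optimizer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")
history = model.fit(xs, ys, epochs=200, verbose=0)

print("final loss:", history.history["loss"][-1])
```

A decade earlier, even this would have meant hand-rolling gradient descent; now the library handles differentiation, optimization, and training loops behind a few lines of high-level code.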

So how should we be thinking of AI today? Here are three observations that may help companies considering the use of AI or those who might be questioning its evolution over the past few years.

We Shouldn’t Be So Scared of AI

A lot of people fear the consequences of AI making its own decisions, but the reality is that humans are likely to retain more influence than it may appear at this point in time.

I’m a big believer in the hybrid human-AI model, and I think even when we do create ‘super intelligence’, there is going to be a human component embedded there. We’re going to see a merge between human and AI brains. We’re already offloading a lot of our brains into machines on a daily basis: how many of us remember phone numbers anymore?

The question is more: if we have a lot of computational power, what do we direct it towards? These decisions, along with what we choose to solve and create, will likely always involve some form of human input. We will certainly see a combination of human and machine resources when it comes to directing computational power and evaluating its outcomes.

We’re Not Going to Create a God-like Algorithm Anytime Soon

What does it mean to create algorithms that are unbiased? The problem is, we can't eliminate bias completely. Our biases and decisions stem from many different factors: many are inherent, moral opinions about how things should happen and how society should behave.

As humans, we struggle to get past our own biases, so we may need to accept what a challenge it will be to get rid of machine-learned bias. The best that we can hope to do now is make the best decisions we can as humans, and accept progress over perfection.

We Need to Be Skeptical

It may sound counterproductive, but there are real benefits to approaching the achievements of AI skeptically. There is a danger in assuming that Deep Learning has completely solved many problems; that assumption creates a false sense of where we are in its progress.

Look at Google Translate. It's great as a consumer application, and a lot of excellent work went into it. The issue is that, as a tool, it has generated an assumption that translation is 'done.' Yet if you look at translation in the enterprise, it is very far from being solved. Machine Translation is not there yet when it comes to building trust and fully optimized communication between humans. We need to look at progress clearly, and verifiability is a huge part of recognizing where we really are before categorizing any problem as completely solved by AI.

People assume we're already at the point where we've created algorithms sufficient for general AI. The fact is that Deep Learning is limited. Compare the amount of data a human child is exposed to in order to learn something with the amount you have to feed a Deep Learning model to learn the same thing: the human requires far less. So there is clearly a difference between an algorithm's and a human's ability to understand and learn from data.

One vital point to consider is that we still don’t know how human intelligence or even human consciousness arises. Deep Learning gives us ‘narrow’ AI and the ability to work on specific problems, but we’ve not conquered general AI yet. The next stages of AI are likely to look very different, and we’ll need to address AI with a careful and watchful eye in order to keep it on track to continue solving the world’s problems.

Dr. Vasco Pedro

Dr. Vasco Pedro is CEO & Co-Founder of Unbabel, a San Francisco-based scaleup and 2014 Y Combinator graduate, and a leading enterprise SaaS translation company that combines state-of-the-art Artificial Intelligence with a global crowd of 100,000+ humans to break down business communication barriers in customer service. Unbabel helps global brands such as Booking.com, Facebook, Skyscanner, easyJet, Under Armour and Rovio remove language as a concern, increasing customer satisfaction and building a more efficient customer service operation in the process.