How to Get Started with AI You Can Trust?

Today the public sees AI (Artificial Intelligence) as a technical solution, but AI’s biggest problems are not technical; they are design and behavioral issues. “There is nothing Artificial about AI,” to quote Fei-Fei Li, a leading AI researcher at Stanford University. AI is inspired by people, it is created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility. It is therefore the responsibility of business leaders and AI designers to take a comprehensive view of AI, one focused on designing systems that engage users, deliver exceptional experiences, and build trust.

And trust is a complex concept, especially in the context of self-learning systems powered by machine learning. Getting this right requires reframing our approach to AI design around these five pillars:

  • Data Rights: Do you have rights to the data?

Data is the fuel that powers AI. Those deploying AI must ensure that the data they use is of high quality and that users have insight into how it is being used. Visibility into where data and models live, who has access to them, and what they are being used for is essential to ensuring the system makes trustworthy decisions. In other words, people have the right to know that their data is being collected and exactly how it is being used. This has been in the headlines lately, with the use of data to manipulate the 2016 election across news and social media platforms prompting changes.
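To make this concrete, here is a minimal sketch in Python of how a team might attach consent metadata to each data record and check it before use. The field names and policy are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class DataRecord:
    """One user-contributed record plus the rights metadata attached to it."""
    record_id: str
    source: str                # where the data came from
    consented_purposes: set    # purposes the user explicitly agreed to
    owner_notified: bool       # was the user told this data was collected?

def can_use(record: DataRecord, purpose: str) -> bool:
    """Only allow a use that the data subject consented to and was told about."""
    return record.owner_notified and purpose in record.consented_purposes

# Example: using the record for recommendations is allowed, ad targeting is not.
rec = DataRecord(
    record_id="u-1042",
    source="mobile_app_signup",
    consented_purposes={"recommendations", "service_improvement"},
    owner_notified=True,
)
print(can_use(rec, "recommendations"))  # True
print(can_use(rec, "ad_targeting"))     # False
```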

  • Explainability: Is your AI transparent?

Can you explain how your AI generated a specific insight or decision? Current AI systems operate as black boxes and offer little insight into how they reach their outcomes. AI systems built on responsible AI principles need to address business stakeholders’ concerns and provide business-process, algorithmic, and operational transparency to build trust.

A good example is the medical field. If AI were being used to recommend a course of treatment, doctors would need to know – at a very detailed level – why that treatment was recommended before prescribing it.
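One common way to provide this kind of transparency is to surface per-feature contributions for an individual prediction. The sketch below assumes a simple logistic-regression model and made-up clinical features; for a linear model, coefficient times feature value is an exact attribution, while true black-box models need techniques such as SHAP or LIME.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: columns are illustrative clinical features.
feature_names = ["age", "blood_pressure", "cholesterol"]
X = np.array([[45, 130, 200],
              [62, 155, 260],
              [50, 120, 190],
              [70, 160, 280],
              [38, 118, 180],
              [66, 150, 250]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = treatment recommended

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one recommendation: for a linear model, each feature's
# contribution to the decision score is coefficient * feature value.
patient = np.array([64.0, 152.0, 255.0])
contributions = model.coef_[0] * patient
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print("decision score:", model.decision_function([patient])[0])
```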

  • Fairness: Is your AI unbiased and fair?

AI systems continuously process and learn from data. Can you be sure your AI does not discriminate against any group of people? Responsible AI system design needs to ensure that the data being used is representative of the real world and that the models are free of algorithmic bias, to mitigate the skewed decision-making that leads to reasoning errors and unintended consequences.

As an example, a soap dispenser programmed to automatically dispense soap to hands placed under it failed to recognize hands that were not white, because it had been trained only on images of hands with light skin tones. As a result, it did not work on brown or black hands.
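A lightweight check a team can run before deployment is to compare a model’s positive-outcome rate across groups (demographic parity). The sketch below uses made-up predictions and group labels purely to illustrate the calculation; a real audit would also look at metrics such as equalized odds.

```python
import numpy as np

# Hypothetical model outputs (1 = approved) and a protected attribute per person.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group       = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(preds, groups, value):
    """Fraction of positive outcomes within one group."""
    mask = groups == value
    return preds[mask].mean()

rate_a = selection_rate(predictions, group, "A")
rate_b = selection_rate(predictions, group, "B")
print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}")

# A common rule of thumb (the 'four-fifths rule') flags a disparity
# when one group's rate is below 80% of the other's.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print("disparity ratio:", round(ratio, 2),
      "-> review needed" if ratio < 0.8 else "-> ok")
```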

  • Robustness: Is your AI robust and secure?

As with other technologies, cyber-attacks can penetrate and fool AI systems. How can you make sure your AI cannot be hacked? There have been cases where a small amount of noise prevented an AI from recognizing objects it was trained to distinguish. Responsible AI systems should be able to detect adversarial data, protect against adversarial attacks, and account for how data-quality issues affect system performance.

Examples of adversarial attacks – synthetically crafted inputs that appear to belong to one class but actually come from another – include fooling autonomous vehicles into misreading stop signs as speed-limit signs and bypassing facial recognition systems, such as those used at ATMs.
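The “small amount of noise” mentioned above is often generated with gradient-based methods such as the fast gradient sign method (FGSM). The sketch below applies FGSM to a toy logistic-regression classifier with made-up weights, purely to show how a small, targeted perturbation can flip a prediction; the numbers are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (weights and input are illustrative assumptions).
w = np.array([2.0, -1.5, 0.5])
b = 0.0
x = np.array([0.5, 0.3, 0.4])   # original, correctly classified input
y_true = 1.0

# Fast gradient sign method: nudge the input in the direction that
# increases the loss. For a logistic model, dLoss/dx = (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y_true) * w
epsilon = 0.25                   # small perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print("original prediction:", sigmoid(w @ x + b) > 0.5)         # True
print("adversarial prediction:", sigmoid(w @ x_adv + b) > 0.5)  # flips to False here
```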

  • Compliance: Is your AI appropriately governed?

Just as with actions taken by humans, there needs to be an audit trail to defend why a particular decision was made. Organizations must use AI in a compliant and auditable manner that operates within the boundaries of local, national, and industry regulation. Responsible AI systems adopt a holistic governance model that avoids silos and provides mechanisms for implementing, governing, and controlling domain-specific policies and regulations such as HIPAA and FINRA rules.
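In practice, an audit trail can be as simple as an append-only log that records, for every automated decision, which model version produced it, on what inputs, with what outcome, and why. The JSON-lines sketch below is a minimal illustration of that idea; the field names and storage choice are assumptions, not a regulatory standard.

```python
import hashlib
import json
import time

def log_decision(path, model_version, inputs, output, explanation):
    """Append one decision record to a JSON-lines audit log."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the inputs so the record is verifiable without storing raw PII.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "explanation": explanation,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(
    "decisions.log",
    model_version="credit-risk-2.3.1",
    inputs={"income": 54000, "debt_ratio": 0.31},
    output="approved",
    explanation="low debt ratio was the dominant positive factor",
)
```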

Responsible AI is fundamentally about building trust and confidence. By taking these five pillars into account, AI designers and deployment teams can ensure that their systems behave as anticipated and earn the trust of human users.

Businesses should take several steps to ensure that AI systems are designed and implemented properly. These include hiring AI ethicists to work with corporate decision-makers and software developers, forming an AI review board that regularly addresses corporate ethical questions, implementing training programs that educate employees on ethical AI considerations, and developing AI audit trails and means of remediation for cases where AI solutions inflict harm or damage on people or organizations.

Transforming the human-technology relationship

AI can do amazing things to improve our everyday lives – if we act wisely and with vision today. In the near future, we will hit a moment when it will be impossible to course-correct, because the technology is being adopted so quickly and so widely. We have time, but we have to act now.

It starts with ensuring that your AI enables and reflects your company’s ethics, values, and industry regulatory policies. Done right, AI can deliver business results and improve the human condition at a scale that will far exceed all the innovations of the past few decades combined.

Manoj Saxena
