Uri Guterman, Head of Product and Marketing, Hanwha Techwin looks at why integrity should be at the heart of AI technology
Artificial intelligence (AI) has become ubiquitous in almost every part of our lives, and society is facing a challenge to keep pace with its advancements in the face of the Fourth Industrial Revolution. It holds exciting potential for many aspects of our lives, from improving the safety and efficiency of our cities and guiding autonomous vehicles, to understanding consumer behaviour in a store and supporting public health measures.
But with this potential comes risk. That's why more organisations are now taking a hard look at how AI might worsen societal inequality and biases, and how to combat this by developing ethical and responsible AI. Indeed, as part of the wider Hanwha Group, we want to elevate the quality of life with our innovations and solutions. AI plays a key role in this, as long as it's developed and used in a responsible way.
Recently, Gartner identified 'smarter, responsible and scalable AI' as the number one market trend of 2021, a trend that will need to continue into 2022 as public trust remains at significantly low levels: almost two-thirds of people are inclined to distrust organisations.
The key to this is building integrity into your AI strategy and ensuring that all products that use AI do so in an ethical and responsible way. Equally, any partner or vendor your organisation aligns itself with should share the same values and the same sense of responsibility to do the right thing.
Communicating and collaborating
Edelman CEO Richard Edelman recommends breaking the current cycle of distrust by uniting people on common-ground issues and making clear progress on areas of concern. Additionally, he advises institutions to provide factual information that doesn't rely on outrage, fear, or clickbait, and that instead informs and educates on major societal issues.
Therein lies a clear opportunity to build greater integrity into AI. Trust relies on everyone being on the same page, with the same access to the facts, and an ability to relay their thoughts and feedback to product creators and business leaders.
In practice, that means communicating the use and benefits of AI to stakeholders, including customers, partners, investors, and employees. However, research has shown that people perceive the 'threat' of AI differently based on factors such as age, gender, and prior subject knowledge. The same study found that there's a huge gap between laypeople's perception and reality when it comes to AI, with many AI applications (such as crime prevention and AI art) still requiring significant explanation.
Therefore, when communicating any new AI solution, it's worth considering the different knowledge levels that need to be accommodated. Better still, look at the differing priorities, pain points, and concerns of each audience group and tailor your message accordingly. This ensures everyone comes to the table with the same basic level of knowledge about an AI use case.
Acting on values
Values are now front and centre for all organisations. Yet it's one thing to have a set of values, and another to proactively act in accordance with those values in every business decision. Planning ahead is the secret to being a values-driven and responsible organisation. Having strategies and tools in place will help you meet your corporate values in the moment, no matter the urgency.
At Hanwha Techwin, our values of "Challenge, Dedication, and Integrity" and our spirit of "Trust and Loyalty" are built into everything we do: from our strategy, to our products, to our innovation.
Integrity at Hanwha means that we stick to our principles, we are impartial, and we take pride in doing so. In practice, it means that we follow through on our promises to our customers and partners, that we don't take shortcuts with our product development, that our performance and quality remain consistent, and that every employee understands our values and expectations.
Educating leadership and your board about the potential and risks of AI is core to demonstrating top-down ownership and ethical leadership. It ensures that AI is overseen from the very top and that this direction is set at every level of the organisation.
Some of the areas that senior leadership can steer on include:
- Clear communication to all stakeholders about how data is collected, secured, and used.
- Transparency around an AI’s decision-making process and ensuring human oversight.
- Ensuring AI systems are used ethically, are free from bias, and that protected attributes aren’t being used.
- Securing data and AI systems against malicious actors.
AI itself will be hindered without widespread understanding and acceptance. That’s why integrity is as vital to AI as the data used for training it.