
Responsible AI for a thriving future
By Lambert Hogenhout
The speed and scope of developments in artificial intelligence (AI) have been astonishing, which naturally creates lots of excitement and expectation about the possibilities. From self-driving cars to medical diagnoses, AI promises to revolutionize countless aspects of our lives. At the same time, a crucial realization is emerging: the lasting value of AI will hinge heavily on trust.
Within organizations, trust among employees, partners and customers is crucial for fostering collaboration and driving successful AI adoption. As stated in Deloitte’s State of AI 2024 report, employee distrust is currently one of the main obstacles to corporate AI adoption and deployment. At the same time, research published in the Harvard Business Review (May 2024) revealed that when human roles are replaced by AI within a workflow, the remaining human workers often experience decreased productivity.
This decline can be attributed, in part, to discomfort and distrust surrounding the AI’s accuracy, its potential impact on job security, and concerns about its overall competence. In short, organizations would do well to focus on trust in their strategic approach to AI adoption. A solid policy framework for the responsible use of AI, and transparency – internally and externally – about these commitments can help establish trust.
Broader societal trust in AI is also essential. As AI becomes increasingly pervasive in commercial products, public services, and infrastructure, it will have a profound impact on society. While AI offers significant potential for economic and social benefits, it can also bring about rapid changes that require careful management. Governments play a vital role in establishing trust in AI by creating a regulatory framework that balances innovation with responsible development.
Regulation that balances these objectives, keeps pace with technology, and anticipates future developments is no easy feat. But governments that succeed in doing so will position themselves (and their nations) as leaders in the emerging AI era. A proactive approach that demonstrates a forward-thinking mindset will benefit businesses, citizens and the nation as a whole.
Saudi Arabia’s Vision 2030 is a good example. This ambitious national transformation program explicitly recognizes the pivotal role of AI in driving economic diversification and social progress. The vision emphasizes transparency and accountability in AI development, aiming to build a robust and ethical AI ecosystem that benefits both citizens and the global community.
Governments can also play a role in the development of locally relevant AI models and algorithms. The data used to train AI models necessarily gives these systems a specific cultural flavor. When AI is used to answer people’s questions about a wide range of topics, to produce content in local languages, or to analyze or summarize the key points in long documents, this cultural bias matters.
Locally relevant models will naturally tend to foster trust more easily than general models. Of course, creating such locally tuned AI models, trained on relevant data and with appropriate guardrails, will take resources. But this is a worthwhile area of R&D investment for governments – if such models are made available for free, they can be a catalyst for further development, innovation and adoption.
Beyond these local benefits, it is well recognized that being perceived as a leader in cutting-edge technology, and sharing such technology with partners, can create “soft power” that supports other aspects of bilateral and multilateral relations, such as cultural exchange, trade, and technological or scientific collaboration. The creation of regionally relevant AI models can certainly contribute to that. As such, an investment in developing such technology has the potential to provide returns in multiple forms, both directly and indirectly.
AI has so many useful applications that in many cases we cannot afford not to embrace it. In the coming years, large investments will be made in the development, acquisition and adoption of AI. The success of that adoption – and hence the return on investment – will depend on good governance of AI. Trust is one of the main outcomes of that governance and it needs to be established at all levels – from individuals to organizations to society at large.
By prioritizing development of responsible and locally relevant AI, fostering transparency, and ensuring that AI serves the best interests of society, we can unlock the full potential of the technology and usher in a future where trust and innovation go hand-in-hand.
About The Author
Lambert Hogenhout is Chief Data and AI at the United Nations Secretariat. He is also an author, keynote speaker and advisor on AI and responsible use of technology. He has 25 years of experience working both in the private sector and with international organizations such as the World Bank and the United Nations. He leads governance and strategy in the areas of data and AI and oversees its practical implementation. He has published on data privacy, data governance, the societal implications of technology and responsible use of AI. lamberthogenhout.com