
Ethical considerations of agentic AI: Insights from Azadeh Williams and Gladwin Mendez

  • Writer: Jessica Phillips
  • May 9
  • 3 min read

The future of AI isn’t just intelligent – it’s agentic. But without ethics, autonomy can quickly become a liability. That’s why AI thought leaders are putting responsible design at the centre of the conversation.




At the recent All Access: Agentic AI in PEX APAC webinar, Azadeh Williams – founder and managing director of AZK Media and executive board member of the Global AI Ethics Institute – shared her perspectives on the ethical considerations surrounding this transformative technology. She was joined by thought leaders from the banking, insurance, and telecommunications industries.


Let’s dive in.


What is Agentic AI?


Agentic AI refers to systems designed to act independently on behalf of users or organisations to accomplish specific goals. Unlike traditional AI, which analyses data and makes recommendations, agentic AI can make decisions and take actions with minimal human intervention. These systems navigate complex environments, learn from outcomes, and adapt their approaches – characteristics that make them particularly valuable for business process automation.


It sounds impressive – and it is – but to be implemented responsibly, this technology also demands careful human oversight and clear ethical frameworks.

Panelists Azadeh Williams and Gladwin Mendez

Balancing autonomy with accountability


How do we align agentic AI systems with human values?


The Global AI Ethics Institute aims to raise awareness of the cultural dimensions of AI ethics and to promote respect for cultural diversity in how those ethics are applied.


With this in mind, Williams poses critical questions:


“How are we ensuring these systems make decisions aligned with human values?


What are the challenges you are facing for accountability and trust?


Who do you think is responsible when an agentic AI makes a mistake?” 


Williams advocates for clearly defined accountability structures to address potential errors or biases in AI decision-making. She emphasises that ethical AI frameworks must account for diverse cultural norms to ensure global fairness and applicability.


Answering these questions is essential to meeting the urgent demand for strong, inclusive frameworks to guide the design and deployment of agentic AI.


Putting in AI guardrails to build trust


Gladwin Mendez, Fractional Chief Data and Analytics Officer at GEC Prudentia and a respected leader in AI governance, focuses on the intersection of innovation and ethical oversight.


With nearly 20 years of experience in leading data transformations, Mendez explains the importance of “guardrails” in building public trust in agentic AI.


“Guardrails are what will drive trust,” he stresses, pointing to the need for technical, legal, and ethical parameters that help steer agentic AI toward responsible outcomes.


His insights reinforce the necessity of proactive governance to ensure AI behaves as intended in unpredictable, real-world environments.


Mendez advocates for the development of clear regulatory frameworks that assign shared responsibility among developers, manufacturers, and operators. He also highlights the value of incorporating advanced monitoring systems that trace AI decision paths to ensure transparency and accountability.


Additionally, he proposes the concept of AI-specific insurance models, which could offer tailored coverage for autonomous systems and help distribute risk more equitably.

The role of human oversight and communication


Drawing from her extensive media and PR experience, Williams highlights the vital role of communication in fostering ethical AI practices.


"At the heart of building sustainable ethical AI frameworks is communication.”


By encouraging transparency and open dialogue, organisations can build public trust and ensure AI systems are designed and used responsibly.


Both Williams and Mendez acknowledge the potential for bias and discrimination in agentic AI systems, as these technologies are often trained on large datasets that may reflect existing societal biases. They call for:


  • Rigorous dataset auditing

  • Algorithmic transparency

  • Fairness as a core principle in both development and deployment


The panelists agree: human oversight must remain central in AI decision-making processes. While agentic AI enhances efficiency, it’s crucial to avoid treating these systems as “black boxes.”


By developing intuitive models that explain their decision-making clearly, organisations can improve user understanding and maintain trust.


Preparing for an Agentic AI future


As agentic AI becomes more advanced, Williams urges organisations to proactively address its ethical implications. This means:


  • Implementing clear accountability measures

  • Fostering transparent communication

  • Engaging with diverse cultural and ethical perspectives


By championing these principles, AZK Media and Azadeh Williams are helping to shape a future in which agentic AI is developed and deployed in ethical, responsible ways that remain aligned with human values.



Authored by Jessica Phillips, Senior Social Media and Communications Specialist at AZK Media.
