Background:
In this episode, hosts Professor Michael Mainelli and Adam Leon Smith welcome Piercosma Bisconti, dialling in from Rome, for a fresh perspective on the evolving ethics and governance of generative AI. Piercosma discusses how ChatGPT's 2022 launch changed everything, suddenly bringing AI directly into human social spaces in ways earlier ethical frameworks never fully anticipated. He also explores the rise of interconnected AI systems and the new risks that emerge when multiple models interact, collaborate, or even compete in real-world environments. Drawing on philosophy and systems thinking, he reflects on what this means for society, especially how always-agreeable AI might quietly reshape human relationships, emotional resilience, and social skills in the years ahead. Expect thoughtful insights on where standards and governance fit in, the limits of current testing approaches, and why the biggest changes may be more social than technological.
A fascinating, big-picture discussion that asks: as AI becomes part of everyday social life, how do we keep our humanity intact? Tune in for Piercosma's unique blend of deep thinking and practical standards experience.
Guest:
Dr Piercosma Bisconti is an expert in artificial intelligence governance, ethics, and the societal implications of emerging technologies. He is Of Counsel at Aiternalex and co-founder of DEXAI - Artificial Ethics, where he leads research on responsible AI, safety, and regulatory alignment with European and international standards. His work focuses on bridging technical AI development with policy and governance, helping organisations implement trustworthy AI systems and prepare for frameworks such as the EU AI Act.
Piercosma holds a PhD in Political Philosophy from the Sant’Anna School of Advanced Studies in Pisa, where his research examined socio-technical systems and human–machine interactions. He has contributed to multiple European Horizon research projects exploring the ethical and social implications of artificial intelligence and robotics, and has served as an Ethics Officer for the European Research Council Executive Agency (ERCEA), the European Commission’s research funding body.
Alongside his advisory and research roles, he contributes to international standardisation efforts for trustworthy AI, including leading work on the European AI Trustworthiness Framework within the CEN-CENELEC standardisation system. He is also the author of Hybrid Societies: Living with Social Robots, which explores how social robots and artificial agents may reshape human relationships and social systems.