
Transparency First: Make AI Worthy of Trust
Hosts: Mark Smith, Meg Smith
Ethical AI starts with transparency, accountability, and clear values. Mark and Meg unpack responsible AI principles, why non-deterministic systems still need to be reliable, and how ‘human in the loop’ workflows and logging keep people accountable. They share a simple way to judge tools: trust in the vendor, how your data is used, terms that keep changing, and clarity about whether your content is used for training. You’ll see how to set personal and organisational boundaries, choose vendors, and schedule reviews as risks evolve. They also consider a public call to pause superintelligence development, and argue for critical thinking over fear.
Join the private WhatsApp group for Q&A and community: https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz
What you’ll learn
Build an ethics checklist around transparency, fairness, reliability, privacy, inclusiveness, and accountability.
Evaluate tools for training stance, data use, privacy, and changing terms.
Design human-in-the-loop workflows with unique credentials, logging, and audit trails (see the sketch after this list).
Set personal and organisational boundaries for acceptable AI use.
Plan a review cadence to reassess risks, mitigations, and vendor changes.
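The human-in-the-loop point above is easier to picture in code. Here is a minimal, illustrative Python sketch, not something from the episode: generate_draft, reviewer_id, and audit_trail.jsonl are placeholder names standing in for whatever AI tool, identity system (for example, unique sign-ins via Entra ID), and audit store you actually use. The pattern is simply that a named person approves each AI output and every decision is logged.

```python
# Illustrative human-in-the-loop gate with logging and an append-only audit trail.
# All names here are placeholders, not a real vendor API.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-audit")

AUDIT_FILE = "audit_trail.jsonl"  # one JSON line per decision, never overwritten


def generate_draft(prompt: str) -> str:
    """Stand-in for a call to your AI tool of choice (hypothetical)."""
    return f"[AI draft for: {prompt}]"


def record_audit(event: dict) -> None:
    """Append one audit entry so every AI-assisted decision traces back to a person."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(AUDIT_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")


def human_in_the_loop(prompt: str, reviewer_id: str) -> str | None:
    """Generate a draft, require an explicit human decision, and log who decided."""
    draft = generate_draft(prompt)
    log.info("Draft generated for reviewer %s", reviewer_id)
    decision = input(f"Reviewer {reviewer_id}, approve this draft? [y/n]\n{draft}\n> ")
    approved = decision.strip().lower() == "y"
    record_audit({"reviewer": reviewer_id, "prompt": prompt, "approved": approved})
    return draft if approved else None


if __name__ == "__main__":
    result = human_in_the_loop("Summarise our responsible AI policy", reviewer_id="meg.smith")
    print("Published." if result else "Rejected; nothing published.")
```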
Chapters
00:00 The Age of Intelligence
05:37 Ethical Decision-Making in AI
16:04 The Role of Human Connection and Critical Thinking
22:38 Frameworks for Responsible AI Use
27:25 Reflections on AI and Superintelligence
Highlights
“we’re going to be squarely in the land of robotics very soon.”
“the intelligence age is so much more than just AI; it is the age of intelligence”
“if you’re an unethical individual, AI is probably going to amplify it.”
“there is no ethical AI without transparency.”
“people should be accountable for AI systems.”
“terms and conditions are changing all the time.”
“they should be inclusive and for everybody, they should be transparent.”
“AI should be here to help humanity”
“we should all touch grass a little bit more often”
“Transparency First: Make AI Worthy of Trust”
Mentioned
Microsoft Responsible AI principles https://www.microsoft.com/en-us/ai/principles-and-approach#ai-principles
IBM AI course https://www.ibm.com/training/learning-paths
TikTok https://www.tiktok.com/@nz365guy
Roomba https://en.wikipedia.org/wiki/Roomba
Zoom TOS https://termly.io/resources/zoom-terms-of-service-controversy/
superintelligence-statement.org https://superintelligence-statement.org/
Beanie Bubble (movie) https://en.wikipedia.org/wiki/The_Beanie_Bubble
Billy Madison (movie) https://www.youtube.com/watch?v=dtlJjkI34V4
Joe Rogan & Elon Musk https://youtu.be/O4wBUysNe2k?si=9UTKVaZCDbwZjhZd&t=9131
Connect with the hosts
Mark Smith: Blog https://www.nz365guy.com, LinkedIn https://www.linkedin.com/in/nz365guy
Meg Smith: Blog https://www.megsmith.nz, LinkedIn https://www.linkedin.com/in/megsmithnz
Support the show
Subscribe, rate, and share with someone who wants to be future-ready. Drop your questions in the comments or the WhatsApp group, and we may feature them in an upcoming episode.
Keywords: responsible ai, ethical decision-making, transparency, accountability, fairness, privacy and security, non-deterministic, human in the loop, audit trail, Entra ID, superintelligence, terms and conditions