Bridging AI Adoption: Explainability, Trust Frameworks & Continuous Oversight – EP21 by Karan & Pooja

  • Karan Bhandari and Pooja CK discuss AI adoption barriers in development and security contexts.
  • Pooja CK analyzes vendor data to improve business decisions and seeks insights on AI challenges.
  • Karan highlights the need for multiple MCP (Model Context Protocol) connectors to hook custom data sources into AI tools effectively.
  • Current AI models lack explainability, complicating decision-making and ethical considerations.
  • The integration of domain-specific agents could enhance collaboration and efficiency in development tasks.
  • Trust frameworks are essential for guiding non-technical users in AI-driven decision-making processes.
  • Data privacy concerns arise from sharing sensitive information with AI systems without proper safeguards.
  • Continuous human oversight is necessary to ensure AI outputs align with organizational standards and ethics.

Karan Bhandari and Pooja CK discussed the challenges of AI adoption, emphasizing the need for improved explainability, trust frameworks, and continuous oversight to enhance decision-making in development and security contexts.

Transcript

Pooja CK: Yeah, cool.

Karan Bhandari: You reached out to me to talk about the barriers to AI adoption. We’re going to treat this like an interview about my role as a developer and what we’re working on.

Karan Bhandari: We both work at Cisco. I’m a developer in the cybersecurity wing, working on the reporting layer under the Cisco umbrella. Would you like to introduce yourself, Pooja?

Pooja CK: I’m on the vendor management office team and I work as a data analyst.

Karan Bhandari: What project are you working on and what excites you about it?

Pooja CK: I handle vendor data to understand how our suppliers are performing in resource delivery and technology. I support the team with data and insights to improve decision-making. I’m also exploring AI and emerging technologies, and I’m part of a program to identify AI barriers and propose solutions. That’s why I wanted to talk with you.

Karan Bhandari: We both took part in a hackathon where we processed news and converted it into security policies.

Pooja CK: Yes, that hackathon was a great starting point.

Karan Bhandari: I’m glad we both enjoyed that experience.

Pooja CK: Shall we dive into questions? I know you’ve been in development for many years and you’ve tried various AI tools to boost productivity. What do you see as the biggest barriers to adopting AI solutions in a development context?

Karan Bhandari: We often start with generative AI via chat interfaces or notebook assistants. Developers then use IDE plugins like GitHub Copilot. We feed these models context—documents, source code, customer data—and expect useful output. But models can’t always give the best answers. When a model’s training cutoff is outdated, we rely on specialized connectors to pull in up-to-date code docs or custom data. Many vendor-provided connectors limit how many services you can hook together, so we turn to open source alternatives for more flexibility.
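
To make the connector idea concrete: the open source Model Context Protocol (MCP) Python SDK lets a team expose an internal data source as a tool an AI client can call. A minimal sketch, assuming the `mcp` package; the docs store here is hypothetical.

```python
# Minimal MCP server sketch exposing a custom data source as a tool.
# Assumes the open source `mcp` Python SDK; the docs store is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")

# Stand-in for a real internal knowledge base or code-docs index.
DOCS = {"auth-service": "OAuth2 flow, rotated keys, see runbook 12."}

@mcp.tool()
def lookup_docs(component: str) -> str:
    """Return up-to-date internal documentation for a component."""
    return DOCS.get(component, "No documentation found.")

if __name__ == "__main__":
    mcp.run()  # an MCP-aware client (IDE plugin, chat app) can now call the tool
```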

Karan Bhandari: Most developers only have client-side access, so they can’t fine-tune large models without GPUs. We end up using cloud services for training because local machines lack the needed hardware.

Karan Bhandari: Large models consume a lot of API resources per query. We need smaller, domain-specific models trained on our own stacks—Java, React, security logs—that work more efficiently for precise tasks.
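
One common route to such smaller, domain-tuned models is parameter-efficient fine-tuning, which trains only small adapter weights and fits within a modest cloud GPU budget. A minimal sketch using Hugging Face transformers and peft; the base model name is a placeholder.

```python
# Parameter-efficient (LoRA) fine-tuning sketch using Hugging Face peft.
# The base model is a placeholder; swap in any small causal LM you use.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "your-org/small-base-model"  # hypothetical domain-appropriate base
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Train low-rank adapters on the attention projections only; the frozen
# base model stays untouched, so a single cloud GPU is usually enough.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights train
```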

Karan Bhandari: Beyond code generation, AI in neural network training still requires careful hyperparameter tuning and optimization. Low-code tools exist but often lack enterprise approvals. A multi-cloud strategy helps but adds operational complexity. Container orchestration systems are improving GPU support but it’s not yet seamless.
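
Libraries can automate part of that tuning loop even without enterprise low-code tools. A self-contained sketch with Optuna, using a toy objective in place of a real train-and-validate run:

```python
# Hyperparameter search sketch with Optuna; the objective is a toy
# stand-in for an actual training-and-validation run.
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    layers = trial.suggest_int("layers", 1, 4)
    # Pretend validation loss: replace with a real training loop.
    return (lr - 1e-3) ** 2 + 0.01 * layers

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)  # e.g. {'lr': ..., 'layers': ...}
```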

Pooja CK: Can you summarize your main point in one statement?

Karan Bhandari: We need more specialized connectors to custom data sources because without real context AI will hallucinate. Developers must act as analysts to frame requirements clearly, and we need cloud-agnostic training and inference that doesn’t lock us in.
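
Grounding the model in retrieved context is the usual countermeasure to hallucination. A minimal, dependency-light retrieval sketch with scikit-learn; the documents and the prompt template are illustrative:

```python
# Retrieval-augmented prompting sketch: fetch the most relevant internal
# document and prepend it to the model prompt. Documents are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Release policy: security patches ship within 24 hours.",
    "The reporting layer exports dashboards as signed PDFs.",
]
question = "How fast do security patches ship?"

vec = TfidfVectorizer().fit(docs + [question])
scores = cosine_similarity(vec.transform([question]), vec.transform(docs))
best = docs[scores.argmax()]

prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
print(prompt)  # send to whichever model or endpoint you actually use
```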

Pooja CK: Do you use multiple clouds?

Karan Bhandari: You can spin up VMs in any cloud, but maintaining networks, security, and infrastructure is a heavy lift. Real-time learning is still difficult; we usually train offline and redeploy, so true online adaptation is rare.

Pooja CK: Do you see any AI models that don’t exist yet but would be useful?

Karan Bhandari: Most large models train on broad internet crawls. Many valuable data silos remain isolated. We need models fine-tuned on specialized internal datasets and robust ethical guardrails to prevent bias or harmful outputs. Security against adversarial inputs, like one-pixel attacks on images, is also critical.
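
To illustrate the sensitivity behind one-pixel attacks, here is a toy sketch: a hypothetical linear classifier whose decision score shifts when its single most influential pixel is pushed to an extreme. Real one-pixel attacks search for that pixel with differential evolution.

```python
# Toy sketch of one-pixel sensitivity. The linear "model" is a stand-in;
# real one-pixel attacks find the pixel to perturb via differential evolution.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))   # stand-in for a trained classifier
image = rng.uniform(size=(8, 8))

def score(img):
    """Decision score from the stand-in model; its sign picks the class."""
    return (weights * img).sum()

# Push the single most influential pixel against the current decision.
i, j = np.unravel_index(np.abs(weights).argmax(), weights.shape)
attacked = image.copy()
attacked[i, j] = 0.0 if weights[i, j] > 0 else 1.0
print(f"score before: {score(image):+.2f}, after: {score(attacked):+.2f}")
```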

Karan Bhandari: Explainability is another major gap. Decision-tree models can explain rationale, but neural networks are largely black boxes unless you use techniques like LIME. We need better built-in tools for transparent AI decisions.
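
For tabular models, LIME can already be bolted on today. A runnable sketch on scikit-learn’s iris data, assuming the `lime` package is installed:

```python
# Post-hoc explanation sketch with LIME on a scikit-learn classifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data, feature_names=data.feature_names,
    class_names=list(data.target_names), mode="classification")

# Explain one prediction: which features pushed the model toward its answer?
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions for this instance
```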

Pooja CK: You’re right. Explainability is a clear pain point and something that urgently needs more work.

Karan Bhandari: When given precise instructions and test cases, AI can do a solid job. But if things change we must retrain the models. Human oversight remains essential to catch deviations.

Pooja CK: If you could remove one major limitation to make AI truly impactful for development, what would you choose?

Karan Bhandari: I’d build an ecosystem of collaborating AI agents—backend, frontend, DevOps, security—each an expert in its domain. They’d plan together, execute tasks, and hand off to each other, all under human supervision for validation.
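
A minimal sketch of that handoff pattern, with placeholder agents and a human validation gate; a real system would wire each agent to actual models and tools:

```python
# Sketch of collaborating domain agents with a human approval gate.
# Agents are placeholders; real ones would call models and tooling.
class Agent:
    def __init__(self, domain: str):
        self.domain = domain

    def run(self, task: str) -> str:
        return f"[{self.domain}] plan for: {task}"

def human_approves(result: str) -> bool:
    """Stand-in for a real review step (ticket sign-off, PR approval)."""
    return True

pipeline = [Agent("backend"), Agent("frontend"), Agent("devops"), Agent("security")]
task = "add audit logging to the reporting API"
for agent in pipeline:
    result = agent.run(task)
    if not human_approves(result):
        raise RuntimeError(f"human reviewer stopped the chain at {agent.domain}")
    task = result  # each agent hands its output to the next
print(task)
```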

Pooja CK: That collaborative agent framework would address many data and workflow challenges. Thank you for that insight.

Karan Bhandari: After development, AI can also automate minor deployment tasks, such as preparing release branches, updating tickets, and generating documentation, so we humans can focus on bigger problems.

Karan Bhandari: AI will still make fewer mistakes than humans overall, but we need security agents to run checks, enforce code quality, and prevent data leaks. Knowledge is the final frontier—defining workflows, gathering contextual data, and creating a central marketplace for reliable connectors.
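
A security agent’s simplest job can be sketched as a pre-merge gate on AI-generated code; the patterns here are illustrative, not a complete leak scanner:

```python
# Sketch of a pre-merge check a security agent might run on AI-generated
# code. The patterns are illustrative, not an exhaustive scanner.
import re

LEAK_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key id shape
    re.compile(r"(?i)password\s*=\s*['\"]"),  # hard-coded password literal
]

def scan(code: str) -> list[str]:
    """Return the patterns that match, if any."""
    return [p.pattern for p in LEAK_PATTERNS if p.search(code)]

generated = 'db_password = "hunter2"\n'
hits = scan(generated)
if hits:
    print("blocked before merge:", hits)  # escalate to a human reviewer
```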

Pooja CK: Building trust among non-technical users requires clear guardrails and frameworks. We must enforce domain constraints, structured outputs, ethical policies, and human-in-the-loop checks to protect sensitive data and prevent biased or harmful decisions.
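
One concrete guardrail is forcing the model’s answer into a schema and rejecting anything that fails validation before a human ever acts on it. A sketch using the `jsonschema` package; the schema and the model output are hypothetical:

```python
# Structured-output guardrail sketch using the jsonschema package.
# The schema and the model's raw output are hypothetical.
import json
from jsonschema import validate, ValidationError

DECISION_SCHEMA = {
    "type": "object",
    "properties": {
        "decision": {"enum": ["approve", "reject", "escalate"]},
        "reason": {"type": "string"},
    },
    "required": ["decision", "reason"],
    "additionalProperties": False,
}

raw = '{"decision": "escalate", "reason": "vendor data looks anomalous"}'
try:
    payload = json.loads(raw)
    validate(instance=payload, schema=DECISION_SCHEMA)
except (json.JSONDecodeError, ValidationError) as err:
    payload = {"decision": "escalate", "reason": f"invalid output: {err}"}
# Either way, a human-in-the-loop reviews before anything is executed.
print(payload)
```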

Karan Bhandari: Exactly. Data cleansing, privacy controls, and consistent human oversight are the keys to safe, accountable AI adoption.


About Kurtzace

Kurtzace is an umbrella of products that "Infuse excitement". Our products simplify your life and reduce your pain. We are the creators of "Text To Voice", Kurtzac ePage, and numerous others currently in our pipeline.
