The Institute for AI Measurement Science
About IAIMS

Trustworthy AI Depends on Reproducible Measurement

We live in a world where AI innovation is moving at breakneck speed, while our ability to measure its safety remains mired in "document sludge." IAIMS was founded to close this gap by treating AI governance as a science, not merely a policy exercise.

Our Mission

The Evidence Infrastructure Initiative

IAIMS is a nonprofit technical organization established to translate high-level AI safety and risk frameworks into actionable, audit-ready engineering protocols. We provide the "Public Logic Layer" that makes standards like NIST AI RMF and the EU AI Act immediately usable and auditable.

Our Founding Philosophy: Measurement Science

01

Reproducibility over Principles

Trustworthy AI depends not on abstract principles, but on reproducible measurement.

02

Metrology in AI

We apply classical measurement theory—traditionally used by physicists and statisticians at organizations like NIST and ISO—to the world of artificial intelligence.

03

Empirical Observation

Our work is grounded in properties that are empirically observable, repeatable across environments, and meaningful to risk management.

Our Principles of Neutrality

Empirical Measurement

Our work is grounded in measurement science, not normative judgment. We focus on properties that are repeatable, observable, and meaningful to risk management.

Transparent Logic

By keeping our tooling open source, we ensure the "audit trail" is transparent and peer-reviewed.

Non-Certifying

We act as a neutral bridge. We provide the tools for measurement, but we do not issue compliance determinations, certify systems, or sell software.

Closing the Implementation Gap

There is currently no shared representation of AI behavior that multiple stakeholders accept. IAIMS exists to solve this "Evidence Interoperability" crisis.

Institutional Governance & Independence

Our structure ensures neutrality and scientific rigor in everything we produce.

Balanced Oversight

We are building a balanced board representing academia, civil society, industry, and former regulators. Our governance principles and conflict-of-interest policies are being finalized.

The Liability Firewall

To maintain neutrality, every automated report generated by our tools includes a watermark stating: "This is a measurement artifact, not a certificate of compliance."

Operational Independence

We maintain a strict separation of roles between benchmark authorship and funded activities to prevent conflicts of interest, ensuring our technical outputs remain unbiased and scientifically rigorous.

Board of Directors

Jaya Kandaswamy

President & Executive Director

Jaya Kandaswamy is the Founder, President, and Executive Director of the Institute for AI Measurement Science (IAIMS). She established the Institute to advance independent measurement frameworks that translate emerging AI governance principles into verifiable, operational evidence. Her work focuses on bridging the gap between policy expectations and measurable practice in areas such as transparency, accountability, and risk management. She is particularly interested in developing practical approaches that enable organizations to demonstrate responsible AI governance through structured evidence and repeatable measurement. Jaya brings experience across technology, risk, and organizational systems, with a focus on making complex systems understandable, measurable, and accountable. She holds an MBA from the University of Chicago Booth School of Business.

John Pitts

Board Member

John Pitts serves on the Board of Directors of the Institute for AI Measurement Science (IAIMS), where he provides governance oversight and strategic guidance in support of the Institute's mission to advance credible and independent approaches to AI measurement. John brings experience at the intersection of public policy, financial technology, and digital trust. His work has focused on regulatory engagement, industry policy, and the development of frameworks that support responsible innovation in complex technology environments. Over the course of his career, he has worked across government, legal practice, and the technology sector, with a focus on strengthening accountability and trust in digital systems. John holds a J.D. from the University of Pennsylvania Carey Law School.

Joel Martinez

Board Member

Joel Martinez serves on the Board of Directors of the Institute for AI Measurement Science (IAIMS), where he contributes governance oversight and perspective on risk management, operational controls, and institutional accountability. Joel brings extensive experience in enterprise risk management and operational governance within complex financial and technology environments. His work has focused on designing and implementing first-line risk management frameworks, developing risk and control libraries, and establishing measurement practices that enable organizations to identify, assess, and manage material risks. Additionally, he has guided various de novo financial institutions through complex transformation and maturation initiatives, helping them strengthen their business while working closely with executive leadership, regulators, and audit committees. At IAIMS, Joel provides perspective on how structured controls, risk measurement, and governance practices can support credible and accountable systems.

Advisors

Ramesh Ramaraj

Founding Advisor – Grants & Operations

Ramesh Ramaraj serves as Founding Advisor for Grants and Operations at the Institute for AI Measurement Science (IAIMS). In this advisory role, he provides early guidance on grant strategy and operational practices that support transparent stewardship of resources and effective organizational processes. Ramesh brings extensive experience leading enterprise strategy and operational execution in highly regulated technology environments, including work supporting U.S. federal agencies. His work has focused on digital transformation, enterprise data modernization, and the deployment of AI-enabled solutions that support complex organizational missions. He has also led strategic growth initiatives and developed operational frameworks that align long-term vision with disciplined execution. At IAIMS, Ramesh contributes strategic perspective on operational planning, responsible resource stewardship, and the development of practical systems that support sustainable institutional growth.

Sowmya Podila

Technical Advisor

Sowmya Podila serves as Technical Advisor to the Institute for AI Measurement Science (IAIMS), providing guidance on the technical foundations of the Institute's emerging AI measurement frameworks and artifacts. Sowmya works in Applied AI, building enterprise-grade agentic systems, AI evaluation frameworks, and benchmarking Large Language Models (LLMs). With over a decade of experience across retail, healthcare, and human resources domains, she brings deep expertise in NLP, machine learning, and production-scale AI system design. She holds an MS in Technology Management from the University of Illinois Urbana-Champaign. At IAIMS, Sowmya provides technical perspective on AI architecture, measurement models, and artifact design, helping ensure governance frameworks translate into practical, technically sound approaches for measuring complex AI systems.

"Do it once, do it for all. A world where AI safety is proven by data, not claimed by marketing."