Christopher S. Campbell,
Director of AI Governance, Lenovo
“But what does that really mean?” is always the first question after reviewing an LLM security and safety test report.
The real question is: how do we translate qualitative, subjective LLM risks, such as bias and toxicity, into quantitative risks that affect the business? Throw out the arbitrary scores and graphs of traditional model-testing reports and show the real impact on both people and the organization's bottom line.
In this session, we will discuss a novel approach to evaluating and mitigating AI model security and privacy risks: taking a complex technology that is inherently designed to mimic a human and evaluating its behavior and risk as if it were human. This enables continuous governance and compliance, as well as true visibility into the dynamic nature of AI trust and safety on a region-by-region basis.