
How do LLM Makers Assess their Models?

Exercises Apr 10, 2026

Anthropic, OpenAI, Google and others release system cards / model cards for their LLMs. These documents describe a model's capabilities and limitations, and set out the evidence that the makers use for those claims.

The system card for Anthropic's Claude 3.5 (June 2024) has six pages of assessment and benchmarks, from reasoning to safety. The system card for Claude Mythos Preview (April 2026) runs to 200+ pages and covers far more – including risks of deployment, honesty and evasion, responses to an irritating repetitive prompt, and an analysis by a psychologist.

Those cards describe the model – and as some aspects of a model are emergent, they include results of testing and exploring. The system cards give us hints about how the makers have approached that work.

They're illuminating for testers – not only in describing the models, but in describing how the makers seek to sense their models. We might learn from them, understanding the breadth and aims of their exploration of whatever it is that they've made.

Exercise

10 mins – solo

There's no realistic hope of being able to understand a model card in a few minutes. So let's just dive in, and find something that we can share with other testers.

Pick a model card for a model you've used (or have heard of). Skim it. Pick out something, of interest to you as a tester, that you'd like to share.

10 mins – collective

Let's exchange what we've found and talk. What are the surprises, and what are the signals?

Sources

Model system cards
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.
OpenAI Deployment Safety Hub: System cards & other updates
Review OpenAI system cards and other safety updates for deployed AI systems. See how systems are evaluated, monitored, and improved over time.
Model cards
Overviews of how an advanced AI model was designed and evaluated.

Model cards may have originated with this paper:

Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.

Futurist Rob Hoeijmakers sets out his thoughts on the cards and the information (and signals) they hold:

Model Cards, System Cards and What They’re Quietly Becoming
What are AI model cards, and why are they becoming the documents regulators will turn to first? I read a few and it taught me more than I expected.

and on benchmarks and more deterministic tests:

From Benchmarks to Evals: How We Measure AI and Why It Matters
Benchmarks score models. Evals test them in real workflows. This is your guide to understanding how we measure and trust AI performance today.



James Lyndsay

Getting better at software testing. Singing in Bulgarian. Staying in. Going out. Listening. Talking. Writing. Making.