
Digital Passport Model Assessment Workbench

This site documents an agent-oriented analytical toolkit for assessing SHACL-based digital passport models, comparing composed solutions and versions, and checking SHACL-only use-case coverage.


Purpose

The workbench is intended to support open and reproducible analysis of digital passport data models.

It focuses on questions such as:

  • how large and structurally complex is a model composition?
  • how explicit and constraint-ready is it?
  • how do two model compositions or versions differ?
  • can a declared use case be represented from the SHACL model alone?
  • which findings should be inspected first?

The tool works from inspectable model specifications and declared input files rather than hidden state or representative instance datasets.

Open Science and Replicability

The analysis is designed to be deterministic:

  • the same inputs should produce the same JSON outputs
  • input files are explicit and versionable
  • result documents are machine-readable and inspectable
  • tests use synthetic local fixtures that do not depend on network access
  • real live-source examples are separated from validation fixtures

This supports open-science practice: the analytical assumptions, input declarations, result payloads, and example applications can all be reviewed and reproduced.
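Because outputs are deterministic JSON documents, reproducibility can be checked mechanically. The sketch below (a minimal illustration; the file names and payload shapes are assumptions, not the workbench's actual output schema) hashes a canonical JSON serialization so that two runs over the same declared inputs can be compared byte-for-byte:

```python
import hashlib
import json

def result_digest(payload: dict) -> str:
    """Hash a canonical serialization of a JSON result document.

    Sorting keys and fixing separators makes the digest independent of
    key order and whitespace, so only the content matters.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two runs over the same declared inputs should yield identical payloads,
# and therefore identical digests, even if key order differs in memory.
run_a = {"metrics": {"nodes": 42, "constraints": 17}, "inputs": ["model.ttl"]}
run_b = {"inputs": ["model.ttl"], "metrics": {"constraints": 17, "nodes": 42}}
assert result_digest(run_a) == result_digest(run_b)
```

Comparing digests rather than raw files tolerates incidental formatting differences while still detecting any change in the analytical content.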

Agent-Oriented Use

The workbench is designed to be usable both directly by humans and by AI agents orchestrating analytical workflows.

The agent-oriented design choices are:

  • explicit file-based inputs
  • stable package API and CLI operations
  • JSON result envelopes
  • built-in discovery through capabilities, schema, vocabulary, and template
  • no chat-specific behavior inside the analytical core

AI agents can therefore help prepare inputs, run the pipeline, inspect outputs, and explain results while the underlying analysis remains deterministic.
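As an illustration of why JSON result envelopes suit agent orchestration, the sketch below parses a hypothetical envelope and triages its findings. The field names (`operation`, `result`, `findings`, `severity`) are assumptions for illustration only, not the workbench's documented schema:

```python
import json

# Illustrative envelope shape only; the actual schema is discoverable
# through the workbench's built-in schema and capabilities operations.
envelope_text = """
{
  "operation": "assess",
  "inputs": ["passport-model.ttl"],
  "result": {"findings": [{"id": "F1", "severity": "warning"}]},
  "summary": {"finding_count": 1}
}
"""

envelope = json.loads(envelope_text)

# An agent can route on the operation name and filter findings by
# severity without any chat-specific behavior in the analytical core.
warnings = [f for f in envelope["result"]["findings"]
            if f["severity"] == "warning"]
print(envelope["operation"], len(warnings))
```

Because the envelope is plain, machine-readable JSON, the same inspection logic works whether the consumer is a human script, a workflow runner, or an agent.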

Release-1 Scope

Release 1 provides:

  • SHACL ingestion from Turtle sources
  • composition-profile assessment
  • SHACL-only use-case coverage checks
  • pairwise comparison of assessment results
  • rule-based prioritization
  • discovery of built-in schemas, vocabularies, templates, and capabilities

Documentation Map

  • Analysis Dimensions: what the tool measures and reports.
  • Pipeline: operations, inputs, outputs, and result documents.
  • API Reference: Python package functions for humans, agents, and workflow runners.
  • Result Interpretation: how to read rich JSON outputs and deterministic summaries.
  • Example Applications: tutorial-style analytical progression for humans and AI agents.
  • MCP Server: stdio MCP runtime, registry discovery, packaging, and publication shape.
  • Source Ingestion: first tutorial step for profile loading and live URL examples.

Repository Validation

The repository-native validation path is:

make validate

This runs compile checks, the test suite, and lightweight CLI smoke checks without requiring an editable package install.


Funded by the European Union under Grant Agreement No. 101092281 — CE-RISE.
Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or the granting authority (HADEA). Neither the European Union nor the granting authority can be held responsible for them.

CE-RISE logo

© 2026 CE-RISE consortium.
Licensed under the European Union Public Licence v1.2 (EUPL-1.2).
Attribution: CE-RISE project (Grant Agreement No. 101092281) and the individual authors/partners as indicated.

NILU logo

Developed by NILU (Riccardo Boero — ribo@nilu.no) within the CE-RISE project.