Building the Trust Infrastructure for AI-Powered Value Chains
According to Gartner’s 2025 Hype Cycle for AI, generative AI has officially entered the “Trough of Disillusionment.” PwC’s 2026 Global CEO Survey found that only 12% of CEOs say AI has delivered both cost and revenue benefits. MIT’s research puts it even more starkly: 95% of enterprise generative AI projects have failed to show measurable financial returns.
These numbers don’t surprise me. Not because the technology isn’t capable (it clearly is), but because most organisations have focused on the AI models while underinvesting in what actually makes AI deployable: trusted data, proper authorisation, and infrastructure that connects AI to real business systems in a controlled way.
That’s what we’ve been focusing on at Compera. I want to share where we’re heading and why we’re making the architectural choices we are.
Companies need to collect and manage data across their entire value chains — sustainability metrics, compliance documentation and prequalification records. Under frameworks like VSME and CSRD, and standards like ISO 9001, 14001, and 27001, this data needs to flow between organisations: from customers to suppliers, and from those suppliers to their own suppliers, cascading through the chain.
Our platform already handles this in production. Companies use Compera to register suppliers, request standardised data and documentation, and those suppliers in turn register and request data from their own suppliers — creating a living, growing graph of the value chain.
But collecting data is only the beginning. The real question is: what can you do with it once you have it?
Most platforms treat supplier data as rows in a table. But a value chain is a network — companies connected to suppliers, connected to their suppliers, with relationships, dependencies, and data flowing in multiple directions. When you model it as a graph, entirely new possibilities open up.
Graph data science algorithms can identify single points of failure — the one supplier three tiers deep that half your critical components depend on. Community detection can reveal clusters of non-compliant suppliers, or geographic concentrations of risk that aren’t visible in tabular data. Centrality analysis can show which nodes in your network have the most influence on your overall compliance posture. Link prediction can even infer likely supplier relationships that haven’t been explicitly mapped yet.
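To make the single-point-of-failure idea concrete, here is a minimal sketch in plain Python with an entirely fabricated value chain (all company names are hypothetical). It counts, for every supplier, how many companies transitively depend on it; a production system would run this kind of analysis in a graph database or a library such as networkx rather than hand-rolled traversals.

```python
from collections import defaultdict

# Toy value-chain graph: company -> direct suppliers (all names hypothetical).
supplies = {
    "Acme": ["SupA", "SupB"],
    "SupA": ["SupC"],
    "SupB": ["SupC", "SupD"],
    "SupC": ["SupE"],
    "SupD": [],
    "SupE": [],
}

def transitive_suppliers(company, graph):
    """All suppliers reachable from `company`, any number of tiers deep."""
    seen, stack = set(), list(graph.get(company, []))
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(graph.get(s, []))
    return seen

def dependency_counts(graph):
    """For each supplier, how many companies in the graph depend on it."""
    counts = defaultdict(int)
    for company in graph:
        for s in transitive_suppliers(company, graph):
            counts[s] += 1
    return dict(counts)

counts = dependency_counts(supplies)
# SupE sits three tiers deep, yet four companies depend on it: exactly the
# kind of concentration risk a flat supplier table would not reveal.
print(sorted(counts.items(), key=lambda kv: -kv[1]))
```

The same adjacency structure feeds the other analyses mentioned above: community detection and centrality are standard graph algorithms once the value chain is modelled as nodes and edges.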
This is the analytical foundation we’re building on top of our existing data collection platform.
We’re building the next layer of our platform around GraphQL APIs rather than traditional REST endpoints. The reason is practical: value chain data is complex and nested. A single query might need company information, their compliance status across multiple standards, supplier relationships, and sustainability metrics — all in one request. GraphQL lets consumers ask for exactly what they need, nothing more. For customers integrating our data into their own systems, whether that’s a Power BI dashboard or an internal tool, this flexibility matters.
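As an illustration of the “one request, exactly the fields you need” point, here is a hypothetical GraphQL query against a Compera-style schema. The field names are invented for this sketch, not the actual API; the shape is what matters.

```python
# A hypothetical GraphQL query (illustrative schema, not the real API):
# one request fetches a company, its compliance status per standard, and
# sustainability metrics for its direct suppliers.
QUERY = """
query SupplierOverview($id: ID!) {
  company(id: $id) {
    name
    compliance(standards: [ISO9001, ISO14001, ISO27001]) {
      standard
      status
    }
    suppliers(tier: 1) {
      name
      sustainabilityMetrics { co2eTonnes reportingYear }
    }
  }
}
"""

# The consumer names exactly the fields it needs; a typical REST design
# would need several endpoints, or over-fetch, to assemble the same view.
wanted_fields = ["compliance", "suppliers", "sustainabilityMetrics"]
assert all(f in QUERY for f in wanted_fields)
```

A Power BI dashboard and an internal risk tool can each send their own variant of this query and receive only the slice of the graph they asked for.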
On top of the GraphQL layer, we’re developing MCP (Model Context Protocol) servers. MCP is an open standard introduced by Anthropic that provides a standardised way for AI agents to connect to data sources and tools. Our MCP servers wrap our GraphQL API, exposing value chain data and tools so that AI agents can query supplier networks, check compliance status, or identify risk patterns — all through a structured, well-defined protocol.
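Conceptually, an MCP tool is a JSON-schema description plus a handler. The sketch below models only that shape in plain Python (the official MCP SDKs handle the actual protocol, and every name here, including the stubbed GraphQL call, is illustrative rather than our real implementation).

```python
import json

# MCP-style tool definition: the schema an AI agent sees when it lists tools.
CHECK_COMPLIANCE_TOOL = {
    "name": "check_compliance_status",
    "description": "Return a supplier's status for a given ISO standard.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "supplier_id": {"type": "string"},
            "standard": {"type": "string",
                         "enum": ["ISO9001", "ISO14001", "ISO27001"]},
        },
        "required": ["supplier_id", "standard"],
    },
}

def run_graphql(query: str, variables: dict) -> dict:
    # Stand-in for an HTTP call to the GraphQL API; returns canned data.
    return {"company": {"compliance": [
        {"standard": variables["standard"], "status": "VALID"}]}}

def check_compliance_status(supplier_id: str, standard: str) -> str:
    """Handler the MCP server would invoke when an agent calls the tool."""
    data = run_graphql(
        "query($id: ID!, $s: Standard!) { company(id: $id) "
        "{ compliance(standards: [$s]) { status } } }",
        {"id": supplier_id, "standard": standard},
    )
    status = data["company"]["compliance"][0]["status"]
    return json.dumps({"supplier_id": supplier_id,
                       "standard": standard, "status": status})
```

Because the tool simply delegates to the GraphQL layer, the same data model and (crucially) the same authorisation checks serve both human integrations and AI agents.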
This two-layer approach — GraphQL for humans and integrations, MCP for AI agents — means we’re building once and serving both audiences.
Here’s what we’ve come to believe is the most critical piece: you can’t deploy AI agents in a business context without solving authorisation first.
Consider a realistic scenario: an AI agent helping a procurement team analyse supplier risk across their value chain. That agent needs access to supplier data, compliance records and prequalification status. But not all of it: maybe it can see Tier 1 supplier details but not Tier 3 financial data. Maybe it can access ISO certification status for active contracts in its region but not for suppliers under a different business unit. Traditional role-based access control wasn’t designed for this level of granularity.
We need what we call a context-aware, granular control layer (or trust layer) – a system where access and use decisions are based on live context, relationships and metadata rather than simple role assignments. In a knowledge graph, you can model that “Company A supplies Component X to Company B under Contract Y” and derive access rules from those relationships. Different AI agents get different views of the same underlying data, with a full audit trail of what was accessed and why.
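The “access derived from relationships” idea can be sketched in a few lines. This is a deliberately tiny in-memory model with fabricated companies and contracts; a real KBAC system such as IndyKite’s evaluates these decisions against a live knowledge graph. The point is that the decision reads the edge’s context (contract status), not a role table, and that every decision is audit-logged.

```python
# Relationship-based (KBAC-style) authorisation sketch. Edges carry context,
# and access is derived from relationships at decision time, not from roles.
# (subject, relation, object, attributes) -- all data fabricated.
EDGES = [
    ("CompanyA", "SUPPLIES", "CompanyB", {"contract": "Y", "status": "active"}),
    ("CompanyC", "SUPPLIES", "CompanyB", {"contract": "Z", "status": "expired"}),
]

AUDIT_LOG = []

def can_view_supplier_data(viewer: str, supplier: str) -> bool:
    """Allow access only if `supplier` supplies `viewer` under an active contract."""
    decision = any(
        s == supplier and rel == "SUPPLIES" and o == viewer
        and attrs["status"] == "active"
        for s, rel, o, attrs in EDGES
    )
    # Every decision is recorded: who asked, about whom, and the outcome.
    AUDIT_LOG.append({"viewer": viewer, "supplier": supplier, "allowed": decision})
    return decision

print(can_view_supplier_data("CompanyB", "CompanyA"))  # True: active contract
print(can_view_supplier_data("CompanyB", "CompanyC"))  # False: contract expired
print(can_view_supplier_data("CompanyB", "CompanyD"))  # False: no relationship
```

Swap the viewer for an AI agent acting on a user’s behalf and the same mechanism yields the “different agents, different views of the same data” property described above.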
This is why we collaborate with IndyKite, whose platform is built around exactly this concept: knowledge graph-based authorisation that evaluates relationships and context at runtime, not just roles and permissions. Their approach to Knowledge-Based Access Control (KBAC) aligns naturally with the graph structure of value chain data and gives us the fine-grained, context-aware security layer that AI agent deployment demands.
This trust layer is the piece most companies skip. They build the AI capabilities but can’t actually deploy them because security and compliance teams — rightly — won’t approve production access without proper controls.
For a company using Compera to manage their value chain data, this architecture enables:
- AI-assisted compliance: Agents that can answer “Which of our suppliers haven’t submitted their ISO 14001 documentation?”, “What are our highest-risk suppliers related to xx?”, or “How would this regulatory change affect our reporting obligations?”, all with proper access controls.
- Supply chain transparency: Graph queries that trace material origins, identify concentration risks, and map dependencies across multiple tiers.
- Prequalification at scale: Automated collection and verification of ISO compliance documentation cascading through the entire supplier network, not just Tier 1.
- Secure multi-stakeholder access: Different stakeholders — customers, suppliers, auditors — see exactly the data they’re authorised to see, enforced at the API level.
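The first capability above is, at its core, a filter over the collected graph. A toy sketch with fabricated records shows the shape of the query an agent would run (via the MCP tools) once the documentation has been collected:

```python
# Fabricated supplier records for illustration: which suppliers, at any
# tier, are missing ISO 14001 documentation?
suppliers = [
    {"name": "SupA", "tier": 1, "docs": {"ISO9001", "ISO14001"}},
    {"name": "SupB", "tier": 1, "docs": {"ISO9001"}},
    {"name": "SupC", "tier": 2, "docs": set()},
]

def missing_standard(records, standard: str):
    """Names of suppliers with no submitted documentation for `standard`."""
    return [s["name"] for s in records if standard not in s["docs"]]

print(missing_standard(suppliers, "ISO14001"))  # ['SupB', 'SupC']
```

Because the records span tiers, the same query covers the cascading prequalification case: Tier 2 and Tier 3 gaps surface alongside Tier 1.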
We are a team based in Norway, working closely with our trusted partners SmplCo and IndyKite to build a scalable platform for trusted data and AI-ready value chains.
Looking ahead, the GraphQL and MCP layers lay the foundation for the advanced graph analytics and AI agent capabilities that are central to our product vision.
We are building for a rapidly evolving field, where trust, identity, and data provenance are becoming critical enablers for real-world AI adoption. Our strength lies in a clear architectural vision and a modular approach that allows us to scale with both technology and market needs.
We believe the trust and identity layer is a core prerequisite for AI systems that can be deployed safely and effectively in complex business environments.
If you’re working on similar challenges — making AI trustworthy enough for real business use, or modelling complex value chains as graphs — we’d love to hear from you.
Andreas is the CTO at Compera AS, a Norwegian value chain technology company building AI-powered compliance and value chain intelligence platforms.
13 February 2026