Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 26-27.
To be trustworthy, a system needs to be resilient and to consistently deliver outcomes that are aligned with stakeholder interests and expectations. Several factors can affect digital trust, such as a security vulnerability or biased data, which can lead to erroneous analysis, misinformation, or device failure. In operational environments, dynamic factors such as evolving security and safety risks can further erode digital trust.
The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven guidelines for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]
IBM has summarized similar principles for trustworthy AI: accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn et al. 2024).[ii]
For the purposes of this discussion, Trustworthy AI can be defined simply as AI that remains continuously aligned with the interests and objectives of the system’s stakeholders. To achieve this, a Trustworthy AI system requires technological components that enable adaptive compliance with user preferences on privacy, data sharing, and the objectives for which the AI-enabled system is used, as well as with changing regulatory and technical requirements, digital trust levels, and threats.
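As a minimal illustration of what such a component could look like, the sketch below checks a requested data-sharing action against a user’s stated preferences before it is allowed to proceed. The preference fields, purposes, and thresholds are hypothetical placeholders, not drawn from any particular framework or regulation.

```python
from dataclasses import dataclass, field

# Illustrative user-preference record; the field names are hypothetical,
# not taken from any specific trust framework or regulation.
@dataclass
class UserPreferences:
    allow_data_sharing: bool = False
    permitted_purposes: set = field(default_factory=set)
    max_retention_days: int = 30

def is_request_compliant(prefs: UserPreferences, purpose: str, retention_days: int) -> bool:
    """Return True only if a data-sharing request matches the user's preferences."""
    return (
        prefs.allow_data_sharing
        and purpose in prefs.permitted_purposes
        and retention_days <= prefs.max_retention_days
    )

prefs = UserPreferences(allow_data_sharing=True,
                        permitted_purposes={"health-monitoring"},
                        max_retention_days=90)
print(is_request_compliant(prefs, "health-monitoring", 60))  # True
print(is_request_compliant(prefs, "marketing", 10))          # False: purpose not permitted
```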
‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI that can demonstrate compliance with regulations, user preferences, and the AI HLEG guidelines using decentralized identifiers (DIDs) with verifiable credentials (VCs).
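To make the role of VCs concrete, the sketch below shows the general shape of a W3C-style verifiable credential as a plain Python dictionary. The issuer and subject DIDs and the claim are invented for illustration, and the proof is abbreviated rather than produced by a real signature suite.

```python
import json

# Illustrative credential following the general shape of the W3C VC data model.
# The DIDs and claim values are made up; a real credential's "proof" would be
# generated and verified by a cryptographic signature suite.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer-123",
    "issuanceDate": "2025-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:subject-456",
        "complianceClaim": "ai-risk-assessment-completed",
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:issuer-123#key-1",
        "proofValue": "<signature omitted>",
    },
}

print(json.dumps(credential, indent=2))
```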
More than 20 notable public, non-profit, and private organisations are developing trust frameworks and services to manage digital trust and data exchange with VCs. Examples include the EU eIDAS regulation, ETSI, EBSI, the EUDI Wallet initiative, EUROPEUM-EDIC, Gaia-X, FIWARE, the iSHARE Foundation, the Eclipse Foundation, MyData Global, and the U.S.-based NIST, IETF, W3C, Trust over IP Foundation, and Linux Foundation.
To promote harmonization of digital solutions, such as trust frameworks, across initiatives, the EU adopted the Interoperable Europe Act in 2024. The Act is accompanied by a framework, label, and checklist to ensure that publicly funded digital services adhere to requirements for openness and reuse, and it is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed a Trust Management Infrastructure (TRAIN). Interoperability is particularly important for trust frameworks because it enables automation of data exchange and VC policy enforcement, and interoperability and automation together make trust frameworks adaptive.
Trust frameworks are generally based on three predominant models: credential-based trust, reputation-based trust, and trust in information resources based on both the credentials and the past behaviour of entities.[iii] Research is needed to develop more sophisticated and autonomous systems that are also adaptive.
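One simple way to picture how the credential- and reputation-based models can be combined is the sketch below: credential verification acts as a gate, past behaviour is summarised as a reputation score, and the two are blended into a single trust value. The weights and the blending formula are arbitrary placeholders for illustration, not values proposed in the cited meta-model.

```python
def trust_score(credential_valid: bool, past_outcomes: list, credential_weight: float = 0.6) -> float:
    """Blend credential-based and reputation-based trust into a score in [0, 1].

    credential_valid: result of verifying the entity's credentials (acts as a gate).
    past_outcomes:    interaction history (True = the entity behaved as expected).
    The weighting below is an arbitrary illustration, not a standardised formula.
    """
    if not credential_valid:
        return 0.0  # no valid credentials -> no trust, regardless of reputation
    reputation = sum(past_outcomes) / len(past_outcomes) if past_outcomes else 0.5
    return credential_weight + (1.0 - credential_weight) * reputation

print(trust_score(True, [True, True, False, True]))  # 0.9
print(trust_score(False, [True] * 10))               # 0.0
```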
One area of research is directed toward developing frameworks and scorecards for measuring trust, risk, and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices, and digital supply chains; when these frameworks are combined with Adaptive AI, there is potential to automate compliance with rapidly changing technologies, security risks, and user contexts.
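As one illustration of how a measured risk score could drive adaptive behaviour, the sketch below maps a normalised risk value to the operations an AI-enabled service is permitted to perform. The thresholds, operation names, and the idea of a single scalar score are assumptions made for illustration; they are not taken from the NIST AI Risk Management Framework or any other specific scorecard.

```python
def allowed_operations(risk_score: float) -> list:
    """Map a normalised risk score (0 = low, 1 = high) to permitted operations.

    Thresholds and operation names are illustrative placeholders; a real
    deployment would derive them from its risk-management framework and policies.
    """
    if risk_score < 0.3:
        return ["autonomous-decisions", "data-sharing", "model-updates"]
    if risk_score < 0.7:
        return ["data-sharing", "model-updates"]  # decisions require human review
    return ["read-only"]                          # high risk: restrict to monitoring

# The risk score would be recomputed as threat levels or user context change.
for score in (0.1, 0.5, 0.9):
    print(score, "->", allowed_operations(score))
```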
Understanding and engineering the human factors in these systems is an important research area with high relevance for Industry 5.0, Industry 6.0, and lifelong learning. Secure systems for data exchange with human-in-the-loop (HITL) models are needed to monitor, experiment with, and build knowledge of human factors in AI and immersive systems. Knowledge of human factors can inform strategies for productivity enhancement and risk mitigation, which in turn can guide the development of HITL models for trustworthy AI systems.
[i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
[ii] Gomstyn A., McGrath A., Jonker A. (2024). What is trustworthy AI? IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai
[iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science. International Conference on Digital Sovereignty (ICDS). DOI: 10.1016/j.procs.2025.02.067