Author: christianhaukaas

  • Decentralised AI

    Decentralised AI

    Massachusetts Institute of Technology (MIT) Media Lab. “Decentralized AI”. Accessed 24.08.2025. https://www.media.mit.edu/projects/decentralized-ai/overview/

    As AI evolves beyond screen-based assistants and into multi-dimensional, real-world applications, decentralization emerges as the critical factor for unlocking its full potential.

    Introduction

    The AI landscape is at a crossroads. While advances continue, concerns mount about job displacement and data monopolies. Centralized models, dominated by a few large companies, are reaching their limits. To unlock the true power of AI, we need a new paradigm: decentralized AI.

    Challenges of Centralized AI

    • Limited data access: Siloed data restricts AI’s potential for applications like personalized healthcare and innovative supply chains.
    • Inflexible models: One-size-fits-all models struggle with diverse real-world scenarios, leading to inaccurate and unfair outcomes.
    • Lack of transparency and accountability: With data and algorithms hidden away, trust in AI erodes, hindering adoption and innovation.

    Decentralized AI: A Vision for the Future

    • Data markets: Secure marketplaces enable data exchange while protecting privacy and ensuring fair compensation.
    • Multi-dimensional models: AI that learns from real-world experiences through simulations and agent-based modeling.
    • Verifiable AI: Mechanisms like federated learning and blockchain ensure responsible development and deployment of AI models (a minimal federated-learning sketch follows this list).
    • Exchanges for AI solutions: Platforms where individuals and businesses can access and contribute to AI solutions for diverse needs.
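
    The “verifiable AI” mechanisms above are named only in passing; as a rough illustration, the sketch below shows the core idea of federated learning on a toy linear-regression task in Python with NumPy. The three “sites”, the model, and the learning rate are hypothetical assumptions for illustration, not part of the MIT project.

    ```python
    # Minimal federated-averaging sketch: each site trains locally and only
    # shares model weights; raw data never leaves the site. Illustrative only.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One site's local training: plain gradient descent on linear regression."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def federated_average(global_w, site_data):
        """Aggregate locally trained weights; only weights cross the network."""
        updates = [local_update(global_w, X, y) for X, y in site_data]
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    sites = []
    for _ in range(3):                      # three data holders with private datasets
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        sites.append((X, y))

    w = np.zeros(2)
    for _ in range(20):                     # communication rounds
        w = federated_average(w, sites)
    print("learned weights:", w)            # approaches [2, -1] without pooling raw data
    ```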

    Opportunities in Decentralized AI

    • Democratization of innovation: Individuals and smaller businesses can participate in the AI revolution, creating valuable solutions and capturing economic benefits.
    • Unleashing trillions in economic value: By addressing real-world challenges in healthcare, education, and other sectors, decentralized AI can unlock vast economic opportunities.
    • Building a more equitable and inclusive future: Decentralization empowers individuals and helps address concerns about bias and discrimination in AI.

    The Call to Action

    In this pivotal moment, everyone has a role to play. Businesses must embrace decentralized models, governments should foster collaborative ecosystems, and individuals must become AI literate and contribute their expertise. By working together, we can unlock the true potential of AI and build a more prosperous and equitable future for all.

    Reach out to us at dec-ai@media.mit.edu

    Professor Ramesh Raskar spoke on this topic at EmTech Digital in May 2024

    Research Topics

    #social networks #computer vision #artificial intelligence #data #privacy #machine learning #decision-making

  • Data Space Technologies

    Data Space Technologies

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 12-14.

    Data space technologies are key enablers of AI and data-driven value creation because they address fundamental challenges with data system integration, data curation, verifiability, security, and privacy.

    A data space consists of common standards for organizing and exchanging data and a set of technologies that adhere to those standards. Data space technologies can be open-source or proprietary, but they must adhere to common data space standards and rules. Common standards are needed to ensure that systems and data are interoperable and can use common infrastructure and services, for example to manage identities, system access and data exchange in compliance with European digital regulations.

    Data space technologies with digital trust management frameworks and digital marketplaces are being developed to enable more open and equitable digital ecosystems in Europe, where data and digital assets can be securely exchanged, reused and improved over time.

    The International Data Spaces Association, Gaia-X, FIWARE, the Big Data Value Association and OASC (Open and Agile Smart Cities and Communities) are a few examples of organizations that collectively represent more than 1000 member organisations, 400+ cities, and 100+ national hubs across Europe, Asia, and the Americas, all working on projects to develop common data space architectures and technologies.

    Smart city and community initiatives are particularly relevant for Data Spaces because cities, regions, municipalities, and the public sector need to continuously improve the cost-effectiveness of services across several critical sectors. AI has great potential to support increased productivity, sustainability and community engagement in the digital and green transformation, but solutions are needed to enable secure data exchange and deployment of trustworthy AI across critical sectors and regions. The Living in EU initiative aims to promote citizen-centric collaboration and re-use of solutions, products, and services across a common digital market, to avoid duplicating efforts and expenditure that result in data silos and fragmented infrastructure. Living in EU is promoted by the European Commission, the European Committee of the Regions, the Council of European Municipalities and Regions (CEMR), the European Regions Research and Innovation Network (ERRIN), the European Network of Living Labs (ENoLL), OASC, and Eurocities, a network of over 200 of Europe’s largest cities representing over 150 million people across 38 countries.

    In practice, a data space is built from tools that adhere to the common standards defined in the Data Spaces Blueprint:

    • DSSC Blueprint: The Data Spaces Support Centre (DSSC) blueprint is a comprehensive set of guidelines to support the implementation, deployment and management of data spaces. The Blueprint consists of key concepts, a starter kit, a glossary, a collection of data space standards and the following organisational and technical building blocks (DSSC 2024)[i]:
      • Business, governance and legal building blocks provide guidance to new entrants and operators of infrastructure, software, services and technologies that comply with data space standards. This support includes, but is not limited to, guidance on choices in the design of business models, data products, organisational forms, regulatory compliance and contractual frameworks, supported by services and software.
      • Technical building blocks are divided into foundational standards, control and data planes for exchanging data, and data space services for implementing the technical building blocks. These standards for technologies and services are designed to ensure data interoperability, data sovereignty and trust, and provide enablers for value creation from data, which is one of the ultimate goals of a data space (DSSC 2024).
    • Decentralised identifiers (DIDs) and verifiable credentials (VCs): a key technical building block for Data Spaces is the DID standard developed by the World Wide Web Consortium (W3C), an international standards organization founded in 1994 by Tim Berners-Lee, the inventor of the World Wide Web. A DID is a uniform resource identifier (URI) for an entity (e.g., a person, organization, thing, concept, data model, algorithm, abstract entity, etc.) (W3C 2022).[ii] URIs are used to organize data and services in standardized, machine-readable ontologies and catalogues. This enables systems to find information and navigate ontologies and data catalogues across large networks of distributed systems. A DID goes a step further by providing a method to prove ownership of, or control over, an entity, subject or concept. A DID points to a DID document that uses cryptographic mechanisms to verify credentials related to ownership and to rights to create, access and modify information. This enables a controlling entity to create and modify its own universal identifiers independently of centralised registries, because the controlling entity can use verifiable credentials (VCs) to prove its own identity and its rights to create, modify and access information represented by the DID. VCs provide a set of tamper-evident claims, which support verifiability, traceability and accountability in digital information, also known as data provenance (W3C 2025).[iii] This independent control over verifiable information is known as self-sovereignty and self-sovereign identity (SSI), and it has the potential to revolutionize the internet by making more information verifiable, machine-readable and more easily discoverable across decentralised systems, provided that common semantic web standards are followed for organizing and accessing information (a minimal sketch of a DID document with a signed claim follows this list).
    • Privacy-enhancing technologies (PETs): A DID document can have one or more different representations of information describing a past, current, or desired state of the DID subject. The ability to provide multiple representations of information is an enabler for PETs, because a DID document can use different methods for sharing verifiable information without necessarily transferring or revealing the underlying data. One example is secure multi-party computation (SMPC) with fully homomorphic encryption, which was used by two European hospitals in a pilot project of the European Health Data Space to securely analyse health data for cancer patients without transferring the underlying health data between the hospitals (Ballhausen 2024).[iv] Another example is a zero-knowledge proof (ZKP) to prove that a person holds a required credential, such as an education certificate or a valid driver’s license, or fulfils a minimum age requirement, without revealing the person’s age, date of birth, address, or other unnecessary information. DIDs can also strengthen privacy and security by using attribute-based encryption and access control to authorize access to specific information based on a dynamic set of conditions, such as the privacy preferences of the DID owner and the levels of digital trust or cyber risk of the systems handling information in the digital value chain. In summary, technical standards for DIDs, VCs, and Data Spaces, in combination with EU digital regulations, create a great opportunity for innovation in PETs to address the security and privacy risks of AI-enabled systems in critical sectors.
    • Technical standards have been collected and organized into the following categories:
      • Data Interoperability standards
      • Data Sovereignty and Trust standards
      • Data Value Creation standards
    • DSSC Toolbox: The Toolbox is a curated catalogue of solution implementations (software and non-software tools) that are aligned with the DSSC Blueprint and have passed the Toolbox validation scheme.
      • The Toolbox contains open and closed solutions for technical and organisational functionalities and can be accessed as data space services (DSSC 2024).
      • The Toolbox validation scheme is a self-assessment scheme that enables new solutions and solution providers to be listed in the Toolbox.
    • A digital marketplace is a common way to generate value in a data space (DSSC 2024).[1] The DS Blueprint describes functional specifications for digital marketplaces as part of the Data Value Creation standard and technical building block. The standard enables secure and efficient data exchange and digital transactions using advanced features for data catalogue management with DIDs. A data catalogue using DIDs makes product offerings machine-readable and more easily discoverable within a data space and across data spaces and marketplaces. A marketplace can also “establish a trusted relationship between a data product provider and any user who has searched, found and selected one or more data products from this provider in the data space. It provides the tools required to negotiate conditions for the delivery and use of the products, monitor the process and store all the relevant information, i.e. everything needed to ensure the journey of the provider and the user goes smoothly.” (DSSC 2024).
    • Minimum Interoperability Mechanisms (MIMs) are being developed by the OASC in a standard recommendation to the ITU Telecommunications Standardization Sector (ITU-T) to support data interoperability in Data Spaces for Sustainable and Smart Cities and Communities (DS4SSCC) and ensure compliance with the EU Interoperability Act (EC 2024)[2]. The MIMs Overview provides a description of the concept and role of the following MIMs (OASC 2024)[3]:
      • MIM 1: Context Information
      • MIM 2: Data Models
      • MIM 3: Contracts
      • MIM 4: Trust
      • MIM 5: Transparency
      • MIM 6: Security
      • MIM 7: Places
      • MIM 8: Indicators
      • MIM 9: Analytics
      • MIM 10: Resources
    • MIMs Resources provide additional support for public sector and local administrations in cities and smart communities to learn and experiment with digital transformation initiatives:
      • CITYxCITY Academy: includes an online portal with access to experts, tools and courses.
      • CITYxCITY Catalogue: global collection of deployed solutions, products and best practice.
      • CITYxCITY Festival: annual networking event for the OASC community.
      • Living-in.EU MIMs Plus: an expansion of MIMs with additional technical stacks, tools and management standards for local administrations intended to support broad up-scaling of digital transformation projects in line with the Living in EU initiative, which aims to serve 300 million Europeans. The ‘plus’ banner refers to European specifications and initiatives, such as EIF4SCC, ISA2, CEF, INSPIRE, EIP-SCC, ELISA, LORDI, DIGISER (OASC 2022)[v].
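
    As a rough, non-normative illustration of the DID and VC building blocks described above, the Python sketch below constructs a minimal DID document and a signed, tamper-evident claim using Ed25519 signatures from the PyNaCl library. The DID value, the publicKeyHex field and the document layout are simplified assumptions that only loosely follow the W3C DID Core and VC data models; a conformant implementation would use the registered key formats and proof suites.

    ```python
    # Illustrative sketch: a minimal DID document and a tamper-evident claim.
    # Requires PyNaCl (pip install pynacl). The structure loosely follows the
    # W3C DID Core / VC data models but is NOT a conformant implementation.
    import json
    from nacl.encoding import HexEncoder
    from nacl.signing import SigningKey, VerifyKey

    # 1. The controller generates a keypair and publishes a DID document that
    #    binds the (hypothetical) DID to the public verification key.
    signing_key = SigningKey.generate()
    verify_key_hex = signing_key.verify_key.encode(encoder=HexEncoder).decode()

    did = "did:example:123456789abcdefghi"            # hypothetical DID
    did_document = {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": did,
        "verificationMethod": [{
            "id": f"{did}#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": did,
            "publicKeyHex": verify_key_hex,           # simplified key encoding
        }],
        "authentication": [f"{did}#key-1"],
    }

    # 2. The controller signs a claim about another subject (a VC-like payload).
    claim = {"issuer": did,
             "credentialSubject": {"id": "did:example:holder", "certified": True}}
    signed = signing_key.sign(json.dumps(claim, sort_keys=True).encode())

    # 3. Any verifier can take the public key from the DID document and check
    #    the signature without contacting a central registry.
    vk = VerifyKey(did_document["verificationMethod"][0]["publicKeyHex"].encode(),
                   encoder=HexEncoder)
    vk.verify(signed)                                 # raises BadSignatureError if tampered
    print("claim verified against the DID document")
    ```

    In a data space, a verifier would normally resolve the DID through a registry or resolver service and evaluate the credential against a trust framework, rather than holding the document and key locally as in this toy example.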

    For smaller organisations, such as startups, SMEs and municipalities, data spaces can eliminate the need to make large upfront investments in digital infrastructure for advanced digital platforms and digital twins. Open-source technologies and smart data models can be reused as a foundation platform instead of reinventing systems, data models, communication protocols, services, and security controls. This frees time and financing for value creation, for example by paying startups and smaller specialist service providers to integrate components and customize software and user interfaces to customer needs.

    The concept of distributed computing is not new, but what distinguishes European data space initiatives from hyperscaler ecosystems is the use of common technical standards that ensure interoperability, reduce vendor lock-in, and enable collaboration to improve cybersecurity, data integrity, and fair economic value creation, while complying with important EU digital regulations for privacy, safety, and cyber resilience.


    [1] DSSC (Data Spaces Support Centre) (2024). Data Spaces Blueprint v1.5. Data Spaces Support Centre. https://dssc.eu/space/BVE/357076678/Marketplace+Functional+Specifications. Accessed 22.01.2025.
    [2] European Commission. Press release 26.08.2024. Minimal Interoperability Mechanisms: Advancing Europe’s digital future. https://data.europa.eu/en/news-events/news/minimal-interoperability-mechanisms-advancing-europes-digital-future
    [3] Open and Agile Smart Cities and Communities (OASC) (2024). Draft Recommendation ITU-T Y.MIM. May 2024. https://mims.oascities.org/mims/y.mim-overview
    [i] DSSC (Data Spaces Support Centre) (2024). Data Spaces Blueprint v1.5. Data Spaces Support Centre. https://dssc.eu/space/bv15e/766061169/Data+Spaces+Blueprint+v1.5+-+Home. Accessed 22.01.2025.

    [ii] Sporny M., Longley D., Sabadello M., Reed D., Steele O., Allen C.; World Wide Web Consortium (W3C) (2022). Decentralized Identifiers (DIDs) v1.0. W3C Recommendation 19.07.2022. https://www.w3.org/TR/did-core/

    [iii] Sporny M., Longley D., Chadwick D., Herman I.; World Wide Web Consortium (W3C) (2025). Verifiable Credentials Data Model v2.0. W3C Candidate Recommendation Draft. 27.01.2025. https://www.w3.org/TR/vc-data-model-2.0/

    [iv] Ballhausen, H., Corradini, S., Belka, C. et al. (2024). Privacy-friendly evaluation of patient data with secure multiparty computation in a European pilot study. npj Digit. Med. 7, 280 (2024). https://doi.org/10.1038/s41746-024-01293-4

    [v] LI.EU Technical sub-group chaired by OASC (2022). MIMs Plus version 5.0 final draft. June 2022. https://living-in.eu/mimsplus

  • Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry

    Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry

    Moradi M., Hepsø V., Schiefloe P.M. (2024). “Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry.” Journal of Innovation & Knowledge, Volume 9, Issue 3, 100523, ISSN 2444-569X. https://doi.org/10.1016/j.jik.2024.100523 (https://www.sciencedirect.com/science/article/pii/S2444569X24000623)

    Abstract

    There has been a growing interest in open-source innovation and collaborative software development ecosystems in recent years, particularly in industries dominated by intellectual property and proprietary practices.

    However, consortiums engaged in these collaborative efforts often face difficulties in effectively balancing the competing dynamics of trust and power. Collaborative knowledge creation is pivotal in ensuring long-term sustainability of the ecosystem; knowledge sharing can take place by steering trust judgments toward fostering reciprocity.

    Drawing on a longitudinal case study of the Open Subsurface Data Universe ecosystem, we investigate the intricate interplay between trust and power and its pivotal influence on ecosystem governance. Our investigation charts the trajectory of trust and power institutionalization and reveals how it synergistically contributes to the emergence of comprehensive hybrid governance strategies.

    We make the following two contributions to extant research. First, we elucidate a perspective on the conceptual interplay between power and trust, conceiving these notions as mutual substitutes and complements. Together, they synergistically foster the institutionalization and dynamic governance processes in open-source ecosystems. Second, we contribute to the governance literature by emphasizing the significance of viewing governance as a configuration of institutionalization processes and highlighting the creation of hybrid forms of governance in complex innovation initiatives.

    Keywords: Open source; Innovation; Cocreation; Governance; Institutional trust; Power

  • Towards safer healthcare

    Towards safer healthcare

    SITRA. “Towards safer healthcare.” Accessed 14.08.2025. https://www.sitra.fi/en/publications/towards-safer-healthcare/.

    Insights on the European action plan on cybersecurity for hospitals and healthcare providers

    Writers: Markus Kalliola (Sitra), Mikko Huovila (Nordic Healthcare Group) and Marianne Lindroth (DNV Cyber)

    Published: May 7, 2025

    The healthcare sector is increasingly vulnerable to cyber threats due to outdated systems, fragmented practices and risks associated with human errors. Despite advancements in regulatory efforts and technical solutions, implementation remains inconsistent. Emerging technologies such as artificial intelligence (AI) and quantum computing add both urgency and complexity to securing healthcare environments. 

    The EU’s expanding cybersecurity legislation is significantly impacting various sectors, including healthcare. The primary goal is to harmonise practices and enhance the resilience of critical entities, products and infrastructure. New instruments like the Directive on measures for a high common level of cybersecurity across the Union (NIS2), Cyber Resilience Act and AI Act broaden the scope of entities covered and introduce stricter requirements, raising the bar for compliance and emphasising the need for robust security in the interconnected digital landscape. 

    Europe has awakened to the need for taking further actions to protect healthcare. The European cybersecurity action plan for hospitals and healthcare providers, published by the European Commission in January 2025, arrives at a crucial time with several strong proposals to bolster healthcare security.  

    Sitra presents seven proposals for improving the preparedness of the EU and its member states against cyber threats. Building a single market for cybersecurity and making collaboration tangible through pan-European cybersecurity exercises are among the things to consider.  

    With all actions set to improve cybersecurity, clear targets are needed to measure their impact. This applies to the Commission’s action plan proposals for the EU and member states, but also at the grassroots level, to how cybersecurity maturity is measured and improved in healthcare organisations.

    Improving cybersecurity resilience requires healthcare organisations to address all stages of cybersecurity – before, during and after incidents. Cybersecurity should be further integrated into comprehensive security, with adequate resources allocated to healthcare organisations. A well-functioning single market is part of cybersecurity resilience, and European companies must play a significant role in it.

    Finland serves as a case study for how cybersecurity is organised in healthcare within an EU member state. In Finland’s comprehensive security model, cybersecurity responsibilities are distributed among various authorities. Healthcare organisations hold the primary responsibility, supported and guided by multiple authorities. Roles and responsibilities are clearly defined under normal circumstances, with the national cybersecurity strategy outlining priority actions. 

  • The Draghi report on EU competitiveness

    The Draghi report on EU competitiveness

    European Commission. “Draghi report.” Accessed 14.08.2025. https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en.

    The future of European competitiveness: Report by Mario Draghi

    Mario Draghi – former European Central Bank President and one of Europe’s great economic minds – was tasked by the European Commission with preparing a report setting out his personal vision for the future of European competitiveness.

    The report looks at the challenges faced by industry and companies in the Single Market. It outlines how Europe will no longer be able to rely on many of the factors that have supported growth in the past, lays out a clear diagnosis, and provides concrete recommendations to put Europe onto a different trajectory.

    Background

    Today, Europe stands united in its pursuit of inclusive economic growth, focusing on 

    • sustainable competitiveness
    • economic security
    • open strategic autonomy
    • fair competition

    They all serve as pillars of prosperity. 

    The vision that drives Europe forward is to create conditions where businesses thrive, the environment is protected, and everyone has an equal chance at success.

    Sustainable competitiveness should make sure businesses are productive and environmentally friendly. Economic security ensures that our economy can handle challenges and protect jobs. With open strategic autonomy, Europe is not just open for business but is also shaping a better, fairer world.

    Next steps

    The findings of the Draghi report are contributing to the Commission’s work on a new plan for Europe’s sustainable prosperity and competitiveness, and in particular to the development of the new Clean Industrial Deal for competitive industries and quality jobs, which will be presented in the first 100 days of the new Commission mandate.

    Many of its recommendations are reflected in the Commission’s Political Guidelines as well as the mission letters of the President of the European Commission to the members of the College.

    In January 2025, the Commission presented the Competitiveness Compass, a new roadmap to restore Europe’s dynamism and boost economic growth. The Compass builds on the analysis of the Draghi report and provides a strategic framework to drive the Commission’s work for the next five years.

  • EU’s Competitiveness Compass

    EU’s Competitiveness Compass

    European Commission. “Competitiveness Compass.” Accessed 14.08.2025. https://commission.europa.eu/topics/eu-competitiveness/competitiveness-compass_en.

    Our plan to reignite Europe’s economy

    Over the last two decades, Europe’s potential has remained strong, even as other major economies have grown at a faster pace.

    The EU has everything it takes to unlock its full potential and drive faster, more sustainable growth: we boast a talented and educated workforce, capital, savings, the single market, and a unique social model. To restore our competitiveness and unleash growth, we need to tackle the barriers and weaknesses that are holding us back.

    In January 2025, the Commission presented the competitiveness compass, a new roadmap to restore Europe’s dynamism and boost our economic growth.

    Three necessities for a more competitive EU

    The compass builds on the analysis of Mario Draghi’s report on the future of European competitiveness.

    The Draghi report originally identified three necessities for the EU to boost its competitiveness: 

    1. Closing the innovation gap 
    2. Decarbonising our economy
    3. Reducing dependencies

    The compass sets out an approach to translate these necessities into reality. 

  • INN-the-Loop: Human-Centered Artificial Intelligence

    INN-the-Loop: Human-Centered Artificial Intelligence

    “INN-the-Loop” was a 2024 research project on artificial intelligence (AI) for critical sectors that analysed limitations and risks related to machine learning (ML), generative AI (GenAI) and agentic AI. It identified significant risks and barriers, as well as solutions for the adoption of AI in healthcare and other critical sectors that demand high standards of accuracy, safety, security, and privacy.

    The project was managed by the University of Inland Norway (INN) with contributions from the Norwegian Computing Center (NR), SINTEF, NTNU SFI NORCICS (Norwegian Center for Cybersecurity in Critical Sectors), and VentureNet AS. The project received co-financing from the Regional Research Fund (RFF) Innlandet, supported by Innlandet County and the Research Council of Norway.

    Abstract

    Building on rapid development and investment in Artificial Intelligence (AI), the year 2025 heralded “Agentic AI” as the new frontier for Generative AI (GenAI). The implication is that virtual assistants will be able to autonomously solve problems, set goals, automate workflows, generate documents, and enhance the productivity of the humans who use AI-supported systems.

    However, for Agentic AI to be suitable for use in critical sectors, a solution is needed to address inherent limitations of AI related to accuracy, safety, security, adaptivity, trustworthiness, and sustainability. This article summarizes results from a 2024 research project with leading Norwegian research institutions titled “INN-the-Loop”. The aim of the project was to pre-qualify a framework to design, develop and test human-centric AI systems for critical sectors, with a focus on smart healthcare as a use case. The project’s findings on AI risks shed light on the importance of digital regulation to ensure safety and security, while also presenting possible solutions for compliance automation to cost-effectively cope with changing regulatory, technical and risk landscapes.

    This article describes a framework, methodology and system/toolkit to develop trustworthy and sustainable AI-systems with Humans-In-The-Loop (HITL). The framework aims to address limitations and risks of current AI approaches by combining human-centred design with “Data Space” technologies, including privacy-enhancing technologies (PETs) for decentralised identity and data access management.

    The project’s results are aligned with European initiatives to develop federated, sustainable and sovereign digital infrastructure for high-performance computing (HPC) and edge computing. The results can inform the design and planning of next-generation digital infrastructure, including local digital twins (LDT) and interconnected digital marketplaces, which can strengthen supply chain resilience in critical sectors.

    Download the full research report.

  • Trustworthy AI

    Trustworthy AI

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 26-27.

    To be trustworthy, a system needs to be resilient and consistently deliver outcomes that are aligned with stakeholder interests and expectations. Several factors can undermine digital trust, such as a security vulnerability or biased data that leads to erroneous analysis, misinformation or device failure. In operational environments, dynamic factors such as evolving security and safety risks can also affect digital trust.

    The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven guidelines for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]

    IBM has summarized similar principles for trustworthy AI: accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn 2024).[ii]

    For the purposes of discussion, the definition of Trustworthy AI can be simplified to: an AI system that remains continuously aligned with the interests and objectives of the system’s stakeholders. To achieve this, a trustworthy AI system requires technological components that enable adaptive compliance with user preferences on privacy, data sharing and the objectives of using the AI-enabled system, as well as with changing regulatory and technical requirements, digital trust levels and threats.
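
    As a hypothetical sketch of what such adaptive compliance could look like at the level of a single access decision, the Python snippet below checks a data-sharing request against the data subject’s consented purposes and a minimum trust level. The Consent and AccessRequest structures, thresholds and values are illustrative assumptions, not part of any cited framework.

    ```python
    # Hypothetical sketch of an adaptive access-control check: a request is
    # granted only if it matches the data owner's consent and the current
    # trust level of the requesting system. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Consent:
        allowed_purposes: set[str]     # purposes the data subject has consented to
        min_trust_level: float         # minimum trust score required of requesters

    @dataclass
    class AccessRequest:
        requester: str
        purpose: str
        trust_level: float             # e.g. supplied by a trust framework / scorecard

    def decide(consent: Consent, request: AccessRequest) -> bool:
        """Grant access only when purpose and current trust level satisfy consent."""
        return (request.purpose in consent.allowed_purposes
                and request.trust_level >= consent.min_trust_level)

    consent = Consent(allowed_purposes={"clinical-research"}, min_trust_level=0.8)

    print(decide(consent, AccessRequest("hospital-a", "clinical-research", 0.9)))  # True
    print(decide(consent, AccessRequest("ad-broker", "marketing", 0.95)))          # False: purpose not consented
    print(decide(consent, AccessRequest("hospital-b", "clinical-research", 0.5)))  # False: trust level too low
    ```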

    ‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI, which can demonstrate compliance with regulations, user preferences, and the AI HLEG guidelines using DIDs with verifiable credentials (VCs).

    There are more than 20 notable public, non-profit and private organisations that are developing trust frameworks and services to manage digital trust and data exchange with VCs. Some examples are the EU eIDAS regulation, ETSI, EBSI, EUDI Wallet initiative and EUROPEUM-EDIC, Gaia-X, FIWARE, iShare foundation, Eclipse foundation, MyData Global, and the U.S.-based NIST, IETF, W3C, Trust over IP and Linux Foundation.

    To promote harmonization of digital solutions, such as trust frameworks, across initiatives, the EU passed an Interoperability Act in 2024. The Interoperability Act is accompanied by a framework, a label, and a checklist to ensure that publicly funded digital services adhere to requirements for openness and reuse. The Act is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed a Trust Management Infrastructure (TRAIN). Interoperability is particularly important for trust frameworks because it enables automation of data exchange and VC policy enforcement, and automation in turn is what makes trust frameworks adaptive.

    Trust frameworks are generally based on three predominant models: credentials-based trust, reputation-based trust and trust in information resources based on credentials and past behaviour of entities.[iii] Research is needed to develop more sophisticated and autonomous systems that are also adaptive.

    One area of research is directed toward developing frameworks and scorecards for measuring trust, risk and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices and digital supply chains, and when these frameworks are combined with Adaptive AI, there is potential to automate compliance with rapidly changing technology, security-risk and user-context landscapes.
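
    The snippet below is a hypothetical example of a weighted trust scorecard of the kind described above. The dimensions, weights and scores are invented for illustration and are not taken from the NIST AI Risk Management Framework or any other standard.

    ```python
    # Illustrative weighted scorecard for digital trust. The dimensions and
    # weights below are hypothetical, not drawn from any published framework.
    TRUST_WEIGHTS = {
        "robustness": 0.25,
        "security": 0.25,
        "privacy": 0.20,
        "transparency": 0.15,
        "fairness": 0.15,
    }

    def trust_score(assessments: dict[str, float]) -> float:
        """Combine per-dimension assessments (0..1) into one weighted score."""
        return sum(TRUST_WEIGHTS[dim] * assessments.get(dim, 0.0)
                   for dim in TRUST_WEIGHTS)

    # Example assessment of an AI-enabled system; re-running the scorecard as
    # metrics change is one simple way to make trust monitoring adaptive.
    system = {"robustness": 0.9, "security": 0.8, "privacy": 0.7,
              "transparency": 0.6, "fairness": 0.85}
    print(f"trust score: {trust_score(system):.2f}")   # 0.78 for this example
    ```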

    Understanding and engineering human factors in these systems is an important research area with high relevance for Industry 5.0 and 6.0 and for lifelong learning. Secure systems for data exchange with HITL models are needed to monitor, experiment with, and build knowledge of human factors in AI and immersive systems. Knowledge of human factors can inform strategies for productivity enhancement and risk mitigation, which in turn can inform the development of HITL models for trustworthy AI systems.


    [i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

    [ii] Gomstyn A., McGrath A., Jonker A. (2024). What is trustworthy AI? IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai

    [iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science. International Conference on Digital Sovereignty (ICDS). DOI: 10.1016/j.procs.2025.02.067

  • Privacy-enhancing technologies

    Privacy-enhancing technologies

    The Royal Society. “Privacy-enhancing technologies.” Accessed 18.08.2025. https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/.

    What are Privacy Enhancing Technologies (PETs)? 

    Privacy Enhancing Technologies (PETs) are a suite of tools that can help maximise the use of data by reducing risks inherent to data use. Some PETs provide new tools for anonymisation, while others enable collaborative analysis on privately-held datasets, allowing data to be used without disclosing copies of data. PETs are multi-purpose: they can reinforce data governance choices, serve as tools for data collaboration or enable greater accountability through audit. For these reasons, PETs have also been described as “Partnership Enhancing Technologies” or “Trust Technologies”.
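
    One simple way to see how collaborative analysis on privately-held datasets can work is additive secret sharing, sketched below in Python. The hospitals, values and modulus are hypothetical, and real SMPC protocols add authentication, dropout handling and far richer computations.

    ```python
    # Minimal sketch of one PET idea: additive secret sharing, where several
    # parties learn the sum of their private values without revealing them.
    # Illustrative only; real SMPC frameworks add much more machinery.
    import random

    PRIME = 2_000_003          # arithmetic is done modulo a public prime

    def share(secret: int, n_parties: int) -> list[int]:
        """Split a secret into n random shares that sum to it modulo PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    # Three hospitals hold private patient counts they do not want to disclose.
    private_values = {"hospital_a": 120, "hospital_b": 75, "hospital_c": 310}

    # Each party splits its value into shares and sends one share to every party.
    all_shares = {name: share(v, 3) for name, v in private_values.items()}

    # Each party sums the shares it received (one "column"), and the partial
    # sums are combined; only the total is revealed, never the inputs.
    partial_sums = [sum(all_shares[name][i] for name in all_shares) % PRIME
                    for i in range(3)]
    total = sum(partial_sums) % PRIME
    print("joint total:", total)    # 505, computed without pooling raw values
    ```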

    What is data privacy, and why is it important?

    The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our 2017 British Academy and Royal Society report Data management and use: Governance in the 21st century.

    How can technology support data governance and enable new, innovative uses of data for public benefit?

    The Royal Society’s Privacy Enhancing Technologies programme investigates the potential for tools and approaches collectively known as Privacy Enhancing Technologies, or PETs, in maximising the benefit and reducing the harms associated with data use.

    Our 2023 report, From privacy to partnership: the role of Privacy Enhancing Technologies in data governance and collaborative analysis, was undertaken in close collaboration with the Alan Turing Institute and considers the potential for PETs to revolutionise the safe and rapid use of sensitive data for wider public benefit. It examines the role of these technologies in addressing data governance issues beyond privacy, asking the following questions:

    • How can PETs support data governance and enable new, innovative uses of data for public benefit? 
    • What are the primary barriers and enabling factors around the adoption of PETs in data governance, and how might these be addressed or amplified? 
    • How might PETs be factored into frameworks for assessing and balancing risks, harms and benefits when working with personal data? 

    In answering these questions, our report integrates evidence from a range of sources, including the advice of an expert Working Group, consultation with a range of stakeholders across sectors, a synthetic data explainer, and commissioned reviews on UK public sector PETs adoption and PETs standards and assurances, which are available for download.
