Category: Technologies

  • Decentralised AI

    Decentralised AI

    (INSIGHTS) (TECHNOLOGY) (PROJECT) Massachusetts Institute of Technology (MIT) Media Lab. “Decentralized AI”. Accessed 24.08.2025. https://www.media.mit.edu/projects/decentralized-ai/overview/

    As AI evolves beyond screen-based assistants into multi-dimensional applications, decentralization emerges as the critical factor for unlocking its full potential.

    Introduction

    The AI landscape is at a crossroads. While advances continue, concerns mount about job displacement and data monopolies. Centralized models, dominated by a few large companies, are reaching their limits. To unlock the true power of AI, we need a new paradigm: decentralized AI.

    Challenges of Centralized AI

    • Limited data access: Siloed data restricts AI’s potential for applications like personalized healthcare and innovative supply chains.
    • Inflexible models: One-size-fits-all models struggle with diverse real-world scenarios, leading to inaccurate and unfair outcomes.
    • Lack of transparency and accountability: With data and algorithms hidden away, trust in AI erodes, hindering adoption and innovation.

    Decentralized AI: A Vision for the Future

    • Data markets: Secure marketplaces enable data exchange while protecting privacy and ensuring fair compensation.
    • Multi-dimensional models: AI that learns from real-world experiences through simulations and agent-based modeling.
    • Verifiable AI: Mechanisms like federated learning and blockchain ensure responsible development and deployment of AI models.
    • Exchanges for AI solutions: Platforms where individuals and businesses can access and contribute to AI solutions for diverse needs.

    Opportunities in Decentralized AI

    • Democratization of innovation: Individuals and smaller businesses can participate in the AI revolution, creating valuable solutions and capturing economic benefits.
    • Unleashing trillions in economic value: By addressing real-world challenges in healthcare, education, and other sectors, decentralized AI can unlock vast economic opportunities.
    • Building a more equitable and inclusive future: Decentralization empowers individuals and helps address concerns about bias and discrimination in AI.

    The Call to Action

    In this pivotal moment, everyone has a role to play. Businesses must embrace decentralized models, governments should foster collaborative ecosystems, and individuals must become AI literate and contribute their expertise. By working together, we can unlock the true potential of AI and build a more prosperous and equitable future for all.

    Reach out to us at dec-ai@media.mit.edu

    Professor Ramesh Raskar spoke on this topic at EmTech Digital in May 2024

    Research Topics

    #social networks #computer vision #artificial intelligence #data #privacy #machine learning #decision-making

  • Data Space Technologies

    Data Space Technologies

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 12-14.

    Data space technologies are key enablers of AI and data-driven value creation because they address fundamental challenges with data system integration, data curation, verifiability, security, and privacy.

    A data space consists of common standards for organizing and exchanging data and a set of technologies that adhere to those standards. Data space technologies can be open-source or proprietary, but they must adhere to common data space standards and rules. Common standards are needed to ensure that systems and data are interoperable and can utilize common infrastructure and services, for example to manage identities, system access, and data exchange in compliance with European digital regulations.

    Data space technologies with digital trust management frameworks and digital marketplaces are being developed to enable more open and equitable digital ecosystems in Europe, where data and digital assets can be securely exchanged, reused, and improved over time.

    The International Data Spaces Association, Gaia-X, FIWARE, the Big Data Value Association, and OASC (Open and Agile Smart Cities and Communities) are a few examples of organizations working on projects to develop common data space architectures and technologies; collectively they represent more than 1,000 member organisations, 400+ cities, and 100+ national hubs in Europe, Asia, and the Americas.

    Smart city and community initiatives are particularly relevant for data spaces because cities, regions, municipalities, and the public sector need to continuously improve the cost-effectiveness of services across several critical sectors. AI has great potential to support increased productivity, sustainability, and community engagement in the digital and green transformation, but solutions are needed to enable secure data exchange and deployment of trustworthy AI across critical sectors and regions. The Living in EU initiative aims to promote citizen-centric collaboration and re-use of solutions, products, and services across a common digital market to avoid duplicating efforts and expenditure that result in data silos and fragmented infrastructure. Living in EU is promoted by the European Commission, the European Committee of the Regions, the Council of European Municipalities and Regions (CEMR), the European Regions Research and Innovation Network (ERRIN), the European Network of Living Labs (ENoLL), OASC, and Eurocities, a network of over 200 of Europe’s largest cities representing over 150 million people across 38 countries.

    A data space consists of tools that adhere to common standards defined in the Data Spaces Blueprint:

    • DSSC Blueprint: The Data Spaces Support Centre (DSSC) blueprint is a comprehensive set of guidelines to support the implementation, deployment, and management of data spaces. The Blueprint consists of key concepts, a starter kit, a glossary, a collection of data space standards, and the following organisational and technical building blocks (DSSC 2024)[i]:
      • Business, governance and legal building blocks provide guidance to new entrants and operators of infrastructure, software, services, and technologies that comply with data space standards. This support includes, but is not limited to, guidance on choices in the design of business models, data products, organisational forms, regulatory compliance, and contractual frameworks that are supported by services and software.
      • Technical building blocks are divided into foundational standards, control and data planes for exchanging data, and data space services for implementing the technical building blocks. These standards for technologies and services are designed to ensure data interoperability, data sovereignty and trust, and provide enablers for value creation from data, which is one of the ultimate goals of a data space (DSSC 2024).
    • Decentralised identifiers (DIDs) and verifiable credentials (VCs): a key technical building block for data spaces is the DID standard developed by the World Wide Web Consortium (W3C), an international standards organization founded in 1994 by Tim Berners-Lee, the inventor of the World Wide Web. A DID is a uniform resource identifier (URI) for an entity (e.g., a person, organization, thing, concept, data model, algorithm, abstract entity, etc.) (W3C 2022).[ii] URIs are used to organize data and services in standardized, machine-readable ontologies and catalogues. This enables systems to find information and navigate ontologies and data catalogues across large networks of distributed systems. A decentralised identifier goes a step further by providing a method to prove ownership of, or control over, an entity, subject, or concept. A DID points to a DID document that uses cryptographic mechanisms to verify credentials related to ownership and the rights to create, access, and modify information. This enables a controlling entity to create and modify its own universal identifiers independently of centralised registries, because the controlling entity can use verifiable credentials (VCs) to prove its own identity and its rights to create, modify, and access the information represented by the DID. VCs provide a set of tamper-evident claims, which supports verifiability, traceability, and accountability in digital information, also known as data provenance (W3C 2025).[iii] This independent control over verifiable information is known as self-sovereignty and self-sovereign identity (SSI), and it has the potential to revolutionize the internet by making more information verifiable, machine-readable, and more easily discoverable across decentralised systems, provided that common semantic web standards are followed for organizing and accessing information. (A minimal illustrative sketch of a DID document and a signed credential follows this list.)
    • Privacy-enhancing technologies (PETs): A DID document can have one or more different representations of information describing a past, current, or desired state of the DID subject. The ability to provide multiple representations of information is an enabler for PETs, because a DID document can utilize different methods for sharing verifiable information without necessarily transferring or revealing the underlying data. One example is secure multi-party computation (SMPC) with fully homomorphic encryption, which was used by two European hospitals in a pilot project of the European Health Data Space to securely analyse health data for cancer patients without transferring the underlying health data between the hospitals (Ballhausen 2024).[iv] Another example is a zero-knowledge proof (ZKP), used to prove that a person holds a required credential, such as an education certificate or a valid driver’s license, or fulfils a minimum age requirement, without revealing details of the person’s age, date of birth, address, or other unnecessary information. DIDs can also strengthen privacy and security by using attribute-based encryption and access control to authorize access to specific information based on a dynamic set of conditions, such as the privacy preferences of the DID owner and the levels of digital trust or cyber risk of the systems handling information in the digital value chain. In summary, technical standards for DIDs, VCs, and data spaces, in combination with EU digital regulations, create a great opportunity for innovation in PETs to address the security and privacy risks of AI-enabled systems in critical sectors. (A toy secret-sharing sketch illustrating the SMPC idea also follows this list.)
    • Technical standards have been collected and organized into the following categories:
      • Data Interoperability standards
      • Data Sovereignty and Trust standards
      • Data Value Creation standards
    • DSSC Toolbox: The Toolbox is a curated catalogue of solution implementations (software and non-software tools) that are aligned with the DSSC Blueprint and have passed the Toolbox validation scheme.
      • The Toolbox contains open and closed solutions for technical and organisational functionalities and can be accessed as data space services (DSSC 2024).
      • The Toolbox validation scheme is a self-assessment scheme that enables new solutions and solution providers to be listed in the Toolbox.
    • A digital marketplace is a common way to generate value in a data space (DSSC 2024).[1] The DSSC Blueprint describes functional specifications for digital marketplaces as part of the Data Value Creation standards and technical building block. The standard enables secure and efficient data exchange and digital transactions using advanced features for data catalogue management with DIDs. A data catalogue using DIDs makes product offerings machine-readable and more easily discoverable within a data space and across data spaces and marketplaces. A marketplace can also “establish a trusted relationship between a data product provider and any user who has searched, found and selected one or more data products from this provider in the data space. It provides the tools required to negotiate conditions for the delivery and use of the products, monitor the process and store all the relevant information, i.e. everything needed to ensure the journey of the provider and the user goes smoothly.” (DSSC 2024).
    • Minimum Interoperability Mechanisms (MIMs) are being developed by the OASC in a standard recommendation to the ITU Telecommunication Standardization Sector (ITU-T) to support data interoperability in Data Spaces for Sustainable and Smart Cities and Communities (DS4SSCC) and to ensure compliance with the EU Interoperability Act (EC 2024)[2]. The MIMs Overview describes the concept and role of the following MIMs (OASC 2024)[3]:
      • MIM 1: Context Information
      • MIM 2: Data Models
      • MIM 3: Contracts
      • MIM 4: Trust
      • MIM 5: Transparency
      • MIM 6: Security
      • MIM 7: Places
      • MIM 8: Indicators
      • MIM 9: Analytics
      • MIM 10: Resources
    • MIMs Resources provide additional support for public sector and local administrations in cities and smart communities to learn and experiment with digital transformation initiatives:
      • CITYxCITY Academy: includes an online portal with access to experts, tools and courses.
      • CITYxCITY Catalogue: global collection of deployed solutions, products and best practice.
      • CITYxCITY Festival: annual networking event for the OASC community.
      • Living-in.EU MIMs Plus: an expansion of MIMs with additional technical stacks, tools and management standards for local administrations intended to support broad up-scaling of digital transformation projects in line with the Living in EU initiative, which aims to serve 300 million Europeans. The ‘plus’ banner refers to European specifications and initiatives, such as EIF4SCC, ISA2, CEF, INSPIRE, EIP-SCC, ELISA, LORDI, DIGISER (OASC 2022)[v].
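
    To make the DID and VC mechanisms described above more concrete, the following minimal Python sketch builds a toy DID document that publishes a public key and signs a simple credential whose proof a verifier can check against that key. The DID values, field names, and claim are hypothetical, and real systems follow the W3C did-core and vc-data-model specifications using dedicated SSI libraries; this is only an illustration of the concept (it requires the third-party cryptography package).

    # Illustrative sketch only: hypothetical DID, fields, and claim.
    # Real deployments follow W3C did-core / vc-data-model and use dedicated SSI libraries.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The issuer controls a key pair; the public key is published in its DID document.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_did = "did:example:issuer123"  # hypothetical DID

    did_document = {
        "id": issuer_did,
        "verificationMethod": [{
            "id": f"{issuer_did}#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": issuer_did,
            # Real DID documents encode keys in standard formats; hex is used here for brevity.
            "publicKeyHex": issuer_key.public_key().public_bytes(
                serialization.Encoding.Raw, serialization.PublicFormat.Raw
            ).hex(),
        }],
    }

    # A simple credential: a tamper-evident claim about a subject DID.
    credential = {
        "issuer": issuer_did,
        "credentialSubject": {"id": "did:example:subject456", "degree": "MSc Informatics"},
    }
    payload = json.dumps(credential, sort_keys=True).encode()
    proof = issuer_key.sign(payload)  # detached signature acts as the proof

    # A verifier resolves the issuer's DID document, takes the public key, and checks the proof.
    public_key = issuer_key.public_key()
    try:
        public_key.verify(proof, payload)
        print("credential verified against the issuer's DID document")
    except InvalidSignature:
        print("credential or proof has been tampered with")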
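
    The secure multi-party computation mentioned under privacy-enhancing technologies rests on ideas such as additive secret sharing. The toy Python sketch below illustrates that idea under strong simplifying assumptions (two hypothetical hospitals, a single joint sum, honest parties); production systems, including the hospital pilot cited above, use audited SMPC and homomorphic-encryption frameworks rather than anything this simple.

    # Toy additive secret sharing: each party splits its private value into random shares,
    # only shares are exchanged, and the parties jointly compute a sum without any party
    # seeing another's input. Hypothetical values; real deployments use audited SMPC/FHE tools.
    import secrets

    MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

    def make_shares(value: int, n_parties: int) -> list[int]:
        """Split a private value into n additive shares that sum to it modulo MODULUS."""
        shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % MODULUS)
        return shares

    # Two hospitals each hold a private patient count they do not want to reveal.
    private_inputs = {"hospital_A": 1240, "hospital_B": 987}
    n = len(private_inputs)

    # Each party splits its input and distributes one share to every party.
    all_shares = {name: make_shares(value, n) for name, value in private_inputs.items()}

    # Each party locally sums the shares it received; these partial sums reveal nothing on their own.
    partial_sums = [sum(all_shares[name][i] for name in private_inputs) % MODULUS for i in range(n)]

    # Combining the partial sums reconstructs only the aggregate, never the individual inputs.
    joint_total = sum(partial_sums) % MODULUS
    print(joint_total)  # 2227: the combined count, computed without revealing either input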

    For smaller organisations, such as startups, SMEs, and municipalities, data spaces can eliminate the need to make large upfront investments in digital infrastructure for advanced digital platforms and digital twins. Open-source technologies and smart data models can be reused as a foundation platform instead of reinventing systems, data models, communication protocols, services, and security controls. This frees up time and financing to focus on value creation, for example by paying startups and smaller specialist service providers to integrate components and customize software and user interfaces to customer needs.

    The concept of distributed computing is not new, but what distinguishes European data space initiatives from hyperscaler ecosystems is the use of common technical standards that ensure interoperability, reduce vendor lock-in, and enable collaboration to improve cybersecurity, data integrity, and fair economic value creation, while complying with important EU digital regulations for privacy, safety, and cyber resilience.



    [1] DSSC (Data Spaces Support Centre) (2024). Data Spaces Blueprint v1.5. Data Spaces Support Centre. https://dssc.eu/space/BVE/357076678/Marketplace+Functional+Specifications. Accessed 22.01.2025.
    [2] European Commission. Press release 26.08.2024. Minimal Interoperability Mechanisms: Advancing Europe’s digital future. https://data.europa.eu/en/news-events/news/minimal-interoperability-mechanisms-advancing-europes-digital-future
    [3] Open and Agile Smart Cities and Communities (OASC) (2024). Draft Recommendation ITU-T Y.MIM. May 2024. https://mims.oascities.org/mims/y.mim-overview
    [i] DSSC (Data Spaces Support Centre) (2024). Data Spaces Blueprint v1.5. Data Spaces Support Centre. https://dssc.eu/space/bv15e/766061169/Data+Spaces+Blueprint+v1.5+-+Home. Accessed 22.01.2025.

    [ii] Sporny M., Longley D., Sabadello M., Reed D., Steele O., Allen C.; World Wide Web Consortium (W3C) (2022). Decentralized Identifiers (DIDs) v1.0. W3C Recommendation 19.07.2022. https://www.w3.org/TR/did-core/

    [iii] Sporny M., Longley D., Chadwick D., Herman I.; World Wide Web Consortium (W3C) (2025). Verifiable Credentials Data Model v2.0. W3C Candidate Recommendation Draft. 27.01.2025. https://www.w3.org/TR/vc-data-model-2.0/

    [iv] Ballhausen, H., Corradini, S., Belka, C. et al. (2024). Privacy-friendly evaluation of patient data with secure multiparty computation in a European pilot study. npj Digit. Med. 7, 280 (2024). https://doi.org/10.1038/s41746-024-01293-4

    [v] LI.EU Technical sub-group chaired by OASC (2022). MIMs Plus version 5.0 final draft. June 2022. https://living-in.eu/mimsplus

  • Trustworthy AI

    Trustworthy AI

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 26-27.

    To be trustworthy, a system needs to be resilient and consistently deliver outcomes that are aligned with stakeholder interests and expectations. Several factors can undermine digital trust, such as a security vulnerability or biased data that leads to erroneous analysis, misinformation, or device failure. In operational environments, digital trust is also affected by dynamic factors such as evolving security and safety risks.

    The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven requirements for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]

    IBM has summarized similar principles for trustworthy AI, namely accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn 2024).[ii]

    For the purposes of discussion, Trustworthy AI can be defined simply as AI that remains continuously aligned with the interests and objectives of the system’s stakeholders. To achieve this, a Trustworthy AI system requires technological components that enable adaptive compliance with user preferences relating to privacy, data sharing, and the objectives for using an AI-enabled system, and with changing regulatory and technical requirements, as well as changing digital trust levels and threats.

    ‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI, which can demonstrate compliance with regulations, user preferences, and the AI HLEG guidelines using DIDs with verifiable credentials (VCs).

    There are more than 20 notable public, non-profit and private organisations that are developing trust frameworks and services to manage digital trust and data exchange with VCs. Some examples are the EU eIDAS regulation, ETSI, EBSI, EUDI Wallet initiative and EUROPEUM-EDIC, Gaia-X, FIWARE, iShare foundation, Eclipse foundation, MyData Global, and the U.S.-based NIST, IETF, W3C, Trust over IP and Linux Foundation.

    To promote harmonization of digital solutions, such as trust frameworks, across initiatives, the EU passed an Interoperability Act in 2024. The Interoperability Act is accompanied by a framework, label, and checklist to ensure that publicly funded digital services adhere to requirements for openness and reuse. The Act is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed a Trust Management Infrastructure (TRAIN). Interoperability is particularly important for trust frameworks because it enables automation in data exchange and VC policy enforcement, and interoperability and automation together enable adaptivity in trust frameworks.

    Trust frameworks are generally based on three predominant models: credentials-based trust, reputation-based trust and trust in information resources based on credentials and past behaviour of entities.[iii] Research is needed to develop more sophisticated and autonomous systems that are also adaptive.

    One area of research is directed toward developing frameworks and scorecards for measuring trust, risk, and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices, and digital supply chains, and when these frameworks are combined with adaptive AI, there is potential to automate compliance with rapidly changing technologies, security risks, and user contexts.
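
    As a simplified illustration of the scorecard idea, the Python sketch below combines a few hypothetical trust dimensions into a single weighted score that an adaptive policy could use to gate data exchange. The dimensions, weights, and threshold are invented for illustration and are not taken from the NIST AI Risk Management Framework or any other specific framework.

    # Hypothetical weighted trust scorecard: each dimension is scored in [0, 1] and combined
    # into a single trust level. Dimensions, weights, and threshold are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class TrustScorecard:
        weights: dict[str, float]  # relative importance of each trust dimension

        def score(self, metrics: dict[str, float]) -> float:
            """Weighted average of the dimension scores supplied in `metrics`."""
            total_weight = sum(self.weights.values())
            return sum(w * metrics.get(dim, 0.0) for dim, w in self.weights.items()) / total_weight

    scorecard = TrustScorecard(weights={
        "credential_validity": 0.4,      # e.g. verifiable credentials check out
        "security_posture": 0.3,         # e.g. patch level, recent incidents
        "behavioural_reputation": 0.3,   # e.g. history of policy-compliant exchanges
    })

    device_metrics = {"credential_validity": 1.0, "security_posture": 0.7, "behavioural_reputation": 0.9}
    trust_level = scorecard.score(device_metrics)
    print(f"trust level: {trust_level:.2f}")  # 0.88 for these example metrics

    # An adaptive policy could then gate data exchange on the current trust level.
    ALLOW_EXCHANGE_THRESHOLD = 0.8
    print("exchange allowed" if trust_level >= ALLOW_EXCHANGE_THRESHOLD else "exchange denied")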

    Understanding and engineering human factors in these equations is an important research area with high relevance for Industry 5.0 and 6.0 and for lifelong learning. Secure systems for data exchange with human-in-the-loop (HITL) models are needed to monitor, experiment with, and build knowledge of human factors in AI and immersive systems. Knowledge of human factors can inform strategies for productivity enhancement and risk mitigation, which in turn can inform the development of HITL models for trustworthy AI systems.


    [i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

    [ii] Gomstyn A., McGrath A., Jonker A. (2024). What is trustworthy AI? IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai

    [iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science. International Conference on Digital Sovereignty (ICDS). DOI: 10.1016/j.procs.2025.02.067

  • Privacy-enhancing technologies

    Privacy-enhancing technologies

    The Royal Society. “Privacy-enhancing technologies.” Accessed 18.08.2025. https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/.

    What are Privacy Enhancing Technologies (PETs)? 

    Privacy Enhancing Technologies (PETs) are a suite of tools that can help maximise the use of data by reducing risks inherent to data use. Some PETs provide new tools for anonymisation, while others enable collaborative analysis on privately-held datasets, allowing data to be used without disclosing copies of data. PETs are multi-purpose: they can reinforce data governance choices, serve as tools for data collaboration or enable greater accountability through audit. For these reasons, PETs have also been described as “Partnership Enhancing Technologies” or “Trust Technologies”.

    What is data privacy, and why is it important?

    The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our 2017 British Academy and Royal Society report Data management and use: Governance in the 21st century.

    How can technology support data governance and enable new, innovative uses of data for public benefit?

    The Royal Society’s Privacy Enhancing Technologies programme investigates the potential for tools and approaches collectively known as Privacy Enhancing Technologies, or PETs, in maximising the benefit and reducing the harms associated with data use.

    Our 2023 report, From privacy to partnership: the role of Privacy Enhancing Technologies in data governance and collaborative analysis (PDF), was undertaken in close collaboration with the Alan Turing Institute, and considers the potential for PETs to revolutionise the safe and rapid use of sensitive data for wider public benefit. It considers the role of these technologies in addressing data governance issues beyond privacy, addressing the following questions:

    • How can PETs support data governance and enable new, innovative uses of data for public benefit? 
    • What are the primary barriers and enabling factors around the adoption of PETs in data governance, and how might these be addressed or amplified? 
    • How might PETs be factored into frameworks for assessing and balancing risks, harms and benefits when working with personal data? 

    In answering these questions, our report integrates evidence from a range of sources, including the advisement of an expert Working Group, consultation with a range of stakeholders across sectors, as well as a synthetic data explainer and commissioned reviews on UK public sector PETs adoption (PDF) and PETs standards and assurances (PDF), which are available for download.
