Category: Blog post

  • Decentralised AI

    Decentralised AI

    (INSIGHTS) (TECHNOLOGY) (PROJECT) Massachusetts Institute of Technology (MIT) Media Lab. “Decentralized AI”. Accessed 24.08.2025. https://www.media.mit.edu/projects/decentralized-ai/overview/

    As AI evolves beyond screen-based assistants and into multi-dimensional, real-world applications, decentralization emerges as the critical factor for unlocking its full potential.

    Introduction

    The AI landscape is at a crossroads. While advances continue, concerns mount about job displacement and data monopolies. Centralized models, dominated by a few large companies, are reaching their limits. To unlock the true power of AI, we need a new paradigm: decentralized AI.

    Challenges of Centralized AI

    • Limited data access: Siloed data restricts AI’s potential for applications like personalized healthcare and innovative supply chains.
    • Inflexible models: One-size-fits-all models struggle with diverse real-world scenarios, leading to inaccurate and unfair outcomes.
    • Lack of transparency and accountability: With data and algorithms hidden away, trust in AI erodes, hindering adoption and innovation.

    Decentralized AI: A Vision for the Future

    • Data markets: Secure marketplaces enable data exchange while protecting privacy and ensuring fair compensation.
    • Multi-dimensional models: AI that learns from real-world experiences through simulations and agent-based modeling.
    • Verifiable AI: Mechanisms like federated learning and blockchain ensure responsible development and deployment of AI models.
    • Exchanges for AI solutions: Platforms where individuals and businesses can access and contribute to AI solutions for diverse needs.
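
    The federated learning mentioned above can be sketched in miniature: each participant trains a model on its own data, and only the resulting model weights are averaged centrally, so raw data never leaves its owner. The function and numbers below are illustrative, not taken from the MIT project:

```python
def federated_average(local_weights: list[list[float]]) -> list[float]:
    """FedAvg in miniature: average the model weights trained locally
    by each participant; the raw training data is never shared."""
    n_clients = len(local_weights)
    n_params = len(local_weights[0])
    return [sum(w[i] for w in local_weights) / n_clients for i in range(n_params)]

# Three hypothetical clients each trained a 2-parameter model locally.
client_models = [[0.9, 1.1], [1.1, 0.9], [1.0, 1.0]]
global_model = federated_average(client_models)
print(global_model)  # [1.0, 1.0]
```

    In a real deployment the averaging would be weighted by each client's dataset size and repeated over many training rounds.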

    Opportunities in Decentralized AI

    • Democratization of innovation: Individuals and smaller businesses can participate in the AI revolution, creating valuable solutions and capturing economic benefits.
    • Unleashing trillions in economic value: By addressing real-world challenges in healthcare, education, and other sectors, decentralized AI can unlock vast economic opportunities.
    • Building a more equitable and inclusive future: Decentralization empowers individuals and helps address concerns about bias and discrimination in AI.

    The Call to Action

    In this pivotal moment, everyone has a role to play. Businesses must embrace decentralized models, governments should foster collaborative ecosystems, and individuals must become AI literate and contribute their expertise. By working together, we can unlock the true potential of AI and build a more prosperous and equitable future for all.

    Reach out to us at dec-ai@media.mit.edu

    Professor Ramesh Raskar spoke on this topic at EmTech Digital in May 2024

    Research Topics

    #social networks #computer vision #artificial intelligence #data #privacy #machine learning #decision-making

  • EU’s Competitiveness Compass

    EU’s Competitiveness Compass

    European Commission. “Competitiveness Compass.” Accessed 14.08.2025. https://commission.europa.eu/topics/eu-competitiveness/competitiveness-compass_en.

    Our plan to reignite Europe’s economy

    Over the last two decades, Europe’s potential has remained strong, even as other major economies have grown at a faster pace.

    The EU has everything it takes to unlock its full potential and drive faster, more sustainable growth: we boast a talented and educated workforce, capital, savings, the single market, and a unique social model. To restore our competitiveness and unleash growth, we need to tackle the barriers and weaknesses that are holding us back.

    In January 2025, the Commission presented the competitiveness compass, a new roadmap to restore Europe’s dynamism and boost our economic growth.

    Three necessities for a more competitive EU

    The compass builds on the analysis of Mario Draghi’s report on the future of European competitiveness.

    The Draghi report identified three necessities for the EU to boost its competitiveness:

    1. Closing the innovation gap 
    2. Decarbonising our economy
    3. Reducing dependencies

    The compass sets out an approach to translate these necessities into reality. 

    Discover the full timeline of actions under the compass

  • Trustworthy AI

    Trustworthy AI

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 26-27.

    To be trustworthy, a system must be resilient and consistently deliver outcomes aligned with stakeholder interests and expectations. Several factors can undermine digital trust, such as a security vulnerability or biased data, which can lead to erroneous analysis, misinformation or device failure. In operational environments, dynamic factors such as security and safety risks can further erode digital trust.

    The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven guidelines for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]

    IBM has summarized similar principles for trustworthy AI: accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn 2024).[ii]

    For the purposes of discussion, Trustworthy AI can be defined simply as AI that remains continuously aligned with the interests and objectives of the system’s stakeholders. To achieve this, a Trustworthy AI system requires technological components that enable adaptive compliance with user preferences concerning privacy, data sharing and the objectives of using an AI-enabled system, as well as with changing regulatory and technical requirements and with changing digital trust levels and threats.

    ‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI that can demonstrate compliance with regulations, user preferences and the AI HLEG guidelines using decentralized identifiers (DIDs) with verifiable credentials (VCs).
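
    As an illustration of these building blocks, a verifiable credential in the W3C data model is a JSON document whose issuer and subject are identified by DIDs. The sketch below is a minimal, unsigned example; the DIDs and claim values are invented, and a real VC additionally carries a cryptographic proof signed with the issuer’s DID key:

```python
# Minimal sketch of a W3C-style verifiable credential (VC).
# The DIDs and claim values are invented for illustration.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:university-registrar",  # hypothetical issuer DID
    "credentialSubject": {
        "id": "did:example:alice",                 # hypothetical subject DID
        "degree": "MSc Informatics",
    },
}

def has_required_vc_fields(vc: dict) -> bool:
    """Check that the core fields of the VC data model are present."""
    return all(key in vc for key in ("@context", "type", "issuer", "credentialSubject"))

print(has_required_vc_fields(credential))  # True
```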

    There are more than 20 notable public, non-profit and private organisations that are developing trust frameworks and services to manage digital trust and data exchange with VCs. Some examples are the EU eIDAS regulation, ETSI, EBSI, EUDI Wallet initiative and EUROPEUM-EDIC, Gaia-X, FIWARE, iShare foundation, Eclipse foundation, MyData Global, and the U.S.-based NIST, IETF, W3C, Trust over IP and Linux Foundation.

    To promote harmonization of digital solutions such as trust frameworks across initiatives, the EU passed the Interoperable Europe Act in 2024. The Act is accompanied by a framework, label and checklist to ensure that publicly funded digital services adhere to requirements for openness and reuse, and it is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed a Trust Management Infrastructure (TRAIN). Interoperability is particularly important for trust frameworks because it enables automation in data exchange and VC policy enforcement; interoperability and automation together enable adaptivity in trust frameworks.

    Trust frameworks are generally based on three predominant models: credentials-based trust, reputation-based trust, and trust in information resources based on both the credentials and the past behaviour of entities.[iii] Further research is needed to develop more sophisticated and autonomous systems that are also adaptive.
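
    The three models above can be combined into a single, highly simplified trust decision: a valid credential gates access, and a reputation score derived from past behaviour must also clear a threshold. The threshold and scores below are illustrative assumptions, not drawn from any specific framework:

```python
def trust_decision(has_valid_credential: bool,
                   reputation: float,
                   reputation_threshold: float = 0.6) -> bool:
    """Hybrid trust check: credentials-based trust gates access, and
    reputation-based trust (a 0.0-1.0 score from past behaviour)
    must also clear a threshold."""
    if not has_valid_credential:
        return False                           # credentials-based trust fails
    return reputation >= reputation_threshold  # reputation-based trust

print(trust_decision(True, 0.8))   # True
print(trust_decision(True, 0.4))   # False: reputation below threshold
print(trust_decision(False, 0.9))  # False: no valid credential
```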

    One area of research is directed toward developing frameworks and scorecards for measuring trust, risk and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices and digital supply chains; when these frameworks are combined with Adaptive AI, there is potential to automate compliance with rapidly changing technologies, security risks and user contexts.
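
    A scorecard of the kind described here can be sketched as a weighted aggregate of per-dimension trust scores. The dimensions and weights below are invented for illustration and are not taken from the NIST framework:

```python
def trust_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate per-dimension trust scores (each in 0.0-1.0) into a
    single weighted score; weights are normalised to sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Hypothetical dimensions loosely inspired by the AI HLEG guidelines.
scores = {"transparency": 0.8, "robustness": 0.6, "privacy": 0.9}
weights = {"transparency": 1.0, "robustness": 2.0, "privacy": 1.0}
print(round(trust_score(scores, weights), 3))  # 0.725
```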

    Understanding and engineering human factors in these equations is an important research area with high relevance for Industry 5.0, Industry 6.0 and lifelong learning. Secure systems for data exchange with human-in-the-loop (HITL) models are needed to monitor, experiment with and build knowledge of human factors in AI and immersive systems. Knowledge of human factors can inform strategies for productivity enhancement and risk mitigation, which in turn can inform the development of HITL models for trustworthy AI systems.


    [i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

    [ii] Gomstyn A., McGrath A., Jonker A. (2024). “What is trustworthy AI?” IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai

    [iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science. International Conference on Digital Sovereignty (ICDS). DOI: 10.1016/j.procs.2025.02.067

  • Privacy-enhancing technologies

    Privacy-enhancing technologies

    The Royal Society. “Privacy-enhancing technologies.” Accessed 18.08.2025. https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/.

    What are Privacy Enhancing Technologies (PETs)? 

    Privacy Enhancing Technologies (PETs) are a suite of tools that can help maximise the use of data by reducing risks inherent to data use. Some PETs provide new tools for anonymisation, while others enable collaborative analysis on privately-held datasets, allowing data to be used without disclosing copies of data. PETs are multi-purpose: they can reinforce data governance choices, serve as tools for data collaboration or enable greater accountability through audit. For these reasons, PETs have also been described as “Partnership Enhancing Technologies” or “Trust Technologies”.
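
    As a concrete example of a PET, differential privacy releases aggregate statistics with calibrated noise so that the presence or absence of any single record cannot be inferred. A minimal sketch of the Laplace mechanism for a count query (the epsilon value and the data are illustrative):

```python
import math
import random

def dp_count(records: list, epsilon: float = 1.0) -> float:
    """Release a differentially private count: the true count plus
    Laplace noise of scale 1/epsilon (the sensitivity of a count is 1)."""
    scale = 1.0 / epsilon
    u = random.random() - 0.5                # uniform in [-0.5, 0.5)
    # Sample Laplace(0, scale) noise via the inverse-transform method.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return len(records) + noise

# A hypothetical data holder releases how many records match a query
# without revealing whether any one individual is in the data.
patients = ["p1", "p2", "p3", "p4", "p5"]
print(dp_count(patients, epsilon=1.0))  # true count 5 plus random noise
```

    A smaller epsilon gives noisier answers and stronger privacy; a larger epsilon keeps the answer close to the true count.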

    What is data privacy, and why is it important?

    The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our 2017 British Academy and Royal Society report Data management and use: Governance in the 21st century.

    How can technology support data governance and enable new, innovative uses of data for public benefit?

    The Royal Society’s Privacy Enhancing Technologies programme investigates the potential for tools and approaches collectively known as Privacy Enhancing Technologies, or PETs, in maximising the benefit and reducing the harms associated with data use.

    Our 2023 report, From privacy to partnership: the role of Privacy Enhancing Technologies in data governance and collaborative analysis (PDF), was undertaken in close collaboration with the Alan Turing Institute, and considers the potential for PETs to revolutionise the safe and rapid use of sensitive data for wider public benefit. It considers the role of these technologies in addressing data governance issues beyond privacy, addressing the following questions:

    • How can PETs support data governance and enable new, innovative uses of data for public benefit? 
    • What are the primary barriers and enabling factors around the adoption of PETs in data governance, and how might these be addressed or amplified? 
    • How might PETs be factored into frameworks for assessing and balancing risks, harms and benefits when working with personal data? 

    In answering these questions, our report integrates evidence from a range of sources, including the advice of an expert Working Group, consultation with stakeholders across sectors, and a synthetic data explainer and commissioned reviews on UK public sector PETs adoption (PDF) and PETs standards and assurances (PDF), which are available for download.
