Category: Insights

  • Decentralised AI

    (INSIGHTS) (TECHNOLOGY) (PROJECT) Massachusetts Institute of Technology (MIT) Media Lab. “Decentralized AI”. Accessed 24.08.2025. https://www.media.mit.edu/projects/decentralized-ai/overview/

    As AI evolves beyond screen-based assistants into multi-dimensional, real-world applications, decentralization emerges as the critical factor for unlocking its full potential.

    Introduction

    The AI landscape is at a crossroads. While advances continue, concerns mount about job displacement and data monopolies. Centralized models, dominated by a few large companies, are reaching their limits. To unlock the true power of AI, we need a new paradigm: decentralized AI.

    Challenges of Centralized AI

    • Limited data access: Siloed data restricts AI’s potential for applications like personalized healthcare and innovative supply chains.
    • Inflexible models: One-size-fits-all models struggle with diverse real-world scenarios, leading to inaccurate and unfair outcomes.
    • Lack of transparency and accountability: With data and algorithms hidden away, trust in AI erodes, hindering adoption and innovation.

    Decentralized AI: A Vision for the Future

    • Data markets: Secure marketplaces enable data exchange while protecting privacy and ensuring fair compensation.
    • Multi-dimensional models: AI that learns from real-world experiences through simulations and agent-based modeling.
    • Verifiable AI: Mechanisms like federated learning and blockchain ensure responsible development and deployment of AI models (a minimal federated-learning sketch follows this list).
    • Exchanges for AI solutions: Platforms where individuals and businesses can access and contribute to AI solutions for diverse needs.
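
    The federated learning mentioned under “Verifiable AI” can be illustrated with a short sketch: clients train locally and share only model weights, never their raw data. The Python below is illustrative only, not the MIT project’s implementation; the local training step is a hypothetical stand-in for any local learning routine.

      # Minimal federated averaging (FedAvg) sketch: clients train locally
      # and share only model weights, never their raw data.

      def local_update(weights, local_data, lr=0.1):
          # Hypothetical local step: nudge weights toward the local data.
          return [w - lr * (w - x) for w, x in zip(weights, local_data)]

      def federated_average(client_weights):
          # Server aggregates by element-wise averaging of client weights.
          n = len(client_weights)
          return [sum(ws) / n for ws in zip(*client_weights)]

      # One round with three clients whose private data never leaves them.
      global_weights = [0.0, 0.0]
      private_datasets = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
      updates = [local_update(global_weights, d) for d in private_datasets]
      global_weights = federated_average(updates)  # -> [0.3, 0.4]

    The privacy point is structural: the server only ever sees weight vectors, which is why federated learning is often paired with the data markets and verifiability mechanisms listed above.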

    Opportunities in Decentralized AI

    • Democratization of innovation: Individuals and smaller businesses can participate in the AI revolution, creating valuable solutions and capturing economic benefits.
    • Unleashing trillions in economic value: By addressing real-world challenges in healthcare, education, and other sectors, decentralized AI can unlock vast economic opportunities.
    • Building a more equitable and inclusive future: Decentralization empowers individuals and helps address concerns about bias and discrimination in AI.

    The Call to Action

    In this pivotal moment, everyone has a role to play. Businesses must embrace decentralized models, governments should foster collaborative ecosystems, and individuals must become AI literate and contribute their expertise. By working together, we can unlock the true potential of AI and build a more prosperous and equitable future for all.

    Reach out to us at dec-ai@media.mit.edu

    Professor Ramesh Raskar spoke on this topic at EmTech Digital in May 2024.

    Research Topics

    #social networks #computer vision #artificial intelligence #data #privacy #machine learning #decision-making

  • Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry

    Mahdis Moradi, Vidar Hepsø, Per Morten Schiefloe. “Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry.” Journal of Innovation & Knowledge, Volume 9, Issue 3, 2024, 100523, ISSN 2444-569X. https://doi.org/10.1016/j.jik.2024.100523
    (https://www.sciencedirect.com/science/article/pii/S2444569X24000623)

    Abstract

    There has been a growing interest in open-source innovation and collaborative software development ecosystems in recent years, particularly in industries dominated by intellectual property and proprietary practices.

    However, consortiums engaged in these collaborative efforts often face difficulties in effectively balancing the competing dynamics of trust and power. Collaborative knowledge creation is pivotal in ensuring the long-term sustainability of the ecosystem; knowledge sharing can take place by steering trust judgments toward fostering reciprocity.

    Drawing on a longitudinal case study of the Open Subsurface Data Universe ecosystem, we investigate the intricate interplay between trust and power and its pivotal influence on ecosystem governance. Our investigation charts the trajectory of trust and power institutionalization and reveals how it synergistically contributes to the emergence of comprehensive hybrid governance strategies.

    We make the following two contributions to extant research. First, we elucidate a perspective on the conceptual interplay between power and trust, conceiving these notions as mutual substitutes and complements. Together, they synergistically foster the institutionalization and dynamic governance processes in open-source ecosystems. Second, we contribute to the governance literature by emphasizing the significance of viewing governance as a configuration of institutionalization processes and highlighting the creation of hybrid forms of governance in complex innovation initiatives.

    Keywords: Open source; Innovation; Cocreation; Governance; Institutional trust; Power

  • Towards safer healthcare

    SITRA. “Towards safer healthcare.” Accessed 14.08.2025. https://www.sitra.fi/en/publications/towards-safer-healthcare/.

    Insights on the European action plan on cybersecurity for hospitals and healthcare providers

    DOWNLOAD PUBLICATION

    WRITERS

    Markus Kalliola (Sitra), Mikko Huovila (Nordic Healthcare Group) and Marianne Lindroth (DNV Cyber) 

    PUBLISHED

    May 7, 2025

    The healthcare sector is increasingly vulnerable to cyber threats due to outdated systems, fragmented practices and risks associated with human errors. Despite advancements in regulatory efforts and technical solutions, implementation remains inconsistent. Emerging technologies such as artificial intelligence (AI) and quantum computing add both urgency and complexity to securing healthcare environments. 

    The EU’s expanding cybersecurity legislation is significantly impacting various sectors, including healthcare. The primary goal is to harmonise practices and enhance the resilience of critical entities, products and infrastructure. New instruments like the Directive on measures for a high common level of cybersecurity across the Union (NIS2), Cyber Resilience Act and AI Act broaden the scope of entities covered and introduce stricter requirements, raising the bar for compliance and emphasising the need for robust security in the interconnected digital landscape. 

    Europe has awakened to the need for taking further actions to protect healthcare. The European cybersecurity action plan for hospitals and healthcare providers, published by the European Commission in January 2025, arrives at a crucial time with several strong proposals to bolster healthcare security.  

    Sitra presents seven proposals for improving the preparedness of the EU and its member states against cyber threats. Building a single market for cybersecurity and making collaboration tangible through pan-European cybersecurity exercises are among the things to consider.  

    With all actions set to improve cybersecurity, clear targets are needed to measure the impacts. This applies to the Commission’s action plan proposals for the EU and member states, but also at the grassroots level, to how healthcare organisations measure and improve their cybersecurity maturity.

    Improving cybersecurity resilience requires healthcare organisations to address all stages of cybersecurity – before, during and after incidents. Cybersecurity should be further integrated into comprehensive security, with adequate resources allocated to healthcare organisations. A well-functioning single market is part of cybersecurity resilience, and European companies must play a significant role in it.

    Finland serves as a case study for how cybersecurity is organised in healthcare within an EU member state. In Finland’s comprehensive security model, cybersecurity responsibilities are distributed among various authorities. Healthcare organisations hold the primary responsibility, supported and guided by multiple authorities. Roles and responsibilities are clearly defined under normal circumstances, with the national cybersecurity strategy outlining priority actions. 

  • The Draghi report on EU competitiveness

    European Commission. “Draghi report.” Accessed 14.08.2025. https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en.

    The future of European competitiveness: Report by Mario Draghi

    Mario Draghi – former European Central Bank President and one of Europe’s great economic minds – was tasked by the European Commission with preparing a report presenting his personal vision for the future of European competitiveness.

    The report looks at the challenges faced by industry and companies in the Single Market. It outlines how Europe will no longer be able to rely on many of the factors that have supported growth in the past, lays out a clear diagnosis, and provides concrete recommendations to put Europe onto a different trajectory.

    Download the report

    Background

    Today, Europe stands united in its pursuit of inclusive economic growth, focusing on 

    • sustainable competitiveness
    • economic security
    • open strategic autonomy
    • fair competition

    They all serve as pillars of prosperity. 

    The vision that drives Europe forward is to create conditions where businesses thrive, the environment is protected, and everyone has an equal chance at success.

    Sustainable competitiveness should ensure that businesses are productive and environmentally friendly. Economic security ensures that our economy can handle challenges and protect jobs. With open strategic autonomy, Europe is not just open for business but is also shaping a better, fairer world.

    Next steps

    The findings of the Draghi report are contributing to the Commission’s work on a new plan for Europe’s sustainable prosperity and competitiveness, and in particular to the development of the new Clean Industrial Deal for competitive industries and quality jobs, which will be presented in the first 100 days of the new Commission mandate.

    Many of its recommendations are reflected in the Commission’s Political Guidelines as well as the mission letters of the President of the European Commission to the members of the College.

    In January 2025, the Commission presented the Competitiveness Compass, a new roadmap to restore Europe’s dynamism and boost economic growth. The Compass builds on the analysis of the Draghi report and provides a strategic framework to drive the Commission’s work for the next five years.

  • EU’s Competitiveness Compass

    European Commission. “Competitiveness Compass.” Accessed 14.08.2025. https://commission.europa.eu/topics/eu-competitiveness/competitiveness-compass_en.

    Our plan to reignite Europe’s economy

    Over the last two decades, Europe’s potential has remained strong, even as other major economies have grown at a faster pace.

    The EU has everything it takes to unlock its full potential and drive faster, more sustainable growth: we boast a talented and educated workforce, capital, savings, the single market, and a unique social model. To restore our competitiveness and unleash growth, we need to tackle the barriers and weaknesses that are holding us back.

    In January 2025, the Commission presented the Competitiveness Compass, a new roadmap to restore Europe’s dynamism and boost our economic growth.

    https://ec.europa.eu/avservices/play.cfm?ref=I-267829&lg=EN&sublg=none&autoplay=true&tin=10&tout=59

    Three necessities for a more competitive EU

    The compass builds on the analysis of Mario Draghi’s report on the future of European competitiveness.

    The Draghi report originally identified three necessities for the EU to boost its competitiveness: 

    1. Closing the innovation gap 
    2. Decarbonising our economy
    3. Reducing dependencies

    The compass sets out an approach to translate these necessities into reality. 

    Discover the full timeline of actions under the compass

  • INN-the-Loop: Human-Centered Artificial Intelligence

    “INN-the-Loop” was a 2024 research project on artificial intelligence (AI) for critical sectors that analysed limitations and risks related to machine learning (ML), GenAI and Agentic AI. It identified significant risks and barriers, as well as solutions for the adoption of AI in healthcare and other critical sectors that demand high standards of accuracy, safety, security, and privacy.

    The project was managed by the University of Inland Norway (INN) with contributions from the Norwegian Computing Center (NR), SINTEF, NTNU SFI NORCICS (Norwegian Center for Cybersecurity in Critical Sectors), and VentureNet AS. The project received co-financing from the Regional Research Fund (RFF) Innlandet, supported by Innlandet County and the Research Council of Norway.

    Abstract

    Building on rapid development and investment in Artificial Intelligence (AI), the year 2025 ushered in “Agentic AI” as the new frontier for Generative AI (GenAI). The implication is that virtual assistants will be able to autonomously solve problems, set goals, and automate workflows, generating documents and enhancing the productivity of the humans who use AI-supported systems.

    However, for Agentic AI to be suitable for use in critical sectors, a solution is needed to address inherent limitations of AI related to accuracy, safety, security, adaptivity, trustworthiness, and sustainability. This article summarizes results from a 2024 research project with leading Norwegian research institutions, titled “INN-the-Loop”. The aim of the project was to pre-qualify a framework to design, develop and test human-centric AI systems for critical sectors, with a focus on smart healthcare as a use case. The project’s findings on AI risks shed light on the importance of digital regulation to ensure safety and security, while also presenting possible solutions for compliance automation to cost-effectively cope with changing regulatory, technical and risk landscapes.

    This article describes a framework, methodology and system/toolkit to develop trustworthy and sustainable AI systems with Humans-In-The-Loop (HITL). The framework aims to address limitations and risks of current AI approaches by combining human-centred design with “Data Space” technologies, including privacy-enhancing technologies (PETs) for decentralised identity and data access management.
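
    As a generic illustration of the HITL pattern (not the project’s actual toolkit), an AI decision can be gated on model confidence so that low-confidence cases are routed to a human reviewer; all names and the threshold below are invented for illustration.

      # Generic human-in-the-loop gate: the model decides only when it is
      # confident; otherwise the case is escalated to a human reviewer.
      def hitl_decide(case, model, human_review, threshold=0.9):
          label, confidence = model(case)
          if confidence >= threshold:
              return label, "automated"
          return human_review(case), "escalated to human"

      # Hypothetical stand-ins for a real model and a human review queue.
      model = lambda case: ("benign", 0.72)
      human_review = lambda case: "needs follow-up"
      print(hitl_decide({"scan_id": 1}, model, human_review))

    In a critical-sector setting, the threshold itself could be governed by policy, which is one way compliance requirements can be expressed directly in a system’s code.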

    The project’s results are aligned with European initiatives to develop federated, sustainable and sovereign digital infrastructure for high-performance computing (HPC) and edge computing. The results can inform the design and planning of next-generation digital infrastructure, including local digital twins (LDTs) and interconnected digital marketplaces, which can strengthen supply chain resilience in critical sectors.

    Download the full research report.

  • Trustworthy AI

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 26-27.

    To be trustworthy, a system needs to be resilient and consistently deliver outcomes that are aligned with stakeholder interests and expectations. Several factors can undermine digital trust: a security vulnerability or biased data, for example, can lead to erroneous analysis, misinformation or device failure. In operational environments, dynamic factors such as security and safety risks can also affect digital trust.

    The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven guidelines for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]

    IBM has summarized similar principles for trustworthy AI: accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn et al. 2024).[ii]

    For the purposes of discussion, the definition of Trustworthy AI can be simplified to an AI system that remains continuously aligned with the interests and objectives of the system’s stakeholders. To achieve this, a Trustworthy AI system requires technological components that enable adaptive compliance with user preferences (relating to privacy, data sharing and the objectives of using an AI-enabled system), with changing regulatory and technical requirements, and with changing digital trust levels and threats.

    ‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI that can demonstrate compliance with regulations, user preferences, and the AI HLEG guidelines using decentralised identifiers (DIDs) with verifiable credentials (VCs).
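
    The trust mechanics behind VCs can be sketched in a few lines: an issuer signs a claim, and any verifier can check it against the issuer’s public key without contacting the issuer. The Python below (using the widely available cryptography package) is a simplified illustration; real W3C VCs use standardized proof formats and DID resolution, and the DIDs and claim shown are hypothetical.

      import json
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      issuer_key = Ed25519PrivateKey.generate()  # stands in for an issuer’s DID key

      credential = {
          "issuer": "did:example:hospital-123",       # hypothetical DID
          "subject": "did:example:ai-system-456",
          "claim": {"compliant_with": "AI HLEG guideline 3 (privacy)"},
      }
      payload = json.dumps(credential, sort_keys=True).encode()  # canonical form
      signature = issuer_key.sign(payload)

      # Verifier side: resolve the issuer DID to a public key, then verify.
      public_key = issuer_key.public_key()
      try:
          public_key.verify(signature, payload)
          print("credential verified")
      except InvalidSignature:
          print("credential rejected")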

    There are more than 20 notable public, non-profit and private organisations developing trust frameworks and services to manage digital trust and data exchange with VCs. Examples include the EU eIDAS regulation, ETSI, EBSI, the EUDI Wallet initiative and EUROPEUM-EDIC, Gaia-X, FIWARE, the iShare Foundation, the Eclipse Foundation, MyData Global, NIST, IETF, W3C, Trust over IP and the Linux Foundation.

    To promote harmonization of digital solutions, such as trust frameworks, across initiatives, the EU adopted the Interoperable Europe Act in 2024. The Act is accompanied by a framework, label and checklist to ensure that publicly funded digital services adhere to requirements for openness and reuse, and it is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed a Trust Management Infrastructure (TRAIN). Interoperability is particularly important for trust frameworks because it enables automation in data exchange and VC policy enforcement; together, interoperability and automation make trust frameworks adaptive.

    Trust frameworks are generally based on three predominant models: credentials-based trust, reputation-based trust and trust in information resources based on credentials and past behaviour of entities.[iii] Research is needed to develop more sophisticated and autonomous systems that are also adaptive.
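
    As a toy illustration of the reputation-based model (not drawn from the trust models cited above), a trust score can be maintained as an exponentially weighted average of past interaction outcomes, so that recent behaviour counts more.

      # Toy reputation-based trust: each interaction outcome in [0, 1]
      # updates a running score, weighting recent behaviour more (alpha).
      def update_trust(score, outcome, alpha=0.3):
          return (1 - alpha) * score + alpha * outcome

      score = 0.5  # neutral prior for an unknown entity
      for outcome in [1.0, 1.0, 0.0, 1.0]:  # observed interactions
          score = update_trust(score, outcome)
      print(round(score, 2))  # 0.67: higher after mostly positive behaviour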

    One area of research is directed toward developing frameworks and scorecards for measuring trust, risk and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices and digital supply chains; when these frameworks are combined with Adaptive AI, there is potential to automate compliance with rapidly changing technology, security risk and user-context landscapes.
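
    A scorecard of this kind can be as simple as a weighted aggregate over measured dimensions. The dimensions, weights and threshold below are invented for illustration and are not taken from the NIST AI Risk Management Framework.

      # Hypothetical trust scorecard: normalized metrics (0 = worst,
      # 1 = best) combined with weights into a single actionable score.
      WEIGHTS = {"robustness": 0.3, "privacy": 0.3,
                 "transparency": 0.2, "accountability": 0.2}

      def trust_score(metrics):
          return sum(WEIGHTS[name] * value for name, value in metrics.items())

      measured = {"robustness": 0.8, "privacy": 0.9,
                  "transparency": 0.6, "accountability": 0.7}
      score = trust_score(measured)  # 0.77 for these example measurements
      print("deploy" if score >= 0.75 else "review required")

    An adaptive system would re-evaluate such a score as the measured metrics change, tightening or relaxing the policy gate accordingly.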

    Understanding and engineering human factors in these systems is an important research area with high relevance for Industry 5.0, 6.0 and lifelong learning. Secure systems for data exchange with HITL models are needed to monitor, experiment with, and build knowledge of human factors in AI and immersive systems. Knowledge of human factors can inform strategies for productivity enhancement and risk mitigation, which in turn can inform the development of HITL models for trustworthy AI systems.


    [i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

    [ii] Gomstyn A., McGrath A., Jonker A. (2024). What is trustworthy AI? IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai

    [iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science. International Conference on Digital Sovereignty (ICDS). DOI: 10.1016/j.procs.2025.02.067

  • Privacy-enhancing technologies

    The Royal Society. “Privacy-enhancing technologies.” Accessed 18.08.2025. https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/.

    What are Privacy Enhancing Technologies (PETs)? 

    Privacy Enhancing Technologies (PETs) are a suite of tools that can help maximise the use of data by reducing risks inherent to data use. Some PETs provide new tools for anonymisation, while others enable collaborative analysis on privately-held datasets, allowing data to be used without disclosing copies of data. PETs are multi-purpose: they can reinforce data governance choices, serve as tools for data collaboration or enable greater accountability through audit. For these reasons, PETs have also been described as “Partnership Enhancing Technologies” or “Trust Technologies”.
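
    One commonly cited PET is differential privacy, which answers aggregate queries with calibrated noise so that the presence or absence of any individual record has a provably bounded effect on the output. The sketch below uses only the Python standard library and is illustrative, not production-grade; the records are invented.

      import math
      import random

      def laplace_noise(scale):
          # Sample Laplace(0, scale) via the inverse CDF.
          u = random.random() - 0.5
          return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

      def dp_count(records, predicate, epsilon=1.0):
          true_count = sum(1 for r in records if predicate(r))
          sensitivity = 1.0  # one record changes a count by at most 1
          return true_count + laplace_noise(sensitivity / epsilon)

      patients = [{"age": a} for a in [34, 51, 67, 72, 45]]
      print(dp_count(patients, lambda r: r["age"] > 50))  # noisy count, near 3

    A smaller epsilon means stronger privacy but noisier answers, which is exactly the kind of risk-benefit balance the questions below ask about.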

    What is data privacy, and why is it important?

    The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our 2017 British Academy and Royal Society report Data management and use: Governance in the 21st century.

    How can technology support data governance and enable new, innovative uses of data for public benefit?

    The Royal Society’s Privacy Enhancing Technologies programme investigates the potential for tools and approaches collectively known as Privacy Enhancing Technologies, or PETs, in maximising the benefit and reducing the harms associated with data use.

    Our 2023 report, From privacy to partnership: the role of Privacy Enhancing Technologies in data governance and collaborative analysis (PDF), was undertaken in close collaboration with the Alan Turing Institute, and considers the potential for PETs to revolutionise the safe and rapid use of sensitive data for wider public benefit. It considers the role of these technologies in addressing data governance issues beyond privacy, addressing the following questions:

    • How can PETs support data governance and enable new, innovative uses of data for public benefit? 
    • What are the primary barriers and enabling factors around the adoption of PETs in data governance, and how might these be addressed or amplified? 
    • How might PETs be factored into frameworks for assessing and balancing risks, harms and benefits when working with personal data? 

    In answering these questions, our report integrates evidence from a range of sources, including the advice of an expert Working Group, consultation with stakeholders across sectors, and a synthetic data explainer and commissioned reviews on UK public sector PETs adoption (PDF) and PETs standards and assurances (PDF), which are available for download.

  • Showstoppers: Limitations and Risks of AI Deployment in Critical Sectors

    VentureNet participated in the research project “INN-the-Loop” (2024-2025), which produced an eye-opening analysis of the risks and limitations of AI deployment in critical sectors. The project also analysed solutions and produced a roadmap for the development of sovereign digital infrastructure for deploying trustworthy AI in critical sectors such as healthcare.

    View or download the report on Showstoppers: Limitations and Risks of AI.
