Tag: Trust

  • Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry

    Moradi, M., Hepsø, V., & Schiefloe, P. M. (2024). Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry. Journal of Innovation & Knowledge, 9(3), 100523. ISSN 2444-569X. https://doi.org/10.1016/j.jik.2024.100523
    (https://www.sciencedirect.com/science/article/pii/S2444569X24000623)

    Abstract

    There has been a growing interest in open-source innovation and collaborative software development ecosystems in recent years, particularly in industries dominated by intellectual property and proprietary practices.

    However, consortia engaged in these collaborative efforts often struggle to balance the competing dynamics of trust and power. Collaborative knowledge creation is pivotal to the long-term sustainability of an ecosystem; knowledge sharing can be fostered by steering trust judgments toward reciprocity.

    Drawing on a longitudinal case study of the Open Subsurface Data Universe ecosystem, we investigate the intricate interplay between trust and power and its pivotal influence on ecosystem governance. Our investigation charts the trajectory of trust and power institutionalization and reveals how it synergistically contributes to the emergence of comprehensive hybrid governance strategies.

    We make the following two contributions to extant research. First, we elucidate a perspective on the conceptual interplay between power and trust, conceiving these notions as mutual substitutes and complements. Together, they synergistically foster the institutionalization and dynamic governance processes in open-source ecosystems. Second, we contribute to the governance literature by emphasizing the significance of viewing governance as a configuration of institutionalization processes and highlighting the creation of hybrid forms of governance in complex innovation initiatives.

    Keywords: Open source; Innovation; Cocreation; Governance; Institutional trust; Power

  • FAME: Federated decentralized trusted dAta Marketplace for Embedded finance

    January 1, 2023 – December 31, 2025

    (PROJECT)

    FAME. “FAME.” Accessed August 13, 2025. https://www.fame-horizon.eu.

    FAME is a joint effort of world-class experts in data management, data technologies, the data economy, and digital finance, aiming to develop and launch to the global market a unique, trustworthy, energy-efficient, and secure federated data marketplace for Embedded Finance (EmFi).

    The FAME project has received funding from the European Union’s Horizon Europe Research and Innovation Programme under grant agreement No 101092639. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or Horizon Europe. Neither the European Union nor the granting authority can be held responsible for them.

    Marketplace

    The FAME Federated Data Marketplace for Embedded Finance is a Data Space, as defined by the EU, customized for buying and selling federated data assets in the financial sector.

    Accessible through a unified entry point, the Marketplace allows for secure data access, sharing, trading and analysis through FAME’s analytical tools for both finance and non-finance organizations, tech and non-tech users, and other end-users.

    Why FAME?

    Modern data marketplaces are transforming how data assets are shared, traded, and utilized. Recent European initiatives have made significant strides, particularly in enhancing data monetization, regulatory compliance, and secure data exchange. However, existing centralized marketplaces face challenges that limit broader participation and accessibility. Notable limitations include complex data discovery processes, limited transparency in value-based data monetization, and insufficient integration of trusted, energy-efficient analytics. Addressing these gaps can unlock new data-driven applications in sectors like finance, retail, and smart cities, empowering innovative services that seamlessly integrate financial data. That is where FAME comes in.

  • ENFIELD: European Lighthouse to Manifest Trustworthy and Green AI

    September 1, 2023 – August 30, 2026

    (PROJECT)

    NTNU. “ENFIELD.” Accessed August 13, 2025. https://enfield-project.eu.

    ENFIELD is set to establish a distinctive European Center of Excellence focused on advancing fundamental research in Green, Adaptive, Human-Centric, and Trustworthy AI, and applied research within key sectors such as Energy, Healthcare, Manufacturing, and Space.

    Promoted by 30 leading research institutions, businesses, and public sector representatives from 18 countries, the project builds a vibrant AI community of the brightest minds from across Europe.

    The ENFIELD network will deliver over 75 unique AI solutions, 180 high-impact publications, strategic documents, and extensive outreach efforts.

    Our Mission

    Our goal is to develop AI solutions that address challenges in sectors such as healthcare, energy, manufacturing, and space, while promoting sustainability and ethics.

    The ENFIELD Project is co-funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or DG CNECT. Neither the European Union nor the granting authority can be held responsible for them.  The University of Nottingham’s participation in the Horizon Europe Project ENFIELD is supported by UKRI grant number 10094603. 

    NTNU (Norwegian University of Science and Technology)

  • INN-the-Loop: Human-Centered Artificial Intelligence

    “INN-the-Loop” was a 2024 research project on artificial intelligence (AI) for critical sectors that analysed limitations and risks related to machine learning (ML), generative AI (GenAI), and Agentic AI. It identified significant risks and barriers, as well as solutions for the adoption of AI in healthcare and other critical sectors that demand high standards of accuracy, safety, security, and privacy.

    The project was managed by the University of Inland Norway (INN) with contributions from the Norwegian Computing Center (NR), SINTEF, NTNU SFI NORCICS (Norwegian Center for Cybersecurity in Critical Sectors), and VentureNet AS. The project was co-financed by the Regional Research Fund (RFF) Innlandet, supported by Innlandet County and the Research Council of Norway.

    Abstract

    Building on rapid development of and investment in Artificial Intelligence (AI), the year 2025 ushered in “Agentic AI” as the new frontier for Generative AI (GenAI). The implication is that virtual assistants will be able to autonomously solve problems, set goals, and increase productivity by automating workflows, generating documents, and supporting the humans who use AI-enabled systems.

    However, for Agentic AI to be suitable for use in critical sectors, a solution is needed to address inherent limitations of AI related to accuracy, safety, security, adaptivity, trustworthiness, and sustainability. This article summarizes results from a 2024 research project with leading Norwegian research institutions titled “INN-the-Loop”. The aim of the project was to pre-qualify a framework to design, develop, and test human-centric AI systems for critical sectors, with a focus on smart healthcare as a use case. The project’s findings on AI risks shed light on the importance of digital regulation for ensuring safety and security, while also presenting possible solutions for compliance automation to cost-effectively cope with changing regulatory, technical, and risk landscapes.

    This article describes a framework, methodology, and system/toolkit for developing trustworthy and sustainable AI systems with Humans-In-The-Loop (HITL). The framework aims to address the limitations and risks of current AI approaches by combining human-centred design with “Data Space” technologies, including privacy-enhancing technologies (PETs) for decentralised identity and data access management.

    The project’s results are aligned with European initiatives to develop federated, sustainable, and sovereign digital infrastructure for high-performance computing (HPC) and edge computing. The results can inform the design and planning of next-generation digital infrastructure, including local digital twins (LDT) and interconnected digital marketplaces, which can strengthen supply chain resilience in critical sectors.

    Download the full research report.

  • Trustworthy AI

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence,” pp. 26-27.

    To be trustworthy, a system needs to be resilient and consistently deliver outcomes that are aligned with stakeholder interests and expectations. Several factors can undermine digital trust, such as a security vulnerability or biased data, which can lead to erroneous analysis, misinformation, or device failure. In operational environments, dynamic factors such as security and safety risks can further affect digital trust.

    The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven requirements for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]

    IBM summarizes a similar set of principles for trustworthy AI: accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn 2024).[ii]

    For the purposes of discussion, the definition of Trustworthy AI can be simplified to: an AI system that remains continuously aligned with the interests and objectives of the system’s stakeholders. To achieve this, a Trustworthy AI system requires technological components that enable adaptive compliance with user preferences regarding privacy, data sharing, and objectives for using an AI-enabled system, as well as compliance with changing regulatory and technical requirements and with changing digital trust levels and threats.
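
    As an illustrative sketch of such a component (not taken from the report; the names UserPreferences, DataRequest, and is_compliant and all rules below are hypothetical), a Python fragment that evaluates a data-use request against user preferences and a regulatory flag might look like this:

      from dataclasses import dataclass, field

      @dataclass
      class UserPreferences:
          """Hypothetical user-preference record for an AI-enabled system."""
          allow_data_sharing: bool = False
          allowed_purposes: set = field(default_factory=lambda: {"diagnosis"})
          min_trust_level: float = 0.8  # required trust score of the requesting party

      @dataclass
      class DataRequest:
          """Hypothetical data-use request arriving at the system."""
          purpose: str
          requester_trust: float  # e.g. supplied by a trust framework
          lawful_basis: bool      # regulatory requirement satisfied at request time?

      def is_compliant(req: DataRequest, prefs: UserPreferences) -> bool:
          """Adaptive compliance check: every condition must hold per request."""
          return (
              prefs.allow_data_sharing
              and req.purpose in prefs.allowed_purposes
              and req.requester_trust >= prefs.min_trust_level
              and req.lawful_basis
          )

    Because the check runs on every request rather than caching a one-time consent, changes in preferences, trust levels, or regulation take effect immediately; this is what makes the compliance adaptive.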

    ‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI, which can demonstrate compliance with regulations, user preferences, and the AI HLEG guidelines using decentralised identifiers (DIDs) with verifiable credentials (VCs).
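
    As a simplified sketch of this mechanism (a production system would follow the W3C Verifiable Credentials Data Model and a standardized proof suite; the DIDs, claim, and canonicalisation below are illustrative assumptions), issuing and verifying a credential-like payload with an Ed25519 signature might look like this in Python, using the cryptography package:

      import json

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # Illustrative issuer key; in practice the public key would be resolved
      # from the issuer's DID document rather than generated on the spot.
      issuer_key = Ed25519PrivateKey.generate()

      credential = {
          "@context": ["https://www.w3.org/ns/credentials/v2"],
          "type": ["VerifiableCredential"],
          "issuer": "did:example:issuer-123",          # hypothetical DID
          "credentialSubject": {
              "id": "did:example:data-provider-456",   # hypothetical DID
              "claim": "certified-data-provider",
          },
      }

      # Sign a canonicalised payload (real proof suites define canonicalisation).
      payload = json.dumps(credential, sort_keys=True).encode()
      signature = issuer_key.sign(payload)

      def verify(cred: dict, sig: bytes, public_key) -> bool:
          """Return True if the signature matches the credential payload."""
          data = json.dumps(cred, sort_keys=True).encode()
          try:
              public_key.verify(sig, data)
              return True
          except InvalidSignature:
              return False

      assert verify(credential, signature, issuer_key.public_key())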

    More than 20 notable public, non-profit, and private organisations are developing trust frameworks and services to manage digital trust and data exchange with VCs. Examples include the EU eIDAS regulation and European initiatives such as ETSI, EBSI, the EUDI Wallet initiative, EUROPEUM-EDIC, Gaia-X, FIWARE, the iSHARE Foundation, the Eclipse Foundation, and MyData Global, as well as the US-based NIST and international bodies such as the IETF, W3C, the Trust over IP Foundation, and the Linux Foundation.

    To promote harmonization of digital solutions such as trust frameworks across initiatives, the EU adopted the Interoperable Europe Act in 2024. The Act is accompanied by a framework, label, and checklist to ensure that publicly funded digital services adhere to requirements for openness and reuse, and it is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed a Trust Management Infrastructure (TRAIN). Interoperability is particularly important for trust frameworks because it enables automation in data exchange and VC policy enforcement, and interoperability and automation together enable adaptivity in trust frameworks.

    Trust frameworks are generally based on three predominant models: credentials-based trust, reputation-based trust and trust in information resources based on credentials and past behaviour of entities.[iii] Research is needed to develop more sophisticated and autonomous systems that are also adaptive.
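
    To make the third, hybrid model concrete, the following Python sketch (our illustration, not taken from the cited paper; the weights, credential names, and neutral prior are assumptions) blends a credentials-based score with a reputation score derived from past behaviour:

      def credential_score(valid_credentials: set, required: set) -> float:
          """Credentials-based trust: share of required credentials held."""
          return len(valid_credentials & required) / len(required) if required else 0.0

      def reputation_score(outcomes: list) -> float:
          """Reputation-based trust: mean of past outcomes in [0, 1]."""
          return sum(outcomes) / len(outcomes) if outcomes else 0.5  # neutral prior

      def hybrid_trust(valid_credentials: set, required: set,
                       outcomes: list, w_cred: float = 0.6) -> float:
          """Hybrid model: weighted blend of credentials and past behaviour."""
          return (w_cred * credential_score(valid_credentials, required)
                  + (1 - w_cred) * reputation_score(outcomes))

      # One of two required credentials held, mostly positive history:
      # 0.6 * 0.5 + 0.4 * 0.9 = 0.66
      score = hybrid_trust({"iso27001"}, {"iso27001", "gdpr-audit"}, [1.0, 0.9, 0.8])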

    One area of research is directed toward developing frameworks and scorecards for measuring trust, risk, and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices, and digital supply chains; when these frameworks are combined with Adaptive AI, there is potential to automate compliance with rapidly changing landscapes of technologies, security risks, and user context.
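
    As an illustration of how such a scorecard might aggregate its metrics (the metric names, weights, and thresholds below are hypothetical and not taken from the NIST AI Risk Management Framework), consider this Python sketch:

      # Hypothetical scorecard: each metric is scored in [0, 1], 1 = highest risk.
      RISK_WEIGHTS = {
          "data_bias": 0.25,
          "security_vulnerabilities": 0.30,
          "privacy_exposure": 0.25,
          "model_drift": 0.20,
      }

      def overall_risk(metrics: dict) -> float:
          """Weighted average of per-dimension risk scores."""
          return sum(RISK_WEIGHTS[name] * score for name, score in metrics.items())

      def risk_category(score: float) -> str:
          """Map a numeric score to a reporting category (thresholds assumed)."""
          if score < 0.3:
              return "low"
          if score < 0.6:
              return "medium"
          return "high"

      assessment = {"data_bias": 0.4, "security_vulnerabilities": 0.2,
                    "privacy_exposure": 0.5, "model_drift": 0.3}
      print(risk_category(overall_risk(assessment)))  # -> "medium" (score 0.345)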

    Understanding and engineering human factors in these systems is an important research area with high relevance for Industry 5.0, Industry 6.0, and lifelong learning. Secure systems for data exchange with HITL models are needed to monitor, experiment with, and build knowledge of human factors in AI and immersive systems. Knowledge of human factors can inform strategies for productivity enhancement and risk mitigation, which in turn can inform the development of HITL models for trustworthy AI systems.
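
    A minimal sketch of a HITL gate of this kind (our illustration; the threshold and names are assumptions) routes low-confidence AI outputs to a human review queue so that the automated path handles only high-confidence cases:

      from dataclasses import dataclass

      @dataclass
      class ModelOutput:
          prediction: str
          confidence: float  # model's self-reported confidence in [0, 1]

      def hitl_decision(output: ModelOutput, review_queue: list,
                        threshold: float = 0.9) -> str:
          """Accept high-confidence outputs; escalate the rest to a human."""
          if output.confidence >= threshold:
              return output.prediction          # automated path
          review_queue.append(output)           # human-in-the-loop path
          return "pending-human-review"

      queue: list = []
      print(hitl_decision(ModelOutput("routine case", 0.95), queue))  # automated
      print(hitl_decision(ModelOutput("edge case", 0.70), queue))     # escalated

    The escalated cases accumulating in the queue are also the raw material for the human-factors knowledge building described above.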


    [i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

    [ii] Gomstyn A., McGrath A., Jonker A. (2024). What is trustworthy AI? IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai

    [iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science, International Conference on Digital Sovereignty (ICDS). https://doi.org/10.1016/j.procs.2025.02.067
