Category: Research Article

  • Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry

    Mahdis Moradi, Vidar Hepsø, Per Morten Schiefloe. Institutional complexity and governance in open-source ecosystems: A case study of the oil and gas industry. Journal of Innovation & Knowledge, Volume 9, Issue 3, 2024, 100523, ISSN 2444-569X. https://doi.org/10.1016/j.jik.2024.100523
    (https://www.sciencedirect.com/science/article/pii/S2444569X24000623)

    Abstract

    There has been a growing interest in open-source innovation and collaborative software development ecosystems in recent years, particularly in industries dominated by intellectual property and proprietary practices.

    However, consortiums engaged in these collaborative efforts often face difficulties in effectively balancing the competing dynamics of trust and power. Collaborative knowledge creation is pivotal in ensuring the long-term sustainability of the ecosystem; knowledge sharing can take place by steering trust judgments toward fostering reciprocity.

    Drawing on a longitudinal case study of the Open Subsurface Data Universe ecosystem, we investigate the intricate interplay between trust and power and its pivotal influence on ecosystem governance. Our investigation charts the trajectory of trust and power institutionalization and reveals how it synergistically contributes to the emergence of comprehensive hybrid governance strategies.

    We make the following two contributions to extant research. First, we elucidate a perspective on the conceptual interplay between power and trust, conceiving these notions as mutual substitutes and complements. Together, they synergistically foster the institutionalization and dynamic governance processes in open-source ecosystems. Second, we contribute to the governance literature by emphasizing the significance of viewing governance as a configuration of institutionalization processes and highlighting the creation of hybrid forms of governance in complex innovation initiatives.

    Keywords: Open source; Innovation; Cocreation; Governance; Institutional trust; Power

  • INN-the-Loop: Human-Centered Artificial Intelligence

    “INN-the-Loop” was a 2024 research project on artificial intelligence (AI) for critical sectors that analysed limitations and risks related to machine learning (ML), Generative AI (GenAI) and Agentic AI. It identified significant risks and barriers, as well as solutions for the adoption of AI in healthcare and other critical sectors that demand high standards of accuracy, safety, security, and privacy.

    The project was managed by the University of Inland Norway (INN) with contributions from the Norwegian Computing Center (NR), SINTEF, NTNU SFI NORCICS (Norwegian Center for Cybersecurity in Critical Sectors), and VentureNet AS. It was co-financed by the Regional Research Fund (RFF) Innlandet, which is supported by Innlandet County and the Research Council of Norway.

    Abstract

    Building on rapid developments and investments in Artificial Intelligence (AI), the year 2025 ushered in “Agentic AI” as the new frontier of Generative AI (GenAI). The implication is that virtual assistants will be able to autonomously solve problems, set goals, and increase productivity by automating workflows, generating documents, and supporting the humans who work with AI-based systems.

    However, for Agentic AI to be suitable for use in critical sectors, solutions are needed to address the inherent limitations of AI related to accuracy, safety, security, adaptivity, trustworthiness, and sustainability. This article summarizes the results of “INN-the-Loop”, a 2024 research project carried out with leading Norwegian research institutions. The aim of the project was to pre-qualify a framework to design, develop and test human-centric AI systems for critical sectors, with smart healthcare as the main use case. The project’s findings on AI risks highlight the importance of digital regulation for ensuring safety and security, and point to possible solutions for compliance automation that can cost-effectively cope with changing regulatory, technical and risk landscapes.

    This article describes a framework, methodology and system/toolkit for developing trustworthy and sustainable AI systems with Humans-In-The-Loop (HITL). The framework aims to address the limitations and risks of current AI approaches by combining human-centred design with “Data Space” technologies, including privacy-enhancing technologies (PETs) for decentralised identity and data access management.
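
    The report itself is prose, but as a rough illustration of the HITL principle described above, the sketch below shows a hypothetical approval gate in Python: an agent's proposed action is carried out automatically only when it is high-confidence and does not touch personal data, and is otherwise routed to a human reviewer. All names, fields and thresholds here are assumptions made for illustration, not part of the project's toolkit.

    # Hypothetical sketch of a human-in-the-loop (HITL) gate for an AI agent's
    # proposed actions; names and thresholds are illustrative only.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ProposedAction:
        description: str              # what the agent wants to do
        confidence: float             # model's self-reported confidence, 0..1
        touches_personal_data: bool   # triggers stricter review (consent / PET checks)

    def hitl_gate(action: ProposedAction,
                  ask_human: Callable[[ProposedAction], bool],
                  auto_approve_threshold: float = 0.95) -> bool:
        """Approve automatically only for low-risk, high-confidence actions;
        defer everything else to a human reviewer."""
        if action.touches_personal_data:
            # In this sketch, personal data always requires explicit human sign-off.
            return ask_human(action)
        if action.confidence >= auto_approve_threshold:
            return True
        return ask_human(action)

    if __name__ == "__main__":
        reviewer = lambda a: True  # simulated reviewer that approves everything
        routine = ProposedAction("Summarise anonymised ward statistics", 0.98, False)
        sensitive = ProposedAction("Access a patient's medication history", 0.99, True)
        print(hitl_gate(routine, reviewer))    # auto-approved
        print(hitl_gate(sensitive, reviewer))  # approved only after human review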

    The project’s results are aligned with European initiatives to develop federated, sustainable and sovereign digital infrastructure for high-performance computing (HPC) and edge computing. The results can inform the design and planning of next-generation digital infrastructure, including local digital twins (LDT) and interconnected digital marketplaces, which can strengthen supply chain resilience in critical sectors.

    Download the full research report.

  • Showstoppers: Limitations and Risks of AI Deployment in Critical Sectors

    VentureNet participated in the research project “INN-the-Loop” (2024-2025), which produced an eye-opening analysis of the risks and limitations of AI deployment in critical sectors. The project also analysed solutions and produced a roadmap for the development of sovereign digital infrastructure for deploying trustworthy AI in critical sectors such as healthcare.

    View or download the report on Showstoppers: Limitations and Risks of AI.
