Tag: Data Sharing

  • ARES Conference 2025 – 20th International Conference on Availability, Reliability and Security

August 11 @ 12:00 pm – August 14 @ 3:00 pm CEST

    (EVENT)

Ghent University. “ARES Conference.” Accessed 07.08.2025. https://2025.ares-conference.eu.

Since 2006, ARES has focused on rigorous and novel research in dependability and in computer and information security. Several workshops held in cooperation with the conference cover a wide variety of security topics.

    SBA Research

    View Organizer Website

    Campus Tweekerken, UGent

    Tweekerkenstraat 2, Faculteit Economie en Bedrijfskunde, Ghent University
    Ghent, 9000 Belgium
    + Google Map
  • Trustworthy AI

    Haukaas C.A., Fredriksen P.M., Abie H., Pirbhulal S., Katsikas S., Lech C.T., Roman D. (2025). “INN-the-Loop: Human-Guided Artificial Intelligence.” 26-27.

To be trustworthy, a system must be resilient and consistently deliver outcomes aligned with stakeholder interests and expectations. Several factors can undermine digital trust: a security vulnerability, for example, or biased data that leads to erroneous analysis, misinformation or device failure. In operational environments, dynamic factors such as emerging security and safety risks can further erode it.

The European Commission’s High-Level Expert Group on AI (AI HLEG) has defined seven requirements for Trustworthy AI: 1) Human agency and oversight, 2) Technical robustness and safety, 3) Privacy and data governance, 4) Transparency, 5) Diversity, non-discrimination and fairness, 6) Societal and environmental well-being, and 7) Accountability (AI HLEG 2019).[i]

IBM has summarized similar principles for trustworthy AI: accountability, explainability, fairness, interpretability and transparency, privacy, reliability, robustness, and security and safety (Gomstyn 2024).[ii]

For the purposes of discussion, Trustworthy AI can be defined simply as AI that remains continuously aligned with the interests and objectives of the system’s stakeholders. Achieving this requires technological components that enable adaptive compliance with user preferences on privacy, data sharing and the objectives of using an AI-enabled system, as well as with changing regulatory and technical requirements, digital trust levels and threats.

‘Data Space’ technologies include many of the technical standards and building blocks needed to develop Trustworthy AI that can demonstrate compliance with regulations, user preferences and the AI HLEG guidelines using decentralized identifiers (DIDs) with verifiable credentials (VCs).
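As an illustrative sketch, a credential attesting to a user’s data-sharing consent might follow the shape of the W3C Verifiable Credentials data model. The issuer and subject DIDs, the credential type and the claim values below are hypothetical, and a real VC would also carry a cryptographic proof section signed by the issuer:

```python
from datetime import datetime, timezone

# Minimal W3C-style verifiable credential; the DIDs, the
# "DataSharingConsentCredential" type and the consent claim are invented
# for this example, and the cryptographic proof section is omitted.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DataSharingConsentCredential"],
    "issuer": "did:example:regulator-123",       # hypothetical issuer DID
    "issuanceDate": "2025-01-15T09:00:00Z",
    "expirationDate": "2026-01-15T09:00:00Z",
    "credentialSubject": {
        "id": "did:example:device-456",          # hypothetical subject DID
        "consent": {"purpose": "health-research", "sharingAllowed": True},
    },
}

def is_temporally_valid(vc: dict, now: datetime) -> bool:
    """Check only the validity window; signature verification is out of scope."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    issued = datetime.strptime(vc["issuanceDate"], fmt).replace(tzinfo=timezone.utc)
    expires = datetime.strptime(vc["expirationDate"], fmt).replace(tzinfo=timezone.utc)
    return issued <= now < expires

print(is_temporally_valid(credential, datetime(2025, 6, 1, tzinfo=timezone.utc)))  # True
```

In practice, policy enforcement would combine such temporal checks with verification of the issuer’s signature and revocation status.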

More than 20 notable public, non-profit and private organisations are developing trust frameworks and services to manage digital trust and data exchange with VCs. Examples include the EU eIDAS regulation, ETSI, EBSI, the EUDI Wallet initiative and EUROPEUM-EDIC, Gaia-X, FIWARE, the iSHARE Foundation, the Eclipse Foundation, MyData Global, and the U.S.-based NIST, IETF, W3C, Trust over IP and the Linux Foundation.

To promote harmonization of digital solutions, such as trust frameworks, across initiatives, the EU passed the Interoperable Europe Act in 2024. The Act is accompanied by a framework, label and checklist to ensure that publicly funded digital services meet requirements for openness and reuse, and it is supported by ongoing research and development projects on interoperability, such as the EU NGI eSSIF-Lab project, which developed the TRAIN trust management infrastructure. Interoperability matters particularly for trust frameworks because it enables automation of data exchange and VC policy enforcement, and interoperability and automation together enable adaptivity in trust frameworks.

Trust frameworks are generally based on three predominant models: credentials-based trust, reputation-based trust, and a hybrid model in which trust in information resources is based on both the credentials and the past behaviour of entities.[iii] Research is needed to develop more sophisticated and autonomous systems that are also adaptive.
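As a toy illustration of the third, hybrid model, a trust score might combine a binary credential check with a reputation average over past interactions. The weights, the neutral default of 0.5 and the rating scale are all invented for the example; real trust frameworks use far richer policy languages:

```python
def hybrid_trust_score(credential_valid: bool, past_outcomes: list[float],
                       w_cred: float = 0.5, w_rep: float = 0.5) -> float:
    """Combine credentials-based and reputation-based trust into one score in [0, 1].

    past_outcomes are ratings of earlier interactions, each in [0, 1];
    an entity with no history gets a neutral reputation of 0.5.
    The equal weights here are an arbitrary choice for illustration.
    """
    cred = 1.0 if credential_valid else 0.0
    rep = sum(past_outcomes) / len(past_outcomes) if past_outcomes else 0.5
    return w_cred * cred + w_rep * rep

# An entity with valid credentials and mostly good past behaviour:
print(hybrid_trust_score(True, [1.0, 0.8, 0.9]))  # ≈ 0.95
```

An adaptive framework would go further, re-weighting these terms as threat levels or user context change.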

One area of research is directed toward developing frameworks and scorecards for measuring trust, risk and privacy. Trust and risk assessment frameworks, such as the NIST AI Risk Management Framework, provide guidelines and metrics for measuring AI system risk and compliance. Additional frameworks are being developed to measure the digital trust of entities, devices and digital supply chains; combined with Adaptive AI, these have the potential to automate compliance with rapidly changing technologies, security risks and user contexts.

Understanding and engineering the human factors in these equations is an important research area with high relevance for Industry 5.0, Industry 6.0 and lifelong learning. Secure systems for data exchange with human-in-the-loop (HITL) models are needed to monitor, experiment and build knowledge of human factors in AI and immersive systems. That knowledge can inform strategies for productivity enhancement and risk mitigation, and in turn the development of HITL models for trustworthy AI systems.


    [i] AI HLEG (High-Level Expert Group on Artificial Intelligence) (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

    [ii] Gomstyn A., McGrath A., Jonker A. (2024). What is trustworthy AI? IBM Blog. https://www.ibm.com/think/topics/trustworthy-ai#:~:text=Trustworthy%20AI%20refers%20to%20artificial,among%20stakeholders%20and%20end%20users.

    [iii] Tith D., Colin J.N. (2025). A Trust Policy Meta-Model for Trustworthy and Interoperability of Digital Identity Systems. Procedia Computer Science. International Conference on Digital Sovereignty (ICDS). DOI: 10.1016/j.procs.2025.02.067

  • Privacy-enhancing technologies

    The Royal Society. “Privacy-enhancing technologies.” Accessed 18.08.2025. https://royalsociety.org/news-resources/projects/privacy-enhancing-technologies/.

    What are Privacy Enhancing Technologies (PETs)? 

Privacy Enhancing Technologies (PETs) are a suite of tools that can help maximise the use of data by reducing the risks inherent to data use. Some PETs provide new tools for anonymisation, while others enable collaborative analysis on privately held datasets, allowing data to be used without disclosing copies of it. PETs are multi-purpose: they can reinforce data governance choices, serve as tools for data collaboration, or enable greater accountability through audit. For these reasons, PETs have also been described as “Partnership Enhancing Technologies” or “Trust Technologies”.
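One widely used PET is differential privacy, which lets an aggregate statistic be released without revealing any individual record. A minimal sketch, using the standard Laplace mechanism (the dataset, the privacy budget epsilon and the assumed value bound are all illustrative):

```python
import math
import random

def dp_sum(values: list[float], epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a sum with Laplace noise of scale sensitivity/epsilon — the
    standard mechanism for epsilon-differential privacy on bounded values."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                       # uniform on [-0.5, 0.5)
    # Sample Laplace(0, scale) via the inverse CDF of the uniform draw.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(values) + noise

# Each participant's value is assumed to lie in [0, 1], so sensitivity = 1;
# smaller epsilon means more noise and stronger privacy.
noisy_total = dp_sum([0.2, 0.9, 0.4, 0.7], epsilon=1.0)
```

The same idea underpins collaborative analysis: each party learns only a noised aggregate, never the underlying records.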

    What is data privacy, and why is it important?

    The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our 2017 British Academy and Royal Society report Data management and use: Governance in the 21st century.

    How can technology support data governance and enable new, innovative uses of data for public benefit?

    The Royal Society’s Privacy Enhancing Technologies programme investigates the potential for tools and approaches collectively known as Privacy Enhancing Technologies, or PETs, in maximising the benefit and reducing the harms associated with data use.

    Our 2023 report, From privacy to partnership: the role of Privacy Enhancing Technologies in data governance and collaborative analysis (PDF), was undertaken in close collaboration with the Alan Turing Institute, and considers the potential for PETs to revolutionise the safe and rapid use of sensitive data for wider public benefit. It considers the role of these technologies in addressing data governance issues beyond privacy, addressing the following questions:

    • How can PETs support data governance and enable new, innovative uses of data for public benefit? 
    • What are the primary barriers and enabling factors around the adoption of PETs in data governance, and how might these be addressed or amplified? 
    • How might PETs be factored into frameworks for assessing and balancing risks, harms and benefits when working with personal data? 

In answering these questions, our report integrates evidence from a range of sources, including advice from an expert Working Group, consultation with stakeholders across sectors, a synthetic data explainer, and commissioned reviews on UK public sector PETs adoption (PDF) and PETs standards and assurances (PDF), all of which are available for download.

  • Showstoppers: Limitations and Risks of AI Deployment in Critical Sectors

    VentureNet participated in a research project “INN-the-Loop” (2024-2025), which produced eye-opening analysis of risks and limitations of AI deployment in critical sectors. The project also analysed solutions and produced a roadmap for development of sovereign digital infrastructure for deploying trustworthy AI in critical sectors such as healthcare.

    View or download the report on Showstoppers: Limitations and Risks of AI.

  • North European Cyber Days 2025

November 4 @ 8:00 am – November 6 @ 5:00 pm CET

    (EVENT)

    The North European Cyber Days brings together stakeholders and communities from cybersecurity, critical sectors and artificial intelligence (AI) to discuss common challenges, collaboration and financing opportunities to strengthen cyber resilience and data-driven innovation in critical sectors in Europe.

    Event Webpage:  North European Cyber Days 2025.

    Registration Link: https://nettskjema.no/a/north-european-cyber-days-2025  

    Location: Oslo Science Park, Gaustadalléen 21, 0349 Oslo, Norway

    Dates and Times:

• Pre-conference evening reception at Oslo Town Hall | Monday, 3 November | 18:00 – 19:30 (CET)
• Day 1 | ECSO Investor Day | Tuesday, 4 November | 08:30 – 17:00 (CET)
• Day 2 | ECSO Solution and NECC Industry Day | Wednesday, 5 November | 08:30 – 17:00 (CET)
• Day 3 | North European Brokerage Day | Thursday, 6 November | 08:30 – 17:00 (CET)
• (Optional) Day 4 | Activities in the Oslo Region | Friday, 7 November

Organized in collaboration with the European Cyber Security Organisation (ECSO), the North European Cybersecurity Cluster (NECC), VentureNet, the Norwegian Computing Center (NR), NTNU, the Norwegian Center for Cybersecurity in Critical Sectors (SFI NORCICS), the Norwegian Ecosystem for Secure IT-OT Integration (NESIOT), the International Alliance for Healthcare Security and Privacy (CybAlliance), the University of Jyväskylä, and the Oslo Science Park.

    The event is tailored for CTOs, CISOs, and senior decision makers from the following organisation types:

    • cybersecurity solution providers
    • public sector and municipalities
    • industry (corporations and SMEs)
    • industry clusters
    • investors
    • innovation accelerators
    • financial and development institutions
    • experts from research institutions

    Topics and Activities

    The 3-day event will focus on cybersecurity as a key enabler of trustworthy artificial intelligence, secure IT-OT integration, privacy-preserving data sharing, collective intelligence and defence, adaptive security, supply chain resilience, digital sovereignty, competitiveness, productivity and data-driven innovation gains in critical sectors and smart communities. The final Brokerage Day will feature 3 parallel afternoon sessions focusing on the cross-sector application of cybersecurity and trustworthy AI solutions in critical sectors, particularly healthcare, energy, mobility, and smart communities. The event is structured as follows:

1. ECSO Investor Day 4/11: Showcase leading innovations and investment opportunities in cybersecurity and AI, and demo some of the latest digital platforms for community building and knowledge exchange.
    2. ECSO Solution and NECC Industry Day 5/11: Build knowledge of key issues and solutions in cybersecurity and data-driven innovation in critical sectors with a balance of insightful keynotes, panel discussions and networking.
    3. North European Brokerage Day 6/11: Learn more about major international initiatives, financing and collaboration opportunities where you can join research and commercial development projects and programs to build secure infrastructure and resilient value chains in critical sectors.

    We look forward to welcoming you to the North European Cyber Days in Oslo!

    Details

    Start:
    November 4 @ 8:00 am CET
    End:
    November 6 @ 5:00 pm CET
    Cost:
    Free admission
    Website:
    https://ecs-org.eu/events/the-north-european-cyber-days/

    Organizers

    European Cyber Security Organisation (ECSO)
    VentureNet
    North European Cybersecurity Cluster
    Norwegian Computing Center
