
Trustworthy AI Cluster

HORIZON-CL4-2021-HUMAN-01 is the European Commission call under which nine projects were funded. In these projects, solid scientific developments will be complemented by tools and processes for design, testing and validation, certification, software engineering methodologies, and approaches to modularity and interoperability, all aimed at real-world applications. The funded projects also propose standardisation methods to foster the AI industry, helping to create and guarantee trustworthy and ethical AI in support of the Commission's regulatory framework.

Each funded project focuses on advancing the state of the art in one of the major research areas below:

– Novel or promising learning approaches, as well as symbolic and hybrid approaches. The objective is to advance the “intelligence” and autonomy of AI-based systems, which is essential to scaling up deployment: solving a wider set of more complex problems, adapting to new situations (making systems “smarter”, i.e. more accurate, robust, dependable, versatile, reliable, secure, safer, etc.), and addressing real-time performance requirements, where relevant, for both robotic and non-embodied AI systems.

– Advanced transparency in AI, including advances in explainability, investigating novel or improved approaches that increase users’ understanding of AI system behaviour and therefore their trust in such systems.

– Greener AI, increasing data and energy efficiency. This covers research towards lighter, less data-intensive and less energy-consuming models: learning processes optimised to require less input (data-efficient AI), optimised models, data augmentation, synthetic data, transfer learning, one-shot learning, continuous/lifelong learning, architectures optimised for energy-efficient hardware, and frameworks that optimise calculations to reduce the energy cost of big data analytics.

– Advances in edge AI networks, bringing intelligence near sensors in embedded systems with limited computational, storage and communication resources, integrating advanced and adaptive sensors and perception, and optimising the split between edge and cloud AI to maximise the capabilities of the overall system (both globally and for individual users).

– Complex systems & socially aware AI, able to anticipate and cope with the consequences of complex network effects in large-scale mixed communities of humans and AI systems interacting over various temporal and spatial scales. This includes the ability to balance requirements related to individual users with the common good and societal concerns, including sustainability, non-discrimination, equity, diversity, etc.

Under this call, the following projects were funded alongside EVENFLOW:

AutoFair (“Human-Compatible AI with Guarantees”) is a Horizon Europe project that seeks to address the need for trusted AI and user-in-the-loop tools and systems in a range of industry applications through:

Comprehensive and flexible certification of fairness. At one end, we can consider risk-averse a priori guarantees on certain bias measures, enforced as hard constraints in the training process. At the other end, we can consider a post hoc, comprehensible but thorough presentation of all the trade-offs involved in the design of an AI pipeline and their effect on industrial and bias outcomes.

User-in-the-loop: continuous, iterative engagement among AI systems, their developers and their users. We seek both to inform users thoroughly about the possible algorithmic choices and their expected effects, and to learn users' preferences regarding different fairness measures, subsequently guiding decision-making to bring together the benefits of automation in a human-compatible manner.

Toolkits for the automatic identification of various types of bias, and for their joint compensation by automatically optimising various, potentially conflicting objectives (fairness/accuracy/runtime/resources), visualising the trade-offs, and making it possible to communicate the trade-offs to industrial users, government agencies, NGOs, or members of the public, where appropriate. Both ingredients, the constrained training and the trade-off exploration, are sketched below.
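To make these two ingredients concrete, here is a minimal sketch, not AutoFair's actual method and with all names illustrative: a bias measure (the demographic-parity gap) treated as a soft constraint while training a logistic-regression model, plus a filter that keeps only the non-dominated accuracy/fairness configurations a toolkit could then visualise.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Gradient descent on log-loss + lam * (demographic-parity gap)^2."""
    w = np.zeros(X.shape[1])
    a, b = group == 0, group == 1                    # masks for the two groups
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)           # log-loss gradient
        gap = p[a].mean() - p[b].mean()              # the bias measure
        s = p * (1.0 - p)                            # sigmoid derivative
        grad_gap = (X[a] * s[a][:, None]).mean(axis=0) \
                 - (X[b] * s[b][:, None]).mean(axis=0)
        w -= lr * (grad_loss + 2.0 * lam * gap * grad_gap)
    return w

def pareto_front(configs):
    """Keep the (accuracy, fairness) pairs no other configuration dominates."""
    return [c for c in configs
            if not any(o[0] >= c[0] and o[1] >= c[1] and o != c for o in configs)]
```

Raising lam trades accuracy for a smaller parity gap; sweeping it and filtering the results with pareto_front yields exactly the kind of trade-off curve described above.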

Twitter: @AutoFair_EU


ENEXA is a European project developing human-centred explainable machine learning approaches for real-world knowledge graphs.

Human-centred, transparent and explainable AI is key to achieving a human-centred and ethical development of digital and industrial solutions. ENEXA builds upon novel and promising results in knowledge representation and machine learning to develop scalable, transparent, and explainable hybrid machine learning algorithms that combine symbolic and sub-symbolic learning. The project focuses on knowledge graphs with rich semantics as its knowledge representation mechanism because of their increasing popularity across domains and industries in Europe.

Some explainable and transparent machine learning approaches for knowledge graphs are known to already provide guarantees with respect to their completeness and correctness. However, they are still impossible or impractical to deploy on real-world data due to the scale, incompleteness and inconsistency of knowledge graphs in the wild.

ENEXA devises new machine learning approaches that maintain formal guarantees pertaining to completeness and correctness while exploiting different representations (formal logics, embeddings and tensors) of knowledge graphs in a concurrent fashion. With our new methods, we plan to achieve significant advances in the scalability of machine learning, especially on knowledge graphs. A key innovation of ENEXA lies in its approach to explainability. Here, we focus on devising human-centred explainability techniques based on the concept of co-construction, where human and machine enter a conversation to co-construct human-understandable explanations. The resulting approach is deployed in three sectors of European significance, i.e., business services, geospatial intelligence and brand marketing.
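As a toy illustration of exploiting symbolic and sub-symbolic representations of a knowledge graph side by side, not ENEXA's algorithms, and with the TransE scoring choice and all names being assumptions of this sketch:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility of a triple: ||h + r - t||; lower is more plausible."""
    return np.linalg.norm(h + r - t)

def plausible(triple, kg_facts, emb, threshold=1.0):
    """Accept a candidate (head, relation, tail) if it is already a fact in
    the graph (symbolic lookup) or scores well under the embedding model."""
    if triple in kg_facts:                          # formal-logic side: exact
        return True
    h, r, t = (emb[name] for name in triple)        # sub-symbolic side
    return transe_score(h, r, t) < threshold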

Twitter: @enexa_eu


The REXASI-PRO project aims to release a novel engineering framework for developing greener and trustworthy Artificial Intelligence solutions. The project will develop, in parallel, novel trustworthy-by-construction solutions for social navigation and a methodology to certify the robustness of AI-based autonomous vehicles for people with reduced mobility. The trustworthy-by-construction social navigation algorithms will exploit mathematical models of social robots, and the robots will be trained using both implicit and explicit communication. A novel learning paradigm embeds safety requirements into the deep neural networks used for planning, complemented by runtime monitoring based on conformal prediction regions, trustable sensing, and secure communication. The certification methodology will be applied to both autonomous wheelchairs and flying robots; the flying robots will be equipped with unbiased machine learning solutions for people detection that remain reliable even in emergencies. In this way, REXASI-PRO will also make its AI solutions greener. The REXASI-PRO framework will be demonstrated by enabling collaboration between autonomous wheelchairs and flying robots to help people with reduced mobility.
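The paragraph above names runtime monitoring based on conformal prediction regions; here is a minimal sketch of split conformal prediction, illustrative only and not the project's implementation:

```python
import numpy as np

def conformal_radius(cal_preds, cal_truth, alpha=0.1):
    """Radius such that, under exchangeability, a new true value lies within
    it of the prediction with probability at least 1 - alpha."""
    scores = np.abs(cal_preds - cal_truth)            # calibration residuals
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample level
    return np.quantile(scores, q)

def prediction_region(pred, radius):
    """Interval around the prediction; a runtime monitor can flag any action
    whose region overlaps an unsafe set."""
    return pred - radius, pred + radius
```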

Twitter: @REXASIPRO_EU

LinkedIn: REXASI-PRO


The EU-funded SAFEXPLAIN (Safe and Explainable Critical Embedded Systems based on AI) project launched on 1 October 2022. It seeks to lay the foundation for Critical Autonomous AI-based Systems (CAIS) that are smarter and safer by ensuring they adhere to functional safety requirements in environments that demand fast, real-time responses and that increasingly run on the edge.

The Deep Learning (DL) technology that supports AI is key to most future advanced software functions in CAIS; however, there is a fundamental gap between its Functional Safety (FUSA) requirements and the nature of DL solutions. The lack of transparency (mainly explainability and traceability) and the data-dependent, stochastic nature of DL software clash with the need for clear, verifiable, pass/fail test-based software solutions for CAIS. SAFEXPLAIN tackles this challenge by providing a novel and flexible approach to the certification, and hence adoption, of DL-based solutions in CAIS.

This three-year project brings together a six-partner consortium representing academia and industry.

Twitter: @SafexplainAI

LinkedIn: SAFEXPLAIN


SustainML: Application-Aware, Life-Cycle-Oriented Model-Hardware Co-Design Framework for Sustainable, Energy-Efficient ML Systems.

This project is based on the insight that, in order to significantly reduce the CO2 footprint of ML applications, power-aware applications must be as easy to develop as standard ML systems are today. Users with little or no understanding of the trade-offs between different architecture choices and their energy footprint should be able to easily reduce the power consumption of their applications.

We envision the development of a sustainable, interactive ML framework for Green AI that will comprehensively prioritise and advocate energy efficiency across the entire life cycle of an application and avoid AI-waste. Its objectives are:

  • O1: Model the requirements of specific ML applications.
  • O2: Develop resource-aware optimisation methods based on the models from the previous objective.
  • O3: Build an interactive design assistant, transparent about footprint and AI-waste, that guides developers through the entire process.
  • O4: Collect efficient methods and cores as catalogues and libraries of energy-optimised, parameterised ML models.
  • O5: Implement a dedicated toolchain.
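As a back-of-the-envelope illustration of the kind of footprint accounting the design assistant in O3 could surface; the power draw, overhead factor and grid carbon intensity below are assumptions of this sketch, not SustainML figures:

```python
def training_footprint(gpu_watts=300.0, hours=24.0, pue=1.5, kg_co2_per_kwh=0.3):
    """Energy (kWh) and emissions (kg CO2e) of one training run.
    pue: data-centre overhead factor; kg_co2_per_kwh: grid carbon intensity."""
    kwh = gpu_watts / 1000.0 * hours * pue
    return kwh, kwh * kg_co2_per_kwh

# e.g. training_footprint() returns roughly (10.8, 3.24):
# 10.8 kWh consumed, about 3.2 kg CO2e emitted
```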


The expected diversity of services, use cases, and applications in Industry 5.0 (I5.0) requires a flexible, adaptable, and programmable AI architecture that optimises the split between edge and cloud AI to maximise the performance of the overall system. In the face of this challenge, TALON introduces an AI orchestrator that envisions transforming I5.0 into an automated intelligent platform by exploiting advances in edge networks, bringing intelligence near sensors in embedded systems with limited computational, storage, and communication resources, and integrating advanced and adaptive sensors and perception.

In this direction, TALON’s AI orchestrator maximises both global and individual users’ and systems’ capabilities without violating the design parameters of each application. In particular, the orchestrator selects AI datasets, algorithms, metrics, and models based on the application, creating a new system architecture that makes the most of the available resources (a toy placement rule is sketched after the list) by

  • jointly optimising the edge and cloud resources,
  • enabling centralised, distributed, as well as hybrid intelligence and
  • transforming the AI network into a low-power computer, which will be able to use underutilised (commercial and business) resources.
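As a toy illustration of the first point, here is a placement rule that weighs on-device latency against the cost of shipping data to the cloud; all parameters are assumptions of this sketch, not TALON's orchestrator:

```python
from dataclasses import dataclass

@dataclass
class Task:
    flops: float          # compute demand of the AI task
    input_bytes: float    # data that must be shipped if offloaded
    deadline_s: float     # latency budget of the application

def place(task, edge_flops_s, cloud_flops_s, uplink_bytes_s):
    """Pick the faster placement, preferring the edge on ties since offloading
    also costs uplink energy. Returns the choice and whether it meets the
    application's deadline (its 'design parameter' here)."""
    t_edge = task.flops / edge_flops_s
    t_cloud = task.input_bytes / uplink_bytes_s + task.flops / cloud_flops_s
    choice, t = ("edge", t_edge) if t_edge <= t_cloud else ("cloud", t_cloud)
    return choice, t <= task.deadline_s
```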

Likewise, by following a holistic optimisation approach and leveraging developments in blockchain, TALON aims to support end-to-end (e2e) personalised and perpetual security and privacy. Finally, to accommodate the particularities of the TALON architecture that arise from its novel building blocks, such as the AI orchestrator, blockchain, edge networking, and digital twins (DTs), a new experimentally verified theoretical framework will be presented.

Twitter: @talon_project

LinkedIn: TALON


People are increasingly aware of both the great benefits and the risks of applying artificial intelligence (AI) in real-world settings. This has given rise to a number of initiatives to identify the principles underlying trustworthy AI systems, to understand the technical requirements, and to provide guidelines for their development. However, such guidelines remain very general and abstract, leaving open which novel technical foundations, algorithmic approaches, and tools are needed to implement trustworthy AI systems.

The overall aim of TUPLES is to respond to these guidelines by providing such foundations, approaches and tools in a particularly relevant area of AI research and applications: planning and scheduling (P&S).

TUPLES (TrUstworthy Planning and scheduling with Learning and ExplanationS) is a three-year project that will contribute to a more integrated and human-centred approach to the development of P&S tools, in order to increase confidence in these systems and accelerate their adoption.

Our ambition is to obtain scalable yet transparent, robust, and safe algorithmic solutions for planning and scheduling by designing methods that combine the power of data-driven and knowledge-based symbolic AI.
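One schematic way to combine data-driven and symbolic components in P&S, shown purely to illustrate the ambition above and not as a TUPLES algorithm: best-first search in which a learned model supplies the heuristic while a symbolic transition model guarantees that every returned plan is valid.

```python
import heapq
import itertools

def plan(start, goal, successors, learned_h):
    """Best-first (A*-style) search. successors(state) yields
    (action, next_state, cost) from a symbolic model, so returned plans are
    valid by construction; learned_h(state) is a learned heuristic estimate."""
    tie = itertools.count()                      # tie-breaker for the heap
    frontier = [(learned_h(start), next(tie), 0, start, [])]
    seen = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for action, nxt, cost in successors(state):
            heapq.heappush(frontier,
                           (g + cost + learned_h(nxt), next(tie),
                            g + cost, nxt, path + [action]))
    return None
```

If the learned heuristic is poor, the plan may be longer than necessary, but it can never be invalid: validity comes from the symbolic successors function, which is the kind of safety/transparency split the project targets.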

Twitter: @TuplesAI

LinkedIn: TUPLES Trustworthy AI

YouTube: @tuples_ai


The principle of hybrid AI is the coalescence of data-driven AI algorithms, which find patterns in data, and model-driven AI algorithms, which rely on physical models and constraints. This fusion of methods gives data the correct context, improving the quality of the learning process and the behaviour of AI algorithms.

Data-driven and model-driven AI clearly complement each other and form a critical foundation for the adoption of AI solutions in industry. However, hybrid AI does not by itself address the issue of trust, namely validity, transparency, explainability, and ethics, which must be tackled to achieve world-class hybrid AI technologies that are beneficial to humans individually, organisationally, and societally, and that adhere to European values.
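A common pattern behind such hybrids, sketched here only to illustrate the principle (the falling-object physics and all constants are assumptions of this example, not any project's method): fit the data while penalising violations of a known physical model.

```python
import numpy as np

def hybrid_loss(pred_height, pred_speed, obs_height,
                m=1.0, g=9.81, E0=100.0, lam=10.0):
    """Data term fits observations; physics term penalises violations of
    energy conservation (m*g*h + m*v^2/2 should stay equal to E0)."""
    data_term = np.mean((pred_height - obs_height) ** 2)        # data-driven
    energy = m * g * pred_height + 0.5 * m * pred_speed ** 2    # model-driven
    physics_term = np.mean((energy - E0) ** 2)
    return data_term + lam * physics_term
```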

The goal of the ULTIMATE (mUlti-Level Trustworthiness to IMprove the Adoption of hybrid arTificial intelligencE) European project is to pioneer the development of industrial-grade hybrid AI in three stages: ensuring trustworthiness by relying on interdisciplinary data sources and adhering to physical constraints (first stage); developing tools for explaining, evaluating and validating hybrid AI algorithms and asserting their adherence to ethical and legal regulations (second stage); and exemplifying these results on real-world industrial use cases under operational conditions (third stage) in the robotics domain (collaboration between humans and robots in logistics activities) and the space domain (failure detection for satellites), to promote the widespread adoption of hybrid AI in industrial settings.

LinkedIn: ULTIMATE – HorizonEurope 
