Reasoning-Based Forecast Interpreter

Explainable AI for trustworthy event forecasting

What is it about?

The Reasoning-Based Forecast Interpreter is a software component that lets users attach domain-specific rules to AI models in order to improve predictions in event forecasting applications. It is built on a hybrid neuro-symbolic framework, which also allows users to trace the logic behind an AI system’s decisions, particularly in scenarios involving multiple time-dependent variables. In this way, the tool addresses the “black-box” nature of deep learning, making complex forecasts more understandable and, potentially, more trustworthy.
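
To make the idea concrete, the sketch below shows, in plain Python, how a symbolic rule layer can sit on top of a neural forecaster and record which rules supported or opposed a prediction. All names here (neural_forecast, Rule, the example rules and weights) are illustrative assumptions for this sketch, not the component’s actual API.

```python
# Illustrative sketch only: a toy hybrid neuro-symbolic forecaster.
# Function, class, and rule names are hypothetical, not the EVENFLOW API.
from dataclasses import dataclass
from typing import Callable, Dict, List


def neural_forecast(features: Dict[str, float]) -> float:
    """Stand-in for a trained neural model's event-probability output."""
    # Hypothetical weighted sum clipped to [0, 1]; a real model would be learned.
    score = 0.6 * features["sensor_trend"] + 0.4 * features["load_level"]
    return max(0.0, min(1.0, score))


@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, float]], bool]  # symbolic, human-readable test
    effect: float  # how strongly the rule supports (+) or opposes (-) the event


def interpret(features: Dict[str, float], rules: List[Rule]) -> Dict[str, object]:
    """Combine the neural score with fired rules and return a traceable explanation."""
    base = neural_forecast(features)
    fired = [r for r in rules if r.condition(features)]
    adjusted = max(0.0, min(1.0, base + sum(r.effect for r in fired)))
    return {
        "neural_score": base,
        "final_score": adjusted,
        "fired_rules": [r.name for r in fired],  # the trace a user can inspect
    }


if __name__ == "__main__":
    rules = [
        Rule("rising_trend_supports_event", lambda f: f["sensor_trend"] > 0.7, +0.10),
        Rule("low_load_opposes_event", lambda f: f["load_level"] < 0.2, -0.15),
    ]
    print(interpret({"sensor_trend": 0.8, "load_level": 0.5}, rules))
```

The point of the sketch is only that the rules which fire form an explicit, inspectable trace alongside the numeric forecast; this is the kind of transparency the component aims to provide for real forecasting models.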

Who is it for?

  • Data Scientists and AI developers who need to justify or debug forecasting models
  • Research institutions focused on Explainable AI (XAI) research
  • Business analysts who want to interpret model outputs for decision-making
  • Compliance officers in regulated sectors (e.g., healthcare, finance) who require model explainability

Why use it?

  • Brings clarity and transparency to neural model outputs
  • Helps users analyze mispredictions and retrain models more effectively
  • Strengthens stakeholder trust in automated decision systems

How to access the tool?

The component is available as an open-source project on the EVENFLOW GitHub:

🔗 https://github.com/EVENFLOW-project-EU

Additional engagement includes:

  • Conference presentations
  • Academic publications
  • Collaboration offers to research teams for model testing and feedback

Further development will continue via research proposals and an open-source community of developers and users focused on XAI.

Who is involved?

Developed and maintained by the National Centre for Scientific Research ‘Demokritos’ (NCSR) within the EVENFLOW project framework.

Related publications: