EVENFLOW Verification Tool

Trustworthy AI forecasting starts with verification

What is it about?

The Verification Toolkit is an open-source component that helps users verify the robustness and reliability of complex neural models used in Complex Event Forecasting (CEF). It implements novel methods for validating online neuro-symbolic learning, enabling AI systems to produce stable predictions even when input data shifts slightly or behaves unexpectedly. In doing so, the toolkit addresses the challenges of high-dimensional, time-sensitive data streams typical of domains such as manufacturing, healthcare, energy, and finance.
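To make the idea concrete, here is a minimal, hypothetical sketch of the kind of local robustness check such a toolkit performs: perturb an input window with bounded noise and confirm the model's prediction does not change. All names here are illustrative, not the toolkit's actual API, and the toy forecaster stands in for a real neural CEF model.

```python
import random

def forecast(window):
    # Toy stand-in for a neural CEF model: predicts an "event" (1)
    # when the mean of the input window exceeds a threshold.
    return 1 if sum(window) / len(window) > 0.5 else 0

def is_robust(model, window, epsilon=0.05, trials=100, seed=0):
    # Sampled local robustness check: the prediction must stay the
    # same under every sampled perturbation whose per-element
    # magnitude is at most epsilon.
    rng = random.Random(seed)
    baseline = model(window)
    for _ in range(trials):
        perturbed = [x + rng.uniform(-epsilon, epsilon) for x in window]
        if model(perturbed) != baseline:
            return False
    return True

window = [0.7, 0.8, 0.75, 0.9]
print(is_robust(forecast, window, epsilon=0.05))  # → True: small noise, stable
print(is_robust(forecast, window, epsilon=0.5))   # larger noise may flip the prediction
```

Sampling gives only empirical evidence; formal verification methods instead prove stability over the entire perturbation region, which is the stronger guarantee a verification toolkit targets.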

Who is it for?

  • AI Engineers working on neural models for event prediction
  • Verification experts involved in critical industries (e.g., predictive maintenance, fraud detection)
  • Research teams and start-ups that seek advanced tooling to validate AI model performance

Why use it?

  • Makes AI models more resilient to noise and input variations
  • Reduces the risk of sudden model failures and builds trust in automated forecasting systems
  • Integrates easily into existing machine learning pipelines

How to access the tool?

The toolkit is available as an open-source project on the EVENFLOW GitHub:

🔗 https://github.com/EVENFLOW-project-EU

Users and contributors can explore:

  • Documentation
  • Use cases and examples

Who is involved?

Developed and maintained by Imperial College London (ICL) within the EVENFLOW project. The National Centre for Scientific Research ‘Demokritos’ (NCSR) also contributes.

