Reliable AI Models: How to Deal With the Unknown?
Improving the reliability and robustness of modern AI models has received close attention lately due to its importance for critical applications. A significant challenge is the lack of any signal indicating that an input comes from unknown data. As a result, models can produce overconfident predictions on unseen inputs, leading to erroneous outcomes. This undermines model reliability and results in security, financial, and competitive compromises, and eventually a loss of trust. Out-of-Distribution (OOD) research has focused on mitigating this problem by developing detection techniques that cover different fronts of the issue. This talk introduces the OOD concept and its implications for model security. Subsequently, it highlights relevant use cases in which the impact of OOD data can be observed. Finally, the presentation concludes by pointing out the latest advances and the shortcomings still to be addressed in future research.
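The abstract does not name a specific detection technique; to make the idea concrete, the sketch below shows one widely used baseline, thresholding the maximum softmax probability (MSP), where an input whose top-class confidence is low is flagged as potentially out-of-distribution. The function names and the threshold value are illustrative assumptions, not a method from the talk.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability: higher suggests in-distribution."""
    return softmax(logits).max(axis=-1)

def flag_ood(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Flag inputs whose top-class confidence falls below the threshold
    as potential out-of-distribution samples (illustrative threshold)."""
    return msp_score(logits) < threshold

# A peaked logit vector (confident prediction) is kept; a nearly flat
# one (the model cannot decide) is flagged for further inspection.
confident = np.array([[6.0, 0.5, 0.2]])
uncertain = np.array([[1.0, 1.1, 0.9]])
print(flag_ood(confident), flag_ood(uncertain))
```

This baseline is deliberately simple; it illustrates the core difficulty the abstract describes, namely that the model's own confidence is the only signal available, and overconfident predictions on unseen inputs can slip past exactly this kind of check, which is what motivates more robust OOD detectors.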
Emmanuel Ferreyra Olivares, PhD, is an AI & Data Security researcher with Fujitsu Research of Europe. In this role, Emmanuel designs and develops reliable and secure data-driven AI solutions for highly regulated industries, working in close collaboration with global partners in both industry and academia. Emmanuel's research interests span the broad areas of Cyber Security, Explainable AI, Computational Intelligence and Smart Simulation.