Project overview
The need for trustworthy AI systems across diverse application domains has become pressing, not least because of the critical role that AI plays in the ongoing digital transformation addressing urgent socio-economic needs. Despite numerous recommendations and standards, most AI practitioners and decision makers still prioritize system performance as the main metric in their workflows, often neglecting to verify and quantify core attributes of trustworthiness, including traceability, robustness, security, transparency and usability. In addition, trustworthiness is not assessed throughout the lifecycle of AI system development, so developers often fail to gain a holistic view across different AI risks. Lastly, the lack of a unified, multi-disciplinary AI, Data and Robotics ecosystem for assessing trustworthiness across several critical AI application domains hampers the definition and implementation of a robust AI paradigm-shift framework towards increased trustworthiness and accelerated AI adoption.
To address these critical unmet needs, the FAITH innovation action will develop and validate a human-centric trustworthiness optimization ecosystem that enables measuring, optimizing and counteracting the risks associated with AI adoption and trustworthiness in critical domains, namely robotics, education, media, transport, healthcare, active ageing, and industrial processes, through seven international Large Scale Pilots. Notably, cross-fertilization actions will create a joint outcome that brings together the visions and specificities of all the pilots. To this end, the project will adopt a dynamic risk management approach following EU legislative instruments and ENISA guidelines, and will deliver tools to be widely used across different countries and settings, while diverse stakeholder communities will be engaged in each pilot, delivering seven sector-specific reports on trustworthiness to accelerate AI take-up.