Trustworthy AI often focuses on the trust we should, or shouldn’t, place in AI systems – whether they are capable, reliable, resilient, transparent, and so on. However, it is just as important to know when to trust, or not to trust, the developers of AI systems. In this AI for Good Discovery, Dr Shahar Avin will describe why it is currently hard to evaluate the trustworthiness of AI developers, and will then outline how a combination of interlocking mechanisms, from red teaming to third-party audits, could create a system in which evaluating that trustworthiness becomes easier.
This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.