Jindong is a Senior Research Scientist at Google, specializing in advancing the reliability and safety of AI technologies, and also a Senior Research Fellow at the University of Oxford. Previously, Jindong worked on the safety of foundation models within the Gemini Safety team at DeepMind; current work focuses on enhancing the reliability and robustness of AI agents as part of the CAIR team.
Jindong earned a Ph.D. in 2022 from the Tresp Lab at the University of Munich, Germany, under the supervision of Prof. Volker Tresp. Following this, in 2023 Jindong joined the Torr Vision Group, led by Prof. Philip Torr at the University of Oxford, UK, as a Postdoctoral Researcher.
Jindong’s research aims to build Responsible AI, with a particular focus on interpretability, robustness, privacy, and safety. Specific areas of interest include:
- Visual Perception
- Foundation Model-based Understanding and Reasoning
- Robotic Policy and Planning
- The integration of these fields toward the development of General Intelligence Systems
Lecture: Responsible Generative AI – Ensuring Safety in Textual and Visual Generation
This talk addresses key safety challenges in generative AI, focusing on text-to-image and image-to-text systems. For text-to-image generation with diffusion-based models, I will cover detecting harmful prompts, removing inappropriate content during generation, and tracing the origins of problematic images. In the image-to-text domain, I will show how images can be used to mislead the multi-task prompting and chain-of-thought reasoning of multimodal LLMs, and to jailbreak their alignment. The talk concludes with a comprehensive overview of current practices and emerging solutions for ensuring the safety and reliability of generative AI systems.