What’s new?

  • (2024.02) Our paper titled “Prompt Stealing Attacks Against Text-to-Image Generation Models” got accepted at USENIX Security 2024! See you in Philadelphia!
  • (2023.09) Our research on Jailbreak Prompts got covered by New Scientist and Deutschlandfunk Nova!
  • (2023.05) Our paper titled “Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models” got accepted at CCS 2023! See you in Copenhagen!
  • (2023.04) We released a new technical report, “In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT”, on the trustworthiness of ChatGPT!
  • (2023.04) I will serve on the Poster Program Committee of IEEE S&P 2023.
  • (2023.03) We released MGTBench, a benchmark for current detection methods for machine-generated text (e.g., by ChatGPT)!
  • (2023.01) Our paper titled “Prompt Stealing Attacks Against Text-to-Image Generation Models” is online, serving as the first study on prompt stealing attacks!
  • (2022.10) Our paper titled “Backdoor Attacks in the Supply Chain of Masked Image Modeling” is online. You can read it here!
  • (2022.07) I have successfully passed the Qualifying Exam!
  • (2022.03) Our paper titled “On Xing Tian and the Perseverance of Anti-China Sentiment Online” got accepted at ICWSM 2022!
  • (2021.04) I will be a Ph.D. student at the CISPA Helmholtz Center for Information Security!
  • (2020.11) Our paper titled “Evil Under the Sun: Understanding and Discovering Attacks on Ethereum Decentralized Applications” got accepted at USENIX Security 2021!