Publications
During PhD
JailbreakRadar: Comprehensive Assessment of Jailbreak Attacks Against LLMs
Junjie Chu, Yugeng Liu, Ziqing Yang, Xinyue Shen, Michael Backes, Yang Zhang
Selected for oral presentation at ACL’25 (oral rate ≈ 8%)
Neeko: Model Hijacking Attacks Against Generative Adversarial Networks
Junjie Chu, Yugeng Liu, Xinlei He, Michael Backes, Yang Zhang, Ahmed Salem
Generated Data with Fake Privacy: Hidden Dangers of Fine-Tuning Large Language Models on Generated Data
Atilla Akkus, Mingjie Li, Junjie Chu, Michael Backes, Yang Zhang, Sinem Sav
Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models
Junjie Chu, Zeyang Sha, Michael Backes, Yang Zhang
Selected for oral presentation at EMNLP’24