I am a Ph.D. candidate at Stony Brook University, supervised by Dr. Ting Wang.
My research focuses on the safety and trustworthiness of Large Language Models (LLMs): identifying security challenges and developing defensive strategies to protect these models from adversarial threats.
In addition, I have extensive experience and strong interest in post-training, prompt engineering, inference optimization, LLM agents, and LLM alignment. My recent work includes:
GraphRAG under Fire
Jiacheng Liang, Yuhui Wang, Changjiang Li, Rongyi Zhu, Tanqiu Jiang, Neil Gong, Ting Wang
WaterPark: A Robustness Assessment of Language Model Watermarking
Jiacheng Liang, Zian Wang, Lauren Hong, Shouling Ji, Ting Wang
RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction
Tanqiu Jiang, Zian Wang, Jiacheng Liang, Changjiang Li, Yuhui Wang, Ting Wang
International Conference on Learning Representations (ICLR’25)
Model Extraction Attacks Revisited
Jiacheng Liang, Ren Pang, Changjiang Li, Ting Wang
Asia Conference on Computer and Communications Security (Asia CCS’24)
Data to Defense: The Role of Curation in Customizing LLMs Against Jailbreaking Attacks
Xiaoqun Liu*, Jiacheng Liang*, Luoxi Tang, Muchao Ye, Weicheng Ma, Zhaohan Xi
Teammates at ALPS-Lab: Ren Pang (Amazon), Zhaohan Xi (Binghamton University), Changjiang Li, Tanqiu Jiang, Zian Wang, Yuhui Wang, Rongyi Zhu
Previous Collaborators: Bochuan Cao (PSU), Qihua Zhou (SZU), Yanjing Ren (CUHK), Guoli Wei (USTC), Zicong Hong (HKUST), Jun Pan (HKPolyU)
Previous Advisors: Jingwei Li (UESTC), Song Guo (HKUST), Songze Li (SEU, HKUST)