Tianle Gu (顾天乐)

Graduate Student @ IIGroup, Intern @ ShLab


"理想主义者 / 一个尚未有作品的诗人"

"An idealist / a poet without works yet."

-- Qingyun Zhan (詹青云)

I am currently a master’s student in Big Data Engineering at Tsinghua University, advised by Prof. Yujiu Yang, and a research intern at Shanghai Artificial Intelligence Laboratory. I received my Bachelor’s degree in Computer Science and Technology from Hunan University.

My research focuses on the safety, alignment, and interpretability of large (multimodal) language models. I have worked on topics such as MLLM safety evaluation, LLM unlearning, and LLM watermarking.

My long-term vision is to solve real problems through research that is minimal in design, grounded in theory, and genuinely useful in practice.

My Safety Stack
Layer 1 — Data Safety

Safety at the training distribution level.

Layer 2 — Representation Safety

Safety encoded in internal activations and geometry.

Layer 3 — Behavioral Alignment

Observable model behavior under safety policies.

Layer 4 — System & Agent Safety

Safety in tool use, environment interaction, and deployment constraints.

Selected Publications

  1. NeurIPS
    MLLMGUARD: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models
    Tianle Gu, Zeyang Zhou, Kexin Huang, and 8 more authors
    In Proceedings of the 38th International Conference on Neural Information Processing Systems, 2024
  2. ACL (Findings)
    From Evasion to Concealment: Stealthy Knowledge Unlearning for LLMs
    Tianle Gu, Kexin Huang, Ruilin Luo, and 5 more authors
    In Findings of the Association for Computational Linguistics: ACL 2025, Jul 2025
  3. MorphMark: Flexible Adaptive Watermarking for Large Language Models
    Zongqi Wang, Tianle Gu, Baoyuan Wu, and 1 more author
    arXiv preprint arXiv:2505.11541, 2025
  4. EMNLP (Oral)
    Invisible Entropy: Towards Safe and Efficient Low-Entropy LLM Watermarking
    Tianle Gu, Zongqi Wang, Kexin Huang, and 4 more authors
    arXiv preprint arXiv:2505.14112, 2025
  5. Benchmarking Large Language Models under Data Contamination: A Survey from Static to Dynamic Evaluation
    Simin Chen, Yiming Chen, Zexin Li, and 8 more authors
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025