This is LCM-Lab, an open-source research team within the OpenNLG Group focusing on long-sequence modeling and optimization. Below is a list of our work; feel free to explore!
If you have any questions about the code or paper details, please open an issue or contact us at [email protected].
-
LongRM: Pushing the limits of reward modeling beyond 128K tokens
Zecheng Tang, Baibei Ji, Quantong Qiu, Haitian Wang, Xiaobo Liang, Juntao Li, Min Zhang.
-
MemoryRewardBench: Benchmarking Reward Models for Long-Term Memory Management in Large Language Models
Zecheng Tang, Baibei Ji, Ruoxi Sun, Haitian Wang, Wangjie You, Yijun Zhang, Wenpeng Zhu, Ji Qi, Juntao Li, Min Zhang.
-
LOOM-Eval: A comprehensive and efficient framework for long-context model evaluation
Zecheng Tang, Haitian Wang, Quantong Qiu, Baibei Ji, Ruoxi Sun, Keyan Zhou, Juntao Li, Min Zhang.
-
L-CiteEval (ACL 2025): A faithfulness-oriented benchmark for long-context citation
Zecheng Tang, Keyan Zhou, Juntao Li, Baibei Ji, Jianye Hou, Min Zhang.
-
MMLongCite: A Benchmark for Evaluating Fidelity of Long-Context Vision-Language Models
Keyan Zhou, Zecheng Tang, Lingfeng Ming, Guanghao Zhou, Qiguang Chen, Dan Qiao, Zheming Yang, Libo Qin, Minghui Qiu, Juntao Li, Min Zhang.
-
Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers
Zecheng Tang, Quantong Qiu, Yi Yang, Zhiyi Hong, Haiya Xiang, Kebin Liu, Qingqing Dang, Juntao Li, Min Zhang.
-
CDT (ICLR 2026): Context Denoising Training for Long-Context Modeling
Zecheng Tang, Baibei Ji, Juntao Li, Lijun Wu, Haijia Gui, Min Zhang.
-
LOGO (ICML 2025): Long cOntext aliGnment via efficient preference Optimization
Zecheng Tang, Zechen Sun, Juntao Li, Qiaoming Zhu, Min Zhang.
-
Global-Mamba (ACL 2025): Efficient long-context modeling architecture
Wangjie You, Zecheng Tang, Juntao Li, Lili Yao, Min Zhang.