no code implementations • ECCV 2020 • Zilong Ji, Xiaolong Zou, Xiaohan Lin, Xiao Liu, Tiejun Huang, Si Wu
By iteratively learning with the two strategies, the attentive regions are gradually shifted from the background to the foreground and the features become more discriminative.
no code implementations • 5 May 2024 • Xiaohan Lin, Qingxing Cao, Yinya Huang, Zhicheng Yang, Zhengying Liu, Zhenguo Li, Xiaodan Liang
We conduct extensive experiments to investigate whether current LMs can generate theorems in the library and whether the generated theorems benefit proving the problem theorems.
1 code implementation • 14 Feb 2024 • Yinya Huang, Xiaohan Lin, Zhengying Liu, Qingxing Cao, Huajian Xin, Haiming Wang, Zhenguo Li, Linqi Song, Xiaodan Liang
Recent large language models (LLMs) have achieved significant advances in various tasks, including mathematical reasoning and theorem proving.
no code implementations • 2 May 2023 • Jun Zhang, Xiaohan Lin, Weinan E, Yi Qin Gao
Multiscale molecular modeling is widely applied in the study of molecular properties over large time and length scales.
no code implementations • 16 Mar 2023 • Yupeng Huang, Hong Zhang, Siyuan Jiang, Dajiong Yue, Xiaohan Lin, Jun Zhang, Yi Qin Gao
In this study, we take advantage of both traditional and machine-learning-based methods and present Deep Site and Docking Pose (DSDP), a method to improve the performance of blind docking.
no code implementations • 28 Jul 2019 • Yuanyuan Mi, Xiaohan Lin, Xiaolong Zou, Zilong Ji, Tiejun Huang, Si Wu
Spatiotemporal information processing is fundamental to brain functions.