知识花园 (Knowledge Garden)

The Knowledge Mining team $\sqsubseteq$ The Websoft research group


  - Xiangyu Liu, Haodi Lei, Yi Liu, Yang Liu, Wei Hu$^\star$.\\ ProtSAE: Disentangling and interpreting protein language models via semantically-guided sparse autoencoders.\\ In: //AAAI//, 2026.
  - Yi Liu, Xiangyu Liu, Zequn Sun, Wei Hu$^\star$.\\ Answering the unanswerable is to err knowingly: Analyzing and mitigating abstention failures in large reasoning models.\\ In: //AAAI//, 2026.
  - Jianhao Chen, Zishuo Xun$^\triangle$, Bocheng Zhou$^\triangle$, Han Qi$^\triangle$, Hangfan Zhang$^\triangle$, Qiaosheng Zhang$^\triangle$, Yang Chen$^\triangle$, Wei Hu, Yuzhong Qu$^\star$, Shuyue Hu$^{\triangle,\star}$.\\ Do we truly need so many samples? Multi-LLM repeated sampling efficiently scales test-time compute.\\ In: //AAAI//, 2026.
en/team/publications.txt · Last modified: 2025/12/15 17:30 by whu