知识花园 (Knowledge Garden)

The Knowledge Mining team $\sqsubseteq$ The Websoft research group

2026
  
  - Ziqi Wang, Jingzhe Zhang, Wei Hu$^\star$.\\ WoW: A window-to-window incremental index for range-filtering approximate nearest neighbor search.\\ In: //SIGMOD//, 3(6):378, 2026. [[https://arxiv.org/abs/2508.18617|arXiv]] [[https://github.com/nju-websoft/WoW|GitHub]]
  - Yuhan Wu, Huan Zhang, Wei Cheng, Chen Shen, Jingyue Yang, Wei Hu$^\star$.\\ Bootstrapping code translation with weighted multilanguage exploration.\\ In: //ACL//, 2026.
  - Wei Cheng, Chen Shen, Huan Zhang, Yuhan Wu, Jingyue Yang, Wei Hu$^\star$.\\ BCaLLM: Call graph-guided Python breaking change detection with large language models.\\ In: //ISSTA//, 2026.
  - Xiangyu Liu, Haodi Lei, Yi Liu, Yang Liu, Wei Hu$^\star$.\\ ProtSAE: Disentangling and interpreting protein language models via semantically-guided sparse autoencoders.\\ In: //AAAI//, 2026.
  - Yi Liu, Xiangyu Liu, Zequn Sun, Wei Hu$^\star$.\\ Answering the unanswerable is to err knowingly: Analyzing and mitigating abstention failures in large reasoning models.\\ In: //AAAI//, 2026.
  - Huan Zhang, Wei Cheng, Wei Hu$^\star$.\\ Self-improving code generation via semantic entropy and behavioral consensus.\\ In: //ICPC//, 2026.
  - Haoyang Chen, Yi Liu, Chenyang Li, Wei Hu$^\star$.\\ Decomposing complexity: A difficulty-aware multi-agent framework for open-domain knowledge graph construction.\\ In: //DASFAA//, 2026.
  - Haoyang Chen, Yi Liu, Jianzhi Shao$^\S$, Tao Zhang$^\S$, Chengfu Huo$^\S$, Wei Hu$^\star$.\\ How do answer tokens read reasoning traces? Self-reading patterns in thinking LLMs for quantitative reasoning.\\ In: //Findings of ACL//, 2026.
  - Wei Cheng, Yongchang Cao$^\S$, Chen Shen, Binhua Li$^\S$, Jue Chen$^\S$, Yongbin Li$^{\S,\star}$, Wei Hu$^\star$.\\ To diff or not to diff? Structure-aware and adaptive output formats for efficient LLM-based code editing.\\ In: //Findings of ACL//, 2026.
  - Jianhao Chen, Zishuo Xun$^\S$, Bocheng Zhou$^\S$, Han Qi$^\S$, Hangfan Zhang$^\S$, Qiaosheng Zhang$^\S$, Yang Chen$^\S$, Wei Hu, Yuzhong Qu$^\star$, Shuyue Hu$^{\S,\star}$.\\ Do we truly need so many samples? Multi-LLM repeated sampling efficiently scales test-time compute.\\ In: //AAAI//, 2026.
  - Yuheng Bao, Wenhao Zhou, Xuan Wu, Wei Hu, Dingkun Xu, Mingjia Qian, Yuzhong Qu$^\star$.\\ SQA: SPARQL query annotating with question-answer pairs.\\ In: //ESWC//, 2026.
  
  
en/team/publications · Last modified: 2026/04/16 21:15 by whu