知识花园 (Knowledge Garden)

The Knowledge Mining team $\sqsubseteq$ The Websoft research group

**Marks**: $^\star$//Corresponding author//; $^\dagger$//Equal contributor//; $^\triangle$//External collaborator//
  
==== 2026 ====

  - Ziqi Wang, Jingzhe Zhang, Wei Hu$^\star$.\\ WoW: A window-to-window incremental index for range-filtering approximate nearest neighbor search.\\ In: //SIGMOD//, 3(6):378, 2026. [[https://arxiv.org/abs/2508.18617|arXiv]] [[https://github.com/nju-websoft/WoW|GitHub]]
  
==== 2025 ====
  - Yang Liu, Zequn Sun$^\star$, Zhoutian Shao, Yuanning Cui, Wei Hu.\\ Are LLMs really knowledgeable for knowledge graph completion?\\ In: //ISWC (Resource Track)//, 2025. [[https://github.com/nju-websoft/ProbeKGC|GitHub]]
  - Zitao Wang, Xinyi Wang, Wei Hu$^\star$.\\ Mixture of LoRA experts for continual information extraction with LLMs.\\ In: //EMNLP Findings//, 2025.
  - Yi Liu, Xiangrong Zhu, Xiangyu Liu, Wei Wei, Wei Hu$^\star$.\\ Avoiding knowledge edit skipping in multi-hop question answering with guided decomposition.\\ In: //EMNLP Findings//, 2025. [[https://arxiv.org/abs/2509.07555|arXiv]]
  - Xinyi Wang, Xiangrong Zhu, Wei Hu$^\star$.\\ Evidence selection via multi-aspect query diversification for cross-document relation extraction.\\ Journal of Intelligent Information Systems, 2025. [[https://doi.org/10.1007/s10844-025-00952-6|Springer]]
  - Wei Hu, Zequn Sun.\\ Knowledge fusion.\\ In: //Handbook on Neurosymbolic AI and Knowledge Graphs//, 300--318, 2025. [[https://ebooks.iospress.nl/volume/handbook-on-neurosymbolic-ai-and-knowledge-graphs|IOS]]
  - Xinyi Wang, Wenzheng Zhao, Xiangrong Zhu, Wei Hu$^\star$.\\ Can ChatGPT solve relation extraction? An extensive assessment via design choice exploration.\\ In: //NLPCC//, 346--358, 2024.
  - Jianhao Chen, Haoyuan Ouyang, Junyang Ren, Wentao Ding, Wei Hu, Yuzhong Qu.\\ Timeline-based sentence decomposition with in-context learning for temporal fact extraction.\\ In: //ACL//, 3415--3432, 2024.
==== 2023 ====
  
  - Kexuan Xin$^\triangle$, Zequn Sun, Wen Hua$^{\triangle,\star}$, Wei Hu$^\star$, Xiaofang Zhou$^\triangle$.\\ Informed multi-context entity alignment.\\ In: //WSDM//, 1197--1205, 2022.
  - Kexuan Xin$^\triangle$, Zequn Sun, Wen Hua$^{\triangle,\star}$, Wei Hu, Jianfeng Qu$^\triangle$, Xiaofang Zhou$^\triangle$.\\ Large-scale entity alignment via knowledge graph merging, partitioning and embedding.\\ In: //CIKM//, 1197--1205, 2022.
==== 2021 ====
  