NLPCC 2026 Shared Task 10: Reliability of AI-Assisted Scientific Reporting
Two complementary tracks: claim-level faithfulness to experimental results and citation-level faithfulness to external evidence.
Macau, China
November 3-5, 2026
Registration is now open for participating teams.
Registration

To register, please fill out the Shared Task Registration Form: https://ocn0wnz7cc7b.feishu.cn/share/base/form/shrcnLJgG87xZ808RqZxypMTq5b

Alternatively, you may send your team information to: nlp2ct.runzhe@gmail.com

Introduction

As generative AI and agentic AI become increasingly integrated into scientific workflows, they are now widely used to assist with scientific writing, including summarizing experimental results, drafting conclusions, and generating citation-supported statements.

Recent studies have shown that AI-assisted scientific reporting often overgeneralizes conclusions beyond what the source evidence justifies. This shared task is therefore scoped to the reporting layer of AI-assisted research and centers on a single question: given scientific evidence and an AI-generated scientific statement, can a system determine whether the statement faithfully reflects the evidence it summarizes or cites?

Tracks

Track 1: Claim-level faithfulness to experimental results

Systems are provided with a compact evidence bundle and an AI-generated claim paragraph segmented into individual sentences for evaluation. Participants are required to assign a label to each sentence, indicating whether it is supported by the evidence or, if not, what type of unsupported reporting it contains.

In scientific writing, unsupported reporting often appears as one or two problematic sentences embedded within an otherwise plausible paragraph. This track focuses on detecting such fine-grained reporting errors.
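The official data format will be specified in the task guidelines released with the training data. Purely as an illustration of the per-sentence labeling described above, a Track 1 prediction might be serialized as follows; all field names and label values here are hypothetical placeholders, not the official schema:

```python
import json

# Hypothetical Track 1 prediction: one label per sentence of the
# AI-generated claim paragraph. The label set below is a placeholder;
# the official labels will be defined in the task guidelines.
prediction = {
    "claim_id": "example-001",
    "sentence_labels": [
        {"sentence_id": 0, "label": "supported"},
        {"sentence_id": 1, "label": "overgeneralization"},  # an unsupported-reporting type
        {"sentence_id": 2, "label": "supported"},
    ],
}

print(json.dumps(prediction, indent=2))
```

The structure reflects the track design: a mostly plausible paragraph in which one or two sentences carry a fine-grained reporting error.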

Track 2: Citation-level faithfulness to external evidence

Systems are given an atomic AI-generated scientific claim and the full text of the cited paper in structured textual form. They must determine whether the paper directly supports the claim, partially supports it, is only topically related without providing evidential support, or is entirely irrelevant.

In addition, participants are required to submit a ranked list of evidence paragraph IDs so that evaluation captures not only labeling accuracy but also the ability to identify the relevant supporting evidence.
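Again as an illustration only, a Track 2 prediction combining the relation label with the required ranked evidence list might look like the following; the field names, label strings, and paragraph-ID format are hypothetical and subject to the official guidelines:

```python
import json

# Hypothetical Track 2 prediction: a claim-paper relation label plus a
# relevance-ranked list of evidence paragraph IDs (most relevant first).
# Label values are placeholders for the four relations described above:
# direct support, partial support, topical relation only, and irrelevant.
prediction = {
    "claim_id": "example-101",
    "label": "partially_supports",
    "evidence_ranking": ["p12", "p03", "p27"],
}

print(json.dumps(prediction, indent=2))
```

Submitting a ranked list rather than a single paragraph lets the evaluation score evidence localization as well as labeling accuracy.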

Tentative Schedule

  • March 20, 2026: Shared task announcement and call for participation
  • March 20, 2026: Registration opens
  • April 15, 2026: Release of detailed task guidelines and training data
  • May 25, 2026: Registration deadline
  • June 11, 2026: Test data release
  • June 20, 2026: Deadline for participants to submit results
  • June 30, 2026: Evaluation results released; call for system reports and conference papers


Organizers

This shared task is organized by the University of Macau.

  • Runzhe Zhan | University of Macau | nlp2ct.runzhe@gmail.com
  • Derek F. Wong | University of Macau
  • Yutong Yao | University of Macau
  • Junchao Wu | University of Macau
  • Jingkun Ma | University of Macau
  • Yanming Sun | University of Macau
  • Fengying Ye | University of Macau