Dataset: Text · parquet · < 1K rows

Commit 6b1842f (verified) by CoCoOne · 1 parent: a672324

Sync README and assets with InternScience/SGI-DeepResearch
README.md CHANGED
@@ -27,3 +27,280 @@ configs:
  - split: test
  path: data/test-*
  ---

<div align="center">
<h1>Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows</h1>
</div>

<div align="center">

[![Official Site](https://img.shields.io/badge/Official%20Site-333399.svg?logo=homepage)](https://internscience.github.io/SGI-Page/)&#160;
<a href="https://arxiv.org/pdf/2512.16969" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" height="21px"></a>
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-gray)](https://huggingface.co/collections/InternScience/sgi-bench)&#160;
[![GitHub](https://img.shields.io/badge/GitHub-000000?logo=github&logoColor=white)](https://github.com/InternScience/SGI-Bench)&#160;

Welcome to the official repository for SGI-Bench! 👏

</div>

<p align="center">
<img src="assets/teaser.png" alt="SGI Overview" width="850">
</p>

SGI-Bench is a scientist-aligned benchmark for evaluating Scientific General Intelligence (SGI) across the full inquiry cycle: Deliberation, Conception, Action, and Perception. The benchmark spans 10 disciplines and more than 1,000 expert‑curated samples inspired by Science’s 125 Big Questions, with an agentic evaluation framework and a multi‑metric protocol.

---

## 🆕 Latest News

🚩 **Update** (2025-12-22) We release the SGI-Bench [paper](https://arxiv.org/pdf/2512.16969) on arXiv.

🚩 **Update** (2025-12-19) SGI-Bench is adapted to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit/pull/1358) and [SciEvalKit](https://github.com/InternScience/SciEvalKit), both highly efficient and comprehensive evaluation toolkits.

🎤 **Talk** (2025-12-18) We were invited to give a talk on *large language model evaluation* at the [AI Insight Talk](https://www.bilibili.com/video/BV16yqdBnE82/?share_source=copy_web&vd_source=7b9d898a8c3bbebf65c411956ed7f8ce) jointly organized by [OpenMMLab](https://openmmlab.com/), [Zhihu](https://www.zhihu.com/), and [ModelScope](https://www.modelscope.cn/).

🚩 **Update** (2025-12-12) We evaluate the newly released `GPT-5.2-Pro` on SGI-Bench.

<details>
<summary>👉 More News (Click to expand)</summary>

🚩 **Update** (2025-12-10) We update the paper [PDF](https://internscience.github.io/SGI-Page/paper.pdf) on the project page.

🚩 **Update** (2025-12-03) We officially release the [data](https://huggingface.co/collections/InternScience/sgi-bench) and [code](https://github.com/InternScience/SGI-Bench) of SGI-Bench.
</details>

---

## 🔬 What is Scientific General Intelligence (SGI)?
SGI denotes an AI system that can autonomously navigate the full, iterative cycle of scientific inquiry—Deliberation, Conception, Action, and Perception—with the versatility and proficiency of a human scientist. SGI‑Bench operationalizes this definition via four scientist‑aligned task families: scientific deep research, idea generation, dry/wet experiments, and multimodal experimental reasoning.

---

## 🎯 Framework & Tasks

<p align="center">
<img src="assets/pipeline.png" alt="SGI-Bench Pipeline" width="850">
</p>

- **Deliberation (Scientific Deep Research)**: Multi‑hop retrieval, synthesis, and meta‑analysis‑style reasoning.
- **Conception (Idea Generation)**: Structured ideation and multi‑dimensional comparative evaluation.
- **Action (Dry/Wet Experiment)**: Code generation, lab protocol development, and verification.
- **Perception (Experimental Reasoning)**: Reasoning over process, observation, simulation, experiment, and visualization images.

Grounded in the Practical Inquiry Model (PIM), SGI‑Bench treats science as an iterative cycle linking deliberation, conception, action, and perception. Under this lens, SGI captures the capacity to integrate knowledge retrieval, idea formation, action execution, and interpretation into a unified loop of inquiry.

---

## 📂 Scientist‑Aligned Data Construction

<p align="center">
<img src="assets/subjects.png" alt="Scientist-Aligned Data Construction" width="850">
</p>

- **Raw Corpus**: Expert‑curated texts and images across 10 domains, inspired by Science’s 125 Big Questions.
- **Question Construction**: 100+ Master's and PhD holders with continuous expert‑in‑the‑loop review.
- **Data Cleaning**: Rules, model checks, and expert QA to ensure executability and unique answers.
- **Difficulty Filtering**: Removes samples solved by more than 50% of strong LLMs to maintain high challenge.

Result: High‑fidelity, scientist‑aligned tasks that are authentic, challenging, and broadly representative.
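The difficulty-filtering rule above can be sketched in a few lines. This is a hypothetical illustration only (the function and data names are invented here, not the repo's actual pipeline): each sample carries a solve/fail record from a panel of strong models, and any sample with a solve rate above 50% is dropped.

```python
# Hypothetical sketch of the difficulty filter: drop any sample that more
# than half of a panel of strong models already solves.
def filter_hard_samples(samples, solve_records, threshold=0.5):
    """samples: list of sample ids; solve_records: {sample_id: [bool per model]}."""
    kept = []
    for sid in samples:
        results = solve_records[sid]
        solve_rate = sum(results) / len(results)
        if solve_rate <= threshold:  # keep only samples most models fail
            kept.append(sid)
    return kept

# q1 is solved by 3 of 4 models (too easy); q2 by 1 of 4 (kept)
records = {"q1": [True, True, False, True], "q2": [False, True, False, False]}
hard = filter_hard_samples(["q1", "q2"], records)
```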

---

## 💯 Agentic Evaluation Framework

<p align="center">
<img src="assets/evaluation-framework.png" alt="Agentic Evaluation Framework" width="850">
</p>

- **Four Stages**: Question Selection → Metric Customization → Predict & Eval → Report Generation
- **Tool Pool**: Web search, PDF parser, Python interpreter, file reader, metric functions
- **Task Metrics**: EM/SLA; Implementation Similarity; PassAll@k/SER; MCA/RV
- **Customizable**: Add scientist‑aligned metrics (e.g., rigor, feasibility) on demand

This agent‑based stack formalizes scoring into traceable stages, improves reproducibility, mitigates evaluator–model coupling bias, and yields actionable, scientist‑aligned insights.
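To make two of the metric families above concrete, here is a minimal sketch of an exact-match (EM) scorer and a PassAll@k-style check. These are illustrative assumptions on our part; SGI-Bench's actual metric functions live in the GitHub repository and may normalize or aggregate differently.

```python
# Hypothetical sketches of two metric families named above.
def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 iff the whitespace/case-normalized prediction equals the reference."""
    norm = lambda s: " ".join(s.lower().split())
    return float(norm(prediction) == norm(reference))

def pass_all_at_k(attempts: list[list[bool]]) -> float:
    """PassAll@k-style check: an attempt counts only if *all* of its unit
    tests pass; the score is the fraction of fully passing attempts."""
    passed = [all(tests) for tests in attempts]
    return sum(passed) / len(passed)

em = exact_match("  Photosynthesis ", "photosynthesis")          # 1.0
pk = pass_all_at_k([[True, True], [True, False], [True, True]])  # 2 of 3 attempts
```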

---

## 🚀 Test‑Time Reinforcement Learning (TTRL)

<p align="center">
<img src="assets/grpo_reward_curves.png" alt="TTRL Training Dynamics" width="850">
</p>

- **Objective**: Address idea generation's lack of ground truth by optimizing novelty at test time, with online retrieval serving as a moving baseline.
- **Reward Design**: `R = R_format + R_novelty`. The format term enforces strict XML structure (e.g., `<think>`, `<answer>`); the novelty term rewards embedding dissimilarity from retrieved prior works, gated by thresholds.
- **Setup**: GRPO on Qwen3‑8B (ms‑swift), G=8, high temperature, bfloat16, online retrieval n=4.
- **Dynamics**: The format reward saturates quickly while novelty steadily increases; average novelty improved from 49.36 to 62.06 without any labels.

TTRL converts open‑ended ideation into measurable test‑time optimization and extends to multi‑objective rewards (rigor, feasibility, safety, cost).
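The reward `R = R_format + R_novelty` can be sketched as follows. This is a simplified reading of the description above, assuming cosine similarity over precomputed embeddings; the helper names and the exact gating threshold are our assumptions, not the repo's API.

```python
import math
import re

def format_reward(text: str) -> float:
    """R_format: 1.0 iff the response follows the <think>...</think><answer>...</answer> structure."""
    ok = bool(re.fullmatch(r"<think>.*</think>\s*<answer>.*</answer>", text, re.S))
    return float(ok)

def novelty_reward(idea_emb, retrieved_embs, threshold=0.8):
    """R_novelty: dissimilarity from the closest retrieved work, gated to 0
    when the idea is too similar to prior work (threshold is an assumption)."""
    cos = lambda a, b: sum(x * y for x, y in zip(a, b)) / (
        math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
    max_sim = max(cos(idea_emb, e) for e in retrieved_embs)
    return 0.0 if max_sim > threshold else 1.0 - max_sim

resp = "<think>compare baselines</think><answer>a new probe design</answer>"
# total reward: format term (1.0) + novelty term vs. two retrieved works
r = format_reward(resp) + novelty_reward([1.0, 0.0], [[0.0, 1.0], [0.6, 0.8]])
```

In GRPO, this scalar reward would score each of the G sampled rollouts per prompt; online retrieval refreshes `retrieved_embs`, which is what makes the novelty baseline "moving."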

---

## 🏆 Leaderboard Highlights

| Model | Deep Research | Idea Generation | Dry Experiment | Wet Experiment | Experimental Reasoning | SGI-Score |
| --------------------- | ------------: | --------------: | -------------: | -------------: | ---------------------: | --------: |
| Gemini-3-Pro 🥇 | **18.48** | 39.68 | **36.64** | 32.45 | **41.92** | **33.83** |
| Claude-Sonnet-4.5 🥈 | 13.84 | 43.20 | 35.79 | 30.15 | 37.80 | 32.16 |
| Qwen3-Max 🥉 | 15.38 | 39.83 | 33.21 | 33.62 | 37.80 | 31.97 |
| GPT-4.1 | 11.32 | 36.49 | 34.32 | **36.63** | 38.49 | 31.45 |
| GPT-5.2-Pro | 15.72 | 55.03 | 28.04 | 17.50 | 39.18 | 31.09 |
| GPT-5 | 14.47 | **55.40** | 29.89 | 16.31 | 38.14 | 30.84 |
| o3 | 12.89 | 46.07 | 31.73 | 30.04 | 32.65 | 30.68 |
| Claude-Opus-4.1 | 12.93 | 40.29 | 34.69 | 25.38 | 38.83 | 30.42 |
| o4-mini | 11.95 | 40.78 | 35.79 | 28.86 | 33.33 | 30.14 |
| GPT-5.1 | 11.64 | 47.12 | 31.00 | 22.77 | 34.02 | 29.31 |
| Grok-4 | 13.31 | 37.12 | 33.71 | 29.01 | 30.24 | 28.68 |
| Qwen3-VL-235B-A22B | 11.97 | 39.28 | 28.41 | 30.30 | 31.62 | 28.32 |
| Gemini-2.5-Pro | 15.09 | 39.95 | 22.51 | 22.05 | 41.24 | 28.17 |
| Intern-S1 | 15.74 | 38.09 | 28.79 | 29.02 | 28.87 | 28.10 |
| GPT-4o | 7.86 | 35.95 | 26.94 | 31.31 | 32.30 | 26.87 |
| Gemini-2.5-Flash | 10.69 | 39.13 | 21.03 | 18.55 | 34.36 | 24.75 |
| Llama-4-Scout | 7.86 | 29.72 | 20.37 | 21.66 | 25.77 | 21.08 |
| Qwen3-8B | 8.18 | 35.78 | 18.45 | 9.96 | 23.37 | 19.15 |
| Intern-S1-mini | 11.06 | 36.04 | 16.97 | 12.42 | 16.84 | 18.67 |

---

## 🔥 Quick Start

```bash
git clone https://github.com/InternScience/SGI-Bench.git
cd SGI-Bench/evaluation

export OPENAI_API_KEY="xxxxx"
export OPENAI_BASE_URL="xxxxx"

conda create -n sgi python=3.13.7
conda activate sgi
pip install -r requirements.txt
```

### 📚 Task 1 Deep Research

```bash
conda activate sgi
python task_1_deep_research/step_1_get_answer.py gpt-5.2-pro
python task_1_deep_research/step_2_score.py gpt-5.2-pro
```

### 💡 Task 2 Idea Generation

1. Install the environment dependencies for evaluating idea generation.

```bash
conda create -n idea python=3.10.18
conda activate idea
pip install -r task_2_idea_generation/idea_generation_requirements.txt
```

2. Start the evaluation.

```bash
conda activate idea
python task_2_idea_generation/step_1_get_answer.py gpt-5.2-pro
python task_2_idea_generation/step_2_score.py gpt-5.2-pro
```

### 🖥️ Task 3.1 Dry Experiment (Code Generation)

1. Install the environment dependencies for running the dry experiment code.

```bash
conda create -n dryexp python=3.10.18
conda activate dryexp
pip install -r task_3_dry_experiment/dry_experiment_requirements.txt
```

2. Create the code folders and initialize the data (this only needs to be run once).

```bash
conda activate sgi
python task_3_dry_experiment/step_1_build.py
```

> Note: If some scripts time out during execution, enter the corresponding folder and run the script manually to complete the data initialization.

3. Start the evaluation.

```bash
conda activate sgi
python task_3_dry_experiment/step_2_get_answer.py gpt-5.2-pro
python task_3_dry_experiment/step_3_run_code.py gpt-5.2-pro
python task_3_dry_experiment/step_4_score.py gpt-5.2-pro
```

### 🧪 Task 3.2 Wet Experiment (Lab Protocol)

```bash
conda activate sgi
python task_3_wet_experiment/step_1_get_answer.py gpt-5.2-pro
python task_3_wet_experiment/step_2_score.py gpt-5.2-pro
```

### 📊 Task 4 Experimental Reasoning

```bash
conda activate sgi
python task_4_experimental_reasoning/step_1_get_answer.py gpt-5.2-pro
python task_4_experimental_reasoning/step_2_score.py gpt-5.2-pro
```

### 💎 SGI-Score

```bash
conda activate sgi
python sgi_score.py gpt-5.2-pro
```
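For intuition about what `sgi_score.py` produces: the leaderboard's SGI-Score column is consistent with a plain mean of the five task scores (e.g., Gemini-3-Pro's five scores average to 33.83). The sketch below assumes that simple mean; the function name and the equal weighting are our assumptions, and the script in the repo remains the authoritative implementation.

```python
# Illustrative aggregation only: assumes SGI-Score is the unweighted mean
# of the five task scores, which matches the leaderboard values above.
def sgi_score(task_scores: dict[str, float]) -> float:
    expected = ["deep_research", "idea_generation", "dry_experiment",
                "wet_experiment", "experimental_reasoning"]
    missing = set(expected) - task_scores.keys()
    if missing:
        raise ValueError(f"missing task scores: {sorted(missing)}")
    return sum(task_scores[t] for t in expected) / len(expected)

# Gemini-3-Pro's row from the leaderboard
gemini_3_pro = {"deep_research": 18.48, "idea_generation": 39.68,
                "dry_experiment": 36.64, "wet_experiment": 32.45,
                "experimental_reasoning": 41.92}
score = round(sgi_score(gemini_3_pro), 2)  # reproduces the table's 33.83
```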

---

## 📬 Contact Us

- 💬 **GitHub Issues**: Please open an issue for bug reports or feature requests

- 📧 **Email**: xu_wanghan@sjtu.edu.cn ([homepage](https://black-yt.github.io/))

- 🤝 **Community**:

<p align="center">
<img src="https://raw.githubusercontent.com/InternScience/SGI-Bench/main/assets/wechat.jpg" alt="WeChat" width="200">
</p>

---

## 📜 Citation

If you would like to cite our work, please use the following BibTeX.

```bib
@article{xu2025probing,
  title={Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows},
  author={Xu, Wanghan and Zhou, Yuhao and Zhou, Yifan and Cao, Qinglong and Li, Shuo and Bu, Jia and Liu, Bo and Chen, Yixin and He, Xuming and Zhao, Xiangyu and others},
  journal={arXiv preprint arXiv:2512.16969},
  year={2025}
}
```

---

## 🌟 Star History

If you find this work helpful, please consider **starring⭐** this [repo](https://github.com/InternScience/SGI-Bench). Thanks for your support! 🤩

[![InternScience/SGI-Bench Stargazers](https://reporoster.com/stars/InternScience/SGI-Bench)](https://github.com/InternScience/SGI-Bench/stargazers)

[![Star History Chart](https://api.star-history.com/svg?repos=InternScience/SGI-Bench,TIGER-AI-Lab/MMLU-Pro,MMMU-Benchmark/MMMU,idavidrein/gpqa,SuperGPQA/SuperGPQA&type=date&legend=top-left)](https://www.star-history.com/#InternScience/SGI-Bench&TIGER-AI-Lab/MMLU-Pro&MMMU-Benchmark/MMMU&idavidrein/gpqa&SuperGPQA/SuperGPQA&type=date&legend=top-left)

<p align="right"><a href="#top">🔝Back to top</a></p>
assets/evaluation-framework.png ADDED (Git LFS: SHA256 fb573647ed1aeabf9dcbac5e2af23b29e9a940de873d67067643e30b5215f7e9, 365 kB)

assets/grpo_reward_curves.png ADDED (Git LFS: SHA256 f49dde29b816c9bd1d52d5ac55ddcbad9cc09c0ed35231cbaa9786cd274c5dea, 160 kB)

assets/pipeline.png ADDED (Git LFS: SHA256 14a38d31c7e13a5e8f3b73a8d854f28ec6f7499db6685aff12c7d8d40d02b94a, 952 kB)

assets/subjects.png ADDED (Git LFS: SHA256 681e170d9a0d70c076c8b299b1754f40998143a4beee1202a540ba241b1f694f, 1.89 MB)

assets/teaser.png ADDED (Git LFS: SHA256 6b9b393b53006a6276433b0c966183b9083474adcbadd87e1d3aa52549411019, 2 MB)