admarcosai's Collection: Reasoning | Planning
• Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation (arXiv:2310.18628)
• TeacherLM: Teaching to Fish Rather Than Giving the Fish, Language Modeling Likewise (arXiv:2310.19019)
• Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs (arXiv:2311.02262)
• Thread of Thought Unraveling Chaotic Contexts (arXiv:2311.08734)
• Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation (arXiv:2311.04254)
• Language Models can be Logical Solvers (arXiv:2311.06158)
• LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers (arXiv:2310.15164)
• UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations (arXiv:2311.08469)
• ToolTalk: Evaluating Tool-Usage in a Conversational Setting (arXiv:2311.10775)
• GPQA: A Graduate-Level Google-Proof Q&A Benchmark (arXiv:2311.12022)
• System 2 Attention (is something you might need too) (arXiv:2311.11829)
• Memory Augmented Language Models through Mixture of Word Experts (arXiv:2311.10768)
• Digital Socrates: Evaluating LLMs through explanation critiques (arXiv:2311.09613)
• PathFinder: Guided Search over Multi-Step Reasoning Paths (arXiv:2312.05180)
• Chain of Code: Reasoning with a Language Model-Augmented Code Emulator (arXiv:2312.04474)
• Modeling Complex Mathematical Reasoning via Large Language Model based MathAgent (arXiv:2312.08926)
• K-Level Reasoning with Large Language Models (arXiv:2402.01521)
• Large Language Models Are Neurosymbolic Reasoners (arXiv:2401.09334)
• On the Prospects of Incorporating Large Language Models (LLMs) in Automated Planning and Scheduling (APS) (arXiv:2401.02500)
• SymbolicAI: A framework for logic-based approaches combining generative models and solvers (arXiv:2402.00854)
• Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion (arXiv:2401.12947)
• Self-Discover: Large Language Models Self-Compose Reasoning Structures (arXiv:2402.03620)
• Premise Order Matters in Reasoning with Large Language Models (arXiv:2402.08939)
• Chain-of-Thought Reasoning Without Prompting (arXiv:2402.10200)
• GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements (arXiv:2402.10963)
• Are Your LLMs Capable of Stable Reasoning? (arXiv:2412.13147)
• Compressed Chain of Thought: Efficient Reasoning Through Dense Representations (arXiv:2412.13171)