Dataset columns:
- venue: string, 17 distinct values
- paper_openreview_id: string, length 9 to 13
- title: string, length 4 to 192
- abstract: string, length 2 to 4.99k
- paper_decision: string, 41 distinct values
- paper_pdf_link: string, length 31 to 63
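Each record below lists these fields in order: venue, paper_openreview_id, title, abstract (truncated), paper_decision, and paper_pdf_link. The snippet below is a minimal sketch of how such records could be loaded and filtered programmatically, assuming the collection is published as a Hugging Face dataset; the repository id and the filter values are illustrative placeholders taken from the records shown here, not confirmed details.

```python
# Minimal sketch: load the records and keep accepted ICLR 2025 papers.
# The repository id "example-org/openreview-iclr-papers" is a placeholder,
# not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("example-org/openreview-iclr-papers", split="train")

# Accepted ICLR 2025 decisions appear as "ICLR 2025 Poster/Spotlight/Oral";
# rejections appear as "Rejected_Submission" or "Desk_Rejected_Submission".
accepted_2025 = ds.filter(
    lambda row: row["venue"] == "ICLR.cc/2025/Conference"
    and row["paper_decision"].startswith("ICLR 2025")
)

for row in accepted_2025.select(range(3)):
    print(row["paper_openreview_id"], row["title"], row["paper_pdf_link"])
```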
ICLR.cc/2025/Conference
gz8Rr1iuDK
Geometric and Physical Constraints Synergistically Improve Neural PDE Integration
Neural PDE surrogates can improve on cost-accuracy tradeoffs of classical solvers, but often generalize poorly to new initial conditions and accumulate errors over time. To close the performance gap between training and long-term inference, we constrain neural surrogates with symmetry equivariance and physical conservatio...
Rejected_Submission
/pdf/8b8f83d5b7436a41c9e60bb448f12faf90c684c5.pdf
ICLR.cc/2025/Conference
dgR6i4TSng
Quantum-PEFT: Ultra parameter-efficient fine-tuning
This paper introduces Quantum-PEFT that leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter efficient _quantum unitary parameterization_. With the use o...
ICLR 2025 Poster
/pdf/3007fb7ad498e8cf02a02e31c8b7b737d81ec792.pdf
ICLR.cc/2024/Conference
ky2JYPKkml
Towards Explainable and Efficient Multi-Modality Learning: Domain-Agnostic Concept Space Paired with Domain-Specific Projection Models
In an effort to create a more explainable AI system, we introduce a novel multi-modality learning framework in this study. This framework leverages a domain-agnostic concept space designed to be transparent and interpretable and a set of domain-specific projection models tailored to process distinct modality inputs and...
Rejected_Submission
/pdf/2bc2dcc4b9c0a38e67fce6e3933378f48989f422.pdf
ICLR.cc/2024/Conference
DQTxr8JtPX
Detecting Influence Structures in Multi-Agent Reinforcement Learning
We consider the problem of quantifying the amount of influence one agent can exert on another in the setting of multi-agent reinforcement learning (MARL). As a step towards a unified approach to express agents' interdependencies, we introduce the total and state influence measurement functions. Both of these are valid...
Rejected_Submission
/pdf/21f97843e972b03c8c2795f09248905af16ee51f.pdf
ICLR.cc/2024/Conference
zeobgjmUCc
Using Machine Learning Models to Predict Genitourinary Involvement Among Gastrointestinal Stromal Tumour Patients
Gastrointestinal stromal tumors (GISTs) can lead to involvement of other organs, including the genitourinary (GU) system. Machine learning may be a valuable tool in predicting GU involvement in GIST patients, and thus improving prognosis. This study aims to evaluate the use of machine learning algorithms to predict GU ...
Rejected_Submission
/pdf/fdb01852b05d7c4c57825a6c86900f4c1174776d.pdf
ICLR.cc/2025/Conference
AKsfpHc9sN
Alignment-Aware Model Extraction Attacks on Large Language Models
Model extraction attacks (MEAs) on large language models (LLMs) have received increasing attention in recent research. However, existing attack methods typically adapt the extraction strategies originally developed for deep neural networks (DNNs). They neglect the underlying inconsistency between the training tasks of ...
Rejected_Submission
/pdf/6ac60234efc3414007e613608d267870dfc4ea18.pdf
ICLR.cc/2025/Conference
1qP3lsatCR
NetMoE: Accelerating MoE Training through Dynamic Sample Placement
Mixture of Experts (MoE) is a widely used technique to expand model sizes for better model quality while maintaining the computation cost constant. In a nutshell, an MoE model consists of multiple experts in each model layer and routes the training tokens to only a fixed number of experts rather than all. In distribute...
ICLR 2025 Spotlight
/pdf/b92193e3eb230e379ba2c07078799b70a54ecc40.pdf
ICLR.cc/2024/Conference
8gZtt8nrpI
Diffusion Models With Learned Adaptive Noise Processes
Diffusion models have gained traction as powerful algorithms for synthesizing high-quality images. Central to these algorithms is the diffusion process, which maps data to noise according to equations inspired by thermodynamics, and which can significantly impact performance. In this work, we explore whether a diffusio...
Rejected_Submission
/pdf/7a923bd0a05a5d408ec7f4541f2398e7d3a5776f.pdf
ICLR.cc/2024/Conference
dsd04MYKax
Sum-of-Parts Models: Faithful Attributions for Groups of Features
An explanation of a machine learning model is considered "faithful" if it accurately reflects the model's decision-making process. However, explanations such as feature attributions for deep learning are not guaranteed to be faithful, and can produce potentially misleading interpretations. In this work, we develop Sum-...
Rejected_Submission
/pdf/13c5eb197bc773cf05aa1ccfe7b43fec6f4ab47e.pdf
ICLR.cc/2024/Conference
UCfz492fM8
CrossLoco: Human Motion Driven Control of Legged Robots via Guided Unsupervised Reinforcement Learning
Human motion driven control (HMDC) is an effective approach for generating natural and compelling robot motions while preserving high-level semantics. However, establishing the correspondence between humans and robots with different body structures is not straightforward due to the mismatches in kinematics and dynamics...
ICLR 2024 poster
/pdf/5dc2d26064e3720009a09cd274c6ff48e2c64fd2.pdf
ICLR.cc/2025/Conference
rlgplAuN2p
OCEAN: Offline Chain-of-thought Evaluation and Alignment in Large Language Models
Offline evaluation of LLMs is crucial in understanding their capacities, though current methods remain underexplored in existing research. In this work, we focus on the offline evaluation of the chain-of-thought capabilities and show how to optimize LLMs based on the proposed evaluation method. To enable offline feedba...
ICLR 2025 Poster
/pdf/c7da56491c73e5c2aa992a5fd538e0b3d04bcaca.pdf
ICLR.cc/2024/Conference
HCCkCjClO0
Online Weight Approximation for Continual Learning
Continual Learning primarily focuses on studying learning scenarios that challenge a learner’s capacity to adapt to new problems, while reducing the loss of previously acquired knowledge. This work addresses challenges arising when training a deep neural network across numerous tasks. We propose an Online Weight Approx...
Rejected_Submission
/pdf/74318d759c907d798296f4c7926d335bd003462b.pdf
ICLR.cc/2025/Conference
2NqrA1wYi6
Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
The incorporation of memory into agents is essential for numerous tasks within the domain of Reinforcement Learning (RL). In particular, memory is paramount for tasks that require the utilization of past information, adaptation to novel environments, and improved sample efficiency. However, the term ``memory'' encompas...
Rejected_Submission
/pdf/08af374d83f12433a201b74045234edb52f5050f.pdf
ICLR.cc/2024/Conference
rIt0sJsZw9
Clustering Entity Specific Embeddings Towards a Prescribed Distribution
Now ubiquitous in deep learning is the transformer architecture, which has advanced the state-of-the-art (SOTA) in a variety of disciplines. When employed with a bidirectional attention mask, a special [CLS] token is often appended to the sequence being processed, serving as a summary of the sequence as a whole once pr...
Rejected_Submission
/pdf/387619b6470834c695fbde9c1b65eba01adf4b34.pdf
ICLR.cc/2025/Conference
xlxGsX1pc7
U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs
The current evaluation of mathematical skills in LLMs is limited, as existing benchmarks are relatively small, primarily focus on elementary and high-school problems, or lack diversity in topics. Additionally, the inclusion of visual elements in tasks remains largely under-explored. To address these gaps, we introduc...
Rejected_Submission
/pdf/675ef035c768c97a4711078fbf30a6c5263b25a0.pdf
ICLR.cc/2025/Conference
v5BouOktUP
Multivariate Time-series Forecasting with SPACE: Series Prediction Augmented by Causality Estimation
The analysis of multivariate time series (MTS) presents a complex yet crucial task with substantial applications in areas such as weather forecasting, policy formulation, and stock market prediction. It is important to highlight three key characteristics of MTS that contribute to the challenging and multifaceted nature...
Rejected_Submission
/pdf/fd62db3a4e937fb3bc598cbae54038fef716c955.pdf
ICLR.cc/2025/Conference
pDDODPtpx9
Distribution-Free Data Uncertainty for Neural Network Regression
Quantifying uncertainty is an essential part of predictive modeling, especially in the context of high-stakes decision-making. While classification output includes data uncertainty by design in the form of class probabilities, the regression task generally aims only to predict the expected value of the target variable....
ICLR 2025 Poster
/pdf/763c2430cbe272fedf9975b8a70e6635341bdd56.pdf
ICLR.cc/2025/Conference
VAvZ4oinpa
Video Generation with Learned Action Prior
Long-term stochastic video generation remains challenging, especially with moving cameras. This scenario introduces complex interactions between camera movement and observed pixels, resulting in intricate spatio-temporal dynamics and partial observability issues. Current approaches often focus on pixel-level image reco...
Rejected_Submission
/pdf/7e0cbd10c17c4802fbb1a9110ab3deec2d204710.pdf
ICLR.cc/2024/Conference
aiPcdCFmYy
Sinkhorn Distributional Reinforcement Learning
The empirical success of distributional reinforcement learning~(RL) highly depends on the representation of return distributions and the choice of distribution divergence. In this paper, we propose \textit{Sinkhorn distributional RL~(SinkhornDRL)} algorithm that learns unrestricted statistics, i.e., deterministic sampl...
Rejected_Submission
/pdf/4e32cd2066a29a25c979fb31c1143e26862f889c.pdf
ICLR.cc/2024/Conference
LY3ukUANko
Zoology: Measuring and Improving Recall in Efficient Language Models
Attention-free language models that combine gating and convolutions are growing in popularity due to their efficiency and increasingly competitive performance. To better understand these architectures, we pretrain a suite of 17 attention and gated-convolution language models, finding that SoTA gated-convolution archite...
ICLR 2024 poster
/pdf/80fadc4fc3c3c1a12600b7f7e0e3d13e04ac334b.pdf
ICLR.cc/2025/Conference
du9reSRIo1
RouteFinder: Towards Foundation Models for Vehicle Routing Problems
This paper introduces RouteFinder, a comprehensive foundation model framework to tackle different Vehicle Routing Problem (VRP) variants. Our core idea is that a foundation model for VRPs should be able to represent variants by treating each as a subset of a generalized problem equipped with different attributes. We pr...
Rejected_Submission
/pdf/301514926e65171a033cf7aa02ecc7b8e707dfe7.pdf
ICLR.cc/2025/Conference
1eQT9OzfNQ
Long Context Compression with Activation Beacon
Long context compression is a critical research problem due to its significance in reducing the high computational and memory costs associated with LLMs. In this paper, we propose Activation Beacon, a plug-in module for transformer-based LLMs that targets effective, efficient, and flexible compression of long contexts....
ICLR 2025 Poster
/pdf/2cb117a00ef812283da61295a1d36584936a89ef.pdf
ICLR.cc/2025/Conference
vG9dVXwXQV
Pre-Trained Vision-Language Model Selection and Reuse for Downstream Tasks
Pre-trained Vision-Language Models (VLMs) are becoming increasingly popular across various visual tasks, and several open-sourced VLM variants have been released. However, selecting the best-performing pre-trained VLM for a specific downstream task is challenging since no single VLM can achieve promising performance on...
Rejected_Submission
/pdf/b165dc48a44bb4ec5000295d400dbc3a8b1b0c5d.pdf
ICLR.cc/2024/Conference
rYyu3jpk8z
Open-Domain Text Evaluation via Contrastive Distribution Methods
Recent advancements in open-domain text generation, driven by the power of large pre-trained language models (LLMs), have demonstrated remarkable performance. However, assessing these models for specific attributes remains a challenge. Traditional reference-based metrics like BLEU, ROUGE, and METEOR measure the similar...
Rejected_Submission
/pdf/b7848de1814cf111b213295b5cfb4d0cdb54890b.pdf
ICLR.cc/2024/Conference
ta26LtNq2r
Learning to Reject Meets Long-tail Learning
Learning to reject (L2R) is a classical problem where one seeks a classifier capable of abstaining on low-confidence samples. Most prior work on L2R has focused on minimizing the standard misclassification error. However, in many real-world applications, the label distribution is highly imbalanced, necessitating alter...
ICLR 2024 spotlight
/pdf/2d19da352871db6a1e7954f4651e9b44594126b8.pdf
ICLR.cc/2024/Conference
pzZjyYee6L
Don't Reinvent the Steering Wheel
To make safe and informed decisions, autonomous driving systems can benefit from the capability of predicting the intentions and trajectories of other agents on the road in real-time. Trajectory forecasting for traffic scenarios has seen great strides in recent years in parallel with advancements in attention-based ne...
Rejected_Submission
/pdf/80c9543fa3df213c57fb1f5d770354ac82701d39.pdf
ICLR.cc/2025/Conference
7mlvOHL6qJ
LASeR: Towards Diversified and Generalizable Robot Design with Large Language Models
Recent advances in Large Language Models (LLMs) have stimulated a significant paradigm shift in evolutionary optimization, where hand-crafted search heuristics are gradually replaced with LLMs serving as intelligent search operators. However, these studies still bear some notable limitations, including a challenge to b...
ICLR 2025 Poster
/pdf/9dbb4a438c60a6230727c2ea1de47058a6532a78.pdf
ICLR.cc/2025/Conference
wgnMdxS2nZ
MQFL-FHE: Multimodal Quantum Federated Learning Framework with Fully Homomorphic Encryption
The integration of fully homomorphic encryption (FHE) in federated learning (FL) has led to significant advances in data privacy. However, during the aggregation phase, it often results in performance degradation of the aggregated model, hindering the development of robust representational generalization. In this work,...
Rejected_Submission
/pdf/66f3ac55eea2e1562f305dd606132c32e6c5ff57.pdf
ICLR.cc/2024/Conference
2kvDzdC5rh
IntentGPT: Few-Shot Intent Discovery with Large Language Models
In today's digitally driven world, dialogue systems play a pivotal role in enhancing user interactions, from customer service to virtual assistants. In these dialogues, it is important to identify user's goals automatically to resolve their needs promptly. This has necessitated the integration of models that perform In...
Rejected_Submission
/pdf/6210db85a05d7d212a203f7f9d053e8d377074ba.pdf
ICLR.cc/2024/Conference
Tj3xLVuE9f
On the Foundations of Shortcut Learning
Deep-learning models can extract a rich assortment of features from data. Which features a model uses depends not only on *predictivity*---how reliably a feature indicates training-set labels---but also on *availability*---how easily the feature can be extracted from inputs. The literature on shortcut learning has note...
ICLR 2024 spotlight
/pdf/3f47b29f0e35691e7047d9fbfa0e4c47ea966e49.pdf
ICLR.cc/2024/Conference
ze7DOLi394
On the Joint Interaction of Models, Data, and Features
Learning features from data is one of the defining characteristics of deep learning, but the theoretical understanding of the role features play in deep learning is still in early development. To address this gap, we introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data an...
ICLR 2024 oral
/pdf/86a102e47488a58d90fc222cf560db16f68dc65d.pdf
ICLR.cc/2025/Conference
DIAaRdL2Ra
Convergence of Adafactor under Non-Convex Smooth Stochastic Optimization
Adafactor, a memory-efficient variant of Adam, has emerged as one of the popular choices for training deep learning tasks, particularly large language models. However, despite its practical success, there is limited theoretical analysis of Adafactor's convergence. In this paper, we present a comprehensive analysis of A...
Rejected_Submission
/pdf/e150eb67733d39b6f2fe76ce113f4b2149ec89fe.pdf
ICLR.cc/2025/Conference
0rS9o1uKqu
Training-Like Data Reconstruction
Machine Learning models are often trained on proprietary and private data that cannot be shared, though the trained models themselves are distributed openly assuming that sharing model weights is privacy preserving, as training data is not expected to be inferred from the model weights. In this paper, we present Traini...
Rejected_Submission
/pdf/c66d73e577d9a978703daa6453138c1817655849.pdf
ICLR.cc/2024/Conference
kOBkxFRKTA
Dynamic Sparse Training with Structured Sparsity
Dynamic Sparse Training (DST) methods achieve state-of-the-art results in sparse neural network training, matching the generalization of dense models while enabling sparse training and inference. Although the resulting models are highly sparse and theoretically less computationally expensive, achieving speedups with un...
ICLR 2024 poster
/pdf/df0ecee276b46da96414263a1adb2d466de60dfc.pdf
ICLR.cc/2024/Conference
53gU1BASrd
Evaluating and Finetuning Models For Financial Time Series Forecasting
Time series forecasting is a challenging task as it is subject to a lot of noise, and the predictions often depend on external events. Still, recent deep learning techniques advanced the state-of-the-art on certain datasets, while they keep failing on other noisy datasets. This paper studies the case of financial time ...
Rejected_Submission
/pdf/cd9f81e1a403a5f9c0fe5f4dffc5a838ea94cb13.pdf
ICLR.cc/2025/Conference
gSGRSxVcRP
Detecting and Approximating Redundant Computational Blocks in Neural Networks
Deep neural networks often learn similar internal representations, both across different models and within their own layers. While inter-network similarities have enabled techniques such as model stitching and merging, intra-network similarities present new opportunities for designing more efficient architectures. In t...
Rejected_Submission
/pdf/f1bd6c5299ae5d150166f59e387980f9117d669b.pdf
ICLR.cc/2025/Conference
3ENBquM4b4
Plasticity from Structured Sparsity: Mastering Continual Reinforcement Learning through Fine-grained Network Allocation and Dormant Neuron Exploration
Continual reinforcement learning faces a central challenge in striking a balance between plasticity and stability to mitigate catastrophic forgetting. In this paper, we introduce SSDE, a novel structure-based method that aims to improve plasticity through a fine-grained allocation strategy with Structured Sparsity and ...
Rejected_Submission
/pdf/6f53ff6c86224cfcb3879dd0c27233a5d346ae3b.pdf
ICLR.cc/2025/Conference
uMEsKEiB7J
NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens
Recent advancements in Large Language Models (LLMs) have pushed the boundaries of natural language processing, especially in long-context understanding. However, the evaluation of these models' long-context abilities remains a challenge due to the limitations of current benchmarks. To address this gap, we introduce Nov...
ICLR 2025 Poster
/pdf/ea57723467d92dbbcdddcf4648c2649a76c1bdc6.pdf
ICLR.cc/2024/Conference
MFCjgEOLJT
Learning interpretable control inputs and dynamics underlying animal locomotion
A central objective in neuroscience is to understand how the brain orchestrates movement. Recent advances in automated tracking technologies have made it possible to document behavior with unprecedented temporal resolution and scale, generating rich datasets which can be exploited to gain insights into the neural contr...
ICLR 2024 poster
/pdf/44b4027edc563e0589bd44a76a7cdd91d74f932b.pdf
ICLR.cc/2025/Conference
UQJ7CDW8nb
LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them and textual instructions into the context of large language models (LLMs), where large-...
ICLR 2025 Poster
/pdf/efd2169a71f1800808f58038f0bf1023ce051103.pdf
ICLR.cc/2025/Conference
MBBRHDuiwM
URLOST: Unsupervised Representation Learning without Stationarity or Topology
Unsupervised representation learning has seen tremendous progress. However, it is constrained by its reliance on domain specific stationarity and topology, a limitation not found in biological intelligence systems. For instance, unlike computer vision, human vision can process visual signals sampled from highly irregul...
ICLR 2025 Poster
/pdf/cd82b4c7d59caf976f9195bbb1e11d981bf27897.pdf
ICLR.cc/2025/Conference
0EP01yhDlg
Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition
We propose a new model for multi-token prediction in transformers, aiming to enhance sampling efficiency without compromising accuracy. Motivated by recent work that predicts the probabilities of subsequent tokens using multiple heads, we connect this approach to rank-1 canonical tensor decomposition. By generalizing i...
Rejected_Submission
/pdf/6fe4ded46a26ce5b63b0b8d91cd7d350f5112c33.pdf
ICLR.cc/2025/Conference
KnoS9XxIlK
A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules
Training large models is both resource-intensive and time-consuming, making it crucial to understand the quantitative relationship between model performance and hyperparameters. In this paper, we derive an empirical law that predicts pretraining loss for large language models for every intermediate training step across...
ICLR 2025 Poster
/pdf/212cefb36307e11021cced0a49c98a00931e6431.pdf
ICLR.cc/2024/Conference
nUBLhhVM1l
Tight Rates in Supervised Outlier Transfer Learning
A critical barrier to learning an accurate decision rule for outlier detection is the scarcity of outlier data. As such, practitioners often turn to the use of similar but imperfect outlier data from which they might \emph{transfer} information to the target outlier detection task. Despite the recent empirical success ...
ICLR 2024 poster
/pdf/0421a646b97f3333e121cd90e499e5b17866e497.pdf
ICLR.cc/2024/Conference
OMVFYTgj0H
Continual Reinforcement Learning by Reweighting Bellman Targets
One major obstacle to the general AI agent is the inability to solve new problems without forgetting previously acquired knowledge. This deficiency is highly linked to the fact that most reinforcement learning~(RL) methods are based upon the key assumption that the environment transition dynamics and reward functions a...
Rejected_Submission
/pdf/51749459e3d905e07f499f07cbf13e327f907bbb.pdf
ICLR.cc/2024/Conference
xh0XzueyCJ
Plug-And-Play Controllable Graph Generation With Diffusion Models
Diffusion models for graph generation present transformative capabilities in generating graphs for various downstream applications. However, controlling the properties of the generated graphs remains a challenging task for these methods. Few approaches tackling this challenge focus on the ability to control for a soft ...
Rejected_Submission
/pdf/5587ce85d8b85c1e649b68b5b71b53058f11d047.pdf
ICLR.cc/2025/Conference
VnLhUogHYE
K-HALU: Multiple Answer Korean Hallucination Benchmark for Large Language Models
Recent researchers and companies have been developing large language models (LLMs) specifically designed for particular purposes and have achieved significant advancements in various natural language processing tasks. However, LLMs are still prone to generating hallucinations—results that are unfaithful or inconsistent...
ICLR 2025 Poster
/pdf/011316b9ff90cb226a3dd709e2256f165ff85df9.pdf
ICLR.cc/2025/Conference
5pd78GmXC6
Charting the Design Space of Neural Graph Representations for Subgraph Matching
Subgraph matching is vital in knowledge graph (KG) question answering, molecule design, scene graph, code and circuit search, etc. Neural methods have shown promising results for subgraph matching. Our study of recent systems suggests refactoring them into a unified design space for graph matching networks. Existing me...
ICLR 2025 Poster
/pdf/53d17f44d595659cf050490be6c4b87018988e38.pdf
ICLR.cc/2024/Conference
4KqkizXgXU
Curiosity-driven Red-teaming for Large Language Models
Large language models (LLMs) hold great potential for many natural language applications but risk generating incorrect or toxic content. To probe when an LLM generates unwanted content, the current paradigm is to recruit a $\textit{red team}$ of human testers to design input prompts (i.e., test cases) that elicit undes...
ICLR 2024 poster
/pdf/3d980d9fedcf67ee1a60555b571fd716325f2f03.pdf
ICLR.cc/2024/Conference
2uHTuvDkLZ
Physics-aware Causal Graph Network for Spatiotemporal Modeling
Interpretable physics equations are widely recognized as valuable inductive biases for constructing robust spatiotemporal models. To harness these valuable pieces of knowledge, existing approaches often presuppose access to the exact underlying equations. However, such an assumption usually doesn't hold, especially in ...
Rejected_Submission
/pdf/b680101f5e19ac6f8f83c61e3ae0f38cbe0cd7bc.pdf
ICLR.cc/2018/Conference
H1tSsb-AW
Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines
Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent ba...
Accept (Oral)
/pdf/9b96677031b946ffbd1bc1632375cf5e2a190309.pdf
ICLR.cc/2025/Conference
MK6E6IgROl
ProcBench: Benchmark for Multi-Step Reasoning and Following Procedure
Reasoning is central to a wide range of intellectual activities, and while the capabilities of large language models (LLMs) continue to advance, their performance in reasoning tasks remains limited. The processes and mechanisms underlying reasoning are not yet fully understood, but key elements include path exploration...
Rejected_Submission
/pdf/b80dfee1504937c1dc47fcb61809bc6e1727738e.pdf
ICLR.cc/2025/Conference
WWymYrA48K
Test Time Learning for Time Series Forecasting
We propose the use of Test-Time Training (TTT) modules in a cascade architecture to enhance performance in long-term time series forecasting. Through extensive experiments on standard benchmark datasets, we demonstrate that TTT modules consistently outperform state-of-the-art models, including Mamba-based TimeMachine, ...
Rejected_Submission
/pdf/810930cf247905bae4385ec4c728ed2cea6f2c24.pdf
ICLR.cc/2024/Conference
GwBTlCIGs5
Addressing Sample Inefficiency in Multi-View Representation Learning
Non-contrastive self-supervised learning (NC-SSL) methods like BarlowTwins and VICReg have shown great promise for label-free representation learning in computer vision. Despite the apparent simplicity of these techniques, researchers must rely on several empirical heuristics to achieve competitive performance, most no...
Rejected_Submission
/pdf/b1038e553b0b33ffbcedc2df5ea2df744fd7d9d8.pdf
ICLR.cc/2024/Conference
D6pHf8AiO7
Pruning neural networks using FishLeg estimation
In many domains, the most successful AI models tend to be the largest, indeed often too large to be handled by AI players with limited computational resources. To mitigate this, a number of compression methods have been developed, including methods that prune the network down to high sparsity whilst retaining performan...
Rejected_Submission
/pdf/f29bc4b540a1b9f8f1b4c0756c579093279b8311.pdf
ICLR.cc/2025/Conference
ursX3k1rTO
Wyckoff Transformer: Generation of Symmetric Crystals
We propose Wyckoff Transformer, a generative model for materials conditioned on space group symmetry. Most real--world inorganic materials have internal symmetry beyond lattice translation. Symmetry rules that atoms obey play a fundamental role in determining the physical, chemical, and electronic properties of crystal...
Rejected_Submission
/pdf/0ee6297e89c3299941b4409715b7b58408ed0201.pdf
ICLR.cc/2025/Conference
vVhZh9ZpIM
The Pitfalls of Memorization: When Memorization Hurts Generalization
Neural networks often learn simple explanations that fit the majority of the data while memorizing exceptions that deviate from these explanations. This behavior leads to poor generalization when the learned explanations rely on spurious correlations. In this work, we formalize $\textit{the interplay between memorizati...
ICLR 2025 Poster
/pdf/589f629fc7b43054f6b1ad600284df6690349c80.pdf
ICLR.cc/2025/Conference
MGZyUtaYUb
RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark
Deep reinforcement learning (RL) has recently shown significant benefits in solving combinatorial optimization (CO) problems, reducing reliance on domain expertise, and improving computational efficiency. However, the field lacks a unified benchmark for easy development and standardized comparison of algorithms across ...
Desk_Rejected_Submission
/pdf/2f8da563f503057eaa8c82b48fa70ce03f78a659.pdf
ICLR.cc/2024/Conference
xXtD9P2lvH
Directed Graph Generation with Heat Kernels
Existing work on graph generation has, so far, mainly focused on undirected graphs. In this paper we propose a denoising autoencoder-based generative model that exploits the global structure of directed graphs (also called digraphs) via their Laplacian dynamics and enables one-shot generation. Our noising encoder uses...
Rejected_Submission
/pdf/6c664dd0fb50cebd339d3b4f0e58d69d509d612c.pdf
ICLR.cc/2024/Conference
ZhY1XSYqO4
Deep Variational Multivariate Information Bottleneck - A Framework for Variational Losses
Variational dimensionality reduction methods are known for their high accuracy, generative abilities, and robustness. These methods have many theoretical justifications. Here we introduce a unifying principle rooted in information theory to rederive and generalize existing variational methods and design new ones. We ba...
Rejected_Submission
/pdf/d44552061c889c0d4659d16fe86a8b73e3a17495.pdf
ICLR.cc/2025/Conference
D2EdWRWEQo
FreeFlow: Latent Flow Matching for Free Energy Difference Estimation
Estimating free energy differences between molecular systems is fundamental for understanding molecular interactions and accelerating drug discovery. Current techniques use molecular dynamics to sample the Boltzmann distributions of the two systems and of several intermediate "alchemical" distributions that interpolate...
Rejected_Submission
/pdf/8b4a2effeab98e515ccda2cc625cb8f21fa402f6.pdf
ICLR.cc/2025/Conference
aWXnKanInf
TopoLM: brain-like spatio-functional organization in a topographic language model
Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building o...
ICLR 2025 Oral
/pdf/611b6267012e28d00c7126738ae312fe73e2c856.pdf
ICLR.cc/2025/Conference
veiSkPqIXm
OpenPL: Realistic Evaluation of Prompt Learning for VLM in Open Environments
Vision-language models (VLMs) have demonstrated impressive zero-shot capabilities across various image classification tasks. Their performance can be further enhanced through prompt learning methods. To evaluate the effectiveness of prompt learning, it is important to assess its robustness to new classes and distributi...
Rejected_Submission
/pdf/4fb2d7076dc96e78c0bcd86c3d59f120e1af4eb0.pdf
ICLR.cc/2024/Conference
rhaQbS3K3R
Does Progress On Object Recognition Benchmarks Improve Generalization on Crowdsourced, Global Data?
For more than a decade, researchers have measured progress in object recognition on the ImageNet dataset along with its associated generalization benchmarks such as ImageNet-A, -C, and -R. Recent advances in foundation models, trained on orders of magnitude more data, have begun to saturate performance on these benchma...
ICLR 2024 poster
/pdf/c1b53b959a0293a7d8bfd6315666036aaf3b5c33.pdf
ICLR.cc/2018/Conference
H1VjBebR-
The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings
We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between the two, which preserves semantics. Due to the lack of paired samples and without any definition of the semantic information, the problem might seem ill-posed. Specifically, i...
Accept (Poster)
/pdf/28602396bbae8435a659b18b701d49a96f937048.pdf
ICLR.cc/2025/Conference
nCrJD7qPJN
Distilling Dataset into Neural Field
Utilizing a large-scale dataset is essential for training high-performance deep learning models, but it also comes with substantial computation and storage costs. To overcome these challenges, dataset distillation has emerged as a promising solution by compressing the large-scale dataset into a smaller synthetic datase...
ICLR 2025 Poster
/pdf/6e92b5dd21f95a8dcae9d1d51a427c09da5b2e4b.pdf
ICLR.cc/2025/Conference
jqff3wzkLT
Variance-Covariance Regularization Improves Representation Learning
Transfer learning plays a key role in advancing machine learning models, yet conventional supervised pretraining often undermines feature transferability by prioritizing features that minimize the pretraining loss. In this work, we adapt a self-supervised learning regularization technique from the VICReg method to supe...
Rejected_Submission
/pdf/a2002403a15518318219333188ac2475a04d0848.pdf
ICLR.cc/2025/Conference
pwNIOcr8fU
Towards Syn-to-Real IQA: A Novel Perspective on Reshaping Synthetic Data Distributions
Blind Image Quality Assessment (BIQA) has advanced significantly through deep learning, but the scarcity of large-scale labeled datasets remains a challenge. While synthetic data offers a promising solution, models trained on existing synthetic datasets often show limited generalization ability. In this work, we make a...
Rejected_Submission
/pdf/eb697de053a898cb44da0f2b135be854402f559a.pdf
ICLR.cc/2024/Conference
4SrzKsJocx
Simultaneous Dimensionality Reduction: A Data Efficient Approach for Multimodal Representations Learning
Current experiments frequently produce high-dimensional, multimodal datasets—such as those combining neural activity and animal behavior or gene expression and phenotypic profiling—with the goal of extracting useful correlations between the modalities. Often, the first step in analyzing such datasets is dimensionality ...
Rejected_Submission
/pdf/490d723fc6e96e524e28fb689adc29187ed561bd.pdf
ICLR.cc/2024/Conference
KNtcoAM5Gy
BaFTA: Backprop-Free Test-Time Adaptation for Zero-shot Vision Language Models
Large-scale pretrained vision-language models like CLIP have demonstrated remarkable zero-shot image classification capabilities across diverse domains. To enhance CLIP's performance while preserving the zero-shot paradigm, various test-time prompt tuning methods have been introduced to refine class embeddings through ...
Rejected_Submission
/pdf/319cb3a6dca83bd1cce55ba0c701f33cb3f7d42b.pdf
ICLR.cc/2025/Conference
mIl15VP7vt
Reliable and Efficient Amortized Model-based Evaluation
Current generative model evaluation procedures are costly and sensitive to test set selection, making continuous monitoring impractical. In this paper, we employ a model-based evaluation framework using Item Response Theory (IRT), which decouples model performance from the test subset selection, ensuring reliable and e...
Rejected_Submission
/pdf/397ca770a66185f7a77339189f2a6f4dc7beaf50.pdf
ICLR.cc/2025/Conference
oaRaaG1WB1
Unlocking Trilevel Learning with Level-Wise Zeroth Order Constraints: Distributed Algorithms and Provable Non-Asymptotic Convergence
Trilevel learning (TLL) has found diverse applications across machine learning, ranging from robust hyperparameter optimization to domain adaptation. However, existing research primarily focuses on scenarios where TLL can be addressed with first-order information available at each level, which is inadequa...
Rejected_Submission
/pdf/80477b9764e12f49d052bbff9faccea36616467a.pdf
ICLR.cc/2025/Conference
VELhv9BBfn
Neural Dueling Bandits: Preference-Based Optimization with Human Feedback
Contextual dueling bandit is used to model the bandit problems, where a learner's goal is to find the best arm for a given context using observed noisy human preference feedback over the selected arms for the past contexts. However, existing algorithms assume the reward function is linear, which can be complex and non-...
ICLR 2025 Poster
/pdf/875613f857a562bc6de9f80ec0421b5e179060b8.pdf
ICLR.cc/2024/Conference
RsztjXcvUf
A Primal-Dual Approach to Solving Variational Inequalities with General Constraints
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities (VIs) under a limiting assumption that analytic solutions of specific subproblems are available. In this paper, we circumvent this assumption via a warm-starting technique where we solve subproblems app...
ICLR 2024 poster
/pdf/0d149caaf72505899168c0007b2b21c56e09a91b.pdf
ICLR.cc/2024/Conference
TLADT8Wrhn
TiC-CLIP: Continual Training of CLIP Models
Keeping large foundation models up to date on latest data is inherently expensive. To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models. This problem is exacerbated by the lack of any large scale continual learning benchmarks or baselines. We introduce the first se...
ICLR 2024 poster
/pdf/4d1ee077cc2138ae7576c2f17c698c8d07a99895.pdf
ICLR.cc/2024/Conference
rNvyMAV8Aw
Contextualized Policy Recovery: Modeling and Interpreting Medical Decisions with Adaptive Imitation Learning
Interpretable policy learning seeks to estimate intelligible decision policies from observed actions; however, existing models fall short by forcing a tradeoff between accuracy and interpretability. This tradeoff limits data-driven interpretations of the human decision-making process, e.g. to audit medical decisions for b...
Rejected_Submission
/pdf/be831aafd5823ef39f054438df0a72f20ab25d94.pdf
ICLR.cc/2025/Conference
dYTtGFuD3S
Adaptive Drug Interaction Prediction via Enhanced Graph Representation Learning
This paper presents a groundbreaking theoretical framework for drug-drug interaction (DDI) prediction that seamlessly integrates domain adaptation (DA) techniques with advanced mathematical concepts. We introduce GraphPharmNet, a novel architecture that operates on DDI-DA bundles, leveraging gauge-equivariant geometric...
Rejected_Submission
/pdf/6f01dcc6e98b50b9652596fa7ccfe410b014a046.pdf
ICLR.cc/2025/Conference
gcouwCx7dG
Improving the Sparse Structure Learning of Spiking Neural Networks from the View of Compression Efficiency
The human brain utilizes spikes for information transmission and dynamically reorganizes its network structure to boost energy efficiency and cognitive capabilities throughout its lifespan. Drawing inspiration from this spike-based computation, Spiking Neural Networks (SNNs) have been developed to construct event-drive...
ICLR 2025 Spotlight
/pdf/3f959189dd626b35cdbdc9e6c59c2d77a66baffd.pdf
ICLR.cc/2018/Conference
H1xJjlbAZ
INTERPRETATION OF NEURAL NETWORK IS FRAGILE
In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts ...
Reject
/pdf/a5fd94e2a9bcd11102bbce82ebd942cb442bd823.pdf
ICLR.cc/2025/Conference
pIVOSU7TFQ
Detecting Discrepancies Between Generated and Natural Images Using Uncertainty
In this work, we propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks. The motivation arises from the fundamental assumption regarding the distributional discrepancy between natural and AI-generated images. **The feasibility of distingui...
Rejected_Submission
/pdf/0dbf4bfd6d001c48a67cfff48b1beee4e3b5e483.pdf
ICLR.cc/2025/Conference
OX4Tk43uwv
SERDES Link Training with Edge Inference: Neural-Network Driven Discrete Optimization to Maximize Link Efficiency
Meeting the growing data demands of modern AI applications requires efficient, high-speed communication links. We propose an edge inference framework that dynamically optimizes non-uniform quantization levels in programmable ADC receivers. While integer linear programming (ILP) offers high-quality solutions, its signif...
Rejected_Submission
/pdf/5565944a873e0eaf9d971270d8eb37e2d146ad7e.pdf
ICLR.cc/2025/Conference
Na28j1Drh7
Why Do You Answer Like That? Psychological Analysis on Underlying Connections between LLM's Values and Safety Risks
The application scope of Large Language Models (LLMs) continues to expand, leading to increasing interest in personalized LLMs. However, aligning these models with individual values raises significant safety concerns due to harmful information correlated with certain values. In this paper, we identify specific safety r...
Rejected_Submission
/pdf/375474338585dd7d9b45e661c03eda764b4bc710.pdf
ICLR.cc/2025/Conference
WVzYMa68Of
Tensor Train Decomposition for Adversarial Attacks on Computer Vision Models
Deep neural networks (DNNs) are widely used today, but they are vulnerable to adversarial attacks. To develop effective methods of defense, it is important to understand the potential weak spots of DNNs. Often attacks are organized taking into account the architecture of models (white-box approach) and based on gradien...
Rejected_Submission
/pdf/a19d2158df5048ff2b6157f65f40294e8e0099fb.pdf
ICLR.cc/2024/Conference
DayPQKXaQk
Constrained Decoding for Cross-lingual Label Projection
Zero-shot cross-lingual transfer utilizing multilingual LLMs has become a popular learning paradigm for low-resource languages with no labeled training data. However, for NLP tasks that involve fine-grained predictions on words and phrases, the performance of zero-shot cross-lingual transfer learning lags far behind su...
ICLR 2024 poster
/pdf/e1e1f83484f5de8247ceaca832679cec78ea67ed.pdf
ICLR.cc/2024/Conference
CK5Hfb5hBG
Channel Vision Transformers: An Image Is Worth 1 x 16 x 16 Words
Vision Transformer (ViT) has emerged as a powerful architecture in the realm of modern computer vision. However, its application in certain imaging fields, such as microscopy and satellite imaging, presents unique challenges. In these domains, images often contain multiple channels, each carrying semantically distinct ...
ICLR 2024 poster
/pdf/d7d08775f142490ef0045c1080bd2fa5c6964f97.pdf
ICLR.cc/2025/Conference
jQiJRxNymY
Black-Box Approximation and Optimization with Hierarchical Tucker Decomposition
We develop a new method HTBB for the multidimensional black-box approximation and gradient-free optimization, which is based on the low-rank hierarchical Tucker decomposition with the use of the MaxVol indices selection procedure. Numerical experiments for 14 complex model problems demonstrate the robustness of the pro...
Rejected_Submission
/pdf/e51623a76ba7f2d1cf56f5f2991315152ceee7d2.pdf
ICLR.cc/2025/Conference
ZaSOGF8Ojq
TopInG: Topologically Interpretable Graph Learning via Persistent Rationale Filtration
Graph Neural Networks (GNNs) have shown remarkable performance in various scientific domains, but their lack of interpretability limits their applicability in critical decision-making processes. Recently, intrinsic interpretable GNNs have been studied to provide insights into model predictions by identifying rationale ...
Rejected_Submission
/pdf/f961e346183aba4bbd2a1c3241031cd43e0af910.pdf
ICLR.cc/2025/Conference
zxqdVo9FjY
Generalization for Least Squares Regression with Simple Spiked Covariances
Random matrix theory has proven to be a valuable tool in analyzing the generalization of linear models. However, the generalization properties of even two-layer neural networks trained by gradient descent remain poorly understood. To understand the generalization performance of such networks, it is crucial to character...
Rejected_Submission
/pdf/a6de441aeef92a7367b1ea318833b6b1e9c5a586.pdf
ICLR.cc/2024/Conference
06lrITXVAx
Dropout Enhanced Bilevel Training
Bilevel optimization problems appear in many widely used machine learning tasks. Bilevel optimization models are sensitive to small changes, and bilevel training tasks typically involve limited datasets. Therefore, overfitting is a common challenge in bilevel training tasks. This paper considers the use of dropout to a...
ICLR 2024 spotlight
/pdf/09304d5bf3e31448450004ee461830870db26085.pdf
ICLR.cc/2024/Conference
tfyLS1cB5W
Encoding Ontologies with Holographic Reduced Representations for Transformers
The ability to encode meaningful structure into deep learning models opens up the potential for incorporating prior knowledge, particularly in fields where domain-specific information is of great importance. However, transformer models trained on NLP tasks with medical data often have randomly initialized embeddings th...
Rejected_Submission
/pdf/5ebe07aa43ab2ae8726e88a5ddae85116082d80d.pdf
ICLR.cc/2024/Conference
qup9xD8mW4
Behaviour Distillation
Dataset distillation aims to condense large datasets into a small number of synthetic examples that can be used as drop-in replacements when training new models. It has applications to interpretability, neural architecture search, privacy, and continual learning. Despite strong successes in supervised domains, such met...
ICLR 2024 poster
/pdf/c401434987a8b2aa4b593e8609ec6cc7085d772b.pdf
ICLR.cc/2025/Conference
8lwWBSa1pJ
Time-aware World Model: Adaptive Learning of Task Dynamics
In this work, we introduce Time-Aware World Model, a model-based approach designed to explicitly incorporate the temporal dynamics of environments. By conditioning on the time step size, $\Delta t$, and training over a diverse range of $\Delta t$ values - rather than relying on a fixed time step size - our model enable...
Rejected_Submission
/pdf/f0a50bd9d7e0e66ecd266102d49da59ab9fb7cdd.pdf
ICLR.cc/2025/Conference
S2WHlhvFGg
Advancing Drug-Target Interaction Prediction via Graph Transformers and Residual Protein Embeddings
Predicting drug-target interactions (DTIs) is essential for advancing drug discovery. This paper presents a unified mathematical framework for unsupervised domain adaptation in drug-target interaction (DTI) prediction, integrating measure theory, functional analysis, information geometry, and optimal transport theory. ...
Rejected_Submission
/pdf/54b275e7697930464b973855ec0d78a7e9cf1a4c.pdf
ICLR.cc/2024/Conference
eoB6JmdmVf
Speech language models lack important brain-relevant semantics
Despite known differences between reading and listening in the brain, recent work has shown that text-based language models predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of information language models truly predict in the brain. We investigate t...
Rejected_Submission
/pdf/35a2c8f316fc52e16d6e0fa66a7ff798c2a0d32d.pdf
ICLR.cc/2024/Conference
HnVtsfyvap
Label-efficient Training of Small Task-specific Models by Leveraging Vision Foundation Models
Large Vision Foundation Models (VFMs) pretrained on massive datasets exhibit impressive performance on various downstream tasks, especially with limited labeled target data. However, due to their high memory and compute requirements, these models cannot be deployed in resource-constrained settings. This raises an important...
Rejected_Submission
/pdf/aef8aa51d377ddce7656d8d30bcdc17e72b61658.pdf
ICLR.cc/2024/Conference
Je5SHCKpPa
Multimodal Patient Representation Learning with Missing Modalities and Labels
Multimodal patient representation learning aims to integrate information from multiple modalities and generate comprehensive patient representations for subsequent clinical predictive tasks. However, many existing approaches either presuppose the availability of all modalities and labels for each patient or only deal w...
ICLR 2024 poster
/pdf/69771720175aee975e307b2eeda7643824f0a5d8.pdf
ICLR.cc/2025/Conference
NPLty3VT1c
Solving Nash Equilibrium Scalably via Deep-Learning-Augmented Iterative Algorithms
Computing the Nash Equilibrium (NE) is a fundamental yet computationally challenging problem in game theory. Although recent approaches have incorporated deep learning techniques to tackle this intractability, most of them still struggle with scalability when the number of players increases, due to the exponential grow...
Rejected_Submission
/pdf/c9ccf5bcc8c83ce8458c5b6b29295dc01166e939.pdf
ICLR.cc/2025/Conference
GlqeLNjH6p
Exploring Complex Trade-offs in Information Bottleneck through Multi-Objective Optimization
Information Bottleneck (IB) theory provides a principled approach to analyze and optimize how neural networks extract and learn latent representations from data, aiming to enhance network performance and generalization. The IB framework has been applied and validated across various domains in deep learning. However, mo...
Rejected_Submission
/pdf/744304ce7c59a7765cfa7dcc5d9034bb77fa43d1.pdf
ICLR.cc/2024/Conference
OLi39lZS9Y
Learning to Solve New sequential decision-making Tasks with In-Context Learning
Training autonomous agents that can generalize to new tasks from a small number of demonstrations is a long-standing problem in machine learning. Recently, transformers have displayed impressive few-shot learning capabilities on a wide range of domains in language and vision. However, the sequential decision-making s...
Rejected_Submission
/pdf/8718618c339883343475a97997586cf2f02f312a.pdf
ICLR.cc/2024/Conference
1tZbq88f27
MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undiscl...
ICLR 2024 poster
/pdf/2a8800abfa9599958bc3a14cd49aabf9ad0709a5.pdf