| paper_id | venue | focused_review | point |
|---|---|---|---|
NIPS_2022_1637 | NIPS_2022 | 1. The examples of scoring systems in the Introduction seem out of date; there are many newer and recognized clinical scoring systems. It should also briefly introduce the traditional framework of the scoring system and its difference in methodology and performance from the proposed method. 2. As shown in figure 3, the... | 4. The model AUC can assess the model discriminant ability, i.e., the predicted probability of a positive case is bigger than that of a negative case, but may be hard to show its consistency between predicted score and actual risk. However, this consistency may be more crucial to the clinical scoring system (differentiated with ... |
ICLR_2022_3058 | ICLR_2022 | . At the end of section 2, the authors tried to explain that noisy signals are harmful for OOD detection. It's obvious that with more independent units the variance of the output is higher, but this affects both ID and OOD data. The explanation is not clear. . The analysis in section 6 is kind of superficial. 1) Lemma 2... | 2) From Figure 4, the range of ID and OOD seems not to be changed much by sparsification. Similarly, Lemma 2 requires approximately identical means as the assumption. These conditions are crucial for DICE, but are not well discussed, e.g., how to ensure DICE meets these conditions. |
ICLR_2022_3205 | ICLR_2022 | This method trades one intractable problem for another: it requires learning the cross-values $v_{e'}(x_t; e)$ for all pairs of possible environments $e, e'$. It is not clear that this will be an improvement when scaling up. At a few points the paper introduces approximations, but the gap to the true value and the... | 12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct? I guess this was probably for space reasons, but the bolded sections on page 6 should really be broken out into \paragraph... |
NIPS_2022_601 | NIPS_2022 | Although I like the general idea of using DPP, I found there are various issues with the current version of the paper. Please see my detailed comments as follows. • The paper specifically targets the permutation problems, but I don't see how this permutation property is incorporated into the design of the proposed acqu... | • Besides, the experiments seem not too strong and fair to me. I don't understand why all the baselines use the position kernels; why don't we use the default settings of these baselines in the literature? Besides, it seems like some baselines related to BO with discrete & categorical variables are missing. The paper al... |
NIPS_2022_1770 | NIPS_2022 | Weakness: There are still several concerns with the finding that the perplexity is highly correlated with the number of decoder parameters. According to Figure 4, the correlation decreases as top-10% architectures are chosen instead of top-100%, which indicates that the training-free proxy is less accurate for paramete... | 1) there is a drop of correlation after a short period of training, which goes up with more training iterations; |
ICLR_2023_4878 | ICLR_2023 | 1. The proposed sparsity technique seems to be limited in scope. While it has been shown to work well for a particular misinformation detector, there is no guarantee that it will work well for other networks. 2. Though the experimental results are encouraging, it is not clear why the simple pre-fixed should work well. ... | 10. Looks like all sparsity patterns do almost equally well. No insight provided as to what is happening here. Is this something unique to the sparsity detection problem or is this true for GNN in general? Section 4.3: presentation bits --> representation bits |
ICLR_2023_650 | ICLR_2023 | 1. One severe problem of this paper is that it misses several important related work/baselines to compare [1,2,3,4], either in discussion [1,2,3,4] or experiments [1,2]. This paper aims to design a normalization layer that can be plugged into the network for avoiding the dimensional collapse of representation (in interm... | 2) The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ; τ should be shown in a rigorous way, or this paper should mention it. |
3vXpZpOn29 | ICLR_2025 | It is unclear that linear datamodels extend to other kinds of tasks, e.g. language modeling or regression problems. I believe this to be a major weakness of the paper. While linear datamodels lead to simple algorithms in this paper, the previous work [1] does not have a good argument for why linear datamodels work [1; ... | 1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2]. |
NIPS_2018_43 | NIPS_2018 | - Theoretical analyses are not particularly difficult, even if they do provide some insights. That is, the analyses are what I would expect any competent grad student to be able to come up with within the context of a homework assignment. I would consider the contributions there to be worthy of a posted note / arXiv ar... | - Claim (first para of Section 3.2) that "this methodology requires significant additional assumptions" seems too extreme to me. The only additional assumption is that the test set be drawn from the same distribution as the query set, which is natural for many machine learning settings where the train, validation, test... |
ICLR_2021_1504 | ICLR_2021 | W1) The authors should compare their approach (methodologically as well as experimentally) to other concept-based explanations for high-dimensional data such as (Kim et al., 2018), (Ghorbani et al., 2019) and (Goyal et al., 2019). The related work claims that (Kim et al., 2018) requires large sets of annotated data. I ... | 2) Shapley values over other methods. I think the authors need to back up their argument for using Shapley value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot by including a significant discussion on ... |
NIPS_2017_337 | NIPS_2017 | of the manuscript stem from the restrictive---but acceptable---assumptions made throughout the analysis in order to make it tractable. The most important one is that the analysis considers the impact of data poisoning on the training loss in lieu of the test loss. This simplification is clearly acknowledged in the writ... | - Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts. |
ICLR_2022_497 | ICLR_2022 | I have the following questions, to which I hope the authors could respond in the rebuttal. If I missed something in the paper, I would appreciate it if the authors could point it out. Main concerns: - In my understanding, the best scenarios are those generated from the true distribution P (over the scenarios), and ther... | - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). |
tsbdcgaCtk | ICLR_2024 | 1. generating a quality label does not necessarily mean that the model has the ability to predict it. I am wondering: if some disturbances are made to the sentence in the training data, will the proposed model generate the correct quality label (showing the quality goes down)? 2. according to Fig. 1, the predic... | 1. generating a quality label does not necessarily mean that the model has the ability to predict it. I am wondering: if some disturbances are made to the sentence in the training data, will the proposed model generate the correct quality label (showing the quality goes down)? |
NIPS_2022_2373 | NIPS_2022 | weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive ex... | 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. |
NIPS_2017_143 | NIPS_2017 | For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant: - In which real scenarios is... | - Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player). |
ICLR_2021_863 | ICLR_2021 | Weakness 1. The presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques including Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the il... | 2. Appendix A.2 does not illustrate the state space representation of the environment clearly. |
NIPS_2018_430 | NIPS_2018 | - The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. - The authors only applied their method on peculiar types of machine learning applications that were already used for testing boolean classifier generation. It is unclear whethe... | - The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. |
nE1l0vpQDP | ICLR_2025 | - Given the existing literature on the implicit bias of optimization methods, the primary concern is the significance of the results presented. For instance, the classic result by [Z. Ji and M. Telgarsky] demonstrates a convergence rate $\log\log n/\log n$ of GD to the L2-margin solution, which is faster than the rate ... | - The bounded noise assumption, while common, is somewhat restrictive in the stochastic optimization literature. There have been several efforts to extend these noise conditions: [A. Khaled and P. Richtárik] Better theory for SGD in the nonconvex world. TMLR 2023. [R. Gower, O. Sebbouh, and N. Loizou] SGD for structured ... |
BSGQHpGI1Q | ICLR_2025 | - The overall motivation of using characteristic function regularization is not clear. - The abstract states “improves performance … by preserving essential distributional properties…” -> How does the preservation of such properties aid in generalization? - The abstract states that the method is meant to be used in con... | - The overall motivation of using characteristic function regularization is not clear. |
NIPS_2021_2367 | NIPS_2021 | 1. The paper appears to be limited to a combination of existing techniques: adaptation to an unknown level of corruption (Lykouris et al., 2018); varying variances treated with a weighted version of OFUL (Zhou et al., 2021); variable decision sets (standard in contextual linear bandits). The fact that these results can... | 1. The paper appears to be limited to a combination of existing techniques: adaptation to an unknown level of corruption (Lykouris et al., 2018); varying variances treated with a weighted version of OFUL (Zhou et al., 2021); variable decision sets (standard in contextual linear bandits). The fact that these results can... |
8HG2QrtXXB | ICLR_2024 | - Source of Improvement and Ablation Study: - Given the presence of various complex architectural choices, it's difficult to determine whether the Helmholtz decomposition is the primary source of the observed performance improvement. Notably, the absence of the multi-head mechanism leads to a performance drop (0.1261 -... | - Multiscale modeling: - The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly. |
NIPS_2018_768 | NIPS_2018 | Weakness] 1: I like the paper's idea and result. However, this paper really REQUIRES the ablation study to justify the effectiveness of different compositions. For example: - In Eq. 2, what is the number of m, and how does m affect the results? - In Eq. 3, what is the dimension of w_n? What if we use the Euclidean coordinate in... | 2: Based on the paper's description, I think it will be hard to replicate the result. It would be great if the authors can release the code after the acceptance of the paper. |
ICLR_2022_1675 | ICLR_2022 | Weakness] 1. The paper contains severe writing issues such as grammatical errors, abuses of mathematical symbols, unclear sentences, etc. 2. The paper needs more literature survey, especially about the existing defense methods using the manifold assumption. 3. The paper does not have enough (either theoretical or exper... | 1. The paper contains severe writing issues such as grammatical errors, abuses of mathematical symbols, unclear sentences, etc. |
ICLR_2021_1716 | ICLR_2021 | Results are on MNIST only. Historically it’s often been the case that strong results on MNIST would not carry over to more complex data. Additionally, at least some core parts of the analysis do not require training networks (but could even be performed e.g. with pre-trained classifiers on ImageNet) - there is thus n... | 3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1. Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on una... |
NIPS_2018_947 | NIPS_2018 | weakness of the paper, in its current version, is the experimental results. This is not to say that the proposed method is not promising - it definitely is. However, I have some questions that I hope the authors can address. - Time limit of 10 seconds: I am quite intrigued as to the particular choice of time limit, whi... | - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? |
NIPS_2020_556 | NIPS_2020 | * The visual quality/fidelity of the generated images is quite low. Making sure that the visual fidelity on common metrics such as FID matches or is at least close enough to GAN models will be useful to validate that the approach supports high fidelity (as otherwise it may be the case that it achieves compositionality ... | * The use of energy models for image generation is much more unexplored compared to GANs and VAEs and so exploring it further is great. However, note that the motivation and goals of the model -- to achieve compositional generation through logical combination of concepts learned through data subsets, is similar to a pr... |
NIPS_2020_1335 | NIPS_2020 | Given how strong the first four sections (five pages) of the paper were, I was relatively disappointed in the experiments, which were somewhat light. Specifically: 1) While the authors' methods allow for learning a state-action-dependent weighting of the shaping rewards, it seemed to me possible that in all of the expe... | 1) While the authors' methods allow for learning a state-action-dependent weighting of the shaping rewards, it seemed to me possible that in all of the experiments presented, learning a *uniform* state-action-independent weighting would have sufficed. Moreover, since learning a state-action-independent weighting is muc... |
NIPS_2018_985 | NIPS_2018 | Weakness: - One drawback is that the idea of dropping a spatial region in training is not new. Cutout [22] and [a] have explored this direction. The difference from previous dropout variants is marginal. [a] CVPR'17. A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection. - The improvement ove... | - The improvement over previous methods is small, about 0.2%-1%. Also the results in Table 1 and Fig. 5 don't report the mean and standard deviation, and whether the difference is statistically significant is hard to know. I suggest repeating the experiments and conducting statistical significance analysis on the numb... |
ICLR_2023_624 | ICLR_2023 | 1. evaluation on a single domain The method is evaluated only on the tasks from Meta World, a robotic manipulation domain. Hence, it is difficult to judge whether the results will generalize to other domains. I strongly recommend running experiments on a different benchmark such as Atari which is commonly used in the l... | 1. evaluation on a single domain The method is evaluated only on the tasks from Meta World, a robotic manipulation domain. Hence, it is difficult to judge whether the results will generalize to other domains. I strongly recommend running experiments on a different benchmark such as Atari which is commonly used in the l... |
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, ... | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. |
NIPS_2020_1817 | NIPS_2020 | There are a few points that are not clear from the paper, which I list below: - As far as I understood in the clustered attention (not the improved one), the value of the i-th query becomes the value of the centroid of the cluster that the query belongs to. So after one round of applying the clustered attention, we hav... | - Although the method is presented nicely and the experiments are rather good and complete, a bit of analysis on what the model does, which can be extremely interesting, is missing (check the feedback/suggestions). |
ARR_2022_209_review | ARR_2022 | 1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date? 2. There isn't one clear aggregation strategy that gives consistent performance gains across all tasks. S... | 1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date? |
K98byXpOpU | ICLR_2024 | 1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are ... | 1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are ... |
NIPS_2021_386 | NIPS_2021 | 1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS... | 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is. Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset ... |
oEuTWBfVoe | ICLR_2024 | I think the paper has several weaknesses. Please see the following list and the questions section. * The statement in the introduction regarding the biological plausibility of backpropagation may be too weak ("While the backpropagation ..., its biological plausibility remains a subject of debate."). It is widely accep... | * The statement in the introduction regarding the biological plausibility of backpropagation may be too weak ("While the backpropagation ..., its biological plausibility remains a subject of debate."). It is widely accepted that backpropagation is biologically implausible. |
ztT70ubhsc | ICLR_2025 | - The professional sketches (Multi-Gen-20M) considered in this work are in binarised versions of HED edges, which is very different from what a real artist would draw (no artist or professional sketcher would produce lines like those in Figure 1). This makes the basic assumptions/conditions of the paper not very rigoro... | - The modulator is heuristically designed. It is hard to justify if there is a scalability issue that might need tedious hyperparameter tuning for diverse training data. |
NIPS_2016_241 | NIPS_2016 | /challenges of this approach. For instance... - The paper does not discuss runtime, but I assume that the VIN module adds a *lot* of computational expense. - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed... | - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. |
NIPS_2022_532 | NIPS_2022 | • It seems that the policy has been learned to imitate ODA, one of the methods for solving the MOIP problem, but the paper does not clearly show how the presented method improves the performance and computation speed of the solution over just using ODA. • In order to apply imitation learning, it... | • In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining the corresponding data, and how the performance changes depending on the size of the labeled data. |
47hDbAMLbc | ICLR_2024 | - The paper is mainly dedicated to the existence of robust training. No results on optimization or robust generalization are derived. Given that, the scope seems to be quite limited. - Since overparameterization can often lead to powerful memorization and good generalization performance, the necessary conditions may ha... | - Since overparameterization can often lead to powerful memorization and good generalization performance, the necessary conditions may have stronger implications if they are connected to generalization bounds. It is not clear in the paper that the constructions of ReLU networks for robust memorization would lead to rob... |
D0gAwtclWk | EMNLP_2023 | 1. While the paper provides valuable insights for contrastive learning in code search tasks, it does not thoroughly explore the implications of their proposed method for other NLP tasks. This somewhat limits the generalizability of the results. 2. The paper does not discuss the computational efficiency of the proposed me... | 1. While the paper provides valuable insights for contrastive learning in code search tasks, it does not thoroughly explore the implications of their proposed method for other NLP tasks. This somewhat limits the generalizability of the results. |
NIPS_2017_337 | NIPS_2017 | of the manuscript stem from the restrictive---but acceptable---assumptions made throughout the analysis in order to make it tractable. The most important one is that the analysis considers the impact of data poisoning on the training loss in lieu of the test loss. This simplification is clearly acknowledged in the writ... | - The use of the terminology "certificate" in some contexts (for instance at line 267) might be misinterpreted, due to its strong meaning in complexity theory. |
NIPS_2020_696 | NIPS_2020 | * A big concern for me is that this paper was hard to read. Since it is very applications specific, I am not familiar with a lot of the theory or the inverse problem(s) considered here. As a result, I am unable to appreciate the key aspects of the paper. For example, the introduction directly gets into the details of w... | * Regarding the OOD experiments, this is indeed interesting because the trained network is able to give strong OOD generalization. However, particularly in imaging in the recent few years several papers have shown that untrained NNs (like deep image prior Ulyanov et al., CVPR 2018) can be used to solve inverse problems... |
NIPS_2020_83 | NIPS_2020 | While I think the paper makes a good contribution, there are some limitation at the present stage: - [Remark 3.1] While it has been done in previous works, I think that a deeper understanding of those cases where modelling the pushforward P in (8) as a composition of perturbation in an RKHS does not introduce an error,... | - The experiments are limited to toy data. There is a range of problems with real data where barycenters can be used and it would be interesting to show performance of the method in those settings too. |
ICLR_2021_738 | ICLR_2021 | ---:
1: This paper ensembles some existing compression/NAS approaches to improve the performance of BNNs, which is not significant enough.
The dynamic routing strategy (conditional on input) has been widely explored. For example, the proposed dynamic formulation in this paper has been used in several studies [2, 3].
Va... | 5: More experiments on deeper networks (e.g., ResNet-50) and other network structures (e.g., MobileNet) are needed to further strengthen the paper. References: [1] MoBiNet: A Mobile Binary Network for Image Classification, in WACV 2020. [2] Dynamic Channel Pruning: Feature Boosting and Suppression, in ICLR2019. [3] Lea... |
NIPS_2018_87 | NIPS_2018 | for a wide range of supervisory signals such as video-level action labels, single temporal point, one GT bounding box, temporal bounds, etc. The method is experimentally evaluated on the UCF-101-24 and DALY action detection datasets. Paper Strengths: - The paper is clear and easy to understand. - The problem formulatio... | - The experimental results are interesting and promising, and clearly demonstrate the significance of the varying level of supervision on the detection performance - Table 1. |
NIPS_2022_477 | NIPS_2022 | 1. In experiments, the PRODEN method also uses mixup and consistency training techniques for fair comparisons. What about other competitive baselines? I'd like to see how much the strong CC method could benefit from the representation training technique. 2. It is not clear why the proposed sample selection mechanism help... | 2. It is not clear why the proposed sample selection mechanism helps preserve the label distribution. |
NIPS_2020_232 | NIPS_2020 | - The results/analysis, albeit detailed and comprehensive, evaluate only two relatively old and small models. - Some of the comparison with other related works is not completely apples-to-apples, for instance comparing fixed point representation for training, while comparing against AdderNet and DeepShift whi... | - The results/analysis, albeit detailed and comprehensive, evaluate only two relatively old and small models. |
NIPS_2020_930 | NIPS_2020 | 1. The title is misleading and the authors might overclaim their contribution. Indeed, the stochastic problem in Eq.(1) is a special instance of nonconvex-concave minimax problems and equivalent to nonconvex compositional optimization problem in Eq.(2). Solving such problem is easier than the general case consider in [... | 4. Given the current stochastic problem in Eq.(1), I believe that the prox-linear subproblem can be reformulated using the conjugate function and becomes the same as the subproblem in Algorithm 1. That is to say, we can simply improve prox-linear algorithms for solving stochastic problem in Eq.(1). This makes the motiv... |
j9e3WVc49w | EMNLP_2023 | - The claim is grounded in empirical findings and does not provide a solid mathematical foundation. - Although I acknowledge that KD and LS are not identical, I believe KD can be viewed as a special form of LS. This is particularly true when the teacher network is uniformly distributed and the temperature is set at 1, ... | - Although I acknowledge that KD and LS are not identical, I believe KD can be viewed as a special form of LS. This is particularly true when the teacher network is uniformly distributed and the temperature is set at 1; then LS and KD are equivalent. |
ICLR_2023_1599 | ICLR_2023 | of the proposed method are listed as below:
There are two key components of the method, namely, the attention computation and learn-to-rank module. For the first component, it is a common practice to compute importance using SE blocks. Therefore, the novelty of this component is limited.
Some important SOTAs are missin... | 35. No.3. 2021. Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included. Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method. |
NIPS_2017_486 | NIPS_2017 | 1. The paper is motivated by using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information: which phrase is incorrect, what is the correct phrase, and what is the... | 7. FBN results (Table 5): can authors please throw light on why the performance degrades when using the additional information about missing/wrong/redundant? |
NIPS_2017_567 | NIPS_2017 | Weakness: 1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly. Here are some examples: (1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig 2(b). Where a... | 1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly. Here are some examples: (1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig 2(b). Where are the h_{... |
NIPS_2019_165 | NIPS_2019 | of the approach and experiments or list future direction for readers. The writeup is exceptionally clear and well organized-- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig 3, explain the correspondence between ... | 3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page 4. Explain the scramble network better... |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ab... | 3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder. While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated. Why is this particular dimension of difficulty interesting? |
NIPS_2021_121 | NIPS_2021 | Weakness] 1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the... | 1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the noise inj... |
ICLR_2023_1765 | ICLR_2023 | weakness, which are summarized in the following points:
Important limitations of the quasi-convex architecture are not addressed in the main text. The proposed architecture can only represent non-negative functions, which is a significant weakness for regression problems. However, this is completed elided and could be ... | - The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text. |
ICLR_2023_3948 | ICLR_2023 | 1.This paper lacks novelty and is only a combination of some existing approaches, such as Qu et al. (2020). Moreover, I find that the equations are similar.
2.The motivation is not clear at all. The introduction should be carefully revised to make this paper easy to follow.
3.I find the experimental analysis is vague, ... | 2.The motivation is not clear at all. The introduction should be carefully revised to make this paper easy to follow. |
NIPS_2022_2813 | NIPS_2022 | 1. The proposed method is a two-stage optimization strategy, which is a bit difficult to balance the two steps optimization. Could it be end-to-end training? 2. Although it is intuitive that including multiple local prompts helps, for different categories, the features and their positions are not the same. | 2. Although it is intuitive that including multiple local prompts helps, for different categories, the features and their positions are not the same. |
iamWnRpMuQ | ICLR_2025 | The results have a few issues which make evaluating the contribution difficult:
1. The paper lacks a comparison with some existing works, particularly methods involve iterative PPO/DPO method that train a reward model simultaneously and reward ensembles [1].
[1] Coste T, Anwar U, Kirk R, et al. Reward model ensembles h... | 2. The alignment of relabeled reward data with human annotator judgments remains insufficiently validated. |
NIPS_2017_104 | NIPS_2017 | ---
There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit.
* More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy... | * The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might he... |
Wo66GEFnXd | ICLR_2025 | 1. This paper just simply combines neural networks into the physical sciences problems for predicting TDDFT for molecules. Due to the lack of comparison with other learning based methods and insufficient experiment results, I don’t see the novelty and effectiveness of this method from the learning perspective. Maybe th... | 2. This paper only does experiments on a very limited number of molecules and only provides in-distribution testing for these samples. I think the value of this method would be limited if it needs to train for each molecule individually. |
NIPS_2018_461 | NIPS_2018 | 1. Symbols are a little bit complicated and takes a lot of time to understand. 2. The author should probably focus more on the proposed problem and framework, instead of spending much space on the applications. 3. No conclusion section Generally I think this paper is good, but my main concern is the originality. If thi... | 1. Symbols are a little bit complicated and takes a lot of time to understand. |
NIPS_2019_1350 | NIPS_2019 | of the method. CLARITY: The paper is well organized, partially well written and easy to follow, in other parts with quite some potential for improvement, specifically in the experiments section. Suggestions for more clarity below. SIGNIFICANCE: I consider the work significant, because there might be many settings in wh... | - Figure 3: I don't understand the red line: Where does the test data come from? Do you have a ground truth? |
NIPS_2016_417 | NIPS_2016 | 1. Most of the human function learning literature has used tasks in which people never visualize data or functions. This is also the case in naturalistic settings where function learning takes place, where we have to form a continuous mapping between variables from experience. All of the tasks that were used in this pa... | 2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity... |
NIPS_2018_537 | NIPS_2018 | 1. The motivation or the need for this technique is unclear. It would have been great to have some intuition why replacing last layer of ResNets by capsule projection layer is necessary and why should it work. 2. The paper is not very well-written, possibly hurriedly written, so not easy to read. A lot is left desired ... | 2. The paper is not very well-written, possibly hurriedly written, so not easy to read. A lot is left desired in presentation and formatting, especially in figures/tables. |
oKn2eMAdfc | ICLR_2024 | 1. The introduction to orthogonality in Part 2 could be more detailed.
2. No details on how the capsule blocks are connected to each other.
3. The fourth line of Algorithm 1 does not state why the flatten operation is performed.
4.The presentation of the α-enmax function is not clear.
5. Eq. (4) does not specify why Ba... | 1. The introduction to orthogonality in Part 2 could be more detailed. |
ICLR_2022_1216 | ICLR_2022 | of the paper: Overall the paper is reasonably well-written but the writing can improve in certain aspects. Some comments and questions below. 1. It is not apparent to the reader why the authors choose an asymptotic regime to focus on. My understanding is that the primary reason is easier theoretical tractability. It wo... | 4. Given that prior work already theoretically shows that sample-wise multiple descent can occur in linear regression, the main contribution of the paper appears to be the result that optimal regularization can remove double descent even in certain anisotropic settings. If this is not the case, the paper should do a be... |
NIPS_2022_2315 | NIPS_2022 | Weakness: 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. 2) The justification for isotropic representation and contractive search could be more solid. | 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. |
7ipjMIHVJt | ICLR_2024 | 1) The first concern is the goal of the paper. Indeed, DAS earthquake detectors exists (one of them was cited by the autors, PhaseNet-Das, Zhu et al. 2023, there might be others), and no comparison was made, nor a justification on the benefit of your method against theirs. If the claim is to say that this is a foundati... | 1) The first concern is the goal of the paper. Indeed, DAS earthquake detectors exists (one of them was cited by the autors, PhaseNet-Das, Zhu et al. 2023, there might be others), and no comparison was made, nor a justification on the benefit of your method against theirs. If the claim is to say that this is a foundati... |
ICLR_2021_2846 | ICLR_2021 | Weakness: There are some concerns authors should further address: 1)The transductive inference stage is essentially an ensemble of a serial of models. Especially, the proposed data perturbation can be considered as a common data augmentation. What if such an ensemble is applied to the existing transductive methods? And... | 4)Why the results of Table 6 is not aligned with Table 1 (MCT-pair)? Also what about the ablation studies of MCT without the adaptive metrics. |
RnYd44LR2v | ICLR_2024 | - Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the in... | - Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the in... |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dia... | - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. |
NIPS_2016_321 | NIPS_2016 | #ERROR! | - The restriction to triplets (or a sliding window of length 3) is quite limiting. Is this a fundamental limitation of the approach or is an extension to longer subsequences (without a sliding window) straightforward? |
of2rhALq8l | ICLR_2024 | 1. A significant weakness of this paper is the lack of clarity in explaining the implementation of the core concept, which involves the use of strictly diagonal matrices and the proposed Gradual Mask (GM). Figure 2 suggests that the GM matrix is element-wise multiplied by the matrix A, but the description implies a dif... | 2. The hyper-parameters $b$ (bit-width) and $\alpha$ (stability factor) may introduce significant computational overhead in the pursuit of determining the optimal trade-off between model size and accuracy. |
NIPS_2019_1180 | NIPS_2019 | --- There are two somewhat minor weakness: presentation and some missing related work. The main points in this paper can be understood with a bit of work, but there are lots of minor missing details and points of confusion. I've listed them roughly in order, with the most important first: * What factors varied in order... | * L167: What is a "sqeuence of episodes" here? Are practice and evaluation the two types of this kind of sequence? Missing related work (seems very related, but does not negate this work's novelty): |
ICLR_2022_1393 | ICLR_2022 | I think that:
The comparison to baselines could be improved.
Some of the claims are not carefully backed up.
The explanation of the relationship to the existing literature could be improved.
More details on the above weaknesses:
Comparison to baselines:
"We did not find good benchmarks to compare our unsupervised, iter... | - I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method. |
NIPS_2018_134 | NIPS_2018 | - Some parts of the work are harder to follow and it helps to have checked [Cohen and Shashua, 2016] for background information. # Typos and Presentation - The citation of Kraehenbuehl and Koltun: it seems that the first and last name of the first author, i.e. Philipp, are swapped. - The paper seems to be using a diffe... | - line 126: by the black *line* in the input # Further Questions - Would it make sense to include and learn AccNet as part of a larger predictor, e.g., for semantic segmentation, that make use of similar operators? |
rwpv2kCt4X | EMNLP_2023 | The primary concerns include,
* The necessity of evaluating the degree of personalization is not clear to me.
- According to this paper, I only found three previous research that did personalized summarizers. And all of them utilize the current common metrics to measure performance. It seems these metrics are enough fo... | * The new proposed metric is only tested on a single dataset. |
NIPS_2019_175 | NIPS_2019 | 1. Weak novelty. Addressing domain-shift via domain specific moments is not new. It was done among others by Bilen & Vedaldi, 2017, "Universal representations: The missing link between faces, text, planktons, and cat breeds". Although this paper may have made some better design decisions about exactly how to do it. ... | 4. The evaluation is a good start with comparing several base DA methods with and without the proposed TransferNorm architecture. It would be stronger if the base DA methods were similarly evaluated with/without the architectural competitors such as AutoDial and AdaBN that are direct competitors to TN. |
NIPS_2022_489 | NIPS_2022 | Concern regarding representativeness of baselines used for evaluation
Practical benefits in terms of communication overhead & training time could be more strongly motivated
Detailed Comments:
Overall, the paper was interesting to read and the problem itself is well motivated. Formulation of the problem as an MPG appear... | - Some abbreviations are not defined, e.g., “NE” on L73 - Superscript notation in Eq 6 is not defined until much later (L166), which hindered understanding in an initial read. [1] S. Zhang et al, “Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control”, NeurIPS 2019. [2] Z. Ding et al,... |
NIPS_2021_442 | NIPS_2021 | of the paper:
Strengths: 1) To the best of my knowledge, the problem investigated in the paper is original in the sense that top-m identification has not been studied in the misspecified setting. 2) The paper provides some interesting results:
i) (Section 3.1) Knowing the level of misspecification ε
is a key ingredient... | 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules. Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: |
NIPS_2017_65 | NIPS_2017 | 1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification
2) the optimization procedure used to solve the multi-objective optimization problem is not discussed in adequate detail
Detailed comments below:
Methods and Evaluation: The proposed objective is interesting and uti... | 1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification |