paper_id (string, 10-19 chars) | venue (14 classes) | focused_review (string, 7-8.45k chars) | point (string, 60-643 chars)
|---|---|---|---|
NIPS_2020_1592 | NIPS_2020 | Major concerns: 1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, MLE pretraining (according to appendix). I find it disappointing that so many tricks are ne... | 3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the gener... |
NIPS_2017_631 | NIPS_2017 | 1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-art in VQA (on the VQA dataset which has been out for a while and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper on top of which CBN is applied is ... | 2.L170: it would be good to know how much of performance difference this (using different image sizes and different variations of ResNets) can lead to? |
NIPS_2022_528 | NIPS_2022 | weakness 1 The Algorithm should be presented and described in detail. 2 The background of Sharpness-Aware Minimization (SAM) shoud be described in detail. 1 The Algorithm should be presented and described in detail, which is helpful for understanding the proposed method. 2 The background of Sharpness-Aware Minimization... | 1 The Algorithm should be presented and described in detail, which is helpful for understanding the proposed method.
NIPS_2016_321 | NIPS_2016 | #ERROR! | - Since the paper mentions the possibility to use Chebyshev polynomials to achieve a speed-up, it would have been interesting to see a runtime comparison at test time. |
NIPS_2020_11 | NIPS_2020 | 1. The proposed method seems only works for digit or text images, such as MNIST and SVHN. Can it be used on natural images, such as CIFAR10, which has wider applications in the real world then digit/text. 2. Are the results obtained on Synbols dataset generalizable to large-scale datasets? For example, if you find algo... | 1. The proposed method seems only works for digit or text images, such as MNIST and SVHN. Can it be used on natural images, such as CIFAR10, which has wider applications in the real world then digit/text. |
tbRPPWDy76 | EMNLP_2023 | There might be not enough theoretical discussion and in-depth analyses which help readers understand the prompt design. More motivations and insights are needed. The engineering part might still need refinement. * Considering that this work is all about evaluation, there might be a lack of experiments currently. It mig... | * Although the style design is clean, the prompts are not well-organized (Table 6, 7). All sentences squeeze together.
ICLR_2023_2622 | ICLR_2023 | Weakness: 1. The figures are not clear. For example, in figure 2, it’s confused for the relation of 3 sub-figures. Some modules are not labeled in figure, such as CMAF, L_BT, VoLTA. 2. The experiments results are not significant. 3. Three steps for training are shown in VoLTA, a) switching off CMAF, b) switching on CMA... | 1. The figures are not clear. For example, in figure 2, it’s confused for the relation of 3 sub-figures. Some modules are not labeled in figure, such as CMAF, L_BT, VoLTA. |
ARR_2022_314_review | ARR_2022 | 1. Although the work is important and detailed, from the novelty perspective, it is an extension of norm-based and rollout aggregation methods to another set of residual connections and norm layer in the encoder block. Not a strong weakness, as the work makes a detailed qualitative and quantitative analysis, roles of e... | 3. While FFNs are omitted because a linear decomposition cannot be obtained (as mentioned in the paper), is there existing work that offers a way around (an approximation, for instance) to compute the contribution? If not, maybe a line or two should be added that there exists no solution for this, and it is an open (ha... |
KE5QunlXcr | EMNLP_2023 | - The PLMs used (BERT) are by current standards, quite old, and quite small. As work in scaling PLMs up to sizes orders of magnitude greater, performance on syntactic tasks has shown to improve naturally (along with many other useful emergent forms of knowledge). Some comparison to larger models / application of this m... | - Some questionable design choices. Perplexity is used as a measure of the model retaining semantic information after fine-tuning, and while that does relate to the original task, there are also aspects of domain drift which are possible and separate from catastrophic forgetting. How are such factors controlled? |
ICLR_2021_2208 | ICLR_2021 | + Nice idea Consistent improvements over cross entropy for hierarchical class structures Improvements w.r.t other competitors (though not consistent) Good ablation study The improvements are small The novelty is not very significant More comments: Figure 1: - It is not clear what distortion is at this stage - It is not... | 1) p_k symbols are used without definition (tough I think I these are the network predictions p(\hat{y}=k I) 2) the relation of the formula presented to the known EMD is not clear. The latter is a problem solved as linear programming or similar, and not a closed form formula
NIPS_2021_953 | NIPS_2021 | Although the paper gives detailed theoretical proof, the experiments are somewhat weak. I still have some concerns: 1)The most related works SwaV and Barlow Twins outperform the proposed method in some experimental results, as shown in Table 1,2,5. What are the main advantages of this method compared with SwaV and Barl... | 3)Since the cluster structure is defined by the identity. How does the number of images impact the model performance? Do more training images make the performance worse or better ? BYOL in the abstract should be explained for its first appearance. |
NIPS_2020_1821 | NIPS_2020 | - To my understanding, the experimental section only compares results generated for this paper. This is good because it keeps apples-to-apples comparisons, however it is suspicious since the task is not novel. Some comparison with results from other works (or a justification of why this is not possible/suitable) would ... | - Albeit the observed effects are strong, it remains unclear “why does the method work?” in particular regarding the L_pixel component. Providing stronger arguments or intuitions of why these particular losses are “bound to help” would be welcome. |
NJUzUq2OIi | ICLR_2025 | I found the proposed idea, experiments, and analyses conducted by the authors to be valuable, especially in terms of their potential impact on low-resource scenarios. However, for the paper to fully meet the ICLR standards, there are still areas that need additional work and detail. Below, I outline several key points ... | - The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memor... |
eCXfUq3RDf | EMNLP_2023 | 1. Very limited reproducibility - Unless the authors release their training code, dialogue dataset, as well as model checkpoints, I find it very challenging to reproduce any of the claims in this paper. I encourage the authors to attach their code and datasets via anonymous repositories in the paper submission so that ... | 3. Dependent on the training data - I'm unsure if 44k dialogues is sufficient to capture a wide range of user traits and personalities across different content topics. LLMs are typically trained on trillions of tokens, I do not see how 44k dialogues can capture the combinations of personalities and topics. In theory, t... |
Uj2Wjv0pMY | ICLR_2024 | • Compared to Assembly 101 (error detection), the paper seems like an inferior / less complicated dataset. Claims like higher ratio of error to normal videos needs to be validated. • Compared to datasets, the dataset prides itself on adding different modalities especially depth channel (RGB-D). The paper fails to valid... | • I’m not convinced that the binary classification is a justifiable baseline metrics. While I agree with the TAL task is really important here and a good problem to solve, I’m not sure how coarse grained binary classification can assess models understanding of fine-grained error like technique error.
NIPS_2019_653 | NIPS_2019 | of the method. Clarity: The paper has been written in a manner that is straightforward to read and follow. Significance: There are two factors which dent the significance of this work. 1. The work uses only binary features. Real world data is usually a mix of binary, real and categorical features. It is not clear if th... | 1. The work uses only binary features. Real world data is usually a mix of binary, real and categorical features. It is not clear if the method is applicable to real and categorical features too. |
NIPS_2018_857 | NIPS_2018 | Weakness: - Long range contexts may be helpful for object detection as shown in [a, b]. For example, the sofa in Figure 1 may help detect the monitor. But in the SNIPER, images are cropped into chips, which makes the detector cannot benefit from long range contexts. Is there any idea to address this? - The writing shou... | - The writing should be improved. Some points in the paper is unclear to me. |
9Ax0pyaLgh | EMNLP_2023 | 1. Authors are suggested to use other metrics to evaluate the Results (e.g. BERTScore). 2. Often it is not sufficient to show automatic evaluation results. The author does not show any human evaluation results and does not even perform a case study and proper error analysis. This does not reflect well on the qualitativ... | 1. Authors are suggested to use other metrics to evaluate the Results (e.g. BERTScore).
6iM2asNCjK | ICLR_2024 | 1. My primary concern is with the limited scope of the paper. The paper primarily considers only evaluating sentence embeddings from LLMs, which while important, is a small part of the overall evaluation landscape of LLMs. Consequently, the title "Robustness-Accuracy characterization of Large Language Models using synt... | 4. Additionally, there has been a large amount of work on LLM evaluation [2]. While some of the metrics there do not satisfy the proposed desiderata, it would still be good to see how SynTextBench metric compares to the other metrics proposed in the literature. Concretely, from the paper, it is hard to understand under... |
ICLR_2022_2425 | ICLR_2022 | 1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. 2)Clarity: Since the coreset construction algorithm is built up on previous works, a reader without the background in literature... | 1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. |
NIPS_2019_203 | NIPS_2019 | * Technical innovation is fairly limited. The bLVNet is a straightforward extension of bLNet (an image model) to video. The TAM involves the use of 1D temporal convolution and depthwise convolution. Both mechanisms that have been widely leveraged before. On the other hand, the paper does not make bold novelty claims an... | - After having read the other reviews and the author responses, I decide to maintain my initial rating (6). The contribution of this work is mostly empirical. The stronger results compared to more complex models and the promise to release the code imply that this work deserves to be known, even if fairly incremental. |
NIPS_2018_245 | NIPS_2018 | Weakness] 1: Poor writing and annotations are a little hard to follow. 2: Although applying GCN on FVQA is interesting, the technical novelty of this paper is limited. 3: The motivation is to solve when the question doesn't focus on the most obvious visual concept when there are synonyms and homographs. However, from t... | 1: Poor writing and annotations are a little hard to follow. |
WC9yjSosSA | EMNLP_2023 | - The reported experimental results cannot strongly demonstrate the effectiveness of the proposed method. - In Table 1, for the proposed method, only 6 of the total 14 evaluation metrics achieve SOTA performances. - In Table 2, for the proposed method, only 8 of the total 14 evaluation metrics achieve SOTA performances... | - In Table 2, for the proposed method, only 8 of the total 14 evaluation metrics achieve SOTA performances. In addition, under the setting of "Twitter-2017 $\rightarrow$ Twitter-2015", why the proposed method achieves best overall F1, while not achieves best F1 in all single types?
ICLR_2022_2531 | ICLR_2022 | I have several concerns about the clinical utility of this task as well as the evaluation approach. - First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling... | - Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier than including all reports.
NIPS_2016_238 | NIPS_2016 | - My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned th... | - The proposed solution is an incremental step considering the relaxation proposed by Guzman. et. al. Minor suggestions:
pZk9cUu8p6 | ICLR_2025 | 1.Limited Discussion of Scalability Bounds:The paper doesn't thoroughly explore the upper limits of FedDES's scalability;No clear discussion of memory requirements or computational complexity. 2.Validation Scope:Evaluation focuses mainly on Vision Transformer with CIFAR-10;Could benefit from testing with more diverse m... | 1.Limited Discussion of Scalability Bounds:The paper doesn't thoroughly explore the upper limits of FedDES's scalability;No clear discussion of memory requirements or computational complexity.
fsDZwS49uY | ICLR_2025 | - The authors may want to generate instances with more constraints and variables, as few instances in the paper have more than 7 variables. Thus, this raises my concern about LLMs' ability to model problems with large instance sizes. - Given that a single optimization problem can have multiple valid formulations, it wo... | - The authors may want to generate instances with more constraints and variables, as few instances in the paper have more than 7 variables. Thus, this raises my concern about LLMs' ability to model problems with large instance sizes.
RsnWEcuymH | ICLR_2024 | - My main concern is that the performance improvement, though generally better, is not particularly too significant, not to mention that those proxy-based method achieves also pretty good IM results while using only a negligible amount of time compared to BOIM (or other simulation-based method in general) - Other choic... | - Results presentation can be improved. For example, in Figure 2 and 3, the y-axis is labeled as “performance” which is ambiguous, and the runtime is not represented in those figure. A scatter plot with x/y axes being runtime/performance could help the reader better understand and interpret the results. Best results in...
ICLR_2023_3063 | ICLR_2023 | The novelty and technical contribution are limited. It is unclear for the deformable graph attention module. It is unclear why the proposed method has lower computational complexity. Detailed comments: What is the motivation to choose personalized pagerank score, bfs, and feature similarity as sorting criteria? For Nod... | 2)“NodeSort differentially sorts nodes depending on the base node.” Does this mean that the base node affects the ordering, affects the key nodes for attention, and further affects the model performance?
ICLR_2022_176 | ICLR_2022 | There are two main (and easily fixable) weaknesses. a) I think the role of the normalizing flow is underexplained. It is stated multiple times that the normalizing flow provides the evidence updates and its purpose is to estimate epistemic uncertainty. The remaining questions for me are 1. From which space to which doe... | 2. Why is the arrow in Figure 2 from a Gaussian space into the latent space, rather than from the latent space to n^(i)? I thought the main purpose was to influence n^(i)?
ICLR_2023_1946 | ICLR_2023 | Weakness: 1. This work raises an essential issue in partial domain adaptation and evaluates many PDA algorithms and model selection strategies. However, it does not present any solution to this problem. 2. The findings of the experiments are a bit trivial. No target label for model selection strategies will hurt the pe... | 4. Many abbreviations lack definition and cause confusion. ‘AR’ in Table 5 stands for domain adaptation tasks and algorithms. |
NIPS_2020_1477 | NIPS_2020 | 1. I think it is a bit overstated (Line 10 and Line 673) to use the term \epsilon-approximate stationary point of J -- there is still function approximation error as in Theorem 4.5. I think the existence of this function approximation error should be explicitly acknowledged whenever the conclusion about sample complexi... | 4. Although using advantage instead of q value is more common in practice, I'm wondering if there is other technical consideration for conducting the analysis with advantage instead of q value. |
jPrl18r4RA | EMNLP_2023 | 1. The setting of Unsupervised Online Adaptation is a little bit strange. As described in Sec 3.1, the model requires a training set, including documents, quires and labels. It seems that the adaptation process is NOT "Unsupervised" because the training set also requires annatations. 2. The problem that this paper focu... | 1. The setting of Unsupervised Online Adaptation is a little bit strange. As described in Sec 3.1, the model requires a training set, including documents, quires and labels. It seems that the adaptation process is NOT "Unsupervised" because the training set also requires annatations.
ICLR_2022_1923 | ICLR_2022 | Weakness: 1. The novelty of this paper is limited. First, the analysis of the vertex-level imbalance problem is not new, which is a reformulation of the observations in previous works [Rendle and Freudenthaler, 2014; Ding et al., 2019]. Second, the designed negative sampler uses reject sampling to increase the chance o... | - For effectiveness, the performance comparison in Table 1 is unfair. VINS sets different sample weights W u i in the training process, while most compared baselines like DNS, AOBPR, SA, PRIS set all sample weights as 1. |
NIPS_2019_1366 | NIPS_2019 | Weakness: - Although the method discussed by the paper can be applied in general MDP, the paper is limited in navigation problems. Combining RL and planning has already been discussed in PRM-RL~[1]. It would be interesting whether we can apply such algorithms in more general tasks. - The paper has shown that pure RL al... | - The time complexity will be too high if the reply buffer is too large. [1] PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning |
NIPS_2022_1807 | NIPS_2022 | Weakness: 1.The authors should provide more descriptions of the wavelet transforms in this paper. It is hard for me to understand the major idea in this paper before learning some necessary knowledge about wavelet whitening, wavelet coefficient, and so on. 2.It is better for authors to display the performance of accele... | 2.It is better for authors to display the performance of accelerating SGMs by involving some other baselines with a different perspective, such as “optimizing the discretization schedule or by modifying the original SGM formulation” [16, 15, 23, 46, 36, 31, 37, 20, 10, 25, 35, 45]
ICLR_2021_243 | ICLR_2021 | Weakness: 1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further. 2. The model involves many hyperparameters. Thus, the selection of the hyperparameters in the paper needs further explanation. 3. A... | 3. A brief conclusion of the article and a summary of this paper's contributions need to be provided. |
ICLR_2022_1838 | ICLR_2022 | 1. When introducing the theoretical results, we should make a detailed comparison with the existing cross-entropy loss results. The current writing method cannot reflect the advantages of square loss. 2. The synthetic experiment in a non-separable case seems to be a problem. Considering the nonlinear expression ability... | 2. The synthetic experiment in a non-separable case seems to be a problem. Considering the nonlinear expression ability of neural networks, how to explain that the data distribution illustrated in Figure 1 is inseparable from the network model? |
ICLR_2021_2717 | ICLR_2021 | 1: The writing could be further improved, e.g., “via being matched to” should be “via matching to” in Abstract. 2: The “Def-adv” needs to be clarified. 3: The accuracies of the target model using different defenses against the FGSM attack are not shown in Figure 1. Hence, it is unclear the difference between the known ... | 4: Even though authors compare their framework with an advanced defense APE-GAN, they can further compare the proposed framework with a method that is designed to defend against multiple attacks (maybe the research on defense against multiple attacks is relatively rare). The results would be more meaningful if the auth...
82VzAtBZGk | ICLR_2025 | The problem formulation is incomplete. The paper does not define the safety properties expected from the RL agent. - Lack of theoretical results. This paper provides only empirical results to support its claims. - The results are presented in a convoluted way. In particular, the results disregard the safety violations ... | - The results are presented in a convoluted way. In particular, the results disregard the safety violations of the agent in the first 1000 episodes. The reason for presenting the results in this way is unclear.
NIPS_2017_110 | NIPS_2017 | of this work include that it is a not-too-distant variation of prior work (see Schiratti et al, NIPS 2015), the search for hyperparameters for the prior distributions and sampling method do not seem to be performed on a separate test set, the simultion demonstrated that the parameters that are perhaps most critical to ... | - l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function. |
ICLR_2023_1980 | ICLR_2023 | Motivated by the fact that local learning can limit memory when training the network and the adaptive nature of each individual block, the paper extends local learning to the ResNet-50 to handle large datasets. However, it seems that the results of the paper do not demonstrate the benefits of doing so. The detailed wea... | 5)There are some writing errors in the paper, such as "informative informative" on page 5 and "performance" on page 1, which lacks a title. |
NIPS_2019_1377 | NIPS_2019 | - The proof works only under the assumption that the corresponding RNN is contractive, i.e. has no diverging directions in its eigenspace. As the authors point out (line #127), for expansive RNN there will usually be no corresponding URNN. While this is true, I think it still imposes a strong limitation a priori on the... | - Statement on line 134: Only true for standard sigmoid [1+exp(-x)]^-1, depends on max. slope - Theorem 4.1: Would be useful to elaborate a bit more in the main text why this holds (intuitively, since the RNN unlike the URNN will converge to the nearest FP). |
NIPS_2021_2304 | NIPS_2021 | There are four limitations: 1. In this experiment, single dataset training and single dataset testing cannot verify the generalizable ability of models, it should conduct experiments on large-scale datasets. 2. The efficiency of such pairwise matching is very low, making it difficult to be used in practical application... | 2. The efficiency of such pairwise matching is very low, making it difficult to be used in practical application systems. |
rYhDcQudVI | ICLR_2024 | The methodology appears incremental, building marginally upon JEM's foundation of interpreting classifiers as time-dependent EBMs. The newly introduced self-calibration loss primarily enhances this by applying a standard DSM technique to train the internal score function, thus lacking substantial novelty. The authors h... | * Minor weaknesses The allocation of Figure 1 is too naive. Overall, you could have edited the space of main paper more wisely.
xNn2nq5kiy | ICLR_2024 | * The plan-based method requires manually designing a plan based on the ground truth in advance, which is unrealistic in real-world scenarios. The learned plan methods are not comparable to the methods with pre-defined plans based on Table 2. It indicates that the proposed method may be difficult to generalize to a new... | * The plan-based method requires manually designing a plan based on the ground truth in advance, which is unrealistic in real-world scenarios. The learned plan methods are not comparable to the methods with pre-defined plans based on Table 2. It indicates that the proposed method may be difficult to generalize to a new... |
NIPS_2016_238 | NIPS_2016 | - My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned th... | - The first sentence of the abstract needs to be re-written.
NIPS_2019_1312 | NIPS_2019 | weakness of this paper is its isolation to the plain GP regression setting. Although this is expected given the methodology used to enable tractability, I would have appreciated at least some discussion into whether any of the material presented here can be extended to the classification setting. Of course, one could a... | - It appears that in nearly all experiments, the results are reported for a single held-out test set. Standard practice in most papers on GPs involves using a number of train/test splits or folds which give a more accurate illustration of the method’s performance. While I imagine that the size of the datasets conside...
NIPS_2020_335 | NIPS_2020 | - The paper reads too much like LTF-V1++, and at some points assumes too much familiarity of the reader to LTF-V1. Since this method is not well known, I wish the paper was a bit more pedagogical/self-contained. - The method seems more involved that it needs to be. One would suspect that there is an underlying, simpler... | - The method seems more involved that it needs to be. One would suspect that there is an underlying, simpler, principle that is propulsing the quality gains. |
ICLR_2022_3330 | ICLR_2022 | 1) One very serious problem is that this paper is full of grammatical errors. It is too many and many of them can be detected and corrected by grammatical checker. I only list some in here to justify my observations, instead of all because I don’t want to proofread the authors’ paper. Page 1, learned,, Page 2 and Kurak... | 6) Adding a method on the top of other methods to improve transferability is good but cannot be considered a significant contribution. |
NIPS_2018_15 | NIPS_2018 | - The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: ... | - The hGRU architecture seems pretty ad-hoc and not very well motivated. |
ICLR_2023_1214 | ICLR_2023 | As the authors note, it seems the method still requires a few tweaks to work well empirically. For example, we need to omit the log of the true rewards and scale the KL term in the policy objective to 0.1. While the authors provide a brief intuition on why those modifications are needed, I think the authors should prov... | 2) On algorithm 1 Line 8, shouldn't we use s_n instead of s_t? Questions I am curious of the asymptotic performance of the proposed method. If possible, can the authors provide average return results with more env steps? [1] https://github.com/watchernyu/REDQ |
ICLR_2023_3705 | ICLR_2023 | 1)The main assumption is borrowed from other works but is actually rarely used in the optimization field. Moreover, the benefits of this assumption is not well investigated. For example, a) why it is more reasonable than the previous one? B) why it can add gradient norm L_1 \nabla f(w_1) in Eqn (3) or why we do not add... | 3)It is not clear what are the challenges when the authors analyze Adam under the (L0,L1)-smoothness condition. It seems one can directly apply standard analysis on the (L0,L1)-smoothness condition. So it is better to explain the challenges, especially the difference between this one and Zhang et al. |
NIPS_2022_285 | NIPS_2022 | Terminology: Introduction l. 24-26 "pixels near the peripheral of the object of interest can generally be challenging, but not relevant to topology." I think this statement is problematic. When considering the inverse e.g. in the case of a surface or vessel, that a foreground pixel changes to background. Such a scenari... | 34 "to force the neural network to memorize them" --> I would tone down this statement, in my understanding, the neural network does not memorize an exact "critical point" as such in TopoNet [24]. Minor: I find the method section to be a bit wordy, it could be compressed on the essential definitions. There exist severa... |
ICLR_2022_2791 | ICLR_2022 | The technical contribution of this paper is limited, which is far from a decent ICLR paper. In particular, All kinds of evaluations, i.e., single-dataset setting (most of existing person re-ID methods), cross-dataset setting [1, 2,3] and live re-id setting [4], have been discussed in previous works. This paper simply m... | 1) most of person re-ID methods build on the basis of pedestrian detector (two-step method), and there are also end-to-end method that combines detection and re-ID [5]; |
ICLR_2021_1740 | ICLR_2021 | are in its clarity and the experimental part. Strong points Novelty: The paper provides a novel approach for estimating the likelihood of p(class image), by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn’t verify the details of the proofs, the approach se... | • Section 3.2 - I suggest to add a first sentence to introduce what this section is about.
NIPS_2021_1604 | NIPS_2021 | ). Weaknesses - Some parts of the paper are difficult to follow, see also Typos etc below. - Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6]. After the Authors' Response My weakness points after been addressed in the authors' response. Consequently I raised m... | - Line 44: What is meant by the initial rationale selector is perfect? It seems if it were perfect no additional work needs to be done.
ICLR_2021_2047 | ICLR_2021 | As noted below, I have concerns around the experimental results. More specifically, I feel that there is a relative lack of discussion around the (somewhat surprising) outperformance of baselines that VPBNN is aiming to approximate, and I feel that the experiments are missing what I see as key VPBNN results that otherw... | 1: "The uncertainty is defined based on the posterior distribution." For more clarity it could be helpful to update this to say that the epistemic model uncertainty is represented in the prior distribution, and upon observing data, those beliefs can be updated in the form of a posterior distribution, which yields model... |
ICLR_2022_1935 | ICLR_2022 | Weakness: A semi-supervised feature learning baseline is missing. This is my main concern about the paper. The key argument in the paper is that feature learning and classifier learning should 1) be decoupled, 2) use random sampling and class-balanced sampling respectively, 3) train on all labels and only ground-truth ... | 1) train a feature extractor ( f in the paper) and a classifier ( g ′ in the paper) using random sampling and any semi-supervised learning method on all data, then
ARR_2022_40_review | ARR_2022 | - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dial... | - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system?