Additional Experiments During Rebuttal Can Worsen Paper Quality
The author discusses how the pressure to provide additional experiments during the rebuttal process for major AI/ML conferences can often be detrimental to the paper. Reviewers are now expected to find issues, leading to requests for numerous supplementary experiments, even on papers that are otherwise accepted.
💡
Why it matters
This issue highlights the tension between rigorous reviewing and maintaining paper quality, which is crucial for the progress of AI/ML research.
Key Points
1. Reviewers feel obligated to find issues, even on otherwise accepted papers
2. Requests for 5-10 additional experiments during rebuttal are common
3. These experiments often cover scenarios that don't meaningfully improve the paper
4. Unsuccessful rebuttal experiments can give reviewers a "gotcha" moment
Details
The author notes that in the past, it was common to receive