Deep Learning Computer Vision Classification Papers
Table of contents
- Deep Learning Computer Vision Classification Papers
- Classification
- 1. ImageNet Classification with Deep Convolutional Neural Networks
- 2. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
- 3. Visualizing and Understanding Convolutional Networks
- 4. Network In Network
- 5. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
- 6. Very Deep Convolutional Networks for Large-Scale Image Recognition
- 7. Going Deeper with Convolutions
- 8. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
- 9. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- 10. Spatial Transformer Networks
- 11. Rethinking the Inception Architecture for Computer Vision
- 12. Deep Residual Learning for Image Recognition
- 13. Learning Deep Features for Discriminative Localization
- 14. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- 15. Identity Mappings in Deep Residual Networks
- 16. Wide Residual Networks
- 17. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- 18. Densely Connected Convolutional Networks
- 19. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
- 20. Deep Pyramidal Residual Networks
- 21. Xception: Deep Learning with Depthwise Separable Convolutions
- 22. Aggregated Residual Transformations for Deep Neural Networks
- 23. PolyNet: A Pursuit of Structural Diversity in Very Deep Networks
- 24. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 25. Dynamic Routing Between Capsules
- 26. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
- 27. Squeeze-and-Excitation Networks
- 28. Non-local Neural Networks
- 29. MobileNetV2: Inverted Residuals and Linear Bottlenecks
- 30. Exploring the Limits of Weakly Supervised Pretraining
- 31. How Does Batch Normalization Help Optimization?
- 32. Understanding Batch Normalization
- 33. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
- 34. Bag of Tricks for Image Classification with Convolutional Neural Networks
- 35. Searching for MobileNetV3
- 36. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
- 37. When Does Label Smoothing Help?
- 38. Stand-Alone Self-Attention in Vision Models
- 39. Fixing the train-test resolution discrepancy
- 40. Self-training with Noisy Student improves ImageNet classification
- 41. Adversarial Examples Improve Image Recognition
- 42. Big Transfer (BiT): General Visual Representation Learning
- 43. Fixing the train-test resolution discrepancy: FixEfficientNet
- 44. Sharpness-Aware Minimization for Efficiently Improving Generalization
- 45. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
- 46. Training data-efficient image transformers & distillation through attention
- 47. High-Performance Large-Scale Image Recognition Without Normalization
Classification
1. ImageNet Classification with Deep Convolutional Neural Networks
2012, NIPS, Spotlight
2. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
2014, ECCV
3. Visualizing and Understanding Convolutional Networks
2014, ECCV
4. Network In Network
2014, ICLR
5. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
2014, ICLR
6. Very Deep Convolutional Networks for Large-Scale Image Recognition
2015, ICLR, Oral
7. Going Deeper with Convolutions
2015, CVPR, Oral
8. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015, ICCV
9. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
2015, ICML
10. Spatial Transformer Networks
2015, NIPS
11. Rethinking the Inception Architecture for Computer Vision
2016, CVPR
12. Deep Residual Learning for Image Recognition
2016, CVPR, Oral, Best Paper Award
13. Learning Deep Features for Discriminative Localization
2016, CVPR
14. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
2016, arXiv
15. Identity Mappings in Deep Residual Networks
2016, ECCV, Spotlight
16. Wide Residual Networks
2016, BMVC
17. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
2017, AAAI
18. Densely Connected Convolutional Networks
2017, CVPR, Oral, Best Paper Award
19. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
2017, ICCV
20. Deep Pyramidal Residual Networks
2017, CVPR
21. Xception: Deep Learning with Depthwise Separable Convolutions
2017, CVPR
22. Aggregated Residual Transformations for Deep Neural Networks
2017, CVPR
23. PolyNet: A Pursuit of Structural Diversity in Very Deep Networks
2017, CVPR
24. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
2017, arXiv
25. Dynamic Routing Between Capsules
2017, NIPS
26. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
2018, CVPR
27. Squeeze-and-Excitation Networks
2018, CVPR, Oral
28. Non-local Neural Networks
2018, CVPR
29. MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018, CVPR
30. Exploring the Limits of Weakly Supervised Pretraining
2018, ECCV
31. How Does Batch Normalization Help Optimization?
2018, NeurIPS, Oral
32. Understanding Batch Normalization
2018, NeurIPS
33. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
2018, ECCV
34. Bag of Tricks for Image Classification with Convolutional Neural Networks
2019, CVPR
35. Searching for MobileNetV3
2019, ICCV, Oral
36. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
2019, ICML, Oral
37. When Does Label Smoothing Help?
2019, NeurIPS, Spotlight
38. Stand-Alone Self-Attention in Vision Models
2019, NeurIPS
39. Fixing the train-test resolution discrepancy
2019, NeurIPS
40. Self-training with Noisy Student improves ImageNet classification
2020, CVPR
41. Adversarial Examples Improve Image Recognition
2020, CVPR
42. Big Transfer (BiT): General Visual Representation Learning
2020, ECCV, Spotlight
43. Fixing the train-test resolution discrepancy: FixEfficientNet
2020, arXiv
44. Sharpness-Aware Minimization for Efficiently Improving Generalization
2021, ICLR, Spotlight
45. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2021, ICLR, Oral
46. Training data-efficient image transformers & distillation through attention
2021, ICML
47. High-Performance Large-Scale Image Recognition Without Normalization
2021, ICML