
Deep Learning Computer Vision Classification Papers

Table of contents

  1. Deep Learning Computer Vision Classification Papers
  2. Classification
    1. 1. ImageNet Classification with Deep Convolutional Neural Networks
    2. 2. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
    3. 3. Visualizing and Understanding Convolutional Networks
    4. 4. Network In Network
    5. 5. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
    6. 6. Very Deep Convolutional Networks for Large-Scale Image Recognition
    7. 7. Going Deeper with Convolutions
    8. 8. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
    9. 9. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
    10. 10. Spatial Transformer Networks
    11. 11. Rethinking the Inception Architecture for Computer Vision
    12. 12. Deep Residual Learning for Image Recognition
    13. 13. Learning Deep Features for Discriminative Localization
    14. 14. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
    15. 15. Identity Mappings in Deep Residual Networks
    16. 16. Wide Residual Networks
    17. 17. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
    18. 18. Densely Connected Convolutional Networks
    19. 19. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
    20. 20. Deep Pyramidal Residual Networks
    21. 21. Xception: Deep Learning with Depthwise Separable Convolutions
    22. 22. Aggregated Residual Transformations for Deep Neural Networks
    23. 23. PolyNet: A Pursuit of Structural Diversity in Very Deep Networks
    24. 24. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
    25. 25. Dynamic Routing Between Capsules
    26. 26. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
    27. 27. Squeeze-and-Excitation Networks
    28. 28. Non-local Neural Networks
    29. 29. MobileNetV2: Inverted Residuals and Linear Bottlenecks
    30. 30. Exploring the Limits of Weakly Supervised Pretraining
    31. 31. How Does Batch Normalization Help Optimization?
    32. 32. Understanding Batch Normalization
    33. 33. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
    34. 34. Bag of Tricks for Image Classification with Convolutional Neural Networks
    35. 35. Searching for MobileNetV3
    36. 36. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
    37. 37. When Does Label Smoothing Help?
    38. 38. Stand-Alone Self-Attention in Vision Models
    39. 39. Fixing the train-test resolution discrepancy
    40. 40. Self-training with Noisy Student improves ImageNet classification
    41. 41. Adversarial Examples Improve Image Recognition
    42. 42. Big Transfer (BiT): General Visual Representation Learning
    43. 43. Fixing the train-test resolution discrepancy: FixEfficientNet
    44. 44. Sharpness-Aware Minimization for Efficiently Improving Generalization
    45. 45. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
    46. 46. Training data-efficient image transformers & distillation through attention
    47. 47. High-Performance Large-Scale Image Recognition Without Normalization

Classification

1. ImageNet Classification with Deep Convolutional Neural Networks

2012, NIPS, Spotlight

2. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition

2014, ECCV

3. Visualizing and Understanding Convolutional Networks

2014, ECCV

4. Network In Network

2014, ICLR

5. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks

2014, ICLR

6. Very Deep Convolutional Networks for Large-Scale Image Recognition

2015, ICLR, Oral

7. Going Deeper with Convolutions

2015, CVPR, Oral

8. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

2015, ICCV

9. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

2015, ICML

10. Spatial Transformer Networks

2015, NIPS

11. Rethinking the Inception Architecture for Computer Vision

2016, CVPR

12. Deep Residual Learning for Image Recognition

2016, CVPR, Oral, Best Paper Award

13. Learning Deep Features for Discriminative Localization

2016, CVPR

14. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

2016, arXiv

15. Identity Mappings in Deep Residual Networks

2016, ECCV, Spotlight

16. Wide Residual Networks

2016, BMVC

17. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

2017, AAAI

18. Densely Connected Convolutional Networks

2017, CVPR, Oral, Best Paper Award

19. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

2017, ICCV

20. Deep Pyramidal Residual Networks

2017, CVPR

21. Xception: Deep Learning with Depthwise Separable Convolutions

2017, CVPR

22. Aggregated Residual Transformations for Deep Neural Networks

2017, CVPR

23. PolyNet: A Pursuit of Structural Diversity in Very Deep Networks

2017, CVPR

24. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

2017, CoRR

25. Dynamic Routing Between Capsules

2017, NIPS

26. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

2018, CVPR

27. Squeeze-and-Excitation Networks

2018, CVPR, Oral

28. Non-local Neural Networks

2018, CVPR

29. MobileNetV2: Inverted Residuals and Linear Bottlenecks

2018, CVPR

30. Exploring the Limits of Weakly Supervised Pretraining

2018, ECCV

31. How Does Batch Normalization Help Optimization?

2018, NeurIPS, Oral

32. Understanding Batch Normalization

2018, NeurIPS

33. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

2018, ECCV

34. Bag of Tricks for Image Classification with Convolutional Neural Networks

2019, CVPR

35. Searching for MobileNetV3

2019, ICCV, Oral

36. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

2019, ICML, Oral

37. When Does Label Smoothing Help?

2019, NeurIPS, Spotlight

38. Stand-Alone Self-Attention in Vision Models

2019, NeurIPS

39. Fixing the train-test resolution discrepancy

2019, NeurIPS

40. Self-training with Noisy Student improves ImageNet classification

2020, CVPR

41. Adversarial Examples Improve Image Recognition

2020, CVPR

42. Big Transfer (BiT): General Visual Representation Learning

2020, ECCV, Spotlight

43. Fixing the train-test resolution discrepancy: FixEfficientNet

2020, arXiv

44. Sharpness-Aware Minimization for Efficiently Improving Generalization

2021, ICLR, Spotlight

45. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

2021, ICLR, Oral

46. Training data-efficient image transformers & distillation through attention

2021, ICML
47. High-Performance Large-Scale Image Recognition Without Normalization

2021, ICML
