
2024 | Original Paper | Book Chapter

Few-Shot Learning Remote Scene Classification Based on DC-2DEC

Authors: Ziyuan Wang, Zhiming Ding, Yingying Wang

Published in: Spatial Data and Intelligence

Publisher: Springer Nature Singapore


Abstract

Few-shot learning image classification (FSLIC) has attracted growing attention in recent years, because collecting and annotating large numbers of samples is expensive in many specialised domains. Few-shot remote sensing scene classification (FRSSC) is of great utility in scenarios where samples are scarce and labelling is extremely costly; its core problem is how to recognise new classes from only a few samples. However, existing work tends to rely on increasingly complicated feature extraction, and the resulting gains are unsatisfactory. This paper therefore aims to improve FSLIC not only through feature extraction but also by exploring alternative avenues. Training on scarce data in a few-shot learning (FSL) task often yields a biased feature distribution. We propose to address this issue by calibrating the support-set feature distribution with the abundant base-class data; our distribution calibration (DC) method operates on top of the feature extractor and requires no additional parameters. The feature extractor is further optimised with a self-supervised pretext task that exploits the spatial context structure of the image, namely rotation prediction. We refer to the proposed method as DC-2DEC and apply it to few-shot classification of remote sensing (RS) scene images. Experiments on traditional few-shot datasets and RS image datasets validate the algorithm, and the corresponding results demonstrate the competitiveness of DC-2DEC, highlighting its efficacy in few-shot learning classification for RS images.
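The distribution-calibration idea in the abstract can be illustrated with a minimal sketch: transform a support feature toward Gaussianity, borrow the statistics of the nearest base classes to estimate a calibrated mean and variance, and sample virtual features from the calibrated distribution. This is only an assumed reconstruction, not the paper's implementation: the diagonal-covariance simplification, the hyperparameters `k`, `lam`, and `alpha`, and all function names are illustrative.

```python
import numpy as np

def tukey_transform(x, lam=0.5):
    # Tukey's power transform makes skewed features more Gaussian-like
    return np.power(x, lam) if lam != 0 else np.log(x)

def calibrate(support_feat, base_means, base_vars, k=2, alpha=0.2):
    # pick the k base classes whose mean lies closest to the support feature
    dists = np.linalg.norm(base_means - support_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    # blend the support feature with the nearest base-class statistics
    mean = (base_means[nearest].sum(axis=0) + support_feat) / (k + 1)
    var = base_vars[nearest].mean(axis=0) + alpha  # broadened variance
    return mean, var

def sample_virtual_features(mean, var, n=100, seed=0):
    # draw extra "virtual" features from the calibrated diagonal Gaussian
    rng = np.random.default_rng(seed)
    return rng.normal(mean, np.sqrt(var), size=(n, mean.shape[0]))

# toy usage: 10 base classes with 64-dimensional features
rng = np.random.default_rng(1)
base_means = rng.random((10, 64))
base_vars = rng.random((10, 64)) + 0.1
support = tukey_transform(rng.random(64) + 1e-6)
mu, v = calibrate(support, base_means, base_vars)
virtual = sample_virtual_features(mu, v)  # (100, 64) sampled features
```

The sampled virtual features would then augment the scarce support set before training a simple classifier, which is what makes the calibration parameter-free with respect to the feature extractor.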


Metadata
Title
Few-Shot Learning Remote Scene Classification Based on DC-2DEC
Authors
Ziyuan Wang
Zhiming Ding
Yingying Wang
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2966-1_21
