
2024 | Original Paper | Book Chapter

Decoding the Recommender System: A Comprehensive Guide to Explainable AI in E-commerce

Authors: Garima Sahu, Loveleen Gaur

Published in: Role of Explainable Artificial Intelligence in E-Commerce

Publisher: Springer Nature Switzerland


Abstract

The rapid growth of e-commerce has resulted in an increasingly competitive landscape where businesses strive to provide personalized and engaging experiences to their customers. Recommender systems, powered by advanced algorithms and artificial intelligence, are central to this effort, curating tailored suggestions for products, services, and content. However, the complex and opaque decision-making processes of these systems often make them act as black boxes, limiting user understanding and trust. This chapter examines the role of explainable AI in the decision-making processes of recommender systems within the context of e-commerce, highlighting its importance in fostering trustworthiness, ensuring ethical and legal compliance, and facilitating debugging and model improvement. We explore various types of explanations, techniques for generating explanations, and real-world examples of explainable recommender systems. In conclusion, explainable AI is an indispensable component of recommender systems, playing a critical role in enhancing user trust and engagement, ultimately leading to improved customer satisfaction and increased revenues for e-commerce businesses. As AI systems continue to evolve and become more integrated into our lives, explainability will remain a crucial aspect of their design and implementation.
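To make the idea of an explanation-bearing recommendation concrete, the sketch below (not taken from the chapter; all item names, ratings, and function names are hypothetical) shows a minimal item-based collaborative-filtering recommender that attaches a simple "because you rated ... highly" rationale to each suggestion, one common explanation style in e-commerce recommenders.

```python
# Minimal illustrative sketch: item-based collaborative filtering with a
# neighbor-style explanation for each recommendation. Hypothetical data only.
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
items = ["laptop", "mouse", "keyboard", "monitor"]
ratings = np.array([
    [5, 4, 0, 0],
    [4, 0, 5, 3],
    [0, 5, 4, 0],
    [5, 0, 4, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors, guarding against zero norms."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

# Item-item similarity matrix (columns compared pairwise).
n_items = ratings.shape[1]
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def recommend_with_explanation(user_idx):
    """Score each unrated item and explain the score via the most similar rated item."""
    user = ratings[user_idx]
    rated = np.where(user > 0)[0]
    results = []
    for j in range(n_items):
        if user[j] > 0:
            continue  # already rated
        weights = sim[j, rated]  # similarity to items the user has rated
        score = float(weights @ user[rated] / weights.sum()) if weights.sum() else 0.0
        top = rated[int(np.argmax(weights))]  # strongest contributor = the explanation
        results.append((items[j], score,
                        f"recommended because you rated '{items[top]}' highly"))
    return sorted(results, key=lambda r: r[1], reverse=True)

for item, score, why in recommend_with_explanation(user_idx=0):
    print(f"{item}: {score:.2f} ({why})")
```

The explanation here simply surfaces the rated item that contributes most weight to the score; model-agnostic techniques such as LIME or SHAP generalize this idea by attributing a prediction to input features regardless of the underlying model.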


Metadata
Title
Decoding the Recommender System: A Comprehensive Guide to Explainable AI in E-commerce
Authors
Garima Sahu
Loveleen Gaur
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-55615-9_3
