DECODING THE MACHINE MIND: ENHANCING TRANSPARENCY IN AI MODELS
DOI: https://doi.org/10.59367/s5vbqy58
Keywords: AI, XAI, AI-driven, Analysis
Abstract
As artificial intelligence (AI) and machine learning models permeate ever more aspects of our lives, their inherent complexity and opacity raise important concerns about trustworthiness and accountability. This paper examines explainable AI (XAI) as a means of improving transparency in AI models. We explore the challenges posed by "black-box" AI systems and present a comprehensive analysis of the strategies and techniques used to decipher their decision-making processes. Through case studies and real-world applications, we highlight the significance of transparency in AI, not only for understanding these systems but also for addressing ethical and societal concerns. The paper surveys emerging trends in explainable AI and underscores the imperative of making AI models more interpretable, paving the way for a more informed and accountable AI-driven world.
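To make the idea of deciphering a black-box model's decisions concrete, the following sketch illustrates one common family of XAI techniques: fitting a local linear surrogate around a single prediction (in the spirit of LIME). The `black_box` function, sampling scale, and feature count are illustrative assumptions, not part of the paper itself; this is a minimal demonstration, not a definitive implementation.

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque model: a nonlinear score that
    # depends mostly on feature 0 (assumed for illustration).
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def local_surrogate(model, x, n_samples=2000, scale=0.1, seed=0):
    """LIME-style sketch: perturb the instance x, query the black box,
    and fit a proximity-weighted linear model. The coefficients serve
    as local feature attributions for this one prediction."""
    rng = np.random.default_rng(seed)
    # Sample perturbations around the instance being explained
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = model(Z)
    # Proximity weights: perturbations closer to x count more
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.sqrt(w)[:, None]
    # Weighted least squares via row scaling
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature local attributions (intercept dropped)

x0 = np.array([0.2, 0.4])
attr = local_surrogate(black_box, x0)
# For this toy model, feature 0 should dominate the local explanation.
```

The attributions approximate the black box's local gradient, which is exactly the kind of human-readable summary of a decision that the transparency techniques discussed above aim to provide.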
License
Copyright (c) 2024 International Journal of Futuristic Innovation in Arts, Humanities and Management (IJFIAHM)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.