Ensuring Data Integrity Through Robustness and Explainability in AI Models
Abstract
References
S. J. Kim, E. K. Lee, and J. K. Lee, “Adversarial Training for Neural Networks: A Comprehensive Review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 3, pp. 912–926, Mar. 2020.
K. B. Tjandra and K. A. Barai, “The Impact of Data Augmentation on Deep Learning Models for Image Classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 5, pp. 1102–1114, May 2020.
C. Szegedy, W. Zaremba, I. Sutskever, et al., “Intriguing Properties of Neural Networks,” in Proceedings of the 2014 International Conference on Learning Representations, 2014.
M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
M. Ribeiro, S. Singh, and C. Guestrin, “Model-Agnostic Interpretability of Machine Learning Models,” in Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 2018, pp. 409–418.
B. L. P. Ribeiro, R. K. Gupta, and C. J. Harris, “Visualization Techniques for Machine Learning Models: A Survey,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 5, pp. 1978–1990, May 2019.
R. Bathani, “Cost Effective Framework for Schema Evolution in Data Pipelines: Ensuring Data Consistency,” Journal of Basic Science and Engineering, vol. 17, no. 1, 2020. [Online]. Available: https://yigkx.org.cn/index.php/jbse/article/view/300