Uppsala University, represented by Maya Baghdy Sar, presented results from the StorAIge project during the poster session of the Data-driven Life Science annual conference, held on 13-14 November 2024 in Stockholm.
Abstract:
Interpretable models of black-box classifiers
Keywords: Machine Learning, interpretable classifiers, rule-based models
Maya Baghdy Sar, Girish Pulinkala, Mark Melzer, Jan Komorowski
The rapid rise of AI, and of Machine Learning (ML) in particular, offers huge potential, but opaque models such as the otherwise very successful Neural Network and Support Vector Machine classifiers often hinder trust and adoption because their decisions cannot be interpreted. In contrast to such non-transparent models, we create interpretable ML models, as opposed to merely explainable ones, using rule-based methods.
We designed a workflow for creating Rule-Based Mirror (RBM) classifiers from Artificial Neural Network (ANN) classifiers; the resulting rule-based classifier is called the mirror of the ANN. The support sets of the rules allow identification and interpretation of the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) decisions. We assessed the quality of the predictions in several experiments on categorical, continuous, and mixed decision tables. Interestingly, the quality of the RBM classifiers was on par with that of the ANN classifiers for the discrete tables and only somewhat lower in the remaining cases (range 85%-98%).
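The core mirroring idea — relabel the data with the black box's own predictions, induce human-readable rules from those labels, and measure how faithfully the rules reproduce the black box — can be sketched in a few lines. This is an illustrative toy, not the authors' actual pipeline: `black_box` stands in for a trained ANN, and `induce_rules` is a trivial one-feature threshold learner rather than a genuine rule-based method.

```python
def black_box(x):
    # Stand-in for a trained ANN: an opaque decision function we want to mirror.
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

def induce_rules(data, labels):
    # Toy rule learner: pick the single (feature, threshold) pair that best
    # reproduces the black box's labels. Real RBM construction would induce a
    # full rule set, whose support sets expose the TP/TN/FP/FN decisions.
    best = None
    for f in range(len(data[0])):
        for cut in sorted({x[f] for x in data}):
            preds = [1 if x[f] > cut else 0 for x in data]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, f, cut)
    return best  # (fidelity to the black box, feature index, threshold)

# Grid of 2-D points; labels come from the black box, NOT from ground truth.
data = [(i / 10, j / 10) for i in range(11) for j in range(11)]
mirror_labels = [black_box(x) for x in data]
fidelity, feature, cut = induce_rules(data, mirror_labels)
print(f"IF x{feature} > {cut} THEN class 1  (fidelity {fidelity:.2f})")
```

The printed rule is the "mirror" of the black box: fidelity here measures agreement with the black box's decisions rather than accuracy on true labels, which is exactly what lets one inspect where the rule model and the ANN agree or diverge.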
More information about the Data-driven Life Science annual conference: https://www.scilifelab.se/event/ddls-annual-conference-2024/