Abstract: With the wide adoption of black-box models, instance-based post hoc explanation tools such as LIME and SHAP have become increasingly popular. These tools produce explanations that pinpoint the contributions of key features to a given prediction. However, the resulting explanations remain at the raw feature level and are not necessarily understandable by a human expert without extensive domain knowledge. We propose ReEx (Reasoning with Explanations), a method applicable to explanations generated by arbitrary instance-level explainers, ...
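For context, a minimal sketch of the kind of raw, feature-level contributions such explainers produce for a single prediction, which is the input ReEx starts from. The dataset, model, and variable names below are illustrative assumptions, not taken from the paper; SHAP's TreeExplainer is used here as one example of an instance-level explainer.

```python
# Illustrative only: per-instance feature contributions from a tree explainer.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Toy regression setup (assumed for illustration).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# One additive contribution per feature for the prediction on one instance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # shape: (1, n_features)

for name, contribution in zip(X.columns, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

Such output is expressed purely in terms of raw feature names, which is the interpretability gap the abstract describes.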
Topics: 
Artificial intelligence
Natural language processing
Information retrieval