Interpretable Deep Learning
Read on: 05 Aug 2020 | Tags: interpretable, black-box, causal, attention, shapley, concept
| Index | Papers | Our Slides |
|---|---|---|
| 0 | A survey on Interpreting Deep Learning Models | Eli Survey |
| | Interpretable Machine Learning: Definitions, Methods, and Applications | Arsh Survey |
| 1 | Explaining Explanations: Axiomatic Feature Interactions for Deep Networks | Arsh Survey |
| 2 | Shapley Value review (see the sketch after this table) | Arsh Survey |
| | L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data | Bill Survey |
| | Consistent Individualized Feature Attribution for Tree Ensembles | Bill Survey |
| | Summary of “A Value for n-Person Games” | Pan Survey |
| | L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data | Rishab Survey |
| 3 | Hierarchical Interpretations of Neural Network Predictions | Arsh Survey |
| | Hierarchical Interpretations of Neural Network Predictions | Rishab Survey |
| 4 | Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs | Arsh Survey |
| | Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs | Rishab Survey |
| 5 | Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models | Rishab Survey |
| | Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models | Sanchit Survey |
| | Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection | Sanchit Survey |
| 6 | This Looks Like That: Deep Learning for Interpretable Image Recognition | Pan Survey |
| 7 | AllenNLP Interpret | Rishab Survey |
| 8 | Discovery of Natural Language Concepts in Individual Units of CNNs | Rishab Survey |
| 9 | How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations | Rishab Survey |
| 10 | Attention is not Explanation | Sanchit Survey |
| | Attention is not Explanation | Pan Survey |
| 11 | Axiomatic Attribution for Deep Networks | Sanchit Survey |
| 12 | Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Sanchit Survey |
| 13 | Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers | Sanchit Survey |
| 14 | “Why Should I Trust You?”: Explaining the Predictions of Any Classifier | Yu Survey |
| 15 | Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge | Pan Survey |
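
For the Shapley-value entries above (index 2), here is a minimal, hedged sketch of the underlying idea: the exact Shapley value of a player (or feature) averages its marginal contribution over all coalitions of the remaining players. The `exact_shapley` helper and the additive toy game are illustrative assumptions, not code from any listed paper; approximations such as L-Shapley, C-Shapley, and TreeSHAP exist precisely because this exact computation is exponential in the number of features.

```python
from itertools import combinations
from math import comb

def exact_shapley(players, value_fn):
    """Exact Shapley values for a cooperative game.

    `value_fn` maps a frozenset of players to a real-valued payoff.
    Runtime is exponential in len(players); toy examples only.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                coalition = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!  ==  1 / (n * C(n-1, |S|))
                weight = 1.0 / (n * comb(n - 1, k))
                total += weight * (value_fn(coalition | {i}) - value_fn(coalition))
        phi[i] = total
    return phi

# Hypothetical additive game: a coalition's payoff is the sum of fixed
# per-player contributions, so each Shapley value recovers that contribution.
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
value_fn = lambda S: sum(contrib[p] for p in S)
print(exact_shapley(list(contrib), value_fn))  # approximately {'a': 1.0, 'b': 2.0, 'c': 3.0}
```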