A game theoretic approach to explain the output of any machine learning model.
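The game-theoretic idea behind this kind of explainer is the Shapley value: a feature's attribution is its average marginal contribution to the prediction over all coalitions of the other features. Below is a minimal pure-Python sketch of exact Shapley values for a tiny model; the toy model, feature names, and baseline are illustrative assumptions, not any library's API.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model (illustrative only): an age/income
# interaction makes the attributions non-trivial.
def model(features):
    return (2.0 * features["age"]
            + 1.0 * features["income"]
            + 0.5 * features["age"] * features["income"])

def shapley_values(model, instance, background):
    """Exact Shapley values: weighted average of each feature's
    marginal contribution over all coalitions, with absent
    features taken from a background (baseline) instance."""
    names = list(instance)
    n = len(names)
    values = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_f = {g: instance[g] if (g in coalition or g == f) else background[g]
                          for g in names}
                without_f = {g: instance[g] if g in coalition else background[g]
                             for g in names}
                total += weight * (model(with_f) - model(without_f))
        values[f] = total
    return values

instance = {"age": 3.0, "income": 2.0, "debt": 1.0}
background = {"age": 0.0, "income": 0.0, "debt": 0.0}
phi = shapley_values(model, instance, background)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (model(instance) - model(background))) < 1e-9
```

Exact enumeration costs O(2^n) model evaluations, which is why practical libraries approximate it by sampling coalitions or exploiting model structure (e.g. tree ensembles).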
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
Fit interpretable models. Explain blackbox machine learning.
Model interpretability and understanding for PyTorch
A collection of infrastructure and tools for research in neural network interpretability.
A curated list of awesome responsible machine learning resources.
StellarGraph - Machine Learning on Graphs
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Algorithms for explaining machine learning models
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
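All of these CAM variants share the same core recipe as the original Grad-CAM: weight each channel of a convolutional layer's feature maps by a gradient-derived importance score, sum the weighted maps, and apply a ReLU. A minimal NumPy sketch of that recipe (random arrays stand in for real activations and gradients; this is not any library's API):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations A_k of shape
    (K, H, W) and the gradients of the target class score w.r.t.
    those activations (same shape): weight each channel by its
    globally average-pooled gradient, sum, then ReLU."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k, shape (K,)
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    return np.maximum(cam, 0.0)                       # ReLU keeps positive evidence

# Stand-in data: 8 feature maps of size 7x7 and their gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
dYdA = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(A, dYdA)
assert heatmap.shape == (7, 7) and (heatmap >= 0).all()
```

The variants in the list above mostly differ in how the channel weights are computed (e.g. higher-order gradient terms in Grad-CAM++, forward-pass scores instead of gradients in Score-CAM); the upsampled heatmap is then overlaid on the input image.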
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
A JAX research toolkit for building, editing, and visualizing neural networks.
[ICCV 2017] Torch code for Grad-CAM
A collection of research materials on explainable AI/ML
Responsible AI Toolbox is a suite of model and data exploration and assessment user interfaces and libraries for better understanding AI systems, empowering developers and stakeholders to develop and monitor AI more responsibly and to take better data-driven actions.
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
moDel Agnostic Language for Exploration and eXplanation
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
ReFT: Representation Finetuning for Language Models