Andrew Fairless, Ph.D.
Entries tagged :: interpretability
2025-08-13
What I Read: Prompt Optimization
2025-08-12
What I Read: JAX, P-splines
2025-04-08
What I Read: Shapley Interactions
2025-03-25
What I Read: autoencoders, interpretability
2025-02-24
What I Read: degree certainty
2025-01-08
What I Read: Neural Networks, Understandable
2024-12-19
What I Read: Toy Models, Superposition?
2024-12-18
What I Read: Is SHAP doomed?
2024-06-25
What I Read: How Machines ‘Grok’ Data
2024-06-13
What I Read: Generalized Additive Models
2024-01-09
What I Read: Neural algorithmic reasoning
2023-12-11
What I Read: Tiny Language Models
2023-11-16
What I Read: Estimate Token Importance in LLM Prompts
2023-11-02
What I Read: Features Are Important?
2023-10-03
What I Read: Bonsai Networks, RNNs
2023-10-02
What I Read: Models Memorize or Generalize?
2023-03-02
What I Read: Shapley Values
2023-02-09
What I Read: Biases, Saliency
2022-11-21
What I Read: explainability, survival analysis
2022-03-23
What I Read: Visual Explanation of Classifiers
2022-03-17
What I Read: Aristotle, Deep Learning
2022-02-14
What I Read: Interpretable Time Series
2021-12-13
What I Read: Non-Technical Guide to Interpreting SHAP
2021-10-07
What I Read: Bayesian Media Mix Modeling
2021-09-22
What I Read: AI Story Generation
2021-09-13
What I Read: XGBoost, Order Does Matter
2021-07-30
What I Read: CNN Heat Maps, Class Activation Mapping
2021-06-24
What I Read: Human-Centered Explainable AI
2021-06-09
What I Read: Be Careful Interpreting Predictive Models, Causal Insights
2021-05-05
What I Read: Weight Banding
2021-05-04
What I Read: Branch Specialization
2021-05-03
What I Read: Visualizing Weights
2021-04-22
What I Read: What did COVID do to models?
2021-03-21
What I Read: Deep learning, black box
2021-03-12
What I Read: Interpretation for Image Recognition
2021-03-08
What I Read: Medicine's Machine Learning Problem
2021-03-07
What I Read: definitive guide to AI monitoring
2021-02-25
What I Read: Simplicity Creates Inequity, Fairness, Stereotypes, and Interpretability
2021-02-23
What I Read: Deploying Machine Learning, a Survey of Case Studies
2021-02-20
What I Read: Interpretability in Machine Learning
2021-02-05
What I Read: Explainable AI, 2-Stage Approach
2021-02-04
What I Read: “Less than one”-shot learning
2021-01-27
What I Read: AI Can Help Patients If Doctors Understand It
2021-01-19
What I Read: Maintaining Machine Learning in Production
2021-01-05
What I Read: making machine learning actually useful
2021-01-02
What I Read: Nitpicking ML Technical Debt
2020-12-18
What I Read: Introduction to Circuits
2020-02-22
SHAP tutorial
How do we use Shapley values to interpret machine learning models?
2017-10-09
Classifying medicine
How do patients experience conventional and alternative medicine differently? Yelp, random forests, ROC curves, and so much more!