Andrew Fairless, Ph.D.
Entries tagged :: linear algebra
2025-08-14
What I Read: Scale
2025-08-12
What I Read: JAX, P-splines
2025-07-30
What I Read: Automatic Sparse Differentiation
2025-06-18
What I Read: Domain specific architectures
2025-04-27
What I Read: cosine similarity
2025-03-25
What I Read: autoencoders, interpretability
2025-02-27
What I Read: shape, matrix
2025-02-13
What I Read: Gaussians
2025-02-11
What I Read: Mamba, State Space Models
2025-02-05
What I Read: cosine similarity
2025-01-27
What I Read: Transformers Inference Optimization
2025-01-07
What I Read: sphering transform
2025-01-06
What I Read: embedding models
2024-12-19
What I Read: Toy Models, Superposition?
2024-12-16
What I Watch: How LLMs store facts
2024-12-12
What I Watch: compare high dimensional vectors
2024-11-07
What I Read: Regularization, polynomial bases
2024-10-30
What I Read: Visual Guide, Quantization
2024-10-24
What I Read: Kernel, Convolutional Representations
2024-10-16
What I Read: Use-cases, inverted PCA
2024-10-09
What I Read: Illustrated AlphaFold
2024-09-30
What I Read: Sliding Window Attention
2024-08-15
What I Read: Merge Large Language Models
2024-08-14
What I Read: Transformers by Hand
2024-07-29
What I Read: Platonic Hypothesis
2024-07-18
What I Read: time series, Gaussian processes
2024-06-26
What I Read: Flow Matching
2024-06-18
What I Read: Attention, transformers
2024-06-17
What I Read: Linear Algebra, Random
2024-06-10
What I Read: Mamba Explained
2024-05-22
What I Read: High-Dimensional Variance
2024-05-09
What I Read: Mamba, Easy Way
2024-05-01
What I Read: Mamba
2024-04-30
What I Read: Structured State Space Sequence Models
2024-04-04
What I Read: LoRA from Scratch
2024-03-26
What I Read: polynomial monster
2024-03-25
What I Read: polynomial features
2024-03-04
What I Read: Self-Attention in GPT
2024-02-05
What I Read: Finetuning LLMs Using LoRA
2023-11-30
What I Read: Visualizing Matrix Multiplication
2023-10-18
What I Read: Differentiable Trees
2023-10-16
What I Read: To Understand Transformers, Focus on Attention
2023-09-13
What I Read: Attention Off By One
2023-09-06
What I Read: Accelerating PyTorch
2023-07-12
What I Read: What, Why ChatGPT
2023-06-20
What I Read: Computation, Artificial Intelligence
2023-06-13
What I Read: Open Source, AlphaTensor
2023-01-30
What I Read: Matrix Multiplication
2023-01-03
What I Read: Russian Roulette
2022-12-07
What I Read: Sins, Numerical Linear Algebra
2022-11-02
What I Read: Neural Tangent Kernel
2022-10-20
What I Read: AI Researcher, Bitter Medicine
2022-10-06
What I Read: Emergent Features
2022-07-11
What I Read: Weak Supervision
2022-02-21
What I Read: Bayesian Geometry
2022-01-26
What I Read: Einstein Summation in Deep Learning
2022-01-25
What I Read: How Kalman filter works
2021-12-07
What I Read: what is a Gaussian process?
2021-06-30
What I Read: Basis and Change of Basis
2021-04-09
What I Read: New Algorithm, Linear Equations
2021-03-10
What I Read: Why I’m lukewarm on graph neural networks
2021-03-09
What I Read: How Transformers work