Andrew Fairless, Ph.D.
Entries tagged :: optimization
2025-08-14
What I Read: Scale
2025-08-13
What I Read: Prompt Optimization
2025-07-30
What I Read: Automatic Sparse Differentiation
2025-07-07
What I Read: Polynomial
2025-06-25
What I Read: Newton update
2025-06-12
What I Read: reinforcement learning
2025-05-22
What I Read: Group relative policy optimization
2025-04-16
What I Read: optimizing softmax
2025-03-19
What I Read: polars, pandas
2025-02-06
What I Read: Multi Objective Optimisation
2024-12-02
What I Read: sparsity, PyTorch, Hadamard product
2024-11-25
What I Read: tilted loss
2024-05-29
What I Read: predicate pushdown
2024-05-15
What I Read: reliance on AI-assisted decisions
2024-05-13
What I Read: diffusion distillation
2024-04-08
What I Read: Diffusion models, new theoretical perspective
2024-03-28
What I Read: SQL order
2024-03-26
What I Read: polynomial monster
2024-02-06
What I Read: Adversarial Attacks on LLMs
2024-02-05
What I Read: Finetuning LLMs Using LoRA
2024-01-29
What I Read: Gaussian Processes Extrapolate
2024-01-08
What I Read: SAT Solvers
2024-01-03
What I Read: Finetuning LLMs with LoRA and QLoRA
2023-10-18
What I Read: Differentiable Trees
2023-09-27
What I Read: Giant Steps Can Solve Optimization Faster
2023-06-19
What I Read: Tree-Structured Parzen Estimator
2023-03-15
What I Read: Optimizing Machine Learning Training Pipelines
2023-01-19
What I Read: Transformers Training
2022-11-22
What I Read: SQLite
2022-11-02
What I Read: Neural Tangent Kernel
2022-10-27
What I Read: ML, Engagement, Maternal and Child Health
2022-10-13
What I Read: Backpropagation, Chain Rule
2022-10-11
What I Read: Snowflake Query Optimizer
2022-10-10
What I Read: When use Bayesian optimization
2022-05-10
What I Read: Generalization of SGD
2022-05-09
What I Read: Machine Learning, Building Blocks of Computing
2022-05-04
What I Read: Deep Learning From First Principles
2022-03-30
What I Read: Researchers Build AI That Builds AI
2021-12-15
What I Read: Deep Learning Optimization Theory
2021-12-01
What I Read: Limits Discovered in Quest for Optimal Solutions
2021-11-29
What I Read: machine learning with differential privacy
2021-11-01
What I Read: Neuron Bursts Can Mimic Famous AI Learning Strategy
2021-10-20
What I Read: Machine learning is not nonparametric statistics
2021-10-12
What I Learn: Scaling TensorFlow
2021-09-20
What I Read: Learning Neural Network Subspaces
2021-09-07
What I Read: Computer Scientists Discover Limits of Major Research Algorithm
2021-09-02
What I Read: Pathfinder, A parallel quasi-Newton algorithm
2021-08-24
What I Read: Not Optimized By Jax, PyTorch, or Tensorflow
2021-08-23
What I Read: Why Deep Learning Works
2021-08-03
What I Read: Gradient Pseudo-Swap
2021-07-26
What I Read: Model Free
2021-06-07
What I Read: Game theory for large-scale data analysis
2021-04-26
What I Read: Computer Scientist Who Tackles Inequality
2021-04-10
What I Read: Deep learning model compression
2021-03-03
What I Read: Machine learning is going real-time
2021-02-13
What I Read: Mismatches between Optimization Analyses and Deep Learning
2021-02-07
What I Read: Reproducing Deep Double Descent
2021-02-06
What I Read: Deep Double Descent: Where Bigger Models and More Data Hurt
2021-02-02
What I Read: Reinforcement learning is supervised learning
2021-01-19
What I Read: Autotuning Multi-Objective Optimization
2021-01-07
What I Read: Running Machine Learning at Scale
2020-12-19
What I Read: sub-linear deep learning algorithm