Andrew Fairless, Ph.D.
Entries tagged :: natural language processing
2025-08-13
What I Read: Prompt Optimization
2025-07-24
What I Read: AI Products
2025-07-23
What I Read: Generative, latent
2025-07-22
What I Read: BM25F
2025-07-21
What I Read: Language Models
2025-07-15
What I Read: reasoning research
2025-06-26
What I Read: Recommendation, LLMs
2025-06-24
What I Read: Small Language Models
2025-06-11
What I Read: LLMs in medicine
2025-05-01
What I Read: memorization, novelty
2025-04-29
What I Read: tensor dimensions, transformers
2025-04-27
What I Read: cosine similarity
2025-04-23
What I Read: AI languages
2025-03-25
What I Read: autoencoders, interpretability
2025-03-06
What I Read: multimodal LLMs
2025-03-05
What I Read: LLMs, school math
2025-03-03
What I Read: debate, AI
2025-02-25
What I Read: LLM judge
2025-02-12
What I Read: evaluation quicksand
2025-02-11
What I Read: Mamba, State Space Models
2025-01-21
What I Read: GenAI, Classify Text
2025-01-06
What I Read: embedding models
2024-12-19
What I Read: Toy Models, Superposition?
2024-12-16
What I Watch: How LLMs store facts
2024-12-11
What I Read: Fine Tuning LLM
2024-12-03
What I Read: Evaluating LLM-Evaluators
2024-11-21
What I Read: Classifying pdfs
2024-11-20
What I Read: Tool Retrieval, RAG
2024-11-18
What I Read: LLM Pre-training Post-training
2024-11-12
What I Read: Turing Test, intelligence
2024-10-29
What I Read: Generative AI Platform
2024-10-28
What I Read: History, Transformer
2024-10-21
What I Read: LLM evaluation
2024-10-17
What I Read: Data Flywheels, LLM
2024-10-15
What I Read: Improving Language Models, Practical Size
2024-10-10
What I Read: Hidden Infinity, Preference Learning
2024-10-07
What I Read: Extrinsic Hallucinations, LLMs
2024-10-01
What I Read: What can LLMs never do?
2024-09-23
What I Read: Detecting hallucinations, LLMs, semantic entropy
2024-09-19
What I Read: Structured Generation, LLMs
2024-09-18
What I Read: AI Engineers, Search
2024-09-03
What I Read: Summarization, LLMs
2024-08-26
What I Read: neural systems understanding
2024-08-19
What I Read: What We Learned Building LLMs
2024-08-15
What I Read: Merge Large Language Models
2024-08-13
What I Read: LLM Pipelines, DSPy
2024-08-07
What I Read: LLM evaluation
2024-08-06
What I Read: LLM, DSPy Assertions and Suggestions
2024-08-05
What I Read: implicit biases, LLM
2024-07-29
What I Read: Platonic Hypothesis
2024-07-25
What I Read: Game Theory, AI
2024-07-17
What I Read: Matryoshka Embedding
2024-07-15
What I Read: LLMs, Open Source
2024-06-20
What I Read: Structured Generation, Constrained Decoding
2024-06-18
What I Read: Attention, transformers
2024-06-12
What I Read: Data Selection, LLMs
2024-06-10
What I Read: Mamba Explained
2024-06-03
What I Read: Chain-of-Thought Reasoning
2024-05-20
What I Read: text embeddings
2024-05-14
What I Read: 1-bit LLMs, 1.58 Bits
2024-05-09
What I Read: Mamba, Easy Way
2024-04-30
What I Read: Structured State Space Sequence Models
2024-04-29
What I Read: Forgetting Can Help AI Learn
2024-04-25
What I Read: Predictive Human Preference, Model Ranking to Model Routing
2024-04-22
What I Read: Compound AI Systems
2024-04-18
What I Read: Scaling ChatGPT, Engineering Challenges
2024-04-10
What I Read: How Quickly LLMs Learn Skills?
2024-04-04
What I Read: LoRA from Scratch
2024-04-03
What I Read: LLM Evaluation Metrics
2024-03-19
What I Read: Sampling Text Generation
2024-03-18
What I Read: Chatbots Understand Text
2024-03-04
What I Read: Self-Attention in GPT
2024-02-21
What I Read: Research Directions
2024-02-19
What I Read: Instruction Tuning
2024-02-13
What I Read: Limits of Transformers on Compositionality
2024-02-08
What I Read: survey LLM tooling
2024-02-07
What I Read: Multi-Modal Retrieval-Augmented Generation
2024-02-06
What I Read: Adversarial Attacks on LLMs
2024-02-05
What I Read: Finetuning LLMs Using LoRA
2024-01-04
What I Read: Multimodality
2024-01-03
What I Read: Finetuning LLMs with LoRA and QLoRA
2023-12-20
What I Read: Distributed Training, Finetuning
2023-12-13
What I Read: Retrieval Augmented Generation at scale
2023-12-11
What I Read: Tiny Language Models
2023-12-05
What I Read: evaluating AI systems
2023-11-16
What I Read: Estimate Token Importance in LLM Prompts
2023-10-23
What I Read: LLMs, single example
2023-10-16
What I Read: To Understand Transformers, Focus on Attention
2023-10-12
What I Read: GPT-4, 8 Models in One
2023-10-10
What I Read: LLM research
2023-10-05
What I Read: Multimodal, Embeddings
2023-09-25
What I Read: LLMs in Planning
2023-09-21
What I Read: Economic Case for Generative AI
2023-09-19
What I Read: LLM-based Products
2023-09-13
What I Read: Attention Off By One
2023-09-11
What I Read: What Do LLMs Know About Linguistics?
2023-09-07
What I Read: LLMs
2023-08-21
What I Read: Disagreement Modelling
2023-08-17
What I Read: LLM Agents
2023-08-16
What I Read: artificial intelligence really hard
2023-08-08
What I Read: Ways Digital Minds Know
2023-08-04
What I Read: Attack Impacts AI Chatbots
2023-07-27
What I Read: LLM Chatbots, Browser
2023-07-19
What I Read: Hard Stuff, Building Products, LLMs
2023-07-17
What I Read: Prompt injection
2023-07-12
What I Read: What, Why ChatGPT
2023-07-11
What I Read: In-Context Learning
2023-07-05
What I Read: Natural Language, supply chains
2023-06-29
What I Read: Against LLM
2023-06-28
What I Read: Chatbots, What Isn’t
2023-06-27
What I Read: Reinforcement Learning from Human Feedback
2023-06-21
What I Read: Reinforcement Learning, Language Models
2023-06-12
What I Read: Multi-label NLP
2023-06-05
What I Read: One Large Model
2023-05-31
What I Read: smaller LLMs, more tokens
2023-05-30
What I Read: Few Shot, Recommenders, LLMs
2023-05-29
What I Read: LLM applications, production
2023-05-25
What I Read: Prompt Engineering
2023-05-24
What I Read: Multimodal Models
2023-05-23
What I Read: human touch, LLMs
2023-05-15
What I Read: Topic Modeling
2023-05-11
What I Read: GPT, Ranking
2023-05-09
What I Read: Competitive Machine Learning
2023-05-08
What I Read: Abilities Emerging From AI
2023-04-18
What I Read: Relative representations
2023-04-12
What I Read: Infrastructure
2023-04-06
What I Read: Teach Computers Math
2023-03-30
What I Read: Language world models or surface statistics?
2023-03-21
What I Read: Machines Learn, Teach Basics
2023-03-06
What I Read: Modern AI Art
2023-02-23
What I Read: Building "Copilot for X"
2023-02-22
What I Read: AI, Human Values
2023-02-21
What I Read: Offline RL, Large Language Models
2023-01-19
What I Read: Transformers Training
2023-01-09
What I Read: large language model, UX
2022-12-21
What I Read: Pre-Trained Models, Robotics
2022-12-20
What I Read: undesired goals
2022-12-12
What I Read: Speech Recognition Metrics
2022-11-29
What I Read: Illustrated Stable Diffusion
2022-11-17
What I Read: Generative AI
2022-11-16
What I Read: Productizing Large Language Models
2022-11-15
What I Read: Career in NLP
2022-10-18
What I Read: Zero-Shot, K-Shot Learning
2022-10-12
What I Read: AI, Limits, Language
2022-10-06
What I Read: Emergent Features
2022-10-04
What I Read: Self-Taught AI, Brain
2022-09-28
What I Read: Robot Learned, Scraping Web
2022-09-12
What I Read: BLOOM Training
2022-09-06
What I Read: Transformers in computer vision
2022-08-31
What I Read: Large Language Models
2022-08-11
What I Read: Mimicry, Artificial Intelligence
2022-08-08
What I Read: DALL·E 2, Explained
2022-08-04
What I Read: Minerva, Quantitative Reasoning
2022-07-27
What I Read: Against Naive AI Scaling
2022-07-26
What I Read: Text Embeddings Visually Explained
2022-06-27
What I Read: Applying BERT to Speech
2022-06-01
What I Read: Learning, not Enough Data Part 3
2022-05-31
What I Read: Type-Aware Bi-Encoders for Open-Domain Entity Retrieval
2022-05-24
What I Read: Understanding, Simple AI
2022-04-26
What I Read: Will Transformers Take Over Artificial Intelligence?
2022-04-18
What I Read: One Voice Detector to Rule Them All
2022-04-13
What I Read: Textless NLP
2022-02-16
What I Read: To Understand Language is to Understand Generalization
2022-02-07
What I Read: What Does It Mean for AI to Understand?
2022-01-31
What I Read: Semi-Supervised Learning
2022-01-18
What I Learn: Meta-Learning, Keyphrase Extraction
2021-12-14
What I Read: Dense Vectors
2021-11-08
What I Read: How to Train Large Deep Learning Models
2021-09-22
What I Read: AI Story Generation
2021-09-20
What I Read: Learning Neural Network Subspaces
2021-09-06
What I Read: Machine Learning Won't Solve Natural Language Understanding
2021-08-16
What I Read: Geometric, Deep Learning
2021-08-13
What I Read: Training AI, Analogies
2021-08-11
What I Read: Identifying Document Types
2021-08-09
What I Read: Prompting, Language Models, NLP
2021-08-05
What I Read: Understanding Levenshtein Distance
2021-07-20
What I Read: Semantic Search
2021-07-01
What I Read: Contrastive Representation Learning
2021-06-24
What I Read: Human-Centered Explainable AI
2021-06-16
What I Read: Knowledge Graphs with Language Model
2021-06-15
What I Read: Dataset Curation for NLP Projects
2021-05-10
What I Read: Reducing Toxicity in Language Models
2021-05-06
What I Read: Zero-Shot Learning
2021-03-29
What I Read: Reducing High Cost of Training NLP Models
2021-03-26
What I Read: Language Model Fine-tuning
2021-03-13
What I Read: Transformer Networks to Answer Questions About Images
2021-03-11
What I Read: Neural Text Generation
2021-03-10
What I Read: Why I’m lukewarm on graph neural networks
2021-03-09
What I Read: How Transformers work
2021-02-20
What I Read: Interpretability in Machine Learning
2021-02-18
What I Read: HuggingFace Transformers
2021-02-15
What I Read: Revisiting Sutton’s Bitter Lesson for AI
2021-02-05
What I Read: AI for good, think form extraction
2021-01-25
What I Read: This AI learns by reading the web
2021-01-21
What I Read: Transformer Architecture
2021-01-05
What I Read: Progress of Natural Language Processing
2021-01-03
What I Read: GPT-3, a Giant Step for NLP
2021-01-01
What I Read: Differentiable Reasoning over Text
2020-12-28
What I Read: GPT-3, The New Mighty Language Model
2020-12-28
What I Read: Self Supervised Representation Learning in NLP
2020-12-24
What I Read: Neural Networks to Find Answers in Tables
2020-12-10
What I Read: Reformer efficient Transformer
2020-12-05
What I Read: AI Epidemiologist First Warnings Virus
2017-10-09
Classifying medicine
How do patients experience conventional and alternative medicine differently? Yelp, random forests, ROC curves, and so much more!
2017-02-06
The Peanuts Project
Charlie Brown, Snoopy, Lucy, Linus… who was the most important character? Which of their relationships was the strongest? Indulge some nostalgia and hum some Guaraldi!