DataMListic @UCRM1urw2ECVHH7ojJw8MXiQ@youtube.com

26K subscribers - no pronouns :c

Welcome to DataMListic (formerly WhyML)! On this channel I exp

[08:02] t-SNE - Explained
[08:04] The Illusion of Thinking - Paper Walkthrough
[05:18] Perception Encoder - Paper Walkthrough
[05:15] SAM2: Segment Anything in Images and Videos - Paper Walkthrough
[05:05] MMaDA: Multimodal Large Diffusion Language Models - Paper Walkthrough
[05:12] Anthropic Claude 4 - System Card Walkthrough
[04:52] Google's AlphaEvolve - Paper Walkthrough
[10:28] Hidden Markov Models (HMM) Part 2 - The Viterbi Algorithm
[05:53] Hidden Markov Models (HMM) Part 1 - Introduction
[05:44] An Introduction to Graph Neural Networks
[09:33] Gaussian Processes
[08:15] Bayesian Optimization
[04:22] The RBF Kernel
[09:03] The Kernel Trick
[08:07] The Curse of Dimensionality
[04:27] Cross-Entropy - Explained
[04:13] Weights Initialization in Neural Networks
[03:59] Dropout Regularization - Explained
[03:41] Recommender Systems - Part 3: Issues & Solutions
[04:11] Recommender Systems - Part 2: Collaborative Filtering
[03:38] Recommender Systems - Part 1: Content-Based Recommendations
[02:37] Why L1 Regularization Produces Sparse Weights
[04:11] Overfitting vs Underfitting - Explained
[04:24] Confidence Intervals Explained
[03:45] Z-Test Explained
[04:04] L1 vs L2 Regularization
[05:36] Poisson Distribution - Explained
[08:01] Basic Probability Distributions Explained: Bernoulli, Binomial, Categorical, Multinomial
[09:12] T-Test Explained
[03:58] AI Weekly Brief - Week 2: Llama 3.2, OpenAI Voice Mode, Mira Murati leaves OpenAI
[03:59] AI Weekly Brief - Week 2: LlamaCoder, Eureka, YouTube GenAI, Pixtral 12B
[04:28] AI Weekly Brief - Week 1: OpenAI o1-preview, DataGemma, AlphaProteo
[03:33] Covariance Matrix - Explained
[08:10] The Bitter Lesson in AI...
[05:40] Marginal, Joint and Conditional Probabilities Explained
[04:49] Least Squares vs Maximum Likelihood
[03:50] AI Reading List (by Ilya Sutskever) - Part 5
[04:27] AI Reading List (by Ilya Sutskever) - Part 4
[04:48] AI Reading List (by Ilya Sutskever) - Part 3
[05:02] AI Reading List (by Ilya Sutskever) - Part 2
[04:31] AI Reading List (by Ilya Sutskever) - Part 1
[08:03] Vector Database Search - Hierarchical Navigable Small Worlds (HNSW) Explained
[05:40] Singular Value Decomposition (SVD) Explained
[03:27] ROUGE Score Explained
[05:48] BLEU Score Explained
[03:38] Cross-Validation Explained
[03:51] Sliding Window Attention (Longformer) Explained
[03:36] BART Explained: Denoising Sequence-to-Sequence Pre-training
[20:28] RLHF: Training Language Models to Follow Instructions with Human Feedback - Paper Explained
[27:43] Chain-of-Verification (COVE) Reduces Hallucination in Large Language Models - Paper Explained
[13:59] The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits - Paper Explained
[05:14] LLM Tokenizers Explained: BPE Encoding, WordPiece and SentencePiece
[03:15] Hyperparameters Tuning: Grid Search vs Random Search
[26:16] Jailbroken: How Does LLM Safety Training Fail? - Paper Explained
[02:44] Word Error Rate (WER) Explained - Measuring the performance of speech recognition systems
[03:15] Spearman Correlation Explained in 3 Minutes
[03:40] Two Towers vs Siamese Networks vs Triplet Loss - Compute Comparable Embeddings
[08:11] LLM Prompt Engineering with Random Sampling: Temperature, Top-k, Top-p
[03:21] Kullback-Leibler (KL) Divergence Mathematics Explained
[04:36] Covariance and Correlation Explained