Natural Language Processing
A BSc-level course bridging statistical language models with modern transformer architectures.
Course Topics
N-grams & Language Models
Statistical foundations of NLP: the chain rule of probability, Markov assumptions, and perplexity evaluation.
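A minimal sketch of these ideas on a toy corpus: a bigram model with add-one (Laplace) smoothing, evaluated with perplexity. The corpus and smoothing choice are illustrative, not from the course materials.

```python
import math
from collections import Counter

# Toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size

def bigram_prob(w1, w2):
    # P(w2 | w1) with add-one (Laplace) smoothing
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

def perplexity(tokens):
    # Perplexity = exp of the average negative log-probability
    logp = sum(math.log(bigram_prob(a, b)) for a, b in zip(tokens, tokens[1:]))
    return math.exp(-logp / (len(tokens) - 1))

ppl = perplexity("the cat sat".split())
```

Lower perplexity means the model finds the sequence less surprising.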
Word2Vec & Embeddings
Dense word representations, Skip-gram architecture, and semantic geometry in vector space.
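The Skip-gram objective predicts context words from a center word; a sketch of the training-pair extraction it starts from (window size 2, toy sentence chosen for illustration):

```python
# Extract (center, context) pairs for Skip-gram training, window = 2.
tokens = ["the", "quick", "brown", "fox", "jumps"]
window = 2

pairs = []
for i, center in enumerate(tokens):
    # every word within `window` positions of the center is a context word
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if j != i:
            pairs.append((center, tokens[j]))
```

The model then learns dense vectors such that center words predict their contexts, which is what gives the vector space its semantic geometry.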
RNN & LSTM Networks
Sequential processing, vanishing gradients, and memory mechanisms with gated architectures.
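The vanishing-gradient problem can be seen with a scalar stand-in for the recurrent weight: backpropagating through T steps multiplies the gradient by that weight T times, so values below 1 shrink it exponentially (a deliberately simplified sketch):

```python
# Backprop through 20 time steps with a recurrent "weight" of 0.5:
# the gradient is multiplied by 0.5 at every step and vanishes.
w = 0.5
grad = 1.0
for _ in range(20):
    grad *= w
# grad is now 0.5 ** 20, effectively zero for learning purposes
```

Gated architectures (LSTM, GRU) add additive memory paths so gradients can survive many steps.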
Seq2Seq & Attention
Encoder-decoder models, attention mechanisms, and alignment in neural machine translation.
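The core attention computation, sketched without any framework: score each encoder state against the decoder query, softmax the scores into alignment weights, and take a weighted sum of the values. The 2-d vectors are toy inputs for illustration.

```python
import math

def softmax(xs):
    # numerically stable softmax
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    # dot-product attention: scores -> weights -> weighted sum of values
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
    return context, weights

ctx, w = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
```

The weights are the "alignment" between a decoder position and the source positions in neural machine translation.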
Transformers
Self-attention, multi-head attention, positional encoding, and parallel processing.
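Because self-attention is order-agnostic, transformers inject position information; a sketch of the sinusoidal positional encoding from "Attention Is All You Need" (toy dimensionality for illustration):

```python
import math

def positional_encoding(pos, d_model):
    # interleaved sin/cos at geometrically spaced frequencies
    pe = []
    for i in range(d_model // 2):
        angle = pos / (10000 ** (2 * i / d_model))
        pe.extend([math.sin(angle), math.cos(angle)])
    return pe

pe0 = positional_encoding(0, 4)  # encoding of position 0 in 4 dimensions
```

Each position gets a unique, smoothly varying vector that the model can add to its token embeddings.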
BERT & Pre-training
Pre-trained language models, masked language modeling, and transfer learning pipelines.
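A simplified sketch of the masking step behind masked language modeling: BERT masks roughly 15% of input tokens and trains the model to recover them (the full recipe also replaces some masked positions with random or unchanged tokens, which this sketch omits):

```python
import random
random.seed(0)  # fixed seed for reproducibility

tokens = ["the", "cat", "sat", "on", "the", "mat"]
n_mask = max(1, round(0.15 * len(tokens)))  # ~15% of positions

idx = set(random.sample(range(len(tokens)), n_mask))
masked = [("[MASK]" if i in idx else t) for i, t in enumerate(tokens)]
```

The pre-trained encoder is then fine-tuned on downstream tasks, which is the transfer-learning pipeline covered here.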
Scaling Laws
Compute-optimal training, Chinchilla scaling, and emergent capabilities in large models.
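The Chinchilla result is often summarized by a rule of thumb: compute-optimal training uses roughly 20 tokens per parameter. A sketch of that back-of-the-envelope calculation (the constant is the commonly cited approximation, not an exact law):

```python
# Chinchilla rule of thumb: ~20 training tokens per model parameter.
def compute_optimal_tokens(n_params, tokens_per_param=20):
    return n_params * tokens_per_param

tokens = compute_optimal_tokens(70e9)  # e.g. a 70B-parameter model
```

So a 70B-parameter model would want on the order of 1.4 trillion training tokens.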
BPE & Tokenization
Subword tokenization, Byte-Pair Encoding, and vocabulary optimization strategies.
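One step of the BPE training loop, sketched from scratch: count adjacent symbol pairs across the corpus and merge the most frequent pair everywhere. The toy word list is illustrative.

```python
from collections import Counter

def bpe_step(words):
    # words: list of symbol tuples; merge the most frequent adjacent pair
    pairs = Counter()
    for w in words:
        pairs.update(zip(w, w[1:]))
    if not pairs:
        return words, None
    best = max(pairs, key=pairs.get)
    merged = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i < len(w) - 1 and (w[i], w[i + 1]) == best:
                out.append(w[i] + w[i + 1])  # fuse the pair into one symbol
                i += 2
            else:
                out.append(w[i])
                i += 1
        merged.append(tuple(out))
    return merged, best

words = [tuple("lower"), tuple("lowest"), tuple("low")]
words, merge = bpe_step(words)
```

Repeating this step builds the subword vocabulary; the number of merges is the main vocabulary-size knob.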
Text Generation & Decoding
Greedy decoding, beam search, sampling strategies, temperature scaling, and nucleus sampling.
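Temperature and nucleus (top-p) sampling combined in one dependency-free sketch: scale the logits by the temperature, softmax, keep the smallest set of tokens whose cumulative probability reaches p, and sample from that set:

```python
import math
import random

def nucleus_sample(logits, p=0.9, temperature=1.0, rng=random):
    # temperature-scaled softmax
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    # sort token probabilities descending, keep the top-p "nucleus"
    probs = sorted(((e / z, i) for i, e in enumerate(exps)), reverse=True)
    kept, cum = [], 0.0
    for prob, i in probs:
        kept.append((prob, i))
        cum += prob
        if cum >= p:
            break
    # sample within the nucleus, renormalized
    total = sum(pr for pr, _ in kept)
    r = rng.random() * total
    for pr, i in kept:
        r -= pr
        if r <= 0:
            return i
    return kept[-1][1]
```

Low temperature sharpens the distribution toward greedy decoding; small p shrinks the nucleus and cuts off the unreliable tail.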
Fine-tuning & LoRA
Parameter-efficient fine-tuning, adapters, LoRA, and prompt engineering techniques.
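The LoRA idea in miniature: freeze the weight matrix W and learn only a low-rank update, W' = W + (alpha/r)·BA with B of shape d×r and A of shape r×k, r much smaller than d and k. Plain nested lists keep this sketch dependency-free; all values are toy numbers.

```python
# Frozen 4x4 identity "weight matrix" plus a rank-1 LoRA update (illustrative).
d, k, r = 4, 4, 1
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen
B = [[0.1] for _ in range(d)]       # trainable, d x r
A = [[0.2, 0.0, 0.0, 0.0]]          # trainable, r x k
alpha = 1.0                         # LoRA scaling factor

# delta = (alpha / r) * B @ A, then W' = W + delta
delta = [[(alpha / r) * sum(B[i][t] * A[t][j] for t in range(r))
          for j in range(k)] for i in range(d)]
W_adapted = [[W[i][j] + delta[i][j] for j in range(k)] for i in range(d)]
```

Here only d·r + r·k = 8 numbers are trained instead of the full d·k = 16, and the saving grows dramatically at real model sizes.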
Efficiency & Quantization
Model compression, quantization, distillation, pruning, and deployment optimization.
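Symmetric int8 post-training quantization, sketched on a handful of toy weights: pick a scale so the largest weight maps to ±127, round each weight to an integer, and dequantize by multiplying back:

```python
# Symmetric int8 quantization: scale = max|w| / 127, q = round(w / scale).
weights = [0.5, -1.27, 0.0, 1.0]
scale = max(abs(w) for w in weights) / 127

quantized = [round(w / scale) for w in weights]      # int8 codes
dequantized = [q * scale for q in quantized]         # reconstructed floats
```

The reconstruction error per weight is at most half a quantization step, which is why 8-bit storage often costs little accuracy.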
Ethics & Bias
Bias detection, fairness metrics, WEAT tests, and responsible AI development.
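The WEAT association score s(w, A, B) measures how much closer a target word's embedding is to attribute set A than to set B, using cosine similarity. A sketch with made-up 2-d "embeddings" (real tests use trained embeddings and sets of many words):

```python
import math

def cos(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def weat_association(w, A, B):
    # s(w, A, B) = mean cos(w, a) - mean cos(w, b)
    return (sum(cos(w, a) for a in A) / len(A)
            - sum(cos(w, b) for b in B) / len(B))

# Hypothetical toy vectors, for illustration only.
career = [1.0, 0.0]
male_attrs = [[0.9, 0.1]]
female_attrs = [[0.1, 0.9]]
bias = weat_association(career, male_attrs, female_attrs)
```

A positive score indicates the target word sits closer to the first attribute set, which is the kind of association WEAT aggregates into its test statistic.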
Resources
Moodle HS25
Course schedule, lecture PDFs, Jupyter notebooks, and assignment information.
GitHub Repository
Complete source code, slides, notebooks, and all course materials.
Chart Gallery
250+ professional visualizations covering all NLP concepts.
Deep-Dive Modules
Specialized content on summarization, sentiment analysis, and embeddings.