| Lecture | Title | Topics |
| --- | --- | --- |
| Lecture 01 | LLM Fundamentals for Agents | Transformers, tokenization, inference mechanics, context windows |
| Lecture 02 | Prompt Engineering & Structured Output | System prompts, few-shot, JSON mode, function calling |
| Lecture 03 | Tool Use & Function Calling | Tool schemas, parallel calls, error handling, safety |
| Lecture 04 | Agent Architecture Patterns | ReAct, CoT, Reflexion, plan-and-execute |
| Lecture 05 | Memory Systems | Short-term, long-term, episodic, semantic memory |
| Lecture 06 | LangGraph — Stateful Workflows | Nodes, edges, state, checkpointing, human-in-the-loop |
| Lecture 07 | Claude Agent SDK | Subagents, tool loops, streaming, computer use |
| Lecture 08 | Multi-Agent Systems | CrewAI, AutoGen, supervisor patterns, coordination |
| Lecture 09 | RAG — Ingestion & Embeddings | Chunking, embedding models, vector stores, indexing |
| Lecture 10 | RAG — Retrieval & Reranking | Hybrid search, MMR, cross-encoder reranking, evaluation |
| Lecture 11 | Evaluation & Observability | LLM-as-judge, RAGAS, tracing, cost tracking |
| Lecture 12 | Production Deployment | Streaming, caching, model routing, safety, scaling |