Hands-on AI & LLM App Development
1. LLMs Demystified: From Basics to API Integration
1.1 What Will You Learn in This Course and Why Does It Matter? (3:45)
1.2 What Exactly Is a Large Language Model (LLM)?
1.3 Why Do LLMs Rely on Probability and Not Certainty?
1.5 How Do LLMs Actually Learn from Data?
1.6 How Does a Large Language Model Work?
1.7 What Are the Key Parameters That Shape an LLM’s Output?
1.8 What Are Tokens and Why Do They Matter?
1.9 From Tokens to Context: How LLMs Process Input (4:08)
1.10 What Is a Context Window and How Does It Affect Input?
1.11 What Is Temperature and How Does It Influence Creativity?
1.12 Why Don't LLMs Always Pick the Top Word? (2:52)
1.13 What Is Top-p Sampling and How Is It Used?
1.14 What Is Top-k Sampling?
1.15 What’s the Difference Between Top-p and Top-k Sampling?
1.16 How to Control Output Length and Quality?
1.17 What Does an API Call Actually Cost?
1.18 API Key Setup Guide: From Hugging Face to OpenAI in Colab
1.19 Key Takeaways & Summary
1.20 Quiz: Let's Test Your Knowledge
1.21 Hands-on Examples & Project
2. Designing Effective Prompts and Building with LangChain
2.1 What Makes a Good Prompt Different from a Great One?
2.2 Prompt Patterns Explained: Zero-shot to Few-shot (2:16)
2.3 What Are Prompt Patterns Like Zero-shot, One-shot, and Few-shot?
2.4 How Do Hallucinations Occur in LLMs and How Can You Minimize Them?
2.5 What Is LangChain and Why Should I Use It?
2.6 What Is a Model in LangChain and How to Choose One?
2.8 What Is a Prompt in LangChain and How Is It Structured?
2.9 What Are Output Parsers and How Do They Help Extract Results?
2.10 What Is a Chain in LangChain and How Does It Work?
2.11 What Are Indexes in LangChain and When to Use Them?
2.13 What Is Memory in LangChain and How Does It Keep Context?
2.14 Key Takeaways & Summary
2.15 Quiz: Let's Test Your Knowledge
3. Retrieval-Augmented Generation (RAG) with Vector Databases
3.2 Why Do LLMs Need External Knowledge to Answer Accurately?
3.3 What Are Embeddings and Why Are They Useful?
3.4 How Do Embeddings Power Semantic Search?
3.6 Why Not Use Traditional Databases for Semantic Search?
3.7 What Is a Vector Database and How Does It Work?
3.8 What Is Retrieval-Augmented Generation (RAG)?
3.9 How Does Embedding-Based Retrieval Work?
3.10 How Do Euclidean and Cosine Similarity Compare?
3.12 How Are Word Frequencies Turned Into Vectors?
3.13 Key Takeaways & Summary
3.14 Quiz: Let's Test Your Knowledge