You uploaded 10 documents and generated a slide deck — but the AI put its weight on background material instead of your key findings, glossed over critical data, or gave a surface overview when you needed a deep dive on one specific section. This tutorial covers three techniques for controlling NotebookLM slide content focus: strategic source filtering, steering prompts, and structured outlines — plus how Gamma and Claude handle the same challenge.
NotebookLM distributes attention roughly equally across all selected sources. When you check 10 documents and hit Generate, the AI tries to represent every source rather than prioritize the most important one. A background context file gets as much slide space as your key findings. A long document gets more coverage than a short-but-critical one. The result: a deck that skims everything instead of going deep where it counts.
A second compounding factor is non-deterministic generation — the same sources and prompt can produce meaningfully different decks on consecutive runs. A strong first output can't simply be "tweaked" by regenerating; key points may shift or disappear entirely the next time.
Both problems have the same solution: give NotebookLM tighter constraints. The more precise your instructions, the less room the AI has to make decisions you didn't intend.
Source filtering is the highest-leverage technique available. Before generating, deselect any sources not directly relevant to the presentation goal. A notebook with 15 documents may need only 3–5 to produce a focused deck. The practical rule: if a source won't appear as a citation in the final slides, deselect it.
For multi-topic notebooks, generate separate decks from different source subsets. A product launch notebook might contain market research, technical specs, and customer feedback. Generate three targeted decks: one from the market research (for executives), one from technical specs (for engineers), one from customer feedback (for the product team). Each deck stays focused because the source set is focused.
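The subset-per-deck planning step can be sketched as a small helper. Everything below is illustrative (the file names, topic tags, and `plan_decks` function are assumptions, not a NotebookLM API) — in NotebookLM itself the selection happens via the source checkboxes:

```python
# Sketch: map each audience to the source subset its deck should use.
# All names here are hypothetical examples, not a real API.

def plan_decks(sources, audience_topics):
    """Return {audience: [source names tagged with that audience's topics]}."""
    return {
        audience: [name for name, topic in sources.items() if topic in topics]
        for audience, topics in audience_topics.items()
    }

sources = {
    "market_survey.pdf": "market research",
    "competitor_scan.pdf": "market research",
    "api_spec.md": "technical specs",
    "nps_results.csv": "customer feedback",
}

audience_topics = {
    "executives": {"market research"},
    "engineers": {"technical specs"},
    "product team": {"customer feedback"},
}

decks = plan_decks(sources, audience_topics)
# decks["engineers"] → ["api_spec.md"]
```

The payoff is a written record of which sources fed which deck, so a regeneration weeks later starts from the same checkbox state.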
Vague prompts produce vague slides. "Make a presentation about my research" hands the AI maximum freedom to decide what matters. Effective steering prompts name names: "Focus on the three statistically significant findings in Source A. Include the comparison data from Table 3 in Source B. Ignore the literature review section entirely."
The most effective steering prompts follow a three-part structure: (1) Include what — name specific sections, data points, or arguments. (2) Exclude what — explicitly tell the AI what to skip. (3) Emphasize what — identify the single most important takeaway and instruct the AI to build the narrative around it.
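One way to keep the three parts consistent across regenerations is to assemble the prompt from explicit lists rather than rewriting it freehand each time. A minimal sketch — the `steering_prompt` helper and the example takeaway are hypothetical, not anything NotebookLM exposes:

```python
# Assemble a three-part steering prompt: include / exclude / emphasize.
# Function name and phrasing are illustrative assumptions.

def steering_prompt(include, exclude, emphasize):
    parts = [
        "Focus on: " + "; ".join(include) + ".",
        "Ignore entirely: " + "; ".join(exclude) + ".",
        "Build the narrative around this takeaway: " + emphasize,
    ]
    return " ".join(parts)

prompt = steering_prompt(
    include=[
        "the three statistically significant findings in Source A",
        "the comparison data from Table 3 in Source B",
    ],
    exclude=["the literature review section"],
    emphasize="the effect held across all three cohorts",
)
print(prompt)
```

Keeping the lists in a file means the exclude list survives between sessions — the part most often forgotten when retyping a prompt.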
The most precise control comes from providing a slide-by-slide outline. Instead of letting the AI decide structure, write an outline specifying what appears on each slide: "Slide 1: Title. Slide 2: Three key findings from Source A, shown as large numbers with context. Slide 3: Comparison of Method X vs. Method Y from Source B, in a two-column table."
This approach limits the AI's role to visual execution — it handles layout, graphics, and design while you control content and structure. Combined with source filtering and a steering prompt, this produces the most predictable and focused results of any NotebookLM slide workflow.
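An outline like this can be kept as structured data and rendered into the generation prompt, so every regeneration starts from the same slide plan. A minimal sketch, mirroring the example above (the rendering format is an assumption, not a NotebookLM requirement):

```python
# Render a slide-by-slide outline into a single steering prompt.
# The wrapper sentence and format are illustrative.

outline = [
    "Title",
    "Three key findings from Source A, shown as large numbers with context",
    "Comparison of Method X vs. Method Y from Source B, in a two-column table",
]

prompt = "Follow this outline exactly. " + " ".join(
    f"Slide {i}: {spec}." for i, spec in enumerate(outline, start=1)
)
print(prompt)
```

Editing the list and re-rendering is faster than rewriting the prompt, and it makes the iteration loop below reproducible: change one slide spec, regenerate, compare.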
Treat the first generation as a draft, not a final product. The most effective workflow: (1) Generate with moderate constraints. (2) Review which slides hit the mark and which went off-target. (3) Use the prompt-based revision feature to fix specific slides. (4) If the overall structure is wrong, refine the outline and regenerate. Two to three iterations typically produce a deck that matches the original intent.
| Control Dimension | NotebookLM | Gamma | Claude |
|---|---|---|---|
| Source grounding | Best — uses only your uploaded documents | None — generates from the prompt; may hallucinate | Good — faithfully processes uploaded files |
| Content focus control | Source filtering + steering prompt + outline | Prompt only — depends entirely on your description | Conversational — refine through natural-language iteration |
| Exclusion control | Deselect sources; add "ignore X" in prompt | No source concept — include only what you describe | Strong — "skip section Y, focus on Z" |
| Reproducibility | Low — output varies across runs | Low — similar variation | Higher — iterative editing preserves context |
| Iterative refinement | Prompt-based slide revision (new feature) | Direct in-browser editing of any element | Best — conversational "change slide 3 to..." workflow |
| Data accuracy | Highest — cites your sources | Lowest — may fabricate statistics | High — depends on uploaded context |
Content-focus recommendation: When accuracy is non-negotiable — research presentations, client reports, financial summaries — start with NotebookLM for source-grounded content, then use revision prompts to sharpen focus. When speed matters more than source fidelity, Gamma's direct editing is faster. When you need precise iterative control, Claude's back-and-forth editing conversation is the strongest option.