$ yuktics v0.1

T0 — The Meta Layer module 00.3 ~3 hrs

Using AI as your tutor

The single biggest unfair advantage you have over a CS student five years ago. Most students are using it backwards — copying answers instead of learning faster. This module fixes that.

Prerequisites

  • a working dev environment
  • an Anthropic / OpenAI account (free tier OK)

Stack

  • Claude (claude.ai or Claude Code)
  • ChatGPT (optional, for compare-runs)
  • Cursor or VS Code
  • a real, hard problem you currently don't understand

By the end of this module

  • Pick the right AI tool for the right task — chat, IDE assistant, autonomous agent.
  • Ask questions that produce understanding, not just answers.
  • Recognize when you are about to learn something — and when you are about to skip past learning it.
  • Set up Claude Code (or equivalent) so you can pair-program in your terminal by the end of the module.

If you remember one thing from this entire curriculum, remember this: how you use AI as a student is more important than what you study. The CS student who uses AI well in 2026 will, in three years, be miles ahead of the one who is grinding the same problems we all ground in 2018. The CS student who uses AI badly will, in three years, be unemployable, and they won’t know why.

The split between those two students is not about access. Both have the same tools. The split is about how they ask, when they ask, and what they do with the answer. This module is the one that puts you on the right side of that split.

The wrong way (and why it feels right)

The default move when you don’t understand something is now: paste the problem into ChatGPT, get a working answer, move on. This is what most students are doing in 2026. It feels productive. It produces correct code. It is also slowly making them worse engineers.

The reason is mechanical. The thing your brain locks in is whatever it had to struggle with on the way to a solution. If the AI removes the struggle, you stop locking anything in. You become someone who can finish problem sets but cannot debug your own code, cannot extend a system, and cannot interview, because the interviewer is going to ask you to think out loud, and you have spent three years not doing that.

The fix is not to stop using AI. The fix is to use it like a tutor instead of like an answer machine. Tutors do not give you the answer. Tutors:

  • Make sure you understand the question.
  • Watch you try.
  • Point you at the next concept you’re missing, not the final solution.
  • Push back when your reasoning is wrong.
  • Tell you what you got right.

Modern AI can do all five of those things. It just won’t do them by default. You have to ask.

The three modes

There are three distinct ways to use AI as a student, and they suit different moments. Get them mixed up and you’ll either learn nothing or build nothing.

Mode 1 — Chat (claude.ai, ChatGPT)

Use chat for understanding. Concepts you’ve never seen, errors you can’t read, papers you can’t parse, syntax you don’t recognize. The artifact is your understanding, not a finished file. Treat it as a conversation: you ask, you summarize what you heard, the model corrects you, you ask again.

Mode 2 — IDE assistant (Cursor, VS Code Copilot, Claude Code)

Use IDE-integrated AI for building, with you in the driver’s seat. Inline completions. Whole-function generation. Refactors. The model sees your code, you steer. Best for the work you broadly understand but want to move faster on.

Mode 3 — Autonomous agent (Claude Code, Codex)

Use agentic mode for shipping work that crosses files, runs tests, edits in batch. You give a goal, the model takes 5–20 actions, you review the diff. Best for the parts of a project you have already understood and now just want done.

The mistake students make is using mode 3 when they should be in mode 1, and using mode 1 when they should be in mode 2. Knowing which mode you’re in for any given task is half the skill.

How to ask, so you actually learn

A small set of habits separates students who learn from AI from students who only finish problem sets with it.

1. State what you already think

Before asking, write one sentence: here is what I think the answer is, and here is why I’m uncertain. Then ask. The model will correct your specific misconception instead of dumping a generic explanation. This is the single highest-yield habit on this list. It works because it forces you to commit to a hypothesis before the model commits one for you.

Bad: “How does memoization work?” Better: “I think memoization works by caching the result of expensive function calls keyed by their arguments, and that this only helps when the function is pure. I’m uncertain whether it helps for recursive functions where the same args get hit on different branches. Am I right?”
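The hypothesis in the “better” prompt is also something you can verify yourself before asking. A minimal Python sketch, using the standard library’s `functools.lru_cache` (the Fibonacci example is illustrative, not tied to any particular problem set):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache, fib recomputes the same subproblems
    # exponentially many times across different recursion branches.
    # With it, each distinct argument is computed exactly once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(35))           # 9227465, near-instant instead of seconds
print(fib.cache_info())  # hits > 0: repeated args landed on the cache
```

The nonzero `hits` count is the answer to the uncertainty in the prompt: yes, memoization helps recursive functions precisely because the same arguments recur on different branches.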

2. Ask for the concept, not the code

When you’re stuck on a problem, ask the model to explain the relevant concept without showing you a solution. Then go solve it yourself. If you fail, come back and show your attempt.

Prompt: “I’m stuck on a sliding-window problem. Explain the sliding-window pattern in general terms — what kinds of subproblems it solves, how to recognize them — but don’t show me code for this specific problem. I want to attempt it first.”
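For orientation after you have made your own attempt, the fixed-size form of the pattern looks roughly like this — a generic sketch of the technique, not a solution to any specific exercise (the `max_window_sum` name and the example inputs are invented for illustration):

```python
def max_window_sum(nums, k):
    """Largest sum of any contiguous subarray of length k.

    The sliding-window idea: instead of recomputing each window
    from scratch (O(n*k)), slide the window one element at a time,
    adding the entering element and dropping the leaving one (O(n)).
    """
    window = sum(nums[:k])  # first window, computed once
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide right by one
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9  (the window 5+1+3)
```

The recognition cue the prompt asks about is visible in the code: the problem is over contiguous runs, and the answer for one window can be updated cheaply from its neighbor instead of recomputed.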

3. Treat its first answer as a hypothesis, not a verdict

Models are confident by default. Push back. “Are you sure? What’s the failure mode of that approach? When would it not work?” You’ll learn more from one round of pushback than from three rounds of agreement.

4. Ask for the wrong way

Most learning resources only show you the right way. The model can show you the wrong ways. “Show me three approaches to this problem, ranked by how a senior engineer would feel about them, and tell me what’s wrong with the first two.” You will permanently remember the wrong ways.

5. Force a checkpoint

After the explanation, ask the model to quiz you. “Now ask me three questions to check whether I actually understood this, and don’t tell me the answers until I try.” Most students never do this. It is the difference between thinking you understood something and verifying you did.

What never to ask AI

Some things are still better learned the old way, and a good student knows which.

  • Anything where the struggle is the point. Your first time writing a binary search. Your first time debugging a segfault. Your first time understanding pointers. AI can take the struggle away, and the struggle was the lesson.
  • Anything you’d be embarrassed to forget. Authentication flows. Cryptographic primitives. The semantics of git rebase. If you can’t explain it without AI, you don’t know it.
  • Anything taste-driven that depends on context the model doesn’t have. Naming a service. Choosing a stack for your product. Deciding what to ship next. The model has no skin in your game, and its defaults are the average of every codebase ever scraped.

A simple test: if losing AI access for an hour would prevent you from doing this task, that task is one you needed to learn the old way.

A small assignment, today

Spend the next 90 minutes doing the following, in order. This is the build for this module.

  1. Pick a CS concept you currently don’t understand well. (Suggestions: dynamic programming, async/await, hash tables, garbage collection, transactions and isolation levels, the OSI model, Big-O proof techniques.)
  2. Open Claude or ChatGPT. Apply habit 1: state what you think, ask the model to correct you.
  3. Apply habit 2: ask for the concept, no code.
  4. Try to use the concept on a small toy problem of your choosing. Get it wrong.
  5. Show the model what you tried, ask it to explain what’s specifically wrong without rewriting it.
  6. Try again. Get closer.
  7. Apply habit 5: ask the model to quiz you.
  8. Write three sentences in a notebook (paper or digital) summarizing what you now understand.

If those three sentences are clearer than they would have been after a tutorial, you have used the tool correctly. Do this every time.

Set up Claude Code (or equivalent) before you finish

The best use of AI for a CS student in 2026 is in your terminal, where it can read your code, run your tests, and pair-program with you on real projects. Set this up before you finish this module.

# Claude Code (recommended, this curriculum uses it)
npm install -g @anthropic-ai/claude-code

# Authenticate
claude
# follow the OAuth flow, paste your API key, or use a Claude.ai login

# Inside any project directory:
cd ~/my-project
claude

Spend 30 minutes asking it to do something small in a real project: rename a function across files, add a test, explain a confusing piece of someone else’s code. You will feel the difference between this and chat-based AI within five minutes. Once you’ve felt it, you will not go back.

Alternatives if you prefer: Cursor, Aider, Continue, GitHub Copilot in VS Code. They are all good. The point is not the tool — it is putting AI inside the loop where you are actually working.

Going deeper

When you have specific questions, in this order:

  1. Anthropic — Prompt engineering for everyone — the canonical doc. Less hype than most.
  2. Karpathy — Software is changing (again). video · 30 min · why this is a real shift, not a fad.
  3. Claude Code docs — for serious agentic-coding use.
  4. Simon Willison’s blog — the most consistently sharp writer on what’s actually possible right now.

Skip the “10 ChatGPT prompts that will change your life” content. None of it will.

Checkpoints

If any answer wobbles, reread the corresponding section above.

  1. Name the three modes of AI use as a student. Give one task that fits each mode and one task that does not.
  2. What is the danger of pasting a problem into ChatGPT and using the answer? Why does it feel productive?
  3. Walk through the five “how to ask” habits from memory. You should be able to name all five.
  4. Give three examples of things you should not ask AI for, and explain why for each.
  5. What is the test for whether you needed to learn something the old way?

If you can answer all five from memory, you’ve earned 00.3. The next move depends on where you started — most students go to 00.4 (reading code, docs, errors) or jump straight into 01.1 (Python deep enough to be dangerous).