| title | emoji | colorFrom | colorTo | sdk | app_port | pinned |
|---|---|---|---|---|---|---|
| TorchCode | 🔥 | red | yellow | docker | 7860 | false |
Crack the PyTorch interview.
Practice implementing operators and architectures from scratch: the exact skills top ML teams test for.
Like LeetCode, but for tensors. Self-hosted. Jupyter-based. Instant feedback.
Top companies (Meta, Google DeepMind, OpenAI, etc.) expect ML engineers to implement core operations from memory on a whiteboard. Reading papers isn't enough; you need to write `softmax`, `LayerNorm`, `MultiHeadAttention`, and full Transformer blocks cold.
TorchCode gives you a structured practice environment with:
| | Feature | Description |
|---|---|---|
| 🧩 | 13 curated problems | The most frequently asked PyTorch interview topics |
| ⚙️ | Automated judge | Correctness checks, gradient verification, and timing |
| 🎨 | Instant feedback | Colored pass/fail per test case, just like competitive programming |
| 💡 | Hints when stuck | Nudges without full spoilers |
| 📖 | Reference solutions | Study optimal implementations after your attempt |
| 📊 | Progress tracking | What you've solved, best times, and attempt counts |
No cloud. No signup. No GPU needed. Just `make run`, or try it instantly on Hugging Face.
Launch on Hugging Face Spaces: it opens a full JupyterLab environment in your browser. Nothing to install.
```shell
# Run the prebuilt image directly:
docker run -p 8888:8888 -e PORT=8888 ghcr.io/duoan/torchcode:latest

# Or, from a clone of the repo:
make run
```

Open http://localhost:8888 and you're in. Works with both Docker and Podman (auto-detected).
The bread and butter of ML coding interviews. You'll be asked to write these without `torch.nn`.
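To make "from scratch" concrete, here is a minimal sketch of the kind of answer these problems expect, using softmax as an example (this is an illustration, not the repo's reference solution):

```python
import torch

def softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Subtract the row-wise max first: exp() of large logits overflows,
    # and softmax is invariant to shifting its input by a constant.
    x = x - x.max(dim=dim, keepdim=True).values
    e = x.exp()
    return e / e.sum(dim=dim, keepdim=True)

x = torch.randn(2, 5)
print(torch.allclose(softmax(x), torch.softmax(x, dim=-1), atol=1e-6))  # True
```

The numerical-stability shift is exactly the kind of detail interviewers probe for.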
If you're interviewing for any role touching LLMs or Transformers, expect at least one of these.
| # | Problem | What You'll Implement | Difficulty | Key Concepts |
|---|---|---|---|---|
| 13 | GPT-2 Block | `GPT2Block` (`nn.Module`) | | Pre-norm, causal MHA + MLP (4x, GELU), residual connections |
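As a rough sketch of what the GPT-2 block row describes (pre-norm, causal attention, 4x GELU MLP, residuals) — note this leans on `nn.MultiheadAttention` for brevity, whereas the actual problems ask you to build attention by hand:

```python
import torch
import torch.nn as nn

class GPT2Block(nn.Module):
    """Pre-norm Transformer block: x + MHA(LN(x)), then x + MLP(LN(x))."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),  # 4x expansion
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T = x.size(1)
        # Boolean causal mask: True entries are masked out (future positions).
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=causal)
        x = x + a                       # residual around attention
        x = x + self.mlp(self.ln2(x))   # residual around MLP
        return x
```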
Each problem has two notebooks:
| File | Purpose |
|---|---|
| `01_relu.ipynb` | ✍️ Blank template: write your code here |
| `01_relu_solution.ipynb` | 📖 Reference solution: check when stuck |
1. Open a blank notebook → read the problem description
2. Implement your solution → use only basic PyTorch ops
3. Debug freely → `print(x.shape)`, check gradients, etc.
4. Run the judge cell → `check("relu")`
5. See instant colored feedback → ✅ pass / ❌ fail per test case
6. Stuck? Get a nudge → `hint("relu")`
7. Review the reference solution → `01_relu_solution.ipynb`
```python
from torch_judge import check, hint, status

check("relu")              # Judge your implementation
hint("causal_attention")   # Get a hint without a full spoiler
status()                   # Progress dashboard: solved / attempted / todo
```

Total: ~6–8 hours spread across 2–3 weeks. Perfect for interview prep on a deadline.
| Week | Focus | Problems | Time |
|---|---|---|---|
| 1 | 🧱 Foundations | ReLU → Softmax → Linear → LayerNorm → BatchNorm → RMSNorm | 1–2 hrs |
| 2 | 🧠 Attention Deep Dive | SDPA → MHA → Causal → GQA → Sliding Window → Linear Attn | 3–4 hrs |
| 3 | 🏗️ Integration | GPT-2 Block + speed run (re-implement all, timed) | 1–2 hrs |
```
┌────────────────────────────────────────────┐
│         Docker / Podman Container          │
│                                            │
│  JupyterLab (:8888)                        │
│  ├── templates/   (reset on each run)      │
│  ├── solutions/   (reference impl)         │
│  ├── torch_judge/ (auto-grading)           │
│  └── PyTorch (CPU), NumPy                  │
│                                            │
│  Judge checks:                             │
│  ✓ Output correctness (allclose)           │
│  ✓ Gradient flow (autograd)                │
│  ✓ Shape consistency                       │
│  ✓ Edge cases & numerical stability        │
└────────────────────────────────────────────┘
```
Single container. Single port. No database. No frontend framework. No GPU.
```shell
make run    # Build & start (http://localhost:8888)
make stop   # Stop the container
make clean  # Stop + remove volumes + reset all progress
```

TorchCode uses auto-discovery: just drop a new file in `torch_judge/tasks/`:
```python
TASK = {
    "id": "my_task",
    "title": "My Custom Problem",
    "difficulty": "medium",
    "function_name": "my_function",
    "hint": "Think about broadcasting...",
    "tests": [ ... ],
}
```

No registration needed. The judge picks it up automatically.
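Auto-discovery of this kind is typically a small loop over the package's modules. The sketch below shows one plausible way it could work; the function name and package path are assumptions for illustration, not `torch_judge` internals:

```python
import importlib
import pkgutil

def discover_tasks(package_name: str = "torch_judge.tasks") -> dict:
    """Collect every module-level TASK dict found in the tasks package."""
    pkg = importlib.import_module(package_name)
    tasks = {}
    for info in pkgutil.iter_modules(pkg.__path__):
        mod = importlib.import_module(f"{package_name}.{info.name}")
        task = getattr(mod, "TASK", None)
        if task is not None:
            tasks[task["id"]] = task
    return tasks
```

Because each task file only needs to define a `TASK` dict, no central registry has to be edited.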
Do I need a GPU?
No. Everything runs on CPU. The problems test correctness and understanding, not throughput.
Can I keep my solutions between runs?
Blank templates reset on every `make run` so you practice from scratch. Save your work under a different filename if you want to keep it.
How are solutions graded?
The judge runs your function against multiple test cases using `torch.allclose` for numerical correctness, verifies that gradients flow properly via autograd, and checks edge cases specific to each operation.
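As a rough illustration of that grading recipe (this is a sketch in the same spirit, not the repo's actual judge; `grade` and its signature are invented for the example):

```python
import torch

def grade(fn, ref, *shapes):
    """Compare fn against a reference and verify gradients flow."""
    for shape in shapes:
        x = torch.randn(*shape, requires_grad=True)
        out = fn(x)
        # Numerical correctness against the reference implementation
        assert torch.allclose(out, ref(x.detach()), atol=1e-5), "wrong output"
        # Gradients must flow back to the input via autograd
        out.sum().backward()
        assert x.grad is not None and torch.isfinite(x.grad).all()
    return "pass"

print(grade(torch.relu, torch.relu, (3, 4), (2, 5, 7)))  # pass
```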
Who is this for?
Anyone preparing for ML/AI engineering interviews at top tech companies, or anyone who wants to deeply understand how PyTorch operations work under the hood.
Built for engineers who want to deeply understand what they build.
If this helped your interview prep, consider giving it a ⭐