English | 简体中文 | 繁體中文 | 日本語 | Français
TileGym is a CUDA Tile kernel library that provides a rich collection of kernel tutorials and examples for tile-based GPU programming.
Overview | Features | Installation | Quick Start | Contributing | License
This repository aims to provide helpful kernel tutorials and examples for tile-based GPU programming. TileGym is a playground for experimenting with CUDA Tile, where you can learn how to build efficient GPU kernels and explore their integration into real-world large language models such as Llama 3.1 and DeepSeek V2. Whether you're learning tile-based GPU programming or looking to optimize your LLM implementations, TileGym offers practical examples and comprehensive guidance.

- Rich collection of CUDA Tile kernel examples
- Practical kernel implementations for common deep learning operators
- Performance benchmarking to evaluate kernel efficiency
- End-to-end integration examples with popular LLMs (Llama 3.1, DeepSeek V2)
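The core idea behind all of these examples is tile-based decomposition: the output is split into tiles, and each tile is computed by accumulating over tiles of the inputs. The sketch below plays out that loop structure for a matrix multiply in plain NumPy; it is a conceptual illustration only, not TileGym or cuTile API.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Compute A @ B by iterating over (tile x tile) sub-blocks.

    On a GPU, each output tile would map to a thread block (or a cuTile
    tile), with the K-loop streaming tiles of A and B through fast
    on-chip memory; NumPy here just mimics the same loop structure.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):          # rows of the output tile
        for j in range(0, N, tile):      # cols of the output tile
            acc = np.zeros((min(tile, M - i), min(tile, N - j)), dtype=A.dtype)
            for k in range(0, K, tile):  # accumulate over the K dimension
                acc += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
            C[i:i+tile, j:j+tile] = acc
    return C
```

Ragged edges (shapes not divisible by the tile size) are handled implicitly by NumPy slice clipping; on a real GPU this is where masking or predication comes in.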
GPU Support: TileGym requires CUDA 13.1+ and an NVIDIA Ampere (e.g., A100) or Blackwell GPU (e.g., B200, RTX 5080, RTX 5090). All released cuTile kernels are validated on both architectures. Note that Ampere performance is still being actively optimized. Download CUDA from NVIDIA CUDA Downloads.
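A quick way to confirm the toolkit requirement before installing is to parse the output of `nvcc --version`. The helper below is a small convenience sketch, not part of TileGym.

```python
import re

# Minimum toolkit version required by TileGym (CUDA 13.1+).
MIN_CUDA = (13, 1)

def parse_nvcc_release(text):
    """Extract (major, minor) from `nvcc --version` output.

    nvcc prints a line like "Cuda compilation tools, release 13.1, ...";
    returns None if no release string is found.
    """
    m = re.search(r"release (\d+)\.(\d+)", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

# Usage (assumes nvcc is on PATH):
#   import subprocess
#   out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
#   assert parse_nvcc_release(out) >= MIN_CUDA, "TileGym needs CUDA 13.1+"
```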
- PyTorch (version 2.9.1 or compatible)
- CUDA 13.1+ (Required - TileGym is built and tested exclusively on CUDA 13.1+)
- Triton (included with PyTorch installation)
If you already have torch and triton, skip this step.
```shell
pip install --pre torch --index-url https://download.pytorch.org/whl/cu130
```

We have verified that torch==2.9.1 works. Installing torch this way also pulls in the matching triton package.
TileGym uses cuda-tile for GPU kernel programming, which depends on the tileiras compiler at runtime.
```shell
pip install tilegym[tileiras]
```

This installs TileGym and all runtime dependencies, including cuda-tile[tileiras], which bundles the tileiras compiler directly into your Python environment.
If you already have tileiras available on your system (e.g., from CUDA Toolkit 13.1+), you can omit the extra:
```shell
pip install tilegym
```

To install from source:

```shell
git clone https://github.com/NVIDIA/TileGym.git
cd TileGym
pip install .[tileiras]  # or: pip install . (if you have system tileiras)
```

For editable (development) mode, use pip install -e . or pip install -e .[tileiras].
⚠️ Required: TileGym kernels use features from cuda-tile-experimental (e.g., the autotuner). This package is not available on PyPI and must be installed separately from source:

```shell
pip install "cuda-tile-experimental @ git+https://github.com/NVIDIA/cutile-python.git#subdirectory=experimental"
```

cuda-tile-experimental is maintained by the CUDA Tile team as a source-only experimental package. See more details in experimental-features-optional.
All runtime dependencies (except cuda-tile-experimental) are declared in requirements.txt and are installed automatically by both pip install tilegym and pip install ..
We also provide a Dockerfile; see modeling/transformers/README.md.
There are three main ways to use TileGym:
All kernel implementations live in the src/tilegym/ops/ directory, and you can test individual operations with minimal scripts. Function-level usage and minimal scripts for individual ops are documented in tests/ops/README.md.
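The usual pattern for sanity-checking any custom kernel is to compare it against a trusted reference implementation within a dtype-appropriate tolerance. The names below (`check_against_reference`, the stand-in softmax) are hypothetical illustrations, not TileGym's API; see tests/ops/README.md for the real entry points.

```python
import numpy as np

def check_against_reference(kernel_fn, reference_fn, inputs, rtol=1e-3, atol=1e-3):
    """Run a candidate kernel and a trusted reference on the same inputs
    and assert elementwise closeness (tolerances left loose to absorb
    reduced-precision accumulation differences)."""
    got = np.asarray(kernel_fn(*inputs))
    want = np.asarray(reference_fn(*inputs))
    np.testing.assert_allclose(got, want, rtol=rtol, atol=atol)
    return True

# Stand-in "kernel": a numerically stable softmax along the last axis.
def stable_softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # shift by max for stability
    return e / e.sum(axis=-1, keepdims=True)
```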
Evaluate kernel performance with micro-benchmarks:
```shell
cd tests/benchmark
bash run_all.sh
```

A complete benchmark guide is available in tests/benchmark/README.md.
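Micro-benchmarks generally follow the same recipe: warm up, time many iterations, report a robust statistic. Below is a minimal CPU-side sketch of that recipe, not TileGym's actual harness; for real GPU kernels you would also synchronize the device around each timed region (e.g. torch.cuda.synchronize()) so the timer does not return before the kernel finishes.

```python
import time
import statistics

def bench(fn, *args, warmup=10, iters=100):
    """Return the median wall-clock time of fn(*args) in milliseconds.

    Warmup runs absorb one-time costs (JIT compilation, autotuning,
    allocator churn) so they do not pollute the timed iterations.
    """
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        times.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(times)
```

The median is preferred over the mean here because scheduler noise skews timings toward occasional large outliers.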
Use TileGym kernels in end-to-end inference scenarios. We provide runnable scripts and instructions for transformer language models (e.g., Llama 3.1-8B) accelerated using TileGym kernels.
First, install the additional dependency:
```shell
pip install accelerate==1.13.0 --no-deps
```

Containerized Setup (Docker):

```shell
docker build -t tilegym-transformers -f modeling/transformers/Dockerfile .
docker run --gpus all -it tilegym-transformers bash
```

More details in modeling/transformers/README.md.
TileGym also includes experimental cuTile.jl kernel implementations in Julia. These are self-contained in the julia/ directory and do not require the Python TileGym package.
Prerequisites: Julia 1.12+, CUDA 13.1, Blackwell GPU
```shell
# Install Julia (if not already installed)
curl -fsSL https://install.julialang.org | sh

# Install dependencies
julia --project=julia/ -e 'using Pkg; Pkg.instantiate()'

# Run tests
julia --project=julia/ julia/test/runtests.jl
```

See julia/Project.toml for the full dependency list.
We welcome contributions of all kinds. Please read our CONTRIBUTING.md for guidelines, including the Contributor License Agreement (CLA) process.
- Project license: MIT
- Third-party attributions and license texts: