
[Build] LLVM-18#274

Merged
hughperkins merged 34 commits into main from hp/llvm-18-v2
Nov 16, 2025

Conversation

@hughperkins (Collaborator) commented Nov 15, 2025

Issue: #

Brief Summary

Based heavily on taichi-dev/taichi#8735 by @johnnynunez

  • revised approach to pass upgrade

copilot:summary

Walkthrough

copilot:walkthrough

@hughperkins hughperkins mentioned this pull request Nov 16, 2025
@hughperkins hughperkins changed the title from "[Build] LLVM-18 attempt 2" to "[Build] LLVM-18" Nov 16, 2025
@hughperkins hughperkins marked this pull request as ready for review November 16, 2025 17:25
Copilot AI left a comment

Pull Request Overview

This PR upgrades the LLVM version from 15.0.7 to 18.1.8, adapting the codebase to accommodate breaking changes in LLVM's API, particularly the transition from the legacy pass manager to the new pass manager infrastructure and the migration to opaque pointer types.

  • Updates LLVM prebuilt binaries to version 18.1.8
  • Migrates CUDA and CPU JIT compilation to use the new PassBuilder API with ModulePassManager
  • Adapts pointer type handling for LLVM 18's opaque pointers using PointerType::getUnqual()
  • Temporarily skips tests for quantized floats with 8 exponent bits
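The PassBuilder migration mentioned above maps onto a now-standard LLVM 18 pattern. A hedged sketch of that pattern (illustrative only; `run_default_pipeline`, `module`, and `tm` are placeholder names, not the PR's actual code):

```cpp
// Sketch: optimizing a module with the new pass manager (LLVM 18).
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Target/TargetMachine.h"

void run_default_pipeline(llvm::Module &module, llvm::TargetMachine *tm) {
  llvm::LoopAnalysisManager lam;
  llvm::FunctionAnalysisManager fam;
  llvm::CGSCCAnalysisManager cgam;
  llvm::ModuleAnalysisManager mam;

  // PassBuilder replaces the legacy PassManagerBuilder.
  llvm::PassBuilder pb(tm);
  pb.registerModuleAnalyses(mam);
  pb.registerCGSCCAnalyses(cgam);
  pb.registerFunctionAnalyses(fam);
  pb.registerLoopAnalyses(lam);
  pb.crossRegisterProxies(lam, fam, cgam, mam);

  // Roughly the new-PM equivalent of the old -O2 pipeline.
  llvm::ModulePassManager mpm =
      pb.buildPerModuleDefaultPipeline(llvm::OptimizationLevel::O2);
  mpm.run(module, mam);
}
```

Note that all four analysis managers must be registered and cross-registered before building the pipeline; otherwise `mpm.run` asserts on missing analyses.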

Reviewed Changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 13 comments.

Summary per file:
  • .github/workflows/scripts/ti_build/llvm.py: Updates LLVM version to 18.1.8 and build version, removes AMDGPU-specific path
  • CMakeLists.txt: Removes version detection code
  • gstaichi/runtime/llvm/llvm_context_pass.h: Comments out deprecated PassManagerBuilder include
  • gstaichi/runtime/cuda/jit_cuda.h: Adds new pass manager headers, removes legacy PassManagerBuilder
  • gstaichi/runtime/cuda/jit_cuda.cpp: Migrates to new PassBuilder API with ModulePassManager and updates to CodeGenOptLevel enum
  • gstaichi/runtime/cpu/jit_cpu.cpp: Updates to use toPtr() for ExecutorAddr conversion and Host.h location
  • gstaichi/codegen/cpu/codegen_cpu.cpp: Migrates to new PassBuilder API, updates to CodeGenOptLevel and CodeGenFileType enums
  • gstaichi/runtime/amdgpu/jit_amdgpu.cpp: Removes deprecated adjustPassManager call
  • gstaichi/rhi/cuda/cuda_context.cpp: Updates maximum compute capability from 86 to 90 for newer GPUs
  • gstaichi/codegen/llvm/struct_llvm.cpp: Replaces typed pointer methods with opaque PointerType::getUnqual()
  • gstaichi/codegen/llvm/llvm_codegen_utils.cpp: Updates pointer type comparison for opaque pointers
  • gstaichi/codegen/llvm/codegen_llvm.cpp: Updates pointer type creation and adds TI_NOT_IMPLEMENTED for deprecated path
  • gstaichi/codegen/cuda/codegen_cuda.cpp: Updates pointer type creation to use PointerType::getUnqual()
  • tests/python/test_quant_float_shared_exp.py: Adds pytest.skip for exponent_bits == 8 tests
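The opaque-pointer changes listed above share one pattern: pointee types disappear from pointer types. A minimal before/after sketch (illustrative only; `ctx` and `some_type` are placeholder names, not the PR's code):

```cpp
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

void pointer_sketch(llvm::LLVMContext &ctx, llvm::Type *some_type) {
  // LLVM <= 15 (typed pointers): the pointee type was part of the
  // pointer type and could be recovered from it:
  //   llvm::PointerType *p = some_type->getPointerTo();
  //   llvm::Type *elem = p->getPointerElementType();  // removed in 18

  // LLVM 18 (opaque pointers): a pointer type is just "ptr" plus an
  // address space; callers must carry the pointee type themselves
  // (e.g. pass it explicitly to CreateGEP / CreateLoad).
  llvm::PointerType *p = llvm::PointerType::getUnqual(ctx);
  (void)p;
  (void)some_type;
}
```

This is why the `is_same_type` comparison discussed below can no longer recurse into element types: under opaque pointers, the address space is the only information a pointer type still carries.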


-   return is_same_type(a->getPointerElementType(), b->getPointerElementType());
+   auto ptr_a = llvm::cast<llvm::PointerType>(a);
+   auto ptr_b = llvm::cast<llvm::PointerType>(b);
+   return ptr_a->getAddressSpace() == ptr_b->getAddressSpace();
Copilot AI Nov 16, 2025

With LLVM 18's opaque pointers, the updated logic only compares address spaces, but ignores that the function description (lines 12-23) indicates the function should check if types are renamed versions of each other. This change fundamentally breaks the original purpose of the function - it should be comparing pointer types more comprehensively, not just address spaces. The original recursive check via getPointerElementType() was checking type equivalence, but now two pointers with different pointee types will be considered the same as long as they have the same address space.

Suggested change
-   return ptr_a->getAddressSpace() == ptr_b->getAddressSpace();
+   if (ptr_a->getAddressSpace() != ptr_b->getAddressSpace()) {
+     return false;
+   }
+ #if LLVM_VERSION_MAJOR >= 18
+   // Opaque pointers: cannot compare element types directly
+   if (ptr_a->isOpaque() || ptr_b->isOpaque()) {
+     // Fallback to type name comparison for opaque pointers
+     auto a_name = type_name(a);
+     auto b_name = type_name(b);
+     return a_name == b_name;
+   }
+ #endif
+   // Non-opaque pointers: compare element types recursively
+   return is_same_type(ptr_a->getPointerElementType(), ptr_b->getPointerElementType());

Collapsed comment threads:
  • gstaichi/rhi/cuda/cuda_context.cpp (1)
  • gstaichi/runtime/amdgpu/jit_amdgpu.cpp (2)
  • tests/python/test_quant_float_shared_exp.py (6)
  • gstaichi/codegen/cpu/codegen_cpu.cpp (2, one outdated)
  • gstaichi/runtime/cuda/jit_cuda.cpp (3, all outdated)
  • gstaichi/codegen/llvm/codegen_llvm.cpp (1, outdated)
  • gstaichi/runtime/llvm/llvm_context_pass.h (1, outdated)
@hughperkins (Collaborator, Author)

Note: I'm going to ignore the things Copilot said, unless anyone bumps any of its comments.

@hughperkins (Collaborator, Author)

Though Copilot makes a good point about the AMD code, actually 🤔 Basically, we can't support AMD without having CI for AMD, I feel.

@johnnynunez (Contributor) commented Nov 16, 2025

Though Copilot makes a good point about the AMD code, actually 🤔 Basically, we can't support AMD without having CI for AMD, I feel.

There are a lot of possible code optimizations, I think.
I'm not an expert on this; I was only able to run on LLVM 20.
But from Taichi there was a lot left to do, and ideas to pursue.
Upgrading to that version is a good starting point for supporting newer devices.

5 participants