remove hardcoded torch.backends.cuda.matmul.allow_tf32 = True #18

Open
Parskatt wants to merge 4 commits into naver:master from Parskatt:master

Conversation

@Parskatt

allow_tf32 can break code that relies on high precision, so it shouldn't be hardcoded.
In my case it messed up torch.cdist: a bunch of small distance values were rounded to 0 (not good).
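To see why small values can vanish: TF32 keeps fp32's exponent range but only 10 explicit mantissa bits (fp32 has 23), so differences between nearby inputs can round away entirely. A minimal sketch of that precision loss, using truncation as an approximation of TF32 rounding (no GPU or torch required):

```python
import struct

def tf32_round(x: float) -> float:
    # TF32 keeps fp32's 8 exponent bits but only 10 explicit mantissa
    # bits (fp32 has 23), so the 13 low mantissa bits are dropped.
    # This truncating sketch approximates that loss of precision.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    bits &= 0xFFFFE000  # zero the 13 low mantissa bits
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# The gap between two nearby coordinates collapses to 0:
a, b = 1.0, 1.0 + 1e-5
print(tf32_round(b) - tf32_round(a))  # 0.0 under TF32; ~1e-5 in fp32
```

This is exactly the failure mode for a pairwise-distance kernel like torch.cdist: coordinate differences smaller than the TF32 mantissa step get flushed to zero before they are squared and summed.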

@dabeschte

@yocabon Would be great if we could get this PR in - at least the allow_tf32 part.

I spent more than a day finding out why one (unrelated) model worked with CUDA 11.8 but no longer with CUDA >= 12, and it turned out to be the low-precision matmuls enabled in croco.py.
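One way to avoid this class of bug is to scope the flag to the code that wants it rather than setting it at import time. A minimal sketch of such a helper (the name `scoped_flag` is hypothetical, not part of torch; a `SimpleNamespace` stands in for `torch.backends.cuda.matmul` so the sketch runs without torch):

```python
from contextlib import contextmanager
from types import SimpleNamespace

@contextmanager
def scoped_flag(obj, name, value):
    # Set obj.<name> for the duration of the with-block, then restore
    # whatever value the caller had, instead of clobbering it globally.
    prev = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, prev)

# Stand-in for torch.backends.cuda.matmul so the sketch runs anywhere.
matmul = SimpleNamespace(allow_tf32=False)
with scoped_flag(matmul, "allow_tf32", True):
    assert matmul.allow_tf32          # TF32 enabled only inside the block
assert not matmul.allow_tf32          # caller's setting restored
```

With real torch this would be `with scoped_flag(torch.backends.cuda.matmul, "allow_tf32", True): ...`, so downstream code that depends on full fp32 precision is unaffected.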
