Dkorzekwa/any model other models #1007
Open — danielkorzekwa wants to merge 70 commits into `feature/puzzletron` from `dkorzekwa/any_model_other_models`.
Changes from all commits — 70 commits.
All commits authored by danielkorzekwa:

- e82164f Add anymodel directories to feature/puzzletron
- 2099df3 Make any_model conversion working.
- eb5cf8a Update child_init.py with anymodel version
- c9de41c fix attention pruning
- 3c1bc1f Add trust_remote_code to load_model_config (default to false)
- 8357136 Make activation scoring working
- 6cc2194 Comment all tested models aside of llama_3_1_8b_instruct
- ee4e1e3 Delete not needed decilm test
- 449b523 Fix broken tests
- fb27bba Update puzzletron_nas_pluging to any_model version
- b350f82 Correct test resources used by tests.
- fafe5a3 Disable puzzletron tests (will be enabled after all any_model logic i…
- e988248 Merge branch 'dkorzekwa/anymodel_core' into dkorzekwa/anymodel_activa…
- c717852 Comment out not implemented models.
- 030f126 format python docs
- 8dcdfbf Merge branch 'dkorzekwa/anymodel_core' into dkorzekwa/anymodel_activa…
- 70df0df Use trust_remote_code in force_cache_dynamic_modules()
- bb56662 Merge branch 'dkorzekwa/anymodel_core' into dkorzekwa/anymodel_activa…
- ecd953e Fix anymodel pruning
- ee8f538 Fix buid docs issue.
- c9b76a1 Merge branch 'dkorzekwa/anymodel_core' into dkorzekwa/anymodel_activa…
- 6e3af61 Merge branch 'dkorzekwa/anymodel_activation_scoring' into dkorzekwa/a…
- 0ad6d92 Merging build_library_and_stats
- 995eb1a Merging anymodel: calc_one_block_scores
- 34081c9 Mering any_model: calc_one_block_scores
- ed5c00f merge any_model: mip_and_realize_models
- 993b5ec Add all anymodel models but gptoss
- 6e9f03b Make nemotron-nano-12b-v2 to work (set trust_remote_code=true)
- e8b7a7d merge anymodel for nemotron-3-nano-30b-a3b-base-bf16
- 47414d5 Clarify readme and avoid reusing the same reference in llama_converter.
- a8305d8 Fix tied-embedding handling before writing the safetensors index.
- 68421a5 Fix NaN ranking currently selects NaNs as "best" experts by default.
- d6b8028 Code clean up.
- ecd2341 Code clean up.
- f9d845d code clean up
- d171b01 Merge branch 'dkorzekwa/anymodel_core' into dkorzekwa/anymodel_activa…
- 722da90 Merge branch 'dkorzekwa/anymodel_activation_scoring' into dkorzekwa/a…
- 934ab2f code clean up
- 0f14ec3 Merge branch 'dkorzekwa/anymodel_pruning' into dkorzekwa/anymodel_bui…
- dcb9e02 remove not needed comment
- 0c9ea5d Merge branch 'dkorzekwa/anymodel_build_library_and_stats' into dkorze…
- 5b310e2 Merge branch 'dkorzekwa/any_model_calc_one_block_scores' into dkorzek…
- 4f82b1c Merge branch 'dkorzekwa/mip_and_realize_models' into dkorzekwa/any_mo…
- 176a435 Fix a broken test_puzzletron test on 2 gpus.
- 02e2c9b Merge branch 'dkorzekwa/anymodel_activation_scoring' into dkorzekwa/a…
- 92c4419 Merge branch 'dkorzekwa/anymodel_pruning' into dkorzekwa/anymodel_bui…
- aa1eb3e Merge branch 'dkorzekwa/anymodel_build_library_and_stats' into dkorze…
- 2b84a96 Merge branch 'dkorzekwa/any_model_calc_one_block_scores' into dkorzek…
- fb838c0 Merge branch 'dkorzekwa/mip_and_realize_models' into dkorzekwa/any_mo…
- cb6b182 Add mamba to puzzletron dependencies.
- 670bb34 Update mamba-ssm and casual-conv1d dependences (remove pinpoint versi…
- 0e1b591 Install mamba-ssm and causal-conv1d in testenv:cuda13-gpu-puzzletron
- ca845ec Fix installing dependencies in testenv:cuda13-gpu-puzzletron
- be825bc Fix anymodel for qwen3 8B in 2 gpus
- 7fd1afa Fix pipeline parallelism issue for wen3-vl-30b-a3b-instruct-qwen3_vl-…
- 7d7b609 Fix multi-gpu issue for nemotron-nano-12b-v2
- 249af9d Fix no_op in any_model
- b80583c Merge branch 'feature/puzzletron' into dkorzekwa/any_model_other_models
- 1dd742e Fix nemotron_h_model_descriptor.
- 4a6ebbe Fix tox -e build-docs
- 585f0ed pin mamba/casual-conv1d versions to fix failing assertion for test_pu…
- 7fb5d9a Fix for installing mamba-ssm
- 75d3d69 Fix broken test for nemotron-3-nano-30b-a3b-base-bf16
- 0e5722d code clean up
- 2dd9735 Make test_puzzletron test deterministic
- 3561de5 Comment out all models but nemotron-3-nano-30b-a3b-base-bf16 to check…
- 27866de Implement Qwen3VLRemoveExpertsIndependentHook
- 52922a4 # Initialize weights to ensure all parameters are properly initialized
- c234fb4 Fix non-deterministic test_puzzletron test
- 53dcd10 Fix for unsetting CUDA_VISIBLE_DEVICES
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
```diff
@@ -19,8 +19,11 @@
 from typing import Type

+import torch
 from modelopt.torch.nas.plugins.megatron_hooks.base_hooks import ForwardHook as ActivationsHook
+from modelopt.torch.puzzletron.tools.logger import aprint
+from modelopt.torch.puzzletron.utils.dummy_modules import DummyBlock, DummyModule


 def register_activation_hooks(
@@ -51,6 +54,16 @@ def register_activation_hooks(
     module_names_to_hook = pruning_mixin.get_module_names_to_hook(model)
     activation_hooks = dict()
     for block_idx, module_name in module_names_to_hook:
+        try:
+            module = model.get_submodule(module_name)
+        except AttributeError:
+            # Module doesn't exist on this rank's shard (e.g., in distributed setup)
+            continue
+
+        # Skip dummy modules - they don't have real activations to hook
+        if isinstance(module, (DummyModule, DummyBlock)):
+            continue
+
         block_config = None
         if block_idx is not None:
             block_config = model.config.block_configs[block_idx]
@@ -59,13 +72,25 @@ def register_activation_hooks(
             "block_config": block_config,
         }

-        module = model.get_submodule(module_name)
         hook = hook_class(module, curr_activation_hooks_kwargs)
         module.register_forward_hook(hook)
         activation_hooks[module_name] = hook

     if len(activation_hooks) == 0:
-        raise ValueError("couldn't find any hooks")
+        # In distributed mode, it's okay for a rank to have 0 hooks if it doesn't own
+        # the target modules (e.g., with hybrid patterns like "*-" where different
+        # ranks own different layer types). However, we still want to catch real bugs
+        # where no hooks are found at all.
+        is_distributed = torch.distributed.is_available() and torch.distributed.is_initialized()
+        if is_distributed:
+            aprint(
+                "No hooks registered on this rank. This is expected if this rank "
+                "doesn't own any layers matching the hook pattern (e.g., in hybrid "
+                "patterns with distributed model sharding)."
+            )
+        else:
+            raise ValueError("couldn't find any hooks")
```
Contributor comment on lines +79 to +93:

> Distributed mode can silently proceed with zero hooks globally. On lines 85–90, distributed runs only log when local hooks are zero. If all ranks have zero hooks, this returns successfully and downstream scoring can run without instrumentation.
>
> Suggested fix:

```diff
 if len(activation_hooks) == 0:
@@
-    if is_distributed:
-        aprint(
-            "No hooks registered on this rank. This is expected if this rank "
-            "doesn't own any layers matching the hook pattern (e.g., in hybrid "
-            "patterns with distributed model sharding)."
-        )
+    if is_distributed:
+        local_count = torch.tensor([0], device="cuda" if torch.cuda.is_available() else "cpu")
+        global_count = local_count.clone()
+        torch.distributed.all_reduce(global_count, op=torch.distributed.ReduceOp.SUM)
+        if global_count.item() == 0:
+            raise ValueError("couldn't find any hooks on any distributed rank")
+        aprint(
+            "No hooks registered on this rank. This is expected if this rank "
+            "doesn't own any layers matching the hook pattern (e.g., in hybrid "
+            "patterns with distributed model sharding)."
+        )
     else:
         raise ValueError("couldn't find any hooks")
```
The same hunk continues:

```diff
-    aprint(f"Found the following hooks: {activation_hooks.keys()}")
+    if len(activation_hooks) > 0:
+        aprint(f"Found the following hooks: {activation_hooks.keys()}")
     return activation_hooks
```
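Stripped of torch specifics, the merged control flow of `register_activation_hooks` is roughly the following. This is a framework-free sketch: the stub classes and the `modules` mapping stand in for real torch modules and `model.get_submodule`, and are not the actual modelopt API.

```python
class DummyModule:
    """Stand-in for modelopt's DummyModule/DummyBlock placeholder layers."""

class RealModule:
    def __init__(self):
        self.hooks = []

    def register_forward_hook(self, hook):
        self.hooks.append(hook)

def register_activation_hooks(modules, module_names_to_hook, is_distributed):
    """modules: name -> module mapping standing in for model.get_submodule."""
    activation_hooks = {}
    for name in module_names_to_hook:
        module = modules.get(name)
        if module is None:  # module missing on this rank's shard -> skip
            continue
        if isinstance(module, DummyModule):  # dummy layers have no real activations
            continue
        hook = object()  # stands in for hook_class(module, kwargs)
        module.register_forward_hook(hook)
        activation_hooks[name] = hook

    if not activation_hooks:
        if is_distributed:
            # tolerated: this rank may simply not own any matching layers
            print("No hooks registered on this rank (may be expected under sharding).")
        else:
            raise ValueError("couldn't find any hooks")
    return activation_hooks

modules = {"layers.0.attn": RealModule(), "layers.1.attn": DummyModule()}
hooks = register_activation_hooks(
    modules, ["layers.0.attn", "layers.1.attn", "layers.2.attn"], is_distributed=False
)
print(sorted(hooks))  # ['layers.0.attn'] - only the real, present module gets a hook
```

Missing and dummy modules are skipped silently, which is exactly why the empty-hooks check at the end needs the distributed special case above.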
`modelopt/torch/puzzletron/anymodel/models/mistral_small/__init__.py` — 21 additions, 0 deletions
```python
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from modelopt.torch.puzzletron.anymodel.models.mistral_small.mistral_small_converter import (
    MistralSmallConverter,
)
from modelopt.torch.puzzletron.anymodel.models.mistral_small.mistral_small_model_descriptor import (
    MistralSmallModelDescriptor,
)
```
`modelopt/torch/puzzletron/anymodel/models/mistral_small/mistral_small_converter.py` — 41 additions, 0 deletions
|---|---|---|
| @@ -0,0 +1,41 @@ | ||
| # SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved. | ||
| # SPDX-License-Identifier: Apache-2.0 | ||
| # | ||
| # Licensed under the Apache License, Version 2.0 (the "License"); | ||
| # you may not use this file except in compliance with the License. | ||
| # You may obtain a copy of the License at | ||
| # | ||
| # http://www.apache.org/licenses/LICENSE-2.0 | ||
| # | ||
| # Unless required by applicable law or agreed to in writing, software | ||
| # distributed under the License is distributed on an "AS IS" BASIS, | ||
| # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | ||
| # See the License for the specific language governing permissions and | ||
| # limitations under the License. | ||
| # mypy: ignore-errors | ||
|
|
||
| from typing import List | ||
|
|
||
| from transformers import MistralConfig | ||
|
|
||
| from modelopt.torch.puzzletron.anymodel.converter import Converter, ConverterFactory | ||
| from modelopt.torch.puzzletron.decilm.deci_lm_hf_code.block_config import ( | ||
| AttentionConfig, | ||
| BlockConfig, | ||
| FFNConfig, | ||
| ) | ||
|
|
||
|
|
||
| @ConverterFactory.register_decorator("mistral_small") | ||
| class MistralSmallConverter(Converter): | ||
| @staticmethod | ||
| def create_block_configs_from_main_config(config: MistralConfig) -> List[BlockConfig]: | ||
| num_hidden_layers = config.num_hidden_layers | ||
|
|
||
| block_config = BlockConfig( | ||
| attention=AttentionConfig(no_op=False, num_key_value_heads=config.num_key_value_heads), | ||
| ffn=FFNConfig(no_op=False, intermediate_size=config.intermediate_size), | ||
| ).to_dict() | ||
|
|
||
| block_configs = [block_config] * num_hidden_layers | ||
| return block_configs |
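Because every Mistral Small layer gets the same `BlockConfig`, the conversion collapses to replicating one dict per hidden layer. A self-contained approximation follows; the dataclasses are simplified stand-ins for modelopt's `AttentionConfig`/`FFNConfig` and HF's `MistralConfig` (field defaults here are illustrative, not the real model's values):

```python
from dataclasses import dataclass, asdict

@dataclass
class StubMistralConfig:  # stand-in for transformers.MistralConfig
    num_hidden_layers: int = 4
    num_key_value_heads: int = 8
    intermediate_size: int = 14336

@dataclass
class AttentionConfig:
    no_op: bool
    num_key_value_heads: int

@dataclass
class FFNConfig:
    no_op: bool
    intermediate_size: int

def create_block_configs_from_main_config(config):
    # One uniform block config, replicated once per hidden layer,
    # mirroring MistralSmallConverter above.
    block_config = {
        "attention": asdict(AttentionConfig(False, config.num_key_value_heads)),
        "ffn": asdict(FFNConfig(False, config.intermediate_size)),
    }
    return [block_config] * config.num_hidden_layers

configs = create_block_configs_from_main_config(StubMistralConfig())
print(len(configs), configs[0]["attention"]["num_key_value_heads"])  # 4 8
```

Note that `[block_config] * n` replicates references to one dict object rather than n copies; that aliasing pitfall is what the commit "Clarify readme and avoid reusing the same reference in llama_converter" addresses for the llama converter.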
Contributor comment:

> Guard the `block_config` lookup before indexing. Line 69 assumes `model.config.block_configs` exists and that `block_idx` is in range. That can raise `AttributeError`/`IndexError` at runtime for incompatible configs.