
Fix shared LLM stop words mutation causing cross-agent state pollution#5289

Open

Christian-Sidak wants to merge 1 commit into crewAIInc:main from Christian-Sidak:fix/shared-llm-stop-words-mutation

Conversation

@Christian-Sidak

Summary

  • Fixes the mutable shared state bug where CrewAgentExecutor.__init__ permanently mutates self.llm.stop, causing stop words to accumulate across agents sharing the same LLM instance
  • Computes effective stop words locally per-executor and only applies them to the LLM during invoke/ainvoke, restoring the original value in a finally block
  • Prevents sequential accumulation (Agent B inheriting Agent A's stop words) and narrows the race condition window in async execution

Changes

Single file: lib/crewai/src/crewai/agents/crew_agent_executor.py

__init__: Instead of mutating self.llm.stop directly, we now store the original stop words (self._original_stop) and the merged per-executor stop words (self._effective_stop) without touching the shared LLM.

invoke / ainvoke: Set self.llm.stop = self._effective_stop before execution, then restore self.llm.stop = self._original_stop in a finally block.
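The set-then-restore pattern described above can be sketched as follows. This is an illustrative reduction, not the PR diff: `FakeLLM` and `ExecutorSketch` are stand-ins, though the `_original_stop` / `_effective_stop` attribute names mirror the description.

```python
class FakeLLM:
    """Minimal stand-in for a shared LLM instance (illustrative only)."""

    def __init__(self, stop=None):
        self.stop = list(stop or [])

    def call(self, prompt):
        return f"response (stop={self.stop})"


class ExecutorSketch:
    """Per-executor stop word handling as described in the PR."""

    def __init__(self, llm, stop=None):
        self.llm = llm
        # Snapshot the shared LLM's own stop words without mutating it.
        self._original_stop = list(llm.stop or [])
        # Merge executor-specific stop words, preserving order, skipping duplicates.
        self._effective_stop = self._original_stop + [
            s for s in (stop or []) if s not in self._original_stop
        ]

    def invoke(self, prompt):
        # Apply the merged stop words only for the duration of this call.
        self.llm.stop = self._effective_stop
        try:
            return self.llm.call(prompt)
        finally:
            # Unconditionally restore the shared LLM's original configuration.
            self.llm.stop = self._original_stop


llm = FakeLLM(stop=["\nObservation:"])
agent_a = ExecutorSketch(llm, stop=["\nFinal Answer:"])
agent_b = ExecutorSketch(llm, stop=["\nThought:"])
agent_a.invoke("task A")
agent_b.invoke("task B")
# llm.stop is still ["\nObservation:"] — no cross-agent accumulation
```

Because each executor merges against its own snapshot of the original stop words, Agent B's effective list never contains Agent A's additions.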

Test plan

  • Verify that a shared LLM's stop attribute is not permanently modified after an executor runs
  • Verify that multiple agents with different stop words do not pollute each other
  • Verify that llm.stop is correctly restored even when execution raises an exception
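The third bullet (restore-on-exception) can be checked with a self-contained sketch like the one below. `FakeLLM` and `MiniExecutor` are illustrative stand-ins, not the project's actual test fixtures.

```python
class FakeLLM:
    """Stand-in LLM whose call always fails, to exercise the finally path."""

    def __init__(self, stop):
        self.stop = list(stop)

    def call(self, prompt):
        raise RuntimeError("provider error")


class MiniExecutor:
    """Reduced executor implementing the set / try / finally / restore pattern."""

    def __init__(self, llm, stop):
        self.llm = llm
        self._original_stop = list(llm.stop)
        self._effective_stop = self._original_stop + list(stop)

    def invoke(self, prompt):
        self.llm.stop = self._effective_stop
        try:
            return self.llm.call(prompt)
        finally:
            self.llm.stop = self._original_stop


llm = FakeLLM(stop=["\nObservation:"])
executor = MiniExecutor(llm, stop=["\nFinal Answer:"])
try:
    executor.invoke("prompt")
except RuntimeError:
    pass
# The shared LLM is back to its original configuration despite the error.
assert llm.stop == ["\nObservation:"]
```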

Fixes #5141

When multiple agents share the same LLM instance, each
CrewAgentExecutor.__init__ was mutating the shared LLM stop attribute,
causing stop words to accumulate across agents. This led to premature
generation termination and race conditions in async execution.

Instead of permanently mutating the shared LLM object, we now:
1. Compute effective stop words (original + executor-specific) locally
2. Apply them to the LLM only during invoke/ainvoke execution
3. Restore the original stop words in a finally block

Fixes crewAIInc#5141
@Christian-Sidak force-pushed the fix/shared-llm-stop-words-mutation branch from 769b2c3 to c886cad on April 15, 2026 at 04:08
@0xbrainkid

The finally-block restore pattern is the correct fix — set before execution, restore unconditionally on exit, zero shared state mutation. The test plan covers the right cases.

Worth noting the trust dimension beyond the bug itself: shared LLM state mutation is a trust boundary violation between agents. Agent A modifying llm.stop and Agent B inheriting those stop words means Agent B is not operating with its declared configuration — it is operating with Agent A's configuration overlaid on its own. In security-sensitive crews (e.g., a crew where one agent handles user-facing interactions and another handles privileged data access), this kind of implicit configuration bleed is exactly the class of bug that makes behavioral trust scores unreliable.

An agent that has been verified to behave within certain parameters (stop words, response patterns) is implicitly recertified by that verification. If the actual runtime configuration diverges from the verified configuration due to shared state mutation, the behavioral attestation is no longer valid.

Two additions worth considering alongside the fix:

1. Configuration integrity check at task handoff. Before an agent begins a task that was handed off from another agent, assert that llm.stop == self._effective_stop — catch the mutation proactively rather than discovering it post-execution via behavioral drift.

2. Behavioral drift alert on stop word change. A sudden change in stop words mid-session (even legitimately via the fixed code) represents a configuration state change worth flagging in observability tools — not as an error, but as an event that can explain behavioral differences across task boundaries.
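Both suggestions could look roughly like this. Every name here (the logger, function names, parameters) is hypothetical — a sketch of the idea, not crewAI API:

```python
import logging

logger = logging.getLogger("crewai.observability")  # hypothetical logger name


def assert_stop_word_integrity(llm, declared_stop):
    """Suggestion 1 (sketch): before a handed-off task runs, confirm the
    shared LLM's stop words match the receiving agent's declared config."""
    actual = list(llm.stop or [])
    if actual != list(declared_stop):
        raise AssertionError(
            f"stop word drift before handoff: declared {declared_stop!r}, "
            f"found {actual!r}"
        )


def record_stop_word_change(agent_id, old_stop, new_stop):
    """Suggestion 2 (sketch): emit an observability event — not an error —
    whenever stop words change mid-session, so behavioral differences
    across task boundaries can be explained."""
    if old_stop != new_stop:
        logger.info(
            "stop_words_changed agent=%s old=%r new=%r",
            agent_id, old_stop, new_stop,
        )
```

The integrity check fails fast at the handoff boundary; the drift event is informational only, matching the "not an error, but an explainable event" framing above.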

