feat(jobs): make missing-Redis-state FAILUREs loud and self-diagnosing #1241
Conversation
Before: when process_nats_pipeline_result found the job's total-images
key missing, it failed the job with the hardcoded reason
"Job state keys not found in Redis (likely cleaned up concurrently)".
The reason string went to job.logger only — not progress.errors — and
collapsed three distinct causes (DB-index drift across hosts, key
eviction, never-initialized state) into one misleading line. The Redis
target (host:port/db) was never logged, so operators couldn't tell a
cache-DB split from a cleanup race without shelling into workers.
This commit makes that path name the actual cause:
1. AsyncJobStateManager.diagnose_missing_state() returns a one-line
snapshot: masked host:port, DB index, and SCAN output for job:{id}:*
with SCARDs. "keys_for_job=<none>" ⇒ never initialized or wrong DB;
SCARDs present ⇒ partial cleanup / eviction.
2. update_state() emits a WARN with that snapshot immediately before
returning None, so the trigger shows up in the worker log even if
the caller's FAILURE log is filtered out.
3. process_nats_pipeline_result passes the live snapshot to _fail_job
instead of the hardcoded string.
4. _fail_job appends the reason to job.progress.errors before save, so
the UI surfaces FAILUREs with a cause instead of errors=[].
5. run_job logs "Running job X on redis=HOST:PORT/dbN" at start. Cross-
host DB drift becomes visible in every job's log without needing to
already suspect it.
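A minimal sketch of what item 1's `diagnose_missing_state()` could look like, assuming a redis-py style client held on `self.redis`; the `_pattern()` helper and attribute names are illustrative, and the host masking mentioned above is omitted for brevity:

```python
from typing import Any


class AsyncJobStateManager:
    """Sketch only: the real class holds much more job-state logic."""

    def __init__(self, redis: Any, job_id: int) -> None:
        self.redis = redis  # assumed: a redis-py-like client
        self.job_id = job_id

    def _pattern(self) -> str:
        # The job's keyspace: pending:process, pending:results, failed, total.
        return f"job:{self.job_id}:*"

    def diagnose_missing_state(self) -> str:
        """One-line snapshot: Redis target plus SCAN listing with SCARDs."""
        kwargs = getattr(self.redis.connection_pool, "connection_kwargs", {}) or {}
        target = f"redis={kwargs.get('host', '?')}:{kwargs.get('port', '?')}/db{kwargs.get('db', '?')}"
        keys = sorted(
            k.decode() if isinstance(k, bytes) else k
            for k in self.redis.scan_iter(match=self._pattern())
        )
        if not keys:
            # Never initialized, or this worker is pointed at the wrong DB.
            return f"{target} keys_for_job=<none>"
        parts = []
        for key in keys:
            # SCARD only applies to the pending sets; other keys are listed bare.
            if self.redis.type(key) in (b"set", "set"):
                parts.append(f"{key}(scard={self.redis.scard(key)})")
            else:
                parts.append(key)
        return f"{target} keys_for_job={', '.join(parts)}"
```

A `keys_for_job=<none>` result points at the never-initialized or wrong-DB cases; listed SCARDs point at partial cleanup or eviction.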
Tests added:
- diagnose_missing_state with never-initialized state (keys_for_job=<none>)
- diagnose_missing_state after partial cleanup (SCARDs of surviving sets
listed)
- Existing "genuinely missing state" test assertion updated to match the
richer reason string.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add TestFailJob with two TDD-confirmed cases:

- ``test_fail_job_appends_reason_to_progress_errors`` — verifies the reason string lands in ``job.progress.errors`` (both in-memory and after refresh_from_db) so the UI surfaces the cause of the FAILURE. The existing ``_fail_job`` call-site tests in ``TestProcessNatsPipelineResultError`` mock ``_fail_job`` entirely, so a regression that stops appending to ``progress.errors`` would slip through undetected.
- ``test_fail_job_is_noop_on_already_final_job`` — regression guard for the early-return branch when the job is already in a final state.

Also:

- Comment the bare ``except Exception: pass`` around the ``progress.errors.append`` in ``_fail_job`` to explain why we swallow diagnostic-write failures.
- Extend the ``AsyncJobStateManager.diagnose_missing_state`` docstring with a one-paragraph note about the SCAN cost (failure-path only, per-job fanout of at most four keys) to head off the obvious review question.

Co-Authored-By: Claude <noreply@anthropic.com>
Pull request overview
Improves debuggability of async ML job failures when Redis-backed job state is missing by adding Redis-target diagnostics, propagating the failure reason into Job.progress.errors, and updating tests to lock in the new behavior.
Changes:
- Add `AsyncJobStateManager.diagnose_missing_state()` and log a diagnostic snapshot on the missing-state path.
- Include stage-specific missing-state diagnostics in `_fail_job` reasons and persist those reasons into `progress.errors` for UI visibility.
- Add/adjust unit tests covering diagnostics output and `_fail_job` persistence semantics.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| ami/ml/orchestration/async_job_state.py | Adds missing-state diagnostic helper and warning log in update_state. |
| ami/jobs/tasks.py | Logs Redis target at job start; passes diagnostics into _fail_job; appends reason into progress.errors. |
| ami/ml/tests.py | Adds tests for diagnose_missing_state() output in "never initialized" and "partial keys present" cases. |
| ami/jobs/tests/test_tasks.py | Updates missing-state reason assertions; adds regression tests for _fail_job writing progress.errors. |
> ``_fail_job`` so the FAILURE log and the UI ``progress.errors`` entry
> distinguish the three common causes — DB mismatch across hosts, key
> eviction, and never-initialized state — instead of a single hardcoded
> "likely cleaned up concurrently" guess that all three collapse to.
The diagnose_missing_state() docstring says it's called from _fail_job, but _fail_job doesn't invoke it (only update_state and process_nats_pipeline_result do). Please update the docstring to match actual call sites so future readers don’t look for diagnostics propagation in _fail_job.
```diff
-``_fail_job`` so the FAILURE log and the UI ``progress.errors`` entry
-distinguish the three common causes — DB mismatch across hosts, key
-eviction, and never-initialized state — instead of a single hardcoded
-"likely cleaned up concurrently" guess that all three collapse to.
+``process_nats_pipeline_result`` so the FAILURE log and the UI
+``progress.errors`` entry distinguish the three common causes — DB
+mismatch across hosts, key eviction, and never-initialized state —
+instead of a single hardcoded "likely cleaned up concurrently" guess
+that all three collapse to.
```
```diff
 _ack_task_via_nats(reply_subject, logger)
-_fail_job(job_id, "Job state keys not found in Redis (likely cleaned up concurrently)")
+_fail_job(
+    job_id,
+    f"Job state missing from Redis (stage=process): {state_manager.diagnose_missing_state()}",
+)
```
In the missing-state path, update_state() already logs a warning that calls diagnose_missing_state(), and this call site invokes diagnose_missing_state() again to build the failure reason. That duplicates the Redis diagnostic work on the same event. Consider capturing the diagnosis once (e.g., have update_state return it alongside None/raise a dedicated exception carrying it) and reuse it for both logging and _fail_job/UI errors.
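One way to implement the reviewer's "dedicated exception carrying it" option, as a hedged sketch; the exception name and the commented call-site shape are assumptions, not code from this PR:

```python
class JobStateMissing(Exception):
    """Raised by update_state() when the job's Redis state is gone.

    Carries the one-line diagnosis so the caller can log it and pass the
    same string to _fail_job without re-running the SCAN.
    """

    def __init__(self, job_id: int, diagnosis: str) -> None:
        self.job_id = job_id
        self.diagnosis = diagnosis
        super().__init__(f"Job {job_id} state missing from Redis: {diagnosis}")


# Hypothetical caller side in process_nats_pipeline_result:
#     try:
#         state = state_manager.update_state(image_id)
#     except JobStateMissing as exc:
#         _fail_job(job_id, f"Job state missing from Redis (stage=process): {exc.diagnosis}")
```

This keeps a single SCAN per failure event while letting both the worker WARN and the UI reason string quote the same snapshot.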
```python
def _describe_redis_target() -> str:
    """Return a ``host:port/dbN`` string for the "default" Redis connection.

    Logged at the start of every ``run_job`` so DB-index drift across hosts
    (the class of misconfig that manifests as silent ``process_nats_pipeline_result``
    FAILUREs on whichever worker happens to read state from the wrong DB) is
    visible in each job's log without requiring a separate diagnostic.
    """
    try:
        from django_redis import get_redis_connection

        redis = get_redis_connection("default")
        kwargs = getattr(redis.connection_pool, "connection_kwargs", {}) or {}
        return f"redis={kwargs.get('host', '?')}:{kwargs.get('port', '?')}/db{kwargs.get('db', '?')}"
```
_describe_redis_target() claims it returns a host:port/dbN string, but it currently returns redis=host:port/dbN (and the exception case returns redis=(unavailable: ...)). Align the docstring with the actual format, or adjust the return value to match the documented output.
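The cheaper of the two fixes is to make the docstring match the value. A sketch, with the connection plumbing factored out into a hypothetical `connection_kwargs` parameter so only the formatting is shown:

```python
def describe_redis_target(connection_kwargs: dict) -> str:
    """Return a ``redis=host:port/dbN`` string (note the ``redis=`` prefix)."""
    kwargs = connection_kwargs or {}
    # Fall back to "?" for any field missing from the connection kwargs.
    return f"redis={kwargs.get('host', '?')}:{kwargs.get('port', '?')}/db{kwargs.get('db', '?')}"
```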
```python
# Cursor-safe SCAN over the job's keyspace — cheap even on a busy
# Redis because it's filtered server-side and the per-job fanout is
# at most a handful of keys (pending:process, pending:results,
# failed, total).
keys = sorted(k.decode() if isinstance(k, bytes) else k for k in redis.scan_iter(match=self._pattern()))
sizes: list[str] = []
```
diagnose_missing_state() uses scan_iter(match=job:{id}:*), which iterates the full Redis keyspace (SCAN walks the entire DB even when MATCH filters server-side). On a large/loaded Redis (or if this failure repeats due to misconfig), this can add significant latency and load. Consider checking the small, fixed set of expected keys directly (pending:process, pending:results, failed, total) via exists/scard/get, instead of scanning.
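The reviewer's alternative, probing the fixed key set with O(1) commands instead of SCAN, could look like the following sketch; the key-name layout follows the PR text, while the function name and client wiring are assumptions:

```python
# The four keys the job state is known to consist of (per the PR description).
EXPECTED_SUFFIXES = ("pending:process", "pending:results", "failed", "total")


def describe_job_keys(redis, job_id) -> str:
    """List surviving job keys via EXISTS/TYPE/SCARD, with no keyspace scan."""
    parts = []
    for suffix in EXPECTED_SUFFIXES:
        key = f"job:{job_id}:{suffix}"
        if not redis.exists(key):
            continue
        if redis.type(key) in (b"set", "set"):
            parts.append(f"{key}(scard={redis.scard(key)})")
        else:
            parts.append(key)
    return ", ".join(parts) or "<none>"
```

Unlike SCAN, each probe here is constant-time regardless of total keyspace size, at the cost of missing any unexpected stray keys under the job prefix.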
Summary
Complements #1244 — that PR handles `pending_ids` left in Redis after NATS gives up redelivering; this PR handles the case where the state keys are missing entirely. The two paths are independent and coexist without conflict.

When `process_nats_pipeline_result` finds the job's total-images key missing from Redis, the FAILURE it produces is close to silent. The reason string is hardcoded — "Job state keys not found in Redis (likely cleaned up concurrently)" — and ends up only in `job.logger`, not in `progress.errors`. The Redis target is never logged. Three distinct causes collapse into one misleading line:

- DB-index drift across hosts (`run_job` initializes state in one DB, `process_nats_pipeline_result` looks for it in the other)
- key eviction
- never-initialized state

This PR keeps the failure path but makes it name the actual cause.
What changes
- `AsyncJobStateManager.diagnose_missing_state()` — returns a one-line snapshot: `redis=host:port/dbN keys_for_job=<SCAN listing with SCARDs>`. `keys_for_job=<none>` ⇒ wrong DB or never-initialized; SCARDs present ⇒ partial cleanup / eviction. The SCAN runs only on the failure path and the per-job key fanout is at most four keys (pending:process, pending:results, failed, total), so the cost is negligible.
- `update_state()` — emits a `WARN` with the snapshot immediately before returning `None`. The trigger is now visible in the worker log even if the caller's FAILURE log is filtered.
- `process_nats_pipeline_result` — passes the live snapshot to `_fail_job` instead of the hardcoded string. Also annotates the stage (process/results).
- `_fail_job` — appends `reason` to `job.progress.errors` before save. The UI now shows FAILUREs with a cause instead of `errors=[]`.
- `run_job` — logs `Running job X on redis=HOST:PORT/dbN` at start. Cross-host DB drift becomes visible in every job's log without needing to already suspect it.

Why
This is motivated by an observed production incident where three consecutive async_api jobs flipped straight to FAILURE at ~46% with zero errors recorded and no log line explaining why. The per-job log showed STARTED → FAILURE with nothing in between. The only clue was the hardcoded string, which pattern-matched an unrelated recent change and sent triage down the wrong path. The real cause turned out to be a worker host whose `REDIS_URL` pointed at a different DB index than the rest of the stack. Any of the five items above would have caught it on the first failed job.

The fix for the underlying misconfig is an infra change, not in this PR. This PR is about making the same class of failure instantly debuggable next time.
Tests
Added:
- `TestTaskStateManager.test_diagnose_missing_state_when_never_initialized` — asserts `redis=`, `/db`, and `keys_for_job=<none>` in the diagnostic string.
- `TestTaskStateManager.test_diagnose_missing_state_lists_present_keys` — after wiping only the total key, asserts the surviving pending-set SCARDs appear.
- `TestFailJob.test_fail_job_appends_reason_to_progress_errors` — verifies the reason string lands in `job.progress.errors` both in memory and after `refresh_from_db()`. The existing `_fail_job` call-site tests in `TestProcessNatsPipelineResultError` mock `_fail_job` entirely, so a silent regression on the append would not have been caught. Validated via TDD: temporarily skipped the append in production code, ran the test and watched it fail with the empty-list assertion, then restored.
- `TestFailJob.test_fail_job_is_noop_on_already_final_job` — regression guard for the early-return branch.

Updated:
- `TestProcessNatsPipelineResultError.test_genuinely_missing_state_acks_and_fails_job` — reason-string assertion updated to the new `"Job state missing from Redis (stage=process): …"` form.

Verified locally
- `docker compose -f docker-compose.ci.yml run --rm django python manage.py test ami.jobs --keepdb` → 78 tests pass, including the new `TestFailJob` cases.
- `docker compose -f docker-compose.ci.yml run --rm django python manage.py test ami.jobs.tests.test_tasks.TestFailJob ami.jobs.tests.test_tasks.TestProcessNatsPipelineResultError ami.ml.tests.TestTaskStateManager --keepdb` → 24 tests pass (the targeted slices touched by this PR).
- TDD check of the `progress.errors` append: with `job.progress.errors.append(reason)` commented out, `test_fail_job_appends_reason_to_progress_errors` fails with `AssertionError: '…' not found in []`. With the line restored, it passes. The test exercises the real code path.

Manual verification to run in staging
- `redis-cli -n <db> DEL job:{id}:pending_images_total` while a job is running. Confirm the resulting FAILURE shows the diagnostic in both the UI's `progress.errors` and the worker log, and that `run_job`'s opening log line identifies the Redis target.

Relationship to concurrent / recently-merged work
- (… `_fail_job`). This PR relies on the crisp `update_state` contract it established.
- (…) Same `_fail_job` function, different entry points — both contribute reason strings into `progress.errors`.
- "fix(jobs): mark failed or lost images as failed so job can be marked complete" #1244 uses the same `_fail_job` mechanism to populate `progress.errors` with a different reason-string format. The two formats coexist without conflict; if #1244 lands first, a small `_append_progress_error(job, reason)` helper could consolidate the two append sites.