
fix(deps): pin redis-py 5.3.1 to avoid ConnectionPool deadlock #1246

Merged
mihow merged 1 commit into main from fix/redis-py-pubsub-deadlock on Apr 17, 2026

Conversation

mihow (Collaborator) commented Apr 17, 2026

Summary

Bump redis==5.2.1 → redis==5.3.1 in requirements/base.txt. 5.2.x has a self-deadlock in ConnectionPool.release() triggered by PubSub.__del__ running during GC while the pool holds its non-reentrant _lock. Fixed upstream in redis-py 5.3.0 by switching to RLock (redis-py#3677); reintroduced in 6.x per the discussion on celery/celery#9622, so I'm pinning inside 5.3.x rather than chasing latest.
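As a self-contained illustration of the mechanism (toy classes only, not the real redis-py API), the sketch below holds a non-reentrant lock while "building a connection", triggers a GC pass inside that critical section, and lets a stale object's `__del__` call back into the pool on the same thread. With `threading.Lock` the re-acquisition can never succeed, which is the hang captured in the py-spy stack further down; a timeout is used here so the demo terminates instead of hanging.

```python
import gc
import threading


class ToyPool:
    """Toy stand-in for a connection pool; NOT the real redis.ConnectionPool API."""

    def __init__(self, lock_factory=threading.Lock):
        self._lock = lock_factory()

    def make_connection(self):
        with self._lock:           # pool lock held while "building" a new connection
            gc.collect()           # stand-in for the GC pass that can fire mid-__init__

    def release(self):
        # PubSub.__del__ -> reset() lands here on the SAME thread that already
        # holds _lock in make_connection(). The timeout keeps this demo finite;
        # the real code blocks forever.
        got_it = self._lock.acquire(timeout=2)
        print("release() re-acquired pool lock:", got_it)
        if got_it:
            self._lock.release()


class ToyPubSub:
    def __init__(self, pool):
        self.pool = pool

    def __del__(self):
        self.pool.release()        # mirrors PubSub.__del__ -> reset() -> pool.release()


gc.disable()                       # deterministic demo: only the explicit collect() runs
pool = ToyPool(threading.Lock)     # 5.2.x behaviour: non-reentrant pool lock
stale = ToyPubSub(pool)
stale.cycle = stale                # reference cycle, so cleanup waits for the cyclic GC
del stale
pool.make_connection()             # prints "... False": the self-deadlock this PR avoids
```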

Observed failure

Celery beat on a production-like deployment stops scheduling all tasks with no error, no crash, and no log output. The beat container shows Up in docker, but docker logs celerybeat produces no new lines, indefinitely. Only a manual restart revives it. Observed in 4/4 recent job runs.

The jobs.health_check periodic task runs every 15 min and is the safety net that reaps stalled async_api jobs when NATS messages go missing. Beat hanging silently disables that safety net, so jobs that have a handful of missing results get stuck in STARTED indefinitely and need manual intervention.

Root cause evidence

Captured via py-spy on a hung beat container. Main thread parked on futex_wait_queue inside ConnectionPool.release(). Abridged stack:

tick()                                   celery/beat.py:353
  apply_entry → apply_async              celery/beat.py:280
    send_task → on_task_call             celery/app/base.py:800
      ResultConsumer.consume_from        celery/backends/redis.py:170
        _consume_from hits ConnErr       celery/backends/redis.py:176
          reconnect_on_error → ensure    celery/backends/redis.py:130
            retry_over_time retries=0    kombu/utils/functional.py:318
              _reconnect_pubsub          celery/backends/redis.py:106
                mget → execute_command   redis/commands/core.py:2009
                  get_connection         redis/connection.py:1417
                    make_connection      redis/connection.py:1463
                      Connection.__init__  redis/connection.py:684
    ⮡ CPython GC fires PubSub.__del__ mid-__init__ ⮣
                        PubSub.__del__      redis/client.py:726
                          PubSub.reset      redis/client.py:734
                            ConnectionPool.release — STUCK   redis/connection.py:1468

Every thread (main + library bg threads) is on futex_wait_queue. Not an I/O hang — it's a userspace mutex deadlock. The TCP socket to Redis is in CLOSE-WAIT with 1 byte in the recv buffer, confirming the Redis server closed its end and the Python client tried to reconnect, which is the code path that deadlocks.

The Redis result backend subscribes to the result key via pubsub inside on_task_call, so this fires for every apply_async call — including beat's periodic ones. That means every time beat schedules a task, it enters a code path that can deadlock if a stale PubSub gets GC'd at the wrong moment. This matches redis-py#3654.

Why 5.3.1 specifically

  • 5.2.x: has the bug.
  • 5.3.0 / 5.3.1: self._lock switched from threading.Lock to threading.RLock in ConnectionPool; this is the fix (see the sketch after this list).
  • 6.x / 7.x: per celery/celery#9622, the lock semantics were reworked again and the same class of issue resurfaced. Reporters on that issue confirmed downgrading to 5.3.x stabilized beat. Worth revisiting when there's a clear fix upstream.
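The lock-type difference above is easy to demonstrate in isolation with the stdlib (a sketch independent of redis-py internals): the thread that already owns the pool lock can re-enter an RLock from the GC-triggered release path, where a plain Lock would block forever.

```python
import threading

# 5.3.x behaviour in miniature: the owning thread re-enters the lock and succeeds.
# Swap RLock for Lock and the acquire below times out (deadlocks in the real code).
lock = threading.RLock()
with lock:                         # pool already holds the lock in make_connection()
    ok = lock.acquire(timeout=2)   # __del__ -> reset() -> release() re-acquires it
    print("re-acquired:", ok)      # True with RLock; False with a plain Lock
    if ok:
        lock.release()
```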

What this does not do

  • Does not upgrade celery / kombu / django-celery-beat. A separate evaluation found 5.5.3 / 5.5.4 / 2.9.0 (respectively) to be a compatible upgrade set, but that's orthogonal to this bug and can land separately.
  • Does not change the broker heartbeat or CELERY_BROKER_CONNECTION_MAX_RETRIES. An earlier hypothesis blamed the AMQP heartbeat churn — the py-spy stack ruled that out, so those knobs stay as-is.

Test plan

  • CI: the pip-compile step picks up the new pin; no requirements/production.txt regeneration needed here since it inherits from base.txt on build.
  • Deploy to a demo environment and run the job pattern that previously hung beat (async_api with ~20-40 min runtime); a quick in-container version sanity check is sketched after this list.
  • Verify beat keeps emitting Scheduler: Sending due task lines through job completion.
  • Re-run py-spy on beat during the job — confirm the main thread is not parked in ConnectionPool.release().
  • After job completes, confirm jobs.health_check fires on schedule and reaps any remaining stalled entries.
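One cheap addition to the deploy step (an illustrative helper, not part of this PR's diff): confirm inside the demo container that the interpreter actually resolved the 5.3.x pin before exercising the job pattern.

```python
# Sanity check for the deploy step above: verify the running environment picked
# up the 5.3.x pin (illustrative helper; not part of this PR's changes).
from importlib.metadata import version

redis_version = version("redis")
assert redis_version.startswith("5.3."), f"unexpected redis-py version: {redis_version}"
print("redis-py", redis_version)
```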

🤖 Generated with Claude Code

redis-py 5.2.1 has a self-deadlock in `ConnectionPool.release()` where
`PubSub.__del__` triggered by GC while the pool holds its `_lock` blocks
forever trying to re-acquire the same non-reentrant lock. Fixed upstream
in redis-py 5.3.0 by switching to an `RLock` (redis-py#3677), regressed
in 6.x (celery/celery#9622).

Observed symptom: celery beat stops scheduling all tasks with no error,
no crash, no log output. Main thread parked on `futex_wait_queue` inside
`ConnectionPool.release()`. Captured via py-spy on a hung beat container.

Stack at hang:

  tick → apply_async → send_task → on_task_call
      (celery/backends/redis.py: Redis result backend subscribes
       to the result key via PubSub for every apply_async)
    ResultConsumer.consume_from → hits ConnectionError on a stale pubsub
      reconnect_on_error → retry_over_time → _reconnect_pubsub
        mget → get_connection → make_connection → Connection.__init__
          ← GC fires PubSub.__del__ mid-__init__
            PubSub.reset → ConnectionPool.release → stuck on self._lock

Pinning 5.3.1 (not latest 6.x/7.x) is deliberate: per the thread on
celery/celery#9622, redis-py 6+ ships a newer ConnectionPool
implementation that reintroduces the same class of issue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 17, 2026 07:04
netlify bot commented Apr 17, 2026

Deploy Preview for antenna-preview canceled.

Latest commit: b88c3f8
Latest deploy log: https://app.netlify.com/projects/antenna-preview/deploys/69e1db9c6684e70008b5d746

netlify bot commented Apr 17, 2026

Deploy Preview for antenna-ssec canceled.

Latest commit: b88c3f8
Latest deploy log: https://app.netlify.com/projects/antenna-ssec/deploys/69e1db9c2e67780008bdd46e


coderabbitai bot commented Apr 17, 2026

Warning: rate limit exceeded

@mihow has exceeded the limit for the number of commits that can be reviewed per hour. Once the wait time has elapsed, a review can be re-triggered with the @coderabbitai review command or by pushing new commits.

ℹ️ Review info

Configuration used: defaults | Review profile: CHILL | Plan: Pro
Run ID: 8706e37d-e44e-467d-8029-3f47ddb165e2
Commits: reviewing files changed from the base of the PR and between 201cfa7 and b88c3f8
Files selected for processing (1): requirements/base.txt

Copilot AI (Contributor) left a comment


Pull request overview

Pins redis-py to a known-good 5.3.x patch level to prevent a production-impacting Celery beat hang caused by a ConnectionPool.release() lock deadlock during GC, stabilizing Redis-backed task scheduling and result consumption.

Changes:

  • Bump redis==5.2.1 to redis==5.3.1 in requirements/base.txt.
  • Add an inline rationale comment documenting why 5.3.x is pinned (avoid 5.2.x deadlock; avoid 6.x regression reports).


mihow (Collaborator, Author) commented Apr 17, 2026

● 🟢 Integration is golden. All three fixes fired exactly as designed.

Beat (redis pin): ✅ alive — ticks at 04:15, 04:20, 04:25, 04:30, 04:35 all on schedule.

04:15 tick: lost_images check: 0 candidate(s) — job not stale enough (your borderline call was right, real updated_at was 04:07).

04:30 tick — reconciler fired all three signals in order:

  1. lost_images check: 1 candidate(s), 1 marked failed, 0 error(s) ✅
  2. (no separate per-job line — diagnostic was rolled into the job logger line below)
  3. jobs_health_check: marked 16 image(s) as failed (job idle past cutoff; NATS consumer num_pending=0 num_ack_pending=0 num_redelivered=16). IDs: ['2868', '2879', '2951', '2959', '2982', '3079', '3142', '3265', '3328', '3396'] ... and 6 more ✅

Final state: status=SUCCESS progress=100.00% (16/544 = 2.9% ≤ 0.5 threshold), progress.errors contains the diagnostic, all stages 100%, Redis state cleaned, NATS resources cleaned.

Both fixes (redis 5.3.1 pin + #1244 reconciler) work end-to-end on a real stuck job.

✻ Cogitated for 32s

✻ Running scheduled task (Apr 17 1:37am)

● Already verified at 04:34 (previous turn). Summary:

  • ✅ Beat ticked at 04:15, 04:20, 04:25, 04:30, 04:35 — redis pin holding
  • ✅ 04:15 health_check: lost_images check: 0 candidate(s) (job not stale yet)
  • ✅ 04:30 health_check: lost_images check: 1 candidate(s), 1 marked failed, 0 error(s) + jobs_health_check: marked 16 image(s) as failed (job idle past cutoff; NATS consumer num_pending=0 num_ack_pending=0 num_redelivered=16). IDs: ['2868'...] and 6 more
  • ✅ Job 88 final: status=SUCCESS progress=100.00%, all stages 100%, Redis + NATS cleaned
  • ✅ Both fixes work end-to-end

No further wakeup scheduled — integration test complete.

mihow merged commit bfd90f3 into main on Apr 17, 2026
11 checks passed
mihow deleted the fix/redis-py-pubsub-deadlock branch on April 17, 2026 08:39