fix(deps): pin redis-py 5.3.1 to avoid ConnectionPool deadlock #1246
Conversation
redis-py 5.2.1 has a self-deadlock in `ConnectionPool.release()`: when `PubSub.__del__` is triggered by GC while the pool holds its `_lock`, it blocks forever trying to re-acquire the same non-reentrant lock. Fixed upstream in redis-py 5.3.0 by switching to an `RLock` (redis-py#3677); regressed in 6.x (celery/celery#9622).

Observed symptom: celery beat stops scheduling all tasks with no error, no crash, no log output. Main thread parked on `futex_wait_queue` inside `ConnectionPool.release()`. Captured via py-spy on a hung beat container. Stack at hang:

    tick → apply_async → send_task → on_task_call
      (celery/backends/redis.py: Redis result backend subscribes to the result key via PubSub for every apply_async)
    ResultConsumer.consume_from → hits ConnectionError on a stale pubsub
    reconnect_on_error → retry_over_time → _reconnect_pubsub
    mget → get_connection → make_connection → Connection.__init__
    ← GC fires PubSub.__del__ mid-__init__
    PubSub.reset → ConnectionPool.release → stuck on self._lock

Pinning 5.3.1 (not latest 6.x/7.x) is deliberate: per the thread on celery/celery#9622, redis-py 6+ ships a newer ConnectionPool implementation that reintroduces the same class of issue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
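The failure shape described above can be reproduced in miniature. This is a sketch only — `Pool` and `PubSubLike` are invented stand-ins, not redis-py's actual classes — and it uses a non-blocking acquire so the demo records the would-be deadlock instead of hanging:

```python
import threading

class Pool:
    def __init__(self):
        # Non-reentrant, like redis-py 5.2.x; 5.3.0 switched this to threading.RLock().
        self._lock = threading.Lock()
        self.deadlock_would_occur = False

    def release(self, conn):
        # redis-py 5.2.x does a blocking acquire here: if the same thread
        # already holds _lock, that acquire never returns — the beat hang.
        if not self._lock.acquire(blocking=False):
            self.deadlock_would_occur = True
            return
        try:
            pass  # return conn to the free list
        finally:
            self._lock.release()

POOL = Pool()

class PubSubLike:
    def __del__(self):
        # A stale pubsub being collected hands its connection back to the pool.
        POOL.release("pubsub-conn")

def make_connection():
    with POOL._lock:      # pool lock held, as inside get_connection()/make_connection()
        stale = PubSubLike()
        del stale         # refcount hits zero -> __del__ runs mid-critical-section

make_connection()
print(POOL.deadlock_would_occur)  # True with Lock; False if _lock is an RLock
```

Swapping the `Lock` for an `RLock` makes the finalizer's acquire succeed (same-thread re-entry is allowed), which is exactly the upstream fix.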
Pull request overview
Pins redis-py to a known-good 5.3.x patch level to prevent a production-impacting Celery beat hang caused by a ConnectionPool.release() lock deadlock during GC, stabilizing Redis-backed task scheduling and result consumption.
Changes:
- Bump `redis==5.2.1` to `redis==5.3.1` in `requirements/base.txt`.
- Add an inline rationale comment documenting why 5.3.x is pinned (avoid the 5.2.x deadlock; avoid 6.x regression reports).
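The change itself is one pinned line plus the rationale comment; a sketch of the `requirements/base.txt` shape (the exact comment wording here is assumed, not copied from the repo):

```
# Pin inside 5.3.x: 5.2.x self-deadlocks in ConnectionPool.release() when
# PubSub.__del__ fires during GC (fixed via RLock in 5.3.0, redis-py#3677);
# 6.x reportedly reintroduces the issue (celery/celery#9622).
redis==5.3.1
```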
● 🟢 Integration is golden. All three fixes fired exactly as designed.

Beat (redis pin): ✅ alive — ticks at 04:15, 04:20, 04:25, 04:30, 04:35, all on schedule.

- 04:15 tick: lost_images check: 0 candidate(s) — job not stale enough (your borderline call was right, real updated_at was 04:07).
- 04:30 tick: reconciler fired all three signals in order.

Final state: status=SUCCESS, progress=100.00% (16/544 = 2.9% ≤ 0.5 threshold), progress.errors contains the diagnostic, all stages 100%, Redis state cleaned, NATS resources cleaned. Both fixes (redis 5.3.1 pin + #1244 reconciler) work end-to-end on a real stuck job.

● Already verified at 04:34 (previous turn). No further wakeup scheduled — integration test complete.
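The tick-cadence check reported above can be automated. A hypothetical sketch (the `HH:MM` timestamp format and the 30-second tolerance are assumptions, not from this repo) that flags a hung beat by the gap it leaves between ticks:

```python
from datetime import datetime, timedelta

def ticks_on_schedule(timestamps, interval=timedelta(minutes=5), slack=timedelta(seconds=30)):
    """True if consecutive ticks are spaced ~interval apart; a hung beat leaves a gap."""
    times = [datetime.strptime(t, "%H:%M") for t in timestamps]
    return all(
        abs((b - a) - interval) <= slack
        for a, b in zip(times, times[1:])
    )

print(ticks_on_schedule(["04:15", "04:20", "04:25", "04:30", "04:35"]))  # True
print(ticks_on_schedule(["04:15", "04:20", "04:35"]))  # False: 15-minute gap
```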
Summary
Bump `redis==5.2.1` → `redis==5.3.1` in `requirements/base.txt`. 5.2.x has a self-deadlock in `ConnectionPool.release()` triggered by `PubSub.__del__` running during GC while the pool holds its non-reentrant `_lock`. Fixed upstream in redis-py 5.3.0 by switching to `RLock` (redis-py#3677); reintroduced in 6.x per the discussion on celery/celery#9622, so I'm pinning inside 5.3.x rather than chasing latest.

Observed failure
Celery beat on a production-like deployment stops scheduling all tasks with no error, no crash, no log output. The beat container shows `Up` in docker, but `docker logs celerybeat` has no new lines — indefinitely. Only a manual restart revives it. Observed in 4/4 recent job runs.

The `jobs.health_check` periodic task runs every 15 min and is the safety net that reaps stalled async_api jobs when NATS messages go missing. Beat hanging silently disables that safety net, so jobs that have a handful of missing results get stuck in `STARTED` indefinitely and need manual intervention.

Root cause evidence
Captured via py-spy on a hung beat container. Main thread parked on `futex_wait_queue` inside `ConnectionPool.release()`. Abridged stack:

    tick → apply_async → send_task → on_task_call
    ResultConsumer.consume_from → hits ConnectionError on a stale pubsub
    reconnect_on_error → retry_over_time → _reconnect_pubsub
    mget → get_connection → make_connection → Connection.__init__
    ← GC fires PubSub.__del__ mid-__init__
    PubSub.reset → ConnectionPool.release → stuck on self._lock

Every thread (main + library bg threads) is on `futex_wait_queue`. Not an I/O hang — it's a userspace mutex deadlock. The TCP socket to Redis is in `CLOSE-WAIT` with 1 byte in the recv buffer, confirming the Redis server closed its end and the Python client tried to reconnect, which is the code path that deadlocks.

The Redis result backend subscribes to the result key via pubsub inside `on_task_call`, so this fires for every `apply_async` call — including beat's periodic ones. That means every time beat schedules a task, it enters a code path that can deadlock if a stale `PubSub` gets GC'd at the wrong moment. This matches redis-py#3654.

Why 5.3.1 specifically
- `self._lock: threading.Lock` → `threading.RLock` in `ConnectionPool` — the fix.

What this does not do
No change to `CELERY_BROKER_CONNECTION_MAX_RETRIES`. An earlier hypothesis blamed the AMQP heartbeat churn — the py-spy stack ruled that out, so those knobs stay as-is.

Test plan
- No `requirements/production.txt` regeneration needed here since it inherits from `base.txt` on build.
- Watch `Scheduler: Sending due task` lines through job completion.
- Confirm the main thread no longer parks in `ConnectionPool.release()`.
- Verify `jobs.health_check` fires on schedule and reaps any remaining stalled entries.

🤖 Generated with Claude Code