
Conversation

@julienrbrt
Member

Overview

Properly count the pending blobs metric (DASubmitterPendingBlobs).

@github-actions
Contributor

github-actions bot commented Jan 30, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed · Format: ⏩ skipped · Lint: ✅ passed · Breaking: ✅ passed · Updated (UTC): Jan 30, 2026, 6:28 PM

@gemini-code-assist
Contributor

Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the mechanism for tracking and reporting pending blobs metrics. It centralizes the calculation and update of the DASubmitterPendingBlobs metric within the main submission loop, ensuring a more consistent and accurate representation of the current backlog. This change removes redundant metric update calls from the submitToDA helper function and cleans up associated test code.

Highlights

  • Metric Refactoring: The logic for updating the DASubmitterPendingBlobs metric has been moved out of the submitToDA helper function, removing the getTotalPendingFn parameter and its associated calls.
  • Centralized Metric Update: The DASubmitterPendingBlobs metric is now updated centrally within the daSubmissionLoop in submitter.go, calculating the sum of pending headers and data blobs once per ticker cycle.
  • Code Cleanup: Unused mock code related to the Sync method in test/mocks/store.go has been removed, and test calls to submitToDA were updated to match the new function signature.


@claude
Contributor

claude bot commented Jan 30, 2026

Claude finished @julienrbrt's task in 2m 10s


Code Review Complete

I've performed a comprehensive review of this PR. Here's my detailed feedback:

Summary

This PR refactors the metrics tracking for pending DA blobs by moving the DASubmitterPendingBlobs metric update from the submitToDA helper function to the main daSubmissionLoop. This centralizes the metric calculation and removes the getTotalPendingFn parameter from the submission flow.

Overall Assessment: ✅ Approved with Minor Observations

The refactoring is correct and improves the code quality. The change ensures more consistent and accurate metrics reporting.


Detailed Review

1. Code Quality & Architecture ✅

Strengths:

  • Improved separation of concerns: The metric update now lives in the submission loop, which has visibility into both the header and data counts
  • Cleaner API: Removing the getTotalPendingFn parameter simplifies the submitToDA generic function signature
  • Better accuracy: The metric now reflects the total pending blobs (headers + data) at the ticker interval, not scattered throughout retry logic

Observations:

  • The metric is now updated once per ticker cycle (submitter.go:311) rather than after each submission attempt
  • This is actually more correct because:
    1. It provides a consistent snapshot of pending state
    2. It avoids race conditions where concurrent goroutines might report stale counts
    3. It reflects the total backlog (headers + data) as intended by the metric's help text

2. Correctness & Concurrency ✅

Race Condition Analysis:
The refactored code is thread-safe:

  • cache.NumPendingHeaders() and cache.NumPendingData() are called in the main ticker loop (single-threaded context)
  • The submission goroutines (lines 198-246, 258-307) run concurrently but don't update metrics themselves
  • No shared state is accessed unsafely

Timing Analysis:

// submitter.go:185-311
case <-ticker.C:
    headersNb := s.cache.NumPendingHeaders()  // Line 187
    // ... header submission logic ...
    dataNb := s.cache.NumPendingData()        // Line 251
    // ... data submission logic ...
    s.metrics.DASubmitterPendingBlobs.Set(float64(headersNb + dataNb))  // Line 311

The values are captured at the start of the ticker cycle, so the metric reflects the state at that moment. This is acceptable because:

  1. Metrics are inherently point-in-time snapshots
  2. The ticker runs frequently (every checkInterval)
  3. The alternative (calling inside submission goroutines) would create race conditions

Potential Issue - Stale Reads:
There's a minor timing concern: headersNb is read at line 187, dataNb is read at line 251, and the metric is set at line 311. If headers complete submission between lines 187 and 311, the metric might be slightly stale for one tick cycle.

Severity: Low - This is acceptable for metrics purposes and much better than the previous implementation which had similar timing issues plus additional complexity.
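If this window ever needed to be tightened, one hypothetical option (not what this PR does) is to read both counts immediately before setting the gauge. A minimal sketch follows; the receiver type name is illustrative, while the field and method names follow the excerpts quoted in this review:

// Hypothetical alternative (not what the PR does): re-read both pending
// counts right before updating the gauge, narrowing the staleness window
// described above. The receiver type name is illustrative.
func (s *submitter) updatePendingBlobsMetric() {
    headersNb := s.cache.NumPendingHeaders()
    dataNb := s.cache.NumPendingData()
    s.metrics.DASubmitterPendingBlobs.Set(float64(headersNb + dataNb))
}

Given the low severity noted above, the simpler once-per-tick snapshot used in the PR is a reasonable trade-off.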

3. Performance & Resource Efficiency ✅

Improvements:

  • Reduced function call overhead: No longer passing and invoking getTotalPendingFn multiple times per submission
  • Fewer metric updates: Metric is updated once per ticker cycle instead of 3-5 times per submission attempt
  • Cleaner goroutine lifecycle: Submission goroutines no longer need to call back to get pending counts

No Performance Regressions: The removal of metric updates from retry paths is actually beneficial - it reduces unnecessary work during backpressure scenarios.

4. Error Handling & Resilience ✅

No changes to error handling logic. The refactoring is purely about metrics placement, not error paths.

5. Testing & Reliability ✅

Test Updates:
All test calls to submitToDA have been correctly updated to remove the getTotalPendingFn parameter:

  • da_submitter_mocks_test.go:82 (removed nil parameter)
  • da_submitter_mocks_test.go:130 (removed nil parameter)
  • da_submitter_mocks_test.go:180 (removed nil parameter)
  • da_submitter_mocks_test.go:224 (removed nil parameter)
  • da_submitter_mocks_test.go:267 (removed nil parameter)

Test Coverage:
The existing tests continue to validate:

  • Retry logic (mempool, unknown errors)
  • Batch halving on StatusTooBig
  • Partial success scenarios
  • Gas price handling

Missing Test Coverage:
There are no tests specifically validating the DASubmitterPendingBlobs metric updates in the new location. However, this is acceptable since:

  1. The metric update is trivial (just a Set call)
  2. Integration/E2E tests would catch regressions in actual usage
  3. Testing metrics in unit tests often adds minimal value
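If such a test were ever wanted, a minimal sketch is shown below. It constructs its own gauge via prometheus/client_golang (the real DASubmitterPendingBlobs may be wired through a different metrics abstraction), so treat it as illustrative only:

package da_test

import (
    "testing"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/testutil"
)

// Sketch only: uses a locally constructed gauge rather than the real
// DASubmitterPendingBlobs, whose concrete type is not shown in this PR.
func TestPendingBlobsGaugeSketch(t *testing.T) {
    pendingBlobs := prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "da_submitter_pending_blobs", // illustrative name
        Help: "Total number of blobs awaiting DA submission (backlog)",
    })

    headersNb, dataNb := uint64(3), uint64(2) // stand-ins for cache counts
    pendingBlobs.Set(float64(headersNb + dataNb))

    if got := testutil.ToFloat64(pendingBlobs); got != 5 {
        t.Fatalf("want 5 pending blobs, got %v", got)
    }
}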

6. Documentation & Code Clarity ✅

CHANGELOG Entry:
The CHANGELOG entry is clear and concise:

- Proper counting of `DASubmitterPendingBlobs` metrics. [#3038]

Code Comments:
The new code includes a helpful comment at submitter.go:310:

// Update metrics with current pending counts
s.metrics.DASubmitterPendingBlobs.Set(float64(headersNb + dataNb))

This clearly documents the intent.

Suggestion for Future Enhancement:
Consider adding a comment explaining why the metric is updated here (centralized, consistent snapshot) vs. in submitToDA. For example:

// Update metrics with current pending counts.
// We update here (in the ticker loop) rather than in submitToDA to ensure
// a consistent snapshot of total backlog across both headers and data.

7. Security ✅

No security implications. This is a pure refactoring of metrics collection.


Additional Observations

Mock Cleanup ✅

The removal of unused Sync mock code from test/mocks/store.go is good housekeeping. This appears to be dead code from a previous refactoring.

Metric Semantics

The metric DASubmitterPendingBlobs has the help text: "Total number of blobs awaiting DA submission (backlog)" (metrics.go:209).

The new implementation correctly reports headersNb + dataNb, which is indeed the total backlog. The previous implementation scattered updates throughout retry logic, which could report inconsistent values.
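For reference, a gauge with that help text could be declared roughly as follows; this is a hypothetical prometheus/client_golang sketch, not the project's actual metrics.go definition:

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical declaration; the namespace and metric name are illustrative.
var daSubmitterPendingBlobs = prometheus.NewGauge(prometheus.GaugeOpts{
    Namespace: "da_submitter",
    Name:      "pending_blobs",
    Help:      "Total number of blobs awaiting DA submission (backlog)",
})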


Recommendations

1. Consider Adding Debug Logging (Optional)

To aid in troubleshooting metric discrepancies, consider adding debug logging:

// Update metrics with current pending counts
totalPending := headersNb + dataNb
s.logger.Debug().
    Uint64("pending_headers", headersNb).
    Uint64("pending_data", dataNb).
    Uint64("total_pending_blobs", totalPending).
    Msg("updating pending blobs metric")
s.metrics.DASubmitterPendingBlobs.Set(float64(totalPending))

This would help operators correlate metric values with system behavior.

2. CI/CD Validation

Ensure that:

  • ✅ All linters pass (make lint)
  • ✅ All tests pass (make test)
  • ✅ Integration tests validate metrics are reported correctly

Based on the PR checks, Buf validation has passed. No blockers identified.


Conclusion

This is a clean, well-executed refactoring that improves code quality and metric accuracy. The change:

  • ✅ Simplifies the submitToDA API
  • ✅ Centralizes metric collection for better consistency
  • ✅ Maintains thread-safety
  • ✅ Has no performance regressions
  • ✅ Updates all tests appropriately
  • ✅ Includes proper CHANGELOG documentation

No blocking issues found. The code is ready to merge.


Great work on this cleanup! The centralized metric approach is more maintainable and accurate than the previous scattered updates.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request correctly refactors the logic for counting pending blobs for metrics. The previous implementation was flawed as it would overwrite the metric with either the pending header or data count, instead of their sum. The new logic moves the metric update to the daSubmissionLoop, where it is periodically and correctly updated with the total number of pending items. This is a solid fix. I've also suggested a small improvement to update two other related metrics for more granular monitoring.

}

// Update metrics with current pending counts
s.metrics.DASubmitterPendingBlobs.Set(float64(headersNb + dataNb))
Contributor


Severity: medium

Since you're already calculating headersNb and dataNb, it would be beneficial to also update the PendingHeadersCount and PendingDataCount metrics here. This would provide more granular insight into the pending items.

			s.metrics.PendingHeadersCount.Set(float64(headersNb))
			s.metrics.PendingDataCount.Set(float64(dataNb))
			s.metrics.DASubmitterPendingBlobs.Set(float64(headersNb + dataNb))

@codecov

codecov bot commented Jan 30, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 55.33%. Comparing base (455b6c1) to head (958f53c).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3038      +/-   ##
==========================================
- Coverage   55.33%   55.33%   -0.01%     
==========================================
  Files         117      117              
  Lines       11685    11676       -9     
==========================================
- Hits         6466     6461       -5     
+ Misses       4494     4492       -2     
+ Partials      725      723       -2     
Flag: combined · Coverage Δ: 55.33% <100.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

