Conversation

Contributor

@ProjectsByJackHe ProjectsByJackHe commented Jul 16, 2025

Description

Extends PR #767, which added the plumbing for TX queues to receive updates when the NIC's offload state changes via a ring flag, along with a socket option to query the current state.

This pull request adds the same capability for RX queues, so that sockets bound to RX can query the offload state.

To accomplish this, some significant changes were made in XDPLWF and the XDP core logic.

Namely:

  • Added a separate LWF_generic queue to track objects in the XDP core layers from the XDPLWF layer, so that there is a 1-1 mapping between an xdp_lwf_generic_rx_queue and the core xdp_rx_queue. This was a problem before because the XDPLWF layer only created a generic RX queue when a program was added, which meant there was no way to bubble up NDIS notifications until a program was attached. That made sense before, since there was no need to preemptively create a generic queue prior to attaching to the datapath. Now that we subscribe to NDIS offload state change notifications, we need a 1-1 mapping somewhere.

  • Fixed a few bugs in the XDP core logic where a pointer to an object was not cleared after the object was deleted.

  • Wired up the RX offload-changed notification in the XDP core logic so that the LWF / NDIS notifications bubbled up can inform the shared XSK rings.

  • Modified when RX queue notifications are registered. Previously, this happened when the socket was activated. That no longer works, because we allocate provider references as soon as we bind() an XSK; in cases where the underlying interface gets reset, notifications must already be registered so that the rundown code paths can properly clean up the XSK/RX queue before we deallocate the interface set.
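The overall shape of the change above can be sketched as a small C model: an NDIS offload notification bubbles up to the queue, which sets a shared ring flag; the app then uses a socket-option-style query to read the current state. All names here (RING_FLAG_OFFLOAD_CHANGED, RX_QUEUE, the two functions) are illustrative assumptions, not the real XDP-for-Windows identifiers.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical names for illustration; the real XDP-for-Windows types
 * (xdp_lwf_generic_rx_queue, xdp_rx_queue, the XSK ring flags) differ. */

#define RING_FLAG_OFFLOAD_CHANGED 0x1u

typedef struct {
    uint32_t RingFlags;      /* shared XSK ring flags visible to the app */
    bool ChecksumRxEnabled;  /* current NIC RX checksum offload state */
} RX_QUEUE;

/* Bubble an NDIS offload state change up to the shared XSK ring:
 * record the new state and set a ring flag so the app knows to re-query. */
void RxQueueNotifyOffloadChange(RX_QUEUE *Queue, bool ChecksumRxEnabled)
{
    Queue->ChecksumRxEnabled = ChecksumRxEnabled;
    Queue->RingFlags |= RING_FLAG_OFFLOAD_CHANGED;
}

/* App-side query, standing in for the socket option: read the current
 * state and clear the changed flag. */
bool RxQueueQueryOffloadState(RX_QUEUE *Queue)
{
    Queue->RingFlags &= ~RING_FLAG_OFFLOAD_CHANGED;
    return Queue->ChecksumRxEnabled;
}
```

The flag-then-query split keeps the hot datapath cheap: the ring flag is a single bit the app can poll, and the full state is only fetched when the flag indicates something changed.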

[architecture diagram (.png)]

Testing

Added a new RX config test and a SpinXsk test to stress the new socket option.
(Local test results, while the CI is broken:)
[test result screenshots]

P.S. The other highlighted test does not attach a program; it just binds a socket to RX only.
P.P.S. I also ran SpinXsk for 5 minutes many times and it passed. CI is broken right now.

Documentation

N/A

Installation

N/A

@ProjectsByJackHe ProjectsByJackHe marked this pull request as ready for review July 19, 2025 23:45
@ProjectsByJackHe ProjectsByJackHe requested a review from a team as a code owner July 19, 2025 23:45
@ProjectsByJackHe ProjectsByJackHe force-pushed the jackhe/rx-csum-config-updates branch from b2c016c to 9bba5eb Compare August 23, 2025 01:09
@ProjectsByJackHe ProjectsByJackHe force-pushed the jackhe/rx-csum-config-updates branch from 9bba5eb to 5e56c56 Compare August 26, 2025 23:24
@ProjectsByJackHe ProjectsByJackHe changed the title Add plumbing to notify RX queues of any offload updates Add plumbing to notify RX-bound sockets of any offload updates Aug 27, 2025
@ProjectsByJackHe ProjectsByJackHe changed the title Add plumbing to notify RX-bound sockets of any offload updates Add plumbing to notify RX-bound sockets of any NDIS offload updates Aug 27, 2025
Contributor

Diagrams are great, and we definitely are lacking them in XDP, but .png files are not maintainable as the architecture changes over time. If you still have the source of this diagram, can you use a tool to convert it to a markdown-friendly format, or mermaid, etc.?

If not, we can take that task up the next time the architecture changes.

Contributor Author


Sure, I can look into a markdown format

XDP_OPEN_INTERFACE *OpenInterface;
XDP_CLOSE_INTERFACE *CloseInterface;
XDP_CREATE_RX_QUEUE *CreateRxQueue;
XDP_CREATE_RX_NOTIFY_QUEUE *CreateRxNotifyQueue;
Contributor

@mtfriesen mtfriesen Jan 7, 2026


I don't think we want to require XDP interface implementors to create a separate queue notification object with a different lifetime from each queue. Instead, XDP itself should receive notifications on each RX queue, similar to the pattern in the TX queues.

Then, to solve the problem of state changes across RX queues being created/deleted below things like XSKs, each XSK can just use its existing RX queue change notification handler to detect whether the offload state has changed.

This keeps the complexity within one component, and keeps the external API simple.
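A rough C sketch of this suggestion, under the assumption that the XSK already has a single RX queue change handler for attach/detach events: the offload change becomes one more event through that same handler, so no separately-lived notification object is needed. The enum and function names here are hypothetical stand-ins, not the real XDP-for-Windows API.

```c
#include <stdbool.h>

/* Hypothetical sketch of the reviewer's suggestion; real XDP-for-Windows
 * handler names, events, and signatures differ. */

typedef enum {
    RX_QUEUE_NOTIFY_ATTACH,
    RX_QUEUE_NOTIFY_DETACH,
    RX_QUEUE_NOTIFY_OFFLOAD_CHANGED, /* new event, delivered via the same handler */
} RX_QUEUE_NOTIFY_EVENT;

typedef struct {
    bool OffloadStateStale; /* XSK-side cached view needs a refresh */
} XSK;

/* The XSK's existing RX queue change handler, extended with one case:
 * the offload change rides the queue's own lifetime, so no separate
 * notification object is required. */
void XskRxQueueNotifyHandler(XSK *Xsk, RX_QUEUE_NOTIFY_EVENT Event)
{
    switch (Event) {
    case RX_QUEUE_NOTIFY_OFFLOAD_CHANGED:
        Xsk->OffloadStateStale = true; /* re-query on the next socket option call */
        break;
    default:
        /* existing attach/detach handling unchanged */
        break;
    }
}
```

The design trade-off the comment describes is visible here: the external interface contract stays one-notification-path-per-queue, and the extra state lives inside the XSK.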

Contributor Author

@ProjectsByJackHe ProjectsByJackHe Jan 10, 2026


Hey, sorry for the late reply; it's been over 5 months since I last worked on this PR, and it took a while to regain context.

The problem is a lifecycle mismatch. TX queues are created when bind() is called on XSKs. RX queues are created only when a program is attached to a bound RX socket, per the XDP core logic. The API expectation is that once you call bind() on an RX or TX socket, you should be able to receive offload notifications. Originally, there were only ATTACH / DETACH / DETACH_COMPLETE RX queue notifications, which aligned nicely with the lifecycle of RX queues. But this new notification, RX_OFFLOAD_CURRENT_CONFIG, can be delivered anytime after bind() is called. This ties back to our previous discussion about apps having to attach a program before receiving these offload notifications.

How can RX-bound XSKs receive notifications if there is no underlying RX queue with the handle to propagate NDIS (or other provider) offload notifications?

If I misunderstood your suggestion, please let me know.
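The lifecycle mismatch in the reply above can be reduced to a tiny C model: after bind() there is a provider reference but no RX queue yet, so a queue-delivered notification has nowhere to land until a program is attached. Every name here (XSK, XskBind, XskAttachProgram) is an illustrative assumption, not the actual XDP-for-Windows code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of the lifecycle mismatch described above;
 * names are illustrative, not the real XDP-for-Windows types. */

typedef struct { int Dummy; } RX_QUEUE;

typedef struct {
    bool Bound;
    RX_QUEUE *RxQueue; /* created only once a program is attached */
} XSK;

void XskBind(XSK *Xsk) { Xsk->Bound = true; }      /* no RX queue exists yet */
void XskAttachProgram(XSK *Xsk, RX_QUEUE *Queue) { Xsk->RxQueue = Queue; }

/* An offload notification arriving after bind() but before program attach
 * has no RX queue handle to propagate through -- the mismatch in question. */
bool XskCanReceiveOffloadNotification(const XSK *Xsk)
{
    return Xsk->Bound && Xsk->RxQueue != NULL;
}
```

This is the window the PR's separate LWF_generic queue is meant to cover: something queue-shaped has to exist from bind() onward if notifications are to be delivered in that window.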
