OCPNODE-4179: Migrating test case OCP-80983 from openshift-tests-private to origin #30899
Chandan9112 wants to merge 1 commit into openshift:main
Conversation
Pipeline controller notification: for optional jobs, comment to trigger them. This repository is configured in automatic mode.
@Chandan9112: This pull request references OCPNODE-4179 which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Skipping CI for Draft Pull Request.
Note: Reviews paused. This branch appears to be under active development; to avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Moved MicroShift detection to a Describe-level BeforeEach that skips the entire Describe on MicroShift; removed per-test MicroShift checks. Added an It test that verifies cgroupv2 is the default and that cgroupv1 cannot be set.
Estimated code review effort: 3 (Moderate), ~20 minutes.
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure. golangci-lint (2.11.4): can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: Chandan9112. Needs approval from an approver in each of the affected files. The full list of commands accepted by this bot can be found here.
test/extended/node/node_e2e/node.go
Outdated
})

//author: cmaurya@redhat.com
g.It("should verify cgroupv2 is the default and cgroupv1 cannot be set [OCP-80983] [apigroup:config.openshift.io] [apigroup:machineconfiguration.openshift.io]", func() {
Reviewer: Add the "[OTP]" annotation to the names of tests ported from openshift-tests-private. Also, I think there is no need to include the Polarion test case name; [OCP-80983] [apigroup:config.openshift.io] [apigroup:machineconfiguration.openshift.io] can be removed.
Author: Changed in the latest commit.
Force-pushed d167fb2 → c6a09a4
test/extended/node/node_e2e/node.go
Outdated
nodeConfig.Spec.CgroupMode = configv1.CgroupMode("v1")
_, updateErr := oc.AdminConfigClient().ConfigV1().Nodes().Update(ctx, nodeConfig, metav1.UpdateOptions{})
o.Expect(updateErr).To(o.HaveOccurred(), "expected API server to reject cgroupMode v1, but update succeeded")
e2e.Logf("cgroupMode v1 correctly rejected with error: %v", updateErr)
Reviewer: When changing cgroup from v2 to v1, the message should look like: error: nodes.config.openshift.io "cluster" is invalid.
Reviewer: Please update the error log message.
Author: Changed in the latest commit. The error message now looks like this:
The Node "cluster" is invalid: spec.cgroupMode: Unsupported value: "v1": supported values: "v2", ""
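The rejection above comes from API-side validation of spec.cgroupMode. A minimal standalone sketch of that allowed-values check, mirroring only the error message quoted here (function names and structure are illustrative, not the actual openshift/api validation code):

```go
package main

import "fmt"

// supportedCgroupModes mirrors the values the API accepts for
// spec.cgroupMode per the error message above: "v2" or "" (default).
var supportedCgroupModes = map[string]bool{"v2": true, "": true}

// validateCgroupMode returns an error shaped like the API's rejection
// when an unsupported value such as "v1" is requested.
func validateCgroupMode(mode string) error {
	if supportedCgroupModes[mode] {
		return nil
	}
	return fmt.Errorf(`spec.cgroupMode: Unsupported value: %q: supported values: "v2", ""`, mode)
}

func main() {
	for _, m := range []string{"v2", "", "v1"} {
		if err := validateCgroupMode(m); err != nil {
			fmt.Printf("%q rejected: %v\n", m, err)
		} else {
			fmt.Printf("%q accepted\n", m)
		}
	}
}
```

Only "v1" (or any other unknown value) produces the error; "v2" and the empty default pass.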
Force-pushed c6a09a4 → 0fb028c
@Chandan9112: This pull request references OCPNODE-4179 which is a valid Jira issue.
Force-pushed 0fb028c → 332f7a4
Scheduling required tests:
test/extended/node/node_e2e/node.go
Outdated
g "github.com/onsi/ginkgo/v2"
o "github.com/onsi/gomega"

Author: Removed in the latest commit.
Risk analysis has seen new tests most likely introduced by this PR. New tests seen in this PR at sha: 332f7a4
test/extended/node/node_e2e/node.go
Outdated
g.By("2) Changing cgroup from v2 to v1 should result in error")
output, err := oc.AsAdmin().WithoutNamespace().Run("patch").Args("nodes.config.openshift.io", "cluster", "-p", `{"spec": {"cgroupMode": "v1"}}`, "--type=merge").Output()
o.Expect(err).Should(o.HaveOccurred())
o.Expect(strings.Contains(output, "Unsupported value: \"v1\"")).Should(o.BeTrue())

Force-pushed c7c5091 → c2c8ffd
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/extended/node/node_e2e/node.go (1)
19-26: Cache the MicroShift probe result instead of re-running it before every test.
exutil.IsMicroShiftCluster can poll for minutes; putting it in BeforeEach repeats that cost for every It and increases timeout/flakiness risk as this block grows. Compute it once per Describe and reuse the result in BeforeEach.
Suggested refactor:

+import "sync"
+
 var _ = g.Describe("[sig-node] [Jira:Node/Kubelet] Kubelet, CRI-O, CPU manager", func() {
 	var (
 		oc = exutil.NewCLIWithoutNamespace("node").AsAdmin()
+		microShiftOnce sync.Once
+		isMicroShift   bool
+		microShiftErr  error
 	)
 	// Skip all tests on MicroShift clusters as MachineConfig resources are not available
 	g.BeforeEach(func() {
-		isMicroShift, err := exutil.IsMicroShiftCluster(oc.AdminKubeClient())
-		o.Expect(err).NotTo(o.HaveOccurred())
+		microShiftOnce.Do(func() {
+			isMicroShift, microShiftErr = exutil.IsMicroShiftCluster(oc.AdminKubeClient())
+		})
+		o.Expect(microShiftErr).NotTo(o.HaveOccurred())
 		if isMicroShift {
 			g.Skip("Skipping test on MicroShift cluster")
 		}
 	})
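The sync.Once caching pattern suggested above can be demonstrated without a cluster. A minimal sketch where the probe function and call counter are stand-ins for the expensive exutil.IsMicroShiftCluster call (both are hypothetical names):

```go
package main

import (
	"fmt"
	"sync"
)

// probeCalls counts invocations of the stand-in expensive probe.
var probeCalls int

// expensiveProbe stands in for exutil.IsMicroShiftCluster, which may
// poll for minutes; here it just records that it ran.
func expensiveProbe() (bool, error) {
	probeCalls++
	return false, nil
}

var (
	once         sync.Once
	isMicroShift bool
	probeErr     error
)

// cachedProbe mirrors the BeforeEach shape: the probe body runs at most
// once, and every later call reuses the cached result and error.
func cachedProbe() (bool, error) {
	once.Do(func() {
		isMicroShift, probeErr = expensiveProbe()
	})
	return isMicroShift, probeErr
}

func main() {
	for i := 0; i < 3; i++ {
		v, err := cachedProbe()
		fmt.Println(v, err)
	}
	fmt.Println("probe ran", probeCalls, "time(s)") // probe ran 1 time(s)
}
```

Three "BeforeEach" invocations, one actual probe: exactly the cost reduction the review comment is after.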
test/extended/node/node_e2e/node.go
Outdated
output, err := oc.AsAdmin().WithoutNamespace().Run("patch").Args("nodes.config.openshift.io", "cluster", "-p", `{"spec": {"cgroupMode": "v1"}}`, "--type=merge").Output()
o.Expect(err).Should(o.HaveOccurred())
o.Expect(output).To(o.ContainSubstring("The Node \"cluster\" is invalid: spec.cgroupMode: Unsupported value: \"v1\": supported values: \"v2\", \"\""))
Line 92 is too strict and can make the test flaky across message-format changes.
The test intent is to verify rejection of spec.cgroupMode: "v1", but matching the full sentence ties the test to exact wording/casing. Assert stable fragments instead:

 output, err := oc.AsAdmin().WithoutNamespace().Run("patch").Args("nodes.config.openshift.io", "cluster", "-p", `{"spec": {"cgroupMode": "v1"}}`, "--type=merge").Output()
 o.Expect(err).Should(o.HaveOccurred())
-o.Expect(output).To(o.ContainSubstring("The Node \"cluster\" is invalid: spec.cgroupMode: Unsupported value: \"v1\": supported values: \"v2\", \"\""))
+o.Expect(output).To(o.ContainSubstring("spec.cgroupMode: Unsupported value: \"v1\""))
+o.Expect(output).To(o.ContainSubstring("supported values: \"v2\", \"\""))
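Why fragment assertions are more resilient can be shown outside Ginkgo. A small sketch (the two error wordings are illustrative, not captured API output) where a fragment check accepts both phrasings of the same rejection:

```go
package main

import (
	"fmt"
	"strings"
)

// containsAll reports whether every stable fragment appears in the output.
func containsAll(output string, fragments ...string) bool {
	for _, f := range fragments {
		if !strings.Contains(output, f) {
			return false
		}
	}
	return true
}

func main() {
	// Two hypothetical wordings of the same API rejection.
	old := `The Node "cluster" is invalid: spec.cgroupMode: Unsupported value: "v1": supported values: "v2", ""`
	new := `nodes.config.openshift.io "cluster" is invalid: spec.cgroupMode: Unsupported value: "v1": supported values: "v2", ""`

	frags := []string{`spec.cgroupMode: Unsupported value: "v1"`, `supported values: "v2", ""`}
	fmt.Println(containsAll(old, frags...)) // true
	fmt.Println(containsAll(new, frags...)) // true: survives the reworded prefix
}
```

Matching the full sentence would fail on the second wording even though the rejection it encodes is identical.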
Risk analysis has seen new tests most likely introduced by this PR. New tests seen in this PR at sha: c2c8ffd
Force-pushed c2c8ffd → 5a3753d
Job Failure Risk Analysis for sha: 5a3753d
Risk analysis has seen new tests most likely introduced by this PR. New tests seen in this PR at sha: 5a3753d
test/extended/node/node_e2e/node.go
Outdated
g.By("1) Check cgroup version")
workerNode, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("nodes", "-l", "node-role.kubernetes.io/worker", "-o=jsonpath={.items[0].metadata.name}").Output()
o.Expect(err).NotTo(o.HaveOccurred())
cgroupV, err := oc.AsAdmin().WithoutNamespace().Run("debug").Args("node/"+workerNode, "-n", "default", "--", "chroot", "/host", "/bin/bash", "-c", "stat -c %T -f /sys/fs/cgroup").Output()

Reviewer: Use nodeutils.ExecOnNodeWithChroot.

Force-pushed 5a3753d → 150a58c
Actionable comments posted: 1
♻️ Duplicate comments (1)
test/extended/node/node_e2e/node.go (1)
91-93 (⚠️ Potential issue, 🟠 Major): Use stable fragments for the rejection check.
Line 93 still matches one exact validation sentence. That is brittle across API message wording changes; assert on stable fragments instead:

 output, err := oc.AsAdmin().WithoutNamespace().Run("patch").Args("nodes.config.openshift.io", "cluster", "-p", `{"spec": {"cgroupMode": "v1"}}`, "--type=merge").Output()
 o.Expect(err).Should(o.HaveOccurred())
-o.Expect(output).To(o.ContainSubstring("spec.cgroupMode: Unsupported value: \"v1\": supported values: \"v2\", \"\""))
+o.Expect(output).To(o.ContainSubstring("spec.cgroupMode"))
+o.Expect(output).To(o.ContainSubstring("Unsupported value"))
+o.Expect(output).To(o.ContainSubstring(`"v1"`))
+o.Expect(output).To(o.ContainSubstring(`"v2"`))
test/extended/node/node_e2e/node.go
Outdated
g.By("1) Check cgroup version")
workerNode, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("nodes", "-l", "node-role.kubernetes.io/worker", "-o=jsonpath={.items[0].metadata.name}").Output()
o.Expect(err).NotTo(o.HaveOccurred())
cgroupV, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "stat -c %T -f /sys/fs/cgroup")
o.Expect(err).NotTo(o.HaveOccurred())
e2e.Logf("cgroup version info is: [%v]\n", cgroupV)
o.Expect(cgroupV).To(o.ContainSubstring("cgroup2fs"))

g.By("2) Changing cgroup from v2 to v1 should result in error")
This It still misses part of the migrated coverage.
Line 83 hard-codes .items[0], so only one worker is probed, and the helper used at Line 85 does not verify that node is Ready first. That leaves the kernel-arg assertion out entirely and can miss non-Ready or misconfigured workers.
Possible shape of the fix
- workerNode, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("nodes", "-l", "node-role.kubernetes.io/worker", "-o=jsonpath={.items[0].metadata.name}").Output()
+ workerNodes, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("nodes", "-l", "node-role.kubernetes.io/worker", "-o=jsonpath={range .items[*]}{.metadata.name}{'\n'}{end}").Output()
o.Expect(err).NotTo(o.HaveOccurred())
- cgroupV, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "stat -c %T -f /sys/fs/cgroup")
- o.Expect(err).NotTo(o.HaveOccurred())
- e2e.Logf("cgroup version info is: [%v]\n", cgroupV)
- o.Expect(cgroupV).To(o.ContainSubstring("cgroup2fs"))
+ workers := strings.Fields(workerNodes)
+ o.Expect(workers).NotTo(o.BeEmpty(), "expected at least one worker node")
+ for _, workerNode := range workers {
+ ready, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("node", workerNode, "-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}").Output()
+ o.Expect(err).NotTo(o.HaveOccurred())
+ o.Expect(strings.TrimSpace(ready)).To(o.Equal("True"))
+
+ cgroupV, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "stat -c %T -f /sys/fs/cgroup")
+ o.Expect(err).NotTo(o.HaveOccurred())
+ o.Expect(cgroupV).To(o.ContainSubstring("cgroup2fs"))
+
+ cmdline, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "cat /proc/cmdline")
+ o.Expect(err).NotTo(o.HaveOccurred())
+ o.Expect(cmdline).To(o.ContainSubstring("systemd.unified_cgroup_hierarchy=1"))
+ o.Expect(cmdline).To(o.ContainSubstring("cgroup_no_v1"))
+ }
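The per-node kernel-arg check in the suggested fix above boils down to scanning /proc/cmdline for the cgroupv2 boot flags. A standalone sketch of that scan (the sample command line is illustrative, not captured from a real node):

```go
package main

import (
	"fmt"
	"strings"
)

// hasCgroupV2BootArgs reports whether a kernel command line carries the
// flags the suggested test asserts on: unified hierarchy enabled and
// cgroup v1 controllers disabled.
func hasCgroupV2BootArgs(cmdline string) bool {
	var unified, noV1 bool
	for _, arg := range strings.Fields(cmdline) {
		switch {
		case arg == "systemd.unified_cgroup_hierarchy=1":
			unified = true
		case strings.HasPrefix(arg, "cgroup_no_v1"):
			noV1 = true
		}
	}
	return unified && noV1
}

func main() {
	// Illustrative cmdline for a cgroupv2 node.
	sample := "BOOT_IMAGE=(hd0,gpt3)/vmlinuz root=UUID=abcd rw systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all"
	fmt.Println(hasCgroupV2BootArgs(sample)) // true
}
```

In the e2e test the cmdline would come from reading /proc/cmdline on each worker; here the parsing logic is isolated so it can be checked without a node.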
Force-pushed 150a58c → b8f904b
There was a problem hiding this comment.
♻️ Duplicate comments (1)
test/extended/node/node_e2e/node.go (1)
82-91 (⚠️ Potential issue, 🟠 Major): This still only validates one worker and skips the boot-arg part of the port.
Using .items[0] lets the test pass with a misconfigured or NotReady second worker, and there is no assertion for the expected cgroupv2 kernel args at all. That leaves the migrated OCP-80983 coverage incomplete.
Possible shape of the fix:

-workerNode, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("nodes", "-l", "node-role.kubernetes.io/worker", "-o=jsonpath={.items[0].metadata.name}").Output()
+workerNodes, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("nodes", "-l", "node-role.kubernetes.io/worker", "-o=jsonpath={range .items[*]}{.metadata.name}{'\n'}{end}").Output()
 o.Expect(err).NotTo(o.HaveOccurred())
-nodeStatus, err := oc.AsAdmin().Run("get").Args("nodes", workerNode, "-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}").Output()
-o.Expect(err).NotTo(o.HaveOccurred())
-o.Expect(nodeStatus).To(o.Equal("True"), "Worker node %s is not Ready", workerNode)
-cgroupV, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "stat -c %T -f /sys/fs/cgroup")
-o.Expect(err).NotTo(o.HaveOccurred())
-e2e.Logf("cgroup version info is: [%v]\n", cgroupV)
-o.Expect(cgroupV).To(o.ContainSubstring("cgroup2fs"))
+workers := strings.Fields(workerNodes)
+o.Expect(workers).NotTo(o.BeEmpty(), "expected at least one worker node")
+for _, workerNode := range workers {
+	nodeStatus, err := oc.AsAdmin().Run("get").Args("nodes", workerNode, "-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}").Output()
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(nodeStatus).To(o.Equal("True"), "Worker node %s is not Ready", workerNode)
+
+	cgroupV, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "stat -c %T -f /sys/fs/cgroup")
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(cgroupV).To(o.ContainSubstring("cgroup2fs"))
+
+	cmdline, err := nodeutils.ExecOnNodeWithChroot(oc, workerNode, "/bin/bash", "-c", "cat /proc/cmdline")
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(cmdline).To(o.ContainSubstring("systemd.unified_cgroup_hierarchy=1"))
+	o.Expect(cmdline).To(o.ContainSubstring("cgroup_no_v1"))
+}
/retest

1 similar comment

/retest
@Chandan9112: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Risk analysis has seen new tests most likely introduced by this PR. New Test Risks for sha: b8f904b.
New tests seen in this PR at sha: b8f904b
Adds automated test case OCP-80983 migrated from openshift-tests-private. The test validates that cgroupv2 is the default and that cgroupv1 cannot be set.
Here is the test case link: Polarion-80983
Note for reviewer:
Moved the MicroShift skip check from individual test cases to a g.BeforeEach block, following the same pattern used in node_swap.go and image_volume.go. This avoids duplicating the check inside every test case.
It passed successfully while executing on a live OCP 4.21 cluster:
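The BeforeEach pattern described in the note can be mimicked without Ginkgo or a cluster. A minimal sketch, with runTest as a hypothetical stand-in for how a Describe-level BeforeEach guards every It in the block:

```go
package main

import "fmt"

// runTest applies a shared precondition before every test body, the way
// a Describe-level BeforeEach guards all Its in the block, instead of
// each test repeating the check itself. It returns the outcome.
func runTest(name string, isMicroShift bool, body func()) string {
	// Shared "BeforeEach": skip on MicroShift so individual tests
	// no longer duplicate the check.
	if isMicroShift {
		return "SKIP " + name + ": MicroShift cluster"
	}
	body()
	return "PASS " + name
}

func main() {
	fmt.Println(runTest("cgroupv2 is default", false, func() {}))
	fmt.Println(runTest("cgroupv2 is default", true, func() {}))
}
```

The same single guard covers every test added to the block later, which is the motivation for hoisting it out of the individual cases.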