
Merge tests reports from "re-runs" to automatically ignore flaky test results#2354

Merged
bert-e merged 8 commits into development/2.14 from
improvement/ZENKO-5142
Mar 18, 2026

Conversation

@francoisferrand
Contributor

@francoisferrand francoisferrand commented Mar 16, 2026

To work around flaky tests, the test command's result will now be ignored. Instead, extra processing is done at the end of each job, in the archive-artifacts action:

  • retrieve the JUnit report from every earlier attempt (note: this only considers the current run; there is no general notion that a build is flaky)
  • merge these reports, by simply adding every test result to a single report. This is done with a Python script, as it does not require installing any extra package for XML processing.
  • rely on the mikepenz/action-junit-report action to compute the job status. We were already using it, it already has this functionality, and it knows how to handle such flaky tests (where there are multiple results for the same test).
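The merge step above can be sketched roughly as follows. This is a minimal illustration using only the standard library, not the actual script from this PR; the function name and the assumption that each report's root is either `<testsuites>` or a bare `<testsuite>` are mine.

```python
# Minimal sketch of the report-merging idea -- NOT the actual script from
# this PR. Each attempt is assumed to have produced one JUnit XML file.
# Standard library only, so nothing extra needs to be installed.
import xml.etree.ElementTree as ET

def merge_reports(paths):
    """Concatenate every <testsuite> from the given reports under one root."""
    merged = ET.Element("testsuites")
    for path in paths:
        root = ET.parse(path).getroot()
        # A report's root may wrap suites (<testsuites>) or be a bare <testsuite>.
        suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
        for suite in suites:
            merged.append(suite)
    return ET.ElementTree(merged)
```

A test that ran in several attempts then simply appears several times in the merged report, which is the "multiple results for the same test" shape that the JUnit-report action can recognize as flaky.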

In addition, to give some visibility on these flaky tests, the summary (which was already present) will now show the results from every attempt. This required running the mikepenz/action-junit-report action twice, though: once with the individual reports to build an easy-to-read summary, and once with the "merged" report to compute the job status. It looks like this:

[image: job summary showing per-attempt test results]

This approach does not solve or reduce flakiness, nor does it allow easily blacklisting specific flaky test results; however, it helps mitigate the worst-case scenario by statistically ensuring much faster convergence, since each attempt can only decrease the number of failed tests. Instead of requiring all tests to pass in a single run (where the probability of success can degrade quickly when multiple tests are flaky), we only require that each test passes once overall.
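The convergence argument can be made concrete with a back-of-envelope model (my own illustration, not from the PR): assume n flaky tests that each fail independently with probability p on any single attempt.

```python
# Toy model (an assumption for illustration, not from the PR): n independent
# flaky tests, each failing with probability p on any single attempt.
def p_single_run_green(n, p):
    # Without merging, all n tests must pass within the same run.
    return (1 - p) ** n

def p_merged_green(n, p, attempts):
    # With merged reports, each test only needs to pass once across `attempts` runs.
    return (1 - p ** attempts) ** n

n, p = 20, 0.1
print(p_single_run_green(n, p))   # ~0.12: a single run is rarely fully green
print(p_merged_green(n, p, 2))    # ~0.82 once two attempts are merged
print(p_merged_green(n, p, 3))    # ~0.98 after three
```

With 20 tests that each fail 10% of the time, a single fully green run happens only about once in eight tries, while merging two or three attempts makes success overwhelmingly likely.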

Issue: ZENKO-5142

@francoisferrand francoisferrand requested review from a team, benzekrimaha and maeldonn March 16, 2026 18:40
@bert-e
Contributor

bert-e commented Mar 16, 2026

Hello francoisferrand,

My role is to assist you with the merge of this
pull request. Please type @bert-e help to get information
on this process, or consult the user documentation.

Available options:

  • /after_pull_request: Wait for the given pull request id to be merged before continuing with the current one.
  • /bypass_author_approval: Bypass the pull request author's approval.
  • /bypass_build_status: Bypass the build and test status.
  • /bypass_commit_size: Bypass the check on the size of the changeset. (TBA)
  • /bypass_incompatible_branch: Bypass the check on the source branch prefix.
  • /bypass_jira_check: Bypass the Jira issue check.
  • /bypass_peer_approval: Bypass the pull request peers' approval.
  • /bypass_leader_approval: Bypass the pull request leaders' approval.
  • /approve: Instruct Bert-E that the author has approved the pull request. ✍️
  • /create_pull_requests: Allow the creation of integration pull requests.
  • /create_integration_branches: Allow the creation of integration branches.
  • /no_octopus: Prevent Wall-E from doing any octopus merge and use multiple consecutive merges instead.
  • /unanimity: Change review acceptance criteria from at least one reviewer to all reviewers.
  • /wait: Instruct Bert-E not to run until further notice.

Available commands:

  • /help: Print Bert-E's manual in the pull request.
  • /status: Print Bert-E's current status in the pull request. (TBA)
  • /clear: Remove all comments from Bert-E from the history. (TBA)
  • /retry: Re-start a fresh build. (TBA)
  • /build: Re-start a fresh build. (TBA)
  • /force_reset: Delete integration branches & pull requests, and restart the merge process from the beginning.
  • /reset: Try to remove integration branches unless there are commits on them which do not appear on the source branch.

Status report is not available.

@francoisferrand francoisferrand force-pushed the improvement/ZENKO-5142 branch from 5e1dcd1 to f6f5ab8 Compare March 17, 2026 17:18
@bert-e
Contributor

bert-e commented Mar 17, 2026

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • 2 peers

@scality scality deleted a comment from bert-e Mar 17, 2026
- name: Run init CI test
  run: bash run-e2e-test.sh "end2end" ${E2E_IMAGE_NAME}:${E2E_IMAGE_TAG} "end2end" "default"
  working-directory: ./.github/scripts/end2end
  continue-on-error: true
Contributor

Can you explain the continue-on-error?
Is it because it's required for the merging part to continue?

Contributor Author

If we don't use continue-on-error, the build will fail immediately on error: whatever we do afterwards, we cannot "recover" the build and make it pass.

@SylvainSenechal
Contributor

SylvainSenechal commented Mar 17, 2026

To work around flaky tests, the test command's result will now be ignored. Instead, extra processing is done at the end of each job, in the archive-artifacts action:

  • retrieve the JUnit report from every earlier attempt (note: this only considers the current run; there is no general notion that a build is flaky)
  • merge these reports, by simply adding every test result to a single report. This is done with a Python script, as it does not require installing any extra package for XML processing.
  • rely on the mikepenz/action-junit-report action to compute the job status. We were already using it, it already has this functionality, and it knows how to handle such flaky tests (where there are multiple results for the same test).

In addition, to give some visibility on these flaky tests, the summary (which was already present) will now show the results from every attempt. This required running the mikepenz/action-junit-report action twice, though: once with the individual reports to build an easy-to-read summary, and once with the "merged" report to compute the job status. It looks like this:

[image: job summary showing per-attempt test results]

This approach does not solve or reduce flakiness, nor does it allow easily blacklisting specific flaky test results; however, it helps mitigate the worst-case scenario by statistically ensuring much faster convergence, since each attempt can only decrease the number of failed tests. Instead of requiring all tests to pass in a single run (where the probability of success can degrade quickly when multiple tests are flaky), we only require that each test passes once overall.

Issue: ZENKO-5142

So on this run:
https://github.com/scality/Zenko/actions/runs/23207141364/job/67446761946
we have this result:
[image: job view showing the "Run e2e ctst" step as passed]

I see the Run e2e ctst step is marked as if it passed, but with this PR it will always be marked as passed even when there are failures, and instead we will have to look at the Archive step or the Summary to see what happened. Is that correct?

  • Is it possible to still show the "run e2e ctst" step as failed, or is it something controlled by GitHub that we can't do much about?
  • Is it possible to have a dedicated in-between step, maybe just before the archive, called something like "merge ctst tests reports"? Because here I find it a bit weird that we see the failure on the archive step.

Only partially related to this PR, but it might be an opportunity: the Archive artifacts step is super long, usually around ~7000 lines, and it's super annoying to have to scroll down to find the artifact. Could we either significantly reduce the amount of printed lines in this step (I don't think what's printed is super useful), or better, does GitHub offer some feature to display the artifact URL in the step title with a variable or something? Also, I just checked quickly: we should be able to make this link visible in the GitHub summary, and I think we already kind of do it, but I'm not sure it is correct, because the 4 artifacts I see here are Docker builds, not exactly the kind of URL I usually use to download artifacts (https://artifacts.scality.net/builds/github:scality:Zenko:staging-f6f5ab8b81.build-iso-and-end2end-test.8791) 🤔
[image: artifact links shown in the GitHub summary]

@francoisferrand
Contributor Author

So on this run: scality/Zenko/actions/runs/23207141364/job/67446761946 I see the Run e2e ctst step is marked as if it passed, but with this PR it will always be marked as passed even when there are failures, and instead we will have to look at the Archive step or the Summary to see what happened. Is that correct?

is this just about changing habits, or is there a real problem?

  • The status of the job is still displayed first and foremost (in the sidebar, typically), which IMO is the most important information.
  • The summary (also on the front page) shows the details of the failure, without having to dig through the logs or steps. Though unfortunately it is now displayed below the "Docker build summary"; maybe we can improve that later.
  • Indeed, the step is not marked as failed, so we cannot answer the question "did a test fail?" quite as immediately.
  • However, we can still easily answer the reverse question, "what failed?": either it is explicitly one of the deployment steps, or it is indeed "archive artifacts"... which is a proxy for test failure, since most (if not all) steps there are continue-on-error.
  • Is it possible to still show the "run e2e ctst" step as failed, or is it something controlled by GitHub that we can't do much about?

Not possible, unfortunately. In order for the build to be able to succeed, we must ignore the error (continue-on-error), and GitHub marks the step as successful in that case.

  • Is it possible to have a dedicated in-between step, maybe just before the archive, called something like "merge ctst tests reports"? Because here I find it a bit weird that we see the failure on the archive step.

Technically yes, but it has other drawbacks:

  • requires even more modifications to each job
  • there is a lot of coupling between this merging/analysis and the upload of artifacts, so I'd really keep them together

Only partially related to this PR, but it might be an opportunity: the Archive artifacts step is super long, usually around ~7000 lines, and it's super annoying to have to scroll down to find the artifact. Could we either significantly reduce the amount of printed lines in this step (I don't think what's printed is super useful), or better, does GitHub offer some feature to display the artifact URL in the step title with a variable or something? Also, I just checked quickly: we should be able to make this link visible in the GitHub summary, and I think we already kind of do it, but I'm not sure it is correct, because the 4 artifacts I see here are Docker builds, not exactly the kind of URL I usually use to download artifacts.

  • Reducing the length of the archive artifacts logs is out of scope for this PR, but feel free to improve it :)
  • You don't need to scroll: there is another summary with the link to the artifacts, directly on the front page of the build:
[image: build front page summary with artifact links]

@francoisferrand francoisferrand requested review from a team, DarkIsDude and delthas March 18, 2026 08:18
VM is destroyed anyway at the end of the test.

Issue: ZENKO-5142
- cache@v5
- checkout@v6
- create-github-app-token@v2
- login-action@v4

Issue: ZENKO-5142
@francoisferrand
Contributor Author

/after_pull_request=2357

@bert-e
Contributor

bert-e commented Mar 18, 2026

Waiting for other pull request(s)

The current pull request is locked by the after_pull_request option.

In order for me to merge this pull request, run the following actions first:

➡️ Merge the OPEN pull request:

Alternatively, delete all the after_pull_request comments from this pull request.

The following options are set: after_pull_request

@SylvainSenechal
Contributor

OK for the response; yeah, it will shift a bit the way we investigate errors, as I'm used to looking at the "run ctst" job directly, and I also usually don't look at the summary.
But this is just habit.

@francoisferrand francoisferrand changed the base branch from development/2.14 to w/2.14/improvement/ZENKO-5226 March 18, 2026 13:29
@francoisferrand
Contributor Author

/approve

@francoisferrand francoisferrand changed the base branch from w/2.14/improvement/ZENKO-5226 to development/2.14 March 18, 2026 13:54
@bert-e
Contributor

bert-e commented Mar 18, 2026

Waiting for other pull request(s)

The current pull request is locked by the after_pull_request option.

In order for me to merge this pull request, run the following actions first:

➡️ Merge the OPEN pull request:

Alternatively, delete all the after_pull_request comments from this pull request.

The following options are set: after_pull_request, approve

user: ${{ inputs.user }}
password: ${{ inputs.password }}
source: /tmp/artifacts
if: always() No newline at end of file
Contributor

don't forget to add a trailing newline 🙏

@@ -0,0 +1,132 @@
#!/usr/bin/env python3
Contributor

Should we have tests for this script somehow?

@bert-e bert-e merged commit 2111b11 into development/2.14 Mar 18, 2026
53 of 56 checks passed
@bert-e bert-e deleted the improvement/ZENKO-5142 branch March 18, 2026 18:38

6 participants