HBASE-29838 Run Hadoop Check as a GitHub Action #7651
Conversation
Looks like we occupied the runner for 6h and then it was aborted.
https://infra.apache.org/github-actions-policy.html The policy here does not mention the 6 hour timeout... We can ask Infra about the rules and the size of the GitHub runners. Our Jenkins runners finished in 343m 23s, which was very close to 6 hours, so if the machines behind the GitHub runners are weaker, the build could easily take more than 6 hours...
My action itself has a timeout configured.
Or not. 6h is GH's hard limit, https://docs.github.com/en/actions/reference/limits
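For reference, a minimal sketch of what a job-level timeout looks like in a workflow; the job name and build step are illustrative, and on GitHub-hosted runners the effective ceiling stays at 360 minutes no matter what is set here:

```yaml
# Minimal sketch: a per-job timeout. The job name is hypothetical.
jobs:
  hadoop-check:
    runs-on: ubuntu-latest
    # GitHub-hosted runners are hard-capped at 6h (360 minutes);
    # timeout-minutes can only shorten a job, never extend past that cap.
    timeout-minutes: 360
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: mvn -B -DskipTests install
```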
Then maybe we should try self-hosted GitHub runners? For self-hosted runners the execution time limit is 5 days...
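If we went that way, the only workflow change needed is the runs-on target. The label set below is an assumption; it would have to match whatever labels Infra registers on the runners:

```yaml
# Sketch: point the job at a self-hosted runner instead of a hosted one.
# The labels are assumptions; they must match the registered runner's labels.
jobs:
  hadoop-check:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
```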
Force-pushed 1829212 to 352a9a1
Yes, we should bring this back to our CI discussions with Infra. Maybe we can borrow from the pool of new Jenkins workers while we continue to build this out. Yetus is supposed to provide smart, selective detection of module changes when deciding which tests to run. I think the new .github directory broke that for this run, so I've pushed a change to exclude it; maybe that will help. I'm also going to see if I can manually parallelize the unit test runs -- maybe break out three separate checks for the three main unit test groups or something like that.
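One possible shape for that parallelization, sketched below. It assumes the per-size Maven profiles in HBase's pom (runSmallTests, runMediumTests, runLargeTests) map onto the three groups; the job name and mvn invocation are illustrative:

```yaml
# Illustrative: fan the unit tests out as three parallel jobs via a matrix.
jobs:
  unit-tests:
    strategy:
      fail-fast: false          # let all groups finish even if one fails
      matrix:
        profile: [runSmallTests, runMediumTests, runLargeTests]
    runs-on: ubuntu-latest
    timeout-minutes: 360
    steps:
      - uses: actions/checkout@v4
      - name: Run ${{ matrix.profile }}
        run: mvn -B test -P${{ matrix.profile }}
```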
Maybe we could split the UT run out as a separate GitHub check? Or even further, we could split the UT run into two separate checks, one with -PrunDevTests (for small/medium tests) and one with -PrunLargeTests (for large tests).
Yep, that's exactly my thinking as well. Landing these other cleanup issues and I'll be back.
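In workflow terms, that two-check split might look roughly like this; the profile names come from the suggestion above, while the job names and mvn flags are illustrative:

```yaml
# Sketch of the two-check split: dev (small/medium) and large tests as
# independent jobs, so each shows up as its own status check on the PR.
jobs:
  dev-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: mvn -B test -PrunDevTests
  large-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: mvn -B test -PrunLargeTests
```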
Force-pushed 352a9a1 to adfd056
(!) A patch to the testing environment has been detected.
Okay, this is better. Module selection chose only hbase-examples for running the unit tests.
🎊 +1 overall

💔 -1 overall
Looking over our last successful nightly on master, the Large tests on hbase-server still took 7h. I'm looking for other ways to partition this up.
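One direction worth exploring is sharding hbase-server's large tests themselves across several jobs. This is purely a sketch: the dev-support/split_tests.sh helper below does not exist and would have to be written to emit a comma-separated class list for a given shard, which surefire's -Dtest accepts:

```yaml
# Hypothetical: shard hbase-server's large tests across three parallel jobs.
# dev-support/split_tests.sh is an assumed helper, not an existing script.
jobs:
  server-large-tests:
    strategy:
      fail-fast: false
      matrix:
        shard: [0, 1, 2]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run shard ${{ matrix.shard }} of 3
        run: |
          TESTS=$(./dev-support/split_tests.sh ${{ matrix.shard }} 3)
          mvn -B test -pl hbase-server -PrunLargeTests -Dtest="$TESTS"
```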
Force-pushed adfd056 to ad51411
(!) A patch to the testing environment has been detected.
🎊 +1 overall

💔 -1 overall