uvatbc commented on code in PR #1303:
URL: https://github.com/apache/solr/pull/1303#discussion_r1084776393
##########
.github/workflows/solrj-test-crave.yml:
##########
@@ -0,0 +1,41 @@
+name: SolrJ Tests
+
+on:
+ pull_request:
+ branches:
+ - 'main'
+ paths:
+ - '.github/workflows/solrj-test-crave.yml'
+ - 'solr/solrj/**'
+
+jobs:
+ test:
+ name: Run SolrJ Tests
+
+ runs-on: ubuntu-latest
+
+ steps:
+ # Setup
+ - uses: actions/checkout@v2
+ - name: Set up JDK 11
+ uses: actions/setup-java@v2
+ with:
+ distribution: 'temurin'
+ java-version: 11
+ java-package: jdk
+ - name: Grant execute permission for gradlew
+ run: chmod +x gradlew
+ - uses: actions/cache@v2
+ with:
+ path: |
+ ~/.gradle/caches
+ key: ${{ runner.os }}-gradle-solrj-${{ hashFiles('versions.lock') }}
+ restore-keys: |
+ ${{ runner.os }}-gradle-solrj-
+ ${{ runner.os }}-gradle-
+ - name: Get the Crave binary
+ run: curl -s https://raw.githubusercontent.com/accupara/crave/master/get_crave.sh | bash -s --
+ - name: Initialize gradle settings
+ run: ./crave run -- ./gradlew localSettings
Review Comment:
Re first comment: Yes, the output from `localSettings` will be retained
between subsequent `crave run`s.
Re second comment: I've changed the worker count to match the core count of the
remote build/test node, so it tracks whatever the ephemeral node is configured
with.
In other words, when we use a 16-core machine, we'll use 16 workers. When we
configure crave to use `n2d-standard-224` instances, `nproc` will return 224
and gradlew will automatically be told to use the maximum number of cores.
We can consider a linear scale factor on the core count in a later change,
once we understand and tune how well gradle scales across many cores.
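As a rough sketch of what that could look like (the `WORKER_SCALE` variable and the commented `crave` invocation are assumptions for illustration, not part of this PR), the worker count can be derived from `nproc` with room for a later scale factor:

```shell
# Hedged sketch: derive the Gradle worker count from the node's core count,
# with a hypothetical linear scale factor (default 1x) for later tuning.
CORES="$(nproc)"                    # core count of the (possibly ephemeral) node
SCALE="${WORKER_SCALE:-1}"          # assumed env var; 1x until gradle scaling is tuned
WORKERS="$(( CORES * SCALE ))"
echo "gradle workers: ${WORKERS}"
# Hypothetical invocation (--max-workers is Gradle's standard flag):
# ./crave run -- ./gradlew test --max-workers="${WORKERS}"
```

On an `n2d-standard-224` node this would yield 224 workers without any per-instance configuration.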
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]