[ https://issues.apache.org/jira/browse/MRESOLVER-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17708803#comment-17708803 ]

ASF GitHub Bot commented on MRESOLVER-349:
------------------------------------------

cstamas commented on code in PR #276:
URL: https://github.com/apache/maven-resolver/pull/276#discussion_r1158326240


##########
src/site/markdown/configuration.md:
##########
@@ -92,7 +92,9 @@ Option | Type | Description | Default Value | Supports Repo ID Suffix
 `aether.syncContext.named.factory` | String | Name of the named lock factory implementing the `org.eclipse.aether.named.NamedLockFactory` interface. | `"rwlock-local"` | no
 `aether.syncContext.named.hashing.depth` | int | The directory depth to "spread" hashes in git-like fashion, integer between 0 and 4 (inclusive). | 2 | no
 `aether.syncContext.named.nameMapper` | String | Name of name mapper implementing the `org.eclipse.aether.internal.impl.synccontext.named.NameMapper` interface. | `"gav"` | no
-`aether.syncContext.named.time` | long | Amount of time a synchronization context shall wait to obtain a lock. | 30 | no
+`aether.syncContext.named.retry` | int | Count of retries SyncContext adapter should attempt, when obtaining locks. | `2` | no

Review Comment:
   yup, agreed. _Ideally_ it should be 0, but this gives an opportunity to tune it
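Resolver configuration options such as the one added in this diff are typically passed to Maven as `-D` system properties. A hypothetical invocation might look like the following; the property names come from the table above, while the value `4` is an illustrative tuning choice (the documented default is `2`):

```shell
# Illustrative only: tune the named-lock retry count for one build.
# Property names are from the configuration table; the value 4 is a
# hypothetical choice, not a recommendation.
mvn -Daether.syncContext.named.retry=4 \
    -Daether.syncContext.named.nameMapper=gav \
    verify
```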





> Adapter when locking should "give up and retry"
> -----------------------------------------------
>
>                 Key: MRESOLVER-349
>                 URL: https://issues.apache.org/jira/browse/MRESOLVER-349
>             Project: Maven Resolver
>          Issue Type: Task
>          Components: Resolver
>    Affects Versions: 1.9.7
>            Reporter: Tamas Cservenak
>            Assignee: Tamas Cservenak
>            Priority: Major
>             Fix For: 1.9.8
>
>
> Somewhat related to MRESOLVER-346: the sync context (the named lock 
> adapter, to be more specific), instead of holding on to as many locks as it 
> has obtained so far (and causing lock timeouts), should simply give up all 
> of them and retry.
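The "give up all and retry" behavior described in the issue can be sketched as an all-or-nothing lock acquisition loop: try to take every lock with a timeout, and on the first failure release everything acquired so far before retrying. This is a sketch using plain `java.util.concurrent` locks, not the actual resolver code; the class name `AllOrNothingLocker` and its parameters are invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of "give up and retry" locking: either all locks
// are acquired, or everything acquired so far is released and the whole
// attempt is repeated, up to maxRetries extra attempts.
public class AllOrNothingLocker {

    public static boolean lockAll(List<Lock> locks, int maxRetries, long waitMs)
            throws InterruptedException {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            Deque<Lock> acquired = new ArrayDeque<>();
            boolean failed = false;
            for (Lock lock : locks) {
                if (lock.tryLock(waitMs, TimeUnit.MILLISECONDS)) {
                    acquired.push(lock);          // remember what we hold
                } else {
                    failed = true;                // someone else holds it
                    break;
                }
            }
            if (!failed) {
                return true;                      // all locks held
            }
            // Give up: release everything in reverse order, then retry,
            // so other threads holding partial lock sets can make progress.
            while (!acquired.isEmpty()) {
                acquired.pop().unlock();
            }
        }
        return false;                             // retries exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        List<Lock> locks = List.of(new ReentrantLock(), new ReentrantLock());
        System.out.println(lockAll(locks, 2, 100) ? "acquired" : "gave up");
    }
}
```

Releasing all partially acquired locks before retrying is what avoids the lock-timeout pile-ups the issue describes, at the cost of some wasted work per failed attempt.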



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
