[
https://issues.apache.org/jira/browse/LUCENE-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175697#comment-13175697
]
Simon Willnauer commented on LUCENE-3667:
-----------------------------------------
-Dtests.sequential=true works well for me on restricted systems. But I agree:
I think we should have a setting that puts an upper bound on the thread count,
no matter how many CPUs are available.
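For reference, here is a minimal sketch (assuming a hypothetical
tests.threadcount property, not the actual Lucene build files) of how the
existing -Dtests.sequential=true switch and a hard upper bound could be
combined in Ant:

    <!-- Sketch only: honour -Dtests.sequential=true by forcing a single
         thread; tests.threadcount is a hypothetical property name. A
         -Dtests.threadcount=N given on the command line still wins, because
         command-line properties are set first and Ant properties are
         immutable. -->
    <condition property="tests.threadcount" value="1">
      <istrue value="${tests.sequential}"/>
    </condition>
    <!-- otherwise fall back to a conservative cap, regardless of how many
         CPUs the machine reports -->
    <property name="tests.threadcount" value="2"/>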
> Consider changing how we set the number of threads to use to run tests.
> -----------------------------------------------------------------------
>
> Key: LUCENE-3667
> URL: https://issues.apache.org/jira/browse/LUCENE-3667
> Project: Lucene - Java
> Issue Type: Improvement
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Minor
>
> The current way we set the number of threads to use is not expressive enough
> for some systems. My quad-core with hyper-threading is recognized as 8 CPUs,
> and since I can only override the number of threads per core, 8 is as low as
> I can go. 8 threads can be problematic for me: the amount of RAM they use can
> toss me into heavy paging, because I only have 8 GB of RAM, and the heavy
> paging can bring my whole system to a crawl. Without hacking the build, I
> don't have many workarounds.
> I'd like to propose that we switch from threadsPerProcessor to threadCount.
> In some ways that's not as nice, because it does not try to scale
> automatically per system. But that auto-scaling is often not ideal (hyper
> threading, wanting to do other work at the same time), so perhaps we just
> default to 1 or 2 threads and devs can override individually?
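A minimal sketch of what the quoted proposal could look like, using the
threadCount attribute of Ant's <parallel> task instead of threadsPerProcessor;
the tests.threadcount property and the default of 2 are assumptions, not the
actual build:

    <!-- Sketch: fixed thread count instead of per-processor scaling.
         Developers override per machine with: ant -Dtests.threadcount=8 test -->
    <property name="tests.threadcount" value="2"/>
    <parallel threadCount="${tests.threadcount}" failonany="true">
      <!-- the per-module test batches would be nested here -->
    </parallel>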