This reply is symptomatic of the very problem noted on the Alpha list: we do not treat the people who submit input with respect.
And you misunderstand his issue. The issue is that BOINC runs inappropriate or inefficient mixes of work. This is a known, long-standing issue. His workaround is to reset debts.

I have noted a similar issue with CPDN tasks and asked for the same type of manual control because of the load CPDN tasks can place on a system: that is, to run no more than x tasks at any one time for project y. In his case he sees the issue with NFS and Lattice. The current implementation will never run efficiently or "converge", and he well knows it. Were this control added, it is very likely that he would indeed stop having to reset debts to make BOINC work as he needs it to, and I would not have to monitor my systems and put excess CPDN tasks into suspend states.

On Feb 12, 2010, at 9:09 AM, Lynn W. Taylor wrote:

> If you are a developer or tester, the goal is to find out if BOINC will
> settle to reasonable values.
>
> If you are constantly resetting debts, then you're constantly resetting
> the clock -- you're restarting the problem.
>
> If you leave it alone, and debts don't converge on something that works
> to your expectations, then you need to report that.
>
> I'm seeing work fetch issues as well, but I'm going to monitor and let
> it run its course; then my report will be meaningful.
>
> If you aren't a tester, if you're a user (and your "must reset debts"
> screams "user"), then you should go back to the official release.
>
> -- Lynn
>
> On 2/11/2010 11:57 PM, Ed A wrote:
>> Inappropriate? It makes BOINC better do what I ask of it. If that's
>> inappropriate, so be it. Real manual controls would be better. For instance,
>> my quads run better with no more than 3 instances of NFS. They run better
>> with only 1 instance of Lattice at a time. Other, less demanding projects
>> can be run on the other cores, however. I should be able to limit the number
>> of concurrent instances of any project, but there's no way to do this in
>> BOINC.
>> Resetting debts helps alleviate the problem a bit, but I agree it's a
>> poor solution, one that we've been forced into.
>>
>> Regards/Ed
>>
>> On Thu, Feb 11, 2010 at 8:01 AM, <[email protected]> wrote:
>>
>>> Overriding the debts means that the resource shares that you set are
>>> meaningless. Constantly resetting the debts is, in my opinion, a fairly
>>> inappropriate response.
>>>
>>> jm7
>>>
>>> Ed A <canoebey...@gmail.com>
>>> To: [email protected]
>>> 02/10/2010 05:07 PM
>>> Subject: Re: [boinc_dev] 6.10.32 failing to maintain sufficient work
>>>
>>> Hi John,
>>>
>>> I keep this in my cc_config.xml at all times:
>>>
>>> <zero_debts>1</zero_debts>
>>>
>>> Maybe that's why I didn't have the initial weirdness. I restart the
>>> clients or reboot every couple of days to reset them. In my experience,
>>> things run far better, at least for my use, when the debt system is
>>> disabled as much as possible. Here's a suggestion for a new
>>> command-line switch:
>>>
>>> <disable_debts>1</disable_debts>
>>>
>>> What do you think?
>>>
>>> Regards/Ed
>>>
>>> On Wed, Feb 10, 2010 at 2:12 PM, <[email protected]> wrote:
>>>
>>> It is no particular project. It does appear to be recovering - which
>>> leads me to speculate that one of the time_stats numbers was out of
>>> whack somehow. The item of note is that the machine spent the last
>>> month and a half doing an Aqua task that took both processors, and I am
>>> currently wondering if that caused the problem somehow.
>>>
>>> The shortfall was 0 in the logs when it should have been around a half
>>> day for one CPU. The other CPU is effectively taken up by a CPDN task
>>> that still has about 5 days of run time left.
>>>
>>> jm7
>>>
>>> Ed A <canoebey...@gmail.com>
>>> To: [email protected]
>>> 02/10/2010 02:00 PM
>>>
>>> I've been using v6.10.32 on 10 machines since it came out and see none
>>> of this behavior.
>>> In fact, the scheduling seems better than in previous versions,
>>> especially for GPU projects. Increasing the queue immediately causes
>>> BOINC to DL more work. I've tested up to queue sizes of 1 day on the
>>> CPU projects SIMAP and NFS and it's working as expected here. I have
>>> noticed that if I set a project (on a quad) at 100 and another at 300,
>>> it more reliably runs 1 instance of the 100-share project and 3
>>> instances of the 300-share project. This is VERY good, as some projects
>>> take up too many resources to be run on all 4 cores. Earlier BOINC
>>> versions seemed far less predictable, and it was a big problem (thus
>>> all the requests for manual controls). I would like to give the BOINC
>>> development team big kudos for instituting true backup projects in this
>>> version. Much needed, although I haven't found a project yet with the
>>> server software upgrade needed to test the feature. Looking forward to
>>> it patiently. Is it a particular project that you're having problems
>>> with?
>>>
>>> Regards/Ed
>>>
>>> On Wed, Feb 10, 2010 at 7:58 AM, <[email protected]> wrote:
>>>
>>> I have set BOINC to know that I will be disconnected for 0.7 days, yet
>>> it stops trying to fetch work after 0.2 days of work is downloaded.
>>> Some recent change to work fetch has changed the policy on how much
>>> work should be kept on a machine.
>>>
>>> If there is insufficient work for each CPU to last through the
>>> disconnected time, there is not enough work on the machine.
>>>
>>> I sent logs with my last message about this, and have not seen any
>>> message on the email list.
>>>
>>> jm7
>>>
>>> _______________________________________________
>>> boinc_dev mailing list
>>> [email protected]
>>> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>>> To unsubscribe, visit the above URL and
>>> (near bottom of page) enter your email address.
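[Editor's note: the manual control requested in this thread -- run no more than x tasks at any one time for project y -- amounts to a per-project admission check in the task scheduler. The sketch below is a hypothetical illustration of that idea, not BOINC's actual scheduler code; the function and parameter names are invented for this example.]

```python
def pick_tasks(runnable, ncpus, max_concurrent):
    """Fill up to ncpus slots, honoring an optional per-project cap.

    runnable: list of (project, task_id) pairs in scheduler priority order.
    max_concurrent: dict mapping project name -> cap; projects absent
    from the dict are uncapped. (Hypothetical names, for illustration.)
    """
    chosen = []
    running = {}  # project -> number of tasks chosen so far
    for project, task_id in runnable:
        if len(chosen) == ncpus:
            break  # all CPU slots are filled
        cap = max_concurrent.get(project)
        if cap is not None and running.get(project, 0) >= cap:
            continue  # project is at its limit; try the next task
        chosen.append((project, task_id))
        running[project] = running.get(project, 0) + 1
    return chosen
```

With Ed's example (a quad with NFS capped at 3), four queued NFS tasks and one SIMAP task yield three NFS instances plus the SIMAP task, instead of NFS taking all four cores.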
