Re: Beagle performance: end-user problem? (was Re: [opensuse] Beagle under 10.3 is really eating up my CPU)

2007-12-23 Thread Aaron Kulkis

Linda Walsh wrote:




The Saturday 2007-12-22 at 16:06 +0100, Anders Johansson wrote:
The real solution here is to find and fix the bug that causes 
beagle to

allocate so much memory. It doesn't happen on all systems.

---.
Without question, this is the best solution.



Anders Johansson wrote:
I wouldn't say probably. It shouldn't be par for the course for an 
application to not check return values from memory allocation functions


Aaron Kulkis wrote:

As I said earlier... the whole thing is poorly written.

---
And you say this based on??? 


Its performance.

It's SUPPOSED to be an unobtrusive background process,
but can cripple a high-end machine through extreme
resource-hogging behavior.

Unfriendly behavior is the very definition of poor code.


Do you have _any_ expertise
in the source?



Not at all, but that's not even the issue.
We're not talking about "there's a faster way to sort
this data" sorts of tweaks...we're talking about
code which is widely known to effectively cripple any
system it runs on, due to nothing more than the files
it's handling, and how many there are.

  (if you do, sorry, but your attitude is not

productive).  It _appears_ you know nothing about how it is
written but are basing your opinion on its behavior in certain
configurations.


I'm basing my evaluation on how it performs, and its
impact on system performance...which is the ultimate
standard for judging whether any code is
good enough or not.



This is why I made a comment about *not* using swap
on a system -- if you are using swap on any regular basis (my
threshold is using swap, *anytime*, during 'normal' day-to-day
usage), you are running applications that are too big for
your machine.  Now whether this is due to poor application memory
usage OR is due to poor-planning on the part of the system owner
depends, in large part, on promises or expectations set by the
developer or owner of the program.


I've got 2 GB of DDR2 on this machine, and 3 GB of swap,
on a Centrino Core Duo running at 800 MHz with two
internal 100 GB SATA drives, of which 140 GB is used
by my SuSE installation.

And a running beagle process makes this machine
absolutely unusable...even without a GUI.




Certainly, if I am running on a system with 128M of memory
with a 650MHz mobile CPU, and load a full suse 10.x desktop with
full features, I am asking that I be shot in the foot.  If
a release (like suse10.2, or windows 98) says it will run best with
1GB-mem + a 1GHz processor and my machine has 2GB+a 2GHz processor
and the release runs like a dog -- then I'd say it is the fault
of the release packager (they made the choice of what packages to
include 'by default').

Certainly if the *end user* chooses to run more applications
than their computer can comfortably fit in memory, how can the
application developer account for this?


Beagle will grow and grow and grow until it uses all
available swap.  It appears that the only way to satisfy
beagle's appetite is to have enough memory to
load up all of your home directory tree into it.

But I don't know of any motherboard sold for under
US $10,000 that can accommodate 60 GB of RAM.




Beagle should be scrapped and started over from the
ground up, starting with the design assumption that it
is to behave as an unobtrusive background process, not
the current one which can take over the whole system
with a "feed me" attitude as if the whole purpose for
a computer and its data to even exist is to provide
something for a beagle process to index.

-
Do you have documentation or direct knowledge
of what the design goals were?  If not, how do you know it
wasn't designed that way?


I'm saying that the design goals either were not
met, or they were utterly inappropriate.



Something the beagle developers cannot know is how
their application will be installed by release packagers.  One
example of an outstanding 'bug' (or feature depending on
interpretation) that can affect beagle performance
is how it is run by 'cron.daily'.  From my own experience,
under 9.3, the default is to run cron.daily 24 hours after it last
ran -- but if something delays it running overnight (like the
machine being off or suspended) it will run within 15 minutes
of the machine becoming active.  IT WON'T WAIT until the
'middle of the night', as you might want.  This has nothing
to do with beagle or its developers.


I'm likely to be using this computer at all hours of
the day and night... I wake up, get an idea, do something,
and then go back to sleep...



Ideally, the beagle indexing process would run once
(either at night, or immediately if needed), and then be able
to monitor the filesystems and directories for changes using
the fam (famd) package.


fam is another thing I've banished from my installations,
for similar reasons.  Maybe a good idea but it too suffers
from poor implementation.



The fam function (and as extended for directories
and/or 

Re: Beagle performance: end-user problem? (was Re: [opensuse] Beagle under 10.3 is really eating up my CPU)

2007-12-22 Thread Howard Huckabee
On Saturday 22 December 2007 07:29:04 pm Linda Walsh wrote:
  The Saturday 2007-12-22 at 16:06 +0100, Anders Johansson wrote:
  The real solution here is to find and fix the bug that causes beagle
  to allocate so much memory. It doesn't happen on all systems.

 ---.
   Without question, this is the best solution.

  Anders Johansson wrote:
  I wouldn't say probably. It shouldn't be par for the course for an
  application to not check return values from memory allocation functions

 Aaron Kulkis wrote:
  As I said earlier... the whole thing is poorly written.

 ---
   And you say this based on???  Do you have _any_ expertise
 in the source?  (if you do, sorry, but your attitude is not
 productive).  It _appears_ you know nothing about how it is
 written but are basing your opinion on its behavior in certain
 configurations.

   This is why I made a comment about *not* using swap
 on a system -- if you are using swap on any regular basis (my
 threshold is using swap, *anytime*, during 'normal' day-to-day
 usage), you are running applications that are too big for
 your machine.  Now whether this is due to poor application memory
 usage OR is due to poor-planning on the part of the system owner
 depends, in large part, on promises or expectations set by the
 developer or owner of the program.

   Certainly, if I am running on a system with 128M of memory
 with a 650MHz mobile CPU, and load a full suse 10.x desktop with
 full features, I am asking that I be shot in the foot.  If
 a release (like suse10.2, or windows 98) says it will run best with
 1GB-mem + a 1GHz processor and my machine has 2GB+a 2GHz processor
 and the release runs like a dog -- then I'd say it is the fault
 of the release packager (they made the choice of what packages to
 include 'by default').

   Certainly if the *end user* chooses to run more applications
 than their computer can comfortably fit in memory, how can the
 application developer account for this?

  Beagle should be scrapped and started over from the
  ground up, starting with the design assumption that it
  is to behave as an unobtrusive background process, not
  the current one which can take over the whole system
  with a "feed me" attitude as if the whole purpose for
  a computer and its data to even exist is to provide
  something for a beagle process to index.

 -
   Do you have documentation or direct knowledge
 of what the design goals were?  If not, how do you know it
 wasn't designed that way?

   Something the beagle developers cannot know is how
 their application will be installed by release packagers.  One
 example of an outstanding 'bug' (or feature depending on
 interpretation) that can affect beagle performance
 is how it is run by 'cron.daily'.  From my own experience,
 under 9.3, the default is to run cron.daily 24 hours after it last
 ran -- but if something delays it running overnight (like the
 machine being off or suspended) it will run within 15 minutes
 of the machine becoming active.  IT WON'T WAIT until the
 'middle of the night', as you might want.  This has nothing
 to do with beagle or its developers.

   Ideally, the beagle indexing process would run once
 (either at night, or immediately if needed), and then be able
 to monitor the filesystems and directories for changes using
 the fam (famd) package.

   The fam function (and as extended for directories
 and/or devices) monitors when any change is done to its monitored
 file-system objects, then calls listening programs to process
 the new or changed objects as they are changed on disk.  Ideally,
 you would then need no 'batch' updating, but such would be done
 in bits throughout the day as monitored files are changed.

   That being said, if a system doesn't have the OS support
 or resources needed to run 'famd' without degradation, the
 system will still be painful to use (shorthand: be unusable).

   Be careful about global generalization about a product
 being bad, though, just because it doesn't run well in
 a particular situation.

 Linda

thanks, i think that about covers a broad spectrum of possible perceived 
problems...
   Howard
-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Beagle performance: end-user problem? (was Re: [opensuse] Beagle under 10.3 is really eating up my CPU)

2007-12-22 Thread Carlos E. R.




The Saturday 2007-12-22 at 16:29 -0800, Linda Walsh wrote:

[...]


This is why I made a comment about *not* using swap
on a system -- if you are using swap on any regular basis (my
threshhold is using swap, *anytime*, during 'normal' day-to-day
usage), you are running applications that are too big for for
your machine.



Tsk, tsk... I have a machine with 32 MiB of memory and about 1 GiB swap, 
and there was an application that filled more than half of it; and it ran 
:-P


   The system is 7.3 and the app was yast (you, actually). It had a big
   memory leak (a known bug). Without that much swap the update would simply
   crash, as no more memory was available on the machine.
   Still, it worked.

A statement such as "any swap usage is bad" is not correct for 
everybody and every circumstance. It will not be as fast as having more 
ram, but... it works. Swap was designed for such a use. If the designers 
thought that swap was "a bad thing"(R), they would not have designed kernel 
2.6 with swap enabled. They would have removed the swapping code and told us 
to buy more ram instead. Hardware makers would be very happy.


-- 
Cheers,

   Carlos E. R.
