Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-16 Thread Olaf Selke
Scott Bennett wrote:
>>
>> anonymizer2:~# hugeadm --pool-list
>>      Size  Minimum  Current  Maximum  Default
>>   2097152      100      319     1000        *
>>
>>  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
>> 21716 debian-t  20   0 2075m 1.1g  25m R 95.2 29.4   2020:29 0 tor
>   
>  That CPU usage looks pretty high to me, but I still don't know what
> you were accustomed to seeing tor use before, so is it higher so far?
> Lower?

cpu usage is always near 100% as long as the node is being swamped by new
circuits. Cpu load seems to depend on the number of established tcp
connections rather than on the provided bandwidth. With openbsd-malloc the
resident process size usually stayed far below 1 GB even after one or two
weeks. The process now with glibc malloc is still eating up memory, and I'm
sure it will crash within the next few days. Total memory is 4 GB:

anonymizer2:~# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      344     1000        *

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
21716 debian-t  20   0 2702m 1.6g  26m R 88.5 41.8   3383:56 1 tor

> Okay.  But, again, I'd still like to know whether the hugepages are
> helping tor's CPU load or hurting it, and I would need some historical
> clues, even if just estimates, to know the answer to that.

in any case it doesn't hurt with respect to cpu load. I lack the
programming skills to make hugepages work with the OpenBSD_malloc_Linux
that comes with the tor sources. So hugepages might reduce load by roughly
20%, with the side effect of getting a memory leak from glibc malloc.

@tor developers: Is there any good news for better scalability regarding
multiple cores?

regards Olaf


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-15 Thread Scott Bennett
 On Thu, 15 Apr 2010 14:42:26 +0200 Olaf Selke wrote:
>Scott Bennett wrote:
>>
>>  Olaf, if you're awake and on-line at/near this hour:-), how about
>> an update, now that blutmagie has been running long enough to complete
>> its climb to FL510 and accelerate to its cruising speed?  Also, how about
>> some numbers for how it ran without libhugetlbfs, even if only approximate,
>> for comparison?  (The suspense is really getting to me.:^)
>
>tor process is still growing:
>
>anonymizer2:~# hugeadm --pool-list
>      Size  Minimum  Current  Maximum  Default
>   2097152      100      319     1000        *
>
>  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
>21716 debian-t  20   0 2075m 1.1g  25m R 95.2 29.4   2020:29 0 tor
  
 That CPU usage looks pretty high to me, but I still don't know what
you were accustomed to seeing tor use before, so is it higher so far?
Lower?
>
>It's hard to tell after only one day how throughput is affected. Please give
>me some more days. In the meantime everybody can do their own assessment

Okay.  But, again, I'd still like to know whether the hugepages are
helping tor's CPU load or hurting it, and I would need some historical
clues, even if just estimates, to know the answer to that.

>from mrtg data http://torstatus.blutmagie.de/public_mrtg

 It's in the queue, but I ran into a firefox bug again a few minutes
ago and had to kill it.  Now your statistics page is competing with the
reloading of well over 200 open tabs. B^}  Once it has loaded, though,
I'll refresh the page from time to time to check on things.  How often
are the graphs updated?  (Yes, I see the 5-minute refresh timeout on the
page, but is the timeout related to graph regeneration?)
>
>There are additional non-public graphs for environmental data monitoring
>like temperatures, fan speeds, and other super secret stuff which gives
>me a hint if someone is messing with my hardware.
>
 Ooh!  A cloak-and-dagger challenge for OR-TALK subscribers?  Cool!
:)  Not that I'm into that sort of thing, you understand...ahem.


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-15 Thread Olaf Selke
Scott Bennett wrote:
>
>  Olaf, if you're awake and on-line at/near this hour:-), how about
> an update, now that blutmagie has been running long enough to complete
> its climb to FL510 and accelerate to its cruising speed?  Also, how about
> some numbers for how it ran without libhugetlbfs, even if only approximate,
> for comparison?  (The suspense is really getting to me.:^)

tor process is still growing:

anonymizer2:~# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      319     1000        *

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
21716 debian-t  20   0 2075m 1.1g  25m R 95.2 29.4   2020:29 0 tor

It's hard to tell after only one day how throughput is affected. Please give
me some more days. In the meantime everybody can do their own assessment
from mrtg data http://torstatus.blutmagie.de/public_mrtg

There are additional non-public graphs for environmental data monitoring
like temperatures, fan speeds, and other super secret stuff which gives
me a hint if someone is messing with my hardware.

Olaf


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-15 Thread Scott Bennett
 On Wed, 14 Apr 2010 17:23:35 +0200 Olaf Selke wrote:
>Scott Bennett wrote:
>
>>>
>>> It appears memory consumption with the wrapped Linux malloc() is still
>>> larger than with the openbsd-malloc I used before. Hugepages don't
>>> appear to work with openbsd-malloc.
>>>
>>  Okay, that looks like a problem, and it probably ought to be passed
>> along to the LINUX developers to look into.
>
>yes, but I don't suppose this problem is related to the hugepages
>wrapper. Linking tor against the standard glibc malloc() never worked for
>me in the past. I always had the problem that memory leaked like hell and
>after a few days the tor process crashed with an out-of-memory error.
>Running the configure script with the --enable-openbsd-malloc flag solved
>this issue, but apparently it doesn't work with libhugetlbfs.so.
>
>After 17 hours of operation resident process size is 1 gig.
>
>  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
>21716 debian-t  20   0 1943m 1.0g  24m R 79.4 26.9 927:51.27 1 tor
>
>On the other hand cpu load really seems to be reduced compared with
>standard page size.
>
 Olaf, if you're awake and on-line at/near this hour:-), how about
an update, now that blutmagie has been running long enough to complete
its climb to FL510 and accelerate to its cruising speed?  Also, how about
some numbers for how it ran without libhugetlbfs, even if only approximate,
for comparison?  (The suspense is really getting to me.:^)
 Thanks!


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Scott Bennett
Hi Arjan,
 On Wed, 14 Apr 2010 22:03:33 +0200 Arjan wrote:
>Scott Bennett wrote:
>>  On Tue, 13 Apr 2010 19:10:37 +0200 Arjan wrote:
>>> Scott Bennett wrote:
>>>>  BTW, I know that there are *lots* of tor relays running on LINUX
>>>> systems whose operators are subscribed to this list.  Don't leave Olaf and
>>>> me here swinging in the breeze.  Please jump in with your LINUX expertise
>>>> and straighten us out.
>>> I'm not an expert, but I managed to perform some google searches.
>>>
>>> http://libhugetlbfs.ozlabs.org/
>>> From that website:
>>> libhugetlbfs is a library which provides easy access to huge pages of
>>> memory. It is a wrapper for the hugetlbfs file system. Applications can
>>> use huge pages to fulfill malloc() requests without being recompiled by
>>> using LD_PRELOAD.
>> 
>>  [Aside to Olaf:  oh.  So forcing the use of OpenBSD's malloc() might
>> prevent the libhugetlbfs stuff from ever knowing that it was supposed to
>> do something. :-(  I wonder how hard it would be to fix the malloc() in
>> libhugetlbfs, which is most likely derived from the buggy LINUX version.
>> Does libhugetlbfs come as source code?  Or is the use of LD_PRELOAD simply
>> causing LINUX's libc to appear ahead of the OpenBSD version, in which case
>> forcing reordering of the libraries might work?  --SB]
>
>If Olaf's test shows that CPU usage is reduced and throughput stays the
>same or improves, modifying Tor to support linux huge pages might be an
>option. Part 2 of this article contains some information about the
>available interfaces:
>   http://lwn.net/Articles/374424/

 Thanks.  I'll take a look at it, but I still haven't had the nap
I was going to take. :-(

>Getting the wrapper to work with (or like) the OpenBSD version will
>probably be easier.
>
 One of the reasons I'm still awake is that I was browsing through
the OpenBSD version of malloc() that is shipped with tor and libhugetlbfs's
morecore.c module.  I'm still not sure quite what is going on with how
the stuff gets linked together, so I don't know which avenue might be
the easiest approach, but modifying tor is probably the worst option.
If the LINUX side of things gets fixed, then the patches ought to be
contributed to the LINUX effort.  However, it may be easier to modify
the OpenBSD malloc() to call something in morecore.c to get memory
allocated by the kernel, falling back to whatever it currently does if
the morecore.c stuff returns an error because it can't allocate the
hugepages necessary to satisfy a request.  Of course, someone would still
need to find out how to keep the LINUX malloc() from being substituted
for the OpenBSD malloc() at runtime when the libhugetlbfs wrapper is
in use.  I doubt I can contribute much to the effort, given that I don't
have a LINUX system available to me.
>
>>> Someone is working on transparent hugepage support:
>>> http://thread.gmane.org/gmane.linux.kernel.mm/40182
>> 
>>  I've now had time to get through that entire thread.  I found it
>> kind of frustrating reading at times.  It seemed to me that in one of
>> the very first few messages, the author described how they had long
>> since shot themselves in the feet (i.e., by rejecting the algorithm of
>> Navarro et al. (2002), which had already been demonstrated to work on an
>> early FreeBSD 5 system modified by Navarro's team) on emotional grounds (i.e.,
>> "we didn't feel its [Navarro's method's] heuristics were right").
>
>
>Thanks for your analysis of the thread and the reference to the Navarro
>paper.
>
>I've located the paper and will read it when time permits:
>http://www.usenix.org/events/osdi02/tech/full_papers/navarro/
>
 Oh.  Sorry about that.  I had intended to include that at the end
of what I wrote, but apparently I spaced it.  I didn't mean to make anyone
have to search for it.  Thanks for correcting the deficiency in my message.
 I think you'll find their design is quite elegant and well thought out.
It apparently required adding fewer than 3600 lines of code to the kernel
to do it and uses a trivial amount of kernel CPU time in action.  It's
quite transparent and adaptive to conditions, but there are probably some
conditions under which it might give less benefit than the LINUX hugepages
way.  However, it continually tries to promote processes that allocate
enough space in a segment to fill the next larger page size.  Its
reservation system greatly increases the chances that promotions will
occur.  It's not a perfect solution to the problem, but I suspect there
aren't any perfect solutions for it on the software side of things.  What
is really needed is for the chip manufacturers to correct the matter by
increasing their TLB sizes rather dramatically.


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**

Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Arjan
Scott Bennett wrote:
>  On Tue, 13 Apr 2010 19:10:37 +0200 Arjan wrote:
>> Scott Bennett wrote:
>>>  BTW, I know that there are *lots* of tor relays running on LINUX
>>> systems whose operators are subscribed to this list.  Don't leave Olaf and
>>> me here swinging in the breeze.  Please jump in with your LINUX expertise
>>> and straighten us out.
>> I'm not an expert, but I managed to perform some google searches.
>>
>> http://libhugetlbfs.ozlabs.org/
>> From that website:
>> libhugetlbfs is a library which provides easy access to huge pages of
>> memory. It is a wrapper for the hugetlbfs file system. Applications can
>> use huge pages to fulfill malloc() requests without being recompiled by
>> using LD_PRELOAD.
> 
>  [Aside to Olaf:  oh.  So forcing the use of OpenBSD's malloc() might
> prevent the libhugetlbfs stuff from ever knowing that it was supposed to
> do something. :-(  I wonder how hard it would be to fix the malloc() in
> libhugetlbfs, which is most likely derived from the buggy LINUX version.
> Does libhugetlbfs come as source code?  Or is the use of LD_PRELOAD simply
> causing LINUX's libc to appear ahead of the OpenBSD version, in which case
> forcing reordering of the libraries might work?  --SB]

If Olaf's test shows that CPU usage is reduced and throughput stays the
same or improves, modifying Tor to support linux huge pages might be an
option. Part 2 of this article contains some information about the
available interfaces:
http://lwn.net/Articles/374424/
Getting the wrapper to work with (or like) the OpenBSD version will
probably be easier.


>> Someone is working on transparent hugepage support:
>> http://thread.gmane.org/gmane.linux.kernel.mm/40182
> 
>  I've now had time to get through that entire thread.  I found it
> kind of frustrating reading at times.  It seemed to me that in one of
> the very first few messages, the author described how they had long
> since shot themselves in the feet (i.e., by rejecting the algorithm of
> Navarro et al. (2002), which had already been demonstrated to work on an early
> FreeBSD 5 system modified by Navarro's team) on emotional grounds (i.e.,
> "we didn't feel its [Navarro's method's] heuristics were right").


Thanks for your analysis of the thread and the reference to the Navarro
paper.

I've located the paper and will read it when time permits:
http://www.usenix.org/events/osdi02/tech/full_papers/navarro/



Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Scott Bennett
 On Tue, 13 Apr 2010 19:10:37 +0200 Arjan wrote:
>Scott Bennett wrote:
>>  BTW, I know that there are *lots* of tor relays running on LINUX
>> systems whose operators are subscribed to this list.  Don't leave Olaf and
>> me here swinging in the breeze.  Please jump in with your LINUX expertise
>> and straighten us out.
>
>I'm not an expert, but I managed to perform some google searches.
>
>http://libhugetlbfs.ozlabs.org/
>From that website:
>libhugetlbfs is a library which provides easy access to huge pages of
>memory. It is a wrapper for the hugetlbfs file system. Applications can
>use huge pages to fulfill malloc() requests without being recompiled by
>using LD_PRELOAD.

 [Aside to Olaf:  oh.  So forcing the use of OpenBSD's malloc() might
prevent the libhugetlbfs stuff from ever knowing that it was supposed to
do something. :-(  I wonder how hard it would be to fix the malloc() in
libhugetlbfs, which is most likely derived from the buggy LINUX version.
Does libhugetlbfs come as source code?  Or is the use of LD_PRELOAD simply
causing LINUX's libc to appear ahead of the OpenBSD version, in which case
forcing reordering of the libraries might work?  --SB]
>
>Someone is working on transparent hugepage support:
>http://thread.gmane.org/gmane.linux.kernel.mm/40182

 I've now had time to get through that entire thread.  I found it
kind of frustrating reading at times.  It seemed to me that in one of
the very first few messages, the author described how they had long
since shot themselves in the feet (i.e., by rejecting the algorithm of
Navarro et al. (2002), which had already been demonstrated to work on an early
FreeBSD 5 system modified by Navarro's team) on emotional grounds (i.e.,
"we didn't feel its [Navarro's method's] heuristics were right").  They
then spent the rest of the thread squabbling over the goals and
individual techniques of Navarro et al. that they had reinvented, while
not admitting to themselves that that was what they had done, and over
the obstacles they were running into because of the parts that they had 
*not* adopted (yet, at least).  At times, it appeared that the author of
the fairly large patch that implemented the improvements to the hugepage
system was arguing directly from Navarro et al. (2002) with one of the
other participants.  Shades of Micro$lop's methods and not-invented-here
attitude.  What a bummer to see LINUX developers thinking in such denial!
So if the guy who had written that early kernel patch for LINUX (the thread
was a year and a half ago) has persisted in his implementation, he may have
the bugs out of it by now, but in the long run his design (or lack thereof)
should still yield significant improvement for some large processes on
LINUX, though the way it is done won't be at all pretty.
 Unlike the method of Navarro et al., which that team implemented not
on an x86-type of system (the only type so far supported for superpages in
FreeBSD 7; I'm not sure about 8.0) but on an alpha machine, using the four
page sizes offered by that hardware, the method implemented by the OP of
the thread used a "hugepage" size (2 MB) that is not supported by the
hardware, except for pages in instruction (text) segments.  I didn't
see anywhere in the thread an explanation of how their software pages are
made to work with the hardware, but I would imagine they must combine two
software hugepages to make a single 4 MB page as far as the address
translation circuitry is concerned.  It left me wondering much of the time
which processor architecture they were working with, though it eventually
became clear that they were indeed talking about x86 processors.  The
others in the thread also voiced opinions that the method would prove to
be not easily portable to other hardware architectures, unlike the Navarro
method.
 Navarro et al. (2002) found that their array of test applications did
not all benefit at all superpage sizes.  Which superpage size crossed the
threshold into reduced TLB thrashing varied from application to application.
Some benefited after the first promotion from 8 KB base pages to 64 KB 
superpages.  Others benefited after the further promotion to 512 KB
superpages.  Still others' performance characteristics did not improve much
until after the third promotion to 4 MB superpages.  Which size causes the
big improvement for an application depends almost entirely upon the memory
access patterns of that application.  It remains to be seen whether an
application that doesn't speed up in FreeBSD tests until after the application
has been promoted to the 4 MB superpages will speed up in LINUX's 2 MB
hugepages.
 I'm still tired.  I think I might take a short nap again, so I might
not post replies to anyone's followups on this for a few hours.  (Yawn...)


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**

Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Scott Bennett
 On Wed, 14 Apr 2010 17:23:35 +0200 Olaf Selke wrote:
>Scott Bennett wrote:
>
>>>
>>> It appears memory consumption with the wrapped Linux malloc() is still
>>> larger than with the openbsd-malloc I used before. Hugepages don't
>>> appear to work with openbsd-malloc.
>>>
>>  Okay, that looks like a problem, and it probably ought to be passed
>> along to the LINUX developers to look into.
>
>yes, but I don't suppose this problem is related to the hugepages
>wrapper. Linking tor against the standard glibc malloc() never worked for
>me in the past. I always had the problem that memory leaked like hell and
>after a few days the tor process crashed with an out-of-memory error.
>Running the configure script with the --enable-openbsd-malloc flag solved
>this issue, but apparently it doesn't work with libhugetlbfs.so.

 Is tor statically linked?  If not, I wonder if it's a library-ordering
problem, where a version of malloc() in libhugetlbfs or in a library called
by a routine in libhugetlbfs gets linked in ahead of the OpenBSD version.
I don't know how much flexibility that LD_PRELOAD method gives you, but
perhaps you could try the -rpath trick we FreeBSD folks had to use to force 
use of openssl from ports rather than the broken one in the base system.
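     To make the ordering question concrete: LD_PRELOAD takes a
space-separated list, and the leftmost object wins symbol resolution, so
the order decides whose malloc() the process actually sees.  This is a
sketch only; the OpenBSD malloc shipped with tor is normally compiled in
statically, so it assumes the allocator were built as a shared object at a
made-up path:

anonymizer2:~# LD_PRELOAD="/usr/local/lib/openbsd_malloc.so \
    /usr/local/lib64/libhugetlbfs.so" tor -f /etc/tor/torrc

Reversing the order would instead give libhugetlbfs's hooks precedence.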
>
>After 17 hours of operation resident process size is 1 gig.

 How much was it typically using before?
>
>  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
>21716 debian-t  20   0 1943m 1.0g  24m R 79.4 26.9 927:51.27 1 tor
>
>On the other hand cpu load really seems to be reduced compared with
>standard page size.
>
 Holy Crapola!  79.4% is a *reduction*?!??  8-Q  What did it use
before?  100%?  1 GB is 512 hugepages.  I wonder if getting the malloc()
issue resolved and lowering the working set size would reduce the CPU
time still further, given that each TLB only holds 64 entries.  (I fail
to see yet why the LINUX developers picked a hugepage size that is not
supported by hardware, at least not for the data and stack segments.)
 A long time back, we tossed an idea around briefly to the effect
that you might get more balanced utilization of your machine by running
two copies of tor in parallel with their throughput capacities limited
to something more than half apiece of the current, single instance's
capacity.  That would bring the other core into play more of the time.
A configuration like that would still place both instances very high
in the list of relays ordered by throughput, but the reduction in the
advertised capacity of each would help to spread the requests to both
better.  They would still be limited by the TCP/IP's design limit on
port numbers for the system as a whole, which you would likely never
see because the kernel would probably just refuse connections when all
port numbers were already in use, but would probably allow you to
squeeze more total tor throughput through the machine than you get at
present, while still leaving a moderate amount of idle time on each
core that would then be available for other processing.  Have you given
any more thought to this idea over the ensuing months?
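     For concreteness, the two-instance idea would amount to a pair of
torrc files along these lines; every nickname, path, port, and rate cap
below is made up purely for illustration:

# /etc/tor/torrc.1 -- first instance
DataDirectory /var/lib/tor1
PidFile /var/run/tor/tor1.pid
ORPort 9001
DirPort 9030
Nickname blutmagie1
BandwidthRate 10 MBytes
BandwidthBurst 12 MBytes

# /etc/tor/torrc.2 -- second instance, with its own state and ports
DataDirectory /var/lib/tor2
PidFile /var/run/tor/tor2.pid
ORPort 9002
DirPort 9031
Nickname blutmagie2
BandwidthRate 10 MBytes
BandwidthBurst 12 MBytes

Each instance would then be started with its own "tor -f /etc/tor/torrc.N".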


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Olaf Selke
Scott Bennett wrote:

>>
>> It appears memory consumption with the wrapped Linux malloc() is still
>> larger than with the openbsd-malloc I used before. Hugepages don't
>> appear to work with openbsd-malloc.
>>
>  Okay, that looks like a problem, and it probably ought to be passed
> along to the LINUX developers to look into.

yes, but I don't suppose this problem is related to the hugepages
wrapper. Linking tor against the standard glibc malloc() never worked for
me in the past. I always had the problem that memory leaked like hell and
after a few days the tor process crashed with an out-of-memory error.
Running the configure script with the --enable-openbsd-malloc flag solved
this issue, but apparently it doesn't work with libhugetlbfs.so.

After 17 hours of operation resident process size is 1 gig.

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
21716 debian-t  20   0 1943m 1.0g  24m R 79.4 26.9 927:51.27 1 tor

On the other hand cpu load really seems to be reduced compared with
standard page size.

regards Olaf


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Scott Bennett
 On Wed, 14 Apr 2010 15:00:52 +0200 Olaf Selke wrote:
>Christian Kujau wrote:
>> On Tue, 13 Apr 2010 at 05:58, Scott Bennett wrote:
>>> and straighten us out.  Remember that Olaf runs the highest-load-bearing
>>> tor node in our whole network, and there are at least two or four dozen
>>> others that should be considered heavyweight relays that are also on LINUX
>>> systems.
>> 
>> ...and some of them are running on old notebooks and the tor process is 
>> only a few megabytes in size :-|
>
>At the end of the day all tor traffic has to pass through the exit nodes.
>About 50% of all traffic leaves the tor network through the top 15 exit
>nodes alone. If they can't cope with their load, all the nifty tor ports
>for smartphones, dsl routers, or whatever else acting as entry or middle
>man will be in vain.

 Exactly.
>
>> However, if it turns out that using hugepages in Linux would help larger 
>> Tor installations (and "superpages" can be recommended for *BSD systems[0]
>> as well), maybe this can be documented somewhere under doc/ or in the 
>> wiki. But let's see how Olaf's experiment turns out.
>
>process size is still growing:
>
>anonymizer2:~# hugeadm --pool-list
>      Size  Minimum  Current  Maximum  Default
>   2097152      100      313     1000        *
>
>It appears memory consumption with the wrapped Linux malloc() is still
>larger than with the openbsd-malloc I used before. Hugepages don't
>appear to work with openbsd-malloc.
>
 Okay, that looks like a problem, and it probably ought to be passed
along to the LINUX developers to look into.
 But the important question is how is tor's CPU usage looking now with
the hugepages as compared to before?  It was the CPU usage that you said
was the severe problem before.


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Scott Bennett
 On Tue, 13 Apr 2010 18:18:02 -0700 (PDT) Christian Kujau wrote:
>On Tue, 13 Apr 2010 at 05:58, Scott Bennett wrote:
>> and straighten us out.  Remember that Olaf runs the highest-load-bearing
>> tor node in our whole network, and there are at least two or four dozen
>> others that should be considered heavyweight relays that are also on LINUX
>> systems.
>
>...and some of them are running on old notebooks and the tor process is 
>only a few megabytes in size :-|

 If tor is only using, say, 25 MB or so, then tor's CPU load is probably
low anyway.  Nevertheless, any other process on a small x86-type of LINUX
system that has a working set greater than 256 KB of instruction pages and/or
256 KB of data+stack pages (i.e., 64 TLB entries times 4 KB base pages) would
benefit from using enough hugepages to cover its needs.
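     For scale: sizing the pool to the working set is a hugeadm job, and
the numbers below only illustrate the syntax, mirroring the 100-minimum,
1000-maximum pool Olaf posted earlier.  A 1 GB resident set at 2 MB per
page needs at least 512 pages.

anonymizer2:~# hugeadm --pool-pages-min 2MB:100
anonymizer2:~# hugeadm --pool-pages-max 2MB:1000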
>
>However, if it turns out that using hugepages in Linux would help larger 
>Tor installations (and "superpages" can be recommended for *BSD systems[0]
>as well), maybe this can be documented somewhere under doc/ or in the 
>wiki. But let's see how Olaf's experiment turns out.
>
>Christian.
>
>[0] http://www.freebsd.org/releases/7.2R/relnotes-detailed.html
>This is disabled by default and can be enabled by setting a loader 
>tunable vm.pmap.pg_ps_enabled to 1.

 A couple of caveats regarding the automatic version available in
FreeBSD 7.2 and later releases are in order here.  To the best of my
knowledge, this feature is not yet available :-( in any of the other BSDs,
so tor relay operators on NetBSD, OpenBSD, DragonflyBSD, MirBSD, etc.
can disregard all of this stuff for the time being.  Another matter is
that FreeBSD systems on AMD processors of designs older than the K10 types
may actually get poorer performance with the feature enabled.  That's
because on those processors the number of entries in the TLBs is drastically
reduced in the 4 MB pages mode.  So on pre-K10 AMD processors the official
recommendation that I read was to try it if you have a large process that
is bogging down, and just see what happens.  If it helps, then that's great,
but be prepared for the strong possibility that it might just make matters
worse.
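     Per the release notes Christian cited, enabling the feature on
FreeBSD 7.2 and later is a single loader tunable, applied at the next
boot:

# /boot/loader.conf
vm.pmap.pg_ps_enabled=1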


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-14 Thread Olaf Selke
Christian Kujau wrote:
> On Tue, 13 Apr 2010 at 05:58, Scott Bennett wrote:
>> and straighten us out.  Remember that Olaf runs the highest-load-bearing
>> tor node in our whole network, and there are at least two or four dozen
>> others that should be considered heavyweight relays that are also on LINUX
>> systems.
> 
> ...and some of them are running on old notebooks and the tor process is 
> only a few megabytes in size :-|

At the end of the day all tor traffic has to pass through the exit nodes.
About 50% of all traffic leaves the tor network through the top 15 exit
nodes alone. If they can't cope with their load, all the nifty tor ports
for smartphones, dsl routers, or whatever else acting as entry or middle
man will be in vain.

> However, if it turns out that using hugepages in Linux would help larger 
> Tor installations (and "superpages" can be recommended for *BSD systems[0]
> as well), maybe this can be documented somewhere under doc/ or in the 
> wiki. But let's see how Olaf's experiment turns out.

process size is still growing:

anonymizer2:~# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      313     1000        *

It appears memory consumption with the wrapped Linux malloc() is still
larger than with the openbsd-malloc I used before. Hugepages don't
appear to work with openbsd-malloc.

Olaf


Re: [or-talk] Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Christian Kujau
On Tue, 13 Apr 2010 at 05:58, Scott Bennett wrote:
> and straighten us out.  Remember that Olaf runs the highest-load-bearing
> tor node in our whole network, and there are at least two or four dozen
> others that should be considered heavyweight relays that are also on LINUX
> systems.

...and some of them are running on old notebooks and the tor process is 
only a few megabytes in size :-|

However, if it turns out that using hugepages in Linux would help larger 
Tor installations (and "superpages" can be recommended for *BSD systems[0]
as well), maybe this can be documented somewhere under doc/ or in the 
wiki. But let's see how Olaf's experiment turns out.

Christian.

[0] http://www.freebsd.org/releases/7.2R/relnotes-detailed.html
This is disabled by default and can be enabled by setting a loader 
tunable vm.pmap.pg_ps_enabled to 1.
-- 
BOFH excuse #98:

The vendor put the bug there.


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Olaf Selke
Arjan wrote:
> 
> http://libhugetlbfs.ozlabs.org/
> From that website:
> libhugetlbfs is a library which provides easy access to huge pages of
> memory. It is a wrapper for the hugetlbfs file system. Applications can
> use huge pages to fulfill malloc() requests without being recompiled by
> using LD_PRELOAD.

ok, just started tor with this wrapper. Looks like it's working as expected:

anonymizer2:~/tmp# lsof -np `cat /var/run/tor/tor.pid` | grep libhugetlbfs.so
tor 21716 debian-tor  mem   REG   8,1   145282  5390654 /usr/local/lib64/libhugetlbfs.so

anonymizer2:~/tmp# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      107     1000        *

anonymizer2:~/tmp# cat /proc/meminfo | grep -i hugepage
HugePages_Total:     107
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        7
Hugepagesize:       2048 kB
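For anyone wanting to reproduce this, the wrapper's malloc() path is
enabled through the environment, roughly like so; the torrc path is only
an example, and in practice the variables would go into the init script:

anonymizer2:~# HUGETLB_MORECORE=yes \
    LD_PRELOAD=/usr/local/lib64/libhugetlbfs.so \
    tor -f /etc/tor/torrc

HUGETLB_MORECORE=yes is what makes libhugetlbfs back malloc() with huge
pages; the LD_PRELOAD alone only covers explicit hugetlbfs mappings.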

I'll keep you posted on how it changes performance.
Will go to sleep now, Olaf


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Scott Bennett
 On Tue, 13 Apr 2010 19:10:37 +0200 Arjan wrote:
>Scott Bennett wrote:
>>  BTW, I know that there are *lots* of tor relays running on LINUX
>> systems whose operators are subscribed to this list.  Don't leave Olaf and
>> me here swinging in the breeze.  Please jump in with your LINUX expertise
>> and straighten us out.
>
>I'm not an expert, but I managed to perform some google searches.
>
>http://libhugetlbfs.ozlabs.org/
>From that website:
>libhugetlbfs is a library which provides easy access to huge pages of
>memory. It is a wrapper for the hugetlbfs file system. Applications can
>use huge pages to fulfill malloc() requests without being recompiled by
>using LD_PRELOAD.

 That does look promising, then.  Perhaps Olaf and others can use that
method for now.
>
>Someone is working on transparent hugepage support:
>http://thread.gmane.org/gmane.linux.kernel.mm/40182

 Thanks much for these URLs, Arjan.  I've started going through this
thread, but it's a horrendous lot to digest and full of LINUXisms that
I know nothing about.  I have some errands to run, and then I really *must*
get some sleep.  Maybe late tonight I'll continue reading.
 In the meantime, perhaps some adventurous relay operator using LINUX
could begin experimenting with the libhugetlbfs-in-LD_PRELOAD method to
see whether tor functions okay with it and then report the results back
to the list.  I'll be offline for at least 10 - 12 hours.  Best of luck!


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Arjan
Scott Bennett wrote:
>  BTW, I know that there are *lots* of tor relays running on LINUX
> systems whose operators are subscribed to this list.  Don't leave Olaf and
> me here swinging in the breeze.  Please jump in with your LINUX expertise
> and straighten us out.

I'm not an expert, but I managed to perform some google searches.

http://libhugetlbfs.ozlabs.org/
From that website:
libhugetlbfs is a library which provides easy access to huge pages of
memory. It is a wrapper for the hugetlbfs file system. Applications can
use huge pages to fulfill malloc() requests without being recompiled by
using LD_PRELOAD.

Someone is working on transparent hugepage support:
http://thread.gmane.org/gmane.linux.kernel.mm/40182


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Olaf Selke
Scott Bennett wrote:
> 
>  BTW, I know that there are *lots* of tor relays running on LINUX
> systems whose operators are subscribed to this list.  Don't leave Olaf and
> me here swinging in the breeze.  Please jump in with your LINUX expertise
> and straighten us out.

in case someone is interested in the blutmagie exit's low-level stats:
http://torstatus.blutmagie.de/public_mrtg

regards Olaf


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Scott Bennett
 On Tue, 13 Apr 2010 18:15:18 +0200 Olaf Selke wrote:
>Scott Bennett wrote:
>
>>  Now that you've had tor running for a while, what does a
>> "cat /proc/meminfo | grep -i hugepage" show you?  Also, 126 such pages
>> equal 256 MB of memory.  Is that really enough to hold your entire tor
>> process when it's going full tilt?  I thought I had seen you post items
>> here in the past that said it was taking well over 1 GB and approaching
>> 2 GB.
>
>the tor process crashed with an out of memory error ;-)
>Apr 13 11:06:39.419 [err] Out of memory on malloc(). Dying.

 Hmm.  Looks like you need to raise its stack segment and/or data segment
size limit(s).
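     On LINUX those limits would be raised in the shell or init script
that starts tor, before the daemon is exec'd; a sketch with made-up
values:

ulimit -d unlimited   # max data segment size
ulimit -s 16384       # max stack size, in kB
ulimit -v unlimited   # max total virtual memory

or via equivalent entries for the debian-tor user in
/etc/security/limits.conf.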
>
>After restarting the tor process HugePages_Total and HugePages_Free
>still had a value of 126, so I assume tor didn't use them. Eventually I
>disabled them.
>
 Well, the first number shouldn't change.  If tor had already quit, then
the HugePages_Free value, even if some had been allocated/reserved, should
have reverted to the HugePages_Total value anyway, so what you saw there
should really be no surprise.
 Have you found anything yet about huge pages in the LINUX man pages or
other documentation?  It seems to me that the documentation kind of has to
cover the use of huge pages *somewhere*.  Does LINUX have anything like
apropos(1) for finding things by keystring in the man page collection?
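     It does: man-db ships apropos(1) on LINUX as well, so something like

anonymizer2:~# apropos hugepage
anonymizer2:~# man 5 proc    # covers the /proc/sys/vm tunables

would be a starting point, along with Documentation/vm/hugetlbpage.txt in
the kernel source tree, assuming the kernel documentation is installed.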


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Olaf Selke
Scott Bennett wrote:

>  Now that you've had tor running for a while, what does a
> "cat /proc/meminfo | grep -i hugepage" show you?  Also, 126 such pages
> equal 256 MB of memory.  Is that really enough to hold your entire tor
> process when it's going full tilt?  I thought I had seen you post items
> here in the past that said it was taking well over 1 GB and approaching
> 2 GB.

the tor process crashed with an out of memory error ;-)
Apr 13 11:06:39.419 [err] Out of memory on malloc(). Dying.

After restarting the tor process HugePages_Total and HugePages_Free
still had a value of 126, so I assume tor didn't use them. Eventually I
disabled them.

cheers Olaf


Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Scott Bennett
 On Tue, 13 Apr 2010 05:16:33 -0500 (CDT) I wrote:
> On Tue, 13 Apr 2010 11:04:36 +0200 Olaf Selke wrote:
>>Scott Bennett wrote:
>>
>>>  Either I forgot (probable) or you didn't mention before (less probable)
>>> that you had moved it to a newer machine.  Whatever you're running it on,
>>> superpages or LINUX's "huge" pages ought to speed tor up considerably by
>>> drastically reducing TLB misses.  (I wasn't suggesting that you revert to
>>> older hardware.  I was thinking that you were still running tor on the Xeon-
>>> based machine.)
>>
>>I just set up hugepages (1024 pages of 2 MB each) according to this hint:
>>http://www.pythian.com/news/1326/performance-tuning-hugepages-in-linux/
>
> Very interesting article.  Thanks for the URL.  Of course, not being
>a LINUX user, I have no idea what the acronyms for various LINUX kernel
>features mean, and I have mercifully been free of any involvement with
>Oracle for ~17 years, so ditto for the Oracle stuff. :-)
> One matter of concern, though, is the mention of a page size of 2 MB.
>Intel x86-style CPUs offer a 2 MB page size *only* for instruction (a.k.a.
>text) segments, not for data or stack segments, so I'm not sure what LINUX
>is doing with that.  (See also the last line of the following bit of
>output.)
>>
>>anonymizer2:~# echo 1024 > /proc/sys/vm/nr_hugepages
  
>>anonymizer2:~# cat /proc/meminfo | grep -i hugepage
>>HugePages_Total: 126
   ^^^
 Apparently, telling it to reserve 1024 huge pages didn't take.  I guess
you'll have to dig into the LINUX documentation a bit to find out what's up
with that.
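     A plausible explanation, though unconfirmed here: huge pages must be
physically contiguous, so on a machine that has been up for a while the
kernel may only find room for a fraction of the request.  The usual
workaround is to reserve the pool at boot, before memory fragments, on the
kernel command line; a sketch for a GRUB-style loader:

# appended to the kernel line in the boot loader config, then reboot
kernel /vmlinuz root=/dev/sda1 ro hugepages=1024

The same count can also be made persistent via vm.nr_hugepages in
/etc/sysctl.conf.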

>>HugePages_Free:  126
>>HugePages_Rsvd:    0
>>HugePages_Surp:    0
>>Hugepagesize:   2048 kB
>>
>>Does the tor process automagically take advantage of hugepages after
>>restarting, or does the tor source code have to be modified?
>>
> Olaf, I honestly don't know.  I had not seen the page for which you
>provided a URL, and it is more recent than what I had read about LINUX's
>huge pages before.  Those older articles clearly stated that a program
>had to reserve/designate its memory as huge pages *explicitly*, but it's
>possible that usage is now more automatic.  However, part of the final
>sentence in the article's summary section stands out to me:
>
>   "If your database is running in LINUX *and has HugePages capability*
>   [emphasis mine  --SB], there is no reason not to use it."
>
>That suggests to me that the application (tor, in this case) must tell
>the LINUX kernel which page size it wants for its memory.  Whether it
>also has to specify address ranges explicitly to be so mapped, I haven't
>the foggiest idea.  But even if the application does have to tell the
>kernel something, it ought to be fairly trivial to add to tor's startup
>code.  Start out by overestimating (assuming there is adequate real
>memory on the system to play with) how much tor will need at its maximum
>size, then decrease it, perhaps a bit at a time in successive recompilations,
>until it only minimally exceeds tor's high-water mark.

 BTW, I know that there are *lots* of tor relays running on LINUX
systems whose operators are subscribed to this list.  Don't leave Olaf and
me here swinging in the breeze.  Please jump in with your LINUX expertise
and straighten us out.  Remember that Olaf runs the highest-load-bearing
tor node in our whole network, and there are at least two or four dozen
others that should be considered heavyweight relays that are also on LINUX
systems.  It is in every LINUX+tor user's interest to help Olaf and others
running tor nodes on LINUX systems here to make sure all of those systems
are getting the performance benefits of smaller page tables for their tor
processes (provided those systems have adequate real memory, which I would
bet most of them do).  I've worked with UNIX for decades, but have never
used a LINUX system.  Olaf shouldn't have to depend solely upon someone
who doesn't really know much of what he's writing about to get his blutmagie
tor node running with LINUX huge pages when there are so many LINUX system
experts on this list.

>[soapbox:  on]
> [Unless you have other applications also using that machine, this would
>probably all be made so much easier by just trying out PC-BSD 8.0 because a
>one-line entry in /boot/loader.conf would take care of superpages for you
>automatically.  PC-BSD is the install-and-go version for both new users who
>need to be able to use the system right away before learning much and casual
>users who have no interest in learning much about FreeBSD.  This special
>packaging of the system is designed to allow both groups, who might
>otherwise find it beyond the effort they were willing or able to put into
>it, to get its benefits.]
>[soapbox:  off]
> Now that you've had tor running for a while, what does a
>"cat /proc/meminfo | grep -i hugepage" show you?  Also, 126 such pages
>equal 256 MB of memory.  Is that really enough to hold your entire tor
>process when it's going full tilt?  I thought I had seen you post items
>here in the past that said it was taking well over 1 GB and approaching
>2 GB.

Re: huge pages, was where are the exit nodes gone?

2010-04-13 Thread Scott Bennett
 On Tue, 13 Apr 2010 11:04:36 +0200 Olaf Selke wrote:
>Scott Bennett wrote:
>
>>  Either I forgot (probable) or you didn't mention before (less probable)
>> that you had moved it to a newer machine.  Whatever you're running it on,
>> superpages or LINUX's "huge" pages ought to speed tor up considerably by
>> drastically reducing TLB misses.  (I wasn't suggesting that you revert to
>> older hardware.  I was thinking that you were still running tor on the Xeon-
>> based machine.)
>
>I just set up hugepages (1024 pages of 2 MB each) according to this hint:
>http://www.pythian.com/news/1326/performance-tuning-hugepages-in-linux/

 Very interesting article.  Thanks for the URL.  Of course, not being
a LINUX user, I have no idea what the acronyms for various LINUX kernel
features mean, and I have mercifully been free of any involvement with
Oracle for ~17 years, so ditto for the Oracle stuff. :-)
 One matter of concern, though, is the mention of a page size of 2 MB.
Intel x86-style CPUs offer a 2 MB page size *only* for instruction (a.k.a.
text) segments, not for data or stack segments, so I'm not sure what LINUX
is doing with that.  (See also the last line of the following bit of
output.)
>
>anonymizer2:~# echo 1024 > /proc/sys/vm/nr_hugepages
>anonymizer2:~# cat /proc/meminfo | grep -i hugepage
>HugePages_Total: 126
>HugePages_Free:  126
>HugePages_Rsvd:    0
>HugePages_Surp:    0
>Hugepagesize:   2048 kB
>
>Does the tor process automagically take advantage of hugepages after
>restarting, or does the tor source code have to be modified?
>
 Olaf, I honestly don't know.  I had not seen the page for which you
provided a URL, and it is more recent than what I had read about LINUX's
huge pages before.  Those older articles clearly stated that a program
had to reserve/designate its memory as huge pages *explicitly*, but it's
possible that usage is now more automatic.  However, part of the final
sentence in the article's summary section stands out to me:

"If your database is running in LINUX *and has HugePages capability*
[emphasis mine  --SB], there is no reason not to use it."

That suggests to me that the application (tor, in this case) must tell
the LINUX kernel which page size it wants for its memory.  Whether it
also has to specify address ranges explicitly to be so mapped, I haven't
the foggiest idea.  But even if the application does have to tell the
kernel something, it ought to be fairly trivial to add to tor's startup
code.  Start out by overestimating (assuming there is adequate real
memory on the system to play with) how much tor will need at its maximum
size, then decrease it, perhaps a bit at a time in successive recompilations,
until it only minimally exceeds tor's high-water mark.
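     For what it's worth, the explicit interface on LINUX is a hugetlbfs
mount that applications then map files from; newer kernels also accept a
MAP_HUGETLB flag to mmap() directly.  A minimal sketch of the mount half,
with a made-up mount point:

anonymizer2:~# mkdir -p /mnt/huge
anonymizer2:~# mount -t hugetlbfs none /mnt/huge
anonymizer2:~# grep hugetlbfs /proc/filesystems
nodev   hugetlbfs

The LD_PRELOAD wrapper mentioned elsewhere in this thread exists precisely
so that unmodified programs don't have to do any of this themselves.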
[soapbox:  on]
 [Unless you have other applications also using that machine, this would
probably all be made so much easier by just trying out PC-BSD 8.0 because a
one-line entry in /boot/loader.conf would take care of superpages for you
automatically.  PC-BSD is the install-and-go version for both new users who
need to be able to use the system right away before learning much and casual
users who have no interest in learning much about FreeBSD.  This special
packaging of the system is designed to allow both groups, who might
otherwise find it beyond the effort they were willing or able to put into
it, to get its benefits.]
[soapbox:  off]
 Now that you've had tor running for a while, what does a
"cat /proc/meminfo | grep -i hugepage" show you?  Also, 126 such pages
equal 256 MB of memory.  Is that really enough to hold your entire tor
process when it's going full tilt?  I thought I had seen you post items
here in the past that said it was taking well over 1 GB and approaching
2 GB.


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


huge pages, was where are the exit nodes gone?

2010-04-13 Thread Olaf Selke
Scott Bennett wrote:

>  Either I forgot (probable) or you didn't mention before (less probable)
> that you had moved it to a newer machine.  Whatever you're running it on,
> superpages or LINUX's "huge" pages ought to speed tor up considerably by
> drastically reducing TLB misses.  (I wasn't suggesting that you revert to
> older hardware.  I was thinking that you were still running tor on the Xeon-
> based machine.)

I just set up hugepages (1024 pages of 2 MB each) according to this hint:
http://www.pythian.com/news/1326/performance-tuning-hugepages-in-linux/

anonymizer2:~# echo 1024 > /proc/sys/vm/nr_hugepages
anonymizer2:~# cat /proc/meminfo | grep -i hugepage
HugePages_Total: 126
HugePages_Free:  126
HugePages_Rsvd:    0
HugePages_Surp:    0
Hugepagesize:   2048 kB

Does the tor process automagically take advantage of hugepages after
restarting, or does the tor source code have to be modified?

regards Olaf