Re: Memory usage tracking

2015-05-10 Thread tcak via Digitalmars-d-learn

On Sunday, 10 May 2015 at 09:44:42 UTC, tcak wrote:
I am testing my web server right now. I started 5 separate 
consoles and am continuously sending requests to it with curl.


It uses shared memory as well, though from `ipcs -a` I don't 
see more than the necessary amount of allocation.


At the moment, the server has received about 1.5M requests, and 
memory usage has reached 128MB according to Ubuntu's System 
Monitor (`top` gives a similar value). I just saw in `top` that 
only about 650KB of shared memory is in use.


Is there any way to find out what is using that much memory? 
Would `-profile` do that?


The problem is that if I used the `-profile` flag, the server 
would slow down, and I wouldn't be able to test it correctly 
anymore.


Hmm. The server was compiled in debug mode. Right now it is at 
2.2M requests, and 174MB of memory is in use.


Memory usage tracking

2015-05-10 Thread tcak via Digitalmars-d-learn
I am testing my web server right now. I started 5 separate 
consoles and am continuously sending requests to it with curl.


It uses shared memory as well, though from `ipcs -a` I don't 
see more than the necessary amount of allocation.


At the moment, the server has received about 1.5M requests, and 
memory usage has reached 128MB according to Ubuntu's System 
Monitor (`top` gives a similar value). I just saw in `top` that 
only about 650KB of shared memory is in use.


Is there any way to find out what is using that much memory? 
Would `-profile` do that?


The problem is that if I used the `-profile` flag, the server 
would slow down, and I wouldn't be able to test it correctly 
anymore.
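[Editorial note: a quick way to confirm whether the number reported by System Monitor is really growing with load is to sample the process's resident set size (RSS) directly. A minimal sketch; `server` is a placeholder for the actual binary name, and the sample count/interval are arbitrary:]

```shell
#!/bin/sh
# Sample the resident set size (RSS, in KB) of the server a few
# times so growth under a fixed request load becomes visible.
# "server" is a placeholder for the actual process name.
pid=$(pgrep -n server)
for i in 1 2 3; do
    ps -o rss= -p "$pid"   # one RSS sample per line, in KB
    sleep 5
done
```

`ps` reports RSS in KB, so 128MB would show as roughly 131072. RSS also counts shared library pages and thread stacks, not just the GC heap.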


Re: Memory usage tracking

2015-05-10 Thread weaselcat via Digitalmars-d-learn

On Sunday, 10 May 2015 at 10:43:37 UTC, tcak wrote:

On Sunday, 10 May 2015 at 09:44:42 UTC, tcak wrote:
I am testing my web server right now. I started 5 separate 
consoles and am continuously sending requests to it with curl.


It uses shared memory as well, though from `ipcs -a` I don't 
see more than the necessary amount of allocation.


At the moment, the server has received about 1.5M requests, and 
memory usage has reached 128MB according to Ubuntu's System 
Monitor (`top` gives a similar value). I just saw in `top` that 
only about 650KB of shared memory is in use.


Is there any way to find out what is using that much memory? 
Would `-profile` do that?


The problem is that if I used the `-profile` flag, the server 
would slow down, and I wouldn't be able to test it correctly 
anymore.


Hmm. The server was compiled in debug mode. Right now it is at 
2.2M requests, and 174MB of memory is in use.


Which compiler are you using? Also, debug mode might have linked 
against the debug Phobos - run `ldd` on your executable.
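[Editorial note: that check might look like the following sketch; `./server` is a placeholder for the executable path:]

```shell
#!/bin/sh
# List the shared libraries the executable links against; a
# dynamically linked debug druntime/Phobos would appear here.
# "./server" is a placeholder for the actual binary.
ldd ./server
# Narrow the output to the D runtime libraries, if any are dynamic:
ldd ./server | grep -Ei 'phobos|druntime' \
    || echo "phobos/druntime not dynamically linked"
```

Note that DMD links Phobos statically by default on Linux, so the runtime may legitimately not appear in `ldd` output at all.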


Re: Memory usage tracking

2015-05-10 Thread tcak via Digitalmars-d-learn

On Sunday, 10 May 2015 at 10:50:40 UTC, weaselcat wrote:

On Sunday, 10 May 2015 at 10:43:37 UTC, tcak wrote:

On Sunday, 10 May 2015 at 09:44:42 UTC, tcak wrote:
I am testing my web server right now. I started 5 separate 
consoles and am continuously sending requests to it with curl.


It uses shared memory as well, though from `ipcs -a` I don't 
see more than the necessary amount of allocation.


At the moment, the server has received about 1.5M requests, and 
memory usage has reached 128MB according to Ubuntu's System 
Monitor (`top` gives a similar value). I just saw in `top` that 
only about 650KB of shared memory is in use.


Is there any way to find out what is using that much memory? 
Would `-profile` do that?


The problem is that if I used the `-profile` flag, the server 
would slow down, and I wouldn't be able to test it correctly 
anymore.


Hmm. The server was compiled in debug mode. Right now it is at 
2.2M requests, and 174MB of memory is in use.


Which compiler are you using? Also, debug mode might have linked 
against the debug Phobos - run `ldd` on your executable.


I am using DMD. The web server is running as a daemon, but the 
web application is being debugged with gdb. A while ago, gdb 
started using 100% of the CPU, and request-response slowed down 
greatly. It has reached 2.22M requests; the test should end at 
2.5M requests. Then I will check whether memory usage goes down 
by itself.


The ldd result is this:

	linux-vdso.so.1 => (0x7ffc6192c000)
	libmysqlclient.so.18 => /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18 (0x7ff6ecde4000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7ff6ecbc6000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7ff6ec9bd000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7ff6ec5f8000)
	/lib64/ld-linux-x86-64.so.2 (0x7ff6ed346000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7ff6ec3df000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7ff6ec1da000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7ff6ebed4000)


I am guessing a copy of a request object is being kept somewhere 
and the GC is not destroying it, but I am not sure. Is there a 
way to find out about very long-lived objects?
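[Editorial note: one option for that last question is to run the server under a heap profiler and compare snapshots over time. A sketch using Valgrind's massif tool, assuming valgrind is installed and `./server` is a placeholder for the binary; massif is slow, so use a reduced request load. Since D's GC obtains memory via mmap rather than malloc, `--pages-as-heap=yes` is needed for the GC heap to be counted:]

```shell
#!/bin/sh
# Profile heap growth over time with Valgrind massif.
# --pages-as-heap=yes counts mmap'd pages too, which is where the
# D GC heap lives; plain massif only sees malloc/free traffic.
valgrind --tool=massif --pages-as-heap=yes ./server
# After the run, render the snapshot report (the PID suffix varies):
ms_print massif.out.*
```

The `ms_print` report shows which snapshots grew and which allocation back-traces hold the most memory, which is a reasonable proxy for objects that stay alive a long time.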