Re: squid process dies when it reaches a size of 1GB.

2006-07-18 Thread Janne Johansson

Joe Gibbens wrote:

> Thanks for the reply Janne.
>
> So my only way to run a process over 1GB in size is a custom kernel?


Yes, as of now, on i386.
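If you do go that route, the usual trick (just a sketch from memory; check
options(4) and your arch's vmparam.h before trusting the exact syntax) is to
override MAXDSIZ in a custom kernel config and rebuild:

   # /sys/arch/i386/conf/CUSTOM, copied from GENERIC
   option  MAXDSIZ="(1536*1024*1024)"   # raise max data segment to 1.5G

   # then roughly:
   # cd /sys/arch/i386/conf && config CUSTOM
   # cd ../compile/CUSTOM && make depend && make

The new value should show up as the hard data(kbytes) limit once you boot
the new kernel.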

> Is there an easier way to run a large cache with a process size over 1GB?


You can do other things as well, like bumping cachepct to ~12 with
config -ef /bsd (I believe there is a limit close to 256MB for the filesystem
cache on OpenBSD, and with your 2GB of RAM that works out to about 12 percent).
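If you haven't used ukc before, the session goes roughly like this (prompt
and command names from memory, so treat it as a sketch and see config(8)):

   # config -ef /bsd
   ukc> cachepct 12
   ukc> quit

quit writes the change back into the kernel image, so it sticks across
reboots; exit would leave without saving.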

Not much help there, but at least something.

> I can re-configure the memory usage, but it would be nice to be able to
> utilize more of my physical memory without having to go with a custom
> kernel.


Hack away, solve the issues! =)
(Or pay someone to do it for you/us)



Re: squid process dies when it reaches a size of 1GB.

2006-07-18 Thread Joe Gibbens
Thanks for the reply Janne.

So my only way to run a process over 1GB in size is a custom kernel?  Is
there an easier way to run a large cache with a process size over 1GB?  I
can re-configure the memory usage, but it would be nice to be able to
utilize more of my physical memory without having to go with a custom
kernel.
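(To be concrete, the knobs I mean by re-configuring are roughly these in
squid.conf; the numbers are just an illustration, not a recommendation:

   cache_mem 256 MB                             # RAM for hot objects
   cache_dir ufs /var/squid/cache 8192 16 256   # 8GB on-disk cache

As I understand it, the process also needs something like 10MB of index per
GB of cache_dir on top of cache_mem, which is how it creeps up toward the
1GB data limit in the first place.)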


On 7/18/06, Janne Johansson <[EMAIL PROTECTED]> wrote:
>
> Joe Gibbens wrote:
> > I'm running squid-transparent on 3.9, and the process dies every time
> > it reaches 1GB.
> > FATAL: xcalloc: Unable to allocate 1 blocks of 4108 bytes!
> > The system has 2GB ram
> >
> > # ulimit -aH
> > time(cpu-seconds)    unlimited
> > file(blocks)         unlimited
> > coredump(blocks)     unlimited
> > data(kbytes)         1048576   <- (where is this limit configured?)
>
> /sys/arch/i386/include/vmparam.h:
> #define MAXDSIZ        (1024*1024*1024)        /* max data size */
>
> Note though, I could not go to 2G on amd64, since the kernel ELF-loader
> code would act up while compiling (and other parts might act up later as
> well!), but I did try 1.5G with a complete make build going through.
>
> > stack(kbytes)        32768
> > lockedmem(kbytes)    1907008
> > memory(kbytes)       1907008
> > nofiles(descriptors) 1024
> > processes            532
> >
> > How do I change the 1GB maximum data segment size?  ulimit -d does not
> > seem to change anything.  Also, how do the limits in login.conf apply?
> > The _squid user is in the daemon class, and that class is set to a
> > data size of infinity?
>
> The resource limits are inherited from the hard limit that vmparam.h
> sets, of course, so if you manage to increase it, the login.conf
> "infinity" should go up as well. You won't reach 2G though, if I can
> make a guess.
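> For reference, the stock daemon class looks roughly like this (from
> memory, so check your own /etc/login.conf):
>
>    daemon:\
>        :ignorenologin:\
>        :datasize=infinity:\
>        :maxproc=infinity:\
>        :openfiles-cur=128:\
>        :stacksize-cur=8M:\
>        :tc=default:
>
> and "infinity" there still gets clipped to the MAXDSIZ hard limit.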
>



-- 
Joe Gibbens