Thanks for the reply, Janne.

So the only way to run a process with a data segment over 1GB is a custom
kernel?  I was hoping there was an easier way to run a large cache.  I can
reconfigure squid's memory usage to stay under the limit, but it would be
nice to use more of the box's 2GB of physical memory without building a
custom kernel.
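
For reference, here's roughly how I'm keeping squid under the limit in the
meantime (the squid.conf values below are my own guesses for this box, not
tested recommendations):

    # squid.conf -- keep the process under the 1GB data segment
    cache_mem 256 MB                        # memory cache for hot objects
    maximum_object_size_in_memory 64 KB     # keep large objects out of RAM
    cache_dir ufs /var/squid/cache 4096 16 256  # 4GB cache on disk instead

Note that cache_mem is not the whole process size; the cache index and
in-transit objects come on top of it, so I'm leaving plenty of headroom.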


On 7/18/06, Janne Johansson <[EMAIL PROTECTED]> wrote:
>
> Joe Gibbens wrote:
> > I'm running squid-transparent on 3.9, and the process dies every time
> > it reaches 1GB.
> > FATAL: xcalloc: Unable to allocate 1 blocks of 4108 bytes!
> > The system has 2GB ram
> >
> > # ulimit -aH
> > time(cpu-seconds)    unlimited
> > file(blocks)         unlimited
> > coredump(blocks)     unlimited
> > data(kbytes)         1048576          <- (where is this limit configured?)
>
> /sys/arch/i386/include/vmparam.h:#define MAXDSIZ (1024*1024*1024)  /* max data size */
>
> Note though, I could not go to 2G on amd64, since the kernel elf-loader
> code would act up while compiling (and other parts later might as well!),
> but I did try 1.5G and a complete "make build" went through.
>
> > stack(kbytes)        32768
> > lockedmem(kbytes)    1907008
> > memory(kbytes)       1907008
> > nofiles(descriptors) 1024
> > processes            532
> >
> > How do I change the 1GB maximum data segment size?  ulimit -d does not
> > seem to change anything.  Also, how do the limits in login.conf apply?
> > The _squid user is in the daemon class, and that class is set to a
> > data size of infinity?
>
> The resource limits are inherited from the hard limit that vmparam.h
> sets, of course, so if you manage to increase it, the login.conf
> "infinity" should go up also. You won't reach 2G though, if I had to
> guess.
>
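
For the archives, my understanding of the custom-kernel route on i386 is
roughly the following (untested on my end; the 1.5G value is just Janne's
datapoint from above, not something I've verified):

    /* /usr/src/sys/arch/i386/include/vmparam.h */
    /* raise the hard cap; 1.5G since Janne reports trouble at 2G */
    #define MAXDSIZ         (1536*1024*1024)        /* max data size */

then rebuild and install the kernel:

    cd /usr/src/sys/arch/i386/conf
    config GENERIC
    cd ../compile/GENERIC
    make clean && make depend && make
    cp /bsd /obsd && cp bsd /bsd
    reboot

After that, the daemon class "infinity" in /etc/login.conf (or an explicit
datasize-max) should follow the new MAXDSIZ, per Janne's note above.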



-- 
Joe Gibbens
