Cool, what's the PR number?

It sounds like something is odd. Email -current with the PR details
(number, how you reproduce it, etc) and let's see if we can get one of
the VM/UMA gurus to look into it.

Thanks,



-adrian

On 27 July 2013 00:26, Tugrul Erdogan <h.tugrul.erdo...@gmail.com> wrote:
> I have just filed a PR.
>
> The negative value printed is malloc's size parameter itself (in fact,
> after some page-alignment round-up operations). That parameter is
> declared as "unsigned long" but printed with "%ld", i.e. as a signed
> long. So if the size is very large (more than 2^63 on amd64), the
> signed conversion treats the top bit of the size as a sign bit and a
> minus sign is printed. But I don't think the size can really be that
> big, so you are right, there must be a problem somewhere (either the
> size parameter arrives negative, or the round-up functions corrupt it).
>
>
> On Sat, Jul 27, 2013 at 12:21 AM, Adrian Chadd <adr...@freebsd.org> wrote:
>>
>> Hi
>>
>> Have you filed a PR? This should get fixed.
>>
>> Also, being -ve is a problem. Is the value really negative? Is it
>> wrapping badly?
>>
>>
>>
>> -adrian
>>
>> On 25 July 2013 07:57, Tugrul Erdogan <h.tugrul.erdo...@gmail.com> wrote:
>> > howdy all,
>> >
>> > At my work I am using 10.0-CURRENT on an Intel(R) Xeon(R) E5620 with
>> > 16GB of RAM. I am getting the message
>> >
>> > "panic: kmem_malloc(-548663296): kmem_map too small: 539459584 total
>> > allocated"
>> >
>> > with the configuration below:
>> >
>> > [root@ ~]# sysctl vm.kmem_size_min vm.kmem_size_max vm.kmem_size
>> > vm.kmem_size_scale
>> > vm.kmem_size_min: 0
>> > vm.kmem_size_max: 329853485875
>> > vm.kmem_size: 16686845952
>> > vm.kmem_size_scale: 1
>> > [root@ ~]# sysctl hw.physmem hw.usermem hw.realmem
>> > hw.physmem: 17151787008
>> > hw.usermem: 8282652672
>> > hw.realmem: 18253611008
>> > [root@ ~]# sysctl hw.pagesize hw.pagesizes hw.availpages
>> > hw.pagesize: 4096
>> > hw.pagesizes: 4096 2097152 0
>> > hw.availpages: 4187448
>> >
>> >
>> > When I compare the boot-time vmstat and netstat output with later
>> > output, the major difference appears at:
>> >
>> > pf_temp 0 0K - 79309736 128 | pf_temp 1077640 134705K - 84330076 128
>> >
>> > and after the panic, the major vmstat difference in the core dump
>> > is:
>> >
>> > temp 110 15K - 76212305 16,32,64,128,256 | temp 117 6742215K - 655115
>> > 16,32,64,128,2
>> >
>> > When I explore the kernel source (vm_kern.c and vm_map.c), I see
>> > that this panic can occur in the cases below:
>> >
>> > * a negative malloc size parameter
>> >
>> > * a request longer than the free space between kmem_map's min_offset
>> > and max_offset values
>> >
>> > * an allocation attempt when the root entry of the map is its
>> > rightmost entry
>> >
>> > * an allocation attempt bigger than the map's max_free value
>> >
>> > I think the panic occurs during mbuf creation, when malloc() fails
>> > to allocate memory; but I don't understand why any of these panic
>> > cases is triggered. The memory is almost empty, yet the machine says
>> > kmem_map is too small while only about 0.5GB is actually in use. How
>> > can I solve this panic problem?
>> >
>> > Thank you all for your time.
>> >
>> > -- Best Wishes,
>> >
>> > Tugrul Erdogan
>> > _______________________________________________
>> > freebsd-hackers@freebsd.org mailing list
>> > http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
>> > To unsubscribe, send any mail to
>> > "freebsd-hackers-unsubscr...@freebsd.org"
>
>