On 18.02.2014 16:21, Julian Taylor wrote:
> On Mon, Feb 17, 2014 at 9:42 PM, Nathaniel Smith wrote:
>> On 17 Feb 2014 15:17, "Sturla Molden" wrote:
>>>
>>> Julian Taylor wrote:
>>>
>>> > When an array is created it tries to get its memory from the cache and
>>> > when it's deallocated it returns it [...]

I am cross-posting this to the Cython user group to make sure they see this.

Sturla

On 18 Feb 2014 10:21, "Julian Taylor" wrote:
>
> On Mon, Feb 17, 2014 at 9:42 PM, Nathaniel Smith wrote:
> > On 17 Feb 2014 15:17, "Sturla Molden" wrote:
> >>
> >> Julian Taylor wrote:
> >>
> >> > When an array is created it tries to get its memory from the cache and
> >> > when it's deallocated [...]

On Mon, Feb 17, 2014 at 9:42 PM, Nathaniel Smith wrote:
> On 17 Feb 2014 15:17, "Sturla Molden" wrote:
>>
>> Julian Taylor wrote:
>>
>> > When an array is created it tries to get its memory from the cache and
>> > when it's deallocated it returns it to the cache.
>>
...
>
> Another optimization we should consider [...]

Julian Taylor wrote:
> I was thinking of something much simpler, just a layer of pointer stacks
> for different allocation sizes, the larger the size the smaller the
> cache, with pessimistic defaults.
> e.g. the largest default cache layer is 128MB and with one or two
> entries so we can cache te [...]

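The "layer of pointer stacks" idea can be sketched in a few lines: one small free list per allocation size, with shallower stacks for larger sizes. This is a hypothetical illustration, not NumPy's actual implementation; the sizes and depths are invented for the sketch.

```python
# Hypothetical sketch of per-size pointer stacks: each allocation size
# gets its own small free list, and larger sizes get shallower stacks
# (pessimistic defaults).  Sizes and depths are illustrative only.

_DEPTH = {
    4 * 1024: 8,             # many small buffers are worth caching
    1024 * 1024: 4,
    128 * 1024 * 1024: 2,    # largest layer: only one or two entries
}
_STACKS = {size: [] for size in _DEPTH}

def cache_alloc(size):
    """Pop a cached buffer of exactly `size` bytes, or allocate fresh."""
    stack = _STACKS.get(size)
    if stack:
        return stack.pop()
    return bytearray(size)

def cache_dealloc(buf):
    """Push a buffer back onto its stack if there is room, else drop it
    so the real allocator frees it (the usual 'greedy free')."""
    stack = _STACKS.get(len(buf))
    if stack is not None and len(stack) < _DEPTH[len(buf)]:
        stack.append(buf)
```

Note that a reused buffer keeps its old contents: that is exactly why the scheme avoids the kernel's page-zeroing cost, and also why a real implementation has to be careful with calloc-style requests.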
On Tue, Feb 18, 2014 at 1:47 AM, David Cournapeau wrote:
>
> On Mon, Feb 17, 2014 at 7:31 PM, Julian Taylor wrote:
>>
>> hi,
>> I noticed that during some simplistic benchmarks (e.g.
>> https://github.com/numpy/numpy/issues/4310) a lot of time is spent in
>> the kernel zeroing pages.
>> This is [...]

On Mon, Feb 17, 2014 at 7:31 PM, Julian Taylor
<jtaylor.deb...@googlemail.com> wrote:
> hi,
> I noticed that during some simplistic benchmarks (e.g.
> https://github.com/numpy/numpy/issues/4310) a lot of time is spent in
> the kernel zeroing pages.
> This is because under linux glibc will always [...]

On 02/17/2014 06:56 PM, Nathaniel Smith wrote:
> On Mon, Feb 17, 2014 at 3:55 PM, Stefan Seefeld wrote:
>> On 02/17/2014 03:42 PM, Nathaniel Smith wrote:
>>> Another optimization we should consider that might help a lot in the
>>> same situations where this would help: for code called from the
>>> [...]

On 17.02.2014 22:27, Sturla Molden wrote:
> Nathaniel Smith wrote:
>> Also, I'd be pretty wary of caching large chunks of unused memory. People
>> already have a lot of trouble understanding their program's memory usage,
>> and getting rid of 'greedy free' will make this even worse.
>
> A cache would only be needed [...]

On Mon, Feb 17, 2014 at 3:55 PM, Stefan Seefeld wrote:
> On 02/17/2014 03:42 PM, Nathaniel Smith wrote:
>> Another optimization we should consider that might help a lot in the
>> same situations where this would help: for code called from the
>> cpython eval loop, it's afaict possible to determine which inputs are
>> temporaries [...]

Nathaniel Smith wrote:
> Also, I'd be pretty wary of caching large chunks of unused memory. People
> already have a lot of trouble understanding their program's memory usage,
> and getting rid of 'greedy free' will make this even worse.

A cache would only be needed when there is a lot of computing [...]

On 02/17/2014 03:42 PM, Nathaniel Smith wrote:
> Another optimization we should consider that might help a lot in the
> same situations where this would help: for code called from the
> cpython eval loop, it's afaict possible to determine which inputs are
> temporaries by checking their refcnt. In [...]
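The refcount trick Nathaniel describes can be observed from Python with `sys.getrefcount`. This is only an illustration of the idea; the real check would happen at the C level (`Py_REFCNT(op) == 1`) on the operands of an operation, and the absolute numbers below are off by one because `getrefcount` counts its own argument reference.

```python
import sys

# A name binding adds one reference; an unnamed temporary has none
# beyond the call itself.  That difference is what lets the eval loop
# identify operands whose buffer could be reused in place.

x = [0] * 1000
named = sys.getrefcount(x)          # binding `x` + argument reference

temp = sys.getrefcount([0] * 1000)  # temporary: argument reference only

assert named == temp + 1
```

In an expression like `a + b + c`, the intermediate result of `a + b` is exactly such a temporary, so its memory could be reused for the final sum instead of allocating a third buffer.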
On 17 Feb 2014 15:17, "Sturla Molden" wrote:
>
> Julian Taylor wrote:
>
> > When an array is created it tries to get its memory from the cache and
> > when it's deallocated it returns it to the cache.
>
> Good idea, however there is already a C function that does this. It uses a
> heap to keep the cached memory blocks sorted according to size. [...]

On 17.02.2014 21:16, Sturla Molden wrote:
> Julian Taylor wrote:
>
>> When an array is created it tries to get its memory from the cache and
>> when it's deallocated it returns it to the cache.
>
> Good idea, however there is already a C function that does this. It uses a
> heap to keep the cached memory blocks sorted according to size. [...]

Julian Taylor wrote:
> When an array is created it tries to get its memory from the cache and
> when it's deallocated it returns it to the cache.

Good idea, however there is already a C function that does this. It uses a
heap to keep the cached memory blocks sorted according to size. You know it [...]

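A size-sorted heap of cached blocks, as Sturla describes, can be sketched as follows. `heapq` stands in for the C heap; the `id()` tie-breaker is only there because buffers themselves are not comparable. This is a toy model of the idea, not the C function he refers to.

```python
import heapq

_heap = []  # min-heap of (size, tiebreak, buffer): smallest block on top

def heap_free(buf):
    """Park a freed buffer in the size-sorted cache."""
    heapq.heappush(_heap, (len(buf), id(buf), buf))

def heap_alloc(size):
    """Return the smallest cached block of at least `size` bytes, or
    allocate a fresh one.  Blocks that are too small stay cached."""
    too_small = []
    while _heap and _heap[0][0] < size:
        too_small.append(heapq.heappop(_heap))
    block = heapq.heappop(_heap)[2] if _heap else bytearray(size)
    for entry in too_small:
        heapq.heappush(_heap, entry)
    return block
```

Unlike the exact-size pointer stacks, this returns the best-fitting block, which may be larger than requested, so the caller must track the usable length separately.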
hi,
I noticed that during some simplistic benchmarks (e.g.
https://github.com/numpy/numpy/issues/4310) a lot of time is spent in
the kernel zeroing pages.
This is because under linux glibc will always allocate large memory
blocks with mmap. As these pages can come from other processes the
kernel must zero them [...]
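One way to see (and work around) this behaviour from Python is to raise glibc's mmap threshold via `mallopt`, so that large blocks stay on the heap and freed memory can be reused without the kernel handing out fresh zeroed pages. This is a glibc-only sketch (the `M_MMAP_THRESHOLD` parameter number comes from `malloc.h`); on other platforms it simply reports failure. It also carries the trade-off Nathaniel raises about 'greedy free': heap memory is not necessarily returned to the OS.

```python
import ctypes
import ctypes.util

M_MMAP_THRESHOLD = -3  # mallopt() parameter number from glibc's malloc.h

def raise_mmap_threshold(nbytes):
    """Tell glibc malloc to serve allocations below `nbytes` bytes from
    the heap instead of mmap.  Returns True on success, False when libc
    or mallopt is unavailable (e.g. on non-glibc platforms)."""
    name = ctypes.util.find_library("c")
    if name is None:
        return False
    try:
        libc = ctypes.CDLL(name)
        return libc.mallopt(M_MMAP_THRESHOLD, nbytes) == 1
    except (OSError, AttributeError):
        return False
```

Called early (e.g. `raise_mmap_threshold(64 * 1024 * 1024)`), this makes glibc reuse freed large blocks instead of returning them to the kernel via munmap, which is one process-wide alternative to caching inside NumPy itself.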