On Thu, 5 Apr 2007, Andy Whitcroft wrote:
> Christoph if you could let us know which benchmarks you are seeing gains
> with that would be a help.
You saw the numbers that Ken got with the pipe test, right?
Then there are some minor improvements if you run AIM7.
I got into this because I saw the
Andi Kleen wrote:
> On Monday 02 April 2007 23:56:08 Dave Hansen wrote:
>> On Mon, 2007-04-02 at 14:28 -0700, Christoph Lameter wrote:
>>> I do not care what it's called as long as it
>>> covers all the bases and is not a glaring performance regression (like
>>> SPARSEMEM so far).
>> I honestly don't doubt that there are regressions, somewhere.
Christoph Lameter wrote:
> On Mon, 2 Apr 2007, Martin Bligh wrote:
>
>>> It's just the opposite. The vmemmap code is so efficient that we can remove
>>> lots of other code and gobs of these alternate implementations. On x86_64
>>> it's even superior to FLATMEM since FLATMEM still needs a memory reference
On Wed, 4 Apr 2007, Bob Picco wrote:
> Well you must have forgotten about these two postings in regards to
> performance numbers:
> http://marc.info/?l=linux-ia64&m=111990276501051&w=2
> and
> http://marc.info/?l=linux-kernel&m=116664638611634&w=2
I am well aware of those, but those were done with
Christoph Lameter wrote:[Mon Apr 02 2007, 05:28:30PM EDT]
> On Mon, 2 Apr 2007, Dave Hansen wrote:
>
> > On Mon, 2007-04-02 at 13:30 -0700, Christoph Lameter wrote:
> > > On Mon, 2 Apr 2007, Dave Hansen wrote:
> > > > I completely agree, it looks like it should be faster. The code
> > > > certainly has potential benefits. But, to add this neato, apparently
On Monday 02 April 2007 09:37, Christoph Lameter wrote:
> On Sun, 1 Apr 2007, Andi Kleen wrote:
> > Hmm, this means there is at least 2MB worth of struct page on every node?
> > Or do you have overlaps with other memory (I think you have)
> > In that case you have to handle the overlap in change_page_attr()
On Mon, 2 Apr 2007, Martin Bligh wrote:
> > For 64GB you'd need 256M which would be a quarter of low mem. Probably takes
> > up too much of low mem.
>
> Yup.
We could move whatever you currently use to handle that into i386 arch
code. Or are there other platforms that do similar tricks with highmem?
On Tue, 3 Apr 2007, Andi Kleen wrote:
> If it works I would be inclined to replace old sparsemem with Christoph's
> new one on x86-64. Perhaps that could cut down the bewildering sparsemem
> ifdef jungle that is there currently.
>
> But I presume it won't work on 32bit because of the limited address space.
On Mon, 2 Apr 2007, Dave Hansen wrote:
> On Mon, 2007-04-02 at 14:28 -0700, Christoph Lameter wrote:
> > I do not care what it's called as long as it
> > covers all the bases and is not a glaring performance regression (like
> > SPARSEMEM so far).
>
> I honestly don't doubt that there are regr
On Mon, 2007-04-02 at 14:28 -0700, Christoph Lameter wrote:
> I do not care what it's called as long as it
> covers all the bases and is not a glaring performance regression (like
> SPARSEMEM so far).
I honestly don't doubt that there are regressions, somewhere. Could you
elaborate, and perhap
Note that these arguments on DISCONTIG are flame bait for many SGIers.
We usually see this as an attack on DISCONTIG/VMEMMAP which is the
existing best performing implementation for page_to_pfn and vice
versa. Please, let's stop the polarization. We want one consistent scheme
to manage memory everywhere.
On Mon, 2007-04-02 at 13:30 -0700, Christoph Lameter wrote:
> On Mon, 2 Apr 2007, Dave Hansen wrote:
> > I completely agree, it looks like it should be faster. The code
> > certainly has potential benefits. But, to add this neato, apparently
> > more performant feature, we unfortunately have to add code.
On Mon, 2 Apr 2007, Dave Hansen wrote:
> I completely agree, it looks like it should be faster. The code
> certainly has potential benefits. But, to add this neato, apparently
> more performant feature, we unfortunately have to add code. Adding the
> code has a cost: code maintenance. This isn
On Mon, 2007-04-02 at 08:37 -0700, Christoph Lameter wrote:
> You want a benchmark to prove that the removal of memory references and
> code improves performance?
Yes, please. ;)
I completely agree, it looks like it should be faster. The code
certainly has potential benefits. But, to add this neato, apparently
more performant feature, we unfortunately have to add code.
On Mon, 2 Apr 2007, Dave Hansen wrote:
> MAX_ORDER, and the section size is at least MAX_ORDER. If we *did* have
> this, then the page allocator would already be broken for these
> nodes. ;)
Ahh... Ok.
> So, this SPARSE_VIRTUAL does introduce a new dependency, which Andi
> calculated above. B
On Mon, 2007-04-02 at 08:54 -0700, Christoph Lameter wrote:
> > BTW there is no guarantee the node size is a multiple of 128MB so
> > you likely need to handle the overlap case. Otherwise we can
> > get cache corruptions
>
> How does sparsemem handle that?
It doesn't. :)
In practice, this situ
On Mon, 2 Apr 2007, Andi Kleen wrote:
> > No no no. For the gazillionth time: All of 1-1 mapped kernel memory on
> > x86_64 needs a 2 MB page table entry. The virtual memmap uses the same.
> > There are *no* additional TLBs used.
>
> But why do you reserve a separate virtual area then if you claim not to use any additional mappings?
> No no no. For the gazillionth time: All of 1-1 mapped kernel memory on
> x86_64 needs a 2 MB page table entry. The virtual memmap uses the same.
> There are *no* additional TLBs used.
But why do you reserve a separate virtual area then if you claim not to use any
additional mappings?
-Andi
On Sun, 1 Apr 2007, Andi Kleen wrote:
> Or do you have overlaps with other memory (I think you have)
We may get into a case where some page structs are physically located
on other nodes if there are no holes between nodes. This would be
particularly significant for 64MB node sizes using numa emulation.
cc: apw ... seeing as he wrote sparsemem in the first place, please copy
him on this stuff?
On Mon, 2 Apr 2007, Andi Kleen wrote:
> > Correct. 2MB worth of struct page is 128 MB of memory. Are there nodes
> > with smaller amounts of memory?
>
> Yes the discontigmem minimum is 64MB and there are some setups
> (mostly with numa emulation) where you end up with nodes that small.
Ok. The
On Monday 02 April 2007 17:37, Christoph Lameter wrote:
> On Sun, 1 Apr 2007, Andi Kleen wrote:
>
> > Hmm, this means there is at least 2MB worth of struct page on every node?
> > Or do you have overlaps with other memory (I think you have)
> > In that case you have to handle the overlap in change_page_attr()
On Sun, 1 Apr 2007, Andi Kleen wrote:
> And when you reserve virtual space somewhere you should
> update Documentation/x86_64/mm.txt. Also you didn't adjust
> the end of the vmalloc area so in theory vmalloc could run
> into your vmemmap.
Ok. Will add to the doc in the next release.
No need to
On Sun, 1 Apr 2007, Andi Kleen wrote:
> Hmm, this means there is at least 2MB worth of struct page on every node?
> Or do you have overlaps with other memory (I think you have)
> In that case you have to handle the overlap in change_page_attr()
Correct. 2MB worth of struct page is 128 MB of memory.
On Sunday 01 April 2007 09:10, Christoph Lameter wrote:
> x86_64 make SPARSE_VIRTUAL the default
>
> x86_64 is using 2M page table entries to map its 1-1 kernel space.
> We implement the virtual memmap also using 2M page table entries.
> So there is no difference at all to FLATMEM. Both schemes re
Argh. This should have been Patch 2/2. There is nothing in between.