actually yeah i've seen this... in a bizarre failure situation in a system
which physically had RAM in the boot node but it was never enumerated for
the kernel (other nodes had RAM which was enumerated).
so technically there was boot node RAM but the kernel never saw it.
-dean
On Wed, 30 Jan
why do we need another kernel cpuid reading method when sched_setaffinity
exists and cpuid is available in ring3?
-dean
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at
On Tue, 29 Jan 2008, Andi Kleen wrote:
> > SRAT is essentially just a two dimensional table with node distances.
>
> Sorry, that was actually SLIT. SRAT is not two dimensional, but also
> relatively simple. SLIT you don't really need to implement.
yeah but i'd heartily recommend implementing SLIT
On Thu, 17 Jan 2008, Patrick J. LoPresti wrote:
> I need to copy large (> 100GB) files between machines on a fast
> network. Both machines have reasonably fast disk subsystems, with
> read/write performance benchmarked at > 800 MB/sec. Using 10GigE cards
> and the usual tweaks to tcp_rmem etc.,
On Tue, 15 Jan 2008, Andrew Morton wrote:
> On Tue, 15 Jan 2008 21:01:17 -0800 (PST) dean gaudet <[EMAIL PROTECTED]>
> wrote:
>
> > On Mon, 14 Jan 2008, NeilBrown wrote:
> >
> > > raid5's 'make_request' function calls generic_make_request on
> > > underlying devices and if we run out of stripe heads
> ...elayed is only called at unplug time, never in
> raid5. This seems to bring back the performance numbers. Calling it
> in raid5d was sometimes too soon...
>
> Cc: "Dan Williams" <[EMAIL PROTECTED]>
> Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
probably doesn't matter, but for the record:
Tested-by: dean gaudet <[EMAIL PROTECTED]>
this time i tested with internal and external bitmaps
if i boot an x86 64-bit 2.6.24-rc7 kernel with nosmp, maxcpus=0 or 1 it
still disables TSC :)
Marking TSC unstable due to TSCs unsynchronized
this is an opteron 2xx box which does have two cpus and no clock-divide in
halt or cpufreq enabled so TSC should be fine with only one cpu.
pretty sure
On Fri, 11 Jan 2008, dean gaudet wrote:
> On Fri, 11 Jan 2008, Ingo Molnar wrote:
>
> > * Andi Kleen <[EMAIL PROTECTED]> wrote:
> >
> > > Cached requires the cache line to be read first before you can write
> > > it.
> >
> > nonsense, and you should know it. It is perfectly possible to construct
On Fri, 11 Jan 2008, Ingo Molnar wrote:
> * Andi Kleen <[EMAIL PROTECTED]> wrote:
>
> > Cached requires the cache line to be read first before you can write
> > it.
>
> nonsense, and you should know it. It is perfectly possible to construct
> fully written cachelines, without reading the
On Sat, 29 Dec 2007, [EMAIL PROTECTED] wrote:
> On Sat, 29 Dec 2007 12:40:47 PST, dean gaudet said:
>
> > the main worry i have is some user maliciously hardlinks everything
> > under /var/log somewhere else and slowly fills up the file system with
> > old rotated logs.
>
> Doctor, it hurts when I do
On Sun, 30 Dec 2007, David Newall wrote:
> dean gaudet wrote:
> > > Pffuff. That's what volume managers are for! You do have (at least) two
> > > independent spindles in your RAID1 array, which give you less need to
> > > worry about head-stack contention.
> >
> > this system is write intensive
On Sat, 29 Dec 2007, David Newall wrote:
> dean gaudet wrote:
> > On Wed, 19 Dec 2007, David Newall wrote:
> >
> > > Mark Lord wrote:
> > >
> > > > But.. pity there's no mount flag override for smaller systems,
> > > > where bind mounts might be more useful with link(2) actually working.
On Sat, 29 Dec 2007, Jan Engelhardt wrote:
>
> On Dec 28 2007 18:53, dean gaudet wrote:
> >p.s. in retrospect i probably could have arranged it more like this:
> >
> > mount /dev/md1 $tmpmntpoint
> > mount --bind $tmpmntpoint/var /var
> > mount --bind $tmpmntpoint/home /home
> > umount $tmpmntpoint
On Wed, 19 Dec 2007, David Newall wrote:
> Mark Lord wrote:
> > But.. pity there's no mount flag override for smaller systems,
> > where bind mounts might be more useful with link(2) actually working.
>
> I don't see it. You always can make hard link on the underlying filesystem.
> If you need
On Fri, 23 Nov 2007, Arne Georg Gleditsch wrote:
> dean gaudet <[EMAIL PROTECTED]> writes:
> > on AMD x86 pre-family 10h the boundary is 8 bytes, and on fam 10h it's 16
> > bytes. the penalty is a mere 3 cycles if an access crosses the specified
> > boundary.
>
> Worth noting though, is that atomic
On Fri, 23 Nov 2007, Alan Cox wrote:
> Its usually faster if you don't misalign on x86 as well.
i'm not sure if i agree with "usually"... but i know you (alan) are
probably aware of the exact requirements of the hw.
for everyone else:
on intel x86 processors an access is unaligned only if it
On Tue, 20 Nov 2007, dean gaudet wrote:
> On Tue, 20 Nov 2007, Metzger, Markus T wrote:
>
> > +__cpuinit void ptrace_bts_init_intel(struct cpuinfo_x86 *c)
> > +{
> > + switch (c->x86) {
> > + case 0x6:
> > + switch (c->x86_model) {
>
On Tue, 20 Nov 2007, Metzger, Markus T wrote:
> +__cpuinit void ptrace_bts_init_intel(struct cpuinfo_x86 *c)
> +{
> + switch (c->x86) {
> + case 0x6:
> + switch (c->x86_model) {
> +#ifdef __i386__
> + case 0xD:
> + case 0xE: /* Pentium M */
> +
On Mon, 19 Nov 2007, Ingo Molnar wrote:
>
> * Eric Dumazet <[EMAIL PROTECTED]> wrote:
>
> > I do see a problem, because some readers will take your example as a
> > reference, as it will probably sit in a page that
> > google^Wsearch_engines will bring at the top of search results for
> > next ten years
On Fri, 16 Nov 2007, Ulrich Drepper wrote:
> dean gaudet wrote:
> > honestly i think there should be a per-task flag which indicates whether
> > fds are by default F_CLOEXEC or not. my reason: third party libraries.
>
> Only somebody who thinks exclusively about applications as opposed to
> runtimes
you know... i understand the need for FD_CLOEXEC -- in fact i tried
petitioning for CLOEXEC options to all the fd creating syscalls something
like 7 years ago when i was banging my head against the wall trying to
figure out how to thread apache... but even still i'm not convinced that
On Fri, 16 Nov 2007, Andi Kleen wrote:
> I didn't see a clear list.
- cross platform extensible API for configuring perf counters
- support for multiplexed counters
- support for virtualized 64-bit counters
- support for PC and call graph sampling at specific intervals
- support for reading
On Thu, 15 Nov 2007, Paul Mackerras wrote:
> dean gaudet writes:
>
> > actually multiplexing is the main feature i am in need of. there are an
> > insufficient number of counters (even on k8 with 4 counters) to do
> > complete stall accounting or to get a general overview of L1d/L1i/L2
> > cache hit
On Wed, 14 Nov 2007, Andi Kleen wrote:
> Later a syscall might be needed with event multiplexing, but that seems
> more like a far away non essential feature.
actually multiplexing is the main feature i am in need of. there are an
insufficient number of counters (even on k8 with 4 counters) to
fwiw i also brought the TCP_DEFER_ACCEPT problems up the end of last year:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg28916.html
it's possible the final message in that thread is how we should define the
behaviour, i haven't tried the TCP_SYNCNT idea though.
-dean
On Sun, 21 Oct 2007, Jeremy Fitzhardinge wrote:
> dean gaudet wrote:
> > On Mon, 15 Oct 2007, Nick Piggin wrote:
> >
> >
> >> Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
> >> because it generally has to invalidate
On Mon, 15 Oct 2007, Nick Piggin wrote:
> Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
> because it generally has to invalidate TLBs on all CPUs.
why is that? ignoring 32-bit archs we have heaps of address space
available... couldn't the kernel just burn address space
On Sat, 8 Sep 2007, Petr Vandrovec wrote:
> dean gaudet wrote:
> > On Sun, 9 Sep 2007, Nick Piggin wrote:
> >
> > > I've also heard that string operations do not follow the normal ordering,
> > > but
> > > that's just with respect to individual
On Sun, 9 Sep 2007, Nick Piggin wrote:
> I've also heard that string operations do not follow the normal ordering, but
> that's just with respect to individual loads/stores in the one operation, I
> hope? And they will still follow ordering rules WRT surrounding loads and
> stores?
see section 7.2.3
it's so very unfortunate the PCI standard has no feature bit to indicate
the presence of ECS.
FWIW in my testing on a range of machines spanning 7 or 8 years i could
read config space reg 256... and get 0xffffffff when the device didn't
support ECS, and get valid data when the device did
On Sun, 12 Aug 2007, Linus Torvalds wrote:
> On Sun, 12 Aug 2007, Dave Jones wrote:
> >
> > This does make me wonder, why these weren't caught in -mm ?
>
> I'm worried that -mm isn't getting a lot of exposure these days. People do
> run it, but I wonder how many..
andrew caught it in -mm and reverted
http://sandpile.org/
On Wed, 18 Jul 2007, Rene Herman wrote:
> Good day.
>
> Would anyone happen to have a list of TLB sizes for some selected x86{,-64}
> CPUs? I know it goes from a few entries on a 386 to a lot on Opteron but I
> have a real hard time finding specific data.
>
> Rene.
> -
>
On Thu, 19 Jul 2007, Bill Irwin wrote:
> On Thu, Jul 19, 2007 at 10:07:59AM -0700, Nishanth Aravamudan wrote:
> > But I do think a second reason to do this is to make hugetlbfs behave
> > like a normal fs -- that is read(), write(), etc. work on files in the
> > mountpoint. But that is simply my
On Wed, 18 Jul 2007, Pasi Kärkkäinen wrote:
> What brand/model your sata_mv controller is? Would be nice to know to be
> able to get a "known-to-work" one..
http://supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
-dean
On Thu, 12 Jul 2007, Jeff Garzik wrote:
> dean gaudet wrote:
> > On Thu, 12 Jul 2007, Jeff Garzik wrote:
> >
> > > dean gaudet wrote:
> > > > oh very nice... no warnings on boot, and no warnings while i "dd
> > > > if=/dev/sdX
On Thu, 12 Jul 2007, Jeff Garzik wrote:
> dean gaudet wrote:
> > oh very nice... no warnings on boot, and no warnings while i "dd if=/dev/sdX
> > of=/dev/null" and i'm seeing 74MB/s+ from each disk on this simple read
> > test.
> >
> > for lack of a better test i started an untar/diff stress test
On Wed, 11 Jul 2007, Jeff Garzik wrote:
> As before, this patch is against 2.6.22 with no other patches needed nor
> applied.
>
> In this revision, interrupt handling was improved quite a bit,
> particularly for EDMA. The WARNING in mv_get_crpb_status() goes away,
> because that routine went away.
On Mon, 9 Jul 2007, Jeff Garzik wrote:
>
> This is the latest update of the sata_mv conversion to new EH. I'm
> looking for testers, of two configurations:
>
> 2.6.22 + patch #1 (baseline)
> 2.6.22 + patch #1 + this patch (sata_mv new EH)
>
> This patch contains a
On Sun, 17 Jun 2007, Wakko Warner wrote:
> What benefit would I gain by using an external journel and how big would it
> need to be?
i don't know how big the journal needs to be... i'm limited by xfs'
maximum journal size of 128MiB.
i don't have much benchmark data -- but here are some rough
On Sun, 17 Jun 2007, Wakko Warner wrote:
> dean gaudet wrote:
> > On Sat, 16 Jun 2007, Wakko Warner wrote:
> >
> > > When I've had an unclean shutdown on one of my systems (10x 50gb raid5)
> > > it's
> > > always slowed the system down when booting up.
On Sat, 16 Jun 2007, Wakko Warner wrote:
> When I've had an unclean shutdown on one of my systems (10x 50gb raid5) it's
> always slowed the system down when booting up. Quite significantly I must
> say. I wait until I can login and change the rebuild max speed to slow it
> down while I'm using it.
On Sat, 16 Jun 2007, David Greaves wrote:
> Neil Brown wrote:
> > On Friday June 15, [EMAIL PROTECTED] wrote:
> >
> > > As I understand the way
> > > raid works, when you write a block to the array, it will have to read all
> > > the other blocks in the
On Mon, 11 Jun 2007, Adam Litke wrote:
> Here's another breakage as a result of shared memory stacked files :(
>
> The NUMA policy for a VMA is determined by checking the following (in the
> order
> given):
>
> 1) vma->vm_ops->get_policy() (if defined)
> 2) vma->vm_policy (if defined)
> 3) task->mempolicy
On Sat, 9 Jun 2007, Linus Torvalds wrote:
> IOW, the most common case for libraries is not that they get invoced to do
> one thing, but that they get loaded and then used over and over and over
> again, and the _reason_ for wanting to have a file descriptor open may
> well be that the library
On Tue, 15 May 2007, William Lee Irwin III wrote:
> On Tue, May 15, 2007 at 10:41:06PM -0700, dean gaudet wrote:
> > prior to 2.6.21 i could "numactl --interleave=all" and use SHM_HUGETLB and
> > the interleave policy would be respected. as of 2.6.21 it doesn't seem to
> > respect the policy
nice. i proposed something like this 8 or so years ago... the problem is
that you've also got to deal with socket(2), socketpair(2), accept(2),
pipe(2), dup(2), dup2(2), fcntl(F_DUPFD)... everything which creates new
fds.
really what is desired is fork/clone with selective duping of fds.
ugh... do not send email before breakfast. do not send email before
breakfast. nevermind :)
-dean
On Tue, 5 Jun 2007, dean gaudet wrote:
> the HPET specification allows for HPETs with *much* lower resolution than
> 50us. in fact Fmin is 10Hz iirc. (sorry to jump in so late, b
the HPET specification allows for HPETs with *much* lower resolution than
50us. in fact Fmin is 10Hz iirc. (sorry to jump in so late, but i'm
about a month behind on the list.)
-dean
On Mon, 21 May 2007, Chris Wright wrote:
> -stable review patch. If anyone has any objections, please let
On Fri, 25 May 2007, Jeff Garzik wrote:
> Already uncovered and fixed a few bugs in v3.
>
> Here's v4 of the sata_mv new-EH patch.
you asked for test results with 2.6.21.3 ... that seems to boot fine,
and i've tested reading from the disks only and it seems to be working
fine. ditto for