Re: Possible data corruption sata_sil24?

2007-07-19 Thread Tejun Heo
David Shaw wrote:
>> I'm not sure whether this is a problem of sata_sil24 or the dm layer.
>> Cc'ing linux-raid for help.  How much memory do you have?  One big
>> difference between ata_piix and sata_sil24 is that sil24 can handle
>> 64-bit DMA.  Maybe dma mapping or something interacts weirdly with dm
>> there?
>
> The machine has 640 megs of RAM.  FWIW, I tried this with 512 megs of
> RAM with the same results.  Running Memtest86+ shows the memory is
> good.

Hmmm... I see, so it's not a DMA-to-the-wrong-address problem then.
Let's see whether the dm people can help us out.

Thanks.

-- 
tejun


Re: Possible data corruption sata_sil24?

2007-07-18 Thread Tejun Heo
David Shaw wrote:
>>> It fails whether I use a raw /dev/sdd or partition it into one large
>>> /dev/sdd1, or partition into multiple partitions.  sata_sil24 seems to
>>> work by itself, as does dm, but as soon as I mix sata_sil24+dm, I get
>>> corruption.
>>
>> Hmm... Can you reproduce the corruption by accessing both devices
>> simultaneously without using dm?  Considering ich5 does fine, it looks
>> like a hardware and/or driver problem and I really wanna rule out dm.
>
> I think I wasn't clear enough before.  The corruption happens when I
> use dm to create two dm mappings that both reside on the same real
> device.  Using two different devices, or two different partitions on
> the same physical device, works properly.  ich5 does fine with these 3
> tests, but sata_sil24 fails:
>
>  * /dev/sdd, create 2 dm linear mappings on it, mke2fs and use those
>    dm devices == corruption
>
>  * Partition /dev/sdd into /dev/sdd1 and /dev/sdd2, mke2fs and use
>    those partitions == no corruption
>
>  * Partition /dev/sdd into /dev/sdd1 and /dev/sdd2, create 2 dm linear
>    mappings on /dev/sdd1, mke2fs and use those dm devices ==
>    corruption
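
Just to make sure I've got the failing case right, it boils down to
something like this (the sizes below are only illustrative):

  # two dm linear targets stacked on the same whole disk
  echo "0 2097152 linear /dev/sdd 0"       | dmsetup create test0
  echo "0 2097152 linear /dev/sdd 2097152" | dmsetup create test1
  mke2fs /dev/mapper/test0
  mke2fs /dev/mapper/test1
  # using both filesystems then shows the corruption you describe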

I'm not sure whether this is a problem of sata_sil24 or the dm layer.
Cc'ing linux-raid for help.  How much memory do you have?  One big
difference between ata_piix and sata_sil24 is that sil24 can handle
64-bit DMA.  Maybe dma mapping or something interacts weirdly with dm
there?

Thanks.

-- 
tejun


Re: Possible data corruption sata_sil24?

2007-07-18 Thread David Shaw
On Wed, Jul 18, 2007 at 05:53:39PM +0900, Tejun Heo wrote:
> David Shaw wrote:
>>>> It fails whether I use a raw /dev/sdd or partition it into one large
>>>> /dev/sdd1, or partition into multiple partitions.  sata_sil24 seems to
>>>> work by itself, as does dm, but as soon as I mix sata_sil24+dm, I get
>>>> corruption.
>>>
>>> Hmm... Can you reproduce the corruption by accessing both devices
>>> simultaneously without using dm?  Considering ich5 does fine, it looks
>>> like a hardware and/or driver problem and I really wanna rule out dm.
>>
>> I think I wasn't clear enough before.  The corruption happens when I
>> use dm to create two dm mappings that both reside on the same real
>> device.  Using two different devices, or two different partitions on
>> the same physical device, works properly.  ich5 does fine with these 3
>> tests, but sata_sil24 fails:
>>
>>  * /dev/sdd, create 2 dm linear mappings on it, mke2fs and use those
>>    dm devices == corruption
>>
>>  * Partition /dev/sdd into /dev/sdd1 and /dev/sdd2, mke2fs and use
>>    those partitions == no corruption
>>
>>  * Partition /dev/sdd into /dev/sdd1 and /dev/sdd2, create 2 dm linear
>>    mappings on /dev/sdd1, mke2fs and use those dm devices ==
>>    corruption
>
> I'm not sure whether this is a problem of sata_sil24 or the dm layer.
> Cc'ing linux-raid for help.  How much memory do you have?  One big
> difference between ata_piix and sata_sil24 is that sil24 can handle
> 64-bit DMA.  Maybe dma mapping or something interacts weirdly with dm
> there?

The machine has 640 megs of RAM.  FWIW, I tried this with 512 megs of
RAM with the same results.  Running Memtest86+ shows the memory is
good.

David