Re: [zfs-discuss] zfs data corruption

2008-04-24 Thread johansen
> I'm just interested in understanding how zfs determined there was data
> corruption when I have checksums disabled and there were no
> non-retryable read errors reported in the messages file.

If the metadata is corrupt, how is ZFS going to find the data blocks on
disk?

> >  I don't believe it was a real disk read error because of the
> >  absence of evidence in /var/adm/messages.

It's not safe to jump to this conclusion.  Disk drivers that support FMA
won't log error messages to /var/adm/messages.  As more support for I/O
FMA shows up, you won't see random spew in the messages file any more.
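
If you want to check whether the driver reported the error through FMA instead,
fmdump is the place to look.  A rough sketch (exact output varies by platform
and driver):

  # fmdump -e      (one-line summary of each error report in the telemetry log)
  # fmdump -eV     (the same ereports in full detail, including the device)
  # fmadm faulty   (any faults the diagnosis engines have actually declared)

If the drive really did return an unrecoverable read, you'd expect an ereport
to show up there even though nothing was logged to /var/adm/messages.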

-j


[zfs-discuss] ZFS, CIFS, slow write speed

2008-04-24 Thread Rick
Recently I've installed SXCE nv86 for the first time in hopes of getting rid of
my Linux file server and using Solaris and ZFS for my new file server. After
setting up a simple ZFS mirror of 2 disks, I enabled smb and set about moving
over all of my data from my old storage server. What I noticed was the dismal
performance while writing. I have tried to find information regarding
performance and what to expect, but I've yet to come across anything with any
real substance that can help me out. I'm sure there is some guide on tuning for
CIFS, but I've not been able to locate it. The write speeds for NFS described
in this post
http://opensolaris.org/jive/thread.jspa?threadID=55764&tstart=0 made me want to
look into NFS. However, after disabling sharing, turning off smb, enabling NFS,
and sharing the pool again, I see the same if not worse write performance (MS
Windows SFU may be partially to blame, so I've gone back to learning how to fix
smb instead of learning and tweaking NFS).

What I'm doing is mounting the smb share with WinXP and pulling data from the
ZFS mirror pool at 2.3MiB/s across the network. Writing to the same share from
the WinXP host, I get a fairly consistent 342KiB/s.

Copying data locally from an IDE drive to the zpool mirror (2 SATAII drives), I
get much faster performance, as I do when copying data from one zpool mirror (1
SATA1 drive and 1 SATAII drive) to another zpool mirror (2 SATAII drives) on the
same host. I'm not sure of the exact numbers, but it takes *substantially* less
time to transfer.

The research I've done thus far indicates that I've got to use a file that's
double the size of my RAM to ensure that caching doesn't skew the results, so
these tests are all done with an 8GB file.
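
Roughly the sort of local test I'm running, in case it matters (the pool and
path names here are just placeholders):

  # mkfile 8g /ide/test.dat                 (8GB test file on the old IDE disk)
  # ptime cp /ide/test.dat /tank/test.dat   (time the local write to the mirror)
  # zpool iostat tank 5                     (watch the pool's throughput while the copy runs)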

I would imagine that write speeds and read speeds across the network should be 
much closer. At this point, I'm assuming that I'm doing something wrong here. 
Anyone want to let me know what I'm missing?

rick
 
 


Re: [zfs-discuss] Thumper / X4500 marvell driver issues

2008-04-24 Thread Xavier Canehan
Carson,
are you sure about the patch number? I have been unable to get a French support 
guy to find it.
You are mentioning an "official" IDR patch for Solaris 10 x86, update 4, 
aren't you?

Thanks.

Xavier.
 
 


[zfs-discuss] UFS or ZFS for MYSQL and APACHE web server data and Database

2008-04-24 Thread Kory Wheatley
I have two 200GB iSCSI LUNs set up on each X4600 M2 running Solaris 10 x86 
update 4.  The data on these iSCSI disks will be Apache on one server and MySQL 
on the other.  My question is: should I set up these disks as a ZFS pool or as 
a UFS filesystem on a soft partition (eventually the LUNs will be expanded)?  
Which would offer the best performance and be easier to expand later on?  Which 
file system has fewer problems with this kind of setup?  This will be on 
production systems, so I need the best results.

The disks are connected to an EqualLogic box that's set up as a RAID 50.
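
For context, the ZFS side of what I have in mind is roughly this (device and
pool names below are just placeholders):

  # zpool create mysqlpool c4t<iscsi-lun-id>d0   (give ZFS the whole LUN)
  # zfs create mysqlpool/data
  # zfs set recordsize=16k mysqlpool/data        (to match InnoDB's 16KB pages, if we end up on InnoDB)

and the same sort of thing on the Apache box, leaving the default 128KB
recordsize for the web content.  I gather that when the LUN is grown later the
pool has to be exported and re-imported before ZFS will use the new space, but
I'd welcome corrections on that.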
 
 


[zfs-discuss] panic on zfs scrub on builds 79 & 86

2008-04-24 Thread Robert Dickel
This just started happening to me. It's a striped, non-mirrored pool (I know, I 
know). A zfs scrub causes a panic in under a minute. I can also trigger a panic 
by doing tars, etc. It's an x86 64-bit kernel ... any ideas? Just to help rule 
out some things, I changed the motherboard, memory and CPU and it still happens 
... I also think it happens on a 32-bit kernel.

genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ff00100917f0 addr=23b occurred in module "unix" due to a NULL pointer dereference
unix: [ID 10 kern.notice] 
unix: [ID 839527 kern.notice] sched: 
unix: [ID 753105 kern.notice] #pf Page fault
unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x23b
unix: [ID 243837 kern.notice] pid=0, pc=0xfb840dbb, sp=0xff00100918e8, eflags=0x10246
brighton unix: [ID 211416 kern.notice] cr0: 8005003b cr4: 6f8
unix: [ID 624947 kern.notice] cr2: 23b
unix: [ID 625075 kern.notice] cr3: 300
unix: [ID 625715 kern.notice] cr8: c
unix: [ID 10 kern.notice] 
unix: [ID 592667 kern.notice]  rdi: 23b  rsi: 0  rdx: ff0010091c80
unix: [ID 592667 kern.notice]  rcx: 2  r8: 1ab  r9: 370503010300
unix: [ID 592667 kern.notice]  rax: 0  rbx: 0  rbp: ff0010091940
unix: [ID 592667 kern.notice]  r10: 484e8  r11:  r12: 23b
unix: [ID 592667 kern.notice]  r13: 3  r14: 0  r15: 1
unix: [ID 592667 kern.notice]  fsb: fd7fff290200  gsb: ff02d0e69000  ds: 4b
unix: [ID 592667 kern.notice]   es: 4b  fs: 0  gs: 0
unix: [ID 592667 kern.notice]  trp: e  err: 2  rip: fb840dbb
unix: [ID 592667 kern.notice]   cs: 30  rfl: 10246  rsp: ff00100918e8
unix: [ID 266532 kern.notice]   ss:   38
unix: [ID 10 kern.notice] 
genunix: [ID 655072 kern.notice] ff00100916d0 unix:die+c8 ()
genunix: [ID 655072 kern.notice] ff00100917e0 unix:trap+13b1 ()
genunix: [ID 655072 kern.notice] ff00100917f0 unix:cmntrap+e9 ()
genunix: [ID 655072 kern.notice] ff0010091940 unix:mutex_enter+b ()
genunix: [ID 655072 kern.notice] ff0010091960 zfs:zio_buf_alloc+25 ()
genunix: [ID 655072 kern.notice] ff00100919a0 zfs:zio_read_init+49 ()
genunix: [ID 655072 kern.notice] ff00100919d0 zfs:zio_execute+7f ()
genunix: [ID 655072 kern.notice] ff0010091a10 zfs:zio_wait+2e ()
genunix: [ID 655072 kern.notice] ff0010091a60 zfs:traverse_read+19f ()
genunix: [ID 655072 kern.notice] ff0010091b00 zfs:find_block+15b ()
genunix: [ID 655072 kern.notice] ff0010091b90 zfs:traverse_segment+233 ()
genunix: [ID 655072 kern.notice] ff0010091be0 zfs:traverse_more+6f ()
genunix: [ID 655072 kern.notice] ff0010091c60 zfs:spa_scrub_thread+19d ()
genunix: [ID 655072 kern.notice] ff0010091c70 unix:thread_start+8 ()
unix: [ID 10 kern.notice] 
genunix: [ID 672855 kern.notice] syncing file systems...
genunix: [ID 904073 kern.notice]  done
genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c2t0d0s1, offset 1289748480, content: kernel
ahci: [ID 722475 kern.info] NOTICE: ahci_tran_reset_dport: port 0 reset port
genunix: [ID 409368 kern.notice] 100% done: 143397 pages dumped, compression ratio 3.74, 
Apr 23 20:23:56 brighton genunix: [ID 851671 kern.notice] dump succeeded
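
The dump saved, so I can dig further if that's useful.  I assume the place to
start is mdb against the crash dump, along these lines (the dump directory and
instance number are whatever savecore chose here):

  # cd /var/crash/brighton
  # mdb unix.0 vmcore.0
  > ::status       (panic string and dump summary)
  > ::msgbuf       (console messages leading up to the panic)
  > $C             (stack backtrace with frame pointers and arguments)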
 
 


Re: [zfs-discuss] ZFS for write-only media?

2008-04-24 Thread Joerg Schilling
"Richard L. Hamilton" <[EMAIL PROTECTED]> wrote:

> > I know two write-only device types:
> > 
> > WOM   Write-only media
> > WORN  Write-once read never (this one is often used for backups ;-)
> > 
> > Jörg
>
> Save $$ (or €€) - use /dev/null instead.

See: 

ftp://ftp.berlios.de/pub/star/testscripts/zwicky/

The Zwicky test claims that the most popular backup program is called
"no backup" ;-)

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS for write-only media?

2008-04-24 Thread Frank . Hofmann
On Thu, 24 Apr 2008, Daniel Rock wrote:

> Joerg Schilling schrieb:
>> WOM  Write-only media
>
> http://www.national.com/rap/files/datasheet.pdf

I love this part of the specification:

Cooling

The 25120 is easily cooled by employment of a six-foot fan,
1/2" from the package. If the device fails, you have exceeded
the ratings. In such cases, more air is recommended.

There was an article in the German c't magazine exactly 13 years ago this month 
that benchmarked various operating systems' null devices. They tested an 
unnamed "hardware null device prototype"; now I finally know what that one 
actually was!

:-)

FrankH.


Re: [zfs-discuss] ZFS for write-only media?

2008-04-24 Thread Daniel Rock
Joerg Schilling schrieb:
> WOM   Write-only media

http://www.national.com/rap/files/datasheet.pdf


Daniel


Re: [zfs-discuss] zfs data corruption

2008-04-24 Thread Victor Engle
Just to clarify this post. This isn't data I care about recovering.
I'm just interested in understanding how zfs determined there was data
corruption when I have checksums disabled and there were no
non-retryable read errors reported in the messages file.
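
If it does turn out to be metadata, I assume the kind of zdb check Nathan
alludes to below would be something like this (a sketch only; I understand zdb
against a live, imported pool can be slow and isn't always reliable):

  # zpool status -v zpool1    (metadata damage typically shows up in the
                               permanent error list as <metadata>:<0x...>
                               entries rather than file names)
  # zdb -c zpool1             (traverses the pool and verifies the checksums
                               of the metadata blocks)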

On Wed, Apr 23, 2008 at 9:52 PM, Victor Engle <[EMAIL PROTECTED]> wrote:
> Thanks! That would explain things. I don't believe it was a real disk
>  read error because of the absence of evidence in /var/adm/messages.
>
>  I'll review the man page and documentation to confirm that metadata is
>  checksummed.
>
>  Regards,
>  Vic
>
>
>
>
>  On Wed, Apr 23, 2008 at 6:30 PM, Nathan Kroenert
>  <[EMAIL PROTECTED]> wrote:
>  > I'm just taking a stab here, so could be completely wrong, but IIRC, even if
>  > you disable checksum, it still checksums the metadata...
>  >
>  >  So, it could be metadata checksum errors.
>  >
>  >  Others on the list might have some funky zdb thingies you could use to see
>  > what it actually is...
>  >
>  >  Note: typed pre caffeine... :)
>  >
>  >  Nathan
>  >
>  >
>  >
>  >  Vic Engle wrote:
>  >
>  > > I'm hoping someone can help me understand a zfs data corruption symptom.
>  > > We have a zpool with checksum turned off. Zpool status shows that data
>  > > corruption occurred. The application using the pool at the time reported a
>  > > "read" error and zpool status (see below) shows 2 read errors on a device.
>  > > The thing that is confusing to me is how ZFS determines that data corruption
>  > > exists when reading data from a pool with checksum turned off.
>  > >
>  > > Also, I'm wondering about the persistent errors in the output below. Since
>  > > no specific file or directory is mentioned, does this indicate pool metadata
>  > > is corrupt?
>  > >
>  > > Thanks for any help interpreting the output...
>  > >
>  > >
>  > > # zpool status -xv
>  > >  pool: zpool1
>  > >  state: ONLINE
>  > > status: One or more devices has experienced an error resulting in data
>  > >corruption.  Applications may be affected.
>  > > action: Restore the file in question if possible.  Otherwise restore the
>  > >entire pool from backup.
>  > >   see: http://www.sun.com/msg/ZFS-8000-8A
>  > >  scrub: none requested
>  > > config:
>  > >
>  > >        NAME                                       STATE     READ WRITE CKSUM
>  > >        zpool1                                     ONLINE       2     0     0
>  > >          c4t60A9800043346859444A476B2D48446Fd0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D484352d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D484236d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D482D6Cd0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D483951d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D483836d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D48366Bd0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D483551d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D483435d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D48326Bd0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D483150d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D483035d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D47796Ad0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D477850d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D477734d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D47756Ad0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D47744Fd0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D477333d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D477169d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D47704Ed0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D476F33d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D476D68d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D476C4Ed0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D476B32d0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D476968d0    ONLINE       0     0     0
>  > >          c4t60A98000433468656834476B2D453974d0    ONLINE       0     0     0
>  > >          c4t60A98000433468656834476B2D454142d0    ONLINE       0     0     0
>  > >          c4t60A98000433468656834476B2D454255d0    ONLINE       0     0     0
>  > >          c4t60A98000433468656834476B2D45436Dd0    ONLINE       0     0     0
>  > >          c4t60A9800043346859444A476B2D487346d0    ONLINE       2     0     0
>  > >          c4t60A9800043346859444A476B2D487175d0    ONLINE

Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread andrew
What is the reasoning behind ZFS not enabling the write cache for the root 
pool? Is there a way of forcing ZFS to enable the write cache?
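
For what it's worth, I know the cache can be toggled by hand with format in
expert mode (a rough sketch from memory; the cache menu only shows up for
drivers that expose it):

  # format -e
  (select the boot disk from the list)
  format> cache
  cache> write_cache
  write_cache> display     (show the current setting)
  write_cache> enable      (turn the drive's volatile write cache on)

But I was hoping ZFS would manage this itself, the way it does when it owns a
whole data disk.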

Thanks

Andrew.
 
 


Re: [zfs-discuss] ZFS for write-only media?

2008-04-24 Thread Richard L. Hamilton
> "Dana H. Myers" <[EMAIL PROTECTED]> wrote:
> 
> > Bob Friesenhahn wrote:
> > > Are there any plans to support ZFS for write-only
> media such as 
> > > optical storage?  It seems that if mirroring or
> even zraid is used 
> > > that ZFS would be a good basis for long term
> archival storage.
> > I'm just going to assume that "write-only" here
> means "write-once,
> > read-many", since it's far too late for an April
> Fool's joke.
> 
> I know two write-only device types:
> 
> WOM   Write-only media
> WORN  Write-once read never (this one is often used
> for backups ;-)
> 
> Jörg

Save $$ (or €€) - use /dev/null instead.
 
 


Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread George Wilson
Just to clarify a bit, ZFS will not enable the write cache for the root 
pool. That said, there are disk drives which have the write cache 
enabled by default. That behavior remains unchanged.

- George

George Wilson wrote:
> Unfortunately not.
>
> Thanks,
> George
>
> Par Kansala wrote:
>   
>> Hi,
>>
>> Will the upcoming zfs boot capabilities also enable write cache on a boot
>> disk like it does on regular data disks (when whole disks are used)?
>>
>> //Par
>> -- 
>> -- 
>>  *Pär Känsälä*
>> OEM Engagement Architect
>> *Sun Microsystems*
>> Phone +46 8 631 1782 (x45782)
>> Mobile +46 70 261 1782
>> Fax +46 455 37 92 05
>> Email [EMAIL PROTECTED]


Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread George Wilson
Unfortunately not.

Thanks,
George

Par Kansala wrote:
> Hi,
>
> Will the upcoming zfs boot capabilities also enable write cache on a boot
> disk like it does on regular data disks (when whole disks are used)?
>
> //Par
> -- 
> -- 
>   *Pär Känsälä*
> OEM Engagement Architect
> *Sun Microsystems*
> Phone +46 8 631 1782 (x45782)
> Mobile +46 70 261 1782
> Fax +46 455 37 92 05
> Email [EMAIL PROTECTED]