[zfs-discuss] XFS_IOC_FSGETXATTR & XFS_IOC_RESVSP64-like options in ZFS?

2007-10-12 Thread Manoj Nayak
affected when writing to preallocated space, since extra filesystem transactions are required to convert extent flags on the range of the file written. Thanks Manoj Nayak
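ZFS has no direct equivalent of the XFS preallocation ioctls, but a dataset-level space reservation gives a similar guarantee. A minimal sketch, assuming a pool named tank (dataset name hypothetical):

    # Guarantee space ahead of time at the dataset level (the closest ZFS
    # analog to an XFS_IOC_RESVSP64-style reservation):
    zfs create tank/prealloc
    zfs set reservation=10G tank/prealloc

Because ZFS is copy-on-write, every overwrite allocates fresh blocks anyway, so there is no unwritten-extent conversion step to pay for on first write, but also no way to pin file blocks in place.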

[zfs-discuss] internal error: Bad file number

2007-11-14 Thread Manoj Nayak
error: Bad file number
Abort - core dumped
#
Thanks Manoj Nayak

#!/bin/ksh
# This script generates a Solaris ramdisk image for worker nodes
PKGADD=/usr/sbin/pkgadd
PKGLOG=/tmp/packages.log
PKGADMIN=/tmp/pkgadmin
ROOTDIR=/tmp/miniroot
OPTDIR=$ROOTDIR/opt
HOMEDIR=$ROOTDIR/home/kealia
USRDIR
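"internal error: Bad file number" looks like libzfs aborting on an unexpected EBADF, which usually points at the /dev/zfs pseudo-device being unusable inside the image. A quick sanity check, assuming the miniroot is staged under $ROOTDIR as in the script:

    # zfs(1M) talks to the kernel through the /dev/zfs pseudo-device:
    ls -lL $ROOTDIR/dev/zfs

    # and the zfs driver has to be registered inside the image:
    grep zfs $ROOTDIR/etc/name_to_major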

Re: [zfs-discuss] internal error: Bad file number

2007-11-14 Thread Manoj Nayak
Hi, I am using s10u3 on an x64 AMD Opteron Thumper.

Thanks Manoj Nayak

Manoj Nayak wrote:
> Hi,
>
> I am getting the following error message when I run any zfs command. I have
> attached the script I use to create the ramdisk image for the Thumper.
>
> # zfs volinit
> internal error: Ba

[zfs-discuss] How to destroy a faulted pool

2007-11-16 Thread Manoj Nayak
How can I destroy the following pool?

  pool: mstor0
    id: 5853485601755236913
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        mstor0      UNAVAIL
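For reference, zpool destroy only operates on imported pools, and a pool in this state usually refuses to import. A hedged sketch of the usual escalation (the device name c1t0d0s0 is hypothetical, and overwriting labels is destructive):

    # Try a forced import first; if it succeeds, the pool can be destroyed:
    zpool import -f mstor0 && zpool destroy -f mstor0

    # If the import itself fails, zeroing the ZFS labels at the start of
    # each former member device removes the pool record (data is lost):
    dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=512 count=1024

ZFS keeps two more label copies at the end of each device, so wiping those too (or reformatting the disk) is the thorough option.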

[zfs-discuss] ZFS recordsize

2008-01-18 Thread Manoj Nayak
512 to 128k Thanks Manoj Nayak
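For anyone searching the archive: recordsize is a per-dataset property that accepts powers of two from 512 bytes up to 128k. A minimal sketch (dataset name hypothetical):

    zfs set recordsize=8k tank/fs      # powers of two, 512 .. 128k
    zfs get recordsize tank/fs
    # Only files written after the change use the new size; existing
    # blocks keep the record size they were written with.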

Re: [zfs-discuss] ZFS recordsize

2008-01-18 Thread Manoj Nayak
0 disk_io sd13 /devices/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a R87552 49463 0

Thanks Manoj Nayak

> Do check out:
> http://blogs.sun.com/roch/entry/128k_suffice
>
> -r
>
> Manoj Nayak write
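To reproduce numbers like the sd13 line above without a custom script, the stock DTrace io provider gives per-device I/O size distributions. A minimal sketch:

    # Distribution of physical I/O sizes per device, in bytes:
    dtrace -n 'io:::start
    {
        @[args[1]->dev_statname] = quantize(args[0]->b_bcount);
    }'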

Re: [zfs-discuss] ZFS recordsize

2008-01-18 Thread Manoj Nayak
Roch - PAE wrote:
> Manoj Nayak writes:
> > Roch - PAE wrote:
> > > Why do you want greater than 128K records?
> >
> > A single-parity RAID-Z pool on Thumper is created & it consists of four
> > disks. Solaris 10 update 4 runs on Thumper. Then zf
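For context (my arithmetic, not part of the thread): on a four-disk single-parity RAID-Z, each record is striped across three data disks plus one parity disk, so the per-disk transfer is well under the record size:

    128K record / 3 data disks ≈ 42.7K per disk per record
    3 * 128K = 384K record would be needed for 128K per-disk I/Os

That is the usual motivation for wanting records larger than 128K on wide RAID-Z stripes.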

[zfs-discuss] ZFS vdev_cache

2008-01-22 Thread Manoj Nayak
Hi All,

Is any DTrace script available to figure out the vdev_cache (or software track buffer) reads in kilobytes? The document says the default size of the read is 128k; however, the vdev_cache source code implementation says the default size is 64k.

Thanks Manoj Nayak
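A hedged sketch of such a script using the fbt provider; the function name vdev_cache_read and the zio_t layout are taken from the OpenSolaris sources of that era, so they may differ on other builds:

    # Sum and size distribution of reads passing through vdev_cache_read():
    dtrace -qn '
    fbt::vdev_cache_read:entry
    {
        @kb = sum(((zio_t *)arg0)->io_size / 1024);
        @dist = quantize(((zio_t *)arg0)->io_size);
    }
    tick-10s
    {
        printa("KB through vdev_cache_read: %@d\n", @kb);
        printa(@dist);
        clear(@kb);
        clear(@dist);
    }'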

[zfs-discuss] ZFS vq_max_pending value?

2008-01-22 Thread Manoj Nayak
Thanks Manoj Nayak

Re: [zfs-discuss] ZFS vq_max_pending value?

2008-01-22 Thread manoj nayak
> Manoj Nayak wrote:
>> Hi All.
>>
>> The ZFS documentation says ZFS schedules its I/O in such a way that it
>> manages to saturate a single disk's bandwidth using enough concurrent
>> 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending.
>> The default value
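To inspect or tune this on a live system, mdb works; the global name zfs_vdev_max_pending is an assumption based on the ZFS module of that era, so verify it exists first:

    # Print the current value (decimal):
    echo 'zfs_vdev_max_pending/D' | mdb -k

    # Lower it to 10 on the running kernel:
    echo 'zfs_vdev_max_pending/W 0t10' | mdb -kw

    # Persistent form, in /etc/system:
    #   set zfs:zfs_vdev_max_pending = 10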

Re: [zfs-discuss] ZFS vdev_cache

2008-01-22 Thread manoj nayak
> > Manoj Nayak writes:
> > Hi All,
> >
> > Is any DTrace script available to figure out the vdev_cache (or
> > software track buffer) reads in kilobytes?
> >
> > The document says the default size of the read is 128k; however,
> > vdev_cache

Re: [zfs-discuss] ZFS vq_max_pending value?

2008-01-22 Thread manoj nayak
- Original Message -
From: "Richard Elling" <[EMAIL PROTECTED]>
To: "manoj nayak" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday, January 23, 2008 7:20 AM
Subject: Re: [zfs-discuss] ZFS vq_max_pending value?

> manoj nayak wrote:
>>
>>> Mano

[zfs-discuss] ZFS volume Block Size & Record Size

2008-01-22 Thread Manoj Nayak
Hi All, how is the ZFS volblocksize related to the ZFS recordsize? Thanks Manoj Nayak
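In short, recordsize applies to filesystems (an upper bound on file block size, changeable at any time), while volblocksize is the fixed block size of a zvol, chosen at creation. A sketch with hypothetical dataset names:

    # A zvol's block size is fixed when the volume is created:
    zfs create -V 10g -b 8k tank/vol1
    zfs get volblocksize tank/vol1

    # A filesystem's recordsize can be changed later (affects new writes only):
    zfs set recordsize=8k tank/fs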

Re: [zfs-discuss] ZFS vq_max_pending value?

2008-01-22 Thread Manoj Nayak
nd 5000 MB read I/O & iopattern says around 55% of the I/Os are random in nature. I don't know how much prefetching through the track cache is going to help here. Probably I can try disabling the vdev_cache through "set 'zfs_vdev_cache_max' 1". Thanks Manoj Nayak
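For reference, the /etc/system form of that tunable (reboot required); reads smaller than zfs_vdev_cache_max are the ones the vdev cache inflates, so a threshold of 1 byte effectively disables the behavior:

    * Disable vdev_cache read inflation:
    set zfs:zfs_vdev_cache_max = 1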

Re: [zfs-discuss] ZFS vq_max_pending value?

2008-01-23 Thread Manoj Nayak
Roch - PAE wrote:
> Manoj Nayak writes:
> > Hi All.
> >
> > The ZFS documentation says ZFS schedules its I/O in such a way that it
> > manages to saturate a single disk's bandwidth using enough concurrent
> > 128K I/Os. The number of concurrent I/Os is decided by vq_m
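Whether the pending-I/O limit is actually being reached can be read straight from iostat: under a saturating load, the actv column for the busy device sits near the vq_max_pending value. A sketch:

    # actv = I/Os currently being serviced by the device (active queue),
    # wait = I/Os still queued above it; -z suppresses idle devices:
    iostat -xnz 1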