affected when
writing to preallocated space, since extra filesystem transactions are
required to convert extent flags on the range of the file written.
Thanks
Manoj Nayak
error: Bad file number
Abort - core dumped
#
Thanks
Manoj Nayak
#!/bin/ksh
# This script generates the Solaris ramdisk image for worker nodes
PKGADD=/usr/sbin/pkgadd
PKGLOG=/tmp/packages.log
PKGADMIN=/tmp/pkgadmin
ROOTDIR=/tmp/miniroot
OPTDIR=$ROOTDIR/opt
HOMEDIR=$ROOTDIR/home/kealia
USRDIR
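The script is cut off above; the step that usually follows in this sort of
build script is populating the miniroot with pkgadd against the alternate
root. A rough sketch of that step, assuming a hypothetical package
directory $PKGDIR (the package names are illustrative only):

  # install each package non-interactively into the miniroot:
  # -R installs relative to $ROOTDIR, -a points at the admin file,
  # -d names the directory holding the packages ($PKGDIR is hypothetical)
  for pkg in SUNWcsr SUNWcsu
  do
      $PKGADD -n -R $ROOTDIR -a $PKGADMIN -d $PKGDIR $pkg >> $PKGLOG 2>&1
  done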
Hi,
I am using s10u3 on an x64 AMD Opteron Thumper.
Thanks
Manoj Nayak
Manoj Nayak wrote:
> Hi ,
>
> I am getting the following error message when I run any zfs command. I have
> attached the script I use to create the ramdisk image for the Thumper.
>
> # zfs volinit
> internal error: Bad file number
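"Bad file number" is EBADF, and when every zfs command fails this way inside
a hand-built miniroot it is worth ruling out a missing /dev/zfs node or an
unloaded zfs driver in the image before suspecting the pools themselves.
A quick check, assuming standard Solaris paths:

  ls -lL /dev/zfs        # is the ZFS pseudo-device node in the image?
  modinfo | grep zfs     # is the zfs kernel module loaded?
  devfsadm -i zfs        # recreate the device node for the zfs driver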
How can I destroy the following pool?
  pool: mstor0
    id: 5853485601755236913
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        mstor0      UNAVAIL
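A pool in that state cannot be imported, and zpool destroy only works on an
imported pool, so the way out is usually to force an import just to destroy
it, or failing that to overwrite the old labels on the member disks. A
sketch, with c0t0d0 and c0t1d0 standing in for the real mstor0 devices:

  # try a forced import purely so that the pool can then be destroyed
  zpool import -f mstor0 && zpool destroy mstor0

  # if the import still fails, creating a throwaway pool over the same
  # disks rewrites their labels and the old pool stops showing up
  zpool create -f scratch c0t0d0 c0t1d0    # device names are placeholders
  zpool destroy scratch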
512 to 128k
Thanks
Manoj Nayak
0
disk_io  sd13  /devices/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  R87552  49463  0
Thanks
Manoj Nayak
> Do Check out :
> http://blogs.sun.com/roch/entry/128k_suffice
>
> -r
>
>
> Manoj Nayak write
Roch - PAE wrote:
> Manoj Nayak writes:
> > Roch - PAE wrote:
> > > Why do you want greater than 128K records.
> > >
> > A single-parity RAID-Z pool on the Thumper is created and it consists of four
> > disks. Solaris 10 update 4 runs on the Thumper. Then zf
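For context, Roch's point (the blog entry linked earlier) is that 128k, the
largest recordsize this release accepts, is already enough to keep the
spindles busy; on a 4-disk single-parity raidz each 128k record is split
across the three data disks (roughly 43k apiece) plus one parity column.
A sketch for checking and setting it, with tank/fs as a placeholder dataset:

  zfs get recordsize tank/fs
  # 128k is the maximum accepted value in this release
  zfs set recordsize=128k tank/fs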
Hi All,
Is any dtrace script available to figure out the vdev_cache (or
software track buffer) reads in kilobytes?
The document says the default read size is 128k; however, the
vdev_cache source code implementation says the default size is 64k.
Thanks
Manoj Nayak
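Nothing ready-made that I know of, but a one-liner along these lines should
show the size distribution of reads passing through the vdev cache. This is
a sketch that assumes the fbt provider exposes vdev_cache_read() and that
its first argument is the zio being looked up:

  dtrace -n 'fbt::vdev_cache_read:entry
      { @sz["vdev_cache read bytes"] = quantize(args[0]->io_size); }'

  # the 64k in the source comes from the zfs_vdev_cache_bshift tunable
  # (1 << 16 bytes); it can be read back with mdb
  echo zfs_vdev_cache_bshift/D | mdb -k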
> Manoj Nayak wrote:
>> Hi All.
>>
>> The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
>> saturate a single disk's bandwidth using enough concurrent 128K I/Os.
>> The number of concurrent I/Os is decided by vq_max_pending. The default value
>
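(The default vq_max_pending in this era is 35. A hedged sketch for
inspecting or lowering it, assuming your build exposes the global
zfs_vdev_max_pending tunable; older builds may not have it:)

  echo zfs_vdev_max_pending/D | mdb -k        # read the current value
  echo zfs_vdev_max_pending/W 0t10 | mdb -kw  # set it to 10 on a live system
  # or persistently, in /etc/system:
  #   set zfs:zfs_vdev_max_pending = 10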
> Manoj Nayak writes:
> > Hi All,
> >
> > If any dtrace script is available to figure out the vdev_cache (or
> > software track buffer) reads in kiloBytes ?
> >
> > The document says the default read size is 128k; however, the
> > vdev_cache
- Original Message -
From: "Richard Elling" <[EMAIL PROTECTED]>
To: "manoj nayak" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday, January 23, 2008 7:20 AM
Subject: Re: [zfs-discuss] ZFS vq_max_pending value ?
> manoj nayak wrote:
>>
>>> Mano
Hi All,
How is the ZFS volblocksize related to the ZFS recordsize?
Thanks
Manoj Nayak
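In short: recordsize is the (tunable, up to 128k) block size used for files
in a filesystem dataset, while volblocksize is the block size of a zvol; it
defaults to 8k and can only be chosen when the volume is created. A sketch,
with tank as a placeholder pool:

  # filesystem: recordsize can be changed at any time (it affects new writes)
  zfs create tank/fs
  zfs set recordsize=64k tank/fs

  # zvol: volblocksize has to be picked at creation time and is then fixed
  zfs create -V 10g -o volblocksize=64k tank/vol
  zfs get volblocksize tank/vol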
nd 5000 MB read I/O, and iopattern says around 55%
of the I/O is random in nature.
I don't know how much prefetching through the track cache is going to help
here. Probably I can try disabling the vdev_cache
through "set 'zfs_vdev_cache_max' 1".
Thanks
Manoj Nayak
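For the record, that setting goes into /etc/system (reads larger than
zfs_vdev_cache_max bypass the vdev cache, so a value of 1 effectively
disables it), and it can also be changed on a running kernel with mdb.
A sketch:

  # persistent, takes effect at the next boot: add to /etc/system
  #   set zfs:zfs_vdev_cache_max = 1

  # or poke the running kernel
  echo zfs_vdev_cache_max/W 0t1 | mdb -kw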
Roch - PAE wrote:
> Manoj Nayak writes:
> > Hi All.
> >
> > The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
> > saturate a single disk's bandwidth using enough concurrent 128K I/Os.
> > The number of concurrent I/Os is decided by vq_m