Thank you, following your suggestion improves things - reading a ZFS
file from a RAID-0 pair now gives me 95 MB/s - about the same as from
/dev/dsk. What I find surprising is that reading from a RAID-1 2-drive
zpool gives me only 56 MB/s - I imagined it would be roughly like
reading from RAID-0. I c
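For context, a minimal sketch of the two layouts being compared
(device names are assumed, not from the original post):

  # striped ("RAID-0") pool of two disks
  zpool create tank c0d0 c1d0
  # mirrored ("RAID-1") pool of the same two disks
  zpool create tank mirror c0d0 c1d0
  # sequential read test against a large file on the pool
  dd if=/tank/bigfile of=/dev/null bs=128k

A 2-way mirror can in principle service a sequential read from both
sides at once, which is why throughput close to the stripe is a
reasonable expectation.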
> RAID-Z is a data/parity scheme like RAID-5, but it uses dynamic
> stripe width. Every block is its own RAID-Z stripe, regardless of
> blocksize. This means that every RAID-Z write is a full-stripe
> write. This, when combined with the copy-on-write transactional
> semantics of ZFS, completely eliminates the RAID write hole.
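A worked example (disk count assumed for illustration): on a 5-disk
RAID-Z, a 128 KB block is written as one dynamic-width stripe of four
32 KB data columns plus one 32 KB parity column, while a small 4 KB
block gets a correspondingly narrower stripe (one data column plus its
parity). Either way the whole stripe is written at once, so there is
never a read-modify-write of existing parity.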
Gurus;
I am exceedingly impressed by ZFS, although in my humble opinion Sun
is not doing enough evangelizing for it.
But that's beside the point.
I am writing to seek help in understanding the RAID-Z concept.
Jeff Bonwick's weblog states the following:
"RAID-Z is a data/parity
Xorg was acting *very* strangely, so in my efforts to try and get back
to a state where I could actually get work done, I did the unthinkable.
I rebooted my box. ;)
Not only did it not fix the problem, it made it worse!! Now it won't
even boot anymore!
What happens is the BIOS splash screen goe
> I have an opensolaris server running with a raidz zfs
> pool with almost 1TB of storage. This is intended
> to be a central fileserver via samba and ftp for all
> sorts of purposes. I also want to use it to back up my
> XP laptop. I am having trouble finding out how I can
> set up Solaris to allo
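A minimal Samba share sketch for this kind of XP backup setup (share
name, path, and user are assumed; file locations depend on the Samba
build in use):

  # smb.conf (location varies by build)
  [backup]
     path = /tank/backup
     read only = no
     valid users = laptopuser

  # create the Samba user
  smbpasswd -a laptopuser

The XP laptop can then map \\server\backup as a network drive and copy
to it.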
> Bryan,
>
> Thanks for your suggestion. I am looking at this as
> more of a DR solution. However, I might be able to
> use your method if my data can be a little old.
> Perhaps this way I could sync the data nightly with a
> remote site to make sure that I am no more than 24
> hours behind in the
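A hedged sketch of the nightly sync idea using incremental snapshots
(dataset, snapshot, and host names assumed):

  # taken once a day from cron
  zfs snapshot tank/data@2007-05-17
  zfs send -i tank/data@2007-05-16 tank/data@2007-05-17 | \
      ssh drhost zfs recv tank/data

Each run ships only the blocks changed since the previous snapshot, so
the remote copy stays at most 24 hours behind.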
> To clarify further: the EMC note "EMC Host Connectivity
> Guide for Solaris" indicates that ZFS is supported on
> 11/06 (aka Update 3) and onwards. However, they sneak
> in a cautionary disclaimer that snapshot and clone
> features are supported by Sun. If one reads it
> carefully it appears that the
Hello Bart,
Wednesday, May 16, 2007, 6:07:36 PM, you wrote:
BS> Bill Moloney wrote:
>> for example, doing sequential 1MB writes to a
>> (previously written) zvol (simple catenation of 5
>> FC drives in a JBOD) and writing 2GB of data induced
>> more than 4GB of IO to the drives (with smaller write
>> sizes this ratio gets progressively worse)
I'm using a ZFS/Solaris Samba share with "MS SyncToy" to schedule backups.
Works fine for me.
Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.
It looks like your controller (or disks, maybe?) is having trouble with
multiple simultaneous I/Os to the same disk. It looks like prefetch
aggravates this problem.
When I asked M
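One hedged way to check for that behaviour directly, bypassing ZFS
(device name assumed): compare one raw-device read stream against two
concurrent ones on the same disk:

  # single stream
  dd if=/dev/rdsk/c0d0p0 of=/dev/null bs=128k count=8192
  # two concurrent streams to the same disk
  dd if=/dev/rdsk/c0d0p0 of=/dev/null bs=128k count=8192 &
  dd if=/dev/rdsk/c0d0p0 of=/dev/null bs=128k skip=8192 count=8192 &
  wait

If the combined throughput of the pair collapses well below the single
stream, the controller (or disk) is struggling with concurrent I/O.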
Nigel,
Was the iSCSI target daemon running but the targets gone? Or
did the daemon dump core repeatedly?
How did you create the targets?
-tim
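For reference, the two usual ways a target gets created on builds of
that era (pool/volume and target names assumed):

  # let ZFS export a zvol as a target
  zfs set shareiscsi=on tank/vol

  # or create one explicitly with iscsitadm, backed by the zvol
  iscsitadm create target -b /dev/zvol/rdsk/tank/vol mytarget

Which of the two was used may matter for what state survives a reboot.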
eric kustarz wrote:
Hi Tim,
Is the iSCSI target not coming back up after a reboot a known problem?
Can you take a look?
eric
Begin forwarded message:
At Matt's request, I did some further experiments and have found that
this appears to be particular to your hardware. This is not a general
32-bit problem. I re-ran this experiment on a 1-disk pool using 32-bit
and 64-bit kernels. I got identical results:
64-bit
==
$ /usr/bin/time dd if=/test
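A representative complete invocation (file name and sizes are
hypothetical; the original command is truncated above):

  $ /usr/bin/time dd if=/test/bigfile of=/dev/null bs=128k count=16384

which reads 2 GB through the pool and reports elapsed time.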
> > *sata_hba_list::list sata_hba_inst_t satahba_next |
> >   ::print sata_hba_inst_t satahba_dev_port |
> >   ::array void* 32 | ::print void* | ::grep ".!=0" |
> >   ::print sata_cport_info_t cport_devp.cport_sata_drive |
> >   ::print -a sata_drive_info_t satadrv_features_support satadrv_settings satadrv
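For anyone trying to reproduce this: the pipeline above is mdb syntax,
so a hedged sketch of the invocation would be (run as root against the
live kernel):

  # mdb -k
  > *sata_hba_list::list sata_hba_inst_t satahba_next | ...

It walks the kernel's SATA HBA list and prints each attached drive's
supported-features and current-settings fields.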
I will do that, but I'll do a couple of things first, to try to isolate the
problem more precisely:
- Use ZFS on a plain PATA drive on the onboard IDE connector to see if it
works with prefetch on this 32-bit machine.
- Use this PCI-SATA card in a 64-bit machine with 2 GB of RAM and see how it
performs there.
Marko Milisavljevic wrote:
now let's try:
set zfs:zfs_prefetch_disable=1
bingo!
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0
only 1-2% slower than dd from /dev/dsk. Do you think this is a general
32-bit problem?
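For anyone following along, a hedged sketch of the two usual ways to
apply that tunable (the live form assumes the symbol exists under that
name on this build):

  # persistent, takes effect at next boot: add to /etc/system
  set zfs:zfs_prefetch_disable = 1

  # immediate, on the running kernel
  echo 'zfs_prefetch_disable/W0t1' | mdb -kw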
Marko Milisavljevic wrote:
Got excited too quickly on one thing... reading a single ZFS file gives
me almost the same speed as dd from /dev/dsk... around 78 MB/s... however,
a 2-drive stripe still doesn't perform as well as it ought to:
Yes, that makes sense. Because prefetch is disabled, Z
Leon Koll <[EMAIL PROTECTED]> wrote:
> > Maybe this link could help you?
> >
> > http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html
> >
>
> Looks exactly like what we need. It's strange it wasn't posted to
> zfs-discuss. So many people were waiting for this code.
The NFSv4 ACLs are
Bill Moloney wrote:
for example, doing sequential 1MB writes to a
(previously written) zvol (simple catenation of 5
FC drives in a JBOD) and writing 2GB of data induced
more than 4GB of IO to the drives (with smaller write
sizes this ratio gets progressively worse)
How did you measure this?
writes to ZFS objects have significant data and meta-data implications, based
on the ZFS copy-on-write implementation ... as data is written into a file
object, for example, this update must eventually be written to a new location
on physical disk, and all of the meta-data (from the uberblock do
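On the measurement question above, one hedged way to compare bytes
written by the application with the traffic actually hitting the
drives (pool name assumed):

  # watch per-vdev bandwidth while the 2GB write runs
  zpool iostat -v tank 5
  # cross-check against device-level kw/s
  iostat -xn 5

Summing device writes over the run and dividing by the application's
2GB gives the amplification ratio being discussed.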
On May 15, 2007, at 4:49 PM, Nigel Smith wrote:
I seem to have got the same core dump, in a different way.
I had a zpool set up on an iSCSI 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html
But after a reboot the iSCSI target was no longer available.
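One hedged way to tell whether the daemon itself died or its target
configuration was lost (service and command names as on builds of that
era):

  # is the target daemon online, or in maintenance?
  svcs -x iscsitgt
  # are the targets still configured?
  iscsitadm list target -v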
> Maybe this link could help you?
>
> http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html
>
Looks exactly like what we need. It's strange it wasn't posted to
zfs-discuss. So many people were waiting for this code.
Thanks, Dmitry.
Maybe this link could help you?
http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html
Got excited too quickly on one thing... reading a single ZFS file gives
me almost the same speed as dd from /dev/dsk... around 78 MB/s... however,
a 2-drive stripe still doesn't perform as well as it ought to:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  294.3    0.0 3767
At 11:42 PM 5/15/2007, Michael Hale wrote:
>On May 15, 2007, at 9:32 PM, Hazvinei Mugwagwa wrote:
>
>>I have an opensolaris server running with a raidz zfs pool with
>>almost 1TB of storage. This is intended to be a central
>>fileserver via samba and ftp for all sorts of purposes. I also wan