Hmm... my b69 installation understands zfs allow, but man zfs has no info on it at
all. The man page says it was last modified on June 28, 2007, and also:
-r--r--r--  1 root  bin  59081 Jul 10 12:34 /usr/share/man/man1m/zfs.1m
I installed b69 by using Live Upgrade from, I think, b65.
Is this a bug that needs
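For reference, a minimal sketch of the delegated-administration syntax that
zfs allow accepts; the user and dataset names below are made up for
illustration, not taken from the post above:

  # let user marko create, mount and snapshot datasets under tank/home
  zfs allow marko create,mount,snapshot tank/home
  # list the permissions currently delegated on tank/home
  zfs allow tank/home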
I rsync my Mac's home directory daily to my home Solaris server box and
snapshot it. Weekly, I rsync my photo directory to my $10/month hosting
provider. To better manage situations where a long rsync up to the hosting
provider is still in progress when the daily backup kicks in, I wrote a simple
script
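The script itself isn't shown; a minimal sketch of what such a daily job could
look like, assuming a dataset named tank/home/mac and a lock file used to keep
the weekly upload from colliding with the daily backup (all paths and names
here are illustrative, not the poster's actual script):

  #!/bin/sh
  # daily backup: pull the Mac home directory, then snapshot the dataset
  LOCK=/var/run/photo-upload.lock
  SRC=mac:/Users/marko/
  DEST=/tank/home/mac/

  # skip tonight's run if the weekly upload to the hosting provider holds the lock
  [ -f "$LOCK" ] && { echo "upload in progress, skipping backup"; exit 0; }

  rsync -a --delete "$SRC" "$DEST" &&
      zfs snapshot tank/home/mac@daily-`date +%Y%m%d`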
Hello,
I'm sure there is a simple solution, but I am unable to figure this one out.
Assuming I have tank/fs, tank/fs/fs1, and tank/fs/fs2, I set sharenfs=on for
tank/fs (the child filesystems inherit it as well), and I chown
user:group /tank/fs, /tank/fs/fs1 and /tank/fs/fs2, I see with ls -la the same
ownership (the user:group in question exists on both machines, and the client
is Mac OS X 10.4.9)
Marko
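A minimal sketch of the setup as described above; the dataset and ownership
commands restate the message, while the final share check is just one way to
confirm the export and is my addition:

  zfs create tank/fs
  zfs create tank/fs/fs1
  zfs create tank/fs/fs2
  zfs set sharenfs=on tank/fs      # fs1 and fs2 inherit the share
  chown user:group /tank/fs /tank/fs/fs1 /tank/fs/fs2
  share                            # lists what is currently exported over NFS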
On 6/26/07, Marko Milisavljevic [EMAIL PROTECTED] wrote:
Hello,
I'm sure there is a simple solution, but I am unable to figure this one out.
Assuming I have tank/fs, tank/fs/fs1, and tank/fs/fs2 ... and the view of the
same over NFS to display the same ownership (the user:group in question exists
on both machines, and the client is Mac OS X 10.4.9)
Marko
The whole read-only business sounds like baloney to me. Read-only ZFS
implies that the file system would be created elsewhere - and I don't know
if there will be continuing compatibility between
Solaris/Linux(FUSE)/FreeBSD implementations - so they would presumably
support read-only of Solaris'
I have also been trying to figure out the best strategy regarding ZFS
boot... I currently have a single-disk UFS boot and RAID-Z for data. I plan
on getting a mirror for boot, but I still don't understand what my options
are regarding (a sketch follows below):
- Should I set up one ZFS slice for the entire drive and
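For the boot-mirror part of the question, a rough sketch of how a mirrored ZFS
root pool could be assembled in builds of that era; the device names are
examples, and the assumption that the root pool sits on a slice rather than a
whole disk reflects the ZFS-boot support of the time:

  # create the root pool on a slice of the first disk
  zpool create rpool c1t0d0s0
  # attach a slice of the second disk; zpool attach turns the single
  # device into a two-way mirror and starts a resilver
  zpool attach rpool c1t0d0s0 c1t1d0s0
  # once the resilver finishes, put boot blocks on the new disk (x86/GRUB)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0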
You are right... I shouldn't post in the middle of the night... nForce
chipsets don't support AHCI.
On 6/4/07, J. David Beutel [EMAIL PROTECTED] wrote:
Marko Milisavljevic [EMAIL PROTECTED] wrote on 06/02/2007
02:03:56 AM:
I think the nForce 430 would be using the AHCI driver if you set your BIOS
I think the nForce 430 would be using the AHCI driver if you set your BIOS for
it, in current Nevada builds anyway, and I think that uses the SATA
framework.
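A quick, generic way to check which driver actually bound to the controller
(not from the original thread, just a suggestion): cfgadm shows sataN/M
attachment points when the SATA framework is in use.

  cfgadm -al | grep sata       # sata0/0::dsk/c1t0d0 etc. means the SATA framework is driving the ports
  prtconf -D | grep -i ahci    # shows whether the ahci driver attached to the controller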
On 6/1/07, J. David Beutel [EMAIL PROTECTED] wrote:
Eric Schrock [EMAIL PROTECTED] wrote on Friday, June 01, 2007 12:50:50:
Only devices that use
I second that... I am trying to figure out what is missing so that I
can use ZFS exclusively... right now, as far as I know, the two major
obstacles are no support from the installer and issues with Live Upgrade.
Are both of those expected to be resolved this year?
On 5/30/07, Carl Brewer [EMAIL PROTECTED]
What kind of performance are you getting with ZFS from the Sil3114 card?
Can you try bonnie or dd if=file of=/dev/null... on it?
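For the dd part, a sketch of the kind of timed sequential read used elsewhere
in this thread; the file name and count are placeholders:

  # read ~2GB from an existing file on the pool and time it
  ptime dd if=/tank/test/bigfile of=/dev/null bs=128k count=16000
  # throughput is roughly (count * bs) divided by the reported real time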
On 5/20/07, Diego Righi [EMAIL PROTECTED] wrote:
The other one is a no-brand 4-port Sil3114 PCI SATA 1.0 controller that I
bought at a local computer fair last
It is definitely defined in b63... not sure when it got introduced.
http://src.opensolaris.org/source/xref/onnv/aside/usr/src/cmd/mdb/common/modules/zfs/zfs.c
shows tunable parameters for ZFS, under zfs_params(...)
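That module also gives mdb a dcmd for dumping the current values on a live
kernel; assuming a recent build, something like:

  echo "::zfs_params" | mdb -k    # prints the current ZFS tunable parameters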
On 5/20/07, Trygve Laugstøl [EMAIL PROTECTED] wrote:
Marko Milisavljevic wrote
Thank you, following your suggestion improves things - reading a ZFS
file from a RAID-0 pair now gives me 95MB/sec - about the same as from
/dev/dsk. What I find surprising is that reading from RAID-1 2-drive
zpool gives me only 56MB/s - I imagined it would be roughly like
reading from RAID-0. I
0.0  2.2   0  99 c3d0
792.1  0.0 44357.9  0.0  0.0  1.8  0.0  2.2   0  98 c3d1
(and in Linux it saturates PCI bus at 60MB/s per drive)
On 5/15/07, Marko Milisavljevic [EMAIL PROTECTED] wrote:
set zfs:zfs_prefetch_disable=1
bingo!
r/s  w/s  kr/s  kw/s wait actv wsvc_t asvc_t
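For anyone following along: the line above is an /etc/system setting and takes
effect at boot; flipping it on a running kernel with mdb is the usual way to
test it without rebooting (the mdb step is my suggestion, not something stated
in the message):

  # persistent, read at boot
  set zfs:zfs_prefetch_disable = 1       # goes in /etc/system

  # or toggle it live for testing
  echo "zfs_prefetch_disable/W0t1" | mdb -kw
  echo "zfs_prefetch_disable/D" | mdb -k     # verify the current value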
I will do that, but I'll do a couple of things first, to try to isolate the
problem more precisely:
- Use ZFS on a plain PATA drive on the onboard IDE connector, to see if it works
with prefetch on this 32-bit machine.
- Use this PCI-SATA card in a 64-bit, 2GB RAM machine and see how it performs
... somewhere regarding prefetch, or is this a known issue?
Many thanks.
On 5/15/07, Matthew Ahrens [EMAIL PROTECTED] wrote:
Marko Milisavljevic wrote:
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can
deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives
me
I tried as you suggested, but I notice that the output from iostat while
doing dd if=/dev/dsk/... still shows that reading is done in 56k
chunks. I haven't seen any change in performance. Perhaps iostat
doesn't say what I think it does. Using dd if=/dev/rdsk/.. gives 256k,
and dd if=zfsfile gives 128k
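A sketch of how those per-request sizes can be read off: run the dd in one
terminal and iostat in another, and divide kr/s by r/s; the device name and
interval below are placeholders:

  # terminal 1: the read under test
  dd if=/dev/dsk/c3d1s0 of=/dev/null bs=128k

  # terminal 2: extended per-device statistics every 5 seconds
  iostat -xn 5
  # average read size = kr/s divided by r/s, e.g. 44357.9 / 792.1 ~= 56k per I/O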
On 5/15/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Each drive is freshly formatted with one 2G file copied to it.
How are you creating each of these files?
zpool create tank c0d0 c0d1; zfs create tank/test; cp ~/bigfile /tank/test/
Actual content of the file is random junk from
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver
from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS)
of=/dev/null gives me only 35MB/s!? I am getting basically the same result
whether it is single zfs
To reply to my own message: this article offers lots of insight into why dd
accessing the raw disk directly is fast, while accessing a file through the
file system may be slow.
http://www.informit.com/articles/printerfriendly.asp?p=606585&rl=1
So, I guess what I'm wondering now is, does it
Thank you for those numbers.
I should have mentioned that I was mostly interested in single disk or small
array performance, as it is not possible for dd to meaningfully access
multiple-disk configurations without going through the file system. I find
it curious that there is such a large
I missed an important conclusion from j's data, and that is that single-disk
raw access gives him 56MB/s, and a RAID 0 array gives him 961/46 = 21MB/s per
disk, which comes in at 38% of potential performance. That is in the
ballpark of getting 45% of potential performance, as I am seeing with my
puny
Thank you, Al.
Would you mind also doing:
ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
to see the raw performance of underlying hardware.
On 5/14/07, Al Hopper [EMAIL PROTECTED] wrote:
# ptime dd if=./allhomeal20061209_01.tar of=/dev/null bs=128k count=1
1+0 records
uncached read:
HFS+: 86%
ext3 and UFS: 70%
ZFS: 45%
On 5/14/07, Richard Elling [EMAIL PROTECTED] wrote:
Marko Milisavljevic wrote:
I missed an important conclusion from j's data, and that is that single
disk raw access gives him 56MB/s, and RAID 0 array gives him
961/46=21MB/s per disk, which
On 5/14/07, Ian Collins [EMAIL PROTECTED] wrote:
Marko Milisavljevic wrote:
To reply to my own message: this article offers lots of insight into
why dd accessing the raw disk directly is fast, while accessing a file
through the file system may be slow.
http://www.informit.com/articles
I am very grateful to everyone who took the time to run a few tests to help
me figure out what is going on. As per j's suggestions, I tried some
simultaneous reads, and a few other things, and I am getting interesting and
confusing results.
All tests are done using two Seagate 320G drives on