On Sat, Dec 5, 2009 at 12:13 PM, Jens Vilstrup <yon...@gmail.com> wrote:
Hi there.
I'm looking at moving my home server to ZFS and adding a second for backup
purposes.
In the process of researching ZFS I noticed iSCSI.
I'm thinking of creating a zvol, sharing it with iSCSI, and using it with my
If feasible, you may want to generate MD5 sums on the streamed output
and then use these for verification.
That's actually not a bad idea. It should be kinda obvious, but I hadn't
thought of it because it's sort-of duplicating existing functionality.
I do have a multipipe script that behaves
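The checksum-on-the-fly idea can be sketched with plain tools (a file stands in for the `zfs send` stream here, and the /tmp paths are illustrative):

```shell
# Stand-in for `zfs send pool/fs@snap`: any byte stream works the same way.
src=/tmp/stream.bin
head -c 65536 /dev/urandom > "$src"

# tee writes the stream to its destination while its stdout feeds md5sum,
# so the digest is computed without a second read pass over the data.
sent_md5=$(tee /tmp/received.bin < "$src" | md5sum | awk '{print $1}')

# On the receiving side, recompute and compare.
recv_md5=$(md5sum < /tmp/received.bin | awk '{print $1}')
[ "$sent_md5" = "$recv_md5" ] && echo "stream verified"
```

In the real pipeline the file on the left would be replaced by `zfs send` and the file on the right by `zfs receive`; the verification step is identical.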
Where exactly do you get zstreamdump?
I found a link to zstreamdump.c ... but is that it? Shouldn't it be part of
a source tarball or something?
Does it matter what OS? Every reference I see for zstreamdump is about
opensolaris. But I'm running solaris.
On Sat, Dec 5, 2009 at 17:17, Richard Elling <richard.ell...@gmail.com> wrote:
On Dec 4, 2009, at 4:11 PM, Edward Ned Harvey wrote:
Depending on your version of OS, I think the following post from Richard
Elling
will be of great interest to you:
-
On Sun, 6 Dec 2009, Edward Ned Harvey wrote:
I also have a threadzip script, because gzip is invariably the bottleneck
in the data stream. Utilize those extra cores!!! ;-)
Gzip can be a bit slow. Luckily there is 'lzop' which is quite a lot
more CPU efficient on i386 and AMD64, and even
Edward Ned Harvey wrote:
If feasible, you may want to generate MD5 sums on the streamed output
and then use these for verification.
That's actually not a bad idea. It should be kinda obvious, but I hadn't
thought of it because it's sort-of duplicating existing functionality.
I do
Bob Friesenhahn wrote:
On Sun, 6 Dec 2009, Edward Ned Harvey wrote:
I also have a threadzip script, because gzip is invariably the
bottleneck
in the data stream. Utilize those extra cores!!! ;-)
Gzip can be a bit slow. Luckily there is 'lzop' which is quite a lot
more CPU efficient on
On 5-Dec-09, at 9:32 PM, nxyyt wrote:
The rename trick may not work here. Even if I renamed the file
successfully, the data of the file may still reside in memory
instead of being flushed back to disk. If I have made any mistake here,
please correct me. Thank you!
I'll try to find out
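The rename-durability concern above can be demonstrated with plain POSIX tools (nothing ZFS-specific; paths are illustrative): the rename is atomic in the namespace, but durability still needs an explicit flush such as fsync(2) from a program or sync(1) from a script.

```shell
tmp=/tmp/data.tmp
printf 'payload' > "$tmp"
mv "$tmp" /tmp/data   # atomic in the namespace, but data may still be cached
sync                  # force dirty data (and the rename) out to stable storage
cat /tmp/data
```

Without the `sync`, a crash between the `mv` and the next transaction-group commit could leave the new name pointing at incomplete data.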
Gzip can be a bit slow. Luckily there is 'lzop' which is quite a lot
more CPU efficient on i386 and AMD64, and even on SPARC. If the
compressor is able to keep up with the network and disk, then it is
fast enough. See http://www.lzop.org/.
In my development/testing this week, I did time
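Either compressor slots into the send pipeline the same way; a minimal sketch with gzip (substitute `lzop`/`lzop -d` if it is installed; the hostname is illustrative):

```shell
# Round-trip through the compressor exactly as it would sit between
# `zfs send` and `zfs receive` (gzip -3 trades ratio for speed).
printf 'zfs stream payload' | gzip -3 | gunzip
# In the real pipeline, roughly:
#   zfs send pool/fs@snap | gzip -3 | ssh backuphost 'gunzip | zfs receive backup/fs'
```

The point of the lzop suggestion is only that the compressor must keep up with the network and disk; any filter that round-trips bytes works in this position.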
Hi,
My reading of the write code of ZFS (zfs_write in zfs_vnops.c) is that all
writes in ZFS are logged in the ZIL. And if that is indeed the case, then
yes, ZFS does guarantee sequential consistency, even across a
power outage or server crash. You might lose some writes if the ZIL has
Hi,
I wanted to add a disk to the tank pool to create a mirror. I accidentally used
zpool add … instead of zpool attach …, and now the disk is added. Is there a way
to remove the disk without losing data? Or maybe change it to a mirror?
Thanks,
Martijn
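For reference, the difference between the two commands (pool and device names are illustrative): `zpool attach` mirrors an existing device, while `zpool add` grows the pool with a new, independent top-level vdev, and on the ZFS versions of that era a plain top-level vdev could not be removed again.

```shell
# what was intended: turn the existing disk into a two-way mirror
zpool attach tank c0t1d0 c0t2d0

# what was run instead: add a second top-level vdev (striping, not mirroring)
zpool add tank c0t2d0
```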
--
This message posted from
On Sun, 6 Dec 2009, Edward Ned Harvey wrote:
Threadzip performed 10x faster than gzip (hardly a performance I would expect
even from lzop) and compressed about 2-3% smaller than gzip. Also hardly a
result I could expect from lzop.
The key is multiple cores. I'm on an 8-core xeon.
I am glad to see that you
Edward Ned Harvey wrote:
I use the excellent pbzip2
zfs send ... | tee >(md5sum) | pbzip2 | ssh remote ...
Utilizes those 8 cores quite well :)
This (pbzip2) sounds promising, and it must be better than what I wrote.
;-) But I don't understand the syntax you've got above, using
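The syntax in question is bash process substitution: `>(cmd)` expands to a pseudo-filename whose writes feed `cmd`, so tee can branch one stream to several consumers. A small self-contained demonstration (gzip stands in for pbzip2, which may not be installed):

```shell
# tee copies stdin both to the >(...) branch (here, a checksum sink)
# and to its own stdout, so the main pipeline continues uninterrupted.
out=$(printf 'hello' | tee >(md5sum > /tmp/branch.md5) | gzip | gunzip)
echo "$out"
```

This is bash-specific; a plain POSIX /bin/sh does not support `>(...)`.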
I saw this bug report…
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 06/12/2009, at 09.05, Tim Cook wrote:
Yes. Snapshots work at the block level, so you can take snapshots regardless
of the FS on top of the iSCSI LUN. The two caveats are that you'll need another
Mac system at your DR site to recover, and that to get a guaranteed consistent
snapshot, you'll need
Do you know of a good ZFS backup/restore walkthrough online?
On 12/06/09 10:11, Anurag Agarwal wrote:
Hi,
My reading of the write code of ZFS (zfs_write in zfs_vnops.c) is that all
writes in ZFS are logged in the ZIL.
Each write gets recorded in memory in case it needs to be forced out
later (eg fsync()), but is not written to the on-disk log until
- Original Message -
From: zfs-discuss-boun...@opensolaris.org <zfs-discuss-boun...@opensolaris.org>
To: Edward Ned Harvey sola...@nedharvey.com
Cc: ZFS discuss zfs-discuss@opensolaris.org
Sent: Sun Dec 06 10:54:11 2009
Subject: Re: [zfs-discuss] ZFS send | verify | receive
I'll try to find out whether ZFS always binds the same file to the same
open transaction group.
Not sure what you mean by this. Transactions (e.g. writes) will go into
the current open transaction group (txg). Subsequent writes may enter
the same or a future txg. Txgs are obviously
Do you know of a good ZFS backup/restore walkthrough online?
Not really. Check out the zfs send and receive commands in
the ZFS Administration Guide:
http://docs.sun.com/app/docs/doc/819-5461/gbinw?a=view
Basically, you make a snapshot of every filesystem you want to back up,
then you use zfs send
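In outline, the snapshot-based backup the guide describes looks roughly like this (pool, filesystem, snapshot, and host names are illustrative):

```shell
# 1. take a point-in-time snapshot
zfs snapshot tank/home@backup-20091206

# 2. full send to the backup pool (locally or over ssh)
zfs send tank/home@backup-20091206 | ssh backuphost zfs receive backup/home

# 3. later, send only the delta between two snapshots
zfs snapshot tank/home@backup-20091207
zfs send -i tank/home@backup-20091206 tank/home@backup-20091207 \
  | ssh backuphost zfs receive backup/home
```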
Depending on your version of OS, I think the following post from Richard
Elling will be of great interest to you:
Where exactly do you get zstreamdump?
I found a link to zstreamdump.c ... but is that it? Shouldn't it be part of
a source tarball or something?
Does it matter what OS? Every
I have recently become aware of the capabilities of OpenSolaris, and am
attempting to set up a storage box to give a combination of CIFS and iSCSI
and/or NFS.
I have seemingly been able to successfully configure a ZFS iSCSI share via
iscsitgt, yet the performance is just not up to par, so I
I've spent all weekend fighting this problem on our storage server after
installing a ZFS log device, and your suggestion fixed it!
I also have a LSI 3081E-R adapter (B3 revision) connected to a SAS expander
backplane with 7 drives on it. None of the /etc/system options mentioned in
this
OS means Operating System, or OpenSolaris. It was in the second
sense that I wrote OS in my answer. It was not obvious you were using
Solaris 10, though. Sorry about that.
(FYI, zstreamdump seems to be an addition to build 125.)
Oh - I never connected OS to OpenSolaris. ;-)
So I gather
I see 3.6X less CPU
consumption from 'lzop -3' than from 'gzip -3'.
Where do you get lzop from? I don't see any binaries on their site, nor
blastwave, nor opencsw. And I am having difficulty building it from source.
On Sun, 6 Dec 2009, Edward Ned Harvey wrote:
I see 3.6X less CPU
consumption from 'lzop -3' than from 'gzip -3'.
Where do you get lzop from? I don't see any binaries on their site, nor
blastwave, nor opencsw. And I am having difficulty building it from source.
I just built it from source.
On Dec 5, 2009, at 11:03 AM, Mike Gerdts wrote:
On Sat, Dec 5, 2009 at 11:32 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sat, 5 Dec 2009, dick hoogendijk wrote:
On Sat, 2009-12-05 at 09:22 -0600, Bob Friesenhahn wrote:
You can also stream into a gzip or lzop wrapper in order
On Dec 6, 2009, at 1:17 PM, Amos Deering wrote:
I have recently become aware of the capabilities of OpenSolaris, and
am attempting to set up a storage box to give a combination of CIFS
and iSCSI and/or NFS.
I have seemingly been able to successfully configure a ZFS iSCSI
share via
On Dec 6, 2009, at 3:35 PM, Edward Ned Harvey wrote:
OS means Operating System, or OpenSolaris. It was in the second
sense that I wrote OS in my answer. It was not obvious you were using
Solaris 10, though. Sorry about that.
(FYI, zstreamdump seems to be an addition to build 125.)
Oh - I
Oh well. I built LZO, but can't seem to link it into the lzop build, despite
correctly setting the FLAGS variables as described in the INSTALL file. I'd
love to provide an lzop comparison, but can't get it. I give up ... Also, I
can't build python-lzo. That would also be sweet, but hey.
For whoever
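For what it's worth, a typical out-of-prefix autoconf build points the preprocessor and linker at the LZO install location roughly like this (the /opt/lzo prefix is an assumption, not taken from the thread):

```shell
# assuming LZO was built and installed with --prefix=/opt/lzo
CPPFLAGS="-I/opt/lzo/include" LDFLAGS="-L/opt/lzo/lib" ./configure
make && make install
```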
cat my_log_file | tee >(gzip > my_log_file.gz) >(wc -l) >(md5sum) |
sort | uniq -c
That is great. ;-) Thank you very much.
On Fri, Dec 04, 2009 at 02:52:47PM -0700, Cindy Swearingen wrote:
If space/dcc is a dataset, is it mounted? ZFS might not be able to
print the filenames if the dataset is not mounted, but I'm not sure
if this is why only object numbers are displayed.
Yes, it's mounted and is quite an active
The only reason I thought this news would be of interest is that the
discussions had some interesting comments. Basically, there is a significant
outcry because ZFS was going away. I saw NexentaOS and EON mentioned several
times as the path to go.
Seems that there is some opportunity for
On Sat, Dec 05, 2009 at 01:52:12AM +0300, Victor Latushkin wrote:
On Dec 5, 2009, at 0:52, Cindy Swearingen cindy.swearin...@sun.com
wrote:
The zpool status -v command will generally print out filenames, dnode
object numbers, or identify metadata corruption problems. These look
like
Thanks for the info on the Yukon driver. I realize too many variables make
things impossible to determine, but I had made these hardware changes a while
back, and they seemed to work fine at the time. Since they aren't working now,
even in the older OpenSolaris (I've tried 2009.06 and 2008.11 now), the