Does it not store a separate checksum for a parity block? If so, it
should not even need to recalculate the parity: assuming checksums match
for all data and parity blocks, the data is good.
I could understand
why it would not store a checksum for a parity block. It is not really
necessary:
2012-10-26 12:29, Karl Wagner wrote:
Does it not store a separate checksum for a parity block? If so, it
should not even need to recalculate the parity: assuming checksums match
for all data and parity blocks, the data is good.
No, for the on-disk sector allocation over M disks, zfs raidzN
I've been using this card
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157
for my Solaris/Open Indiana installations because it has 8 ports. One of the
issues that this card seems to have, is that certain failures can cause other
secondary problems in other drives on the same
- Forwarded message from Josh Paetzel j...@ixsystems.com -
From: Josh Paetzel j...@ixsystems.com
Date: Fri, 26 Oct 2012 09:55:22 -0700
To: freenas-annou...@lists.sourceforge.net
Subject: [Freenas-announce] FreeNAS 8.3.0-RELEASE
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:13.0)
On Fri, 26 Oct 2012, Jerry Kemp wrote:
Thanks for the SIIG pointer, most of the stuff I had archived from this
list pointed to LSI products.
I poked around on the site and reviewed SIIG's SATA and SAS HBA. I also
hit up their search engine. I'm not implying I did an all-inclusive
search, but
I'm creating a zpool that is 25TB in size.
What are the recommendations in regards to LUN sizes?
For example:
Should I have 4 x 6.25 TB LUNS to add to the zpool or 20 x 1.25TB LUNs to
add to the pool?
Or does it depend on the size of the san disks themselves?
Or should I divide the zpool up
On Sat, Oct 27, 2012 at 4:08 AM, Morris Hooten mhoo...@us.ibm.com wrote:
I'm creating a zpool that is 25TB in size.
What are the recommendations in regards to LUN sizes?
For example:
Should I have 4 x 6.25 TB LUNS to add to the zpool or 20 x 1.25TB LUNs to
add to the pool?
Or does it
Disclaimer: I haven't used LUNs with ZFS, so take this with a grain of salt.
On Fri, Oct 26, 2012 at 4:08 PM, Morris Hooten mhoo...@us.ibm.com wrote:
I'm creating a zpool that is 25TB in size.
What are the recommendations in regards to LUN sizes?
The first standard advice I can give is that
On 10/25/2012 05:59 AM, Jerry Kemp wrote:
I have just acquired a new JBOD box that will be used as a media
center/storage for home use only on my x86/x64 box running OpenIndiana
b151a7 currently.
It's strictly a JBOD, no hw raid options, with an eSATA port to each drive.
I am looking for
Hello all,
I was describing how raidzN works recently, and got myself wondering:
does zpool scrub verify all the parity sectors and the mirror halves?
That is, IIRC, the scrub should try to read all allocated blocks and
if they are read in OK - fine; if not - fix in-place with redundant
data
I can only speak anecdotally, but I believe it does.
Watching zpool iostat it does read all data on both disks in a mirrored
pair.
Logically, it would not make sense not to verify all redundant data.
The point of a scrub is to ensure all data is correct.
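For anyone who wants to check this themselves, a minimal sketch (the pool name 'tank' is just an example) is to start a scrub and watch the per-device activity while it runs:
# zpool scrub tank
# zpool iostat -v tank 5
# zpool status -v tank
If the scrub really touches all redundant copies, every disk in a mirror or raidz vdev should show comparable read throughput, and 'zpool status' will report any checksum errors it repaired.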
On 2012-10-25 10:25, Jim Klimov
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
I can only speak anecdotally, but I believe it does.
Watching zpool iostat it does read all data on both disks in a mirrored
pair.
Logically, it would not make sense not to
2012-10-25 15:30, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
I can only speak anecdotally, but I believe it does.
Watching zpool iostat it does read all data on
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI, because buying directly from them is incredibly expensive.
Do these support eSATA? It seems
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI, because buying directly from them is
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Logically, yes - I agree this is what we expect to be done.
However, at least with the normal ZFS reading pipeline, reads
of redundant copies and parities only kick in if the
On Thu, Oct 25, 2012 at 10:13 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and are
On 10/25/2012 04:28 PM, Patrick Hahn wrote:
On Thu, Oct 25, 2012 at 10:13 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI,
On 10/25/2012 05:40 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e
On 10/25/2012 11:44 AM, Sašo Kiselkov wrote:
It may be that you'll get reduced cabling range (only up to SATA
lengths, obviously), but it works. The voltage differences are very
small and should only come into play when you're pushing the envelope of
the cable length.
I have a two-drive
On Thu, Oct 25, 2012 at 7:35 AM, Jim Klimov jimkli...@cos.ru wrote:
If scrubbing works the way we logically expect it to, it
should enforce validation of such combinations for each read
of each copy of a block, in order to ensure that parity sectors
are intact and can be used for data
2012-10-25 21:17, Timothy Coalson wrote:
On Thu, Oct 25, 2012 at 7:35 AM, Jim Klimov jimkli...@cos.ru
mailto:jimkli...@cos.ru wrote:
If scrubbing works the way we logically expect it to, it
should enforce validation of such combinations for each read
of each copy of a block, in
Hello Bob,
Thanks for the SIIG pointer, most of the stuff I had archived from this
list pointed to LSI products.
I poked around on the site and reviewed SIIG's SATA and SAS HBA. I also
hit up their search engine. I'm not implying I did an all-inclusive
search, but nothing I came across on
On 10/24/12 03:16, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
The only thing I think Oracle should have done differently is to allow
either a downgrade or
On 10/24/12 3:59 AM, Darren J Moffat wrote:
So in this case you should have a) created the pool with a version that
matches the pool version of the backup server and b) make sure you
create the ZFS file systems with a version that is supported by the
backup server.
And AI allows you to set the
On 10/24/12 17:44, Carson Gaspar wrote:
On 10/24/12 3:59 AM, Darren J Moffat wrote:
So in this case you should have a) created the pool with a version that
matches the pool version of the backup server and b) make sure you
create the ZFS file systems with a version that is supported by the
I have just acquired a new JBOD box that will be used as a media
center/storage for home use only on my x86/x64 box running OpenIndiana
b151a7 currently.
It's strictly a JBOD, no hw raid options, with an eSATA port to each drive.
I am looking for suggestions for an HBA card with at least (2), but
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
One idea I have is that a laptop which only has a single HDD slot,
often has SD/MMC cardreader slots. If populated with a card for L2ARC,
can it be expected to boost the
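As a rough sketch of what that would look like, assuming the card shows up as an ordinary block device (the device name below is only a placeholder):
# zpool add rpool cache c5t0d0
# zpool iostat -v rpool 5
Whether it actually helps depends on the card's random-read latency compared to the HDD.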
From: Richard Elling [mailto:richard.ell...@gmail.com]
At some point, people will bitterly regret some zpool upgrade with no way
back.
uhm... and how is that different than anything else in the software world?
No attempt at backward compatibility, and no downgrade path, not even by
Actually, I think there is a world of difference.
Backwards compatibility is something we all need. We need to be able to
access content created in previous versions of software in newer
versions.
You cannot expect an older version to be compatible with the new
features in a later version.
Hi,
I have tried running
zfs create -o readonly=off tank/test
on two different Solaris 11 Express 11/11 (x86) machines resulting in
segfaults. Can anybody verify this behavior? Or is this some
idiosyncrasy of my configuration?
Any help would be appreciated.
Regards,
Andreas
Hi Andreas,
Which release is this... Can you provide the /etc/release info?
It works fine for me on a S11 Express (b162) system:
# zfs create -o readonly=off pond/amy
# zfs get readonly pond/amy
NAME PROPERTY VALUE SOURCE
pond/amy readonly off local
This is somewhat redundant
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
The only thing I think Oracle should have done differently is to allow
either a downgrade or creating a send stream in a lower version
(reformatting the data where necessary, and
On 20.10.2012 22:24, Tim Cook wrote:
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/20/2012 01:10 AM, Tim Cook wrote:
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net
On 22.10.2012 06:32, Matthew Ahrens wrote:
On Sat, Oct 20, 2012 at 1:24 PM, Tim Cook t...@cook.ms
mailto:t...@cook.ms wrote:
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/20/2012 01:10 AM, Tim Cook wrote:
Hi,
If after it decreases in size it stays there it might be similar to:
7111576 arc shrinks in the absence of memory pressure
Also, see document:
ZFS ARC can shrink down without memory pressure result in slow
performance [ID 1404581.1]
Specifically, check if arc_no_grow is
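A quick way to peek at that on a live system, as a sketch (double-check the exact syntax on your release):
# echo 'arc_no_grow/D' | mdb -k
# kstat -n arcstats | egrep 'size|c_max|c_min'
If arc_no_grow stays at 1 even without memory pressure, that matches the bug described above.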
On 22 October, 2012 - Robert Milkowski sent me these 3,6K bytes:
Hi,
If after it decreases in size it stays there it might be similar to:
7111576 arc shrinks in the absence of memory pressure
Also, see document:
ZFS ARC can shrink down without memory pressure result in
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Gary Mills
On Sun, Oct 21, 2012 at 11:40:31AM +0200, Bogdan Ćulibrk wrote:
Follow up question regarding this: is there any way to disable
automatic import of any non-rpool on boot
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at
least in my case) to re-import rpool, and another pool, but it didn't figure out to re-import
Are you sure that the system with failed mounts came up NOT in a
read-only root moment, and that your removal of /etc/zfs/zpool.cache
did in fact happen (and that you did not then boot into an earlier
BE with the file still in it)?
On a side note, repairs of ZFS mount order are best done with a
Alexander Block abloc...@googlemail.com wrote:
tar/pax was the initial format that was chosen for btrfs send/receive
as it looked like the best and most compatible way. In the middle of
development, however, I realized that we need more than storing whole
and incremental files/dirs in the
If after it decreases in size it stays there it might be similar to:
7111576 arc shrinks in the absence of memory pressure
After it dropped, it did build back up. Today is the first day that
these servers are working under real production load and it is looking
much better. arcstat is
Hello all,
A few months ago I saw a statement that L2ARC writes are simplistic
in nature, and I got the (mis?)understanding that some sort of ring
buffer may be in use, like for ZIL. Is this true, and the only metric
of write-performance important for L2ARC SSD device is the sequential
write
2012-10-22 20:58, Brian wrote:
hi jim,
writes are sequential and to a ring buffer. reads of course would not
be sequential, and would be intermixed with writes.
Thanks... Do I get it correctly that if a block from L2ARC is
requested by the readers, then it is fetched from the SSD and
becomes
On Oct 22, 2012, at 6:52 AM, Chris Nagele nag...@wildbit.com wrote:
If after it decreases in size it stays there it might be similar to:
7111576 arc shrinks in the absence of memory pressure
After it dropped, it did build back up. Today is the first day that
these servers are
On Oct 19, 2012, at 4:59 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
At some point, people will
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Monday, October 22, 2012 7:26 AM
Are you sure that the system with failed mounts came up NOT in a
read-only root moment, and that your removal of /etc/zfs/zpool.cache
did in fact happen (and that you did not then boot into an earlier
BE with
On 10/20/12 2:30 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
How does the system decide, in the absence of rpool.cache, which pools
it's going to import at boot?
I guess you are referring to zpool.cache. In that case it will
automatically import only your rpool.
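The other pools are still on disk, just not imported. A minimal sketch of getting them back (the pool name is an example):
# zpool import        (lists pools visible on the attached devices)
# zpool import tank   (imports one of them and re-adds it to zpool.cache)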
On Sun, 21 Oct 2012 11:40:31 +0200, Bogdan Ćulibrk b...@default.rs
wrote:
On 10/20/12 2:30 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
How does the system decide, in the absence of rpool.cache, which pools
it's going to import at boot?
I guess you are referring to
On Sun, Oct 21, 2012 at 11:40:31AM +0200, Bogdan Ćulibrk wrote:
Follow up question regarding this: is there any way to disable
automatic import of any non-rpool on boot without any hacks of removing
zpool.cache?
Certainly. Import it with an alternate cache file. You do this by
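As a sketch of the idea (paths and the pool name are just examples): either import the pool with an alternate cache file, or clear the property on a pool that is already imported:
# zpool import -o cachefile=/etc/zfs/alt.cache tank
# zpool set cachefile=none tank
A pool whose cachefile is 'none' (or points somewhere the boot-time import doesn't read) won't be imported automatically at boot.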
Do make sure you're getting one that has the proper firmware.
Those with BIOS don't work in SPARC boxes, and those with OpenBoot don't
work in x64 stuff.
A quick Sun FC HBA search on ebay turns up a whole list of stuff
that's official Sun HBAs, which will give you an idea of the (max)
On Sun, Oct 21, 2012 at 1:41 PM, Erik Trimble tr...@netdemons.com wrote:
Do make sure you're getting one that has the proper firmware.
Those with BIOS don't work in SPARC boxes, and those with OpenBoot don't
work in x64 stuff.
A quick Sun FC HBA search on ebay turns up a whole list of
On Sat, Oct 20, 2012 at 1:23 AM, Arne Jansen sensi...@gmx.net wrote:
On 10/20/2012 01:21 AM, Matthew Ahrens wrote:
On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
Please don't bother
On Sat, Oct 20, 2012 at 1:24 PM, Tim Cook t...@cook.ms wrote:
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen sensi...@gmx.net wrote:
On 10/20/2012 01:10 AM, Tim Cook wrote:
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On
On 10/20/2012 01:10 AM, Tim Cook wrote:
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net
On 10/20/2012 01:21 AM, Matthew Ahrens wrote:
On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
Please don't bother changing libzfs (and proliferating the copypasta
there) -- do it like
If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at
least in my case) to re-import rpool, and another pool, but it didn't figure
out to re-import some other pool.
How does the system decide, in the absence of rpool.cache, which pools it's
going to import at boot?
From: Timothy Coalson [mailto:tsc...@mst.edu]
Sent: Friday, October 19, 2012 9:43 PM
A shot in the dark here, but perhaps one of the disks involved is taking a
long
time to return from reads, but is returning eventually, so ZFS doesn't notice
the problem? Watching 'iostat -x' for busy
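Something like the following should make a single slow disk stand out (a sketch; watch the asvc_t and %b columns):
# iostat -xn 5
A drive sitting at hundreds of milliseconds of service time while its neighbours are at a few ms is a good suspect.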
2012-10-20 16:30, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough
(at least in my case) to re-import rpool, and another pool, but it
didn't figure out to re-import some other pool.
How does the system decide,
2012-10-20 3:59, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
At some point, people will bitterly regret some zpool upgrade with no way
back.
uhm... and how
On Sat, Oct 20, 2012 at 7:39 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Timothy Coalson [mailto:tsc...@mst.edu]
Sent: Friday, October 19, 2012 9:43 PM
A shot in the dark here, but perhaps one of the disks
Hi. We're running OmniOS as a ZFS storage server. For some reason, our
arc cache will grow to a certain point, then suddenly drops. I used
arcstat to catch it in action, but I was not able to capture what else
was going on in the system at the time. I'll do that next.
read hits miss hit%
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen sensi...@gmx.net wrote:
On 10/20/2012 01:10 AM, Tim Cook wrote:
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
On Wed, Oct 17, 2012 at
The built-in drivers support MPxIO, so you're good to go.
On Friday, October 19, 2012, Christof Haemmerle wrote:
Yep, I need 4 Gig with multipathing if possible.
On Oct 19, 2012, at 10:34 PM, Tim Cook t...@cook.ms wrote:
On Friday, October 19,
Hello all,
I have one more thought - or a question - about the current
strangeness of rpool import: is it supported, or does it work,
to have rpools on multipathed devices?
If yes (which I hope it is, but don't have a means to check)
what sort of a string is saved into the pool's labels as
On 19/10/12 04:50 PM, Jim Klimov wrote:
Hello all,
I have one more thought - or a question - about the current
strangeness of rpool import: is it supported, or does it work,
to have rpools on multipathed devices?
If yes (which I hope it is, but don't have a means to check)
what sort of a
Thanks, more Qs below ;)
2012-10-19 11:16, James C. McPherson wrote:
if you run /usr/bin/strings over /etc/zfs/zpool.cache,
you'll see that not only is the device path stored, but
(more importantly) the devid.
As an excerpt from my adventurous notebook, which only has
an rpool on SAS, I see
Arne Jansen sensi...@gmx.net wrote:
On 10/18/2012 10:19 PM, Andrew Gabriel wrote:
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
Filesystem Incremental Transport Stream
(or Filesystem Independent Transport Stream)
Is this an attempt to
On 19.10.2012 10:47, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
On 10/18/2012 10:19 PM, Andrew Gabriel wrote:
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
Filesystem Incremental Transport Stream
(or Filesystem Independent
On Wed, Oct 17, 2012 at 2:29 PM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send'. The resulting streams can
currently only be received
On 19.10.2012 11:16, Irek Szczesniak wrote:
On Wed, Oct 17, 2012 at 2:29 PM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send'. The
Arne Jansen sensi...@gmx.net wrote:
Is this an attempt to create a competition for TAR?
Not really. We'd have preferred tar if it would have been powerful enough.
It's more an alternative to rsync for incremental updates. I really
like the send/receive feature and want to make it available
On 19.10.2012 12:17, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
Is this an attempt to create a competition for TAR?
Not really. We'd have preferred tar if it would have been powerful enough.
It's more an alternative to rsync for incremental updates. I really
like the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
You have to create pools/filesystems with the older versions used by the
destination machine.
Apparently 'zpool create -d -o version=28' is what you might want to do on the new
system...
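A sketch of the full incantation (version numbers and device names are only examples; use whatever the backup server actually supports):
# zpool create -o version=28 -O version=5 tank mirror c0t0d0 c0t1d0
-o sets the pool version and -O sets the filesystem version of the root dataset; individual file systems may still need an explicit 'zfs create -o version=5 ...' so the older receiver can handle them.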
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James C. McPherson
As far as I'm aware, having an rpool on multipathed devices
is fine.
Even a year ago, a new system I bought from Oracle came with multipath devices
for all devices by
On 19/10/12 09:27 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James C. McPherson
As far as I'm aware, having an rpool on multipathed devices is fine.
Even a year ago,
Arne Jansen sensi...@gmx.net wrote:
On 19.10.2012 12:17, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
Is this an attempt to create a competition for TAR?
Not really. We'd have preferred tar if it would have been powerful enough.
It's more an alternative to rsync for
On 19.10.2012 13:53, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
On 19.10.2012 12:17, Joerg Schilling wrote:
Arne Jansen sensi...@gmx.net wrote:
Is this an attempt to create a competition for TAR?
Not really. We'd have preferred tar if it would have been powerful enough.
Hi,
I would like to give a short talk at my organisation in order
to sell them on zfs in general, and on zfs-all-in-one and
zfs as remote backup (zfs send).
Does anyone have a short set of presentation slides or maybe
a short video I could pillage for that purpose? Thanks.
-- Eugen
On Fri, Oct 19, 2012 at 11:23 AM, Arne Jansen sensi...@gmx.net wrote:
On 19.10.2012 11:16, Irek Szczesniak wrote:
On Wed, Oct 17, 2012 at 2:29 PM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
On Oct 19, 2012, at 1:04 AM, Michel Jansens michel.jans...@ulb.ac.be wrote:
On 10/18/12 21:09, Michel Jansens wrote:
Hi,
I've been using a Solaris 10 update 9 machine for some time to replicate
filesystems from different servers through zfs send|ssh zfs receive.
This was done to store
On Oct 19, 2012, at 6:37 AM, Eugen Leitl eu...@leitl.org wrote:
Hi,
I would like to give a short talk at my organisation in order
to sell them on zfs in general, and on zfs-all-in-one and
zfs as remote backup (zfs send).
Googling will find a few shorter presos. I have full-day presos on
On Oct 19, 2012, at 12:16 AM, James C. McPherson j...@opensolaris.org wrote:
On 19/10/12 04:50 PM, Jim Klimov wrote:
Hello all,
I have one more thought - or a question - about the current
strangeness of rpool import: is it supported, or does it work,
to have rpools on multipathed devices?
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send'. The resulting streams can
currently only be received
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen sensi...@gmx.net wrote:
On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
We have finished a beta version of the feature. A webrev for it
can be
Yikes, I'm back at it again, and so frustrated.
For about 2-3 weeks now, I had the iscsi mirror configuration in production, as
previously described. Two disks on system 1 mirror against two disks on system
2, everything done via iscsi, so you could zpool export on machine 1, and then
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
At some point, people will bitterly regret some zpool upgrade with no way
back.
uhm... and how is that different than anything else in the software world?
No attempt at
Several times, I destroyed the pool and recreated it completely from
backup. zfs send and zfs receive both work fine. But strangely - when I
launch a VM, the IO grinds to a halt, and I'm forced to powercycle
(usually) the host.
A shot in the dark here, but perhaps one of the disks involved
hi there,
i need to connect some old raid subsystems to a opensolaris box via fibre
channel. can you recommend any FC HBA?
thanx
On Friday, October 19, 2012, Christof Haemmerle wrote:
hi there,
i need to connect some old raid subsystems to a opensolaris box via fibre
channel. can you recommend any FC HBA?
thanx
How old? If it's 1gbit you'll need a 4gb or slower hba. Qlogic would be
Yep, I need 4 Gig with multipathing if possible.
On Oct 19, 2012, at 10:34 PM, Tim Cook t...@cook.ms wrote:
On Friday, October 19, 2012, Christof Haemmerle wrote:
hi there,
i need to connect some old raid subsystems to a opensolaris box via fibre
channel. can you recommend any FC HBA?
On 10/18/12 21:09, Michel Jansens wrote:
Hi,
I've been using a Solaris 10 update 9 machine for some time to replicate
filesystems from different servers through zfs send|ssh zfs receive.
This was done to store disaster recovery pools. The DR zpools are made from
sparse files (to allow for
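For reference, the basic replication step is along these lines (names are placeholders; the incremental form assumes both snapshots already exist on the source):
# zfs snapshot pool/fs@today
# zfs send -i pool/fs@yesterday pool/fs@today | ssh drhost zfs receive -F drpool/fs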
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
On 10/18/2012 10:19 PM, Andrew Gabriel wrote:
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
Filesystem Incremental Transport Stream
(or Filesystem Independent Transport Stream)
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send'. The resulting streams can
currently only be received on btrfs, but more receivers will
follow.
It would be great if anyone
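Presumably the usage mirrors plain 'zfs send' (this is only a guess at the interface, not taken from the webrev), so feeding a stream to the btrfs receiver would look roughly like:
# zfs fits-send pool/fs@snap | ssh linuxhost btrfs receive /mnt/backup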
Can anyone explain to me what the openindiana-1 filesystem is all about? I
thought it was the backup copy of the openindiana filesystem, when you apply
OS updates, but that doesn't seem to be the case...
I have time-slider enabled for rpool/ROOT/openindiana. It has a daily snapshot
(amongst
On 10/16/12 14:54, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
Can anyone explain to me what the openindiana-1 filesystem is all
about? I thought it was the backup copy of the openindiana filesystem,
when you apply OS updates, but that doesn't seem to be the case...
I have