However, the zfs-discuss list seems to be archived at gmane.
On 2013-03-22 22:57, Cindy Swearingen wrote:
I hope to see everyone on the other side...
***
The ZFS discussion list is moving to java.net.
This opensolaris/zfs discussion will not be available a
as used on Illumos?
I've seen a few tutorials written by people who obviously are very
action-oriented; afterwards you find you have worn your keyboard down a
bit and not learned a lot at all, at least not in the sense of
understanding what zfs is and what it does and why things are the way
t
can use shadowstat(1M) to show progress.
--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
FS Storage Appliance.
BP rewrite is actually very complex to do correctly and safely - if it
wasn't I'm sure it would have been done by now by multiple people!
--
Darren J Moffat
On 02/12/13 15:07, Thomas Nau wrote:
Darren
On 02/12/2013 11:25 AM, Darren J Moffat wrote:
On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi
unmap support (I believe currently only Nexenta but correct me if I
support.
--
Darren J Moffat
On 01/24/13 00:04, Matthew Ahrens wrote:
On Tue, Jan 22, 2013 at 5:29 AM, Darren J Moffat
<mailto:darr...@opensolaris.org> wrote:
Preallocated ZVOLs - for swap/dump.
Darren, good to hear about the cool stuff in S11.
Just to clarify, is this preallocated ZVOL different th
as long as the underlying
LUNs have support for SCSI UNMAP
Looks like an interesting technical solution to a political problem :D
There is also a technical problem: if you can't inform the
backing store that you no longer need the blocks, it can't free them
either, so they get st
On 01/22/13 15:32, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Darren J Moffat [mailto:darr...@opensolaris.org]
Support for SCSI UNMAP - both issuing it and honoring it when it is the
backing store of an iSCSI target.
When I search for scsi unmap, I come up with
On 01/22/13 13:29, Darren J Moffat wrote:
Since I'm replying here are a few others that have been introduced in
Solaris 11 or 11.1.
and another one I can't believe I missed since I was one of the people
that helped design it and I did codereview...
Per file sensitivity lab
is an (optional) shadowd that pushes the migration along, but it
will complete on its own anyway.
shadowstat(1M) gives information on the status of the migrations.
--
Darren J Moffat
of an iSCSI target.
It also has a lot of performance improvements and general bug fixes in
the Solaris 11.1 release.
--
Darren J Moffat
On 01/22/13 11:57, Tomas Forsman wrote:
On 22 January, 2013 - Darren J Moffat sent me these 0,6K bytes:
On 01/21/13 17:03, Sašo Kiselkov wrote:
Again, what significant features did they add besides encryption? I'm
not saying they didn't, I'm just not aware of that man
issuing it and honoring it when it is the
backing store of an iSCSI target.
It also has a lot of performance improvements and general bug fixes in
the Solaris 11.1 release.
--
Darren J Moffat
On 11/30/12 11:41, Darren J Moffat wrote:
On 11/23/12 15:49, John Baxter wrote:
After searching for dm-crypt and ZFS on Linux and finding too little
information, I shall ask here. Please keep in mind this in the context
of running this in a production environment.
We have the need to
features of ZFS are lost if we use dm-crypt? My guess would be
they are related to raidz but unsure.
o run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)
No.
--
Darren J Moffat
On 10/24/12 17:44, Carson Gaspar wrote:
On 10/24/12 3:59 AM, Darren J Moffat wrote:
So in this case you should have a) created the pool with a version that
matches the pool version of the backup server and b) made sure you
create the ZFS file systems with a version that is supported by the
s and versions using
the highest version supported by the running software.
--
Darren J Moffat
have sworn, out-of-the-box, there was no
openindiana-1. Am I simply wrong?
Initially there wouldn't have been.
Are you doing the zfs send on your own or letting time-slider do it for
you ?
--
Darren J Moffat
wever, much more importantly, ZFS does not preclude the need for
off-system backups. Even with mirroring and snapshots you still have to
have a backup of important data elsewhere. No file system, and more
importantly no hardware, is that good.
--
I think the problem is with disks that are 4k organised, but report
their blocksize as 512.
If the disk reports its blocksize correctly as 4096, then ZFS should
not have a problem.
At least my 2TB Seagate Barracuda disks seemed to report their
blocksizes as 4096, and my zpools on those machin
icial NIST
name of that hash.
With the internal enum being: ZIO_CHECKSUM_SHA512_256
CR 7020616 already exists for adding this in Oracle Solaris.
--
Darren J Moffat
ums the
uncompressed data.
No. ZFS checksums are computed over the data as it is stored on disk, so the
compressed data.
--
Darren J Moffat
On x86 on zfs cyl 0 must be left out of zfs root pools.
Been there.
Sent from my Android phone. Richard Elling wrote:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> by the way
> when you format, start with cylinder 1, do not use 0
There is no requirement for skipping cylinder 0 for
I'm on openindiana 151-a4
Sent from my Android phone. Cindy Swearingen wrote:
Hi Hans,
It's important to identify your OS release to determine if
booting from a 4k disk is supported.
Thanks,
Cindy
On 06/15/12 06:14, Hans J Albertsson wrote:
> I've got my root pool on a mi
ol
8) replace old HDD with new HDD
9) format the HDD
10) attach the HDD to the new root pool
regards
On 6/15/2012 8:14 AM, Hans J Albertsson wrote:
I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only ha
I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an esata connector, though,
and a suitable external cabinet for connecting one extra disk.
How would I go about migrati
here is an effort to integrate open-sourced
SAM-QFS into illumos
or smartos/oi/illumian.
Okay, then it would have been clearer if you had asked that question but
you asked about SAM-QFS on a zfs discuss alias.
--
Darren J Moffat
ct of data-tying.
If you want to know Oracle's roadmap for SAM-QFS then I recommend
contacting your Oracle account rep rather than asking on a ZFS
discussion list. You won't get SAM-QFS or Oracle roadmap answers from
this alias.
--
Darren J Moffat
On 04/30/12 04:00, Fred Liu wrote:
The subject says it all.
Still a fully supported product from Oracle:
http://www.oracle.com/us/products/servers-storage/storage/storage-software/qfs-software/overview/index.html
--
Darren J Moffat
another (readonly) implementation of ZFS
encryption:
http://bazaar.launchpad.net/~vcs-imports/grub/grub2-bzr/view/head:/grub-core/fs/zfs/zfscrypt.c
--
Darren J Moffat
.
For IO it depends what level you want to look at: if it is the device
level, iostat; if it is how ZFS is using the devices, look at 'zpool iostat'.
If it is the filesystem level, look at fsstat.
Also look at acctadm(1M).
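Pulling those suggestions together, a rough sketch of one command per level; the pool name 'tank' is a placeholder, and all of these need a live Solaris/illumos system:

```shell
# Device level: per-disk service times and utilisation
iostat -xn 5

# Pool level: how ZFS is driving the vdevs in pool 'tank'
zpool iostat -v tank 5

# Filesystem level: per-fstype operation counts
fsstat zfs 5

# Extended accounting status (see acctadm(1M) for enabling it)
acctadm
```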
--
Darren J Moffat
ource=...'
Using a hacked up libzfs that removes the check that 'zfs inherit' does,
so I can get out of the situation and make the datasets accessible
again. This is fixable, so don't abandon hope yet.
--
Darren J Moffat
d. To unload the key, see zfs
key.
--
Darren J Moffat
On 02/21/12 13:27, Edward Ned Harvey wrote:
From: Darren J Moffat [mailto:darr...@opensolaris.org]
Sent: Monday, February 20, 2012 12:46 PM
GRUB2 has support
for encrypted ZFS file systems already.
I assume this requires a pre-boot password, right? Then I have two
questions...
The ZFS
every leaf dataset - you didn't need to do that; it would have been
inherited.
What this means is that even though you have the same passphrase for
each dataset the actual data encryption key is different because the
passphrase value plus the h
ilesystems you call 'base'
and the fsys ones.
What is important here is understanding where the encryption and
keysource properties are set and where they are inherited.
--
Darren J Moffat
ncrypted ZFS file systems already.
--
Darren J Moffat
Import the pool without mounting any file systems.
If it isn't mounted it can't be shared.
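As a sketch of that advice (the pool and dataset names are placeholders), the -N flag to zpool import does exactly this:

```shell
# Import the pool but mount none of its file systems,
# so nothing gets shared over NFS/SMB as a side effect.
zpool import -N tank

# Later, mount (and thereby allow sharing of) datasets selectively.
zfs mount tank/export
```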
--
Darren J Moffat
main pool devices) go for a given
dataset are determined by a combination of things including (but not
limited to) the presence of a SLOG device, the logbias property and the
size of the data.
--
Darren J Moffat
import and read
datasets in a deduped pool just fine. You can't enable dedup on a
dataset, and any writes won't dedup; they will "rehydrate".
So it is more like partial dedup support rather than it not being there
at all.
--
Darren J Moffat
s not familiar with how FreeBSD is installed and boots, can
you explain how boot works (i.e. do you use GRUB at all, and if so which
version, and where the early boot ZFS code is)?
--
Darren J Moffat
ams cannot be received on systems that do not
support the stream deduplication feature.
--
Darren J Moffat
if you get rid of the HBA and log device, and run with ZIL
> disabled (if your work load is compatible with a disabled ZIL.)
By "get rid of the HBA" I assume you mean put in a battery-backed RAID
card instead?
-J
ALWAYS let ZFS manage the redundancy otherwise it can't
self-heal.
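In practice that means building the redundancy with ZFS itself rather than exporting a single hardware-RAID volume; a minimal sketch, with hypothetical device names:

```shell
# Give ZFS whole disks and let it mirror them itself: with
# ZFS-level redundancy, a checksum error on one side can be
# repaired (self-healed) from the good copy on the other.
zpool create tank mirror c1t0d0 c1t1d0

# Report anything unhealthy about the pool
zpool status -x tank
```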
--
Darren J Moffat
On 10/18/11 14:04, Jim Klimov wrote:
2011-10-18 16:26, Darren J Moffat writes:
On 10/18/11 13:18, Edward Ned Harvey wrote:
* btrfs is able to balance. (after adding new blank devices,
rebalance, so
the data& workload are distributed across all the devices.) zfs is not
able to do this yet.
te/usr/src/uts/common/fs/zfs/metaslab.c
See lines 1356-1378
--
Darren J Moffat
to CF adaptors at the time
too.
--
Darren J Moffat
On 10/13/11 09:27, Fajar A. Nugraha wrote:
On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
wrote:
Have you looked at the time-slider functionality that is already in Solaris
?
Hi Darren. Is it available for Solaris 10? I just installed Solaris 10
u10 and couldn't find it.
No it i
the clear.
--
Darren J Moffat
ut that is set one-time in the
SMF service properties.
--
Darren J Moffat
. Can
anybody confirm?
Of course if we didn't do that we would be leaking user data.
2. What happens with L2ARC? Since ARC is not encrypted (in RAM), is
it encrypted when evicted to L2ARC?
Use of the L2ARC is disabled for data from encrypted datasets at this time.
--
Darren J M
On Tue, 6 Sep 2011, Tyler Benster wrote:
It seems quite likely that all of the data is intact, and that something
different is preventing me from accessing the pool. What can I do to
recover the pool? I have downloaded the Solaris 11 express livecd if
that would be of any use.
Try running zd
.
Note the following is an implementation detail subject to change:
It is NOT checksummed on disk, only in memory, but the L2ARC data on
disk is not used after reboot anyway just now.
--
Darren J Moffat
ing a role name of 'myrole' and a ZFS pool called 'tank' it would
be something like this:
# roleadd myrole
# passwd myrole
...
# useradd -R myrole cephas
# zfs allow -u myrole send,receive,snapshot,mount tank
--
Darren J Moffat
Hi Doug,
The "vms" pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a zpool replace on it
afterwards.
If not, you'll need to restore from backup, pref
This might be related to your issue:
http://blog.mpecsinc.ca/2010/09/western-digital-re3-series-sata-drives.html
On Saturday, August 6, 2011, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you
>> describe, sooner or
ar does well in mass storage configs.
-J
Sent via iPhone
Is your email Premiere?
On Aug 6, 2011, at 10:45, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> We have a few servers with WD Black (and some green) drives on Super Micro
> systems. We've seen both drives work well
On 08/05/11 15:09, Richard Elling wrote:
On Aug 5, 2011, at 6:14 AM, Darren J Moffat wrote:
On 08/05/11 13:11, Edward Ned Harvey wrote:
After a certain rev, I know you can set the "sync" property, and it
takes effect immediately, and it's persistent across reboots. But that
d
startup script which applies it to filesystems
other than rpool. Which feels kludgy. Is there a better way?
echo "set zfs:zil_disable = 1" >> /etc/system
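For what it's worth, on releases that do have the per-dataset sync property discussed earlier in the thread, the same effect can be scoped to one filesystem instead of editing /etc/system; 'tank/scratch' is a hypothetical dataset:

```shell
# Disable synchronous write semantics for one dataset only;
# takes effect immediately and persists across reboots.
zfs set sync=disabled tank/scratch

# Check the current value and where it is inherited from
zfs get sync tank/scratch
```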
--
Darren J Moffat
traffic encryption.
Indeed, plus you don't necessarily want to always have your backups
encrypted by the same keys as the live data (ie the policy for key
management and retention could be different on purpose).
--
Darren J Moffat
node(Solaris) needs extra software, is it correct?
I believe so. Also, it is more than just the T1C drive you need: it
needs to be in a library, and you also need the Oracle Key Management
system to be able to do the key management for it.
--
Darren J Moffat
never does update-in-place and UFS only does update-in-place for
Not quite never: there are some very special cases where blocks are
allocated ahead of time and could be written to "in place" more than
once, in particular the special type of ZVOLs used for dump
d what transport layer is.
But basically it is not provided by ZFS itself; it is up to the person
building the system to secure the transport layer used for ZFS send.
It could also be written directly to a T10k encrypting tape drive.
--
Darren J Moffat
only way.
--
Darren J Moffat
vdev layer concept - ie below the DMU layer.
There is nothing in the send stream format that knows what an ashift
actually is.
--
Darren J Moffat
did actually make it slower.
I removed the separate log device from both of those pools (by manual
hacking with specially built zfs kernel modules, because slog removal
didn't exist back then).
--
Darren J Moffat
ew/Community+Group+zfs/31
--
Darren J Moffat
e I'm not commenting about any specific issue here but about the way
your conclusion was written: it doesn't follow, just because the pool and
version numbers are the same, that no zfs/zpool/dedup code was changed.
--
Darren J Moffat
a "write" is when you
save the document, but at the ZPL layer that is multiple write(2) calls
and maybe even some rename(2)/unlink(2)/close(2) calls as well.
If you move further down then doing a snapshot on every dmu_write() call
is fundame
of people
do, and it's good.
Not recommended by whom? Which documentation says this?
As I pointed out last time this came up the NDMP service on Solaris 11
Express and on the Oracle ZFS Storage Appliance uses the 'zfs send'
stream as what is to be stored on
I am running 4 of the 128GB version in our DR environment as L2ARC. I don't
have anything bad to say about them. They run quite well.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tomas Ögren
Sent: Wednesday, June 0
systems support contract with Oracle you should
be able to log a support ticket and request a backport of the fix for CR
6989185.
--
Darren J Moffat
://download.oracle.com/docs/cd/E19963-01/html/821-1462/ndmpd-1m.html
http://download.oracle.com/docs/cd/E19963-01/html/821-1462/ndmpstat-1m.html
What do you mean by supporting it?
I believe (though I haven't tested it) it works with Oracle Secure
Backup as well as NetBackup and Networker.
--
Darren J M
l. Sparse files are a filesystem level
concept that is understood by many filesystems including CIFS and ZFS
and many others.
--
Darren J Moffat
On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> > Is there anyway, yet, to import a pool with corrupted space_map
> > errors, or "zio-io_type != ZIO_TYPE_WRITE" assertions?
>...
> Well, by commenting out the VERIFY line for zio->io_t
access would just be a totally
useless brick).
As an engineer I'm curious: have you actually tried a suitably sized
S7000, or are you assuming it won't perform suitably for you?
--
Darren J Moffat
) *are* storage appliances.
They may be storage appliances, but the user can not put their own
software on them. This limits the appliance to only the features that
Oracle decides to put on it.
Isn't that the very definition of an Appliance ?
--
Darren J Moffat
The fix for 6991788 would probably let the 40 MB drive work, but it would
depend on the asize of the pool.
On Fri, 4 Mar 2011, Cindy Swearingen wrote:
Hi Robert,
We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.
Yes, yo
user in a
zone to do zfs operations on delegated datasets ? Doing this for the
global zone is a little harder but for a local zone it can be done by
extending the 'zfs allow' mechanism.
See:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=7011365
--
Darren J Moffat
shell script is
find/ls/grep. You could write a C program that uses the same method
that ls does to get the attributes but you will still have to visit
every file in the file system.
--
Darren J Moffat
ke if 'foo' was quarantined.
--
Darren J Moffat
up and incremental restore is supported.
This has been tested and is known to work with at least the following
backup applications:
• Oracle Secure Backup 10.3.0.2 and above
• Enterprise Backup Software (EBS) / Legato Networker 7.5 and above
• Symantec NetBackup 6.5.3 and above
--
Darren J Moff
On 07/01/2011 11:56, Sašo Kiselkov wrote:
On 01/07/2011 10:26 AM, Darren J Moffat wrote:
On 06/01/2011 23:07, David Magda wrote:
On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
Fletcher is faster than SHA-256, so I think that must be what you're
asking about: "can Fletcher+Verif
o be quite complicated to fix due to very
early boot issues.
--
Darren J Moffat
und the company that aren't Sun hardware as well as
servers.
--
Darren J Moffat
On 23/12/2010 17:09, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffat wrote:
On 22/12/2010 20:27, Garrett D'Amore wrote:
That said, some operations -- and cryptographic ones in particular --
may use floating point registers and operations because for some
architectures (sun4u
entry/zfs_encryption_what_is_on where I
explain all the type of keys used and how they are generated as well as
how passphrases are turned into AES wrapping keys (using PKCS#5 PBE).
--
Darren J Moffat
nd SHA256.
So those optimisations for floating point don't come into play for ZFS
encryption.
--
Darren J Moffat
art of the problem.
Another alternative to try would be setting primarycache=metadata on the
ZFS dataset that contains the mmap files. That way you are only turning
off the ZFS ARC cache of the file content for that one dataset rather
than clamping the ARC.
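A sketch of that alternative (the dataset name is hypothetical):

```shell
# Cache only metadata in the ARC for the dataset holding the
# mmap'd files; the file content is then no longer cached twice,
# once in the ARC and once in the page cache.
zfs set primarycache=metadata tank/mmapdata

# Confirm the setting
zfs get primarycache tank/mmapdata
```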
--
Darre
case.
$ ppriv -e -s EPIL=basic,!file_write myapp
If it is being started by an SMF service you can remove file_write in
the method_credential section - see smf_method(5).
--
Darren J Moffat
ions start with these resources:
Oracle Solaris 11 Express Trusted Extensions Collection
http://docs.sun.com/app/docs/coll/2580.1?l=en
OpenSolaris Security Community pages on TX:
http://hub.opensolaris.org/bin/view/Community+Group+security/tx
-
he structs necessary for the on disk
format are in the CTF data of the binaries.
http://blogs.sun.com/darren/entry/zfs_encryption_what_is_on
--
Darren J Moffat
On Mon, 6 Dec 2010, Curtis Schiewek wrote:
Hi Mark,
I've tried running "zpool attach media ad24 ad12" (ad12 being the new
disk) and I get no response. I tried leaving the command run for an
extended period of time and nothing happens.
What version of solaris are you running?
y clean up ad24 & ad18
for you.
On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante wrote:
On Fri, 3 Dec 2010, Curtis Schiewek wrote:
NAME STATE READ WRITE CKSUM
media DEGRADED 0 0 0
raidz1 ONLINE 0 0 0
On Fri, 3 Dec 2010, Curtis Schiewek wrote:
NAME        STATE     READ WRITE CKSUM
media       DEGRADED     0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
thread was cross posted to zfs-crypto-discuss).
--
Darren J Moffat
On 01/12/2010 13:36, f...@ll wrote:
I must send a zfs snapshot from one server to another. The snapshot is
130GB. Now I have a question: does zfs have any limit on the size it can send?
No.
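Since there is no size limit, a 130GB snapshot can simply be streamed between the hosts; a common sketch over ssh (pool, dataset, and host names are hypothetical):

```shell
# Full send of one snapshot to another host
zfs send tank/data@snap1 | ssh otherhost zfs receive backup/data

# Subsequent incremental sends move only the changed blocks
zfs send -i tank/data@snap1 tank/data@snap2 | \
    ssh otherhost zfs receive backup/data
```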
--
Darren J Moffat
;t have to plug it back into the original system you could
have just forced the import.
--
Darren J Moffat