Ray Clark webcl...@rochester.rr.com wrote:
The April 2009 ZFS Administration Guide states ...tar and cpio commands,
to save ZFS files. All of these utilities save and restore ZFS file
attributes and ACLs.
Be careful, Sun tar and Sun cpio do not support sparse files.
Jörg
--
Hello all,
I have a critical ZFS problem, quick history
I have a production machine whose backplane has burnt (literally); it
had 2 pools: applis and storage. Those pools are RAIDZ1 + 1 spare.
Then we switched to the backup one; all went fine.
The backup machine is an exact replica of the production one,
One of the disks in my RAIDZ array was behaving oddly (lots of bus errors) so I
took it offline to replace it. I shut down the server, put in the replacement
disk, and rebooted. Only to discover that a different drive had chosen that
moment to fail completely. So I replace the failing (but not
On Wednesday 30 September 2009 at 11:43 +0200, Nicolas Szalay wrote:
Hello all,
I have a critical ZFS problem, quick history
[snip]
Little addition: zdb -l /dev/rdsk/c7t0d0 sees the metadata.
Isn't it just the phys_path that is wrong?
LABEL 0
Ray Clark wrote:
When using zfs send/receive to do the conversion, the receive creates a new
file system:
zfs snapshot zfs01/h...@before
zfs send zfs01/h...@before | zfs receive afx01/home.sha256
Where do I get the chance to zfs set checksum=sha256 on the new file system
before all of
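A minimal sketch of one way to approach this, assuming (as elsewhere in the thread) that the pool is zfs01 and the source file system is named home; the exact dataset names here are illustrative. A newly received file system inherits properties not carried in the stream from its parent, so setting checksum on the parent before the receive should mean the received blocks are written with sha256:

# assumption: zfs01 is the pool, zfs01/home the source file system
zfs set checksum=sha256 zfs01          # the received child will inherit this
zfs snapshot zfs01/home@before
zfs send zfs01/home@before | zfs receive zfs01/home.sha256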
On 30.09.09 14:30, Nicolas Szalay wrote:
On Wednesday 30 September 2009 at 11:43 +0200, Nicolas Szalay wrote:
Hello all,
I have a critical ZFS problem, quick history
[snip]
Little addition: zdb -l /dev/rdsk/c7t0d0 sees the metadata.
What does zdb -l /dev/rdsk/c7t0d0s0 show?
Victor
Check S10 U8 SRT; as I remember there is a way to add a cache device to
a pool.
On 09/29/09 18:23, Ted Ward wrote:
Hello Claire.
That feature is in OpenSolaris but not regular Solaris 10
(http://www.opensolaris.org/os/community/zfs/version/10/):
ZFS Pool
Version 10
This page
On Tue, Sep 29, 2009 at 7:28 AM, rwali...@washdcmail.com wrote:
On Sep 29, 2009, at 2:41 AM, Eugen Leitl wrote:
On Mon, Sep 28, 2009 at 06:04:01PM -0400, Thomas Burgess wrote:
Personally I like this case:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
it's got 20 hot
I am looking to use OpenSolaris/ZFS to create an iSCSI SAN to provide storage
for a collection of virtual systems and replicate to an offsite device.
While testing the environment I was surprised to see the size of the
incremental snapshots, which I need to send/receive over a WAN connection,
On 09/29/09 10:23 PM, Marc Bevand wrote:
If I were you I would format every 1.5TB drive like this:
* 6GB slice for the root fs
As noted in another thread, 6GB is way too small. Based on
actual experience, an upgradable rpool must be more than
20GB. I would suggest at least 32GB; out of 1.5TB
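For reference, a quick way to see how much the root pool is actually using and how much headroom is left (rpool is the usual default name, assumed here):

zfs list -o name,used,avail -r rpool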
It appears that I have waded into a quagmire. Every option I can find
(cpio, tar (Many versions!), cp, star, pax) has issues. File size and
filename or path length, and ACLs are common shortfalls. Surely there is
an easy answer, he says naively!
I simply want to copy one zfs filesystem
Also, one of those drives will need to be the boot drive. (Even if it's
possible I don't want to boot from the data drive; I need to keep it focused
on video storage.) So it'll end up being 11 drives in the raid-z.
FWIW, most enclosures like the
One of the disks in my RAIDZ array was behaving oddly (lots of bus errors)
so I took it offline to replace it. I shut down the server, put in the
replacement disk, and rebooted. Only to discover that a different drive
had chosen that moment to fail completely. So I replace the failing (but
On Sep 30, 2009, at 5:48 AM, Brian Hubbleday wrote:
I am looking to use Opensolaris/ZFS to create an iscsi SAN to
provide storage for a collection of virtual systems and replicate to
an offsite device.
While testing the environment I was surprised to see the size of the
incremental
I took binary dumps of the snapshots taken in between the edits, and this showed
that there was actually very little change in the block structure; however, the
incremental snapshots were very large. So the conclusion I draw from this is
that the snapshot simply contains every written block since
Just realised I missed out a rather important word there, which could confuse.
So the conclusion I draw from this is that the --incremental-- snapshot simply
contains every written block since the last snapshot regardless of whether the
data in the block has changed or not.
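A rough way to check this yourself, with hypothetical dataset and snapshot names; the pipe into wc -c measures the actual size of the incremental stream in bytes:

zfs list -t snapshot -o name,used,referenced
zfs send -i tank/vol@snap1 tank/vol@snap2 | wc -c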
Somewhat hairy, but interesting. FYI.
https://sourceforge.net/apps/phpbb/freenas/viewtopic.php?f=97&t=1902
--
Eugen* Leitl <leitl> http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com
I had a 50mb zfs volume that was an iscsi target. This was mounted into a
Windows system (ntfs) and shared on the network. I used notepad.exe on a remote
system to add/remove a few bytes at the end of a 25mb file.
On Wed, September 30, 2009 07:14, Thomas Burgess wrote:
For the money, it's a much better option. You'll be able to afford many
more drives. In my opinion, for a home system, the more you can save on
the case and power supply, the more hard drives you can buy. Right now 1 TB
and 1.5 TB
On Wed, September 30, 2009 08:21, p...@paularcher.org wrote:
It appears that I have waded into a quagmire. Every option I can find
(cpio, tar (Many versions!), cp, star, pax) has issues. File size and
filename or path length, and ACLs are common shortfalls. Surely there is
an easy answer
On Wed, Sep 30, 2009 at 10:48 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Wed, September 30, 2009 07:14, Thomas Burgess wrote:
For the money, it's a much better option. you'll be able to afford many
more drives. In my opinion, for a home system, the more you can save on
the
case and
David Dyer-Bennet wrote:
And I haven't been able to make incremental replication send/receive work.
Supposed to be working on that, but now I'm having trouble getting a
VirtualBox install that works (my real NAS is physical, but I'm using
virtual systems to test things).
I've had good
Heh :-) Disk usage is directly related to available space.
At home I have a 4x1Tb raidz filled to overflowing with music, photos,
movies, archives, and backups for 4 other machines in the house. I'll
be adding another 4 and an SSD shortly.
It starts with importing CDs into iTunes or WMP,
On Sep 30, 2009, at 10:40 AM, Brian Hubbleday b...@delcam.com wrote:
Just realised I missed a rather important word out there, that could
confuse.
So the conclusion I draw from this is that the --incremental--
snapshot simply contains every written block since the last snapshot
It costs more, but a WAN accelerator (Cisco WAAS, Riverbed, etc.) would be a
big help.
Scott
On Wed, September 30, 2009 10:07, Robert Thurlow wrote:
David Dyer-Bennet wrote:
And I haven't been able to make incremental replication send/receive
work.
Supposed to be working on that, but now I'm having trouble getting a
VirtualBox install that works (my real NAS is physical, but I'm
Many sysadmins recommend raidz2. The reason is, if a drive breaks and you have
to rebuild your array, it will take a long time with a large drive. With a 4TB
drive or larger, it could take a week to rebuild your array! During that week,
there will be heavy load on the rest of the drives, which
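For anyone following along, a double-parity vdev is created roughly like this (disk names are made up for illustration); it keeps redundancy even while one failed disk is being resilvered:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0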
Requires a login...
Just remove the s in https:// and you can read it.
On Wed, Sep 30, 2009 at 12:11 PM, Scott Meilicke
scott.meili...@craneaerospace.com wrote:
Requires a login...
Frank Middleton f.middleton at apogeect.com writes:
As noted in another thread, 6GB is way too small. Based on
actual experience, an upgradable rpool must be more than
20GB.
It depends on how minimal your install is.
The OpenSolaris install instructions recommend 8GB minimum; I have
one
Depending on the data content you're dealing with, you can compress the
snapshots inline with the send/receive operations by piping the data
through gzip. Given that we've been talking about 500MB text files,
this seems to be a very likely solution. There was some mention in the
Kernel
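A minimal sketch of that idea, with made-up dataset and host names; gzip compresses the incremental stream before it crosses the WAN and gunzip undoes it on the far side:

zfs send -i tank/vm@snap1 tank/vm@snap2 | gzip -c | \
    ssh backuphost 'gunzip -c | zfs receive -F tank/vm'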
I had a ZFS partition of around 1.37 TB, written using zfs113 for Mac. Then,
under FreeBSD 7.2, following a guide on the wiki, I ran 'zpool create trunk',
which ended up overwriting the partition. Now the question is how to recover
the partition, or the data from it? Thanks
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
I made a typo... I only have one pool. I should have typed:
zfs snapshot zfs01/h...@before
zfs send zfs01/h...@before | zfs receive zfs01/home.sha256
Does that change the answer?
And independently if it does or not, zfs01 is a pool, and the property is on
the home zfs file system.
I
Ray Clark wrote:
I made a typo... I only have one pool. I should have typed:
zfs snapshot zfs01/h...@before
zfs send zfs01/h...@before | zfs receive zfs01/home.sha256
Does that change the answer?
No it doesn't change my answer
And independently if it does or not, zfs01 is a pool,
On 09/30/09 12:59 PM, Marc Bevand wrote:
It depends on how minimal your install is.
Absolutely minimalist install from live CD subsequently updated
via pkg to snv111b. This machine is an old 32 bit PC used now
as an X-terminal, so doesn't need any additional software. It
now has a bigger
I have a raidz2 pool on an x4500 running Solaris 10 update 7.
One of the drives has been replaced with a spare (too many errors), but
the resilver restarts every time data is replicated to the pool with zfs
receive.
I thought this problem was fixed long ago?
--
Ian.
On Wed, 30 Sep 2009 11:01:13 PDT, Carson Gaspar
carson.gas...@gmail.com wrote:
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
Perhaps you can try some subcommand of cfgadm to get c7t0d0
online, then import the
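In case it helps, the usual sequence when a pool is not yet imported is roughly this (pool and device names taken from the thread):

zpool import                 # list pools visible on attached devices
zpool import media           # import it if it shows up
zpool online media c7t0d0    # then the online command has a pool to act on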
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
D'oh! Of course, I should have been paying attention to the fact that the
pool wasn't imported.
My guess is that if
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
D'oh! Of course, I should have been paying attention
to the fact that the
pool wasn't imported.
My
On Wed, 30 Sep 2009 11:01:13 PDT, Carson Gaspar
carson.gas...@gmail.com wrote:
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
Perhaps you can try some subcommand of cfgadm to get
c7t0d0
online, then
Carson Gaspar wrote:
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
D'oh! Of course, I should have been paying attention
to the fact that the
pool wasn't imported.
On Tue, Sep 29, 2009 at 11:46 PM, Cyril Plisko
cyril.pli...@mountall.com wrote:
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson henr...@henkis.net wrote:
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online now:
We have a production server which does nothing but nfs from zfs. This
particular machine has plenty of free memory. Blogs and documentation state
that ZFS will use as much memory as is necessary, but how is "necessary"
calculated? If the memory is free and unused, would it not be beneficial to
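If it helps, the current and target ARC sizes can be read from the arcstats kstats (values are in bytes):

kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max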
Dynamite!
I don't feel comfortable leaving things implicit. That is how
misunderstandings happen.
Would you please acknowledge that zfs send | zfs receive uses the checksum
setting on the receiving pool instead of preserving the checksum algorithm used
on the sending side?
Thanks a
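One way to see what the new file system ended up with, assuming the dataset name from the earlier command; note this reports the property governing writes, which, as far as I understand, is also what was in effect while the receive wrote the blocks:

zfs get checksum zfs01/home.sha256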
On 30-Sep-09, at 10:48 AM, Brian Hubbleday wrote:
I had a 50mb zfs volume that was an iscsi target. This was mounted
into a Windows system (ntfs) and shared on the network. I used
notepad.exe on a remote system to add/remove a few bytes at the end
of a 25mb file.
I'm astonished that's
Sinking feeling...
zfs01 was originally created with fletcher2. Doesn't this mean that the sort
of root-level stuff in the zfs pool exists with fletcher2 and so is not well
protected?
If so, is there a way to fix this short of a backup and restore?
Hi,
I'm using a Sun Unified Storage 7410 cluster, on which we access CIFS shares
from WinXP and Win2000 clients.
If we map a CIFS share on the 7410 to a drive letter on a WinXP client, we
observe that when we do a `dir` from a DOS box on the mapped drive, the files
are shown in a seemingly
Ross Walker wrote:
On Sep 30, 2009, at 10:40 AM, Brian Hubbleday b...@delcam.com wrote:
Just realised I missed a rather important word out there, that could
confuse.
So the conclusion I draw from this is that the --incremental--
snapshot simply contains every written block since the last
Victor Latushkin wrote:
Carson Gaspar wrote:
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
D'oh! Of course, I should have been paying attention
to the fact that
Carson Gaspar wrote:
Victor Latushkin wrote:
Carson Gaspar wrote:
is zdb happy with your pool?
Try e.g.
zdb -eud poolname
I'm booted back into snv118 (booting with the damaged pool disks
disconnected so the host would come up without throwing up). After hot
plugging the disks, I get:
Carson Gaspar wrote:
Carson Gaspar wrote:
Victor Latushkin wrote:
Carson Gaspar wrote:
is zdb happy with your pool?
Try e.g.
zdb -eud poolname
I'm booted back into snv118 (booting with the damaged pool disks
disconnected so the host would come up without throwing up). After hot
On Mon, Sep 28, 2009 at 1:12 PM, Ware Adams rwali...@washdcmail.com wrote:
SuperMicro 7046A-3 Workstation
http://supermicro.com/products/system/4U/7046/SYS-7046A-3.cfm
I'm using a SuperChassis 743TQ-865B-SQ for my home NAS, which is what
that workstation uses. It's very LARGE and very quiet.
I might have this mentioned already on the list and can't find it now,
or I might have misread something and come up with this ...
Right now, using hot spares is a typical method to increase storage
pool resiliency, since it minimizes the time that an array is
degraded. The downside is that
I have a raidz2 pool on an x4500 running Solaris 10 update 7.
One of the drives has been replaced with a spare (too many errors), but
the resilver restarts every time data is replicated
to the pool with zfs receive.
I thought this problem was fixed long ago?
The bug was reported as
On Wed, Sep 30, 2009 at 7:06 PM, Brandon High bh...@freaks.com wrote:
I might have this mentioned already on the list and can't find it now,
or I might have misread something and come up with this ...
Right now, using hot spares is a typical method to increase storage
pool resiliency, since
Brandon High wrote:
I might have this mentioned already on the list and can't find it now,
or I might have misread something and come up with this ...
Right now, using hot spares is a typical method to increase storage
pool resiliency, since it minimizes the time that an array is
degraded. The
Brandon,
Yes, this is something that should be possible once we have bp rewrite (the
ability to move blocks around). One minor downside to hot space would be
that it couldn't be shared among multiple pools the way that hot spares can.
Also depending on the pool configuration, hot space may
Erik Trimble wrote:
From a global perspective, multi-disk parity (e.g. raidz2 or raidz3) is
the way to go instead of hot spares.
Hot spares are useful for adding protection to a number of vdevs, not a
single vdev.
Even when using raidz2 or 3, it is useful to have hot spares so that
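As a reminder of the mechanics, a spare is added to the pool (not to a single vdev) and can therefore cover any vdev in that pool; the device name below is illustrative:

zpool add tank spare c2t0d0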
I too went with a 5in3 case for HDDs, in a nice portable Mini-ITX case, with
Intel Atom. More of a SOHO NAS for home use, rather than a beast. Still, I can
get about 10TB in it.
http://lundman.net/wiki/index.php/ZFS_RAID
I can also recommend the embeddedSolaris project for making a small
On Wed, Sep 30, 2009 at 10:54 PM, David Dyer-Bennet d...@dd-b.net wrote:
On Wed, September 30, 2009 10:07, Robert Thurlow wrote:
David Dyer-Bennet wrote:
And I haven't been able to make incremental replication send/receive
work.
Supposed to be working on that, but now I'm having trouble
Fajar A. Nugraha wrote:
Are you using x86 or sparc? solaris or opensolaris?
If opensolaris on x86, you can use xvm (xen) to achieve the same
functionality as virtualbox.
If sparc T series, you can use LDOM.
x86, OpenSolaris. But I'm not terribly attracted to the idea of
switching to
On Thu, Oct 1, 2009 at 8:46 AM, David Dyer-Bennet d...@dd-b.net wrote:
Fajar A. Nugraha wrote:
x86, OpenSolaris. But I'm not terribly attracted to the idea of switching
to another, less familiar, virtualization product in hopes that it will
work. I really rather expected Sun's
Joerg, thanks. As you (of all people) know, this area is quite a quagmire. I
am confident that I don't have any sparse files, or if I do that they are small
and losing this property would not be a big impact. I have determined that
none of the files have extended attributes or ACLs. Some
Carson Gaspar wrote:
I'll also note that the kernel is certainly doing _something_ with my
pool... from iostat -n -x 5:
extended device statistics
r/s   w/s   kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
40.5  5.4  1546.4  0.0   0.0   0.3     0.0     7.5   0
I looked at possibly doing one of those too, but only 5 disks was too
small for me, and I was too nervous about compatibility with mini-ITX
stuff.
On Wed, Sep 30, 2009 at 6:22 PM, Jorgen Lundman lund...@gmo.jp wrote:
I too went with a 5in3 case for HDDs, in a nice portable Mini-ITX case, with
Carson Gaspar wrote:
Carson Gaspar wrote:
I'll also note that the kernel is certainly doing _something_ with my
pool... from iostat -n -x 5:
extended device statistics
r/s   w/s   kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
40.5  5.4  1546.4  0.0   0.0
On Sep 30, 2009, at 6:03 PM, Matthew Ahrens wrote:
Erik Trimble wrote:
From a global perspective, multi-disk parity (e.g. raidz2 or
raidz3) is the way to go instead of hot spares.
Hot spares are useful for adding protection to a number of vdevs,
not a single vdev.
Even when using raidz2