Hi Bob,
It is necessary to look at all the factors which might result in data loss before deciding what the most effective steps are to minimize the probability of loss.
Bob
I am under the impression that exactly those were the considerations for both
the ZFS designers to implement a
You can do that in the kernel by calling vnodetopath(). I don't know if it is exposed to user space.
Yes, in /proc/*/path (kinda).
But that could be slow if you have large directories, so you have to think about where you would use it.
The kernel caches file names; however, it cannot be used for
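The cost being alluded to here is visible even from userland: recovering a name from an inode means scanning directory entries, which is linear in the directory size. A small sketch (the temporary directory and file names are made up):

```shell
# Sketch: finding the name for a known inode requires reading every entry
# in the directory -- there is no reverse index on disk.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"
ino=$(ls -i "$dir/b" | awk '{print $1}')
# find(1) has to examine each entry in $dir to locate the matching inode:
find "$dir" -inum "$ino"          # prints $dir/b
rm -rf "$dir"
```

With a few entries this is instant; with a directory of millions of entries each reverse lookup pays the full scan, which is why a name cache matters.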
On May 2, 2010, at 8:47 AM, Steve Staples wrote:
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some redundancy for my files/media. What I am looking to do is get a bunch of 2TB drives, and
I am using a mirrored system pool on two 80G drives; however, I was only using 40G since I thought I might use the rest for something else. ZFS Time Slider was complaining that the pool was 90% full, and I decided to increase the pool size.
What I did was a zpool detach of one of the mirrored hdds and
- Jan Riechers jan.riech...@googlemail.com wrote:
I am using a mirrored system pool on two 80G drives; however, I was only using 40G since I thought I might use the rest for something else. ZFS Time Slider was complaining that the pool was 90% full, and I decided to increase the pool size.
Hi! We're building our first dedicated ZFS-based NAS/SAN (probably using
Nexenta) and I'd like to run the specs by you all to see if you have any
recommendations. All of it is already bought, but it's not too late to add to
it.
Dell PowerEdge R910, 2x Intel X7550 2GHz, 8 cores each, plus
One can rename a zpool on import:
zpool import -f pool_or_id newname
Is there any way to rename it (back again, perhaps)
on export?
(I had to rename rpool in an old disk image to access
some stuff in it, and I'd like to put it back the way it
was so it's properly usable if I ever want to boot
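As far as I know there is no rename-on-export; the rename always happens at import time. So the way to put the old name back is one more export/import cycle (a command sketch -- the pool names follow the example above, and these obviously act on a real pool):

```shell
# There is no rename-on-export; rename by importing under the old name again.
zpool export newname            # whatever name the pool was imported under
zpool import -f newname rpool   # re-import it under its original name
zpool export rpool              # now it is exported carrying the old name
```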
- Richard L. Hamilton rlha...@smart.net wrote:
One can rename a zpool on import:
zpool import -f pool_or_id newname
Is there any way to rename it (back again, perhaps)
on export?
(I had to rename rpool in an old disk image to access
some stuff in it, and I'd like to put it back the
From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf
Of casper@sun.com
It is certainly possible to create a .zfs/snapshot_byinode, but it is not clear when it helps; it can be used for finding the earlier copy of a directory (netapp/.snapshot).
Do you happen to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Steve Staples
My problem is that not all 2TB hard drives are the same size (even though they should be 2 trillion bytes, there is still sometimes a +/- (I've only noticed this 2x so
On May 2, 2010, at 8:47 AM, Steve Staples wrote:
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some redundancy for my files/media. What I am looking to do is get a bunch of 2TB drives,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Steve Staples
My problem is that not all 2TB hard drives are the same size (even though they should be 2 trillion bytes, there is still sometimes a +/- (I've only noticed this 2x
Hi all
Testing variable size 'disks' in mirror, I followed Victor Latushkin's example
bash-4.0# mkfile -n 2 d0
bash-4.0# zpool create pool $PWD/d0
bash-4.0# mkfile -n 1992869543936 d1
bash-4.0# zpool attach pool $PWD/d0 $PWD/d1
and so on - this works well. Now, to try to mess with
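The reason the second file's exact size matters when attaching: a mirror is only as large as its smallest member, so a slightly smaller "2TB" device may not attach to a vdev already sized for a bigger one. A sketch with sparse files (using truncate(1) where mkfile(1M) isn't available; the two byte counts are plausible but made-up vendor "2TB" capacities, and no real pool is created):

```shell
# A mirror's usable size is that of its smallest member, which is why
# attaching a slightly smaller "2TB" device can fail.
truncate -s 2000398934016 d0          # one vendor's "2TB"
truncate -s 1992869543936 d1          # another "2TB", about 7 GB smaller
stat -c %s d0 d1 | sort -n | head -1  # the mirror could only be this big
rm -f d0 d1
```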
I am currently using OpenSolaris 2009.06
If I were to upgrade to the current developer version, forgive my ignorance (since I am new to *solaris), but how would I do this?
# pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg image-update
That'll take you to snv_134 or
- Ian D rewar...@hotmail.com wrote:
Hi! We're building our first dedicated ZFS-based NAS/SAN (probably using
Nexenta) and I'd like to run the specs by you all to see if you have any
recommendations. All of it is already bought, but it's not too late to add to
it.
Dell PowerEdge R910
On Sun, 2 May 2010, Tonmaus wrote:
I am under the impression that exactly those were the considerations for both the ZFS designers to implement a scrub function in ZFS and the author of the Best Practices guide to recommend performing this function frequently. I am hearing you are coming to a
Hello,
thanks for the feedback and sorry for the delay in answering.
I checked the log and fmadm. It seems the log does not show changes; however, fmadm shows:
Apr 23 2010 18:32:26.363495457 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 23 2010 18:32:26.363482031
On Sun, May 2, 2010 at 3:51 PM, Jan Riechers jan.riech...@googlemail.com wrote:
On Sun, May 2, 2010 at 6:06 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
- Jan Riechers jan.riech...@googlemail.com wrote:
I am using a mirrored system pool on 2 80G drives - however I was only
using
On May 2, 2010, at 10:27 AM, Jan Riechers wrote:
On Sun, May 2, 2010 at 3:51 PM, Jan Riechers jan.riech...@googlemail.com
wrote:
On Sun, May 2, 2010 at 6:06 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net
wrote:
- Jan Riechers jan.riech...@googlemail.com wrote:
I am using a mirrored
On May 1, 2010, at 1:56 PM, Bob Friesenhahn wrote:
On Fri, 30 Apr 2010, Freddie Cash wrote:
Without a periodic scrub that touches every single bit of data in the pool, how can you be sure that 10-year-old files that haven't been opened in 5 years are still intact?
You don't. But it seems that
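A userland sketch of what a scrub buys you: record checksums at write time, then re-read everything later and verify. Here sha256sum stands in for ZFS's per-block checksums, and the directory and file names are made up:

```shell
# "Scrub" in miniature: verify old data against checksums stored at write
# time, and notice silent corruption that nothing else would ever read.
dir=$(mktemp -d) && cd "$dir"
echo "ten-year-old archive data" > old-file
sha256sum old-file > SUMS              # recorded when the data was written
sha256sum -c SUMS                      # re-read and verify: "old-file: OK"
printf 'X' | dd of=old-file bs=1 seek=3 conv=notrunc 2>/dev/null  # bit rot
sha256sum -c SUMS || echo "scrub found corruption"
cd - >/dev/null && rm -rf "$dir"
```

The point of the thread stands either way: without some process that re-reads the data, the stored checksum never gets compared against anything.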
- Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and
currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on
the testpool drops to something hardly usable while scrubbing the
pool.
How can I address this?
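One blunt workaround, assuming snv_134 offers no scrub-throttle tunable you can reach: stop the scrub while NFS clients are active and rerun it off-hours. A command sketch, using the pool name from the post above:

```shell
# zpool scrub -s stops a scrub in progress; reschedule it for quiet hours.
zpool scrub -s testpool       # stop the running scrub
zpool status testpool         # should now report the scrub as stopped
# e.g. in root's crontab, rerun early Sunday morning:
# 0 2 * * 0  /usr/sbin/zpool scrub testpool
```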
Hi guys
I am new to the OpenSolaris and ZFS world. I have 6x2TB SATA HDDs on my system; I picked a single 2TB disk and installed OpenSolaris (therefore the zpool was created by the installer).
I went ahead and created a new pool, gpool, with raidz (the kind of redundancy I want). Here's the output:
You can't get rid of rpool. That's the pool you're booting from. Root
pools can only be single disks or n-way mirrors.
As to your other question, you can view the snapshots by using the
command zfs list -t all, or turn on the listsnaps property for the
pool. See
- Giovanni g...@csu.fullerton.edu wrote:
Hi guys
I am new to the OpenSolaris and ZFS world. I have 6x2TB SATA HDDs on my system; I picked a single 2TB disk and installed OpenSolaris (therefore the zpool was created by the installer). I went ahead and created a new pool gpool with raidz (the
On Sun, 2 May 2010, Richard Elling wrote:
These calculations are based on fixed MTBF, but disk MTBF decreases with age. Most disks are only rated at 3-5 years of expected lifetime. Hence, archivists use solutions with longer lifetimes (high-quality tape = 30 years) and plans for migrating the
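To put rough numbers on the fixed-MTBF caveat: under the usual constant-failure-rate approximation, annualized failure rate is simply hours-per-year divided by MTBF, and it compounds across drives even before aging makes the real rate worse. A back-of-envelope sketch (the 1,000,000-hour MTBF is an illustrative rating, not a measured value):

```shell
# AFR = 8760 / MTBF; chance of at least one of n independent drives
# failing in a year is 1 - (1 - AFR)^n.
awk 'BEGIN {
  mtbf = 1000000                        # rated MTBF in hours
  afr  = 8760 / mtbf                    # failures per drive-year (~0.9%)
  printf "per-drive AFR: %.4f\n", afr
  printf "P(any of 8 fails in a year): %.4f\n", 1 - (1 - afr)^8
}'
```

Even at face value that is about a 7% chance per year of losing some drive in an 8-disk pool, which is the argument for redundancy plus scrubbing rather than either alone.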
On 5/2/10 3:12 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On the flip-side, using 'zpool scrub' puts more stress on the system
which may make it more likely to fail. It increases load on the power
supplies, CPUs, interfaces, and disks. A system which might work fine
under normal
On Sun, 2 May 2010, Roy Sigurd Karlsbakk wrote:
Any guidance on how to do it? I tried to do zfs snapshot
You can't boot off raidz. That's for data only.
Unless you use FreeBSD ...
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
Thank you. I was not aware that root pools could not be moved.
But here's the kicker: what if I have a single drive for the root pool, and it's failing... I connect a new HDD to replace the boot drive that's dying; ZFS has no way of migrating to a new drive?
Thanks
Thanks
--
This message posted from
On Sun, 2 May 2010, Dave Pooser wrote:
If my system is going to fail under the stress of a scrub, it's going to
fail under the stress of a resilver. From my perspective, I'm not as scared
I don't disagree with any of the opinions you stated except to point
out that resilver will usually hit
On Sun, 2 May 2010, Giovanni wrote:
Thank you. I was not aware that root pools could not be moved.
But here's the kicker: what if I have a single drive for the root pool, and it's failing... I connect a new HDD to replace the boot drive that's dying; ZFS has no way of migrating to a new drive?
You do know that OpenSolaris + VirtualBox can trash your ZFS RAID? You can lose your data. There is a post about write cache and ZFS and VirtualBox; I think you need to disable it?
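The post being half-remembered here is presumably about VirtualBox ignoring guest disk-flush requests by default, which defeats ZFS's write-ordering assumptions. The VirtualBox manual documents a per-disk IgnoreFlush extradata setting; the VM name and controller path below are examples (this one targets the first disk on the IDE controller):

```shell
# Make VirtualBox honor guest flush requests (0 = do not ignore flushes).
VBoxManage setextradata "MyVM" \
  "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0
```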
On Sun, May 2, 2010 at 1:55 PM, Giovanni g...@csu.fullerton.edu wrote:
But here's the kicker: what if I have a single drive for the root pool, and it's failing... I connect a new HDD to replace the boot drive that's dying; ZFS has no way of migrating to a new drive?
You can move root pools, I did
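For completeness, the usual OpenSolaris procedure is to mirror the root pool onto the new disk, make it bootable, and then detach the old one. A command sketch with hypothetical device names (GRUB assumed; on SPARC it would be installboot instead):

```shell
# Migrate a single-disk root pool to a new drive via a temporary mirror.
zpool attach rpool c0t0d0s0 c0t1d0s0    # attach the new disk as a mirror
zpool status rpool                      # wait until the resilver completes
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
zpool detach rpool c0t0d0s0             # drop the failing disk
```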
On May 2, 2010, at 12:05 PM, Roy Sigurd Karlsbakk wrote:
- Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and
currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on
the testpool drops to something hardly
On 2010-May-02 04:06:41 +0800, Diogo Franco diogomfra...@gmail.com wrote:
regular data corruption and then the box locked up. I had also
converted the pool to v14 a few days before, so the freebsd v13 tools
couldn't do anything to help.
Note that ZFS v14 was imported to FreeBSD 8-stable in
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
bash-4.0# mkfile -n 2 d0
bash-4.0# zpool create pool $PWD/d0
bash-4.0# mkfile -n 1992869543936 d1
bash-4.0# zpool attach pool $PWD/d0 $PWD/d1
As long as
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Sent: Sunday, May 02, 2010 11:55 AM
I am currently using OpenSolaris 2009.06
If I were to upgrade to the current developer version, forgive my ignorance (since I am new to *solaris), but how would I do this?
# pkg set-publisher
From: Steve Staples [mailto:thestapler...@gmail.com]
I am currently using OpenSolaris 2009.06
If I were to upgrade to the current developer version, forgive my ignorance (since I am new to *solaris), but how would I do this?
If you go to genunix.org (using the URL in my previous email) you