Thanks for your suggestions. Maybe we can come to a satisfactory conclusion
together?
I have a freshly installed b95. With a new install of b90 or so, everything
worked fine.
If I wait for OpenSolaris in October, then maybe I can access my ZFS RAID? Or
should I sell my computer and get an old
Without more specifics it's hard to suggest more. You say it's a P45
motherboard, which one? Is it on the Solaris HCL?
What exact problems are you having when you log in with a clean install of b95?
You say everything becomes completely unstable, but what exactly do you mean by
everything?
Hi,
Is there an easy way to display only the names of the disks taking part in a
zpool?
zpool status |awk '/c.t./{print $1}'
I am wondering if it's enough, since disks can be named differently than cXtY!
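A slightly more defensive variant (just a sketch; it still parses zpool status,
and "tank" is a placeholder pool name) keys off the config section instead of
matching cXtY patterns, so it also catches devices with other names:

pool=tank
zpool status "$pool" | nawk -v pool="$pool" '
    /^errors:/  { exit }                  # end of the config section
    /^config:/  { inconf = 1; next }      # device table starts after this
    !inconf || NF == 0 { next }           # skip everything before it, and blanks
    $1 == "NAME" { next }                 # column header
    $1 == pool   { next }                 # the pool line itself
    $1 ~ /^(mirror|raidz|log|cache|spares)/ { next }   # vdev group lines
    { print $1 }                          # anything left is a leaf device
'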
Thanks for your help.
@del.
Thanks, not as much as I was hoping for but still extremely helpful.
Can you, or others, have a look at this: http://cuddletech.com/arc_summary.html
This is a Perl script that uses kstats to drum up a report such as the
following:
System Memory:
Physical RAM: 32759 MB
Free
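If you want to sanity-check the numbers the script reports, the ARC counters it
reads are exposed through the zfs:0:arcstats kstat; for example (values are in
bytes):

# current ARC size, target size, and maximum size
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max

# quick conversion of the current ARC size to MB
kstat -p zfs:0:arcstats:size | nawk '{ printf "ARC size: %d MB\n", $2 / 1048576 }'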
Ian Collins wrote:
Al Hopper writes:
Interesting thread - thanks to all the contributors. I've seen, on
several different forums, that many CF users lean towards Sandisk for
reliability and longevity. Does anyone else see consensus in terms of
CF brands?
The people to ask are probably
Hello Aaron,
Wednesday, August 20, 2008, 7:11:01 PM, you wrote:
All,
I'm currently working out details on an upgrade from UFS/SDS on DAS to ZFS on a SAN fabric. I'm interested in hearing how ZFS has behaved in more traditional SAN environments using gear that scales vertically like EMC
I don't think the motherboard is on the HCL. But everything worked fine in b90.
I realize I haven't provided all the necessary info. Here is more info.
http://www.opensolaris.org/jive/thread.jspa?threadID=69654tstart=0
The thing is, I've upgraded ZFS to the newest version with b95. And b95 is very
- Original Message -
From: Robert Milkowski [EMAIL PROTECTED]
Date: Thursday, August 21, 2008 5:47 am
Subject: Re: [zfs-discuss] ZFS with Traditional SAN
To: Aaron Blew [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Hello Aaron,
Wednesday, August 20, 2008, 7:11:01 PM, you
Orvar Korvar wrote:
I don't think the motherboard is on the HCL. But everything worked fine in
b90.
I realize I haven't provided all the necessary info. Here is more info.
http://www.opensolaris.org/jive/thread.jspa?threadID=69654tstart=0
The thing is, I've upgraded ZFS to the newest version
Yes, I read that thread.
The problem is that nothing you're describing sounds like a problem with ZFS. You
say the internet dies suddenly, but what do you mean? Is Firefox crashing, are
pages just not loading any more?
Also, you're still saying Wine dies on startup. That's not a standard part of
On 21 August, 2008 - Ben Rockwood sent me these 2,2K bytes:
Thanks, not as much as I was hoping for but still extremely helpful.
Can you, or others, have a look at this: http://cuddletech.com/arc_summary.html
This is a Perl script that uses kstats to drum up a report such as the
Can ADM ease the pain by migrating data only from one pool to the other? I
know it's not what most of you want but...
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]
Brian Wilson wrote:
- Original Message -
From: Robert Milkowski [EMAIL PROTECTED]
Date: Thursday, August 21, 2008 5:47 am
Subject: Re: [zfs-discuss] ZFS with Traditional SAN
To: Aaron Blew [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Hello Aaron,
Wednesday, August 20,
On Thu, Aug 21, 2008 at 11:46:47AM +0100, Robert Milkowski wrote:
Wednesday, August 20, 2008, 7:11:01 PM, you wrote:
I'm currently working out details on an upgrade from UFS/SDS on DAS to
ZFS on a SAN fabric. I'm interested in hearing how ZFS has behaved in
more
Hi, this is my first post to this list!
I have OpenSolaris (snv_95) installed on my laptop (single SATA disk)
and today I updated my pool with:
# zpool -V 11 -a
and then I started a scrub on the pool with:
# zpool scrub rpool
and it found this (after 80% of the scrub was done):
# zpool status
That's the one that's been an issue for me and my customers - they
get billed back for GB allocated to their servers by the back end
arrays.
To be more explicit about the 'self-healing properties' -
To deal with any fs corruption situation that would traditionally
require an fsck on
On 08/21/08 14:55, Luca Morettoni wrote:
Hi, this is my first post to this list!
I have OpenSolaris (snv_95) installed on my laptop (single SATA disk)
and today I updated my pool with:
# zpool -V 11 -a
sorry, the command was:
# zpool upgrade -V 11 -a
I read that upgrade 11 is about
I have OpenSolaris (snv_95) installed on my laptop (single SATA disk)
and today I updated my pool with:
# zpool -V 11 -a
and after I start a scrub into the pool with:
# zpool scrub rpool
# zpool status -vx
NAME        STATE     READ WRITE CKSUM
rpool
On 08/21/08 17:26, Jürgen Keil wrote:
Looks like bug 6727872, which is fixed in build 96.
http://bugs.opensolaris.org/view_bug.do?bug_id=6727872
that pool contains normal OpenSolaris mountpoints; what do you mean
about unmounting and remounting it?
Do I need to do this with a live CD?
Thanks and
On 08/21/08 17:26, Jürgen Keil wrote:
Looks like bug 6727872, which is fixed in build 96.
http://bugs.opensolaris.org/view_bug.do?bug_id=6727872
that pool contains normal OpenSolaris mountpoints,
Did you upgrade the opensolaris installation in the past?
AFAIK the opensolaris upgrade
On 08/21/08 17:45, Jürgen Keil wrote:
Did you upgrade the opensolaris installation in the past?
sure :)
The bug happens with unmounted filesystems, so you
need to mount them first, then umount.
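If I understand the workaround correctly, something like this should do it
(just a sketch; "rpool/export/home-old" is only a placeholder for whichever
dataset is normally unmounted):

# mount everything, unmount the normally-unmounted dataset again,
# then re-run the scrub
zfs mount -a
zfs unmount rpool/export/home-old
zpool scrub rpool
zpool status -v rpool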
OK, now it's all clear; I'll try this and then scrub again...
Thanks for the help!
--
* Orvar Korvar ([EMAIL PROTECTED]) wrote:
I don't think the motherboard is on the HCL. But everything worked fine in
b90.
I realize I haven't provided all the necessary info. Here is more info.
http://www.opensolaris.org/jive/thread.jspa?threadID=69654tstart=0
The thing is, I've upgraded ZFS
| The errant command which accidentally adds a vdev could just as easily
| be a command which scrambles up or erases all of the data.
The difference between a mistaken command that accidentally adds a vdev
and the other ways to lose your data with ZFS is that the 'add a vdev'
accident is only
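For anyone who hasn't hit this: the usual form of the accident is typing
'add' where 'attach' was meant. A sketch, with placeholder pool and device
names:

# intended: attach c1t1d0 as a mirror of the existing device c1t0d0
zpool attach tank c1t0d0 c1t1d0

# the accident: this creates a new single-disk top-level vdev instead;
# zpool warns about the mismatched replication level, but -f silences
# the warning, and a data vdev cannot be removed afterwards
zpool add -f tank c1t1d0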
Hello,
I have been experimenting with ZFS on a test box, preparing to present it to
management.
One thing I cannot test right now is our real-world application load. We
currently write small files to CIFS shares.
We write about 250,000 files a day, in various sizes (1KB to 500MB). Some
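Until the real application is available, a rough stand-in load can be
generated and watched with 'zpool iostat'. A sketch (the path, file count,
and sizes are placeholders, not the real workload):

#!/bin/ksh
# create many small files of varying size under a test dataset
dir=/tank/smalltest
count=10000
mkdir -p "$dir"
i=0
while (( i < count )); do
    # file sizes cycle between 1K and 128K
    dd if=/dev/urandom of="$dir/file.$i" bs=1k count=$(( i % 128 + 1 )) 2>/dev/null
    (( i += 1 ))
done
# in another terminal:  zpool iostat tank 5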
It's a starting point anyway. The key is to try to draw useful conclusions
from the info to answer the torrent of "why is my ARC 30GB???" questions.
There are several things I'm not sure whether I'm properly interpreting,
such as:
* As you state, the anon pages. Even the comment in code is, to
On Aug 21, 2008, at 9:51 AM, Brent Jones wrote:
Hello,
I have been experimenting with ZFS on a test box, preparing to
present it to management.
One thing I cannot test right now is our real-world application
load. We currently write small files to CIFS shares.
We write about 250,000
Why would the customer need to use raidz or zfs mirroring if the array
is doing it for them? As someone else posted, metadata is already
redundant by default and doesn't consume a ton of space.
Because array drives can suffer silent errors in the data that are not found
until too
On Thu, 21 Aug 2008, Brent Jones wrote:
I have been experimenting with ZFS on a test box, preparing to present it to
management.
One thing I cannot test right now is our real-world application load. We
currently write small files to CIFS shares.
We write about 250,000 files a day, in
You're the second person to ask a question like this, but I can't for the life
of me find the post of the first person who asked. I'm sure somebody was
asking about either hundreds of thousands or millions of files in a single
directory. It was quite an interesting thread to read.
While
vf == Vincent Fox [EMAIL PROTECTED] writes:
vf Because array drives can suffer silent errors in the data
vf that are not found until too late. My zpool scrubs
vf occasionally find FIX errors that none of the array or
vf RAID-5 stuff caught.
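(To make that concrete, the repairs show up in the scrub line and the CKSUM
column; a sketch, with "tank" as a placeholder pool name:

zpool scrub tank
zpool status -v tank    # check "scrub: ... repaired" and the CKSUM counts

Note that on a pool built from a single array LUN, ZFS can detect a bad
checksum but, metadata aside, has no redundant copy to repair user data
from, which is the argument for zfs-level mirroring or raidz on top of the
array.)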
well, just to make it clear again:
A new version is available (v0.2):
* Fixes divide by zero.
* Includes tuning from /etc/system in the output.
* If prefetch is disabled, I explicitly say so.
* Accounts for the jacked anon count. Still needs improvement here.
* Added friendly explanations for the MRU/MFU ghost list counts.
Page and
Hi,
One of the things you could have done to continue the resilver is 'zpool clear'.
This would have let you continue to replace the drive you pulled out. Once that
was done you could have then figured out what was wrong with the second faulty
drive.
The second drive only had checksum errors, ZFS
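A sketch of that sequence, with placeholder pool and device names:

# clear the error state so the pool stops treating the second drive
# as faulted, then let the replace of the pulled drive continue
zpool clear tank
zpool status tank        # the resilver should resume
# if the replace has to be restarted for the pulled drive's slot:
zpool replace tank c2t3d0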
[EMAIL PROTECTED] said:
That's the one that's been an issue for me and my customers - they get billed
back for GB allocated to their servers by the back end arrays. To be more
explicit about the 'self-healing properties' - To deal with any fs
corruption situation that would traditionally
Question #1:
I've seen that 5-6 disk zpools are the most recommended setup.
In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks
usable) out of the 15 disks (like raidz2, I suppose). What would make the most
sense to set up 15 disks with ~13 disks of usable space? This is for a
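(For Question #1, one possible layout is a single 14-disk raidz2 vdev plus a
hot spare, which gives 12 disks of usable space rather than 13; a sketch with
placeholder device names:

zpool create tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
    spare c1t14d0

A 15-disk-wide raidz2 with no spare would give 13 disks of usable space, at
the cost of a very wide stripe and no spare.)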
On Fri, Aug 22, 2008 at 00:15, mike [EMAIL PROTECTED] wrote:
Question #1:
I've seen that 5-6 disk zpools are the most recommended setup.
In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks
usable) out of the 15 disks (like raidz2, I suppose). What would make the most
sense