On Mon, Dec 29, 2008 at 8:45 PM, Larry Hastings larrya...@hastings.org wrote:
Hope I'm posting this in the right place.
I've got a RAIDZ2 volume made of 14 SATA 1TB drives. The box they're in is
absolutely packed full; I know of no way to add any additional drives, or
internal SATA
Hi everyone,
I have a serious problem and need some assistance. I was doing a rolling
upgrade of a raidz1, replacing 320GB drives with 1.5TB drives (ie. zpool
replace). I had replaced three of the drives and they had resilvered without
errors and then I started on the fourth one. It went
Michael McKnight wrote:
Hi everyone,
I have a serious problem and need some assistance. I was doing a rolling
upgrade of a raidz1, replacing 320GB drives with 1.5TB drives (ie. zpool
replace). I had replaced three of the drives and they had resilvered without
errors and then I started
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
In RAID6 you have redundant parity, thus the controller can find out
if the parity was correct or not. At least I think that to be true
for Areca controllers :)
Are you sure about that? The latest research I know of [1] says that
Hi Marc,
Marc Bevand wrote:
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
In RAID6 you have redundant parity, thus the controller can find out
if the parity was correct or not. At least I think that to be true
for Areca controllers :)
Are you sure about that? The latest research I
Hello all,
# zpool status
  pool: mypool
 state: ONLINE
 scrub: scrub completed after 0h2m with 0 errors on Fri Dec 19 09:32:42 2008
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
Yeah
Thanks a lot to timf and mgerdts, it's working now!
What kind of snapshot do I need to be on the safe side patching a S10u6
system? rpool? rpool/ROOT? rpool/ROOT/BE?
And how/what do I do to reverse to the non-patched system in case
something goes terribly wrong? ;-)
--
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u6
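For illustration only (this is not from the original question): a minimal snapshot-based safety net might look like the lines below. The snapshot name prepatch is just an example, and rolling back the dataset of the running root usually has to be done from failsafe boot or from another boot environment.
# zfs snapshot -r rpool@prepatch          # recursive: covers rpool, rpool/ROOT and rpool/ROOT/BE
# zfs list -t snapshot -r rpool           # verify the snapshots were created
# zfs rollback rpool/ROOT/BE@prepatch     # revert the boot environment if patching goes wrong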
Dear Admin
I have a server with 6 HDDs. In a fresh installation I selected two disks for
mirroring, created rpool and installed Solaris 10, but I need more space than
the default rpool file systems provide for installing my application (for
example MySQL), so I decided to create another pool (named tank) with 4
If memory serves me right, sometime around 12:34am, Michael McKnight told me:
I have tried import -f, import -d, import -f -d ... nothing works.
Did you try zpool export 1st?
On Tue, Dec 30, 2008 at 02:06:16PM +0100, dick hoogendijk wrote:
What kind of snapshot do I need to be on the safe side patching a S10u6
system? rpool? rpool/ROOT? rpool/ROOT/BE?
Use Live Upgrade. Create a new boot environment and apply the
patches to that. Activate the new BE and `init 6'.
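For illustration only (not part of the original reply), the sequence might look roughly like this; the BE name patched-be, the patch directory and the patch ID are made-up placeholders, and this assumes the Live Upgrade tools are installed:
# lucreate -n patched-be                                     # clone the current boot environment
# luupgrade -t -n patched-be -s /var/tmp/patches 123456-01   # apply patches to the clone
# luactivate patched-be                                      # mark it as the next BE to boot
# init 6                                                     # reboot into the patched BE
If the patched BE turns out to be broken, luactivate the previous BE (or pick it from the GRUB menu) and reboot again.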
On Tue, Dec 30, 2008 at 3:32 PM, Weldon S Godfrey 3 wel...@excelsus.comwrote:
If memory serves me right, sometime around 12:34am, Michael McKnight told
me:
I have tried import -f, import -d, import -f -d ... nothing works.
Did you try zpool export 1st?
He did say he was doing
On Mon, 29 Dec 2008, Larry Hastings wrote:
I could swap the dying drive with the fresh drive, then run an
in-place zpool replace. But the drive isn't [i]dead,[/i] it is
merely [i]dying[/i]. That seems like overkill.
I am a bit slow today. It seems like a dying drive should be replaced
On Tue, Dec 30, 2008 12:18 AM, Brandon High wrote:
You should be able to export the volume and
physically replace the disk at that point.
Again, noob here, so just making sure I understand what you suggest:
1) zpool replace volume baddisk newdisk
2) zpool export volume
3) physically remove
Marcelo,
Thanks for the details! This rules out a bug that I was suspecting:
http://bugs.opensolaris.org/view_bug.do?bug_id=6664765
This needs more analysis.
What does the rm command fail with?
We could probably run truss on the rm command like:
truss -o /tmp/rm.truss rm filename
You then
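Not from the original message, just a sketch of how that capture might look; the file name below is a placeholder:
# truss -o /tmp/rm.truss rm /tank/fs/problem-file
# grep 'Err#' /tmp/rm.truss        # failing system calls are flagged with Err# and the errno name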
I am a bit slow today. It seems like a dying drive should be replaced
ASAP.
Completely agree with Bob on this. I drive an 8,000 lb truck and the
tires have industrial-strength runflats. If I get a puncture or tear
in a tire I replace it as soon as I can, not when it is convenient.
The
On Tue, 30 Dec 2008, Larry Hastings wrote:
Again, noob here, so just making sure I understand what you suggest:
1) zpool replace volume baddisk newdisk
2) zpool export volume
3) physically remove baddisk and replace it with newdisk (which for
me requires shutting down, sigh)
4) zpool
On Tue, 30 Dec 2008, Bob Friesenhahn wrote:
I don't see a need for the zpool export and import
unless you have a way to convert your USB drive
into a drive which works directly in your chassis.
Ah, but that's exactly what I've got. I'm not using a dedicated USB drive, I'm
using a simple
execve(/usr/bin/rm, 0x08047DBC, 0x08047DC8) argc = 2
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON,
-1, 0) = 0xFEFF
resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
resolvepath(/usr/bin/rm, /usr/bin/rm, 1023) = 11
sysconfig(_CONFIG_PAGESIZE)
Jay wrote:
hi *,
i'm currently playing around with the setup of an opensolaris server as
home nas and am experiencing occasional read/write problems with the
zfs pool.
the short version (details below/attached):
* 6-disk raidz pool attached to the sata controller on an nvidia MCP78S
Marcelo,
Thanks for the details.
Comments inline...
Marcelo Leal wrote:
execve(/usr/bin/rm, 0x08047DBC, 0x08047DC8) argc = 2
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON,
-1, 0) = 0xFEFF
resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
on Mon Dec 29 2008, David Abrahams dave-AT-boostpro.com wrote:
on Tue Nov 11 2008, Mario Goebbels me-AT-tomservo.cc wrote:
Is it possible to install a GRUB that can boot a ZFS root, but install it from
within Linux?
I was planning on getting a new unmanaged dedicated server, which
execve(/usr/bin/ls, 0x08047DA8, 0x08047DB4) argc = 2
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON,
-1, 0) = 0xFEFF
resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
resolvepath(/usr/bin/ls, /usr/bin/ls, 1023) = 11
xstat(2, /usr/bin/ls, 0x08047A58)
Hi again,
No ideas? I have spent quite some time trying to recover, but no luck
yet. Any ideas or hints on recovery would be great! I'm soon running
out of time and will have to rebuild the zones and restore the data
but I'd much rather like to be able to recover it from the datasets.
On Tue, Dec 30, 2008 at 10:46 AM, Magnus Bergman m...@citynetwork.se wrote:
Hi again,
No ideas? I have spent quite some time trying to recover, but no luck
yet. Any ideas or hints on recovery would be great! I'm soon running
out of time and will have to rebuild the zones and restore the data
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
Well, I probably need to wade through the paper (and recall Galois field
theory) before answering this. We did a few tests in a 16 disk RAID6
where we wrote data to the RAID, powered the system down, pulled out one
disk, inserted it into
Umm, why do you need to do it the complicated way? Here it is from the zpool
man page:
zpool replace [-f] pool old_device [new_device]
    Replaces old_device with new_device. This is equivalent
    to attaching new_device, waiting for it to resilver, and
    then detaching
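A made-up example of that invocation (pool and device names are placeholders, not from the thread):
# zpool replace tank c1t3d0 c2t3d0    # resilver the data from c1t3d0 onto c2t3d0
# zpool status tank                   # watch the resilver; c1t3d0 is detached when it completes
If the replacement disk goes into the same physical slot as the old one, the single-argument form 'zpool replace tank c1t3d0' does the same job.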
I'm not an expert but for what it's worth-
1. Try the original system. It might be a fluke/bad cable or anything else
intermittent. I've seen it happen here. If so, your pool may be alright.
2. For the (defunct) originals, I'd say we'd need to take a look into the
sources to find if something
I'm not an expert but for what it's worth-
1. Try the original system. It might be a fluke/bad
cable or anything else intermittent. I've seen it
happen here. If so, your pool may be alright.
2. For the (defunct) originals, I'd say we'd need to
take a look into the sources to find if
On Tue, Dec 30, 2008 at 12:18 AM, Brandon High bh...@freaks.com wrote:
Use a USB enclosure for the new drive, and do:
zpool replace bad_disk new_disk
You should be able to export the volume and physically replace the
disk at that point.
It was late when I wrote that, so let me clarify a few
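A sketch of the sequence being discussed; the pool name 'volume' and the disk names are placeholders taken from earlier in the thread:
# zpool replace volume baddisk newdisk   # resilver onto the new disk sitting in the USB enclosure
# zpool status volume                    # wait for the resilver to finish
# zpool export volume
  (power down, move the new disk from the enclosure into the old disk's bay)
# zpool import volume                    # ZFS identifies the disk by its on-disk label, not its path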
On Tue, Dec 30, 2008 at 11:30, Carsten Aulbert
carsten.aulb...@aei.mpg.de wrote:
Hi Marc,
Marc Bevand wrote:
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
In RAID6 you have redundant parity, thus the controller can find out
if the parity was correct or not. At least I think that to
Que? So what can we deduce about HW RAID? There are some controller cards that
do background consistency checks? And error detection of various kinds?
Orvar, did you see my post on consistency check and data integrity?
It does not matter what HW RAID has, the point is what HW RAID does not
have...
Please, out of respect for Bill, please study; here are more:
THE LAST WORD IN FILE SYSTEMS
http://www.sun.com/software/solaris/zfs_lc_preso.pdf
On Tue, Dec 30, 2008 at 2:58 PM, Brandon High bh...@freaks.com wrote:
4) zpool import volume
Alas--it did not work.
r...@elephant# zpool import home
cannot import 'home': no such pool available
I was able to import it by force:
r...@elephant# zpool import -d /dev/mapper home
I exported it
Marcelo,
Comments inline...
On Tue, Dec 30, 2008 at 10:35:37AM -0800, Marcelo Leal wrote:
pathconf(., 20) = 2
acl(., ACE_GETACLCNT, 0, 0x) = 6
stat64(., 0x08046890) = 0
acl(., ACE_GETACL, 6, 0x08071C48) = 6