Hello,
On Wednesday, 10 November 2004 at 22:31, you wrote:
OK, your instructions worked like a charm. So I'm running my nice
4-member SCSI gvinum RAID5 array (with softupdates turned on), and it's
zipping along.
Fine! :-)
Now I need to test just how robust this is.
Ouhh... ;-)
Matthias Schuendehuette wrote:
I'm not sure if this is a problem of (g)vinum or if FreeBSD has other
problems in this area.
I just logged a kernel bug on this.
And we all have to consider that gvinum is in a relatively early
development phase (IMHO) - it is basically working, that is, it's
OK, your instructions worked like a charm. So I'm running my nice
4-member SCSI gvinum RAID5 array (with softupdates turned on), and it's
zipping along. Now I need to test just how robust this is. camcontrol
is too nice. I want to test a more real-world failure. I'm running
dbench and
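For reference, the "too nice" camcontrol approach would look roughly like the sketch below (the device name is illustrative); it asks the drive to spin down cleanly rather than simulating a hard failure, which is presumably why it feels too gentle:

```shell
# Spin down one member of the array to simulate a (graceful) disk loss.
# da2 is an illustrative device name; substitute a real array member.
camcontrol stop da2

# See how gvinum classifies the array's objects afterwards
# (up / degraded / stale / down):
gvinum list

# Bring the disk back and rescan the SCSI bus:
camcontrol start da2
camcontrol rescan all
```

A harder test, as the poster implies, would be pulling the drive or cutting its power while dbench keeps the volume busy.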
On Sunday, 7 November 2004 at 06:30, secmgr wrote:
On Sat, 2004-11-06 at 04:16, Matthias Schuendehuette wrote:
Did you try to simply 'start' the plex? This works for
initialisation of a newly created RAID5 plex as well as for
recalculating parity information on a degraded RAID5 plex.
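The 'start' operation described above could be sketched as follows; the volume and plex names are illustrative, not from the thread:

```shell
# Inspect object states; a degraded or freshly created RAID5 plex
# will show as not "up".
gvinum list

# Kick off parity initialisation/rebuild on the plex.
# "raid.p0" is an illustrative plex name.
gvinum start raid.p0

# Re-check periodically until the plex returns to "up".
gvinum list
```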
I
secmgr wrote:
No, I mean self-corrupting RAID5 sets during initialization. Discussed
about 2-3 weeks ago.
In the following message you seemed to claim that adding 64 sectors of
slack to the
beginning of the vinum partition fixed this problem, as I suggested. Did
that fix it or not?
The
It did, but can you tell me where in the docs it says to do that? Or
maybe vinum should sense that and throw an error rather than just
blindly corrupting itself.
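The 64-sector workaround discussed above amounts to not letting the vinum partition start at offset 0 of its slice. A sketch of what that might look like in a bsdlabel(8)-style label (the device, sizes, and partition letter are all illustrative):

```shell
# bsdlabel -e da1s1      # edit the label; the fragment below is
#                        # illustrative, not a literal transcript
#
# 8 partitions:
# #        size   offset    fstype
#   c:  8388608        0    unused   # whole slice, leave untouched
#   h:  8388544       64    vinum    # 64 sectors of slack at the start
```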
jim
On Sun, 2004-11-07 at 09:38, Joe Koberg wrote:
secmgr wrote:
No, I mean self-corrupting RAID5 sets during
On Wednesday, 3 November 2004 at 21:27, secmgr wrote:
Just ran into this myself. I had a perfectly happy RAID5 plex under
5.3-RC1. I upgraded to RC2, and the whole plex went stale. I deleted
everything from the volume on down (except for the drives), and tried
to recreate the vol/plex/sd's.
I hate to sound like a whiney baby, but WTF
is going on? It feels like vinum from 4.x has basically been abandoned
(short of crashes with no workaround), and gvinum ain't near ready
for primetime.
If you mean the 'dangling vnode'-problem with
On Monday, 1 November 2004 at 10:05:16 +1100, Carl Makin wrote:
Greg 'groggy' Lehey wrote:
On Monday, 25 October 2004 at 14:21:33 -0600, secmgr wrote:
It's beginning to look like that's a bad idea. Lukas is
(understandably) working only on gvinum, and since I know it's nearly
there, I'm
On Sat, 2004-11-06 at 05:09, Matthias Schuendehuette wrote:
If you mean the 'dangling vnode'-problem with vinum-classic:
Try to start 'classic' vinum *after* the system has come up. Either
I did a gvinum start.
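The workaround described above (starting vinum only after the system is up, not from rc.conf) might look like the following sketch; the exact commands depend on whether classic vinum or the GEOM-based gvinum is in use:

```shell
# In /etc/rc.conf: make sure classic vinum is NOT started at boot,
# i.e. there is no start_vinum="YES" line.

# After the system is fully up, for classic vinum:
vinum start              # scan the drives and start configured objects

# For the GEOM-based implementation, loading the module is enough
# to have it taste the drives:
kldload geom_vinum
```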
Adrian Wontroba wrote:
On Mon, Nov 01, 2004 at 10:05:16AM +1100, Carl Makin wrote:
Do you want to yank it in 5 or 6-CURRENT? There are a *lot* of people
using vinum and yanking it in 5-STABLE would force us all to use the 5.3
security branch until gvinum caught up.
From my experiences today with setting up a
On Friday, 29 October 2004 at 14:20:40 -0600, secmgr wrote:
Greg 'groggy' Lehey wrote:
A bit of background: we know that 'gvinum' will replace Vinum; the
original intention had been to do it seamlessly, but for various
reasons that didn't happen. Then we decided that we should leave them
both
Hi Greg,
Greg 'groggy' Lehey wrote:
On Monday, 25 October 2004 at 14:21:33 -0600, secmgr wrote:
It's beginning to look like that's a bad idea. Lukas is
(understandably) working only on gvinum, and since I know it's nearly
there, I'm not going to do any further work on Vinum in FreeBSD 5.
Given
On Monday, 25 October 2004 at 14:21:33 -0600, secmgr wrote:
Andrew Konstantinov wrote:
On Mon, 2004-10-25 at 05:55, Oliver Torres Delgado wrote:
I have FreeBSD 5.3-RC1 installed and working. I configured vinum
following the Handbook and everything worked perfectly,
but when I try to run vinum from rc.conf there
Greg 'groggy' Lehey wrote:
A bit of background: we know that 'gvinum' will replace Vinum; the
original intention had been to do it seamlessly, but for various
reasons that didn't happen. Then we decided that we should leave them
both in the tree until gvinum had the full functionality of Vinum.
It's
I have FreeBSD 5.3-RC1 installed and working. I configured vinum
following the Handbook and everything worked perfectly,
but when I try to run vinum from rc.conf it displays the error:
panic: unmount: dangling vnode
cpuid: 0
uptime= 4s
Cannot dump. No dump device defined
Automatic reboot in 15 seconds.
Why did this happen?
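For context, the Handbook-style setup being described boils down to feeding gvinum a configuration file. A minimal RAID5 sketch (the drive names, devices, and stripe size are illustrative, not taken from the poster's system):

```shell
# /tmp/raid5.conf -- illustrative vinum configuration:
#   drive d1 device /dev/da1s1h
#   drive d2 device /dev/da2s1h
#   drive d3 device /dev/da3s1h
#   volume raid
#     plex org raid5 512k
#       sd length 0 drive d1
#       sd length 0 drive d2
#       sd length 0 drive d3

gvinum create /tmp/raid5.conf
gvinum list
```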