ng subscription, and I would be interested in
the subject line
so I can search and do further reading.
--
Sam Fourman Jr.
Fourman Networks
http://www.fourmannetworks.com
stable than OpenSolaris. But to be fair, I understand FreeBSD better, and I have
only loaded OpenSolaris with default settings; the most RAM I ever gave an
OpenSolaris machine is 8GB. So if you wanted to go with OpenSolaris (for dedup
and such), I would use a lot of RAM.
from many people I have talked to. I believe the metadata is corrupt on my
zpool, so if someone could help me use zdb or mdb to recover my pool,
I would very much appreciate it.
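What I have tried so far looks roughly like this (a sketch only; 'pile' and
c3d0s0 stand in for my actual pool and device names):

# zdb -l /dev/dsk/c3d0s0
(dumps the on-disk ZFS labels from one member disc, to see what ZFS thinks is there)
# zdb -e -bb pile
(traverses the blocks of the exported, unimportable pool and reports what is readable)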
--
Sam Fourman Jr.
Fourman Networks
http://www.fourmannetworks.com
2009.06 is having issues with it, so I want to switch over to my 8-port Areca
RAID card. Can I simply export the array, shut down the computer, move the 8
cables from the 3ware to the Areca, power back up, and import the array again?
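In other words, something like this (a sketch, with 'pile' standing in for the
pool name):

# zpool export pile
(shut down, move the 8 cables from the 3ware to the Areca, power up)
# zpool import -d /dev/dsk pile
(the -d rescan matters because the device names will likely change with the new controller)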
Thanks,
Sam
I have a 10-drive raidz2 setup with one hot spare. I checked the status of my
array this morning and it had a weird reading: it shows all 10 of my drives
ONLINE with no-fault status, but my hot spare is also currently replacing a
perfectly OK drive:
NAME        STATE     READ WRITE CKSUM
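If the spare really did kick in against a healthy disc, my understanding is
that detaching it hands it back to the spare list, roughly (c9d0 is a made-up
stand-in for the actual spare device):

# zpool detach pile c9d0
(detaches the hot spare from the replacing vdev; it returns to AVAIL)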
OK, so capacity is ruled out. It still bothers me that after experiencing the
error, a 'zpool status' just hangs (forever), but if I reboot the system
everything comes back up fine (for a little while).
Last night I installed the latest SXDE and I'm going to see if that fixes it;
if
I was hoping that this was the problem (because just buying more discs is the
cheapest solution, given time=$$), but running it by somebody at work, they
said going over 90% can cause decreased performance but is unlikely to cause
the strange errors I'm seeing. However, I think I'll stick a 1TB dr
files again).
Is this a systemic problem at 90% capacity, or do I perhaps have a faulty
drive in the array that only gets hit at 90%? If it is a faulty drive, why
does 'zpool status' report fully good health? That makes it hard to find the
problem drive.
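One thing worth checking while 'zpool status' claims the pool is clean is the
driver-level error counters:

# iostat -En
(prints soft/hard/transport error counts per device, kept below ZFS)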
Thanks,
Sam
Is there some other way to figure out which drive(s) are corrupt?
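Two places I know of to look (taking 'pile' as the pool name):

# zpool status -v pile
(the -v lists any files affected by permanent errors)
# fmdump -eV
(dumps the FMA error telemetry, which records per-device ereports even when the pool looks healthy)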
Thanks,
Sam
Could this in some way be related to this rather large (100GB) difference that
'zfs list' and 'zpool list' report:
# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pile  4.53T  4.31T   223G  95%  ONLINE  -
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
pile  3.44T   120G  3.44T  /pile
I know th
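For what it's worth, the numbers roughly line up once raidz2 parity is
counted: 'zpool list' reports raw capacity including parity, while 'zfs list'
reports usable space after parity. On a 10-disc raidz2 usable space is about
8/10 of raw, i.e. 0.8 x 4.53T = 3.62T, against the 3.44T used + 120G avail =
3.56T that 'zfs list' shows; the remainder is metadata overhead.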
              0     0
c3d0  ONLINE  0     0     1
c4d0  ONLINE  0     0     0
So it says it's a minor error, but still one to be concerned about. I thought
resilvering takes care of checksum errors, does it not? Should I be running
out to buy 3 new 500GB drives?
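I gather the usual drill is roughly this (a sketch, using my pool name):

# zpool clear pile
(resets the error counters)
# zpool scrub pile
(re-reads and verifies every block; errors that come back point at real hardware)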
Thanks,
Sam
It only scrubs the "used space", so it largely depends on how much data
you have stored in the pool.
Watch it with snapshots. I believe there is still a bug that restarts
(or kills) scrubbing operations in case a new snapshot is taken. If you
have automatic snapshots, that's something to keep an eye on.
Thanks all, I guess I'll set up a process to start scrubbing at 1am once or
twice a month. Any estimates on how much time it takes to scrub 10x500GB
drives (3.7TB effective)?
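What I have in mind is a root crontab entry along these lines (1am on the 1st
and 15th; 'pile' is my pool):

0 1 1,15 * * /usr/sbin/zpool scrub pile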
Sam
I have a 10x500 disc file server with ZFS+; do I need to perform any sort of
periodic maintenance on the filesystem to keep it in tip-top shape?
Sam
e or some
such that I can use to figure out where this is hung in the kernel.
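The closest I have gotten is dumping kernel thread stacks (needs root):

# echo '::threadlist -v' | mdb -k
(prints every kernel thread with its stack, so the stuck zfs/zpool thread should stand out)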
Cheers!
-sam
I was indeed using the new CIFS. I tried making Samba go last night but it
wasn't working out for whatever reason, and CIFS wouldn't work with my Windows
Vista x64. I just BFUed to the 81 release, which fixed my Vista connection
problems; going to see if it fixes read/write too.
Sam
to try to do 25/40MB/s R/W (which failed) and went over to 33MB/s read.
Has anybody encountered this problem before? I tried to Google around and
search here on simultaneous read/write, but nothing came up.
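For anyone trying to reproduce: I watch throughput during the test with

# zpool iostat pile 5
(per-pool bandwidth and ops, sampled every 5 seconds)

while running one large read and one large write against the share at the
same time.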
Sam
zpool list reports 3.67T, df reports 2.71T, which is pretty close to 2.73, so
I imagine you guys are right in the difference being 465GB vs 500GB for the
size of each disc; guess I'll go pick up another pair :)
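For the archive, the arithmetic: 500GB decimal is about 465.7GiB binary, so
8 x 465.7GiB is roughly 3.64TiB raw, which is what zpool list calls 3.67T;
assuming this is an 8-disc raidz2, two discs' worth goes to parity, leaving
6/8 of raw, about 2.73TiB, which matches df.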
Thanks!
Sam
K 2.67T 40.4K /pile
What happened to my 330GB?
Sam
Thanks guys,
Tim answered the general questions, and haik, thanks for pointing out Solaris
Express and CIFS; downloading that now instead, much closer to what I need.
Sam
For the 4x750 I'd just do a new raidz on them (one disc is for parity) and add
it to the ZFS pool, as sketched below.
-If I do that, it won't write data across both RAID volumes, will it?
-If I want more control over what files go on which array, should I keep them
as distinct volumes?
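A sketch of what I mean, with made-up device names:

# zpool add pile raidz c5d0 c6d0 c7d0 c8d0
(adds a second top-level raidz vdev; ZFS stripes new writes across both vdevs,
with no way to pin files to one of them)
# zpool create pile2 raidz c5d0 c6d0 c7d0 c8d0
(the alternative: a separate pool, for full control over what goes where)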
Thanks,
Sam