Paul Kraus wrote:
I also like being able to see how much space I am using for
each with a simple df rather than a du (which takes a while to run). I
can also tune compression on a per-data-type basis (no real point in
trying to compress media files that are already compressed MPEG and
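For example, compression can be tuned per dataset and df (or zfs list)
shows the usage per filesystem; the pool and dataset names here are just
placeholders:

# zfs set compression=on tank/home
# zfs set compression=off tank/media
# zfs list -o name,used,avail,compressratio tank/home tank/media
# df -h /tank/home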
OTOH, when someone whom I don't know comes across as
a pushover, he loses
credibility.
It may come as a shock to you, but some people couldn't care less about those
who assess 'credibility' on the basis of form rather than on the basis of
content - which means that you can either lose out
For a home user, data integrity is probably as important as, if not more
important than, it is for a corporate user. How many home users do regular backups?
I'm a heavy computer user and probably passed the 500GB mark way before
most other home users, and did various stunts like running a RAID0 on IBM
Deathstars,
I have an xVM b75 server and use ZFS for storage (a ZFS root mirror and a
raid-z2 data pool).
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would a good value for dom0_mem
K wrote:
I have an xVM b75 server and use ZFS for storage (a ZFS root mirror and a
raid-z2 data pool).
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would a good
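One common approach (the values and the menu.lst path below are only
illustrative, not a recommendation) is to pin dom0's memory on the xVM
kernel line in GRUB and cap the ZFS ARC so it leaves room for the domUs:

In menu.lst (path may vary with your setup):
  kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
In /etc/system (example cap of 1GB on the ARC):
  set zfs:zfs_arc_max = 0x40000000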
On 11/19/07, Ian Collins [EMAIL PROTECTED] wrote:
For a home user, data integrity is probably as important as, if not more
important than, it is for a corporate user. How many home users do regular backups?
Let me correct a point I made badly the first time around: I
value the data integrity provided by
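For what it's worth, even a simple periodic snapshot plus send/recv to a
second pool covers most of the home-backup case; the dataset and pool names
below are made up:

# zfs snapshot tank/home@2007-11-19
# zfs send tank/home@2007-11-19 | zfs recv backup/home
# zfs send -i tank/home@2007-11-12 tank/home@2007-11-19 | zfs recv backup/home
(the last line is the incremental follow-up once a full copy exists)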
On Mon, Nov 19, 2007 at 11:10:32AM +0100, Paul Boven wrote:
Any suggestions on how to further investigate / fix this would be very
much welcomed. I'm trying to determine whether this is a zfs bug or one
with the Transtec raidbox, and whether to file a bug with either
Transtec (Promise) or zfs.
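Not knowing the exact setup, the usual first stops for telling a device
problem from a ZFS problem are the FMA error reports and the per-drive
error counters:

# zpool status -v
# fmdump -eV          (any ereports pointing at the disks or the HBA?)
# iostat -En          (transport/hardware error counts per drive)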
Neil Perrin writes:
Joe Little wrote:
On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over all devices but now all
Roch - PAE wrote:
Neil Perrin writes:
Joe Little wrote:
On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over
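For reference, a separate log device is added and watched roughly like
this; the device name is a placeholder, and on these builds a slog cannot
simply be removed again once added:

# zpool add tank log c3t0d0
# zpool iostat -v tank 5    (synchronous writes should show up on the log vdev)
# zpool status tank         (the slog appears under its own 'logs' entry)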
If I yank out a disk in a 4-disk raidz2 array, shouldn't the other disks
pick up without any errors?
I have a 3120 JBOD and I yanked out a disk and everything
got hosed. It's okay, because I'm just testing stuff and wanted to see
raidz2 in action when a disk goes down.
Am I
On Mon, Nov 19, 2007 at 04:33:26PM -0700, Brian Lionberger wrote:
If I yank out a disk in a 4-disk raidz2 array, shouldn't the other disks
pick up without any errors?
I have a 3120 JBOD and I yanked out a disk and everything
got hosed. It's okay, because I'm just testing stuff
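After pulling a drive the pool should just go DEGRADED, not hang; something
along these lines (device name made up) is worth checking before blaming
raidz2:

# zpool status -x           (expect DEGRADED, not FAULTED or UNAVAIL)
# cfgadm -al                (did the HBA/driver actually notice the removal?)
# zpool online tank c1t3d0  (after re-inserting the disk)
# zpool clear tank          (reset the error counters once it resilvers)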
: Big talk from someone who seems so intent on hiding
: their credentials.
: Say, what? Not that credentials mean much to me since I evaluate people
: on their actual merit, but I've not been shy about who I am (when I
: responded 'can you guess?' in registering after giving billtodd as
Running ON b66 and had a drive fail. Ran 'zfs replace' and resilvering
began. But I accidentally deleted the replacement drive on the array
via CAM.
# zpool status -v
...
raidz2 DEGRADED 0 0 0
You should be able to do a 'zpool detach' of the replacement and then
try again.
- Eric
On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
Running ON b66 and had a drive fail. Ran 'zfs replace' and resilvering
began. But I accidentally deleted the replacement drive on the array
via
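In other words, back out the half-finished replacement and issue the
replace again; the device names below are only illustrative:

# zpool detach tank c2t5d0            (the replacement that was deleted via CAM)
# zpool replace tank c2t4d0 c2t5d0    (re-run the replace once the LUN exists again)
# zpool status -v tank                (watch the resilver restart)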
On Sun, Nov 18, 2007 at 02:18:21PM +0100, Peter Schuller wrote:
Right now I have noticed that LSI has recently begun offering some
lower-budget stuff; specifically I am looking at the MegaRAID SAS
8208ELP/XLP, which are very reasonably priced.
I looked up the 8204XLP, which is really quite
So... issues with resilvering yet again. This is a ~3TB pool. I have one raid-z
of 5 500GB disks, and a second pool of 3 300GB disks. One of the 300GB disks
failed, so I have replaced the drive. After doing the resilver, it takes
approximately 5 minutes for it to complete 68.05% of the
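To tell whether the resilver is actually progressing or restarting from
scratch, sampling the reported percentage over time is usually enough
(substitute your own pool name):

# while true; do date; zpool status fserv | egrep -i 'scrub|resilver'; sleep 300; done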
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
You should be able to do a 'zpool detach' of the replacement and then
try again.
Thanks. That worked.
- Eric
On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
Running ON b66 and had a drive fail. Ran 'zfs replace'
Hello All,
Mike Speirs at Sun in New Zealand pointed me toward you all. I have
several sets of questions, so I plan to group them and send several emails.
This question is about the name/attribute mapping layer in ZFS. In the
last version of the source code that I read, it provides
After messing around... who knows what's going on with it now. Finally
rebooted because I was sick of it hanging. After that, this is what it came
back with:
root:= zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
James Cone wrote:
Hello All,
Here's a possibly-silly proposal from a non-expert.
Summarising the problem:
- there's a conflict between small ZFS record size, for good random
update performance, and large ZFS record size for good sequential read
performance
Poor sequential read
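Note that record size is already tunable per dataset, which at least lets
the two workloads be separated; the dataset names below are made up, and
the setting only affects files written after the change:

# zfs set recordsize=8k tank/db        (small records for random updates)
# zfs set recordsize=128k tank/media   (large records for sequential reads)
# zfs get recordsize tank/db tank/media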
That locked up pretty quickly as well, one more reboot and this is what I'm
seeing now:
root:= zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the
Asif Iqbal wrote:
I have the following layout
A 490 with 8 x 1.8GHz CPUs and 16G of memory. 6 6140s with 2 FC controllers, using
the A1 and B1 controller ports at 4Gbps.
Each controller has 2G of NVRAM.
On the 6140s I set up one RAID0 LUN per SAS disk with a 16K segment size.
On the 490 I created a zpool with 8 4+1
Regardless of the merit of the rest of your proposal, I think you have put your
finger on the core of the problem: aside from some apparent reluctance on the
part of some of the ZFS developers to believe that any problem exists here at
all (and leaving aside the additional monkey wrench that
On Nov 19, 2007 1:43 AM, Louwtjie Burger [EMAIL PROTECTED] wrote:
On Nov 17, 2007 9:40 PM, Asif Iqbal [EMAIL PROTECTED] wrote:
(Including storage-discuss)
I have 6 6140s with 96 disks. Out of which 64 of them are Seagate
ST337FC (300GB - 1 RPM FC-AL)
Those disks are 2Gb disks,