Sure, and thanks for the quick reply.
Controller: Supermicro AOC-SAT2-MV8 plugged into a 64-bit PCI-X 133 bus
Drives: 5 x Seagate 7200.11 1.5TB disks for the raidz1.
Single 36GB Western Digital 10kRPM Raptor as system disk. Mate for this is in
but not yet mirrored.
Motherboard: Tyan Thunder K8W S
So the place we are arriving at is to push the RFE for shrinkable pools?
Warning the user about the difference in actual drive size, then
offering to shrink the pool to allow a smaller device seems like a
nice solution to this problem.
The ability to shrink pools might be very useful in other situat
Can you share your hardware configuration?
cheers,
Blake
On Mon, Jan 19, 2009 at 11:56 PM, Brad Hill wrote:
> Greetings!
>
> I lost one out of five disks on a machine with a raidz1 and I'm not sure
> exactly how to recover from it. The pool is marked as FAULTED which I
> certainly wasn't expec
If you've got enough space on /var, and you had a dump partition configured,
you should find a bunch of "vmcore.[n]" files in /var/crash by now. The system
normally dumps the kernel core into the dump partition (which can be the swap
partition) and then copies it into /var/crash on the next suc
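For reference, a rough sketch of the usual steps (paths are the common
defaults; adjust for your configuration):

  # Show the configured dump device and savecore directory
  dumpadm

  # If the crash dump was written but not yet copied, extract it by hand
  savecore /var/crash/`hostname`

  # Then look for the unix.[n] / vmcore.[n] pairs
  ls -l /var/crash/`hostname`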
Looks like a corrupted pool -- you appear to have a mirror block pointer with
no valid children. From the dump, you could probably determine which file is
bad, but I doubt you could delete it; you might need to recreate your pool.
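For what it's worth, if the pool can still be imported at all, a scrub plus a
verbose status will usually name the damaged file(s) without digging through
the dump:

  zpool scrub tank
  zpool status -v tank   # look for "Permanent errors have been detected
                         # in the following files:"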
Greetings!
I lost one out of five disks on a machine with a raidz1 and I'm not sure
exactly how to recover from it. The pool is marked as FAULTED which I certainly
wasn't expecting with only one bum disk.
r...@blitz:/# zpool status -v tank
pool: tank
state: FAULTED
status: One or more devic
On Mon, Jan 19 at 23:14, Greg Mason wrote:
>So, what we're looking for is a way to improve performance, without
>disabling the ZIL, as it's my understanding that disabling the ZIL
>isn't exactly a safe thing to do.
>
>We're looking for the best way to improve performance, without
>sacrificing
>
> Good idea. Thor has a CF slot, too, if you can find a high speed
> CF card.
> -- richard
We're already using the CF slot for the OS. We haven't really found
any CF cards that would be fast enough anyways :)
On Mon, 19 Jan 2009, Greg Mason wrote:
> The current solution we are considering is disabling the cache
> flushing (as per a previous response in this thread), and adding one
> or two SSD log devices, as this is similar to the Sun storage
> appliances based on the Thor. Thoughts?
You need to add
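For reference, adding a separate intent log to an existing pool generally
looks like the following; pool and device names here are placeholders:

  # One SSD as a dedicated log (slog) device
  zpool add tank log c2t0d0

  # Or a mirrored pair, so a single failed slog device isn't a problem
  zpool add tank log mirror c2t0d0 c2t1d0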
Greg Mason wrote:
> So, what we're looking for is a way to improve performance, without
> disabling the ZIL, as it's my understanding that disabling the ZIL isn't
> exactly a safe thing to do.
>
> We're looking for the best way to improve performance, without
> sacrificing too much of the safet
So, what we're looking for is a way to improve performance, without
disabling the ZIL, as it's my understanding that disabling the ZIL
isn't exactly a safe thing to do.
We're looking for the best way to improve performance, without
sacrificing too much of the safety of the data.
The current
I switched to the CIFS filesharing system.
All I needed to do was disable the samba, wins, and swat services, then I
started the smb/server service.
I followed the CIFS administration guide. Almost everything worked without
problems.
The only problem I got was a 'wins' resolution error. So, I
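For anyone following along, the switch described above roughly amounts to the
commands below; the exact SMF service names may differ between builds:

  # Stop the Samba-based services
  svcadm disable network/samba network/wins network/swat

  # Start the in-kernel CIFS server (and its dependencies)
  svcadm enable -r network/smb/server

  # Confirm it is online
  svcs network/smb/server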
> And again, I say take a look at the market today, figure out a percentage,
> and call it done. I don't think you'll find a lot of users crying foul over
> losing 1% of their drive space when they don't already cry foul over the
> false advertising that is drive sizes today.
Perhaps it's quaint,
Greg Mason wrote:
> We're running into a performance problem with ZFS over NFS. When working
> with many small files (i.e. unpacking a tar file with source code), a
> Thor (over NFS) is about 4 times slower than our aging existing storage
> solution, which isn't exactly speedy to begin with (17
On Mon, Jan 19, 2009 at 2:55 PM, Adam Leventhal wrote:
> Drive vendors, it would seem, have an incentive to make their "500GB"
> drives
> as small as possible. Should ZFS then choose some amount of padding at the
> end of each device and chop it off as insurance against a slightly smaller
> drive
Hello,
We recently had SAN corruption (hard power outage), and we lost a few
transactions that were waiting to be written to real disk. The end result, as
we all know, is CKSUM errors on the zpool from a scrub, and we also had a few
corrupted files reported by ZFS.
My question is, what is the proper
Another option to look at is:
set zfs:zfs_nocacheflush=1
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
The best option is to get a fast ZIL log device.
It depends on your pool as well. NFS+ZFS means ZFS will wait for writes to
complete before responding to synchronous NFS write ops. I
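For anyone trying that tunable, a minimal sketch of how it is usually applied
(the Evil Tuning Guide above covers the caveats; only do this if every device
in the pool has a nonvolatile write cache):

  # Permanent: add to /etc/system and reboot
  set zfs:zfs_nocacheflush=1

  # Temporary, on the live kernel, for testing
  echo zfs_nocacheflush/W0t1 | mdb -kw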
We're running into a performance problem with ZFS over NFS. When working
with many small files (i.e. unpacking a tar file with source code), a
Thor (over NFS) is about 4 times slower than our aging existing storage
solution, which isn't exactly speedy to begin with (17 minutes versus 3
minutes)
Tim wrote:
> On Mon, Jan 19, 2009 at 1:12 PM, Bob Friesenhahn
> <bfrie...@simple.dallas.tx.us> wrote:
>
> On Mon, 19 Jan 2009, Adam Leventhal wrote:
>
>
> Are you telling me zfs is deficient to the point it can't handle basic
> right-sizing like
On Mon, Jan 19, 2009 at 01:35:22PM -0600, Tim wrote:
> > > Are you telling me zfs is deficient to the point it can't handle basic
> > > right-sizing like a $15 SATA RAID adapter?
> >
> > How do these $15 SATA RAID adapters solve the problem? The more details you
> > could provide the better, obviously.
So personally I find ZFS to be fantastic, it's only missing three
features from my ideal filesystem:
1) The ability to easily recover the portions of a filesystem that are
still intact after a catastrophic failure (It looks like zfs scrub can
do this as long as a damaged pool could be imported,
> "b" == Blake writes:
b> removing the zfs cache file located at /etc/zfs/zpool.cache
b> might be an emergency workaround?
just the opposite. There seem to be fewer checks blocking the
autoimport of pools listed in zpool.cache than on 'zpool import'
manual imports. I'd expect th
On Mon, Jan 19, 2009 at 1:12 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 19 Jan 2009, Adam Leventhal wrote:
>
>>> Are you telling me zfs is deficient to the point it can't handle basic
>>> right-sizing like a $15 SATA RAID adapter?
>>
>> How do these $15 SATA RAID a
On Mon, Jan 19, 2009 at 12:39 PM, Adam Leventhal wrote:
>
> Sorry, I must have missed your point. I thought that you were saying that
> HDS, NetApp, and EMC had a different model. Were you merely saying that the
> software in those vendors' products operates differently than ZFS?
>
Gosh, was the
I think this is probably true, and I suspect that Sun is also
targeting media warehousing shops like some of the big social
networking/video sites, where storage is coming online too fast to
make manual tuning a sensible thing to do. Look at many enterprise
storage graphs showing bytes on the x an
> "tr" == Timothy Renner writes:
tr> zfs set copies=2 zfspool/test2
'copies=2' says things will be written twice, but regardless of
discussion about where the two copies are written, copies=2 says
nothing at all about being able to *read back* your data if one of the
copies disappears.
On Mon, 19 Jan 2009, Adam Leventhal wrote:
>
>> Are you telling me zfs is deficient to the point it can't handle basic
>> right-sizing like a $15 SATA RAID adapter?
>
> How do these $15 SATA RAID adapters solve the problem? The more details you
> could provide the better, obviously.
It is really qu
On Mon, 19 Jan 2009, Tim wrote:
> Remember that one time when I talked about limiting snapshots to protect a
> user from themselves, and you joined into the fray of people calling me a
> troll? Can you feel the irony oozing out between your lips, or are you
> completely oblivious to it?
Tim,
I
Miles,
that's correct - I got muddled in the details of the thread.
I'm not necessarily suggesting this, but is this an occasion when
removing the zfs cache file located at /etc/zfs/zpool.cache might be
an emergency workaround?
Tom, please don't try this until someone more expert replies to my qu
> BWAHAHAHAHA. That's a good one. "You don't need to setup your raid, that's
> micro-managing, we'll do that."
>
> Remember that one time when I talked about limiting snapshots to protect a
> user from themselves, and you joined into the fray of people calling me a
> troll?
I don't remember thi
> > > Since it's done in software by HDS, NetApp, and EMC, that's complete
> > > bullshit. Forcing people to spend 3x the money for a "Sun" drive that's
> > > identical to the seagate OEM version is also bullshit and a piss-poor
> > > answer.
> >
> > I didn't know that HDS, NetApp, and EMC all all
>> Creating a slice, instead of using the whole disk, will cause ZFS to
>> not enable write-caching on the underlying device.
> Correct. Engineering trade-off. Since most folks don't read the manual,
> or the best practices guide, until after they've hit a problem, it is really
> just a CYA entr
> "nk" == Nathan Kroenert writes:
> "b" == Blake writes:
nk> I'm not sure how you can class it a ZFS fail when the Disk
nk> subsystem has failed...
The disk subsystem did not fail and lose all its contents. It just
rebooted a few times.
b> You can get a sort of redundanc
On Mon, Jan 19, 2009 at 11:02 AM, Adam Leventhal wrote:
> > "The recommended number of disks per group is between 3 and 9. If you
> have
> > more disks, use multiple groups."
> >
> > Odd that the Sun Unified Storage 7000 products do not allow you to
> control
> > this, it appears to put all the h
On Mon, Jan 19, 2009 at 11:05 AM, Adam Leventhal wrote:
> > Since it's done in software by HDS, NetApp, and EMC, that's complete
> > bullshit. Forcing people to spend 3x the money for a "Sun" drive that's
> > identical to the seagate OEM version is also bullshit and a piss-poor
> > answer.
>
> I
> "edm" == Eric D Mudama writes:
edm> If, instead of having ZFS manage these differences, a user
edm> simply created slices that were, say, 98%
if you're willing to manually create slices, you should be able to
manually enable the write cache, too, while you're in there, so I
wouldn't
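For completeness: on Solaris the write cache can usually be toggled from
format's expert mode, although whether the menu shows up depends on the disk
and driver, so treat this as a sketch:

  format -e
  # pick the disk, then:
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable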
Jim Dunham wrote:
> Richard,
>
>> Ross wrote:
>>
>>> The problem is they might publish these numbers, but we really have
>>> no way of controlling what number manufacturers will choose to use
>>> in the future.
>>>
>>> If for some reason future 500GB drives all turn out to be slightly
> Since it's done in software by HDS, NetApp, and EMC, that's complete
> bullshit. Forcing people to spend 3x the money for a "Sun" drive that's
> identical to the seagate OEM version is also bullshit and a piss-poor
> answer.
I didn't know that HDS, NetApp, and EMC all allow users to replace the
> "The recommended number of disks per group is between 3 and 9. If you have
> more disks, use multiple groups."
>
> Odd that the Sun Unified Storage 7000 products do not allow you to control
> this, it appears to put all the hdd's into one group. At least on the 7110
> we are evaluating there is
Asif Iqbal wrote:
> On Mon, Jan 19, 2009 at 10:47 AM, Andrew Gabriel
> wrote:
>> I've seen a webpage (a blog, IIRC) which compares the performance of
>> RAIDZ with differing numbers of disks in each RAIDZ group. I can't now
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Gu
Andrew,
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
"The recommended number of disks per group is between 3 and 9. If you have more
disks, use multiple groups."
Odd that the Sun Unified Storage 7000 products do
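To illustrate what "multiple groups" means in practice, a hedged sketch with
placeholder device names, splitting twelve disks into two 6-disk raidz groups
in a single pool:

  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0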
Richard,
> Ross wrote:
>> The problem is they might publish these numbers, but we really have
>> no way of controlling what number manufacturers will choose to use
>> in the future.
>>
>> If for some reason future 500GB drives all turn out to be slightly
>> smaller than the current ones you'
Ross wrote:
> The problem is they might publish these numbers, but we really have no way of
> controlling what number manufacturers will choose to use in the future.
>
> If for some reason future 500GB drives all turn out to be slightly smaller
> than the current ones you're going to be stuck. R
It really is sad when you have to start filtering technical mailing
lists to weed out the junk.
On Sun, Jan 18, 2009 at 4:17 PM, JZ wrote:
> Obama just made a good speech.
> I hope you were watching TV...
>
> Best,
> z
This makes sense. Given a set of devices, ZFS can only write to free
blocks. If the only free blocks are close together or on the same
dev, then the protection can't be as great. This is quite likely to
happen on a fullish disk. copies > 1, however, is still better than
none (a single dropped block
On Mon, Jan 19, 2009 at 10:47 AM, Andrew Gabriel wrote:
> I've seen a webpage (a blog, IIRC) which compares the performance of
> RAIDZ with differing numbers of disks in each RAIDZ group. I can't now
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
section:
RAID-Z Config
I've seen a webpage (a blog, IIRC) which compares the performance of
RAIDZ with differing numbers of disks in each RAIDZ group. I can't now
find this, and can't seem to find the right things to get google to
search on. Does anyone recall where this is? ISTR the optimum number of
disks was 5-6.
You can get a sort of redundancy by creating multiple filesystems with
'copies' enabled on the ones that need some sort of self-healing in
case of bad blocks.
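A minimal sketch of that suggestion (names are placeholders); note that
setting copies on an existing filesystem only affects newly written data:

  # New filesystem with two copies of every block
  zfs create -o copies=2 tank/important

  # Existing filesystem; only new writes get the extra copy
  zfs set copies=2 tank/home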
Is it possible to at least present your disks as several LUNs? If you
must have an abstraction layer between ZFS and the block device,
pre
I'm going waaay out on a limb here, as a non-programmer...but...
Since the source is open, maybe community members should organize and
work on some sort of sizing algorithm? I can certainly imagine Sun
deciding to do this in the future - I can also imagine that it's not
at the top of Sun's priori
Z,
> Beloved Tim,
> You challenged me a while ago, as a friend.
> I did what you asked me to do, in the honor of my father.
>
> Best,
> z
Please don't post personal stuff like this or links to wikipedia or
other ephemera/apocrypha to this/any list unless they are relevant.
Thanks... Sean.
The problem is they might publish these numbers, but we really have no way of
controlling what number manufacturers will choose to use in the future.
If for some reason future 500GB drives all turn out to be slightly smaller than
the current ones you're going to be stuck. Reserving 1-2% of spac
Chookiex writes:
> Hi all,
>
> I have 2 questions about ZFS.
>
> 1. I created a snapshot in pool1/data1 and used zfs send/recv to copy it to
> pool2/data2, but I found the USED in zfs list is different:
> NAME USED AVAIL REFER MOUNTPOINT
> pool2/data2 160G 1.44T
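A few commands that usually explain this kind of discrepancy (snapshots held
on one side, or differing compression/copies/recordsize settings, can all make
USED differ between the source and the received copy):

  zfs get used,referenced,compressratio pool1/data1 pool2/data2
  zfs list -t snapshot -r pool1/data1 pool2/data2

  # Properties are carried over by a replication stream (zfs send -R),
  # not by a plain zfs send
  zfs get compression,copies,recordsize pool1/data1 pool2/data2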
Yes, it's the same make and model as most of the other disks in the zpool and
reports the same number of sectors
Toby Thain wrote:
> On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
>
>> Hey, Tom -
>>
>> Correct me if I'm wrong here, but it seems you are not allowing ZFS any
>> sort of redundancy to manage.
Every other file system out there runs fine on a single LUN; when things
go wrong, you have a fsck uti