Has anybody here got any thoughts on how to resolve this problem:
http://www.opensolaris.org/jive/thread.jspa?messageID=261204&tstart=0
It sounds like two of us have been affected by this now, and it's a bit of a
nuisance having your entire server hang when a drive is removed; it makes you
worry about
Richard Elling wrote:
> Rainer Orth wrote:
>
>> Richard Elling writes:
>>
>>
>>
I've found out what the problem was: I didn't specify the -F zfs option to
installboot, so only half of the ZFS bootblock was written. This is a
combination of two documentation bugs and a ter
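For anyone hitting the same thing, the corrected invocations look roughly like this (device paths are placeholders; check the ZFS Admin Guide and man pages for your build):

```shell
# SPARC: write the ZFS bootblock -- the -F zfs flag is the easy part to miss
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t0d0s0

# x86: install the GRUB stages on the root pool disk instead
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
```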
Hello, I've hit this same problem.
Hernan/Victor, I sent you an email asking for the description of this solution.
I've also got important data on my array. I went to b93 hoping there'd be a
patch for this.
I caused the problem in a manner identical to Hernan; by removing a zvol clone.
Exact
On Wed, Jul 23, 2008 at 7:21 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
*SNIP*
>
> Anyway, you can find more anecdotes in the archives of this list.
> IIRC someone else corroborated that he found, among non-DoA drives,
> failures are more likely in the first month than in the second month,
> but
>> SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
> That's a very old release, have you considered upgrading?
> Ian.
>
It was the absolute latest version available when we received the x4500,
and now it is live and supporting a large number of customers. However,
the 2nd unit will arrive nex
Jorgen Lundman writes:
>
> We are having slow performance with the UFS volumes on the x4500. They
> are slow even on the local server. Which makes me think it is (for once)
> not NFS related.
>
>
> Current settings:
>
> SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
>
That's a very ol
We are having slow performance with the UFS volumes on the x4500. They
are slow even on the local server. Which makes me think it is (for once)
not NFS related.
Current settings:
SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
# cat /etc/release
Solaris Express Developer Ed
On Wed, 23 Jul 2008, Miles Nordin wrote:
> the problem is that it's common for a very large drive to have
> unreadable sectors. This can happen because the drive is so big that
> its bit-error-rate matters. But usually it happens because the drive
> is starting to go bad but you don't realize th
> > 3. burn in the raidset for at least one month before trusting the
> > disks to not all fail simultaneously.
> >
> Has anyone ever seen this happen for real? I seriously doubt it will
> happen
> with new drives.
I have seen it happen on my own home ZFS fileserver...
purchased two new
> "ic" == Ian Collins <[EMAIL PROTECTED]> writes:
ic> I'd use mirrors rather than raidz2. You should see better
ic> performance
the problem is that it's common for a very large drive to have
unreadable sectors. This can happen because the drive is so big that
its bit-error-rate mat
On Wed, 23 Jul 2008, Ian Collins wrote:
> I don't know if such a tool exists, but I'm in the process of writing one
> (as part of a larger ACL admin tool) if you are interested.
If there is no standard routine to handle this functionality, I would very
much appreciate a copy of your code...
Thank
On Wed, 23 Jul 2008, Brandon High wrote:
>
> With raidz2, you can grab any two disks. With mirroring, you have to
> grab the correct two.
>
> Personally, with only 4 drives I would use raidz to increase the
> available storage or mirroring for better performance rather than use
> raidz2.
If mirror
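For comparison, the two four-disk layouts being discussed would be created roughly like this (the pool name and device names are placeholders):

```shell
# raidz2: survives any two disk failures; ~2 disks of usable capacity
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0

# two two-way mirrors: generally better random I/O, same usable capacity,
# but losing both disks of the same mirror loses the pool
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
```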
On 23 July, 2008 - Brandon High sent me these 1.3K bytes:
> On Wed, Jul 23, 2008 at 3:21 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
> >> 2. get four disks and do raidz2.
> >>
> >> In addition to increasing MTTF, this is good because if you need
> >> to leave in a hurry, you can grab two o
On Wed, Jul 23, 2008 at 3:21 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
>> 2. get four disks and do raidz2.
>>
>> In addition to increasing MTTF, this is good because if you need
>> to leave in a hurry, you can grab two of the disks and still leave
>> behind a working file server. I t
On Wed, Jul 23, 2008 at 03:20:47PM -0700, Brendan Gregg - Sun Microsystems wrote:
> G'Day Jeff,
>
> On Tue, Jul 22, 2008 at 02:45:13PM -0400, Jeff Taylor wrote:
> > When will L2ARC be available in Solaris 10?
>
> There are no current plans to back port;
Sorry - I should have said that I wasn't
Miles Nordin writes:
>> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:
>
> mh> http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/
>
> that's very helpful. I'll reshop for nForce 570 boards. I think my
> untested guess was an nForce 630 or something, so it probably won't
G'Day Jeff,
On Tue, Jul 22, 2008 at 02:45:13PM -0400, Jeff Taylor wrote:
> When will L2ARC be available in Solaris 10?
There are no current plans to back port; if we were to, I think it would be
ideal (or maybe a requirement) to sync up zpool features:
VER DESCRIPTION
--- -
Thommy M. wrote:
> Richard Gilmore wrote:
>
>> Hello Zfs Community,
>>
>> I am trying to locate if zfs has a compatible tool to Veritas's
>> vxbench? Any ideas? I see a tool called vdbench that looks close, but
>> it is not a Sun tool, does Sun recommend something to customers moving
>> fro
On Wed, Jul 23, 2008 at 2:05 PM, Steve <[EMAIL PROTECTED]> wrote:
> bhigh:
> so the best is 780G?
I'm not sure if it's the best, but it's a good choice. A motherboard
and cpu can be had for about $150. Personally, I'm waiting for the AMD
790GX / SB750 which is due out this month. The 780G has 1 x1
W. Wayne Liauh wrote:
> Is it possible to input the value of zfs:zfs_arc_max in decimal format or
> another more common form (e.g., zfs:zfs_arc_max = 1GB, etc.), in addition
> to the current hex format?
>
>
Parameters set in /etc/system follow the rules as described in the
system(4) man page
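As a sketch of what system(4) already allows: tunables accept decimal, octal, or hex integers, so a 1 GB ARC cap can be written in decimal even though suffixes like "1GB" are not accepted:

```shell
# /etc/system fragment: cap the ARC at 1 GB.
# Hex and decimal forms are equivalent; "1GB"-style suffixes won't parse.
set zfs:zfs_arc_max = 0x40000000
# set zfs:zfs_arc_max = 1073741824    (the same value in decimal)
```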
> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:
mh> http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/
that's very helpful. I'll reshop for nForce 570 boards. I think my
untested guess was an nForce 630 or something, so it probably won't
work.
I would add:
1. do not ge
> "s" == Steve <[EMAIL PROTECTED]> writes:
s> Apart from the other components, the main problem is choosing
s> the motherboard. The number of options is overwhelming and I'm lost.
here is cut-and-paste of my shopping so far:
2008-07-18
via
http://www.logicsupply.com/products/sn1eg
Paul B. Henson writes:
>
> I was curious if there was any utility or library function available to
> evaluate a ZFS ACL. The standard POSIX access(2) call is available to
> evaluate access by the current process, but I would like to evaluate an ACL
> in one process that would be able to determin
I was curious if there was any utility or library function available to
evaluate a ZFS ACL. The standard POSIX access(2) call is available to
evaluate access by the current process, but I would like to evaluate an ACL
in one process that would be able to determine whether or not some other
user ha
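I'm not aware of a standard library call for this either, but one blunt workaround is to evaluate access *as* the other user rather than asking hypothetically. A sketch (requires root; the user name and path are placeholders):

```shell
# Run test(1) as the target user, so the kernel evaluates the full
# permission set (mode bits plus ACLs) for that identity.
if su someuser -c 'test -r /export/data/report.txt'; then
    echo "someuser can read it"
else
    echo "someuser cannot read it"
fi
```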
Richard Gilmore wrote:
> Hello Zfs Community,
>
> I am trying to locate if zfs has a compatible tool to Veritas's
> vxbench? Any ideas? I see a tool called vdbench that looks close, but
> it is not a Sun tool, does Sun recommend something to customers moving
> from Veritas to ZFS and like vxb
Thank you for all the replies!
(and in the meantime I was just having dinner! :-)
To recap:
tcook:
you are right; in fact I'm thinking of using just 3 or 4 drives for now, without
anything else (no cd/dvd, no video card, nothing other than the motherboard
and drives)
the case will be the second choice, but I'll try to
Is it possible to input the value of zfs:zfs_arc_max in decimal format or
another more common form (e.g., zfs:zfs_arc_max = 1GB, etc.), in addition to
the current hex format?
Hello Zfs Community,
I am trying to find out whether ZFS has a tool comparable to Veritas's
vxbench. Any ideas? I see a tool called vdbench that looks close, but
it is not a Sun tool; does Sun recommend something to customers moving
from Veritas to ZFS who like vxbench and its capabilities?
Thanks,
On Wed, Jul 23, 2008 at 12:37 PM, Steve <[EMAIL PROTECTED]> wrote:
> Minimum requisites should be:
> - working well with Open Solaris ;-)
> - micro ATX (I would put in a little case)
> - low power consumption but more important reliable (!)
> - with Gigabit ethernet
> - 4+ (even better 6+) SATA 3Gb/s ports
Steve wrote:
| I've been a fan of ZFS since I read about it last year.
|
| Now I'm building a home fileserver and I'm thinking of going with
| OpenSolaris and eventually ZFS!!
|
| Apart from the other components, the main problem is to choose the
mo
On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
<[EMAIL PROTECTED]> wrote:
>
> Dear Mark/All,
>
> Our trading system is writing to local and/or array volume at 10k
> messages per second.
> Each message is about 700bytes in size.
>
> Before ZFS, we used UFS.
> Even with UFS, there
I am wondering how many SATA controllers most motherboards have for
their built-in SATA ports.
Mine, an ASUS M2A-VM, has four ports, but OpenSolaris reports them as
belonging to two controllers.
I have seen motherboards with 6+ SATA ports, and would love to know if
any of them have more controlle
On Wed, Jul 23, 2008 at 2:37 PM, Steve <[EMAIL PROTECTED]> wrote:
> I'm a fan of ZFS since I've read about it last year.
>
> Now I'm on the way to build a home fileserver and I'm thinking to go with
> Opensolaris and eventually ZFS!!
>
> Apart from the other components, the main problem is to choo
I've been a fan of ZFS since I read about it last year.
Now I'm building a home fileserver and I'm thinking of going with OpenSolaris
and eventually ZFS!!
Apart from the other components, the main problem is choosing the motherboard.
The number of options is overwhelming and I'm lost.
Minimum req
Rainer Orth wrote:
> Richard Elling writes:
>
>
>>> I've found out what the problem was: I didn't specify the -F zfs option to
>>> installboot, so only half of the ZFS bootblock was written. This is a
>>> combination of two documentation bugs and a terrible interface:
>>>
>>>
>> Mainl
[EMAIL PROTECTED] wrote:
> On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
>
>
>> 10,000 x 700 = 7MB per second ..
>>
>> We have this rate for the whole day
>>
>> 10,000 orders per second is the minimum requirement of modern day stock
>> exchanges ...
>>
>> Cache still help us for
On Wed, 23 Jul 2008, [EMAIL PROTECTED] wrote:
> Rainer,
>
> Sorry for your trouble.
>
> I'm updating the installboot example in the ZFS Admin Guide with the
> -F zfs syntax now. We'll fix the installboot man page as well.
>
> Mark, I don't have an x86 system to test right now, can you send me the
I wrote:
> Bill Sommerfeld wrote:
> > On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > > I ran a scrub on a root pool after upgrading to snv_94, and got
> > > > checksum errors:
> > >
> > > Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> > > on a system that is
Cindy,
> Sorry for your trouble.
no problem.
> I'm updating the installboot example in the ZFS Admin Guide with the
> -F zfs syntax now. We'll fix the installboot man page as well.
Great, thanks.
Rainer
-
Rain
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Mark, I don't have an x86 system to test right now, can you send
me the correct installgrub syntax for booting a ZFS file system?
T
Richard Elling writes:
> > I've found out what the problem was: I didn't specify the -F zfs option to
> > installboot, so only half of the ZFS bootblock was written. This is a
> > combination of two documentation bugs and a terrible interface:
> >
>
> Mainly because there is no -F option?
Hu
Would adding a dedicated ZIL/SLOG (what is the difference between those two
exactly? Is there one?) help meet your requirement?
The idea would be to use some sort of relatively large SSD drive of some
variety to absorb the initial write-hit. After hours, when things quiet down
(or perhaps during
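For what it's worth: the ZIL is ZFS's intent-log mechanism, which lives in the main pool by default; a "slog" usually refers to a separate device dedicated to holding it. A rough sketch of attaching one (the pool name and devices are placeholders for your pool and SSDs):

```shell
# Attach a separate log (slog) device so synchronous writes land on a
# fast SSD instead of the main disks.
zpool add tank log c2t0d0

# Or mirror the slog, so a failed SSD doesn't cost recent sync writes:
zpool add tank log mirror c2t0d0 c2t1d0
```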
Rainer Orth wrote:
> Rainer Orth <[EMAIL PROTECTED]> writes:
>
>
>>> installboot on the new disk and see if that fixes it.
>>>
>> Unfortunately, it didn't. Reconsidering now, I see that I ran installboot
>> against slice 0 (reduced by 1 sector as required by CR 6680633) instead of
>> sli
Jürgen Keil <[EMAIL PROTECTED]> writes:
> > Recently, I needed to move the boot disks containing a ZFS root pool in an
> > Ultra 1/170E running snv_93 to a different system (same hardware) because
> > the original system was broken/unreliable.
> >
> > To my dismay, unlike with UF
On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
> 10,000 x 700 = 7MB per second ..
>
> We have this rate for the whole day
>
> 10,000 orders per second is the minimum requirement of modern day stock
> exchanges ...
>
> Cache still help us for ~1 hours, but after that who will help
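The arithmetic being quoted checks out; as a quick sanity check:

```shell
# 10,000 messages/sec at ~700 bytes each
msgs=10000
bytes=700
echo "$((msgs * bytes / 1000000)) MB/s sustained"      # 7 MB/s
echo "$((msgs * bytes * 3600 / 1000000000)) GB/hour"   # ~25 GB every hour
```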
> Recently, I needed to move the boot disks containing a ZFS root pool in an
> Ultra 1/170E running snv_93 to a different system (same hardware) because
> the original system was broken/unreliable.
>
> To my dismay, unlike with UFS, the new machine wouldn't boot:
>
> WARNING: pool 'root' could no
One can carve furniture with an axe, especially if it's razor-sharp,
but that doesn't make it a spokeshave, plane and saw.
I love star office, and use it every day, but my publisher uses
Frame, so that's what I use for books.
--dave
W. Wayne Liauh wrote:
>>I doubt so. Star/OpenOffice are wor
Rainer Orth <[EMAIL PROTECTED]> writes:
> > installboot on the new disk and see if that fixes it.
>
> Unfortunately, it didn't. Reconsidering now, I see that I ran installboot
> against slice 0 (reduced by 1 sector as required by CR 6680633) instead of
> slice 2 (whole disk). Doing so doesn't f
On Tue, Jul 22, 2008 at 10:44 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
> More than anything, Bob's reply is my major feeling on this. Dedup may
> indeed turn out to be quite useful, but honestly, there's no broad data
> which says that it is a Big Win (tm) _right_now_, compared to finishing
> o
On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
> 10,000 x 700 = 7MB per second ..
>
> We have this rate for the whole day
>
> 10,000 orders per second is the minimum requirement of modern day stock
> exchanges ...
>
> Cache still help us for ~1 hours, but after that who will help
> txt_time/D
mdb: failed to dereference symbol: unknown symbol name
> txg_time/D
mdb: failed to dereference symbol: unknown symbol name
Am I doing something wrong
Robert Milkowski wrote:
Hello Tharindu,
Wednesday, July 23, 2008, 6:35:33 AM, you wrote:
TRB> Dear Mark/All,
TRB> Our
10,000 x 700 = 7MB per second ..
We have this rate for the whole day
10,000 orders per second is the minimum requirement of modern day stock
exchanges ...
Cache still help us for ~1 hours, but after that who will help us ...
We are using 2540 for current testing ...
I have tried same with
Hello Tharindu,
Wednesday, July 23, 2008, 6:35:33 AM, you wrote:
TRB> Dear Mark/All,
TRB> Our trading system is writing to local and/or array volume at 10k
TRB> messages per second.
TRB> Each message is about 700bytes in size.
TRB> Before ZFS, we used UFS.
TRB> Even with UFS, there was every 5
The OS's / (root) is on a mirror of /dev/dsk/c1t0d0s0 and /dev/dsk/c1t1d0s0,
and then I created home_pool using a mirror; here is the mirror information.
  pool: omp_pool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        omp_pool    ONLINE       0
> with other Word files. You will thus end up seeking all over the disk
> to read _most_ Word files. Which really sucks.
> very limited, constrained usage. Disk is just so cheap, that you
> _really_ have to have an enormous amount of dup before the performance
> penalties of dedup are co