So I can manage the file system mounts/automounts using the legacy option, but I can't manage the auto-import of the pools. Or I should delete the zpool.cache file during boot.
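The two mechanisms being contrasted can be sketched as follows. This is a minimal sketch, assuming a Solaris host with ZFS and a hypothetical pool named "tank" (not a pool from this thread):

```shell
# Put a dataset under legacy mount control so ZFS no longer automounts it;
# it is then mounted manually (or via /etc/vfstab) like any other file system:
zfs set mountpoint=legacy tank/data
mount -F zfs tank/data /mnt/data

# The pool itself is still auto-imported at boot via the cache file.
# Removing the cache file before ZFS starts is the workaround described
# above for suppressing the auto-import:
rm /etc/zfs/zpool.cache

# After the next boot the pool then has to be imported explicitly:
zpool import tank
```

These commands need root on a live Solaris system; deleting zpool.cache affects every pool on the host, not just one.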
This message posted from opensolaris.org
Hi,
I deployed ZFS on our mailserver recently, hoping for eternal peace after
running on UFS and moving files with each TB added.
It is a mailserver - its maildirs are on a ZFS pool:

               capacity     operations    bandwidth
pool         used  avail   read
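The truncated columns above are the standard header of `zpool iostat`. A minimal invocation, with an assumed pool name, looks like:

```shell
# Print per-device I/O statistics for one pool, refreshed every 5 seconds.
# "tank" is an assumed pool name, not the one from the message above.
zpool iostat -v tank 5
```

Without the interval argument it prints a single snapshot averaged since the pool was imported.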
Lieven De Geyndt wrote:
So I can manage the file system mounts/automounts using the legacy option, but I can't manage the auto-import of the pools. Or I should delete the zpool.cache file during boot.
Doesn't this come back to the problem which is self-induced, namely
that they are trying
On September 7, 2006 6:55:48 PM +1000 James C. McPherson [EMAIL PROTECTED]
wrote:
Doesn't this come back to the problem which is self-induced, namely
that they are trying a poor man's cluster?
If you want cluster functionality then pay for a proper solution.
If you can't afford a proper
We are trying to obtain a mutex that is currently held
by another thread trying to get memory.
Hmm, reminds me a bit of the zvol swap hang I got
some time ago:
http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150
I guess if the other thread is stuck trying to get memory, then
Lieven De Geyndt wrote:
I know this is not supported. But we try to build a safe configuration
until ZFS is supported in Sun Cluster. The customer did order SunCluster,
but needs a workaround till the release date. And I think it must be
possible to set up.
So build them a configuration
Jürgen Keil wrote:
We are trying to obtain a mutex that is currently held
by another thread trying to get memory.
Hmm, reminds me a bit of the zvol swap hang I got
some time ago:
http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150
I guess if the other thread is stuck trying
Hello Mark,
Thursday, September 7, 2006, 12:32:32 AM, you wrote:
MM Robert Milkowski wrote:
On Wed, 6 Sep 2006, Mark Maybee wrote:
Robert Milkowski wrote:
::dnlc!wc
1048545 3145811 76522461
Well, that explains half your problem... and maybe all of it:
After I reduced vdev
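The `::dnlc!wc` check above counts entries in the directory name lookup cache from a kernel mdb session. Roughly equivalent commands from the shell (root required, Solaris only; the tunable value shown is an arbitrary example, not a recommendation):

```shell
# Count DNLC entries, mirroring the ::dnlc!wc pipeline used above:
echo '::dnlc' | mdb -k | wc -l

# Show the configured name-cache size tunable:
echo 'ncsize/D' | mdb -k

# To cap the cache, ncsize can be set in /etc/system and the box
# rebooted, e.g.:
#   set ncsize = 0x20000
```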
Ivan,
What mail clients use your mail server? You may be seeing the
effects of:
6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
parallel IOs when fsyncing
This bug was fixed in nevada build 43, and I don't think it made it into
s10 update 2. It will, of course, be in
On Thu, Sep 07, 2006 at 11:32:18AM -0700, Darren Dunham wrote:
I know that VxVM stores the autoimport information on the disk
itself. It sounds like ZFS doesn't and it's only in the cache (is this
correct?)
I'm not sure what 'autoimport' is, but ZFS always stores enough
information on the
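The point about on-disk information can be seen directly: `zpool import` discovers pools by scanning device labels, with no cache file involved. A minimal sketch, with an assumed pool name:

```shell
# Scan attached devices for importable pools; ZFS reads the on-disk
# labels, so discovery works even with no zpool.cache at all:
zpool import

# Import a discovered pool by name. -f overrides the "pool may be in
# use on another system" safety check - dangerous if the other host
# really is still alive. "tank" is an assumed pool name:
zpool import -f tank
```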
On 9/7/06, Torrey McMahon [EMAIL PROTECTED] wrote:
Nicolas Dorfsman wrote:
The hard part is getting a set of simple requirements. As you go into
more complex data center environments you get hit with older Solaris
revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
most
[EMAIL PROTECTED] wrote:
This is the case where I don't understand Sun's politics at all: Sun
doesn't offer really cheap JBOD which can be bought just for ZFS. And
don't even tell me about 3310/3320 JBODs - they are horribly expensive :-(
Yep, multipacks are EOL for some time now -- killed by
Hi, thanks for the response.
As this is a closed-source mailserver (CommuniGate Pro), I can't give a 100% answer,
but the writes that I see that take too much time (15-30secs) are writes from
temp queue to final storage, and from my understanding, they are sync so the
queue manager can guarantee they
The bigger problem with system utilization for software RAID is the cache, not
the CPU cycles proper. Simply preparing to write 1 MB of data will flush half
of a 2 MB L2 cache. This hurts overall system performance far more than the few
microseconds that XORing the data takes.
(A similar
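As a toy illustration of the XOR at the heart of software RAID parity (the byte values here are arbitrary, and real implementations of course work on whole blocks, not single bytes):

```shell
# Parity of two data bytes is their bitwise XOR; if either byte is
# lost, it can be rebuilt from the parity and the survivor.
a=$(( 0x5A )); b=$(( 0x3C ))
parity=$(( a ^ b ))
printf 'parity      = 0x%02X\n' "$parity"          # prints: parity      = 0x66
printf 'recovered a = 0x%02X\n' $(( parity ^ b ))  # prints: recovered a = 0x5A
```

The arithmetic is trivial; as the paragraph above notes, the real cost is streaming the data blocks through the CPU cache to compute it.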
On Thu, Sep 07, 2006 at 01:09:47PM -0700, Frank Cusack wrote:
That zfs needs to address.
What if I simply lose power to one of the hosts, and then power is restored?
Then use a layered clustering product - that's what this is for. For
example, SunCluster doesn't use the cache file in the
On Thu, Sep 07, 2006 at 01:52:33PM -0700, Darren Dunham wrote:
What are the problems that you see with that check? It appears similar
to what VxVM has been using (although they do not use the `hostid` as
the field), and that appears to have worked well in most cases.
I don't know what
A determined administrator can always get around any checks and cause problems.
We should do our very best to prevent data loss, though! This case is
particularly bad since simply booting a machine can permanently damage the pool.
And why would we want a pool imported on another host, or not
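The hostid-style guard being debated could look roughly like this. This is a hypothetical sketch of the check's logic only: "recorded_hostid" is a stand-in value, not something read from a real pool label.

```shell
# Hypothetical guard: refuse an import when the hostid recorded in the
# pool label differs from this machine's, unless the admin forces it.
recorded_hostid="not-this-host"     # stand-in for the value in the label
current_hostid=$(hostid)
if [ "$recorded_hostid" != "$current_hostid" ]; then
    echo "pool last accessed by host $recorded_hostid; refusing import (force to override)"
fi
```

As the thread notes, a determined administrator can always force past such a check; its value is in making the destructive path deliberate rather than a side effect of booting.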