On 6/28/05, Joe Cheng <[EMAIL PROTECTED]> wrote:
> I said *if* InnoDB is a journaling database that doesn't protect you
> against crashes, then they don't know what they're doing.  The very
> point of journaling is to protect against this exact problem.

Agreed.

Incidentally, you can control the flush rules for InnoDB, which is
what I've done for my James install.  I only found this from listening
to the Top 5 Performance Tips webinar, not from the docs.  You can
set it to:
- flush from memory to the journal log at the completion of each
transaction (true ACID)
- flush every second
- flush at some longer interval

I disabled the transaction sanctity, so I get the performance benefit
of row-level locking without any potential performance hit from
synchronous journaling.  I wouldn't recommend this by default, but I
found it interesting.
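
If I'm remembering the variable name right, the knob is
innodb_flush_log_at_trx_commit; the my.cnf snippet below is from
memory, so check the manual before trusting the exact values:

  [mysqld]
  # 1 = flush the log to disk at every commit (true ACID)
  # 2 = write the log at commit, flush to disk about once a second
  # 0 = write and flush the log about once a second
  innodb_flush_log_at_trx_commit = 2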

> Again, I'm not talking about the transactionality aspect and losing
> in-flight messages (although perhaps now we should!).  I am talking
> about journaling and whether arbitrary amounts of data can be lost in a
> database failure, and whether this problem can be avoided by simply
> defaulting to InnoDB.

Transactions and journaling are rather related, though I agree we're
not discussing whether James is transaction-based.  James is not,
regardless of what we set the repository to, and it's an SMTP server
so it doesn't need to be... the protocol is supposed to support
retries and duplicate messages.  The code throughout errs on the side
of duplicates instead of losing a message.

I just would prefer to have an InnoDB table come right back online
after a crash rather than have to run myisamchk.  It's an easy command,
but until you learn it, it is somewhat frustrating.  Well, it was
frustrating to me :)
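
(By that I mean something along the lines of "myisamchk -r
/var/lib/mysql/james/*.MYI" with mysqld stopped -- the path is just an
example, so adjust for your datadir and check the flags for your
version.)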

> >PS: it is REALLY easy to set up a slave mysql server that replicates your
> >original server for better availability.
> >
> >
> Yes, it's also very easy to tell James to use InnoDB.  I'm just assuming
> that most users will run James in whatever mode is the default.

I would be curious about how having a slave would solve any of these
problems.  Aside from the data being potentially lagged, how do you
cut over safely?  The MySQL driver has failover support, but only for
read-only connections.  Yes, you can override that now, but then you
potentially lose messages because you are reading and writing against
a lagged database, and you would have to just dump the master rather
than figure out what messages had not yet been replicated.
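
If anyone does want to experiment with it, Connector/J will take a
comma-separated host list plus a failOverReadOnly property to override
the read-only behavior; the host and database names below are made up:

  jdbc:mysql://db-master,db-slave/james?autoReconnect=true&failOverReadOnly=false

...which is exactly the setup where the replication-lag problem above
bites you.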

Anyway, I'm dealing with this mysql failover for another project and
can't seem to avoid losing data.  Maybe we should just all switch
to NDBCLUSTER table types (MySQL Cluster) and never have database
failures again!
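
(In theory that's just an ALTER TABLE ... ENGINE = NDBCLUSTER away,
assuming you already have a cluster stood up, which is of course the
hard part.)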

-- 
Serge Knystautas
Lokitech >> software . strategy . design >> http://www.lokitech.com
p. 301.656.5501
e. [EMAIL PROTECTED]

