Emmanuel Cecchet wrote:
Jan-willem,

It seems that there are a couple of issues:

We normally use autocommit, but libmysequoia doesn't like it (it gives the error 'Operation not allowed after ResultSet closed'), so we ran without autocommit but didn't perform manual commits...

Ok, so it looks like you don't really understand what transactions are about. Setting autocommit to false starts a transaction. If you never commit, nothing will ever be persisted in the database (all your updates will be rolled back when the connection dies). The 'Operation not allowed after ResultSet closed' error means you are trying to access a ResultSet after it has been closed, which is likely a problem in your application logic.
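To make the point concrete: Sequoia is JDBC middleware and the snippets in this thread are Perl DBI, but the behavior Emmanuel describes is generic transaction semantics. A minimal sketch using Python's sqlite3 (my choice purely for illustration, not anything Sequoia-specific) shows an uncommitted change vanishing when the connection dies:

```python
import sqlite3, tempfile, os

# A file-backed database so a second connection can check what was persisted.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER)")
conn.commit()

# This INSERT implicitly opens a transaction that is never committed.
conn.execute("INSERT INTO t VALUES (1)")
conn.close()  # connection dies without commit -> the INSERT is rolled back

conn2 = sqlite3.connect(path)
rows = conn2.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)  # 0 -- nothing was persisted
```

The same applies to DBI with AutoCommit => 0: every change stays invisible and is discarded unless $dbh->commit is called before the connection goes away.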


I do know what transactions are. We just don't use them for our normal database. The claim that libmysequoia can be used to run your applications without changing a line of code just isn't true. When autocommit is true, our normal Perl code that fetches a list of rows, something like this:

 my $sth = $sbh->prepare('select * from table where field = ?');
 $sth->execute($var);    # fails here, before any fetchrow() call

causes the 'Operation not allowed after ResultSet closed' error, even before we get to $sth->fetchrow().

What we found is that Sequoia has a tendency to stop a backend (crash) on trivial things like a typo in an SQL statement.

No, it does not. A backend is disabled only when it does not behave like the others (one fails while the others succeed). If you send garbage to the cluster, all nodes fail the same way and the statement is simply rejected. In your scenario, where you start a single big transaction, the transaction is aborted after the first error on a write (typical database behavior), so you have to roll back the transaction and start a new one if you want to make progress.

In the console (bin/console.sh):

 select * form table;

 show tables
 like '%this%'
 ;

Both caused a node shutdown (the first is a typo, the second a multi-line statement).

Combined with the trouble of performing a dump/restore whenever a backend loses its state (not a quick job for a multi-gigabyte database), it isn't the solution for us, at least not at the moment.

A backend should never lose its state unless a real failure happens. The backup operation should not take more time than for a single database instance, where I guess you should also be doing backups.

We have a slave DB in case of total failure, but otherwise we can get away with a repair of the MyISAM files (most of the time).

We also had a lot of problems getting Appia to work correctly, but that's probably for another forum?
Yes, you can post on [EMAIL PROTECTED]. Group communication setup is not trivial.
Although there is a lot of documentation, it's still a bit of a hit-and-miss affair to get things up and running, especially if you want to combine Sequoia, Appia, and libmysequoia. I could have done with an idiot's guide for that :-)
Yes, you are right, we are certainly lacking documentation on these aspects. All contributions are welcome to help the community.

I'll see if I can reserve some time to write our experiences down in a consistent fashion. That should at least help others avoid a lot of pitfalls while setting things up.

Problem is that research is always something that gets squeezed in between the other projects...


Thanks for your feedback,
Emmanuel

You're welcome :-)

Jan-willem


_______________________________________________
Sequoia mailing list
[email protected]
https://forge.continuent.org/mailman/listinfo/sequoia
