RE: MyISAM or InnoDB

2002-09-26 Thread Orr, Steve

PROVOCATIVE RANT

MySQL doesn't provide native support for transactions; it depends on the
BerkeleyDB or InnoDB table types. 

Does MySQL really provide TRUE row-level locking? That's the claim, but is it
valid? The InnoDB web site seems more accurate in explaining that locking is
a next-key locking mechanism that is index based. Here's what the InnoDB
manual says: "InnoDB does the row level locking so that when it searches or
scans an index of a table, it sets shared or exclusive locks on the index
records it encounters. Thus the row level locks are more precisely called
index record locks." 
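That distinction is easy to demonstrate. Here is a minimal two-session sketch
(table and values are hypothetical; syntax as of MySQL 3.23/4.0 with InnoDB)
showing an index-record/next-key lock blocking an insert that touches no row
the first session read:

```sql
-- Session 1: take shared next-key locks on an index range
CREATE TABLE t (a INT NOT NULL, PRIMARY KEY (a)) TYPE = InnoDB;
INSERT INTO t VALUES (10), (20);
BEGIN;
SELECT * FROM t WHERE a > 15 LOCK IN SHARE MODE;
-- InnoDB locks the index record 20 AND the gap below it.

-- Session 2: no row named here was read by Session 1, yet this insert
-- waits, because 17 falls in the locked gap between 10 and 20.
INSERT INTO t VALUES (17);
```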

I developed some of my own benchmark tests which intentionally stress
transactions in an artificial manner. The results were that MySQL with
InnoDB didn't even come close to the performance of Oracle. Maybe my tests
weren't very realistic and maybe a real world application won't encounter
locking problems but I wouldn't just assume that InnoDB will scale like
Oracle or meet the needs of your application. You've got to test it!

Open-source database providers readily admit that their databases are not
yet geared toward powering high-end enterprise systems, such as ERP and CRM
applications. From an Infoworld article about open-source databases entitled
"Finding an opening":
http://www.infoworld.com/articles/pl/xml/02/09/23/020923pldatabses.xml

One more thing to rant on... regarding benchmark tests...
http://www.mysql.com/doc/en/MySQL-PostgreSQL_bench_marks.html 
Quoting from this page: "We know of two benchmark tests that claim that
PostgreSQL performs better than MySQL Server. These both were multi-user
tests, a test that we here at MySQL AB haven't had time to write and include
in the benchmark suite, mainly because it's a big task to do this in a
manner that is fair to all databases." 

It's funny how MySQL AB just hasn't had time to write a multi-user
benchmark. Based on the age of the link they haven't had time for a long
time now. Yet they say benchmarking is important? Give me a break! 

What we desperately need are some TPC benchmarks for the open-source
databases: benchmarks that are independently audited and verified. 

/PROVOCATIVE RANT


-Original Message-
From: Josh Trutwin [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 25, 2002 10:03 AM
To: [EMAIL PROTECTED]
Subject: Re: MyISAM or InnoDB



 We are moving a database over from Oracle to MySQL 4.0.3 and I was
 wondering if I should use MyISAM or InnoDB.  The database currently has
 800-900 updates and inserts done to it in a day along with about 1000
 selects.  Would I run into any table locking problems with MyISAM?

If you want to support transactions and/or foreign key constraints then
you have to go InnoDB.  As for locking problems, I would guess not, but
I'll leave that to more experienced list members.
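Converting an existing table is a one-statement change; a hedged sketch with
a hypothetical table name (syntax as of MySQL 3.23/4.0):

```sql
-- Switch the table type from MyISAM to InnoDB to get transactions
ALTER TABLE orders TYPE = InnoDB;

-- Foreign key constraints are then declared against InnoDB tables;
-- the referenced and referencing columns need indexes.
```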

Josh

 Thanks in advance.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Orr, Steve

Background:
I've developed a simplistic Perl program to test database performance with
concurrent session queries. The queries involve inserts, updates, and
deletes in order to test database performance in an OLTP multi-user ACID
compliant scenario. Obviously this is not a real world test but it does
stress the database engine's ability to manage transactions so it is
somewhat valid for comparison purposes. 

Problem:
When I do an insert/update/delete I sporadically get the following:
"DBD::mysql::st execute failed: Deadlock found when trying to get lock; Try
restarting transaction at dafunc.pm line 340..." The word deadlock is
misleading because all the database changes are based on a session number,
ensuring that no session is trying to change data that is also being changed
by another session. It appears a timeout is occurring before the shared row
level locks are acquired. Changing innodb_lock_wait_timeout doesn't seem to
do much. How do I tune this? 

Given table locking problems associated with MyISAM, it was thought that
InnoDB would perform better but in my tests performance is worse. It appears
InnoDB is taking a long time to acquire shared row level locks and is timing
out. If that's the case then the benefit of row level locks over table level
locks is more than offset by the internal InnoDB overhead to manage the
locks. Any other explanations? Any known performance issues with InnoDB? Any
Perl DBI driver performance issues?

Here's a snippet of the Perl code for the curious:
-
## Update by session, rand_val...
sub updSessionRand {
 my $rtnval = 0;
 eval {
  $estart   = time();
  my $dbh   = $_[1];
  $sess_val = $_[2];
  $sqlStmt  = "UPDATE bench_data SET text_val='updated text by rand_val',
  timestamp_val=$timestamp
  WHERE sess_val = ? AND rand_val BETWEEN ? AND ?";

  my $stmtHdl = $dbh->prepare($sqlStmt);
  $stmtHdl->execute($sess_val, $sess_val+900, $sess_val+1500);
  $dbh->commit();
  $edone  = time();
  $totsec = $edone - $estart;
  print "Session:$sess_val, upd02, seconds:$totsec\n";
 };
 if ($@) {
    warn "Session $sess_val upd02 failed.\n $@";
    $rtnval = 1;
 }
 return $rtnval;
}
-

(Side Note: In a separate process I found out that the syntax SELECT ...
FOR UPDATE produces exclusive locks so I changed it to SELECT ... LOCK IN
SHARE MODE and that helped matters. I also tried setting the transaction
isolation level to serializable but that was worse.)
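For reference, the two locking reads the side note compares look like this
(hypothetical table t; both take next-key locks, but of different strength):

```sql
-- Exclusive locks: conflict with any other lock on the same records
SELECT * FROM t WHERE sess_val = 42 FOR UPDATE;

-- Shared locks: other sessions may hold the same shared locks
-- concurrently, which is why this variant "helped matters"
SELECT * FROM t WHERE sess_val = 42 LOCK IN SHARE MODE;
```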

I am running MySQL version 3.23.52-max-log with Linux 2.4.18-3 
(RedHat 7.3.2.96-110) on 2 PentiumIII processors with 2GB RAM.

Here are some current innodb related my.cnf settings:
set-variable = innodb_lock_wait_timeout=300
innodb_flush_log_at_trx_commit=1
set-variable = innodb_buffer_pool_size=384M
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_thread_concurrency=4


TIA! 
Steve Orr





RE: Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Orr, Steve

Hello again Heikki and thanks for your informative reply.

Regarding...
 innodb_flush_log_at_trx_commit=2
This is not an option as we must guarantee no lost transactions. But I will
test it out of curiosity just to see what the performance difference is. 

Regarding...
 InnoDB uses next-key locking to ensure serializability and that 'phantom
 rows' do not appear. You can get lock conflicts even if the queries
 seemingly would not overlap.
Does this mean that InnoDB is locking the next leaf in the B-Tree? That
would explain the problem as those rows could be updated by other sessions.
If that's the case then I think the next-key locking architecture is the
problem because it introduces artificial deadlocks on heavily used tables
and indexes that would otherwise not occur. 
(See http://portal.acm.org/citation.cfm?id=335461&dl=ACM&coll=portal#)

Using the same code on the same machine I'm getting dramatically better
performance with PostgreSQL and Oracle, both of which implement multi-version
concurrency control with an ANSI isolation level of read committed. I
understand that this isolation level allows for unrepeatable reads, but this
is easily overcome programmatically (if needed). It seems like the repeatable
read isolation level isn't as practical and isn't really needed that often.
Based on the workarounds you listed in the "coping with deadlocks" link, I
don't see any way around my performance problem. (I had already tried
transaction resubmission but it just perpetuates the problem.) 

If the repeatable read isolation level presents a performance problem that I
can't work around then I'd rather have the read committed isolation level.
Are there any plans to enable the read committed isolation level in InnoDB?
It seems like read committed is the most commonly implemented isolation
level amongst the other database vendors so what was behind the decision to
implement the repeatable read isolation level in InnoDB? Just curious. :-)

In the link you gave you state:
"You can get deadlocks even in the case of transactions which just insert or
delete a single row. That is because these operations are not really
'atomic': they automatically set locks on the (possibly several) index
records of the row inserted/deleted."

So... if these operations are not atomic then does that mean that MySQL
still does not pass the ACID test even with InnoDB?

Thanks again and I'm eagerly awaiting your reply.


Respectfully,
Steve Orr





-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 9:05 AM
To: [EMAIL PROTECTED]
Subject: Re: Performance Problems with InnoDB Row Level Locking...


Steve,

- Original Message -
From: Orr, Steve [EMAIL PROTECTED]
Newsgroups: mailing.database.mysql
Sent: Thursday, September 05, 2002 5:52 PM
Subject: Performance Problems with InnoDB Row Level Locking...


 Background:
 I've developed a simplistic Perl program to test database performance with
 concurrent session queries. The queries involve inserts, updates, and
 deletes in order to test database performance in an OLTP mult-user ACID
 compliant scenario. Obviously this is not a real world test but it does
 stress the database engine's ability to manage transactions so it is
 somewhat valid for comparison purposes.

 Problem:
 When I do an insert/update/delete I sporadically get the following:
 "DBD::mysql::st execute failed: Deadlock found when trying to get lock; Try
 restarting transaction at dafunc.pm line 340..." The word deadlock is
 misleading because all the database changes are based on a session number
 ensuring that no session is trying to change data that is also being changed

InnoDB uses next-key locking to ensure serializability and that 'phantom
rows' do not appear. You can get lock conflicts even if the queries
seemingly would not overlap.

For example,

CREATE TABLE emptytable(a INT NOT NULL, PRIMARY KEY(a)) TYPE = InnoDB;

user 1: SELECT * FROM emptytable WHERE a = 100 LOCK IN SHARE MODE;

will make

user 2: INSERT INTO emptytable VALUES (150);

wait for a next-key lock on the 'supremum' of the primary index.

These operations would not overlap when using so-called predicate locking,
but that is too expensive to implement.

In transactional databases deadlocks are a classic problem. Please refer to
http://www.innodb.com/ibman.html#Cope_with_deadlocks.

 by another session. It appears a time out is occurring before the shared row
 level locks are acquired. Changing innodb_lock_wait_timeout doesn't seem to
 do much. How do I tune this?

 Given table locking problems associated with MyISAM, it was thought that
 InnoDB would perform better but in my tests performance is worse. It appears
 InnoDB is taking a long time to acquire shared row level locks and is timing
 out. If that's the case then the benefit of row level locks over table level
 locks is more than offset by the internal InnoDB overhead to manage the
 locks. Any other explanations? Any known performance issues

RE: Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Orr, Steve

Heikki,

 Next-key locking in InnoDB allows you to lock the non-existence of rows 
 and thus prevents phantom rows from appearing.
OK, now I understand what you're getting at with phantom rows. 

But given the tradeoff between the inadvertent next-key deadlocking
challenge and the phantom rows challenge, I think I'd rather have the
phantom rows challenge because: 1) it's not as common a problem; and 2)
on the few occasions when I am confronted with it I can easily prevent it
with a table lock. The need to work around next-key deadlocking issues is
constant (EVERY database insert, update or delete) and it seems the solution
is more difficult to achieve and may eventually come back to serialization
or table level locks, which is what I'm trying to avoid to begin with. I've
already addressed some deadlock issues with frequent commits, smaller
transactions, and better indexes, and I only want to lock tables when I
absolutely have to. 

I may be wrong but it just seems to me that the next-key locking approach
merely creates another concurrency issue and the subsequent next-key
deadlock problem is just too significant to ignore. 


Humbly,
Steve





-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 2:54 PM
To: Orr, Steve
Cc: [EMAIL PROTECTED]
Subject: Re: Performance Problems with InnoDB Row Level Locking...


Steve,

- Original Message -
From: Orr, Steve [EMAIL PROTECTED]
To: 'Heikki Tuuri' [EMAIL PROTECTED]
Sent: Thursday, September 05, 2002 11:04 PM
Subject: RE: Performance Problems with InnoDB Row Level Locking...


 Heikki,

 You wrote...
  You are getting so many deadlocks that some transactions do not pass at
  all?
 No, the transactions eventually succeed but performance suffers. Like I
 said, this is a stress test to identify bottlenecks in database performance.

cut

some optimization of next-key locking is possible. I recently removed some
spurious deadlocks in 4.0 because a big customer complained of the problem.

 
  Consider a consistency rule: sum of all balances in table ACCOUNT must
  be 1,000,000. How do you keep that true if your database cannot block
  phantom rows?
 Can't that be accomplished by the SELECT ... FOR UPDATE syntax?

That does not block new inserts to the table in Oracle. If you do:

INSERT INTO account2 SELECT * FROM account;

and someone else at the same time inserts within a single transaction 2 rows
('Jones', 1000), ('Smith', -1000) to table account, you may end up with
table account2 where the sum of balances is not 1,000,000, though table
account always had 1,000,000 as the sum.

The way to work around these serializability issues in Oracle is to use
table level locks. In the 1980's, when Oracle did not yet have foreign key
constraints, people used table level locks to implement referential
integrity. You have to lock the NON-existence of child rows when you delete
a parent row.

Next-key locking in InnoDB allows you to lock the non-existence of rows and
thus prevents phantom rows from appearing.
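A sketch of Heikki's referential-integrity example (hypothetical parent/child
tables, with an index assumed on child.parent_id): the locking read returns
no rows, but its next-key locks cover the gap where matching children would
go, so a concurrent insert must wait:

```sql
-- Session 1: safe parent delete -- lock the NON-existence of children
BEGIN;
SELECT * FROM child WHERE parent_id = 42 LOCK IN SHARE MODE;
DELETE FROM parent WHERE id = 42;
COMMIT;

-- Session 2, meanwhile: blocked until Session 1 commits,
-- so no orphan child row can sneak in
INSERT INTO child (id, parent_id) VALUES (7, 42);
```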

 Thanks again,
 Steve

Best regards,

Heikki

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, September 05, 2002 1:30 PM
 To: Orr, Steve; [EMAIL PROTECTED]
 Subject: Re: Performance Problems with InnoDB Row Level Locking...


 Steve,

 - Original Message -
 From: Orr, Steve [EMAIL PROTECTED]
 To: 'Heikki Tuuri' [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Thursday, September 05, 2002 9:49 PM
 Subject: RE: Performance Problems with InnoDB Row Level Locking...


  Hello again Heikki and thanks for your informative reply.
 
  Regarding...
   innodb_flush_log_at_trx_commit=2
  This is not an option as we must guarantee no lost transactions. But I
 will
  test it out of curiosity just to see what the performance difference is.

 if you want to get high performance for disk flushes, you should buy a disk
 with a battery-backed cache, and check that the disk driver is aware of it.

 Otherwise we are constrained by the disk rotation speed, some 150 rounds /
 second.

  Regarding...
   InnoDB uses next-key locking to ensure serializability and that
'phantom
   rows' do not appear. You can get lock conflicts even if the queries
   seemingly would not overlap.
  Does this mean that InnoDB is locking the next leaf in the B-Tree? That

 No, only the next index record in alphabetical order.

  would explain the problem as those rows could be updated by other
 sessions.
  If that's the case then I think the next-key locking architecture is the
  problem because it introduces artificial deadlocks on heavily used
 tables
  and indexes that would otherwise not occur.

 Yes, that is true.

  (See http://portal.acm.org/citation.cfm?id=335461dl=ACMcoll=portal#)
 
  Using the same code on the same machine I'm getting dramatically better
  performance with PostgreSQL and Oracle both of which implement
 multi-version

RE: Persisten Connections

2002-08-16 Thread Orr, Steve

 I profiled my PHP app with a debugger
A single session and a few queries does not make a very good test. The
reason for persistent connections is to scale web apps to handle intense
multi-user activity: 1000 queries per second, or hundreds or even thousands
of concurrent database connections. Do a for..loop that launches 1000 *nix
background processes, each of which runs a suite of queries with slightly
different values in the where clause. You need to look at overall
performance of the database server and its ability to handle the load of
many concurrent users using different connection and/or programming
techniques. The best implementation I've seen is a 3-tier architecture with
a web server, a db server, and an application server where the connection
pooling function was written in C... VERY scalable, just like the fish I
like to catch.


-Original Message-
From: mos [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 10:36 AM
To: John Wards
Cc: [EMAIL PROTECTED]
Subject: Re: Persisten Connections


At 08:40 AM 8/14/2002, you wrote:
I am running a website which generates around 100,000 pageviews a day and I
am wondering: if I stop using persistent connections to the MySQL database
and use normal open and close connections, would this reduce the load on my
server?

Most connections are either made through my main file or the phorum message
board system.

Thanks in advance
John Wards

John,
 I profiled my PHP app with a debugger and discovered connecting to 
MySQL 4.x with persistent connections actually took 50% longer than 
non-persistent connections. (It added an extra 80ms to connect.) So I don't 
see why persistent connections are useful at all with MySQL.

Mike






RE: Persisten Connections

2002-08-16 Thread Orr, Steve

 There is at least one reason, which is that persistent connections cause
 the server to hold open connection slots even when the connection isn't
 actively being used at the moment. 

But isn't this really just a failure of the connection pooling mechanism? If
you have a connection pooling program on a separate server in a 3 tier
architecture then it should be configurable to tune timeouts, connection
reuse or disconnects. No need to blame the database if the connection
multiplexer isn't doing a good job.



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 14, 2002 11:26 AM
To: Tod Harter; Thomas Seifert; [EMAIL PROTECTED]
Subject: Re: Persisten Connections


At 11:14 -0400 8/14/02, Tod Harter wrote:
On Wednesday 14 August 2002 09:54 am, Thomas Seifert wrote:

I disagree entirely

Persistent connections have little or nothing to do with increasing load!

Given that you mention you are using PHP I'll assume you have mod_php running
in Apache. Each Apache child process in this configuration will maintain ONE
open database handle, so at 100k pageviews per day I would expect you might
max at like 30k in one hour, or around 10/second, so you might top out at
roughly 100 Apache child processes at any one time, thus 100 database
connections.

Each DB connection is not a huge overhead, but creating and destroying 10
database handles PER SECOND is a large overhead!!! Remember, every time mysql
creates a connection it has to do internal queries on the grant tables. I
don't know exactly what the overhead of that is going to be, but ANYTHING
that creates 10 queries per second is putting some strain on your database
server!

One of the main goals of using Apache modules for scripting was to allow
persistent database connections. There is really NO reason to give up that
advantage. Remember, MySQL is multi-threaded, with one thread per connection,
so the resources for a database connection are on the order of under 100k of
memory per connection.

There is at least one reason, which is that persistent connections cause
the server to hold open connection slots even when the connection isn't
actively being used at the moment.  This can cause the MySQL server to run
out of connection slots and result in connections being refused.  Using
non-persistent connections under such circumstances can be beneficial
because, on average, the server needs to hold fewer connections open.

This has been found to solve connection-refused issues in a number of
cases on this list.





RE: MySQL vs. Oracle (not speed)

2002-08-16 Thread Orr, Steve

Regarding your testing... did you test many concurrent processes or
concurrent multi-user access? If you need concurrent multi-user access for
an OLTP app then you should test it.

To this end I developed a Perl routine which launched concurrent sessions in
the background. I could adjust the number of concurrent sessions via
arguments passed to the program. Each session performed a series of SQL
statements involving a real world mix of selects, inserts, updates and
deletes with pseudo random values provided for key column ranges. As soon as
I cranked up the volume to around 20 concurrent sessions MySQL barfed but
Oracle scaled much higher without a hitch. MySQL is great for raw speed with
individual queries or batch inserts but it doesn't seem to scale as well
with OLTP apps having many concurrent sessions. I'm hoping this will change
with InnoDB and future enhancements as MySQL AB strives for ANSI
compatibility like the other guys (PostgreSQL and Interbase).


-Original Message-
From: Kenneth Hylton [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 10:47 AM
To: [EMAIL PROTECTED]
Subject: RE: MySQL vs. Oracle (not speed)


Our experience has been totally the opposite.  

We recently ported a Delphi application from MS-SQL to MySQL.

The MySQL server was a less powerful box than MS-SQL was running on.

I will not go into great detail because some of the software is proprietary
and we make $$$ providing the service.

But here are our times:

Process #1 (heavy inserts, few updates) MS-SQL 9 hours MySQL 90 minutes
Process #2 (heavy updates, few inserts) MS-SQL 17 hours MySQL 2.5 hours

The databases were tuned to get maximum performance from MS-SQL (you can see
why) and NO changes were made to optimize for MySQL, as we didn't need to.

Ken Hylton
Programmer Analyst IV
LEC Systems & Programming

Billing Concepts, Inc.
7411 John Smith Drive
San Antonio, Texas 78229-4898
(210) 949-7261



-Original Message-
From: Mary Stickney [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 8:53 AM
To: Francisco; Elizabeth Bogner; [EMAIL PROTECTED]
Subject: RE: MySQL vs. Oracle (not speed)



I have been doing speed tests: the same query run on MySQL took 45 minutes;
on MS-SQL it took 11 minutes.

yes you do get what you pay for

-Original Message-
From: Francisco [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 8:47 AM
To: Mary Stickney; Elizabeth Bogner; [EMAIL PROTECTED]
Subject: RE: MySQL vs. Oracle (not speed)


Hi,

I have been using MySQL for quite a while and it is a
very good choice if you don't really need stored
procedures. MySQL provides a pretty good
implementation of a subset of SQL-92, performance is
great, it is cross-platform, provides transactions,
and its price... well, it's free.

Hope it helps.
--- Mary Stickney [EMAIL PROTECTED] wrote:

 It doesn't support a lot of different things:
 it doesn't have stored procedures, doesn't have a
 complete SQL command set...

 I am using it because I am being forced to...



 -Original Message-
 From: Elizabeth Bogner
 [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, August 15, 2002 9:25 PM
 To: [EMAIL PROTECTED]
 Subject: MySQL vs. Oracle (not speed)



 A company I work with is in the process of upgrading
 its databases from
 some
 motheaten system to something current. My impression
 is that they
 want to go with Oracle, and I'm not sure if this is
 based on anything
 other than being impressed with the size and
 presumed quality support
 of Oracle. I'd like to encourage them to at least
 seriously consider
 using
 MySQL instead.

 I don't think that speed is a huge factor here; we
 do a lot of XML
 publishing
 and content management, but at most we'd have
 several gigabytes of
 data and several dozen simultaneous users, so well
 within the
 capabilities
 of MySQL. I've looked at various things I could
 find, like the benchmarks
 pages (probably not relevant) and the MySQL myths
 page, which was
 somewhat helpful, but I couldn't find anything more
 along the lines of
 How to Convince my Management to go with MySQL. I
 don't even know
 what to expect from them, but I'm imagining they'll
 say, But MySQL
 doesn't support sub-selects, to which I can reply,
 But you can write
 most of those as joins anyway, so it won't matter
 because the software
 will all be written from scratch. Etc.

 Are there pointers anyone can give me?

 E. Bognewitz




RE: Backup automation..

2002-07-31 Thread Orr, Steve

If you've got to make copies to disk then why not just maintain copies on
disk via replication? Stop the slave, backup the slave, restart the slave.
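A sketch of that cycle, run on the slave (statement names as in MySQL 3.23;
the actual copy step is whatever backup method you already use):

```sql
SLAVE STOP;                    -- stop applying updates from the master
FLUSH TABLES WITH READ LOCK;   -- flush tables and block writes
-- ... copy the data directory or run mysqldump from another session ...
UNLOCK TABLES;
SLAVE START;                   -- resume; the slave catches up from the master
```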

-Original Message-
From: Nicholas Stuart [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 31, 2002 8:55 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: Backup automation..


Aye, I forgot to mention to lock the db before backing up...my bad. But
again the idea is the same just schedule win2k to do it.

-Nick

Ed Carp said:
  On windows you could simply copy the entire data directory with a
 Scheduled job. Copying the files should be all you need to do for
 windows.
  For a cleaner, and what most people would say is a better way you
 could schedule a batch file to run mysqldump that would dump the
 data and structure to a file.
  mysqldump info can be found at:
  http://www.mysql.com/doc/m/y/mysqldump.html
 
 I hope you're shutting down MySQL before you do this.  If you're not, your
 backups are probably worthless.


 Shouldn't it be enough to lock the tables and FLUSH before doing the
 copy? I want to keep read access while doing the backup, and let
 writes queue (i.e. writers will find the database a bit soggy during
 backups, but reads work OK and INSERT DELAYED will just be delayed - I
 hope).

 If you can guarantee that writes will not be propagated to the disk, you
 may be all right.  You might want to test to make sure you can read
 your backups, though - just in case.

 We use mysqldump here for backups - it's fast, gives us ASCII SQL files
 to look at if something happens, and compresses well.  I'd highly
 recommend using it as opposed to copying the actual disk files around.





Accessing multiple indexes

2002-07-27 Thread Orr, Steve

It's my understanding that MySQL will only use one index per table on a
given query. For example...

SELECT * FROM HUGE_TABLE
WHERE col1 = val1
AND col2 > val2
AND col3 < val3 ;

If col1, col2, and col3 are indexed the query can only use one index, right?


Single index access is a problem when you have very large tables. What if
you have a query with a result set of just 10 rows but there is no single
indexed column that can limit the result set to < 1 million rows? I really
need to be able to use multiple indexes in a single-table query and I don't
want to have to perform self joins or create temp tables.

Is there another way? Are there plans to overcome this limitation?
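Short of self joins or temp tables, the usual workaround within the
one-index-per-table limit is a composite index covering the columns queried
together (column names from the example above; this is a sketch, not a claim
about planned MySQL features):

```sql
-- One multi-column index replaces three single-column ones for this query
CREATE INDEX idx_c1_c2_c3 ON HUGE_TABLE (col1, col2, col3);

-- The equality on col1 plus the range on col2 can now be resolved in a
-- single index range scan; col3 is then checked against the fetched rows.
SELECT * FROM HUGE_TABLE
WHERE col1 = val1 AND col2 > val2 AND col3 < val3;
```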


TIA







RE: MyISAM v. InnoDB

2002-07-15 Thread Orr, Steve

InnoDB COULD help if there's an issue with table locking and intense
concurrent multi-user access. InnoDB's row level locking should help with
this. Some queries will be faster with InnoDB and some will be slower. For
instance, select count(*) from table_name will do a full table scan with
InnoDB. Besides specific queries, the nature of the application may
determine whether InnoDB speeds up overall performance. Intuitively I'm
thinking concurrent, multi-user OLTP type apps will benefit. I'm getting
ready to do some benchmarking on a particular app and I think it's necessary
before drawing any conclusions.

IMHO,
Steve Orr

-Original Message-
From: Cal Evans [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 2:39 PM
To: Chris Boget; [EMAIL PROTECTED]
Subject: RE: MyISAM v. InnoDB


switching to InnoDB won't help. Take a look at optimizing your queries.  Are
you using LIKE? have you looked at the output of EXPLAIN?

=C=

*
* Cal Evans
* The Virtual CIO
* http://www.calevans.com
*


-Original Message-
From: Chris Boget [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 1:53 PM
To: [EMAIL PROTECTED]
Subject: MyISAM v. InnoDB


When dealing with a considerable number of records
(10s of thousands) in a particular table, which is better
to use?  Currently we are using MyISAM and the
queries on those tables are kind of slow.  I've set the
most frequently used (in a query) columns as keys and
that isn't speeding it up any.  I'm wondering if switching
to InnoDB might solve my problem?

Chris




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php








RE: RE: Multiple masters to 1 slave??

2002-07-02 Thread Orr, Steve

 A slave can only have one master.
You obviously don't work around here. :-)

-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
Subject: Re: Multiple masters to 1 slave??

On Tue, Jul 02, 2002 at 10:54:44AM -0500, Jim Crippen wrote:
  
 In the my.cnf file on the off-site slave, can there be multiple masters
 specified where it will replicate all 3 in-house servers to different
 databases on itself?

No.  A slave can only have one master.

Jeremy





RE: create a new table in a tablespace...

2002-06-18 Thread Orr, Steve

It's not THAT much like Oracle. ;-)

-Original Message-
From: Silmara Cristina Basso [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 18, 2002 9:16 AM
To: [EMAIL PROTECTED]; Egor Egorov
Subject: Re: create a new table in a tablespace...

If I had more than one tablespace, don't I control where it will be stored?

- Original Message -
From: Egor Egorov [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, June 18, 2002 11:06 AM
Subject: Re: create a new table in a tablespace...

 Silmara,
 Tuesday, June 18, 2002, 3:43:48 PM, you wrote:

 SCB I'm using MySQL 4.0.1 and my question is: How can I create a new
 SCB table in a tablespace?

 Tablespaces are used only for InnoDB tables. If you create an InnoDB
 table, it will be stored in the InnoDB tablespace. How to create InnoDB
 tables is described in the manual:
http://www.mysql.com/doc/U/s/Using_InnoDB_tables.html

 SCB Thank you!






RE: HELP!!! major situation with mysql on windows 2000

2002-06-18 Thread Orr, Steve

The port of Cartesia does ship a lot of product.  ;-)

-Original Message-
From: Norris, Joseph [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 18, 2002 1:02 PM
To: 'Benjamin Pflugmann'; Norris, Joseph
Cc: Mysql_List (E-mail)
Subject: RE: HELP!!! major situation with mysql on windows 2000

Thanks to all - another coder showed me the error of my ways :)

I had a join without a where clause - a big no-no.
I had to restart the server and repair my code and now I am back to normal.

Thanks to all.





RE: MySQL/InnoDB Hot Backups - What's a binlog segment?

2002-06-12 Thread Orr, Steve

Thanks Heikki,

I knew about the binlogs but that "segment" word confused me.
Now if I may pester you with 2 more questions...  :-)

Quoting... "From the binlog segment you see if any of the .frm files changed
between the moment you took a .frm files backup and the moment ibbackup
finished its work."

1) Do you mean to say that you can actually look into the binlogs to
determine what .frm's changed? (If so how?) Or do you mean that backing up
the post ibbackup binlogs provides a snapshot containing changes that have
occurred since ibbackup began? 

2) Why backup the .frm's and binlogs before and after ibbackup? Why not just
run ibbackup, backup the .frm's, do a mysqladmin flush-logs, backup the
prior (non-current) binlogs then purge them?


Thanks again for your answers and your patience,
Steve Orr



-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 12, 2002 2:56 PM
To: [EMAIL PROTECTED]
Subject: Re: MySQL/InnoDB Hot Backups - What's a binlog segment?


Steve,

the binlog is the MySQL logical log which it writes if you specify

[mysqld]
log-bin

in my.cnf. They are the files called 'hostname'-bin.00x in the datadir of
MySQL. They contain all INSERT, UPDATE, etc. SQL statements in an almost
human-readable form.

The binlog is used to recover from a backup. Normally you should always
archive the binlog files if you have important data. The 'binlog segment'
means the binlog file(s) generated when the backup was running.

Regards,

Heikki

- Original Message -
From: Orr, Steve [EMAIL PROTECTED]
Newsgroups: mailing.database.mysql
Sent: Wednesday, June 12, 2002 8:20 PM
Subject: MySQL/InnoDB Hot Backups - What's a binlog segment?


 I'm confused about the meaning of the help text from ibbackup --help.

 Here's the text:
 "You should make backups of the .frm files... both BEFORE and AFTER
 ibbackup finishes its work, and also store the MySQL binlog segment which
 is generated between the moment you copy the .frm files to a backup and
 the moment ibbackup finishes its work... From the binlog segment you see
 if any of the .frm files changed between the moment you took a .frm files
 backup and the moment ibbackup finished its work."

 So what exactly is the binlog segment? Is it some mysterious file or
 merely the text sections from the ibbackup output that refer to the lsn's?

 Here's some sample output:
  . . .
 ibbackup: Found checkpoint at lsn 0 1418349381
 ibbackup: Starting log scan from lsn 0 1418349056
 . . .
 ibbackup: A copied database page was modified at 0 1418349381
 ibbackup: Scanned log up to lsn 0 1418349381
 ibbackup: Full backup completed!


 Clueless in Montana,
 Steve Orr










RE: InnoDB Hot Backups... ALL OR NOTHING ???

2002-06-07 Thread Orr, Steve

Thanks for your response Daniel,

1. I appreciate that InnoDB is more robust than MyISAM: it passes the ACID
test.

2.  Just convert InnoDB tables to MyISAM and backup the MyISAM version.
I did this on a large table; it took a while and generated lots of I/O.
Multiply this by 200 databases and 1000 tables on a single server and it's a
production support issue. We still need a hot backup solution that's more
capable than all or nothing. Using hot-swappable RAID with journaling file
systems doesn't obviate the need for online database backups. Another option
is full database replication with duplicate servers and disks. $igh...
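For reference, the conversion in question is just this (table name is
hypothetical; TYPE= is the 3.23/4.0 spelling):

```sql
ALTER TABLE customer_orders TYPE=MyISAM;  -- rewrites the entire table: heavy I/O
-- back up the resulting .MYD/.MYI/.frm files, then convert back:
ALTER TABLE customer_orders TYPE=InnoDB;
```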

3.  you can have many tablespaces...
I don't think so. According to the docs it's one tablespace with many files
and the data is commingled. On a 100GB database with 50 2GB files, what
happens when one file is lost? Restore the entire system while all the
databases are down? InnoDB is great but I'd like to be able to recover a
single database from backups while the other databases are up and running.


Enhancement Request:
1. The ability to associate a database with named tablespaces/files. 
2. Given 1, the ability to backup and recover a single database, tablespace,
or file set without impacting on the online availability of other databases,
tablespaces, and/or files.

Without this capability, the loss of one data file effectively equates to
the loss of the entire database server, and the mean time to recovery (MTTR)
is unnecessarily long.


IMHO :-)




-Original Message-
From: Kiss Dániel [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 07, 2002 12:04 AM
To: Orr, Steve; [EMAIL PROTECTED]
Subject: Re: InnoDB Hot Backups... ALL OR NOTHING ???


First of all, there are many aspects of your problem.

1. InnoDB uses transaction-safe table types and uses its log files to
restore state if anything goes wrong during a transaction. So it is almost
impossible to have a permanent database error that cannot be repaired by
InnoDB itself. If MySQL crashes, you just have to restart it, and InnoDB
automatically repairs all the tables containing errors.

2. In spite of the first point, it's a good idea to create backups of your
InnoDB tablespace, because the hard disk holding your tablespace files can
fail, and in a case like that you don't have anything else, just your
backup files.
It's a little difficult to save all the InnoDB tablespaces onto another
backup disk because they can be very big, although they compress very well,
since the empty space inside the tablespace contains zeros.
A simple solution is not to back up the InnoDB tablespaces directly. Just
convert the InnoDB tables into MyISAM and back up the MyISAM versions.

3. The third aspect is that you can have many tablespaces, not only one big one.
For example instead of an InnoDB initialization command in my.cnf like this
  innodb_data_file_path = ibdata1:2000M
you can use this
  innodb_data_file_path = 
ibdata1:200M;ibdata2:200M;ibdata3:200M;..ibdata10:200M

Good luck,
 Daniel

At 15:21 2002.06.06. -0600, you wrote:
Can you backup/restore just one MySQL database with InnoDB hot backup? From
what I gather it's an all or nothing proposition. As I understand it,
there's only one tablespace (with any number of data files) and all
database tables of type 'InnoDB' are comingled in the one tablespace.
Therefore, if a single datafile becomes corrupt, all the databases with
InnoDB type tables are down and you have to restore everything. Is that
right? If so are there any plans to have multiple named tablespaces?

We have a single server with 150+ databases (one for each hosted customer).
If one customer database goes down then we can restore the MYISAM type
tables without affecting the 24X7 availability for the other 149 customers.
However, if we convert tables to type InnoDB and a data file is corrupted
or lost, then all databases are down and we have to restore EVERYTHING. Is
this correct?


Sincere thanks in advance...







MySQL Replication

2002-06-07 Thread Orr, Steve

What's the best way to lock down a slave to ensure that only select SQL
queries are executed and that all command line SQL inserts, updates, and
deletes are performed only on the master?
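One sketch of what I have in mind, using privileges (account, database, and
password names are made up):

```sql
-- On the slave, grant application accounts read access only, so any
-- INSERT, UPDATE, or DELETE attempted there fails with a privilege error:
GRANT SELECT ON customerdb.* TO 'appuser'@'%' IDENTIFIED BY 'secret';
```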

TIA





mysqlhotcopy broken ???

2002-06-06 Thread Orr, Steve

It appears the v4.0.1 mysqlhotcopy perl script has been modified and broken.

When using cp for the backup it's passing a null value for the file(s) to be
copied.  Below is a session clipping of an execution with --debug turned on:

# /u02/mysql/mysql-4.0.1/bin/mysqlhotcopy test_myisam --allowold --debug
Using copy suffix '_copy'
Filtering tables with '(?-xism:.*)'
$VAR1 = [
          {
            'tables' => [
                          'test_myisam.incidents'
                        ],
            't_regex' => '.*',
            'src' => 'test_myisam',
            'raid_dirs' => [],
            'index' => [],
            'files' => [
                         'incidents.MYD',
                         'incidents.MYI',
                         'incidents.frm'
                       ],
            'target' => '/u02/mysql/data/test_myisam_copy'
          }
        ];
Existing hotcopy directory renamed to '/u02/mysql/data/test_myisam_copy_old'
Locked 1 tables in 0 seconds.
Flushed tables (test_myisam.incidents) in 0 seconds.
Copying 3 files...
Executing 'cp -p /u02/mysql/data/test_myisam_copy'
cp: missing destination file
Try `cp --help' for more information.
Burp ('scuse me). Trying backtick execution...
cp: missing destination file
Try `cp --help' for more information.
Copying indices for 0 files...
Unlocked tables.
Deleting previous copy in /u02/mysql/data/test_myisam_copy_old
mysqlhotcopy copied 1 tables (3 files) in 0 seconds (0 seconds overall).


Any fixes for this?

Steve Orr





InnoDB Hot Backups... ALL OR NOTHING ???

2002-06-06 Thread Orr, Steve

Can you backup/restore just one MySQL database with InnoDB hot backup? From
what I gather it's an all or nothing proposition. As I understand it,
there's only one tablespace (with any number of data files) and all
database tables of type 'InnoDB' are comingled in the one tablespace.
Therefore, if a single datafile becomes corrupt, all the databases with
InnoDB type tables are down and you have to restore everything. Is that
right? If so are there any plans to have multiple named tablespaces?

We have a single server with 150+ databases (one for each hosted customer).
If one customer database goes down then we can restore the MYISAM type
tables without affecting the 24X7 availability for the other 149 customers.
However, if we convert tables to type InnoDB and a data file is corrupted or
lost, then all databases are down and we have to restore EVERYTHING. Is this
correct? 


Sincere thanks in advance...





MySQL-InnoDB autoextend syntax gives errors

2002-05-23 Thread Orr, Steve

I changed my.cnf to add an autoextending datafile but I get errors on
startup.

Here's the my.cnf syntax:
innodb_data_file_path = hat/hatdata1:64M;hat/hatdata2:64M:autoextend

Here's the error output from mysqld:
InnoDB: syntax error in innodb_data_file_path
020523 16:34:24  Can't init databases

When I remove ":autoextend" from my.cnf, the database comes up and the
new datafile is added, but it doesn't autoextend. Is there a documentation
error on the autoextend syntax?


AtDhVaAnNkCsE,
Steve Orr





RE: Column Header/ Column Description

2002-05-15 Thread Orr, Steve

Metadata in the database is NOT a bad idea! It's a basic part of the
relational model, Codd's 4th rule. (E. F. Codd is the originator of the
relational model.) While no one has implemented all 12 of Codd's rules, this
is pretty basic/easy. 

Here's the 4th rule: "The database description is represented at the logical
level in the same way as ordinary data, so authorized users can apply the
same relational language to its interrogation as they apply to regular
data."
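MySQL's nearest approximation (in current versions) is the SHOW family
rather than a true queryable catalog; a sketch (table name hypothetical):

```sql
SHOW TABLE STATUS LIKE 'customers';  -- includes the per-table Comment column
SHOW COLUMNS FROM customers;         -- column names, types, keys, defaults
```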

I'm not a relational purist but metadata documentation about the database
should be maintained in the database and there's no reason for it to be
viewed as unnecessary overhead that would somehow slow down non-metadata
queries.


IMHO,
Steve Orr
Bozeman, Montana


-Original Message-
From: Mark Matthews [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 4:28 PM
To: Keith C. Ivey; [EMAIL PROTECTED]
Subject: Re: Column Header/ Column Description



- Original Message -
From: Keith C. Ivey [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, May 15, 2002 4:54 PM
Subject: Re: Column Header/ Column Description


 On 15 May 2002, at 16:06, Mark Matthews wrote:

  I'm interested in hearing what databases you have used that have this
  functionality? SQL server and Oracle do not, for example.

 MS Access does.  Also, it seems a bit inconsistent that MySQL has
 metadata (the comment) associated with each table, but has nothing
 similar for databases or columns.  It's certainly not essential, but
 I've wondered about it before.

I'm of the opinion that storing the sort of metadata you're talking about in
the database is a bad idea. I don't even think that the table comment
functionality that MySQL does have is absolutely necessary. Storing this
sort of metadata in the database seems to be counterproductive to me,
especially because it's only available if you know SQL and are working with
the database. But that's another discussion.

I would hope that developers create table and column names that are
descriptive enough to get a general understanding of what's going on, and
that anything that wasn't self-describing is documented in a set of release
notes or a data dictionary.

I'm also of the opinion that a picture is a much better tool for this sort
of thing. Unfortunately, there aren't many open-source ERD editors out
there.

-Mark




