Re: MAX_JOIN_SIZE

2006-07-11 Thread Gabriel PREDA

Try issuing

select @@sql_max_join_size

Then multiply together the row counts of all the tables involved in
the join (with the WHERE clause applied); if that product exceeds the
number returned by the SELECT above... then that's your reason!

Also bear in mind that although you only request 30 rows with LIMIT...
MySQL will still JOIN the tables... the LIMIT clause is applied just
before sending the result set to the client!

If you can't do it otherwise try setting a higher
sql_max_join_size... or try using some temporary tables !
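
For example, a minimal session-level sketch (the limit value is a
placeholder; pick one larger than your row-count product):

  SELECT @@sql_max_join_size;
  SET SESSION SQL_MAX_JOIN_SIZE = 100000000;  -- raise the row-estimate limit
  -- or, if you trust the query, disable the check for this session:
  SET SESSION SQL_BIG_SELECTS = 1;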

Good fortune !
--
Gabriel PREDA
Senior Web Developer


#1104 - The SELECT would examine more rows than MAX_JOIN_SIZE. Check your
WHERE and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT
is ok


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Please solve immediately Error found in migration from MS SQL 2000 to MySQL

2006-07-11 Thread Vitaliy Okulov
Hello, Kishore.

You wrote on 10 July 2006, at 16:52:29:

 Database Migration Error:
 My database is stored in MS SQL Server 2000 on a standalone system, in Windows
 authentication mode, meaning without any username and password. I am writing out my
 whole process & problem in the hope that you can solve it.

 In Source Connection Parameter

 Stored Connection : myLibrary
 Hostname : Localhost  Port : 1433
 username : .
 Password :
 Database : Library (I am specifying this explicitly; it is not fetched by the MySQL
 Migration Tool)

Next

 Target Connection Parameter

 Stored Connection : Library
 Hostname : Localhost  Port : 3306
 Username : root
 password : 

next 
 Connecting to source database system (failed)
 Retrieve schema information from source database system (failed)
 Test connection to target database system (not completed)

 From the Advanced button below I found the following error message:



 Connecting to source database and retrieve schemata names.
 Initializing JDBC driver ...
 Driver class MS SQL JDBC Driver
 Opening connection ...
 Connection
 jdbc:jtds:sqlserver://localhost:1433/Library;user=.;password=;charset=utf-8;domain=
 The list of schema names could not be retrieved (error: 0).
 ReverseEngineeringMssql.getSchemata :Network error IOException: Connection
 refused: connect

ReverseEngineering Mssql.getSchemata :Network error IOException:
Connection refused: connect

I think there is a problem with the connection.
Try: telnet localhost 1433
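
(Note that telnet takes the port as a separate argument, not with a
colon.) If the connection is refused, SQL Server 2000 is probably not
listening on TCP/IP; enable the TCP/IP protocol in the SQL Server
Network Utility first. Also, jTDS needs a real login when not using
domain authentication; a hedged sketch of a working URL (the user name
and password are placeholders, not from the original post):

  jdbc:jtds:sqlserver://localhost:1433/Library;user=sa;password=secret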

 Details:
 net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:372)
 net.sourceforge.jtds.jdbc.ConnectionJDBC3.<init>(ConnectionJDBC3.java:50)
 net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:178)
 java.sql.DriverManager.getConnection(Unknown Source)
 java.sql.DriverManager.getConnection(Unknown Source)
 com.mysql.grt.modules.ReverseEngineeringGeneric.establishConnection(
 ReverseEngineeringGeneric.java:95)
 com.mysql.grt.modules.ReverseEngineeringMssql.getSchemata(
 ReverseEngineeringMssql.java:99)
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 java.lang.reflect.Method.invoke(Unknown Source)
 com.mysql.grt.Grt.callModuleFunction(Unknown Source)



 Please send me a solution immediately. Then I will try MySQL with these tools
 in a networked architecture.
 Thank you
 Kishore K Barik
 Kolkata, India



-- 
Regards,
 Vitaliy  mailto:[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



UPDATE silently fails

2006-07-11 Thread Donncha O Caoimh
We have two networks in separate data centres over 1,000 miles apart.
There are web servers in both locations, but our MySQL db servers are in
only one. All web servers talk to those db servers.


After debugging some problems with a counter field I discovered that
UPDATE statements from the remote network were silently dying. A SELECT
immediately before works fine, but I had to add a loop to make sure the
UPDATE took effect; it would sometimes take two or three iterations to execute.
I'm mystified. MySQL doesn't raise an error, and affected rows is either
0 or 1 regardless of whether it succeeds, so I can't rely on that.

I tried adding a delay, but waiting up to a second didn't fix it.

Platform is Linux, MySQL servers are 4.1.11, web servers use PHP and 
MySQL Client API version 3.23.49.


Has anyone else had problems communicating with remote db servers?

Thanks!
Donncha.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to look for balanced parenthesis?

2006-07-11 Thread Pooly

2006/7/10, Joerg Bruehe [EMAIL PROTECTED]:

Hi Mike, all!


mos wrote:
 I have a complicated SQL statement with around a dozen
 if(this,val1,val2) expressions embedded in it, and there are even nested IF()
 calls. I'm getting syntax errors because I'm not balancing the ( )
 properly. Is there any free software out there for Windows that I can
 copy and paste the SQL statement into that will show me where the
 parentheses are unbalanced? It doesn't have to know about SQL; I just need it
 to either highlight the parenthesis range or show me where the parentheses
 get out of sync.

I cannot give a list of editors that will do it,
but I know that vim (vi improved) does it.
It will also do syntax highlighting, but I do not know whether it knows
about SQL already or would need an additional SQL syntax description.



In vim, you can find the matching parenthesis with the % shortcut. Put
the cursor on one parenthesis, hit %, and hop, you jump to the
corresponding opening/closing parenthesis. (Definitely a must-have for
certain languages.)
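
For instance, a hedged sketch of the kind of nested IF() expression in
question (table and column names are made up), where % makes it easy to
track each closing parenthesis:

  SELECT IF(qty > 10,
            IF(price > 100, 'bulk-premium', 'bulk'),
            IF(price > 100, 'premium', 'standard')) AS tier
  FROM orders;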

--
http://www.w-fenec.org/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to look for balanced parenthesis?

2006-07-11 Thread Miles Thompson

At 12:36 AM 7/11/2006, you wrote:


I used to use UltraEdit, but then switched to EditPlus because it can edit
remote files almost transparently. (Opening a file FTPs it down, you edit
the local copy, saving FTPs it back.)




FYI - UltraEdit can do this - It uses SFTP over SSH to edit remote files.
Works like a charm.

-Hank


It's funny how an editor switch comes about. My old version of UE could 
not do this, and I really didn't care, except that the editor question 
reared its head again on the PHP list. Keep in mind this was probably close 
to four years ago.


The capability of EditPlus to edit remote files was promoted as one of the 
things it could do - at the time I was working on a project for a client 
for which there was no SSH access - so I downloaded it and began using it 
occasionally ... and the use became more and more frequent.


UE still lives on my machine; it's no longer my daily editor, but it 
is invaluable if I have to look at a binary file. Ian Mead has a fine 
editor there. Maybe I should download the latest version and give it a try. 
When I last checked, its FTP-down/Edit/FTP-up cycle was not transparent, but 
that may have been v.9 or v.10 - my copy is a pretty old v.7.


What I would really like would be EditPlus / UltraEdit simplicity and 
speed, with the capability to interactively provide function names and 
their parameters. Similar to VB or Zend Studio, BUT while working on the 
server. Yeah, an oxyMORONIC request. <g>


Cheers - Miles 






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Speedup of database changes

2006-07-11 Thread Stefan Hornburg

Hello,

one of my customers has a database with about 1.7 million records
(52 fields, almost 1GB on amd64, MyISAM).

This database is running on Debian sarge with MySQL 4.0.24.

Changing the database structure and also large sets of inserts/deletes
take too long to be acceptable on a production system.

What can I do to speed up these operations?

Bye
Racke

--
LinuXia Systems = http://www.linuxia.de/
Expert Interchange Consulting and System Administration
ICDEVGROUP = http://www.icdevgroup.org/
Interchange Development Team


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Speedup of database changes

2006-07-11 Thread Brent Baisley
Changes to the database structure should be few and far between. If you are finding you need to change the structure regularly, you 
should probably look at a redesign.


MyISAM locks the table when an insert/delete is occurring, during which time other requests are queued up. You can switch to 
InnoDB, which does not use table locking. This can be done easily enough with an ALTER TABLE command, but make sure you read up on 
what InnoDB doesn't support (full-text searching). InnoDB does take up a lot more space than MyISAM, so make sure your file system 
supports files over 2GB.


If you need to stick with MyISAM, you may want to change the way you do inserts/deletes, breaking them up into chunks. Then you'll 
be interleaving your inserts/deletes with other requests.
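
A hedged sketch of both suggestions (table name, condition, and batch
size are placeholders):

  -- convert the table to InnoDB in place:
  ALTER TABLE orders ENGINE = InnoDB;

  -- or, staying with MyISAM, delete in batches so other requests
  -- can interleave; repeat until zero rows are affected:
  DELETE FROM orders WHERE created < '2006-01-01' LIMIT 10000;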


- Original Message - 
From: Stefan Hornburg [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, July 11, 2006 8:52 AM
Subject: Speedup of database changes



Hello,

one of my customers has a database with about 1.7 million records
(52 fields, almost 1GB on amd64, MyISAM).

This database is running on Debian sarge with MySQL 4.0.24.

Changing the database structure and also large sets of inserts/deletes
take too long to be acceptable on a production system.

What can I do to speed up these operations?

Bye
Racke

--
LinuXia Systems = http://www.linuxia.de/
Expert Interchange Consulting and System Administration
ICDEVGROUP = http://www.icdevgroup.org/
Interchange Development Team


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to log the execution time of each query

2006-07-11 Thread Barry

Saha, Mukul (Cognizant) wrote:

Hi,

I would like to log the execution time for each query in my MySQL server.
Is there any possible way?

Please note that the --log-slow-queries option will not suffice for my
requirement.




Thanks & Regards

Mukul Saha


Hi,

Well, there is no option for logging every query.

You can add a script to your service/software that does this.

But as far as I know, there is no logging option for this.

Barry
--
Smileys rule (cX.x)C --o(^_^o)
Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Speedup of database changes

2006-07-11 Thread Stefan Hornburg

Brent Baisley wrote:
Changes to the database structure should be few and far between. If you 
are finding you need to change the structure regularly, you should 
probably look at a redesign.


This customer regularly requests new things, so I cannot avoid changes
to the database structure.



MyISAM locks the table when an insert/delete is occurring, during which 
time other requests are queued up. You can switch to InnoDB, which 
does not use table locking. This can be done easily enough with an ALTER 
TABLE command, but make sure you read up on what InnoDB doesn't support 
(full-text searching). InnoDB does take up a lot more space than MyISAM, 
so make sure your file system supports files over 2GB.


I found in the MySQL documentation that the row-level locking of InnoDB
is slower if you need to do frequent full table scans. Unfortunately, I 
cannot avoid them (a 3rd-party application is running there).




If you need to stick with MyISAM, you may want to change the way you do 
inserts/deletes, breaking them up into chunks. Then you'll be 
interleaving your inserts/deletes with other requests.


OK, thanks.

How about DELAY_KEY_WRITE=1 ? Does this speed things up substantially ?

Bye
Racke



--
LinuXia Systems = http://www.linuxia.de/
Expert Interchange Consulting and System Administration
ICDEVGROUP = http://www.icdevgroup.org/
Interchange Development Team


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Dan Buettner

Sorry Mark, appears you're right.  --opt is the default now; didn't
used to be, I don't think.  Also, my experience with MyISAM is a total
lock on all tables across all databases during a mysqldump ... but you
are using InnoDB obviously.  I think you're right in your reading of
the docs - that you should be able to keep going during a dump.  Bug?
What version are you on?

Dan



On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:


Dan: The options I specified are correct (according to the
documentation) to get a consistent non-blocking snapshot.
(--single-transaction disables --lock-tables, --opt is the default
behavior for mysqldump).

My question was more in the nature of will these options work in high
concurrency situations or will they cause a deadlock. (or am I missing
something here)

The documentation states that --single-transaction will get a global
lock 'for a short period of time', which I thought to mean that it'll be
short enough to not disturb normal operations (which is what is implied
in the documentation).

If this isn't the case in high-concurrency situations, anyone have
another method to get a consistent snapshot?

Cheers,

Mark

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Monday, July 10, 2006 3:21 PM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark - I thought your question was more of a "does this seem
right" and "how do I" than a "something's wrong here" post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says "This is an online, non-blocking backup",
makes no mention of --opt, which as you note includes --lock-tables.
From the mysqldump man page:

--lock-tables, -l

    Lock all tables before starting the dump. The tables are locked
    with READ LOCAL to allow concurrent inserts in the case of MyISAM
    tables. For transactional tables such as InnoDB and BDB,
    --single-transaction is a much better option, because it does not
    need to lock the tables at all.

    Please note that when dumping multiple databases, --lock-tables
    locks tables for each database separately. So, this option does not
    guarantee that the tables in the dump file are logically consistent
    between databases. Tables in different databases may be dumped in
    completely different states.

Try running without --opt, possibly specifying the included options
you need individually, and see if that works better for you.

I understand what you're saying about MySQL replication; hence the
need for monitoring the replication to ensure good backups.

Dan




On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:
 Hi Dan,


    --single-transaction
        Creates a consistent snapshot by dumping all tables in a
        single transaction. Works ONLY for tables stored in
        storage engines which support multiversioning (currently
        only InnoDB does); the dump is NOT guaranteed to be
        consistent for other storage engines. Option
        automatically turns off --lock-tables.

    --opt
        Same as --add-drop-table, --add-locks, --create-options,
        --quick, --extended-insert, --lock-tables, --set-charset,
        and --disable-keys. Enabled by default, disable with
        --skip-opt.

 See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html

 These options should produce a non-blocking consistent database
 snapshot.

 I can already accomplish this on a slave server, however MySQL
 replication can lead to slave drift as it is statement based (as opposed
 to row-based replication). The only safe way to guarantee a real backup
 in a MySQL replication setup is via snapshots on the master.

 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Monday, July 10, 2006 2:42 PM
 To: Mark Steele
 Cc: mysql@lists.mysql.com
 Subject: Re: mysqldump with single-transaction with high-concurrency
DB

 Mark, that's the expected behavior of mysqldump with --opt and
 --single-transaction; it locks all databases and all tables for the
 duration of the dump, ensuring a consistent snapshot.

 With a database this size (100 GB), it's an area where throwing
 hardware at the problem may be your best bet.  I suggest one of two
 approaches as possible solutions:

 1) Buy a *really fast* disk array and set it up as striped on a
 superfast connection, like Ultra320 SCSI or fibre.  This will lower
 the amount of time required to write the mysqldump output (which will
 likely exceed 100 GB data size due to overhead within the file).  You
 might even look at 2 disk arrays on 2 channels, striping across both
 the disks in the array and across the arrays.  Pros: fairly easy to
 do, not terribly expensive.  Cons: You still lock up your main
 database server for backups, though possibly for less time than you do
 now.

 2) Buy a second physical server for 

Re: Speedup of database changes

2006-07-11 Thread Brent Baisley
There will always be new requests, but many times you can store data vertically in rows instead of horizontally in columns. You 
would need a label column to indicate what type of data the row contains. It's more work on the front end, but a lot more 
flexible. It's a technique I use often, but may not be applicable/possible in your situation.
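
A minimal sketch of that vertical layout (all names are hypothetical):

  CREATE TABLE item_attributes (
    item_id INT UNSIGNED NOT NULL,
    label   VARCHAR(32)  NOT NULL,  -- which "column" this row represents
    value   VARCHAR(255),
    PRIMARY KEY (item_id, label)
  );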


You can't create indexes to avoid or reduce full table scans?

InnoDB is slower in certain respects, but you need to keep in mind that InnoDB will allow multiple things to happen at once. Massive 
deletes can be a problem in InnoDB, taking a long time.
Depending on how your data is structured, you may want to look into merge tables (MyISAM). I've got a table with almost 250 
million records in it, which is a rolling 6 months' worth of data. Once a month we need to delete a month's worth of data (millions 
of records). We make sure that each of the underlying tables of the merge table holds about 1 month's worth of data. To delete, we 
drop the oldest table and alter the UNION on the merge table, as sketched below. Takes less than a second.
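
A hedged sketch of that rotation (table names are invented; column
layout and INSERT_METHOD omitted for brevity):

  -- merge table spanning the current monthly tables:
  ALTER TABLE logs_all UNION = (logs_2006_02, logs_2006_03, logs_2006_04,
                                logs_2006_05, logs_2006_06, logs_2006_07);
  -- "deleting" a month is just narrowing the UNION and dropping
  -- its underlying table:
  ALTER TABLE logs_all UNION = (logs_2006_03, logs_2006_04, logs_2006_05,
                                logs_2006_06, logs_2006_07);
  DROP TABLE logs_2006_02;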


Delay key write won't help much if you are not using indexes.

- Original Message - 
From: Stefan Hornburg [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, July 11, 2006 9:51 AM
Subject: Re: Speedup of database changes



Brent Baisley wrote:
Changes to the database structure should be few a far between. If you are finding you need to change the structure regularly, you 
should probably look at a redesign.


This customer regularly requests new things, so I cannot avoid changes
to the database structure.



MyISAM locks the table when an insert/delete is occurring, during which time other requests are queued up. You can switch to 
InnoDB, which does not use table locking. This can be done easily enough with an ALTER TABLE command, but make sure you read up on 
what InnoDB doesn't support (full-text searching). InnoDB does take up a lot more space than MyISAM, so make sure your file 
system supports files over 2GB.


I found in the MySQL documentation that the row-level locking of InnoDB
is slower if you need to do frequent full table scans. Unfortunately, I cannot 
avoid them (a 3rd-party application is running there).



If you need to stick with MyISAM, you may want to change the way you do inserts/deletes, breaking them up into chunks. Then 
you'll be interleaving your inserts/deletes with other requests.


OK, thanks.

How about DELAY_KEY_WRITE=1 ? Does this speed things up substantially ?

Bye
Racke



--
LinuXia Systems = http://www.linuxia.de/
Expert Interchange Consulting and System Administration
ICDEVGROUP = http://www.icdevgroup.org/
Interchange Development Team


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Devananda

Mark,

You're correct that --single-transaction does create a consistent 
non-blocking read; I've used the same set of options 
(--single-transaction --master-data=1 --all-databases) to create 
point-in-time snapshots from which to create new replication slaves.
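
As a concrete command line (the output path is a placeholder), the
option set described above would be:

  mysqldump --single-transaction --master-data=1 --all-databases > /backups/snapshot.sql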


However... I have also seen behavior similar to what you describe 
(system /seems/ to grind to a halt). Do you have any logs from the time 
this occurred, and do you recall any details such as: how long after you 
began the consistent dump did you notice the deadlock; were you writing 
the dumpfile to the same disk (array) as either your data or log files?


Not knowing any details of your situation, I am guessing based on my 
experience, but here goes. Either, you have your datafiles set to 
autoextend, and the device they're on filled up with the dump file, or 
(and I think this is probably what happened, b/c it has happened to me) 
the deadlock you experienced happened long after you began the backup, 
and is due to InnoDB's multi-versioning. I'll try to explain... In order 
for InnoDB to maintain the consistent state of data from the moment the 
backup was begun, it must keep separate all modifications to all data 
after that point in time. Every subsequent connection/transaction will 
see the new (modified) version. In high-concurrency situations, it takes 
increasingly more work to keep the old version of the data as the undo 
logs fill up with new data, until eventually InnoDB can barely service 
new transactions since it is so busy trying to maintain the multiple 
versions. About a year ago, I had to create a new replication slave from 
a ~150GB InnoDB database -- iirc, it took about 3 days to complete the 
dump. After the first day, MySQL slowed to a crawl, serving less than 
half of its usual transactions per second. By the end, it was even slower.


Now, I'm not much of a morning person, so I hope the above explanation 
made at least some sense... in case it didn't, here's a reference from 
the docs:


http://dev.mysql.com/doc/refman/5.0/en/innodb-multi-versioning.html

    "Update undo logs are used also in consistent reads, but they can be
    discarded only after there is no transaction present for which InnoDB
    has assigned a snapshot that in a consistent read could need the
    information in the update undo log to build an earlier version of a
    database row."


If this is what happened, then it wasn't a true deadlock, and there may 
be a few simple solutions -- you may be able to adjust your 
configuration (my.cnf) to get better performance during the dump, or you 
may need to get a faster set of disks to write the dump to, or some 
other solution specific to your situation. If that's not what happened 
in your case, I apologize for the long-winded email. You'll have to 
provide more details about what error messages were reported, and your 
system configuration, for people to help more.


HTH,
Devananda



Mark Steele wrote:

Dan: The options I specified are correct (according to the
documentation) to get a consistent non-blocking snapshot.
(--single-transaction disables --lock-tables, --opt is the default
behavior for mysqldump).

My question was more in the nature of will these options work in high
concurrency situations or will they cause a deadlock. (or am I missing
something here)

The documentation states that --single-transaction will get a global
lock 'for a short period of time', which I thought to mean that it'll be
short enough to not disturb normal operations (which is what is implied
in the documentation).

If this isn't the case in high-concurrency situations, anyone have
another method to get a consistent snapshot?

Cheers,

Mark

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 10, 2006 3:21 PM

To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark - I thought your question was more of a "does this seem
right" and "how do I" than a "something's wrong here" post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says "This is an online, non-blocking backup",
makes no mention of --opt, which as you note includes --lock-tables.
From the mysqldump man page:

--lock-tables, -l

    Lock all tables before starting the dump. The tables are locked
    with READ LOCAL to allow concurrent inserts in the case of MyISAM
    tables. For transactional tables such as InnoDB and BDB,
    --single-transaction is a much better option, because it does not
    need to lock the tables at all.

    Please note that when dumping multiple databases, --lock-tables
    locks tables for each database separately. So, this option does not
    guarantee that the tables in the dump file are logically consistent
    between databases. Tables in different databases may be dumped in
    completely different states.

Try running without --opt, 

Re: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Devananda
Argh! I should stop writing emails this early in the morning - I always 
miss something. I just noticed that, at the end of your first email, you 
did say the backup process was stuck on "flush tables". The only way 
that I can think of for this to happen is if another thread were holding 
a table open for a long transaction. As soon as that thread were to 
COMMIT or close, the table should be flushed. Every client that 
connected after the backup process /should/ have to wait for the FLUSH 
TABLES to complete, but /that/ has to wait for all previous connections.


Does that seem to fit what happened?


Regards,
Devananda




Mark Steele wrote:

Dan: The options I specified are correct (according to the
documentation) to get a consistent non-blocking snapshot.
(--single-transaction disables --lock-tables, --opt is the default
behavior for mysqldump).

My question was more in the nature of will these options work in high
concurrency situations or will they cause a deadlock. (or am I missing
something here)

The documentation states that --single-transaction will get a global
lock 'for a short period of time', which I thought to mean that it'll be
short enough to not disturb normal operations (which is what is implied
in the documentation).

If this isn't the case in high-concurrency situations, anyone have
another method to get a consistent snapshot?

Cheers,

Mark

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 10, 2006 3:21 PM

To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark - I thought your question was more of a "does this seem
right" and "how do I" than a "something's wrong here" post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says "This is an online, non-blocking backup",
makes no mention of --opt, which as you note includes --lock-tables.
From the mysqldump man page:

--lock-tables, -l

    Lock all tables before starting the dump. The tables are locked
    with READ LOCAL to allow concurrent inserts in the case of MyISAM
    tables. For transactional tables such as InnoDB and BDB,
    --single-transaction is a much better option, because it does not
    need to lock the tables at all.

    Please note that when dumping multiple databases, --lock-tables
    locks tables for each database separately. So, this option does not
    guarantee that the tables in the dump file are logically consistent
    between databases. Tables in different databases may be dumped in
    completely different states.

Try running without --opt, possibly specifying the included options
you need individually, and see if that works better for you.

I understand what you're saying about MySQL replication; hence the
need for monitoring the replication to ensure good backups.

Dan




On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:
  

Hi Dan,


    --single-transaction
        Creates a consistent snapshot by dumping all tables in a
        single transaction. Works ONLY for tables stored in
        storage engines which support multiversioning (currently
        only InnoDB does); the dump is NOT guaranteed to be
        consistent for other storage engines. Option
        automatically turns off --lock-tables.

    --opt
        Same as --add-drop-table, --add-locks, --create-options,
        --quick, --extended-insert, --lock-tables, --set-charset,
        and --disable-keys. Enabled by default, disable with
        --skip-opt.

See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html

These options should produce a non-blocking consistent database
snapshot.

I can already accomplish this on a slave server, however MySQL
replication can lead to slave drift as it is statement based (as opposed
to row-based replication). The only safe way to guarantee a real backup
in a MySQL replication setup is via snapshots on the master.

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Monday, July 10, 2006 2:42 PM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency


DB
  

Mark, that's the expected behavior of mysqldump with --opt and
--single-transaction; it locks all databases and all tables for the
duration of the dump, ensuring a consistent snapshot.

With a database this size (100 GB), it's an area where throwing
hardware at the problem may be your best bet.  I suggest one of two
approaches as possible solutions:

1) Buy a *really fast* disk array and set it up as striped on a
superfast connection, like Ultra320 SCSI or fibre.  This will lower
the amount of time required to write the mysqldump output (which will
likely exceed 100 GB data size due to overhead within the file).  You
might even look at 2 disk arrays on 2 channels, striping across both
the disks in the array and 

RE: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Mark Steele
5.0.22

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 11, 2006 9:55 AM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark, appears you're right.  --opt is the default now; didn't
used to be, I don't think.  Also, my experience with MyISAM is a total
lock on all tables across all databases during a mysqldump ... but you
are using InnoDB obviously.  I think you're right in your reading of
the docs - that you should be able to keep going during a dump.  Bug?
What version are you on?

Dan



On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:

 Dan: The options I specified are correct (according to the
 documentation) to get a consistent non-blocking snapshot.
 (--single-transaction disables --lock-tables, --opt is the default
 behavior for mysqldump).

 My question was more in the nature of will these options work in high
 concurrency situations or will they cause a deadlock. (or am I missing
 something here)

 The documentation states that --single-transaction will get a global
 lock 'for a short period of time', which I thought to mean that it'll
be
 short enough to not disturb normal operations (which is what is
implied
 in the documentation).

 If this isn't the case in high-concurrency situations, anyone have
 another method to get a consistent snapshot?

 Cheers,

 Mark

 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Monday, July 10, 2006 3:21 PM
 To: Mark Steele
 Cc: mysql@lists.mysql.com
 Subject: Re: mysqldump with single-transaction with high-concurrency
DB

Sorry Mark - I thought your question was more of a "does this seem
right" and "how do I" than a "something's wrong here" post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says "This is an online, non-blocking backup",
makes no mention of --opt, which as you note includes --lock-tables.
From the mysqldump man page:

--lock-tables, -l

    Lock all tables before starting the dump. The tables are locked
    with READ LOCAL to allow concurrent inserts in the case of MyISAM
    tables. For transactional tables such as InnoDB and BDB,
    --single-transaction is a much better option, because it does not
    need to lock the tables at all.

    Please note that when dumping multiple databases, --lock-tables
    locks tables for each database separately. So, this option does not
    guarantee that the tables in the dump file are logically consistent
    between databases. Tables in different databases may be dumped in
    completely different states.

 Try running without --opt, possibly specifying the included options
 you need individually, and see if that works better for you.

 I understand what you're saying about MySQL replication; hence the
 need for monitoring the replication to ensure good backups.

 Dan




 On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:
  Hi Dan,
 
 
    --single-transaction
        Creates a consistent snapshot by dumping all tables in a
        single transaction. Works ONLY for tables stored in
        storage engines which support multiversioning (currently
        only InnoDB does); the dump is NOT guaranteed to be
        consistent for other storage engines. Option
        automatically turns off --lock-tables.

    --opt
        Same as --add-drop-table, --add-locks, --create-options,
        --quick, --extended-insert, --lock-tables, --set-charset,
        and --disable-keys. Enabled by default, disable with
        --skip-opt.
 
  See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html
 
  These options should produce a non-blocking consistent database
  snapshot.
 
 I can already accomplish this on a slave server, however MySQL
 replication can lead to slave drift as it is statement based (as opposed
 to row-based replication). The only safe way to guarantee a real backup
 in a MySQL replication setup is via snapshots on the master.
 
  -Original Message-
  From: Dan Buettner [mailto:[EMAIL PROTECTED]
  Sent: Monday, July 10, 2006 2:42 PM
  To: Mark Steele
  Cc: mysql@lists.mysql.com
  Subject: Re: mysqldump with single-transaction with high-concurrency
 DB
 
  Mark, that's the expected behavior of mysqldump with --opt and
  --single-transaction; it locks all databases and all tables for the
  duration of the dump, ensuring a consistent snapshot.
 
  With a database this size (100 GB), it's an area where throwing
  hardware at the problem may be your best bet.  I suggest one of two
  approaches as possible solutions:
 
  1) Buy a *really fast* disk array and set it up as striped on a
  superfast connection, like Ultra320 SCSI or fibre.  This will lower
  the amount of time required to write the mysqldump output (which
will
  likely exceed 100 GB data size due to overhead within the 

RE: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Mark Steele

I have no long-running transactions. I believe the deadlock was
caused by a high concurrency of transactions.

As soon as the backup started, mysqldump got the global lock, and seemed
to hang during the flush tables (while hundreds of other clients tried to
execute queries).

Should be easy enough to reproduce, but unfortunately I cannot as this
is my production DB.


-Original Message-
From: Devananda [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 11, 2006 10:59 AM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Argh! I should stop writing emails this early in the morning - I always 
miss something. I just noticed that, at the end of your first email, you 
did say the backup process was stuck on "flush tables". The only way 
that I can think of for this to happen is if another thread were holding 
a table open for a long transaction. As soon as that thread were to 
COMMIT or close, the table should be flushed. Every client that 
connected after the backup process /should/ have to wait for the FLUSH 
TABLES to complete, but /that/ has to wait for all previous connections.

Does that seem to fit what happened?


Regards,
Devananda




Mark Steele wrote:
 Dan: The options I specified are correct (according to the
 documentation) to get a consistent non-blocking snapshot. 
 (--single-transaction disables --lock-tables, --opt is the default
 behavior for mysqldump).

 My question was more in the nature of will these options work in high
 concurrency situations or will they cause a deadlock. (or am I missing
 something here)

 The documentation states that --single-transaction will get a global
 lock 'for a short period of time', which I thought to mean that it'll
be
 short enough to not disturb normal operations (which is what is
implied
 in the documentation).

 If this isn't the case in high-concurrency situations, anyone have
 another method to get a consistent snapshot?

 Cheers,

 Mark

 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED] 
 Sent: Monday, July 10, 2006 3:21 PM
 To: Mark Steele
 Cc: mysql@lists.mysql.com
 Subject: Re: mysqldump with single-transaction with high-concurrency
DB

 Sorry Mark - I thought your question was more of a "does this seem
 right" and "how do I" than a "something's wrong here" post.

 I think your problem is coming in with the use of --opt.  The article
 you reference, where it says "This is an online, non-blocking backup",
 makes no mention of --opt, which as you note includes --lock-tables.
 From the mysqldump man page:

 --lock-tables, -l

     Lock all tables before starting the dump. The tables are locked
     with READ LOCAL to allow concurrent inserts in the case of MyISAM
     tables. For transactional tables such as InnoDB and BDB,
     --single-transaction is a much better option, because it does not
     need to lock the tables at all.

     Please note that when dumping multiple databases, --lock-tables
     locks tables for each database separately. So, this option does not
     guarantee that the tables in the dump file are logically consistent
     between databases. Tables in different databases may be dumped in
     completely different states.

 Try running without --opt, possibly specifying the included options
 you need individually, and see if that works better for you.

 I understand what you're saying about MySQL replication; hence the
 need for monitoring the replication to ensure good backups.

 Dan




 On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:
   
 Hi Dan,


    --single-transaction
        Creates a consistent snapshot by dumping all tables in a
        single transaction. Works ONLY for tables stored in
        storage engines which support multiversioning (currently
        only InnoDB does); the dump is NOT guaranteed to be
        consistent for other storage engines. Option
        automatically turns off --lock-tables.

    --opt
        Same as --add-drop-table, --add-locks, --create-options,
        --quick, --extended-insert, --lock-tables, --set-charset,
        and --disable-keys. Enabled by default, disable with
        --skip-opt.

 See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html

 These options should produce a non-blocking consistent database
 snapshot.

 I can already accomplish this on a slave server, however MySQL
 replication can lead to slave drift as it is statement based (as opposed
 to row-based replication). The only safe way to guarantee a real backup
 in a MySQL replication setup is via snapshots on the master.

 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Monday, July 10, 2006 2:42 PM
 To: Mark Steele
 Cc: mysql@lists.mysql.com
 Subject: Re: mysqldump with single-transaction with high-concurrency
 
 DB
   
 Mark, that's the expected behavior of 

Manually enabling root access from remote machine

2006-07-11 Thread John Doolan
Hi,

 I have just installed MySQL version 5 and want to enable root access
from remote machines.  However, the configuration tool fails when trying
to change the security settings with an error 2003.  Apart from this it
is working fine (alongside a version 4 server on the same machine).  How
do I manually enable root access from remote machines, as adding % to
the list of hosts for root does not seem to do anything in
Administrator?

 

Regards,

John Doolan

 

 

 




* 

This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you have received this email in error please notify the
sender by e-mail or telephone and then delete the e-mail from your
system. The sender does not accept liability for any errors or
transmissions in the contents of this message. Furthermore, the sender
accepts no responsibility for any loss or damage resulting directly or
indirectly from the use of this e-mail or the contents.


* 

Proprietors: Humber-Thames Marketing Ltd. Reg Company No. 1803671 -
Established 1983. 

 




Re: Manually enabling root access from remote machine

2006-07-11 Thread Daniel da Veiga

On 7/11/06, John Doolan [EMAIL PROTECTED] wrote:

Hi,

 I have just installed MySQL version 5 and want to enable root access
from remote machines.  However, the configuration tool fails when trying
to change the security settings with an error 2003.  Apart from this it
is working fine (alongside a version 4 server on the same machine).  How
do I manually enable root access from remote machines as adding % into
the list of hosts for root does not seem to do anything in
Administrator?



GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password';

?

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Manually enabling root access from remote machine

2006-07-11 Thread James Barros
Sorry if it's a stupid question, but just to cover the basics: you  
remembered to FLUSH PRIVILEGES after adding it, right?


-- James

On Jul 11, 2006, at 9:31 AM, John Doolan wrote:


Hi,

 I have just installed MySQL version 5 and want to enable root access
from remote machines.  However, the configuration tool fails when trying
to change the security settings with an error 2003.  Apart from this it
is working fine (alongside a version 4 server on the same machine).  How
do I manually enable root access from remote machines, as adding % to
the list of hosts for root does not seem to do anything in
Administrator?



Regards,

John Doolan
















--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



log

2006-07-11 Thread Karl Larsen
   I have found that \T /home/karl/sql_log will cause everything I do 
to be saved in the file sql_log. Alas, if I turn off mysql, as I do daily, 
the log is lost and I have to do it again.


   Is there a way to get a log like this to be permanent :-)


Karl Larsen


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to log the execution time of each query

2006-07-11 Thread Dan Buettner

Actually, you can log every query, with the general query log:
http://dev.mysql.com/doc/refman/5.0/en/query-log.html
Be warned that enabling this is a good way to fill up your disks,
though, since it logs inserts and updates complete with data.

The general query log does not appear to log execution time, however.
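
A minimal sketch of enabling it in my.cnf (the file path is a
placeholder; in 4.1/5.0 this requires a server restart):

  [mysqld]
  log = /var/log/mysql/general.log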

Mukul, can you elaborate on why the slow query log doesn't suffice?

Dan

On 7/11/06, Barry [EMAIL PROTECTED] wrote:

Saha, Mukul (Cognizant) wrote:
 Hi,

 I would like to log the execution time for each query in my MySQL server.
 Is there any possible way?

 Please note that the --log-slow-queries option will not suffice for my
 requirement.




 Thanks & Regards

 Mukul Saha

Hi,

Well there is no option for logging every query.

You can add a script to your service/software that does this.

but as far as i know, there is no logging option for this.

Barry
--
Smileys rule (cX.x)C --o(^_^o)
Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql - multimedia database

2006-07-11 Thread Daniel da Veiga

On 7/10/06, Shain Lee [EMAIL PROTECTED] wrote:

Hi,

  I'm working on a multimedia database that is meant to be online with WAP
content. So I decided to go ahead with MySQL, and I installed the latest MySQL
5.0 on Fedora Core 5 with InnoDB. Up to now everything has succeeded.
  But my worry is: how can I create a huge multimedia database with MySQL?
  There are lots of subcategories in each and every main category,
  for example:
  Movie stars --- actors --- hollywood --- abc --- ddd
                             bollywood --- aaa --- sss
                  actress --- hollywood --- aaa
                             bollywood --- qqq
  Like that, there should be a lot of main categories and subcategories. All
content types are available, like wallpapers, greetings, ringtones,
movie clips, etc.

  Those contents are supposed to be stored as BLOBs in the MySQL database itself.


I've found, over years and years of using DBs, that storing info in
BLOB format is only for specific uses (scientific, binary, special
data). When dealing with this kind of stuff (pictures, sound, videos),
I've found that it's easier, faster and more secure to store the data in
files and only index the basic information (file path, name, size,
description, other info) in a database. This way you can easily
manage those files while keeping them indexed for fast
search/retrieval in any language, and it's much easier to insert them
into a web page or application too, because the database does not serve
the data, only the info about it. Anyway, it's MHO.
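
A hedged sketch of the kind of metadata table this suggests (all names
and column choices are made up for illustration):

  CREATE TABLE media_files (
    id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    category_id INT UNSIGNED NOT NULL,
    file_path   VARCHAR(255) NOT NULL,  -- location on the filesystem
    file_size   INT UNSIGNED,
    mime_type   VARCHAR(64),
    description TEXT,
    KEY (category_id)
  );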

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: log

2006-07-11 Thread George Law
 
Just a shot in the dark...
how about: \T /home/karl/sql_log_$$

I believe, in bash, $$ will give you the PID of the process, so you
should get a (somewhat) unique file name every time.
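
For making the log permanent across sessions, another option (a sketch;
the path is a placeholder) is to set the tee option in the client
section of an option file, so the mysql client enables it on every
start:

  [mysql]
  tee = /home/karl/sql_log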


-Original Message-
From: Karl Larsen [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 11, 2006 2:51 PM
To: MYSQL General List
Subject: log 

I have found that \T /home/karl/sql_log will cause everything I do
to be saved in the file sql_log. Alas, if I turn off mysql, as I do daily,
the log is lost and I have to do it again.

Is there a way to get a log like this to be permanent :-)


Karl Larsen


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to look for balanced parenthesis?

2006-07-11 Thread Andrew Kreps

As of Ultraedit version 12, it's not quite transparent, but it's
pretty close.  If you Open a file from FTP (File menu option), and hit
save, it will automatically upload the file to the server.  It's a
little slow on the save (as in hit save and count to 10), but it does
the job well.

Ultraedit seems to do a good job of keeping this functionality far
enough away from you to be easily accessible.  It's on an FTP submenu
of the File menu, and there are no toolbar buttons for Open From and
Save As to FTP by default.  You have to add them manually.  The server
setup is a bit clunky, but once you've added your account info, you
never have to do it again.

I have to say the cleanest implementation of this I've seen is in
KDevelop (KDE Application, Linux based but open source).  You just
type in a URL in the file open dialog, and it attempts to log into the
server.  If anonymous authentication fails, it prompts you for a
username and password, which you can optionally save in another app
called KWallet.  All of it happens inline, and it's very intuitive.
Unfortunately, I haven't had the heart to try to compile it under
Cygwin.  :)




On 7/11/06, Miles Thompson [EMAIL PROTECTED] wrote:

At 12:36 AM 7/11/2006, you wrote:

UE still lives on my machine; it's no longer my daily editor, but it
is invaluable if I have to look at a binary file. Ian Mead has a fine
editor there. Maybe I should download the latest version and give it a try.
When I last checked, its FTP-down/Edit/FTP-up cycle was not transparent, but
that may have been v.9 or v.10 - my copy is a pretty old v.7.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql - multimedia database

2006-07-11 Thread Peter Van Dijck

Creating a huge media database and storing the media *inside* of mysql
is a big mistake.

Store the media in a filesystem. Store metadata about the media in mysql.

Good luck.
Peter

On 7/10/06, Shain Lee [EMAIL PROTECTED] wrote:

Hi,

  I'm working on a multimedia database that is meant to be online with WAP
content. So I decided to go ahead with MySQL, and I installed the latest MySQL
5.0 on Fedora Core 5 with InnoDB. Up to now everything has succeeded.
  But my worry is: how can I create a huge multimedia database with MySQL?
  There are lots of subcategories in each and every main category,
  for example:
  Movie stars --- actors --- hollywood --- abc --- ddd
                             bollywood --- aaa --- sss
                  actress --- hollywood --- aaa
                             bollywood --- qqq
  Like that, there should be a lot of main categories and subcategories. All
content types are available, like wallpapers, greetings, ringtones,
movie clips, etc.

  Those contents are supposed to be stored as BLOBs in the MySQL database itself.

  So, can somebody help me create the most reliable MySQL multimedia
database?

  any ideas and comments are welcome ,

  Thanx in advance,
  Shaine.









--
find videoblogs: http://mefeedia.com
my blog: http://poorbuthappy.com/ease/
my job: http://petervandijck.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Speedup of database changes

2006-07-11 Thread Stefan Hornburg (Racke)
Brent Baisley wrote:
 There will always be new requests, but many times you can store data
 vertically in rows instead of horizontally in columns. You would
 need a label column to indicate what type of data the row contains.
 It's more work on the front end, but a lot more flexible. It's a
 technique I use often, but may not be applicable/possible in your
 situation.

That is right, but wouldn't that make the large table even larger ?

 
 You can't create indexes to avoid or reduce full table scans?
 

The major problem is that the application needs fast selects (for
displaying information) _and_ fast inserts/updates/deletes (data is updated
regularly with some large upload sets). So at the moment it isn't possible
to use more indices.

 InnoDB is slower in certain respects, but you need to keep in mind that
 InnoDB will allow multiple things to happen at once. Massive deletes can
 be a problem in InnoDB, taking a long time.
 Depending on how your data is structured, you may want to look into
 merge tables (MyISAM). I've got a table with almost 250 million
 records in it, which is a rolling 6 months' worth of data. Once a month
 we need to delete a month's worth of data (millions of records). We
 make sure that each of the underlying tables of the merge table holds
 about 1 month's worth of data. To delete we drop the table and alter
 the UNION on the merge table. Takes less than a second.
 
 Delay key write won't help much if you are not using indexes.

I will try that first and see if it speeds up the writing operations.

Thanks for your valuable input, I'll report back what happened :-).

Bye
Racke


-- 
LinuXia Systems = http://www.linuxia.de/
Expert Interchange Consulting and System Administration
ICDEVGROUP = http://www.icdevgroup.org/
Interchange Development Team


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysterious speedup after doing FLUSH TABLES

2006-07-11 Thread Moritz Möller
Hello list,

we use MySQL 4.1.13 / InnoDB on Xeon em64t running Debian, kernel 2.6.8.

After a while, queries run slower and slower, although the CPU is 70% idle
and disk I/O is nearly zero. A simple select by primary key takes up to one
second. After doing a FLUSH TABLES the speed is back up again.

I also noticed that 4.1.15 seems to run slower than 4.1.13 but that might be
subjective.

As far as I know, FLUSH TABLES reopens the tables and clears the query
cache. Query cache is enabled, but does not seem to cause the slow down, as
I can observe the same effect without the query cache.

So it looks as if some buffers run full, or a lock is blocked or something.
The queries running do not hang in any specific stage (such as waiting for
lock).

Even stranger: we have 4 servers of that type (hardware, software,
configuration all identical) and two of them need a flush tables every now
and then, the other two (which have more load, 5000 q/s compared to 1500)
are performing without problems. 

Well, if anyone has an idea what causes the slowdown, and/or why a flush
tables helps, I'd be glad to hear from you ;)

Thanks,

Moritz



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Trying to understand why Select running out of memory if table not used

2006-07-11 Thread Jacob, Raymond A Jr
 
When I try to retrieve all rows in the event table that are in a
particular range and then try to determine if the cid is in the data
table, the join takes forever. So I just tried adding the table to see
if the WHERE clause was slowing things down. Below is a portion of my
session.



mysql> select cid, timestamp from event where timestamp between
    -> '2006-05-01' AND '2006-05-15';

| 7753544 | 2006-05-14 23:59:58 |
| 7753545 | 2006-05-15 00:00:00 |
| 7753546 | 2006-05-15 00:00:00 |
+---------+---------------------+
1336344 rows in set (32.55 sec)

mysql> select event.cid, event.timestamp from event, data where
    -> event.timestamp between '2006-05-01' AND '2006-05-15';
mysql: Out of memory (Needed 1477684 bytes)
ERROR 2008 (HY000): MySQL client ran out of memory
   
Obviously, that is a bad idea. I just cannot figure
out how to speed the SELECT up.

I was using the query:

  SELECT event.cid, event.timestamp FROM event, data
  WHERE ( event.timestamp BETWEEN '2006-05-01' AND '2006-05-15' ) AND
        event.cid = data.cid;

But the query never completed, i.e. I aborted it after a few
hours.
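
A hedged guess at a next step (the index name is invented, and this
only helps if data.cid is not already indexed): give the join an index
to probe, then check the plan with EXPLAIN before rerunning.

  CREATE INDEX idx_data_cid ON data (cid);
  EXPLAIN SELECT event.cid, event.timestamp
  FROM event JOIN data ON event.cid = data.cid
  WHERE event.timestamp BETWEEN '2006-05-01' AND '2006-05-15';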

Thank you,
Raymond

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Mysql-workbench 1.0.6beta not working on reverse-engineer

2006-07-11 Thread rturnbull

Hello to all,
   I'm having some problems with the Linux copy of mysql-workbench.  
Great features and all, if I could get them to work.


   What I'm trying to do is reverse-engineer an InnoDB database I have 
in mysql 5.0 on my local machine.


   I go through the steps right to the end of the process, and when I'm 
ready to click Finish, the application hangs. I can't do anything 
further with it, and I have to kill it manually or repeatedly 
click the window-close 'x'.  I have tried the reverse-engineer on the 
mysql database on the same box, using the same steps and application 
version, and everything works fine, so what gives?


   To boot, there is NO LOGFILE and there are NO ERROR MESSAGES on the 
console or anywhere else.


   Could it be that the reverse-engineering process is still running in 
the background?


   Anyways, I thought that maybe the database, which was full of 
records, was causing the problem. So I created a mysqldump file 
with just the table create statements, then re-imported the file back into 
mysql.  Same result: workbench freezes.


Please let me know if there is anything I can do about this; otherwise I 
guess I have to wait for the newest version to address it, or manually 
draw all the table relationships by hand. YUK!


--
Ryan Turnbull
Network Administrator



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



I don't understand why SCSI is preferred.

2006-07-11 Thread Brian Dunning
My understanding is that SCSI has a faster transfer rate, for  
transferring large files. A busy database needs really fast access,  
for making numerous fast calls all over the disk. Two different,  
unrelated things.


I am more than willing to be called Wrong, slapped, and cast from a  
bridge.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Chris White
On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files. A busy database needs really fast access,
 for making numerous fast calls all over the disk. Two different,
 unrelated things.

 I am more than willing to be called Wrong, slapped, and cast from a
 bridge.

Be careful with that; databases do more work in memory than anything else.  That 
said, I'd be more worried about your memory capacity.  Now, if you rely 
mainly on swap (virtual) memory, then you might worry more about that :).

-- 
Chris White
PHP Programmer/DBouncingWithJava
Interfuel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Chris W

Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for  
transferring large files. 


SCSI is better for EVERYTHING except your budget.  Faster for large 
transfers, small transfers, and seek times, and, most especially, it handles 
requests from multiple threads much better. 


--
Chris W
KE5GIX

Gift Giving Made Easy
Get the gifts you want  
give the gifts they want
One stop wish list for any gift, 
from anywhere, for any occasion!

http://thewishzone.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Scott Haneda
 Brian Dunning wrote:
 
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files.
 
 SCSI is better for EVERYTHING except your budget.  Faster for large
 transfers, small transfers, seek times, and most especially it handles
 requests from multiple threads much better.

Almost everything: they have not closed the capacity gap yet, and SCSI
drives are generally much smaller than non-SCSI drives.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Greg 'groggy' Lehey
On Tuesday, 11 July 2006 at 16:41:24 -0700, Chris White wrote:
 On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
 My understanding is that SCSI has a faster transfer rate, for
 transferring large files. A busy database needs really fast access,
 for making numerous fast calls all over the disk. Two different,
 unrelated things.

 I am more than willing to be called Wrong, slapped, and cast from a
 bridge.

 Be careful on that, databases do more work in memory than anything
 else.  That said, I'd be more worried about your memory capacity.
 Now, if you rely mainly on swap(virtual) memory, then you might
 worry more on that :).

Clearly, when you're working in memory, the kind of disks you use doesn't
have much influence.

In fact, SCSI disks typically have (marginally) faster access times
than ATA.  They may also have higher transfer rates, but as Brian
observes, this is of marginal interest.

One of the things that we discuss internally from time to time is the
influence of block size on database performance.  On modern disks,
random access to a single 4 kB block takes about 5.1 ms (5 ms seek,
0.1 ms transfer).  Random access to a single 64 kB block takes about
6.6 ms (5 ms seek, 1.6 ms transfer).  Clearly big blocks improve disk
bandwidth; but if you only need 4 kB, the rest doesn't buy you
anything.  That's why we discuss rather than come to any useful
conclusion.
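
(Those figures imply a sustained transfer rate of roughly 40 MB/s, so the
arithmetic is easy to check; a one-liner, assuming the 5 ms seek and
40 MB/s rate above:

  mysql> SELECT 5 + 4/40 AS ms_4kB, 5 + 64/40 AS ms_64kB;

which returns 5.10 and 6.60.)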

Greg
--
Greg Lehey, Senior Software Engineer, Online Backup
MySQL AB, http://www.mysql.com/
Echunga, South Australia
Phone: +61-8-8388-8286   Mobile: +61-418-838-708
VoIP:  sip:[EMAIL PROTECTED], sip:[EMAIL PROTECTED]
Diary http://www.lemis.com/grog/diary.html

Are you MySQL certified?  http://www.mysql.com/certification/




Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Jon Frisby
It's my understanding that the biggest remaining difference has to do  
with SCSI having far superior command queueing capabilities --  
although SATA's command queueing may have closed the gap somewhat --  
which provides for much better real-world performance when you have  
multiple database threads doing work.


The bottom line is that (at least in the past -- who knows, perhaps  
the latest-n-greatest SATA gear has truly tipped the scales, although  
I doubt it) you will see better real-world performance with less  
fidgeting* from SCSI (or Fibre Channel, switched or otherwise) in  
terms of access times and throughput than you will from PATA or SATA.


* - For example: We faced a NASTY problem using AMD 64-bit CPUs +  
SATA + Linux where I/O on the system (the WHOLE system, not JUST the  
SATA spindles -- network, PATA, USB, EVERYTHING) would suddenly come  
to a grinding halt (or very nearly so) at random when the SATA  
subsystem was under heavy load.  It required a LOT of trial-and-error  
kernel adjustments to find a configuration that did not suffer this  
problem.


As to whether it is PREFERRED, that comes down to your constraints.   
There are some problem domains where it's REALLY REALLY HARD to split  
database load across multiple servers.  There are many problem  
domains where bad or overly-simplistic design patterns are common  
that make scaling to multiple machines hard.  So sometimes you wind  
up in a nasty situation where your only option is to have REALLY fast  
spindles -- in which case, the 10x or 20x price premium for SCSI may  
be unavoidable.


Generally speaking, if you need ultra-fast spindles you should  
probably be re-evaluating your database architecture as you're asking  
for financial and technological pain.


-JF

On Jul 11, 2006, at 4:18 PM, Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for  
transferring large files. A busy database needs really fast access,  
for making numerous fast calls all over the disk. Two different,  
unrelated things.


I am more than willing to be called Wrong, slapped, and cast from a  
bridge.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Views can't refer to temporary tables

2006-07-11 Thread Daniel Kasak
Greetings.

Continuing on from my previous question, I've discovered that I *can't*
create views which refer to temporary tables.
Is there any plan to lift this restriction?
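
A minimal reproduction of what I'm seeing (table and view names invented
for the example):

  CREATE TEMPORARY TABLE tmp_prices (id INT, price DECIMAL(10,2));

  -- rejected: a view's SELECT may not refer to a temporary table
  CREATE VIEW v_prices AS SELECT id, price FROM tmp_prices;

  -- works: the same definition against a real (non-temporary) table
  CREATE TABLE work_prices (id INT, price DECIMAL(10,2));
  CREATE VIEW v_prices AS SELECT id, price FROM work_prices;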

-- 
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Self Join Performance

2006-07-11 Thread Andrew Wood

Hello everyone!

I've got a few questions regarding optimizing self-joins.
So I've got these three tables:

mysql> describe FieldName;
+-------------+------------+------+-----+---------+----------------+
| Field       | Type       | Null | Key | Default | Extra          |
+-------------+------------+------+-----+---------+----------------+
| FieldNameID | bigint(20) | NO   | PRI | NULL    | auto_increment |
| Name        | char(255)  | YES  |     | NULL    |                |
+-------------+------------+------+-----+---------+----------------+
2 rows in set (0.02 sec)

mysql> describe FieldValue;
+--------------+------------+------+-----+---------+----------------+
| Field        | Type       | Null | Key | Default | Extra          |
+--------------+------------+------+-----+---------+----------------+
| FieldValueID | bigint(20) | NO   | PRI | NULL    | auto_increment |
| FieldNameID  | bigint(20) | NO   | MUL | NULL    |                |
| Value        | char(255)  | YES  | MUL | NULL    |                |
+--------------+------------+------+-----+---------+----------------+
3 rows in set (0.00 sec)

mysql> describe FieldValueRelation;
+--------------+-------------+------+-----+---------+-------+
| Field        | Type        | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| FieldValueID | bigint(20)  | NO   | MUL | NULL    |       |
| DocumentID   | bigint(20)  | NO   | MUL | NULL    |       |
| FieldSetID   | tinyint(11) | YES  |     | NULL    |       |
+--------------+-------------+------+-----+---------+-------+
3 rows in set (0.00 sec)


I've run a script to populate the database with a decent set of
randomized data (~7 million DocumentIDs).  The following query may
clear things up a bit:


mysql> select * from FieldName, FieldValue, FieldValueRelation where
    -> DocumentID = '700' and FieldValue.FieldValueID =
    -> FieldValueRelation.FieldValueID and FieldName.FieldNameID =
    -> FieldValue.FieldNameID;
+-------------+-----------+--------------+-------------+----------+--------------+------------+------------+
| FieldNameID | Name      | FieldValueID | FieldNameID | Value    | FieldValueID | DocumentID | FieldSetID |
+-------------+-----------+--------------+-------------+----------+--------------+------------+------------+
|           1 | Account   |      6737063 |           1 | 88116010 |      6737063 |        700 |       NULL |
|           2 | FirstName |         1344 |           2 | Noelle   |         1344 |        700 |          1 |
|           3 | LastName  |         1569 |           3 | Shea     |         1569 |        700 |          1 |
+-------------+-----------+--------------+-------------+----------+--------------+------------+------------+
3 rows in set (0.00 sec)



So here's my question:  I want to be able to find all of the rows in
the FieldValueRelation table that have both a FirstName of 'Noelle'
and a LastName of 'Shea'.  I've tried a few things, all of which have
less than stellar performance.  I'm guesstimating that I'll have to do
a self join on the FieldValueRelation table. Alas, it's pretty darn
slow:

SELECT
DISTINCT(fvr.DocumentID)
FROM
FieldValueRelation fvr
INNER JOIN
FieldValueRelation fvr1
ON fvr.DocumentID = fvr1.DocumentID
AND fvr1.FieldValueID = '1344'
INNER JOIN
FieldValueRelation fvr2
ON  fvr.DocumentID = fvr2.DocumentID
AND fvr2.FieldValueID = '1569'

And when I execute it:

+------------+
| DocumentID |
+------------+
|     162955 |
|     721704 |
|     993290 |
|    1606157 |
|    2459823 |
|    2759626 |
|    3949779 |
|    5192230 |
|    5753563 |
|    6616602 |
|        700 |
+------------+
11 rows in set (37.33 sec)

37 seconds is a bit long for what I'd like to use this schema for. So
here are the counts for the matching FieldValueIDs (if this helps):

mysql> select count(*) from FieldValueRelation where FieldValueID = '1569';
+----------+
| count(*) |
+----------+
|     7753 |
+----------+
1 row in set (0.05 sec)

mysql> select count(*) from FieldValueRelation where FieldValueID = '1344';
+----------+
| count(*) |
+----------+
|     7357 |
+----------+
1 row in set (0.01 sec)


Can anyone shed any light on this?
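
One rewrite that may be worth comparing against the self-join is a single
aggregate pass over the relation table, backed by a covering index (index
name made up; this is a sketch, not something I've benchmarked):

  ALTER TABLE FieldValueRelation
    ADD INDEX idx_value_doc (FieldValueID, DocumentID);

  SELECT DocumentID
  FROM FieldValueRelation
  WHERE FieldValueID IN (1344, 1569)
  GROUP BY DocumentID
  HAVING COUNT(DISTINCT FieldValueID) = 2;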

--
Cheers,
-Andrew


All generalizations, with the possible exception of this one, are false.
- Kurt Goedel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



datetime issue on MySQL 4.x

2006-07-11 Thread Willy
Hello,
I have a MyISAM table:

CREATE TABLE `dlr` (
  `msisdn` varchar(20) NOT NULL default '',
  `source` varchar(20) NOT NULL default '',
  `operator_id` varchar(20) NOT NULL default '',
  `sms_message` longtext NOT NULL,
  `smsc_id` varchar(20) NOT NULL default '',
  `sms_id` varchar(250) NOT NULL default '',
  `dlr_smsc` text NOT NULL,
  `dlr` tinyint(4) NOT NULL default '0',
  `date_time` datetime NOT NULL default '0000-00-00 00:00:00'
) TYPE=MyISAM

I have a case here: on July 7 I imported some data into the table with this 
query:

INSERT INTO dlr (source, msisdn, operator_id, sms_message, smsc_id, sms_id, 
dlr_smsc, dlr, date_time) 
VALUES 
('366508', '1234', 4, '', 'IM33665', MD5(RAND()), 'RE:', '2', NOW()),
('366508', '1234', 4, 'id:0943381383 sub:001 dlvrd:000 submit date:0606280317 
done date:0606290317 stat:EXPIRED err:999 text:', 'IM33665', MD5(RAND()), 
'RE:id:0943381383 sub:001 dlvrd:000 submit date:0606280317 done date:0606290317 
stat:EXPIRED err:999 text:', '2', '2006-06-28 03:17:00'),
('366508', '1234', 4, 'id:1226442088 sub:001 dlvrd:000 submit date:0606280317 
done date:0606290317 stat:EXPIRED err:999 text:', 'IM33665', MD5(RAND()), 
'RE:id:1226442088 sub:001 dlvrd:000 submit date:0606280317 done date:0606290317 
stat:EXPIRED err:999 text:Silahkan tawar: XDA ', '2', '2006-06-28 03:17:00'),
('366508', '1234', 4, 'id:1861980498 sub:001 dlvrd:000 submit date:0606280317 
done date:0606290317 stat:EXPIRED err:999 text:', 'IM33665', MD5(RAND()), 
'RE:id:1861980498 sub:001 dlvrd:000 submit date:0606280317 done date:0606290317 
stat:EXPIRED err:999 text:', '2', '2006-06-28 03:17:00'),
('366508', '1234', 4, '', 'IM33665', MD5(RAND()), 'RE:', '2', NOW())...

And when I try to fetch the data using PHP with this query:

SELECT operator_id, DATE_FORMAT(date_time,'%d-%m-%Y') AS tanggal, COUNT(*) AS 
TOTAL FROM dlr WHERE 1=1 AND dlr = 8 AND MONTH(date_time) = MONTH(NOW()) GROUP 
BY tanggal, operator_id ORDER BY date_time

The weird thing happened: why are all the rows shown, or treated, as July 
7th data? Looking at the imported data, there should be June 28, June 
29, and so on up to July 7. Please help, and many thanks for any reply.
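
One detail about the query itself: MONTH(date_time) = MONTH(NOW()) matches
July of any year and cannot use an index on date_time. An explicit range
avoids both issues; a sketch of the same report (not what I ran originally):

  SELECT operator_id, DATE_FORMAT(date_time,'%d-%m-%Y') AS tanggal,
         COUNT(*) AS TOTAL
  FROM dlr
  WHERE dlr = 8
    AND date_time >= '2006-07-01' AND date_time < '2006-08-01'
  GROUP BY tanggal, operator_id
  ORDER BY date_time;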

Regards


Willy

Tuning High Loads MySQL Server

2006-07-11 Thread Willy
Hello,
I have MySQL 4.x installed, and this server gets a very high request rate, 
about 1 - 5 requests/minute. Any tips on tuning it? Many thanks 
for any replies.
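
The knobs I keep seeing mentioned are the ones below; the values are only
illustrative starting points, not my actual config:

  [mysqld]
  key_buffer_size   = 256M   # MyISAM index cache; the biggest MyISAM knob
  table_cache       = 512    # avoid constant table re-opens
  thread_cache_size = 32     # reuse threads under connection churn
  query_cache_type  = 1
  query_cache_size  = 32M    # helps read-mostly workloads
  max_connections   = 300
  tmp_table_size    = 64M    # in-memory temp tables before spilling to disk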

Regards


Willy

Re: datetime issue on MySQL 4.x (SOLVED)

2006-07-11 Thread Willy
Hello,
I have solved this problem, thanks.

Regards


Willy
  ----- Original Message ----- 
  From: Willy 
  To: mysql@lists.mysql.com 
  Sent: Wednesday, July 12, 2006 8:55 AM
  Subject: datetime issue on MySQL 4.x


  [snip]

Re: I don't understand why SCSI is preferred.

2006-07-11 Thread mos

At 06:18 PM 7/11/2006, you wrote:

My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,
for making numerous fast calls all over the disk. Two different,
unrelated things.

I am more than willing to be called Wrong, slapped, and cast from a
bridge.


SCSI controllers have a processor that can queue disk commands, thereby 
freeing up CPU cycles, which makes them ideal for a server. If you are using 
a SCSI drive on a single-user machine then it's not going to be faster, 
and could even be slower than a good IDE drive. I used a lot of SCSI 
drives years ago and paid dearly for the drives and the 
controllers.  SATA II drives may give SCSI a run for their money. But as 
others have said, you can get better database performance just by 
increasing your RAM.


SCSI drives are also designed to run 24/7 whereas IDE drives are more 
likely to fail if used on a busy server. If you really want something fast, 
put the data on a hardware RAM drive. If you think SCSI drives are 
expensive, you ain't seen nothing yet. :)


http://www.anandtech.com/storage/showdoc.aspx?i=2480
http://www.tomshardware.com/2005/12/05/hyperos_dram_hard_drive_on_the_block/
http://www.hyperossystems.co.uk/

Mike

P.S. Don't jump from a bridge, cause I may be driving underneath it at the 
time. 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]