Re: [Replication] - urgent

2007-10-03 Thread Ratheesh K J
Thanks,

It helped me a lot. I wanted to know 
  1. What are the various scenarios where my replication setup can fail 
(considering even issues like network failure, server reboot, etc.)? What is 
the normal procedure to correct the failure when something unpredicted happens?
  2. What are the scenarios where the SQL THREAD stops running, and what are 
the scenarios where the IO THREAD stops running? 
  3. Does SQL_SLAVE_SKIP_COUNTER skip the statement in the master binlog from 
being replicated to the slave relay log, or has the statement already been 
copied into the slave relay log and merely been skipped from it?
  4. How do I know immediately that replication has failed? (I have heard that 
the Enterprise edition has some technique for this.)
Thanks & regards,
Ratheesh

- Original Message - 
From: Jan Kirchhoff [EMAIL PROTECTED]
To: Ratheesh K J [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, October 02, 2007 4:16 PM
Subject: Re: [Replication] - urgent


 Ratheesh K J wrote:
 Hello all,

 I issued a create table statement on the master for a table which was not 
 present on the master but present on the slave.
 I did this purposely to see the error on slave.

 I am a newbie to replication. Now when I view the slave status on the slave 
 machine, it shows that the SQL thread has stopped.

 When I start the SQL thread, it does not start and gives the error message 
 that the table already exists. How do I correct this, and how do I calculate the 
 next position that the slave must start executing from in the relay log?

 Is there any article on MySQL replication that tells me how to deal with 
 errors when they occur?

 Thanks & regards,
 Ratheesh
   
 
 You have 2 options:
 
 1.
 on the slave, enter SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; and then 
 SLAVE START;. This skips the upcoming entry in the binlog, 
 which is the CREATE TABLE command that causes your problem.
 
 2.
 if you don't have any data in the table on the slave, just drop the 
 table and do a SLAVE START;. It will then create the table again, as 
 this is the next command in the binlog.
 
 Remember: never write on the slave without knowing what you are doing, and 
 you'll be happy with your replication ;)
 
 Jan
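
The two recovery options above can be written out as a short session on the
slave; this is only a sketch (the table name is hypothetical, and which option
to use depends on the error reported in SHOW SLAVE STATUS):

```sql
-- Inspect the failure first: Last_Errno / Last_Error explain
-- why the SQL thread stopped.
SHOW SLAVE STATUS\G

-- Option 1: skip the single offending binlog event, then resume.
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

-- Option 2: remove the conflicting table so the replicated
-- CREATE TABLE can succeed, then resume.
DROP TABLE IF EXISTS conflicting_table;  -- hypothetical table name
START SLAVE;

-- Confirm Slave_IO_Running and Slave_SQL_Running both say "Yes" again.
SHOW SLAVE STATUS\G
```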

Re: Multipart + IN comparison on the second part + JOIN does not use full index

2007-10-03 Thread Rob Wultsch
It is way past bed time so excuse me if I am way off...

What is the order of tables in the explain? What is shown as the select_type?


 On 10/2/07, Eamon Daly [EMAIL PROTECTED] wrote:
  Hi, all. I couldn't find this mentioned in the docs or in
  the archives, so I figured I'd ask. I have a table with a
  multipart index on three columns. When querying the table
  alone using IN operators on any of the three columns, all
  parts of the index are used. However, if I do a JOIN with
  another table on the first column, the first part of the
  index is used, but not the rest-- but only when searching
  for multiple values on col2. Best explained by example, so
  here's the table:
 
  CREATE TABLE `table1` (
`col1` char(1) default NULL,
`col2` char(1) default NULL,
`col3` char(1) default NULL,
KEY `col1` (`col1`,`col2`,`col3`)
  ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
  So with multiple IN operators against table1 alone, EXPLAIN
  gives the expected key_len of 6:
 
  EXPLAIN
  SELECT SQL_NO_CACHE COUNT(*)
  FROM table1
  WHERE table1.col1 IN ('A', 'B') AND table1.col2 IN ('A', 'B') AND
  table1.col3 IN ('A', 'B')
 
  and if I JOIN against another table with single values in
  the IN operators, I again get a key_len of 6:
 
  EXPLAIN
  SELECT SQL_NO_CACHE COUNT(*)
  FROM table1, table2
  WHERE table1.col1 = table2.col1 AND table1.col2 IN ('A') AND table1.col3 IN
  ('A')
 
  This one, however, results in a key_len of 2:
 
  EXPLAIN
  SELECT SQL_NO_CACHE COUNT(*)
  FROM table1, table2
  WHERE table1.col1 = table2.col1 AND table1.col2 IN ('A', 'B') AND
  table1.col3 IN ('A', 'B')
 
  Is this expected behavior? It surprised me that the second
  query would take full advantage of the index but not the
  third. We're using MySQL 4.1.20.


-- 
Rob Wultsch
(480)223-2566
[EMAIL PROTECTED] (email/google im)
wultsch (aim)
[EMAIL PROTECTED] (msn)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Out of memory; check if mysqld or some other process uses all available memory; error

2007-10-03 Thread amarnath.shivashankar

Hi Mathieu,

 

I found that the innodb_buffer_pool_size value isn't set. All the InnoDB
settings are commented out. I found the following values in the my.cnf file:


# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/lib/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 256M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 64M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50


Here is the memory usage (free -m output, values in MB):

                   total       used       free     shared    buffers     cached
Mem:                4054       4038         15          0          6       1426
-/+ buffers/cache:  2605       1448
Swap:               4094        150       3943

Please help me out with changing the parameter values.

Regards,

Amarnath S

Amarnath Shivashankar wrote:
 We have found that MySQL on all Email DB servers starts throwing the
 Out of memory; check if mysqld or some other process uses all available
 memory; error. The error goes away once we restart MySQL, but after a week
 the same problem occurs again. We have 4 GB of physical memory on the
 server, but MySQL utilizes only up to 2.5 GB and then starts throwing the
 Out of memory error.

 Please help me to resolve this.
 
Mathieu Bruneau wrote: 


This looks like the traditional 32-bit limitation... You're using a 
32-bit system, right? For a number of reasons (there is lots of documentation 
on the net about this), MySQL is in practice limited to about 2.4-2.6 GB of 
memory on such systems, hence the error you see.

When I experienced these errors, I lowered the MySQL and InnoDB buffer 
settings so that mysqld would stay below this limit, and never had another 
issue with it. (It was crashing with an error 11 before.) I kept these 
settings until I could upgrade to a 64-bit host.



-- 
Mathieu Bruneau
aka ROunofF

 

Regards,

Amarnath Shivashankar

SQL Database Management 





Re: How to use an hardcoded list of values

2007-10-03 Thread Manuel Vacelet
On 9/26/07, Baron Schwartz [EMAIL PROTECTED] wrote:
 I don't think you're gaining anything by doing this though, unless it is
 extremely expensive to do a lookup in item.

Thanks Baron,

I wanted to be sure I didn't miss a key feature.
As the lookup in item is not expensive at all, I will keep the
simple solution.

Regards,
Manuel




Design decision

2007-10-03 Thread Scott Haneda
I have a table of objects; attached to those objects are keywords.
Users submit the keywords to the objects.

Currently, I chose to keep a hit count on the keywords, so if a duplicate
keyword is supplied, a counter is incremented.

I thought this was a good idea, as it keeps the number of rows in the
keywords table to a minimum.

However, this is a user login based system, and with the above, I lose the
ability to track which users sent in which keywords.

So I can move to the keywords table storing duplicate keywords, and each
keyword will get a user_id attached to it, but, that table will grow.  Fast.

1000 objects, each with only 10 keywords, is already 10,000 rows; I could hit
many millions very fast.

I could toss in a third table, and relate that to the user_id and keyword.
However, I am inserting in one go, and that could be upwards of 30 or more
inserts, taking too long.

Anyone got any suggestions?  Thanks.
-- 
-
Scott Haneda          Tel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.
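
On the worry about 30-plus separate inserts: MySQL's multi-row INSERT syntax
packs them into one statement. A sketch (table and column names are
hypothetical, not from this thread):

```sql
-- One round trip for all keyword/user relationships at once.
INSERT INTO object_keywords (object_id, user_id, keyword)
VALUES
  (42, 7, 'red'),
  (42, 7, 'bicycle'),
  (42, 7, 'vintage');
```

With a UNIQUE key over (object_id, keyword), adding ON DUPLICATE KEY UPDATE
hits = hits + 1 (available since MySQL 4.1) could keep the existing
hit-counter behaviour in the same statement.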






Re: Design decision

2007-10-03 Thread Chris

Scott Haneda wrote:

I have a table of objects; attached to those objects are keywords.
Users submit the keywords to the objects.

Currently, I chose to keep a hit count on the keywords, so if a duplicate
keyword is supplied, a counter is incremented.

I thought this was a good idea, as it keeps the number of rows in the
keywords table to a minimum.

However, this is a user login based system, and with the above, I lose the
ability to track which users sent in which keywords.


Why do you need this info? Is there a reporting need or something else 
you need to keep this information for? Ie when are you going to need to 
know who inserted a particular keyword?



So I can move to the keywords table storing duplicate keywords, and each
keyword will get a user_id attached to it, but, that table will grow.  Fast.

1000 objects, each with 10 keywords only = 10,000, I could hit many millions
very fast.

I could toss in a third table, and relate that to the user_id and keyword.
However, I am inserting in one go, and that could be upwards of 30 or more
inserts, taking too long.


[assuming php]

$keyword_ids = array();
foreach ($keywords_to_insert as $keyword) {
    // Insert the keyword, then remember the id MySQL assigned to it.
    mysql_query("INSERT INTO keywords_table (keyword) VALUES ('"
        . mysql_real_escape_string($keyword) . "')");
    $keyword_ids[] = mysql_insert_id();
}

// One statement inserts all user/keyword relationships (table names illustrative).
$query = "INSERT INTO user_keywords (userid, keywordid) "
       . "SELECT " . (int)$userid . ", id FROM keywords_table "
       . "WHERE id IN (" . implode(',', $keyword_ids) . ")";
mysql_query($query);


[/assuming php]


Just one query to insert the relationship(s) :)





Re: Design decision

2007-10-03 Thread Scott Haneda
 Scott Haneda wrote:
 I have a table of objects; attached to those objects are keywords.
 Users submit the keywords to the objects.
 
 Currently, I chose to keep a hit count on the keywords, so if a duplicate
 keyword is supplied, a counter is incremented.
 
 I thought this was a good idea, as it keeps the number of rows in the
 keywords table to a minimum.
 
 However, this is a user login based system, and with the above, I lose the
 ability to track which users sent in which keywords.
 
 Why do you need this info? Is there a reporting need or something else
 you need to keep this information for? Ie when are you going to need to
 know who inserted a particular keyword?

Because it is a semi-public system, every user is allowed to supply keywords
to other users' objects.  If some not-so-nice person decides to paste in a
list of words to 'game' that object up the ranks, I want to know who, and be
able to take action.

Looking over the php now, thanks.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Design decision

2007-10-03 Thread Chris

Scott Haneda wrote:

Scott Haneda wrote:

I have a table of objects; attached to those objects are keywords.
Users submit the keywords to the objects.

Currently, I chose to keep a hit count on the keywords, so if a duplicate
keyword is supplied, a counter is incremented.

I thought this was a good idea, as it keeps the number of rows in the
keywords table to a minimum.

However, this is a user login based system, and with the above, I lose the
ability to track which users sent in which keywords.

Why do you need this info? Is there a reporting need or something else
you need to keep this information for? Ie when are you going to need to
know who inserted a particular keyword?


Because it is a semi-public system, every user is allowed to supply keywords
to other users' objects.  If some not-so-nice person decides to paste in a
list of words to 'game' that object up the ranks, I want to know who, and be
able to take action.


So you're not going to reference the data that much, so size of the 
table (and speed) isn't going to be a huge issue.


Keeping a keywordid-to-userid table will work pretty well, I think.




MySQL Configuration for a powerful server?

2007-10-03 Thread Ratheesh K J
Hello all,

What are the best possible values in my.cnf for an 8-processor (quad-core, 2 CPU), 
8 GB RAM machine dedicated to the MySQL server only? No other application will run 
on this machine.

innodb_buffer_pool_size cannot accept values above 2000 MB due to the 32-bit 
machine constraint. So what other parameters can be tweaked to make the best use 
of this powerful server?

NOTE: All our tables use the InnoDB storage engine.

Re: MySQL Configuration for a powerful server?

2007-10-03 Thread Jan Kirchhoff

Ratheesh K J wrote:

Hello all,

What is the best possible values in my.cnf for a 8 processor (Quad core-2 cpu) 
8 GB RAM machine dedicated for MySQL server only. No other application will run 
on this machine.

the innodb_buffer_pool_size cannot accept values above 2000 MB due to 32 bit 
machine constraint. So what other parameters can be tweaked to make use of this 
powerful server to its best?

NOTE: All our tables are of INNODB storage engine.
  


You simply cannot make use of your server's power on a 32-bit OS.
Since it is a dedicated server anyway, install a 64-bit OS (I prefer 
Debian Linux, but install whatever 64-bit Linux you know best) and set 
the buffer pool to around 6.5 GB. All other variables depend on your 
usage: are you running a web application with lots of small queries, 
or only a few connections running big queries? What is the size 
of your DB? You'll need to provide more information to get help here.


Jan
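
On a 64-bit OS, that sizing would look roughly like the following my.cnf
fragment. The values are illustrative for a dedicated 8 GB InnoDB-only box,
not tuned recommendations:

```ini
# my.cnf fragment -- assumes a 64-bit OS on a dedicated 8 GB server
[mysqld]
innodb_buffer_pool_size        = 6G    # most of RAM can go to InnoDB here
innodb_log_file_size           = 256M  # changing it requires removing old ib_logfile*
innodb_flush_log_at_trx_commit = 1     # full durability; 2 trades safety for speed
```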




Re: [Replication] - urgent

2007-10-03 Thread Jan Kirchhoff

Ratheesh K J wrote:

Thanks,
 
It helped me a lot. I wanted to know


   1. what are the various scenarios where my replication setup can
  fail? (considering even issues like network failure and server
  reboot etc). What is the normal procedure to correct the failure
  when something unpredicted happens?

You should first read the relevant parts of the manual at 
https://dev.mysql.com/doc before asking such questions.

Basically:
- Use good hardware with ECC RAM and RAID controllers in order to 
minimize trouble with faulty hardware.
- Never write on the slaves without knowing what this could do to your 
replication setup.
- Watch the disk space and make sure there is always enough space for the 
binlogs. Otherwise you might end up with half-written binlogs on either 
the slave or the master because of a full disk, which can cause trouble and 
some work to get things up and running again.


When a master goes down or network connection is lost, the slave 
automatically tries to reconnect once a minute or so. Restarting the 
master or exchanging some network equipment is no problem. When the 
slave reboots, it tries to reconnect on startup, too.


This is out-of-the-box behaviour. You can modify it in my.cnf 
(e.g. with the skip-slave-start option, etc.)



   2. What are the scenarios where the SQL THREAD stops running and
  what are the scenarios where the IO THREAD stops running?

The SQL thread stops when it can't run an SQL query from the binlogs for any 
reason, as you experienced when the table already existed.


The IO thread only stops when it has an error reading a binlog from the 
master. When it's only a lost connection, it automatically reconnects.
Other problems (e.g. being unable to read a binlog) should never happen as 
long as you don't delete binlogs on the master that have not yet been 
copied over to the slave by the IO thread (check the output of the SHOW 
MASTER STATUS and SHOW SLAVE STATUS commands), and as long as you don't 
have faulty hardware (I/O errors on the hard disk or such things).



   3. Does SQL_SLAVE_SKIP_COUNTER skip the statement of the master
  binlog from being replicated to the slave relay log, OR has the
  statement already been copied into the slave relay log and
  been skipped from the relay log?

It skips the entry in the local copy of the binlog. The IO thread 
replicates the whole binlog, and the SQL thread skips an entry in it when 
you use SQL_SLAVE_SKIP_COUNTER.


   4. How do I know immediately that replication has failed? (I
  have heard that the Enterprise edition has some technique for
  this.)

Watch the log file; failures are written there. Or run a cronjob once a minute 
with something like
mysql -e 'show slave status\G' | grep '_Running:' > /dev/null || bash 
my_alarm_script_that_sends_mail_or_whatever.sh
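
A slightly more explicit variant of that cron check can be written as a small
shell function that parses the SHOW SLAVE STATUS output. This is a sketch (the
alarm command name is a placeholder); it requires that BOTH the IO and SQL
threads report Yes, not merely that the lines are present:

```shell
#!/bin/sh
# Succeed only if both replication threads report "_Running: Yes".
# Reads `mysql -e 'show slave status\G'` output on stdin, so it can
# be exercised without a live server.
slave_threads_ok() {
    running=$(grep -c '_Running: Yes$')   # healthy slave: IO + SQL = 2 lines
    [ "$running" -eq 2 ]
}

# Cron usage sketch (alarm script name is hypothetical):
# mysql -e 'show slave status\G' | slave_threads_ok \
#     || bash my_alarm_script_that_sends_mail_or_whatever.sh
```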




regards
Jan




ANN: Freeware PHP Generator for MySQL 7.10 released

2007-10-03 Thread SQL Maestro Group

Hi!

SQL Maestro Group announces the release of PHP Generator for MySQL 7.10, a
freeware MySQL GUI frontend that allows you to generate high-quality MySQL
PHP scripts for selected tables, views and queries, for further work
with these objects through the web.

http://www.sqlmaestro.com/products/mysql/phpgenerator/

1. BLOB fields can now be represented as pictures. It is also possible to
add, edit and delete the images directly from the generated web application.

2. Starting with this version, our software allows you to create web pages
with hyperlinks.

3. PHP Generator for MySQL now supports the creation of lookup menus based on a
custom value list. For ENUM fields, such a list is created automatically.

4. The View, Edit, and Delete command links are now placed to the left of the
data columns by default.

5. The wizard window is now resizable. It can also be maximized or
minimized.

6. Our software now generates classic PHP opening tags (<?php) instead of
short ones (<?).

Full press-release:
http://www.sqlmaestro.com/news/company/4667/

Background information:
SQL Maestro Group is engaged in developing database administration and
management tools for MySQL, SQL Server, PostgreSQL, Oracle, SQLite, Firebird
and MaxDB providing the highest performance, scalability and reliability to
meet the requirements of today's database applications.

Thank you for your attention.

Sincerely,
The SQL Maestro Group Team
http://www.sqlmaestro.com





Re: DB Schema Comparison Utility ?

2007-10-03 Thread Baron Schwartz

Eric Frazier wrote:

Daevid Vincent wrote:
This has been asked for many many times on this list, not sure why 
mySQL AB

doesn't just release a command line tool like a 'mysql diff' and also a
'mysql lint'. The lint one should be totally trivial for them to do, 
as they
already have a SQL parser! I can't tell you how many times our daily 
build

was broken by a missing semi-colon or some other SQL syntax error. We run
all commits through php -l and ruby's checker, but mysql is the only 
one

we have to sweat over.

While I'm glad that pretty GUI tools like Upscene's exist, that 
doesn't do

us any good on a linux build system where it does an svn checkout, runs
automated BVT tests, compiles code, uploads to a daily build 
directory, etc.


We need command line tools that run on linux.

:(   
This is not quite what you were asking for, but I found this yesterday: 
http://sourceforge.net/projects/mysqltoolkit


I think the guy has done a lot of really good work.


Why thank you :-)

I kept meaning to reply to this thread and mention mysqldiff:
http://www.adamspiers.org/computing/mysqldiff/

(Now that I search Google for it, I also see http://www.mysqldiff.org/, 
but I'm unsure of the relationship between the two.)


I have not used it myself.

Baron




Re: DB Schema Comparison Utility ?

2007-10-03 Thread Eric Frazier

Daevid Vincent wrote:

This has been asked for many many times on this list, not sure why mySQL AB
doesn't just release a command line tool like a 'mysql diff' and also a
'mysql lint'. The lint one should be totally trivial for them to do, as they
already have a SQL parser! I can't tell you how many times our daily build
was broken by a missing semi-colon or some other SQL syntax error. We run
all commits through php -l and ruby's checker, but mysql is the only one
we have to sweat over.

While I'm glad that pretty GUI tools like Upscene's exist, that doesn't do
us any good on a linux build system where it does an svn checkout, runs
automated BVT tests, compiles code, uploads to a daily build directory, etc.

We need command line tools that run on linux.

:( 
  
This is not quite what you were asking for, but I found this yesterday: 
http://sourceforge.net/projects/mysqltoolkit


I think the guy has done a lot of really good work.

Thanks,

Eric




allow me to post

2007-10-03 Thread Zack Wickes


Zack Wickes
VP Software Operations
Yummy Interactive Inc.
604-682-0471 Ext:232






Re: Multipart + IN comparison on the second part + JOIN does not use full index

2007-10-03 Thread Eamon Daly
After spending half the night trying this same query on a
number of different datasets, it looks like sometimes MySQL
/will/ use all parts in certain cases, so I'm satisfied by
that. Thanks for responding!


Eamon Daly



- Original Message - 
From: Rob Wultsch [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, October 03, 2007 1:30 AM
Subject: Re: Multipart + IN comparison on the second part + JOIN does not 
use full index


: It is way past bed time so excuse me if I am way off...
:
: What is the order of tables in the explain? What is shown as the 
select_type?
:
:
: On 10/2/07, Eamon Daly [EMAIL PROTECTED] wrote:
:  Hi, all. I couldn't find this mentioned in the docs or in
:  the archives, so I'd figure I'd ask. I have a table with a
:  multipart index on three columns. When querying the table
:  alone using IN operators on any of the three columns, all
:  parts of the index are used. However, if I do a JOIN with
:  another table on the first column, the first part of the
:  index is used, but not the rest-- but only when searching
:  for multiple values on col2. Best explained by example, so
:  here's the table:
: 
:  CREATE TABLE `table1` (
:`col1` char(1) default NULL,
:`col2` char(1) default NULL,
:`col3` char(1) default NULL,
:KEY `col1` (`col1`,`col2`,`col3`)
:  ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
: 
:  So with multiple IN operators against table1 alone, EXPLAIN
:  gives the expected key_len of 6:
: 
:  EXPLAIN
:  SELECT SQL_NO_CACHE COUNT(*)
:  FROM table1
:  WHERE table1.col1 IN ('A', 'B') AND table1.col2 IN ('A', 'B') AND
:  table1.col3 IN ('A', 'B')
: 
:  and if I JOIN against another table with single values in
:  the IN operators, I again get a key_len of 6:
: 
:  EXPLAIN
:  SELECT SQL_NO_CACHE COUNT(*)
:  FROM table1, table2
:  WHERE table1.col1 = table2.col1 AND table1.col2 IN ('A') AND table1.col3 
IN
:  ('A')
: 
:  This one, however, results in a key_len of 2:
: 
:  EXPLAIN
:  SELECT SQL_NO_CACHE COUNT(*)
:  FROM table1, table2
:  WHERE table1.col1 = table2.col1 AND table1.col2 IN ('A', 'B') AND
:  table1.col3 IN ('A', 'B')
: 
:  Is this expected behavior? It surprised me that the second
:  query would take full advantage of the index but not the
:  third. We're using MySQL 4.1.20.
:
:
: -- 
: Rob Wultsch
: (480)223-2566
: [EMAIL PROTECTED] (email/google im)
: wultsch (aim)
: [EMAIL PROTECTED] (msn)





Re: Multipart + IN comparison on the second part + JOIN does not use full index

2007-10-03 Thread Baron Schwartz
That probably means the optimizer is rejecting (part of) the index as 
not selective enough to be efficient for the given query, depending on 
storage engine index statistics.


Making sure your index statistics are up to date can help on certain storage 
engines (MyISAM).  ANALYZE TABLE does this for you.
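
For the MyISAM table in the quoted example, refreshing those statistics and
re-checking the plan is a two-statement sketch:

```sql
ANALYZE TABLE table1;   -- rebuilds MyISAM key distribution statistics

-- Re-run the problematic EXPLAIN to see whether key_len grows beyond 2:
EXPLAIN
SELECT SQL_NO_CACHE COUNT(*)
FROM table1, table2
WHERE table1.col1 = table2.col1
  AND table1.col2 IN ('A', 'B')
  AND table1.col3 IN ('A', 'B');
```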


Eamon Daly wrote:

After spending half the night trying this same query on a
number of different datasets, it looks like sometimes MySQL
/will/ use all parts in certain cases, so I'm satisfied by
that. Thanks for responding!


Eamon Daly



- Original Message - 
From: Rob Wultsch [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Wednesday, October 03, 2007 1:30 AM
Subject: Re: Multipart + IN comparison on the second part + JOIN does not 
use full index



: It is way past bed time so excuse me if I am way off...
:
: What is the order of tables in the explain? What is shown as the 
select_type?

:
:
: On 10/2/07, Eamon Daly [EMAIL PROTECTED] wrote:
:  Hi, all. I couldn't find this mentioned in the docs or in
:  the archives, so I'd figure I'd ask. I have a table with a
:  multipart index on three columns. When querying the table
:  alone using IN operators on any of the three columns, all
:  parts of the index are used. However, if I do a JOIN with
:  another table on the first column, the first part of the
:  index is used, but not the rest-- but only when searching
:  for multiple values on col2. Best explained by example, so
:  here's the table:
: 
:  CREATE TABLE `table1` (
:`col1` char(1) default NULL,
:`col2` char(1) default NULL,
:`col3` char(1) default NULL,
:KEY `col1` (`col1`,`col2`,`col3`)
:  ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
: 
:  So with multiple IN operators against table1 alone, EXPLAIN
:  gives the expected key_len of 6:
: 
:  EXPLAIN
:  SELECT SQL_NO_CACHE COUNT(*)
:  FROM table1
:  WHERE table1.col1 IN ('A', 'B') AND table1.col2 IN ('A', 'B') AND
:  table1.col3 IN ('A', 'B')
: 
:  and if I JOIN against another table with single values in
:  the IN operators, I again get a key_len of 6:
: 
:  EXPLAIN
:  SELECT SQL_NO_CACHE COUNT(*)
:  FROM table1, table2
:  WHERE table1.col1 = table2.col1 AND table1.col2 IN ('A') AND table1.col3 
IN

:  ('A')
: 
:  This one, however, results in a key_len of 2:
: 
:  EXPLAIN
:  SELECT SQL_NO_CACHE COUNT(*)
:  FROM table1, table2
:  WHERE table1.col1 = table2.col1 AND table1.col2 IN ('A', 'B') AND
:  table1.col3 IN ('A', 'B')
: 
:  Is this expected behavior? It surprised me that the second
:  query would take full advantage of the index but not the
:  third. We're using MySQL 4.1.20.
:
:
: -- 
: Rob Wultsch

: (480)223-2566
: [EMAIL PROTECTED] (email/google im)
: wultsch (aim)
: [EMAIL PROTECTED] (msn)




--
Baron Schwartz
Xaprb LLC
http://www.xaprb.com/




SQL for Subversion update-report

2007-10-03 Thread Jack Bates
I'm working on a Subversion interface to MediaWiki and am struggling
with the SQL to respond to Subversion's update-report:
http://www.mediawiki.org/wiki/WebDAV

MediaWiki's revision table contains unique revision ids and the
corresponding page id. The page table contains unique page ids and the
corresponding page title.

Subversion's update-report intends to get a list of changes between the
current state of the working copy and a target revision: often HEAD,
MAX(revision.rev_id)

By converting update-report entries to an SQL condition, I can select
rows of the revision table which come after the current state of the
working copy:

  <S:update-report send-all="true" xmlns:S="svn:">
    <S:src-path>http://ket/~jablko/mediawiki/webdav.php</S:src-path>
    <S:target-revision>27</S:target-revision>
    <S:entry rev="18"></S:entry>
    <S:entry rev="20">Test</S:entry>
    <S:entry rev="20">Main_Page</S:entry>
  </S:update-report>

- becomes:

((page_title = 'Main_Page' OR page_title LIKE 'Main\_Page/%') AND 
revision.rev_id > '20' OR NOT (page_title = 'Main_Page' OR page_title LIKE 
'Main\_Page/%') AND ((page_title = 'Test' OR page_title LIKE 'Test/%') AND 
revision.rev_id > '20' OR NOT (page_title = 'Test' OR page_title LIKE 'Test/%') 
AND revision.rev_id > '18'))

Using GROUP BY page_id, I get a list of pages which changed between the
current state of the working copy and the target revision.

My problem: I also need to know if these pages have revisions before
current-state, or if they are newly created.

I don't want to do a second query for revisions of page-list before
current-state because on an initial checkout, page-list could be
huge, making an enormous SQL query.

Instead, I think I should do a LEFT JOIN on another instance of the
revision table (old), where revisions are before current-state. NULL
rows in this table correspond to newly created pages:

SELECT page_title, MAX(new.rev_id), old.rev_id FROM page JOIN revision AS new 
LEFT JOIN revision AS old ON new.rev_page = old.rev_page WHERE new.rev_page = 
page_id AND old.rev_id < new.rev_id AND new.rev_id <= target-revision AND 
new.rev_id > current-state GROUP BY page_id

The problem with this query is that old.rev_id < new.rev_id means each
row in old is less than _a_ row in new, not necessarily less than
the _minimum_ row in new.

I don't want to replace this condition with old.rev_id < current-state,
because the current-state expression can be long and complex; I prefer to
evaluate it only once, for new.rev_id > current-state.

I tried replacing old.rev_id < new.rev_id with old.rev_id <
MIN(new.rev_id) but got a MySQL error: Invalid use of group function

Can anyone recommend how best to query the database for a list of pages
which changed since the current state of the working copy, and whether
those pages are newly created?

Much thanks, Jack
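
The "Invalid use of group function" error occurs because aggregates such as
MIN() cannot appear in a WHERE clause. One common workaround (a sketch, not
from this thread, with the long current-state condition reduced to an
@current_state placeholder) is to compute each page's minimum new revision
in a derived table and join against that:

```sql
-- Sketch: find each page's earliest "new" revision, then look for any
-- strictly older revision. A NULL prior_rev marks a newly created page.
SELECT p.page_title,
       n.min_new_rev,
       MAX(old.rev_id) AS prior_rev
FROM page AS p
JOIN (SELECT rev_page, MIN(rev_id) AS min_new_rev
      FROM revision
      WHERE rev_id > @current_state     -- placeholder for the full condition
      GROUP BY rev_page) AS n ON n.rev_page = p.page_id
LEFT JOIN revision AS old
       ON old.rev_page = n.rev_page AND old.rev_id < n.min_new_rev
GROUP BY p.page_id;
```

This evaluates the current-state expression only once, inside the derived
table, which was one of the stated requirements.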




Re: What is MYSQL's equivalent to Oracle's DBMS_OUTPUT

2007-10-03 Thread Anders Karlsson
Have a look at my myProcDbg project at SourceForge (an update is way overdue, 
but all the same). I think it might do what you are looking for.


/Karlsson
sol beach wrote:

Oracle provides a stored package called DBMS_OUTPUT, which is primarily
used to write/print/display text strings to standard out (a.k.a. the
terminal).
In V5 MySQL, is there a functional equivalent? If so, what is it called?
I am willing to RTFM if somebody provides me a clue as to which manual
contains the answer to my question.
I have Guy Harrison's MYSQL Stored Procedures but could not find what I'm
looking for in it.
Since I am not sure if what I want exists or what it may be called, I just
may be looking in the wrong places for the answer.

TIA!

  



--
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Anders Karlsson ([EMAIL PROTECTED])
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Sales Engineer
/_/  /_/\_, /___/\___\_\___/ Stockholm
   ___/   www.mysql.com Cellphone: +46 708 608121
  Skype: drdatabase



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



MySQL crashing on flush-logs

2007-10-03 Thread Ofer Inbar
We have MySQL 5.0.27 running on about 10 different RedHat EL4 boxes,
all from the same RPMs.  Every night we run mysqladmin flush-logs from
crontab (as well as some other things) on most of these servers.

On one server, mysqld dies with signal 11 every single night, right
during the mysqladmin flush-logs command. None of the others ever does that.
This is repeatable; it happens every night.

We're investigating possible causes, but in the meantime I'm also
curious if anyone else on this list has run into something similar
and has some suggestions.


Here's the backtrace portion of the error log from the most recent crash:

| Attempting backtrace. You can use the following information to find out
| where mysqld died. If you see no messages after this, something went
| terribly wrong...
| Cannot determine thread, fp=0x45394f78, backtrace may not be correct.
| Stack range sanity check OK, backtrace follows:
| 0x5f737400746f6f72
| New value of fp=0x1874230 failed sanity check, terminating stack trace!
| Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and
| follow instructions on how to resolve the stack trace. Resolved
| stack trace is much more helpful in diagnosing the problem, so please do
| resolve it
| Trying to get some variables.
| Some pointers may be invalid and cause the dump to abort...
| thd->query at (nil) is invalid pointer
| thd->thread_id=12310
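[The log's plea to "resolve the stack" refers to MySQL's resolve_stack_dump utility. A rough sketch of the procedure, assuming the numeric trace has been saved to a file and the mysqld binary is unstripped; the paths are illustrative only:]

```shell
# Build a symbol table from the (unstripped) mysqld binary
nm -n /usr/sbin/mysqld > /tmp/mysqld.sym

# Translate the numeric addresses from the error log into symbol names
resolve_stack_dump -s /tmp/mysqld.sym -n /tmp/mysqld.stack
```

[That said, the first frame above (0x5f737400746f6f72) fails the sanity check, so this particular trace may resolve to nothing useful.]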

  -- Cos




Re: Design decision

2007-10-03 Thread Brent Baisley
I'm not sure why you say 30 or more inserts will take too long. As
long as you do a bulk insert, it's just a single command; 30
individual inserts will take their toll.
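[A minimal sketch of the multi-row form being described, using SQLite purely so the snippet is self-contained; the table and column names are invented: thirty rows go in as one statement rather than thirty.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE keywords (object_id INTEGER, word TEXT)")

# One INSERT carrying all 30 rows: a single parse and a single round
# trip, versus 30 separate statements.
rows = [(1, "kw%d" % i) for i in range(30)]
placeholders = ", ".join(["(?, ?)"] * len(rows))
flat = [value for row in rows for value in row]
cur.execute("INSERT INTO keywords (object_id, word) VALUES " + placeholders, flat)

cur.execute("SELECT COUNT(*) FROM keywords")
count = cur.fetchone()[0]
print(count)  # 30
```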


You are really looking for a logging system. You're not going to be
querying the table all that much, just doing a lot of inserts, so millions
and millions of rows isn't that big of a deal. You're not deleting
anything either, so if you set your parameters right, you can use
MyISAM tables without locking issues. You need to set the appropriate
parameter (concurrent_insert) so that new rows only go to the end of the
table.


When your table reaches a certain size (50 million?), you rename it and
create an empty one. If you need to query multiple tables after you
have a bunch, just create a MERGE table. I've done a similar setup on
a system that added 5-7 million records per day.
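[The rotate-and-merge step might look like this in MySQL 5; the table and column names are made up for illustration, and note that a MERGE table requires identically structured MyISAM tables:]

```sql
-- Retire the full table and start a fresh, empty one
RENAME TABLE search_log TO search_log_2007_09;
CREATE TABLE search_log LIKE search_log_2007_09;

-- Later, query several retired tables as one
CREATE TABLE search_log_all (
  user_id INT,
  keyword VARCHAR(64),
  logged  DATETIME
) ENGINE=MERGE UNION=(search_log_2007_08, search_log_2007_09) INSERT_METHOD=NO;
```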


Alternatively, have a text field that logs all the keywords and a word
count field that tells you how many words were entered. You would
need to parse the words or use full-text indexing to perform
analysis, but I'm guessing that would be a common requirement anyway.



On Oct 3, 2007, at 3:57 AM, Scott Haneda wrote:


I have a table of objects; attached to those objects are keywords.
Users submit the keywords to the objects.

Currently, I chose to keep a hit count on the keywords, so if a duplicate
keyword is supplied, a counter is incremented.

I thought this was a good idea, as it keeps the number of rows in the
keywords table to a minimum.

However, this is a user-login-based system, and with the above, I lose the
ability to track which users sent in which keywords.

So I can move to the keywords table storing duplicate keywords, and each
keyword will get a user_id attached to it, but that table will grow. Fast.

1000 objects, each with only 10 keywords, is just 10,000 rows, but I could
hit many millions very fast.

I could toss in a third table and relate that to the user_id and keyword.
However, I am inserting in one go, and that could be upwards of 30 or more
inserts, taking too long.

Anyone got any suggestions?  Thanks.
--
-
Scott Haneda                    Tel: 415.898.2602
http://www.newgeo.com           Novato, CA U.S.A.











single line inserts with on duplicate key

2007-10-03 Thread Scott Haneda
Is it possible, in single-line inserts, with on duplicate key clauses, to
get back a list of last insert id's for what was inserted?

I get strange results: just one single insert id, which makes sense from the
perspective of the statement as a whole; however, I need to know the
returned insert id for each row of an arbitrary number of single-line
inserts.

Defining what I am calling single-line inserts, I mean:
INSERT INTO foo (a, b, c) VALUES ('x', 'y', 'x'), ('x', 'y', 'x'), ('x',
'y', 'x'), etc etc
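[For reference: after a multi-row INSERT, MySQL's LAST_INSERT_ID() returns only the id generated for the first inserted row, and rows hitting the duplicate-key path reuse existing ids, so the generated values need not even be contiguous. There is no built-in per-row list; one workaround is to insert row by row and record each id as you go. A sketch using SQLite so it is self-contained, with INSERT OR IGNORE standing in for the duplicate-key clause; the UNIQUE key on column a is an assumption:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE foo (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    a TEXT UNIQUE, b TEXT, c TEXT)""")

rows = [("x1", "y", "z"), ("x2", "y", "z"), ("x1", "y", "z")]  # third is a dup
ids = []
for a, b, c in rows:
    # INSERT OR IGNORE plays the role of MySQL's duplicate-key handling.
    cur.execute("INSERT OR IGNORE INTO foo (a, b, c) VALUES (?, ?, ?)",
                (a, b, c))
    # Look the id up by key, so duplicates report their existing id.
    cur.execute("SELECT id FROM foo WHERE a = ?", (a,))
    ids.append(cur.fetchone()[0])

print(ids)  # [1, 2, 1]
```

[Wrapping the loop in one transaction keeps the cost of the individual statements down.]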
-- 
-
Scott Haneda                    Tel: 415.898.2602
http://www.newgeo.com           Novato, CA U.S.A.


