Re: Connection problem after long idle periods

2006-02-01 Thread David Logan

Michael Lai wrote:

I am currently using MySQL 5.0.15 with mysql-connector-java-3.1.11.  I 
can access the database from my JSPs with no problem except for one 
small issue.  After a long delay (usually overnight), when someone 
first tries to access the database, I would get the following error:


Communications link failure due to underlying exception:

** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Broken pipe

STACKTRACE:
java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
        at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:2690)
        at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:2619)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1552)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1666)
        at com.mysql.jdbc.Connection.execSQL(Connection.java:2978)
        at com.mysql.jdbc.Connection.execSQL(Connection.java:2902)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:933)
        at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1027)
        at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:92)
        at org.apache.jsp.processLogin_jsp._jspService(org.apache.jsp.processLogin_jsp:81)
        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:322)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:291)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:241)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
        at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
        at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
        at java.lang.Thread.run(Thread.java:595)
** END NESTED EXCEPTION **

Last packet sent to the server was 0 ms ago.


But afterwards, on the second and subsequent attempts, the database
runs fine.  It is only the first try after a long idle period that
produces this error.  I am not sure how to fix this.  I am guessing
that the database connection goes down after a long idle period and
has to be recreated on the first try.



Hi Michael,

Try looking at
http://dev.mysql.com/doc/refman/4.1/en/server-system-variables.html, more
specifically the connect_timeout variable. You should run SHOW VARIABLES
LIKE '%connect%'; and check the value.

It may be this, and you can stretch it out further if you need to.
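For reference, the check and a possible adjustment might look like the sketch below. Note that for connections dropped after long idle periods, the server-side wait_timeout variable (default 28800 seconds, i.e. 8 hours) is often the one that actually matters; the 86400 value here is just an illustrative placeholder:

```sql
-- List the timeout- and connect-related server variables (4.1/5.0 syntax):
SHOW VARIABLES LIKE '%connect%';
SHOW VARIABLES LIKE '%timeout%';

-- Hypothetical adjustment: let idle connections survive 24 hours.
-- Requires the SUPER privilege and only affects new connections.
SET GLOBAL wait_timeout = 86400;
```

Alternatively, the connection pool can be told to test connections before handing them out (e.g. DBCP's validationQuery setting), which sidesteps the server timeout entirely.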

Regards

--

David Logan
South Australia

when in trouble, or in doubt
run in circles, scream and shout



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: charset

2006-02-01 Thread OKAN ARI
Yes, it is MySQL 4.0. How can I set it to latin5, or how can I set the server
to latin5?


os: centos 4.2

Thank you
- Original Message - 
From: Gleb Paharenko [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, January 31, 2006 1:40 PM
Subject: Re: charset



Hello.

If mysql4 means MySQL 4.0, there should be a server system variable
default_character_set. If it is 4.1, see:
 http://dev.mysql.com/doc/refman/4.1/en/charset-syntax.html
 http://dev.mysql.com/doc/refman/4.1/en/charset-connection.html


OKAN ARI wrote:

How can I set the character set to latin5 for MySQL 4 on Linux?

OKAN



--
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
  __  ___ ___   __
 /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
/ /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
  ___/   www.mysql.com








password? getting started

2006-02-01 Thread hawat . thufir

from linux Fedora, FC4:

[EMAIL PROTECTED] ~]# yum -y groupinstall 'MySQL Database'
Setting up Group Process
Setting up repositories
base  100% |=| 1.1 kB00:00
Setting up repositories
Reading repository metadata in from local files
Passing package list to Install Process
Parsing package install arguments
Nothing to do
[EMAIL PROTECTED] ~]# mysql -h localhost -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket 
'/var/lib/mysql/mysql.sock' (2)

[EMAIL PROTECTED] ~]# whoami
root
[EMAIL PROTECTED] ~]# date
Wed Feb  1 03:31:09 EST 2006
[EMAIL PROTECTED] ~]#


Now, I've read references indicating that the password must be set first.
This isn't the password for the root account, nor for a user account on
the local system, but a password for the database, yes?


RTFM, I know, I'm going to the library first chance I can.


-Thufir




Re: password? getting started

2006-02-01 Thread hawat . thufir

Got it started:


[EMAIL PROTECTED] ~]# service mysqld start
Initializing MySQL database:  Installing all prepared tables
Fill help tables
ERROR: 1064  You have an error in your SQL syntax; check the manual that corresponds 
to your MySQL server version for the right syntax to use near '' at line 1

060201  3:46:14 [ERROR] Aborting

060201  3:46:14 [Note] /usr/libexec/mysqld: Shutdown complete


WARNING: HELP FILES ARE NOT COMPLETELY INSTALLED!
The HELP command might not work properly


To start mysqld at boot time you have to copy support-files/mysql.server
to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h localhost.localdomain password 'new-password'
See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with the benchmarks in the 'sql-bench' directory:
cd sql-bench ; perl run-all-tests

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at https://order.mysql.com
   [  OK  ]
Starting MySQL:[  OK  ]
[EMAIL PROTECTED] ~]# date
Wed Feb  1 03:46:44 EST 2006
[EMAIL PROTECTED] ~]# nano sql.2.txt
[EMAIL PROTECTED] ~]# /usr/bin/mysqladmin -u root password 'password'
[EMAIL PROTECTED] ~]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.1.16

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> quit
Bye
[EMAIL PROTECTED] ~]# date
Wed Feb  1 03:48:43 EST 2006
[EMAIL PROTECTED] ~]#



heh.  Now I just have to figure out how to do stuff!  :)



-Thufir




Re: password? getting started

2006-02-01 Thread hawat . thufir

Ok, I logged on to the GUI Administrator as:

root
localhost
/var/lib/mysql/mysql.sock

from my Linux user account.  Tad confusing as to which root is being
referred to.  If I, for instance, restart the system, I'll need to
manually restart the database?  Or, if for some other reason the database
is stopped, it must be restarted?  Hmm.  And there are different levels
of users for the database, apparently.  Bit of adjustment required!




-Thufir




Rows counted but not returned after join

2006-02-01 Thread Dougal Watson
When I perform a particular query using a join the number of rows counted as
hits exceeds the number of rows returned. I think I know why but don't know
how to interrogate the database to confirm (and remedy) my suspicion.

The basic database structure is two tables. Each table contains an email
address field. The join allows me to link a publication reference in the
first table, via the email address, to the author's name in the second.

I suspect my problem results from there being a couple of email addresses in
the first table that do not have a counterpart in the second. I've been
trying to write a query to find out whether there are any email addresses in
the first table that do not have a counterpart in the second.

My follow-on question is how do I write my original query so that if there's
no corresponding email address in the second table, a result is still
returned?

My basic query is of the form:
$query = SELECT datetime, author_email, title, body, author.person AS
author_name
FROM $table01, $table02
WHERE
(
(
((body REGEXP '$searchstring01')
OR
(title REGEXP '$searchstring01'))
$choice3
((body REGEXP '$searchstring02')
OR
(title REGEXP '$searchstring02'))
$choice4
((body REGEXP '$searchstring03')
OR
(title REGEXP '$searchstring03'))
)
AND
((YEAR(datetime) >= '$startyear') AND (YEAR(datetime) <=
'$endyear'))
 AND
 (postings.author_email = author.email)
 )
 ORDER BY  datetime DESC

I also perform a COUNT query, but that query doesn't include the join, just
the hits against the searchstrings. I've tried the COUNT with the join, but
it returns an error.

I'm sorry if these are very basic, MySQL 101, questions ... I am a very
basic recreational coder :-)

Cheers
Dougal






Re: Testing Concurrent Insert on InnoDB

2006-02-01 Thread Gleb Paharenko
Hello.

 The table should not be locked, right :D

Most probably it will be locked.  If you are not running in strict SQL
mode, any column not explicitly given a value is set to its default
(explicit or implicit) value. For example, if you specify a column list
that does not name all the columns in the table, unnamed columns are set
to their default values.  For an AUTO_INCREMENT column, the default
value is the next value in the sequence. So your insert will put an
AUTO-INC lock on the table.
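As a sketch of the statement under discussion (the t_x_* column names are hypothetical, since the schema of table X was not posted), the insert that triggers the AUTO-INC lock would look like:

```sql
-- t_y_id is omitted from the column list, so InnoDB assigns it from the
-- AUTO_INCREMENT sequence -- and therefore takes the table-level AUTO-INC
-- lock on Y for the duration of the whole INSERT ... SELECT.
INSERT INTO Y (t_y_time, t_y_dest, t_y_msg)
SELECT t_x_time, t_x_dest, t_x_msg
FROM X;
```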







Ady Wicaksono wrote:
 Hi Gleb
 
 Thanx a lot for explanation, make sense.
 I haven't try with INSERT INTO Y (t_y_time,and so on) SELECT
 (t_x_time, ... and so on) FROM X
 excluding t_y_id
 
 The table should not be locked, right :D
 
 Thx
 
 
 Gleb Paharenko wrote:
 
 Hello.

 It seems that the problem is in the t_y_id auto_increment field. InnoDB
 puts special AUTO-INC table lock, and prevent other threads from
 inserting into Y. See:
 http://dev.mysql.com/doc/refman/5.0/en/innodb-auto-increment-column.html



 Ady Wicaksono wrote:
  

 Below is the SQL to create table Y

 CREATE TABLE `Y` (
 `t_y_id` int(11) NOT NULL auto_increment,
 `t_y_time` timestamp NOT NULL default CURRENT_TIMESTAMP,
 `t_y_dest` varchar(16) NOT NULL default '',
 `t_y_msg` varchar(160) NOT NULL default '',
 `t_y_status` tinyint(2) NOT NULL default '0',
 `t_y_type` varchar(16) NOT NULL default '',
 `t_y_trx_id` varchar(40) NOT NULL default '',
 `t_y_trx_date` varchar(33) NOT NULL default '',
 `t_y_serviceid` varchar(20) NOT NULL default '',
 `t_y_pin` varchar(15) NOT NULL default '',
 `t_y_key` varchar(20) NOT NULL default '',
 `t_y_ans` varchar(160) NOT NULL default '',
 `in_sms_message_id` varchar(22) NOT NULL default '',
 `in_sms_time` datetime NOT NULL default '0000-00-00 00:00:00',
 `t_y_city` varchar(50) NOT NULL default '',
 PRIMARY KEY  (`t_y_id`),
 KEY `t_idx01` (`t_y_type`(3)),
 KEY `t_idx02` (`t_y_key`(3)),
 KEY `t_idx03` (`t_y_ans`(8)),
 KEY `t_idx04` (`t_y_dest`(7)),
 KEY `t_idx05` (`t_y_dest`(13),`t_y_key`(15),`t_y_ans`(10)),
 KEY `t_idx06` (`t_y_time`),
 KEY `t_idx07` (`t_y_time`,`t_y_key`(6)),
 KEY `t_idx08` (`t_y_trx_id`(8)),
 KEY `t_idx09` (`t_y_trx_id`(10),`t_y_dest`(6)),
 KEY `t_idx10` (`t_y_status`,`t_y_type`(3)),
 KEY `t_idx11` (`in_sms_time`),
 KEY `t_idx12` (`t_y_time`,`t_y_type`(3)),
 KEY `t_idx13` (`t_y_city`(7))
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1

   




  

 
 





Re: charset

2006-02-01 Thread Gleb Paharenko
Hello.

mysqld has a --default-character-set=name command-line option. Put
latin5 instead of 'name'. It should be possible to specify this variable
through the configuration file or an environment variable. You can check
whether your server supports latin5 with the statement:
  SHOW VARIABLES LIKE 'character_sets';
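Putting that together (a sketch; the configuration-file form would be a default-character-set=latin5 line in the [mysqld] section of my.cnf), the check on a running 4.0 server could be:

```sql
-- List the compiled-in character sets and the current defaults;
-- latin5 should appear in the character_sets value if it is supported.
SHOW VARIABLES LIKE 'character_set%';
```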

OKAN ARI wrote:
 yes it is mysql 4.0 how can I set i to latin 5 or how can I set the
 server to latin 5?
 
 os: centos 4.2
 
 Thank you
 - Original Message - From: Gleb Paharenko
 [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Tuesday, January 31, 2006 1:40 PM
 Subject: Re: charset
 
 
 Hello.

 In case mysql4 is mysql4.0 there should be a server system variable
 default_character_set. If it is 4.1 see:
  http://dev.mysql.com/doc/refman/4.1/en/charset-syntax.html
  http://dev.mysql.com/doc/refman/4.1/en/charset-connection.html


 OKAN ARI wrote:

 How can I set the characterset to latin5 for mysq4 in linux?

 OKAN




 





Re: OPTIMIZE fails on very large table

2006-02-01 Thread Gleb Paharenko
Hello.

If you're not using per-table tablespaces (innodb_file_per_table),
OPTIMIZEing the table most probably won't decrease the size of the
ibdata files. See:
  http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html

for how to decrease the size of the tablespace. OPTIMIZE TABLE is mapped
to ALTER TABLE for InnoDB; it recreates the whole table, and most
probably it will need a lot of space in the directory where MySQL stores
temporary files. If this table is in a separate file, you can dump it to
another machine with mysqldump through the network, drop it, and restore
it from the dump.
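For the per-table case, a rebuild reclaims the space in the table's own .ibd file. A minimal sketch, with a hypothetical table name:

```sql
-- For InnoDB, OPTIMIZE TABLE is mapped to a full table rebuild; either
-- form recreates the table and its indexes, shrinking a per-table .ibd:
OPTIMIZE TABLE big_table;
-- equivalently:
ALTER TABLE big_table ENGINE=InnoDB;
```

Note this still needs enough free temporary space to hold a copy of the table while it is rebuilt.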



Rithish Saralaya wrote:
 Hello.
 
 We had an InnoDB table of a very large size, 47 GB. Recently we deleted a
 lot of rows from the table in order to reduce its size. The number of rows
 now stands at 164,000+; however, the disk usage remains the same. I tried
 OPTIMIZEing the table, but the OPTIMIZE fails after running for about an
 hour and a half.
 
 We have 15 GB of free space left in /var/lib/mysql/ and about 14 GB in /tmp/.
 
 The MySQL server is running on a 2*3.00 GHz Linux box with 4 GB RAM and
 140 GB HDD.
 
 How will we be able to regain the free space?
 
 Comments/suggestions/flamings solicited.
 
 Regards,
 Rithish.
 





Re: Rows counted but not returned after join

2006-02-01 Thread Gleb Paharenko
Hello.

 trying to write a query to find out whether there are any email
addresses in the first table that do not have a counterpart in the second.

It seems like a job for a LEFT JOIN. To see the records which are
present in table A and not present in table B, use this query:

SELECT A.* FROM A LEFT JOIN B USING(common_field)
WHERE B.common_field IS NULL;


See:
  http://dev.mysql.com/doc/refman/5.0/en/join.html

There is a good article as well:
  http://dev.mysql.com/tech-resources/articles/mysql-db-design-ch5.html
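Applied to the tables from the question (reusing the postings and author names that appear in the original query; other column names are taken from it as well), the orphan check and the keep-the-row-anyway report might look like:

```sql
-- Email addresses in postings with no counterpart in author:
SELECT DISTINCT p.author_email
FROM postings AS p
LEFT JOIN author AS a ON a.email = p.author_email
WHERE a.email IS NULL;

-- The original report as a LEFT JOIN: rows survive even when no author
-- matches; author_name simply comes back NULL for those rows.
SELECT p.datetime, p.author_email, p.title, a.person AS author_name
FROM postings AS p
LEFT JOIN author AS a ON a.email = p.author_email
ORDER BY p.datetime DESC;
```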


Dougal Watson wrote:
 When I perform a particular query using a join the number of rows counted as
 hits exceeds the number of rows returned. I think I know why but don't know
 how to interrogate the database to confirm (and remedy) my suspicion.
 
 The basic database structure is two tables. Each table contains an email
 address field. The join allows me to link a publication reference in the
 first table, via the email address, to the author's name in the second.
 
 I suspect my problem results from there being a couple of email addresses in
 the first table that do not have a counterpart in the second. I've been
 trying to write a query to find out whether there are any email addresses in
 the first table that do not have a counterpart in the second.
 
 My follow-on question is how do I write my original query so that if there's
 no corresponding email address in the second table, a result is still
 returned?
 
 My basic query is of the form:
 $query = SELECT datetime, author_email, title, body, author.person AS
 author_name
 FROM $table01, $table02
 WHERE
 (
 (
 ((body REGEXP '$searchstring01')
 OR
 (title REGEXP '$searchstring01'))
 $choice3
 ((body REGEXP '$searchstring02')
 OR
 (title REGEXP '$searchstring02'))
 $choice4
 ((body REGEXP '$searchstring03')
 OR
 (title REGEXP '$searchstring03'))
 )
 AND
 ((YEAR(datetime) >= '$startyear') AND (YEAR(datetime) <= '$endyear'))
  AND
  (postings.author_email = author.email)
  )
  ORDER BY  datetime DESC
 
 I also perform a COUNT query but that query doesn't include the join, just
 the hits against the searchstrings. I've tried the COUNT with the join but
 it returns an error.
 
 I'm sorry if these are very basic, MySQL 101, questions ... I am a very
 basic recreational coder :-)
 
 Cheers
 Dougal
 
 





Memory problems?

2006-02-01 Thread Rohit Peyyeti

Gurus,

We have recently installed beta version of our application on IBM - Pentium 
Xeon - dual processor, 1 GB RAM, 80 GB HDD on Fedora Core 3 OS. The kernel 
version which I currently have is: 2.6.9-1.667 and Mysql Server version: 
4.0.20-standard.


I have noticed that the mysql processes (found using the 'top' command) take
up a substantial amount of memory over a period of time. Please take a look
at the mysqld lines from the 'top' output below:



PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2478 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.31 mysqld
2507 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.50 mysqld
2508 mysql 20 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2509 mysql 17 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2510 mysql 15 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2511 mysql 20 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2547 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.58 mysqld
2548 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.27 mysqld
2549 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2550 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.16 mysqld
2758 mysql 15 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2760 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:02.13 mysqld
2835 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:57.11 mysqld
2933 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2947 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2948 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2949 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2950 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2962 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2963 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2964 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2965 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld

My question:

1) I see 20 processes when using the 'top' command, but I have only 10 open
connections (I am using connection pooling). Does it mean that the processes
shown in the top output and the open connections are not related? Can
somebody please clarify?


2) At the beginning of the server startup, I noticed that each mysql
process under top began by taking approx 10-12 MB. This kept
increasing. The memory occupied by one mysql process is now 20m (as
you can see above). This memory only increases over time; I have never
seen it come down throughout the application's use. Why is this so? Is
there any problem with the application or the way mysql is configured?


Any pointers?

Thanks in advance,
Rohit 






Re: Memory problems?

2006-02-01 Thread Rohit Peyyeti

Thanks for your quick response. Another question:

So, what you are saying is that the sum of the memory (RES, 20m) of all the
mysql processes shown by the 'top' command is not the total memory occupied
by the mysql server? Like 22 threads * 20m = 440 MB?


Also, any idea why this keeps increasing?

Thanks,
Rohit


- Original Message - 
From: Lars Heidieker [EMAIL PROTECTED]

To: Rohit Peyyeti [EMAIL PROTECTED]
Sent: Wednesday, February 01, 2006 4:37 PM
Subject: Re: Memory problems?


All these processes share the same address space (the Linux way of doing
threads), therefore their sizes don't add up.
It is correct: one thread per connection, plus a few internal mysql threads
(e.g. InnoDB has a few maintenance threads).
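To see what the server itself thinks, rather than trusting top's per-process numbers, one can compare thread counts and the big fixed buffers. A sketch (variable availability differs slightly across 4.0.x versions):

```sql
-- Connection/thread counters: should be roughly pool size plus a few
-- internal threads, matching the mysqld lines seen in top.
SHOW STATUS LIKE 'Threads%';

-- The global buffers that dominate mysqld's shared resident size:
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE '%buffer_size%';
```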


On 1 Feb 2006, at 12:00, Rohit Peyyeti wrote:


Gurus,

We have recently installed beta version of our application on IBM - 
Pentium Xeon - dual processor, 1 GB RAM, 80 GB HDD on Fedora Core 3  OS. 
The kernel version which I currently have is: 2.6.9-1.667 and  Mysql 
Server version: 4.0.20-standard.


I have noticed that mysql processes (using 'top' command) take up 
substantial amount of memory over period of time. Please take a  look at 
mysql processes output of 'top' command below:



PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2478 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.31 mysqld
2507 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.50 mysqld
2508 mysql 20 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2509 mysql 17 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2510 mysql 15 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2511 mysql 20 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2547 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.58 mysqld
2548 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.27 mysqld
2549 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2550 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.16 mysqld
2758 mysql 15 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2760 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:02.13 mysqld
2835 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:57.11 mysqld
2933 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2947 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2948 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2949 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2950 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2962 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2963 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2964 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2965 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld

My question:

1) I see 20 processes in using the 'top' command but I have only 10  open 
connections (I am using connection pooling). Does it mean that  processes 
shown in the top command and open connections are not  related? Can 
somebody please clarify?


2) At the begining of the server startup, I have noticed that each  mysql 
process under top command begin taking approx 10-12 MB. This  kept 
increasing. The current memory occupied by one mysql process  now is 20m 
(as you can see above). This memory only increases over  the period, but 
I have never seen it coming down throughout the  application use. Why is 
this so? Is there any problem with the  application or the way mysql is 
configured?


Any pointers?

Thanks in advance,
Rohit








Totally different join-behaviour in mysql 4 and 5?

2006-02-01 Thread Jan Kirchhoff


As I already wrote, I am trying to get a replication running from a mysql-4.1.13
(32bit) master to a 5.0.18 (64bit) slave. It only runs for a few minutes and
then a query hangs.
I think I now found out why:
I modified a multi-table update that hung into a SELECT. The same query on
absolutely identical tables gives totally different EXPLAINs on the two systems:
while my query has a cardinality of 23,124*1=23,124 on mysql4, it has
6,412*34,341=220,194,492 on mysql5 - and takes forever and makes me think
everything hangs?!
I verified this with a dump of the two tables that I imported on a few
different systems. I created a new test database, piped the dump into it, and
ran the following queries.

It looks like the join behaviour of mysql has totally changed in mysql5!
This seems to affect quite a lot of queries here.
Am I making really stupid mistakes, or did I miss a major chapter in the
upgrading-to-mysql5 documentation that I read over and over during the last days?

Jan

I put the mysql versions in front of the mysql prompts:


4.1.13-standard mysql> show table status \G
*************************** 1. row ***************************
           Name: dlstm_data_d
         Engine: MyISAM
        Version: 9
     Row_format: Dynamic
           Rows: 48621
 Avg_row_length: 49
    Data_length: 2428108
Max_data_length: 4294967295
   Index_length: 1921024
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:54:57
    Update_time: 2006-02-01 11:55:08
     Check_time: 2006-02-01 11:55:09
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
*************************** 2. row ***************************
           Name: stm_data_d
         Engine: MyISAM
        Version: 9
     Row_format: Dynamic
           Rows: 480772
 Avg_row_length: 105
    Data_length: 50816164
Max_data_length: 4294967295
   Index_length: 57697280
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:55:09
    Update_time: 2006-02-01 11:57:00
     Check_time: 2006-02-01 11:58:58
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
2 rows in set (0.00 sec)


5.0.18-max-log mysql> show table status \G
*************************** 1. row ***************************
           Name: dlstm_data_d
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 48621
 Avg_row_length: 49
    Data_length: 2384860
Max_data_length: 281474976710655
   Index_length: 1836032
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:54:05
    Update_time: 2006-02-01 11:54:07
     Check_time: 2006-02-01 11:54:07
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
*************************** 2. row ***************************
           Name: stm_data_d
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 480772
 Avg_row_length: 104
    Data_length: 50192768
Max_data_length: 281474976710655
   Index_length: 52738048
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:54:07
    Update_time: 2006-02-01 11:55:12
     Check_time: 2006-02-01 11:55:40
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
2 rows in set (0.00 sec)


4.1.13-standard mysql> explain select * from dlstm_data_d s, stm_data_d t where s.sym_cd is null and s.sys_cdd=t.sys_cdd and s.local=t.local and s.local is not null and s.local!='' and s.sys_cdd is not null and s.sys_cdd!='' and t.sym_cd is not null;
++-+---+--+---+-+-+---+---+-+

| id | select_type | table | type | possible_keys | key | 
key_len | ref   | rows  | Extra 

Re: innodb questions

2006-02-01 Thread Grant Giddens
Ware,

  Thanks so much for your help.  

This is what I'm planning on doing; please let me know if you see any problems.
This is on my test server, so losing data isn't the end of the world.
The plan, in order:

1.  Backup the database via mysqldump
2.  Purge the master logs via PURGE MASTER LOGS command.  Maybe I should just 
delete them?  It's a test server and I don't plan on going back to a previous 
snapshot.
3.  Shut down the mysql server
4.  Edit my.cnf to remove log-bin
5.  Edit my.cnf to add innodb_file_per_table
6.  Move the tablespace (I guess this is the ib_logfile0, ib_logfile1, and
ibdata1 files)
7.  Start the mysql server.  I think at this point the tablespace will be
recreated.  Am I right?
8.  Re-import my data from my file at step 1.

This leads me to several questions though:
1.  My web host server is running CentOS linux.  How do I know it's maximum 
filesize?  Should I limit the ibdata1 type files such as:

innodb_data_file_path = /ibdata/ibdata1:100M:autoextend:max:2000M

What if my table is larger than 2G?

2.  If I change my tables (add/drop an index and such), do I need to repeat
this mysqldump, recreate-the-tablespace, re-import process each time?

3.  What if I'm running multiple databases with innodb tables on one mysql 
server?  I guess if I use innodb_file_per_table it will help me keep the file 
sizes down and make it easier to drop and reload individual databases.  Am I 
correct here?

Thanks,
Grant


Ware Adams [EMAIL PROTECTED] wrote: On Jan 31, 2006, at 9:54 PM, Grant 
Giddens wrote:

 Since changing these tables, I've noticed some large files in my / 
 var/lib/mysql directory.  This is on my test server and I'm running  
 gentoo linux.

 The files in this directory look like:

 /var/lib/mysql/gentoo1-bin.01  (1 Gig in size)
 /var/lib/mysql/gentoo1-bin.01  (1 Gig in size)
 /var/lib/mysql/gentoo1-bin.01  (1 Gig in size)
 /var/lib/mysql/gentoo1-bin.01  (1 Gig in size)
 ...
 /var/lib/mysql/gentoo1-bin.60  (1 Gig in size)

These all look like binary log files; they aren't exclusive to  
InnoDB.  You must have enabled binary logging in your my.cnf file as  
well.

This is covered pretty extensively in the manual.  Basically they are  
used for replication and recovery.  For the latter you can take a  
point-in-time snapshot from a known binary log position and then, if  
need be, apply the logs (which is basically re-running the queries in  
them) to that to get back to where your database was at any point in  
time.

If you don't need any of these functions you can either get rid of  
the files or turn off binary logging.  However, you probably don't  
want to just delete them from the file system.  Have a look at the  
'show master logs' and 'purge master logs' commands.  This will  
delete the files and keep mysqld's internal index of the binary logs  
accurate.

 /var/lib/mysql/ibdata1(10.0 Gig in size)

This is your InnoDB table space, you need it.  You also need your  
ib_logfile0 etc... files.

 3. Can I limit the size of the ibdata1 file?

Only by limiting data in your tables.  Also, in the shared table  
space (which you are using) you can't shrink this file.  You can  
switch to innodb_file_per_table (see the InnoDB manual) so that when  
you drop a table you reclaim its space (but not on deletion of  
individual records).  However, even doing this you cannot delete your  
ibdata1 file or any shared tablespace files.  You can recreate your  
entire tablespace (see below), but even then you'll need a (small)  
shared ibdata file.

   Is it too late to resize it?

Yes, but you could use mysqldump to dump all data to text files,  
delete (or move) the tablespace, redefine it and then re-import.

 4.  What can I set to reduce the size of these files?

Use innodb_file_per_table
turn off binary logging if you don't need it
make sure index and field types are appropriate

 My innodb variables are:

These came through poorly spaced, but I think it would help a lot to  
read the InnoDB sections of the manual (it's pretty manageable in size).

InnoDB is really fantastic for certain applications, including heavy  
write load to large tables with concurrent reads.  We've used it for  
several years on several billion records at 170 qps, 50% of which are  
writes.  There's no way to do this in MySQL other than InnoDB,  
realistically.

That said, it has its own learning curve.  It's really an entirely  
new database engine, so there's lots to learn even if you've used  
mysql/myisam for years.  In particular the tablespace layout and  
dependency on files other than ones linked to a particular table is  
a little daunting.

--Ware





Re: Memory problems?

2006-02-01 Thread JamesDR

Rohit Peyyeti wrote:

Thanks for your quick response. Another question:

So, what you are saying is that the sum of the memory (RES ~ 20m) of all the 
mysql processes shown by the 'top' command is not the total memory 
occupied by the mysql server? Like 22 threads * 20m = 440MB?


Also, any idea why this keeps increasing?

Thanks,
Rohit


- Original Message - From: Lars Heidieker [EMAIL PROTECTED]
To: Rohit Peyyeti [EMAIL PROTECTED]
Sent: Wednesday, February 01, 2006 4:37 PM
Subject: Re: Memory problems?


All these processes share the same address space (the Linux way of doing 
threads), therefore their sizes don't add up.
It is correct: one thread per connection, plus a few from mysql (e.g. 
innodb has a few maintenance threads).
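Lars's point, that Linux shows threads as separate top lines even though they share one address space, can be illustrated with a small Python sketch (purely illustrative, nothing MySQL-specific):

```python
import threading

# Threads of one process share a single address space, so per-thread
# memory figures all describe the same memory and must not be summed.
shared = {"counter": 0}
lock = threading.Lock()

def worker():
    # Every thread mutates the very same dict object; no copies exist.
    for _ in range(1000):
        with lock:
            shared["counter"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["counter"])  # 4000: all four threads updated one shared object
```

The same reasoning applies to mysqld: each top line is a per-thread view of one process, so 22 lines of ~20m describe roughly 20m of memory once, not 440MB.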


On 1 Feb 2006, at 12:00, Rohit Peyyeti wrote:


Gurus,

We have recently installed beta version of our application on IBM - 
Pentium Xeon - dual processor, 1 GB RAM, 80 GB HDD on Fedora Core 3  
OS. The kernel version which I currently have is: 2.6.9-1.667 and  
Mysql Server version: 4.0.20-standard.


I have noticed that mysql processes (using 'top' command) take up 
substantial amount of memory over period of time. Please take a  look 
at mysql processes output of 'top' command below:



PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2478 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.31 mysqld
2507 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.50 mysqld
2508 mysql 20 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2509 mysql 17 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2510 mysql 15 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2511 mysql 20 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2547 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.58 mysqld
2548 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.27 mysqld
2549 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2550 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.16 mysqld
2758 mysql 15 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2760 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:02.13 mysqld
2835 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:57.11 mysqld
2933 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2947 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2948 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2949 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2950 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2962 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2963 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2964 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld
2965 mysql 16 0 35036 20m 3616 S 0.0 2.1 0:00.00 mysqld

My question:

1) I see 20 processes in the 'top' output but I have only 10  
open connections (I am using connection pooling). Does this mean that  
the processes shown in the top command and the open connections are not  
related? Can somebody please clarify?


2) At the beginning of server startup, I noticed that each  
mysql process under the top command starts by taking approx 10-12 MB. This  
kept increasing. The memory occupied by one mysql process  
now is 20m (as you can see above). This memory only increases over  
time; I have never seen it come down throughout the  
application's use. Why is this so? Is there a problem with the  
application or the way mysql is configured?


Any pointers?

Thanks in advance,
Rohit



There are queries going on, so some data will be cached in memory. If 
memory serves me correctly, Linux won't clear out memory until it has 
to, so unless you are running low, that memory may stay in use until the 
kernel cleans house. Unless you are deep into swap, I wouldn't 
worry too much.
As an aside, your 1GB of memory seems a bit low to me. If I were you, and 
I had the money, I would toss in another 1GB.


Take a look at your in-use numbers. You may see that if you add up all 
the memory consumption of the system, the total is greater than 
what is displayed as in use. As the other respondent said, this is 
how Linux works.

--
Thanks,
James

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



error LNK2001: unresolved external symbol _mysql_real_escape_string

2006-02-01 Thread lakshmi.narasimharao

Hi,

  I am trying to use the mysql_real_escape_string() C API to escape
special characters. When I try to compile the program using
Visual Studio, I get the following link errors:
error LNK2001: unresolved external symbol _mysql_real_escape_string
fatal error LNK1120: 1 unresolved externals
Error executing link.exe.

I used libmysql.lib for linking and added the path to mysql.h to the
include files.

Please guide me here. To avoid the linking error, what do I need to do? I
am using mysql version 4.0.24.

Thanking you in advance.

Thanks,
Narasimha


-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 24, 2006 4:23 PM
To: mysql@lists.mysql.com
Subject: Re: Storing a string value with slash(\)


Hello.

 If I retrieve the same value with the slash (\), I am not able to display
 it properly in the GUI. Do we need to use any MySQL-specific functions to
 select such values?

If you're retrieving the same values which you have stored, then this is
rather a GUI problem, not a MySQL one.

 Is there any way in MySQL so that I can enter any number of slashes
 between the name with out escaping with another slash?.

It all depends on the way you're storing data in MySQL. You
can pass everything to mysql_real_escape_string(). See:
  http://dev.mysql.com/doc/refman/5.0/en/mysql-real-escape-string.html
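To make the escaping concrete, here is a rough Python sketch of the kind of character translation mysql_real_escape_string() performs. The escape table below is a simplified assumption based on the manual page above; in real code call the C API (or use parameterized queries) rather than rolling your own:

```python
# Simplified, assumed subset of the characters the MySQL client library
# escapes; an illustration only, not a replacement for the real API.
_ESCAPES = {
    "\0": "\\0",    # NUL
    "\n": "\\n",    # newline
    "\r": "\\r",    # carriage return
    "\\": "\\\\",   # a backslash itself is doubled
    "'": "\\'",     # single quote
    '"': '\\"',     # double quote
    "\x1a": "\\Z",  # Ctrl-Z
}

def escape_like_mysql(value: str) -> str:
    """Return value with special characters backslash-escaped."""
    return "".join(_ESCAPES.get(ch, ch) for ch in value)

print(escape_like_mysql("S\\Green"))  # the one backslash comes out doubled
print(escape_like_mysql("O'Brien"))   # the quote is escaped
```

This also explains the original poster's observation: a single \ typed at the front end must reach the server as \\, or it is consumed as an escape character.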



[EMAIL PROTECTED] wrote:

 Hi,

  From the front end I need to use double slashes, i.e. (\\), to
 enter one slash (\) into the MySQL database. I.e., if I enter
 Gelb\Paha, it is stored in MySQL as GlebPaha; if I enter
 S\\Greeen, it is stored as S\Green in the database.


 Is there any way in MySQL so that I can enter any number of slashes
 between the name with out escaping with another slash?.

 If I retrieve the same value with the slash (\), I am not able to display
 it properly in the GUI. Do we need to use any MySQL-specific functions to
 select such values?

 Please guide me for a solution here. Thanking you in advance.

 Thanks,
 Narasimha

 



 The information contained in this electronic message and any
 attachments to this message are intended for the exclusive use of the
 addressee(s) and may contain proprietary, confidential or privileged
 information. If you are not the intended recipient, you should not
 disseminate, distribute or copy this e-mail. Please notify the sender
 immediately and destroy all copies of this message and any
 attachments.

 WARNING: Computer viruses can be transmitted via email. The recipient
 should check this email and any attachments for the presence of
 viruses. The company accepts no liability for any damage caused by any

 virus transmitted by this email.

 www.wipro.com


--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com








Re: NOT IN vs IS NULL

2006-02-01 Thread SGreen
Devananda,

I have to support Peter on this one.  What he submitted to you is a 
perfectly appropriate solution. It seems as though you rejected his 
assistance before even trying to see if it would work. There have been and 
continue to be SQL-driven databases around that have not had and do not 
have subquery support. They function perfectly well without them.  By your 
response, it appears to me you don't know how a LEFT JOIN is supposed to 
operate. That is one of the most basic tools of any relational DBA's 
arsenal.

I implore everyone not comfortable with them to learn their JOINs (INNER 
JOIN, LEFT JOIN, RIGHT JOIN, CROSS JOIN, etc.) before you learn how to 
subquery (and I do NOT mean that comma-separated crap that poses as a 
CROSS JOIN). So many of the performance problems posted to this list are 
subquery-related when a simple, straightforward JOIN or two will solve the 
same problem usually with _much_ better response time. This post is a 
perfect example.

Explicit JOIN statements have the advantage of possibly using indexes at 
every layer of data aggregation. Subqueries lose that ability at the level 
of each subquery.  Some subqueries, depending on how they are written, are 
executed once PER ROW of your parent query data. If it takes .01 seconds 
to run a subquery by itself and you have 10,000 rows of parent data, that 
would be 100 seconds of nothing but subquery execution time before you 
even get to the GROUP BY or ORDER BY processing part of the query. 
Rewriting the same query to use an explicit JOIN may turn the .01 seconds 
of the naked subquery into .02-.04 seconds because of the indexing 
advantage and because it computes the set of matches between the tables 
only once.

Subqueries have their uses and are appropriate for many situations. 
However, IMHO, they should be your second-line of attack because of their 
inherent weaknesses. I believe this to be true for the majority of the 
RDBM systems in the world today.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



Peter Brawley [EMAIL PROTECTED] wrote on 02/01/2006 12:25:01 
AM:

 Devananda,
 
 Definitely not. The first LEFT JOIN, with the clauses 
 ON pt.login_id=pab1.login_id ... WHERE pab1.login_id IS NULL,
  does not make sense. 
 
 ?!? It's a standard JOIN replacement for NOT IN (...). 
 
 There is not a 1to1 relationship,
 
 These joins neither find nor assume 1:1 relationships.
 
  and pab1.login_id will rarely, if ever, be NULL in this query. 
 
 If you LEFT JOIN table a to table b using column c WHERE b.c IS 
 NULL, you get the rows of a which have no matching c value in b, 
 which is logically equivalent to: the c values in a that are NOT IN b.
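Peter's rewrite can be verified on toy data. The sketch below uses SQLite through Python's sqlite3 (tables a and b are invented for the demonstration) to show that the LEFT JOIN ... IS NULL anti-join returns exactly the NOT IN rows:

```python
import sqlite3

# Two tiny tables: a holds 1..4, b holds the "matching" values 2 and 4.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(c INTEGER);
    CREATE TABLE b(c INTEGER);
    INSERT INTO a VALUES (1),(2),(3),(4);
    INSERT INTO b VALUES (2),(4);
""")

# Subquery form: rows of a whose c is NOT IN b.
not_in = con.execute(
    "SELECT c FROM a WHERE c NOT IN (SELECT c FROM b)"
).fetchall()

# Anti-join form: LEFT JOIN keeps every row of a; rows without a match
# get NULL for b's columns, and the WHERE keeps only those.
anti_join = con.execute("""
    SELECT a.c FROM a
    LEFT JOIN b ON a.c = b.c
    WHERE b.c IS NULL
""").fetchall()

print(not_in)     # [(1,), (3,)]
print(anti_join)  # [(1,), (3,)]
```

One caveat worth knowing: the two forms diverge when b.c can contain NULL, in which case NOT IN matches nothing at all, while the anti-join keeps working.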
 
 This query ...
 
 SELECT DISTINCT pt.offer_id
 FROM paytable AS pt
 INNER JOIN offers AS o USING (offer_id)
 LEFT JOIN publisher_advertiser_blocks AS pab1 
   ON pt.login_id=pab1.login_id
 LEFT JOIN publisher_advertiser_blocks AS pab2 ON o.
 advertiser_id=pab2.advertiser_id
 WHERE pt.login_id=1
   AND pab1.login_id IS NULL OR pab2.advertiser_id IS NULL;
 
 gives the same result as yours does, on the data you provided.
 
 PB
 
 -
 
 Devananda wrote: 
 Peter Brawley wrote: 
 Devananda,
 
 OK, get the diescription in ordinary language right, and the query 
 ought to write itself. Your schema is like this? ...
 and you want the paytable rows 
   (i) for which there is an offers row matching paytable.offer_id, and
   (ii) for which there is no pab row where pab.login_id=paytable.
 login_id and pab.advertiser_id=offer_id?
 Not quite.  I want all the offer_id's which are stored in paytable ...
(i) for a specified login_id
(ii) that do not belong to advertiser_id's which that login_id is
 blocked from.
 
 Would that be ...
 
 SELECT pt.offer_id, pt.login_id, o.advertiser_id
 FROM paytable AS pt 
 INNER JOIN offers AS o USING (offer_id)
 LEFT JOIN publisher_advertiser_blocks AS pab1 ON 
pt.login_id=pab1.login_id 
 LEFT JOIN publisher_advertiser_blocks AS pab2 ON o.
 advertiser_id=pab2.advertiser_id
 WHERE pab1.login_id IS NULL OR pab2.advertiser_id IS NULL; 
 Definitely not. The first LEFT JOIN, with the clauses ON pt.
 login_id=pab1.login_id ... WHERE pab1.login_id IS NULL, does not 
 make sense. There is not a 1to1 relationship, and pab1.login_id will
 rarely, if ever, be NULL in this query. 
 
 For example, let's say login_id (1) has a hundred rows in paytable, 
 each belonging to a different offer_id (1) ... (100), and of those, 
 10 belong to advertiser_id (1), 5 belong to advertiser_id (2), and 
 the rest to (3). If pab (login_id, advertiser_id) contains the rows 
 (1,1),(1,2) then the query should return to me 85 rows from 
 paytable. Here's example data:
 
 INSERT INTO login_data (login_id) VALUES (1);
 INSERT INTO advertiser_data (advertiser_id) VALUES (1), (2), (3);
 INSERT INTO offers (offer_id, advertiser_id) VALUES (1, 1), (2, 1), 
 ..., (11, 2), (12, 2), ..., (16, 3), ..., (100,3);
 INSERT INTO paytable (offer_id, login_id) VALUES (1, 1), (2, 1), ..., 

Re: Totally different join-behaviour in mysql 4 and 5?

2006-02-01 Thread SGreen
Jan Kirchhoff [EMAIL PROTECTED] wrote on 02/01/2006 06:31:20 AM:

 
 As I already wrote, I am trying to get a replication running from a 
 mysql-4.1.13 (32bit) master to a 5.0.18 (64bit) slave. It only runs 
 for a few minutes and then a query hangs.
 I think I now found out why:
 I modified a multi-table update that hung into a select.  The same 
 query on the absolutely identical tables gives totally different 
 explains on both systems:
 While my query has a cardinality of 23,124*1=23,124 on mysql4, it 
 has 6,412*34,341=220,194,492 on mysql5 - and takes forever and 
 makes me think everything hangs?!
 I verified this with a dump of two tables that I imported on a 
 few different systems. I created a new test-database, piped the dump
 into that and ran the following queries.
 
 It looks like the join behaviour of mysql has totally changed in mysql5!
 This seems to affect quite a lot of queries here.
 Am I making really stupid mistakes, or did I miss a major chapter in 
 the upgrading-to-mysql5 documentation that I read over and over 
 during the last days?
 
 Jan
 
 I put the mysql versions in front of the mysql prompts:
 
 
 4.1.13-standard mysql> show table status;
 +--------------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
 | Name         | Engine | Version | Row_format | Rows   | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time          | Collation         | Checksum | Create_options | Comment |
 +--------------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
 | dlstm_data_d | MyISAM |       9 | Dynamic    |  48621 |             49 |     2428108 |      4294967295 |      1921024 |         0 |           NULL | 2006-02-01 11:54:57 | 2006-02-01 11:55:08 | 2006-02-01 11:55:09 | latin1_swedish_ci |     NULL |                |         |
 | stm_data_d   | MyISAM |       9 | Dynamic    | 480772 |            105 |    50816164 |      4294967295 |     57697280 |         0 |           NULL | 2006-02-01 11:55:09 | 2006-02-01 11:57:00 | 2006-02-01 11:58:58 | latin1_swedish_ci |     NULL |                |         |
 +--------------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
 2 rows in set (0.00 sec)
 
 
 5.0.18-max-log mysql> show table status;
 +--------------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
 | Name         | Engine | Version | Row_format | Rows   | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time          | Collation         | Checksum | Create_options | Comment |
 +--------------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
 | dlstm_data_d | MyISAM |      10 | Dynamic    |  48621 |             49 |     2384860 | 281474976710655 |      1836032 |         0 |           NULL | 2006-02-01 11:54:05 | 2006-02-01 11:54:07 | 2006-02-01 11:54:07 | latin1_swedish_ci |     NULL |                |         |
 | stm_data_d   | MyISAM |      10 | Dynamic    | 480772 |            104 |    50192768 | 281474976710655 | 52738048 |         0 |           NULL | 2006-02-01 11:54:07 | 2006-02-01 11:55:12 | 2006-02-01 11:55:40 | latin1_swedish_ci |     NULL |                |         |
 +--------------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
 2 rows in set (0.00 sec)
 
 
 4.1.13-standard mysql> explain select * from dlstm_data_d s, 
 stm_data_d t where s.sym_cd_ is null and s.sys_cdd=t.sys_cdd and 
 s.local=t.local and s.local is not null and s.local != '' and 
 s.sys_cdd is not null and s.sys_cdd != '' and t.sym_cd is not null;
 | id | select_type | table | 
Re: Totally different join-behaviour in mysql 4 and 5?

2006-02-01 Thread Peter Brawley




Jan,

While my query has a cardinality of 23,124*1=23,124 on mysql4, it has
6,412*34,341=220,194,492 on mysql5 - and takes forever and makes me
think everything hangs?!


Yep, MySQL improved the SQL compatibility of its join syntax beginning
with 5.0.12, see http://dev.mysql.com/doc/refman/5.0/en/join.html and,
esp. comments there on the change in precedence of the comma operator. 

PB

-

Jan Kirchhoff wrote:

As I already wrote, I am trying to get a replication running from a
mysql-4.1.13 (32bit) master to a 5.0.18 (64bit) slave. It only runs for
a few minutes and then a query hangs.
  
I think I now found out why:
  
I modified a multi-table update that hung into a select.  The same query
on the absolutely identical tables gives totally different "explains"
on both systems:
  
While my query has a cardinality of 23,124*1=23,124 on mysql4, it has 
6,412*34,341=220,194,492 on mysql5 - and takes forever and makes me
think everything hangs?!
  
I verified this with a dump of two tables that I imported on a few
different systems. I created a new test-database, piped the dump into
that and ran the following queries.
  
  
It looks like the join-behaviour of mysql has totally changed in
mysql5!
  
This seems to affect quite a lot of queries here.
  
Am I making really stupid mistakes, or did I miss a major chapter in the
"upgrading to mysql5" documentation that I read over and over during
the last days?
  
  
Jan
  
  
I put the mysql versions in front of the "mysql" prompts:
  
  
  
[...]
  
  

Query Speed

2006-02-01 Thread سيد هادی راستگوی حقی
Dear All,
I need your suggestions please.

I have two large tables with these schemas:

 Table: traffic_log
Create Table: CREATE TABLE `traffic_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `orig` varchar(13) default NULL,
  `dest` varchar(13) default NULL,
  `message` text,
  `account_id` int(11) NOT NULL default '0',
  `service_id` int(11) NOT NULL default '0',
  `dir` enum('IN','OUT') NOT NULL default 'IN',
  `plugin` varchar(30) NOT NULL default 'UNKNOWN',
  `date_entered` datetime NOT NULL default '0000-00-00 00:00:00',
  `replied` tinyint(4) default '0',
  KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
  KEY `account_id_2` (`account_id`,`date_entered`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

Table: status_log
Create Table: CREATE TABLE `status_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `status` smallint(5) NOT NULL default '0',
  `time` datetime NOT NULL default '0000-00-00 00:00:00',
  `smsc` varchar(20) NOT NULL default '',
  `priority` tinyint(2) unsigned NOT NULL default '0',
  `ack` varchar(30) NOT NULL default '',
  KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
  KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

I want to execute a query to find out each message's last status. So my
query is:

select * from traffic_log LEFT JOIN status_log ON
traffic_log.recipient_id=status_log.recipient_id and
traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
group by status_log.recipient_id HAVING time=max(time) order by time;

And MySQL explanation about this query is:
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: traffic_log
 type: ref
possible_keys: account_id,account_id_2
  key: account_id
  key_len: 4
  ref: const
 rows: 1049598
Extra: Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: status_log
 type: ref
possible_keys: recipient_id_2
  key: recipient_id_2
  key_len: 5
  ref: smse.traffic_log.recipient_id,smse.traffic_log.mobile_retry
 rows: 2
Extra:

as you can see, the estimated row count is 1,049,598.

But it's very slow.

Do you have any suggestions to speed it up?
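One direction worth trying: GROUP BY ... HAVING time=max(time) is not guaranteed to line the non-grouped columns up with the maximum, and it forces the temporary table and filesort seen in the EXPLAIN. A correlated MAX() filter states "latest status row per recipient" directly. The sketch below runs on SQLite through Python's sqlite3, with both schemas trimmed down to the columns involved (mobile_retry omitted for brevity); treat it as a shape to adapt, not a drop-in query:

```python
import sqlite3

# Miniature stand-ins for traffic_log and status_log, trimmed to the
# join key plus the columns the question is about.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE traffic_log(recipient_id INT, account_id INT);
    CREATE TABLE status_log(recipient_id INT, status INT, time TEXT);
    INSERT INTO traffic_log VALUES (1, 32), (2, 32);
    INSERT INTO status_log VALUES
        (1, 0, '2006-01-01'), (1, 1, '2006-01-02'),
        (2, 0, '2006-01-01'), (2, 4, '2006-01-03');
""")

# For each recipient, keep only the status row whose time equals that
# recipient's maximum time: the "last status" by construction.
rows = con.execute("""
    SELECT t.recipient_id, s.status, s.time
    FROM traffic_log t
    JOIN status_log s ON s.recipient_id = t.recipient_id
    WHERE t.account_id = 32
      AND s.time = (SELECT MAX(s2.time) FROM status_log s2
                    WHERE s2.recipient_id = s.recipient_id)
    ORDER BY t.recipient_id
""").fetchall()

print(rows)  # [(1, 1, '2006-01-02'), (2, 4, '2006-01-03')]
```

On the real tables the existing recipient_id_2 key (recipient_id, mobile_retry, time, status) should cover the correlated lookup, which is what makes this shape worth testing.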

--
Sincerely,
Hadi Rastgou


Dictionary

2006-02-01 Thread Scott Hamm
I've been trying to google, to no avail, for an English dictionary (with
definitions) in any format that I can download and import into MySQL. 
Does anyone know where I can find one?

Thanks in advance,

Scott

--
Power to people, Linux is here.




Re: 5.0.18-max-log as a slave of a 4.1.13-standard-log master problem - slave hangs

2006-02-01 Thread Jan Kirchhoff
I thought I had found the reason for my problems in the changed 
join behaviour in mysql 5, but I was wrong :( there is more trouble :(


My replication hangs with simple queries like insert into table 
(a,b,c) values (1,2,3) on a myisam-table.
It just hangs forever with no cpu-load on the slave. I have to kill and 
restart mysql with the following commands:


killall -9 mysqld;sleep 2;mysqladmin shutdown;sleep 5;/etc/init.d/mysql 
start;sleep 2;mysql -e 'slave start'


I can find the changed row in the table, so the query was processed 
correctly.
Then it runs again for some time and hangs again with some other simple 
insert.


I disabled innodb and cluster, took all my variables out of my.cnf except
max_allowed_packet = 16M, which I need for the replication to work, and I 
have no clue what the reason for my problem is.


what else could I try?







Re: innodb questions

2006-02-01 Thread Ware Adams

On Feb 1, 2006, at 6:35 AM, Grant Giddens wrote:

This is what I'm planning on doing, please let me know if you see  
any problems here.  This is on my test server, so losing data isn't  
the end of the world.  This is what I plan on doing in order:


1.  Backup the database via mysqldump


We've actually done this several times recently to convert table  
spaces fully to innodb_file_per_table.  Here's the mysqldump command  
we use:


/usr/local/mysql/bin/mysqldump -u username -p --add-drop-table \
  --add-locks --create-options --single-transaction --disable-keys \
  --extended-insert --quick --quote-names --flush-logs --all-databases \
  > /volumes/raid/snapshot.sql


You should read about those options; mysqldump has a ton of them.   
For systems with only InnoDB tables this will take a point-in-time  
snapshot.  You could probably get away with a simpler command on a  
test server, but if you're going to use InnoDB in a real environment  
you might as well do it this way.  All the options have pretty  
straightforward explanations.


2.  Purge the master logs via PURGE MASTER LOGS command.  Maybe I  
should just delete them?  It's a test server and I don't plan on  
going back to a previous snapshot.


Yes, you might as well use e.g.

mysql> show master logs;
+---++
| Log_name  | File_size  |
+---++
| server-bin.000473 | 1073741985 |
| server-bin.000474 | 1074519940 |

...

| server-bin.000606 | 1074213492 |
| server-bin.000607 |  779776096 |
+---++
135 rows in set (0.01 sec)

to see what you have and then:

mysql> purge master logs to 'server-bin.000607';
Query OK, 0 rows affected (18.78 sec)

to do the purge.  Again, might as well test how you'll do it on a  
real server.



3.  Shut down the mysql server
4.  Edit my.cnf to remove log-bin
5.  Edit my.cnf to add innodb_file_per_table


right

6. Move the tablespace ( I guess this is the ib_logfile0,  
ib_logfile1, and ibdata1 files)


Yes, though they should be specified in your my.cnf file.

7.  Start the mysql server.  I think at this point the table space  
will be recreated.  Am I right?


Right, but before this I would make sure to explicitly define the  
tablespace as you want it to be (I'm assuming you'll do that from  
your comments below, but just mentioning it) rather than keep  
your old one.



8.  Re-import my data from my file at step 1.


yes


This leads me to several questions though:
1.  My web host server is running CentOS linux.  How do I know its  
maximum filesize?  Should I limit the ibdata1 type files such as:


innodb_data_file_path = /ibdata/ibdata1:100M:autoextend:max:2000M

What if my table is larger than 2G?


Our only CentOS machine doesn't run MySQL, so I don't know what the  
actual limit is.  I suspect it's larger, but you'd want to know.  If  
you use the above (and I didn't know you could specify a max, but ok)  
and the file fills up, InnoDB will generate an error.  It's the same  
issue if you don't have autoextend.


However, you won't be using the shared tablespace for much at all if  
you're using innodb_file_per_table.  It only holds various shared  
data, not the tables themselves.  You still could have an issue if an  
individual table grows that large, but it's less likely than in a  
shared tablespace.


I think the answer is to find out what CentOS's (or, probably more  
specifically, your file system's) limit is.  We're using OS X and have  
a 56 GB table, so InnoDB can certainly handle huge files if the file  
system can.  Even if you run into the limit I don't believe (though  
I've never tested it) you'll have corruption, just a database error  
(which might be as bad in some applications).


2.  If I change my tables (add/drop index and such) do I need to  
do this mysqldump, recreate my tablespace, and re-import my data  
process each time?


No.  In fact a less intrusive way to move tables out of shared  
tablespace is to issue:


alter table db.table type=innodb;

statements after adding innodb_file_per_table to the my.cnf and  
restarting.  This does a meaningless alter that essentially drops and  
recreates the table and moves it out of the shared tablespace (b/c  
the create occurs with innodb_file_per_table on).


However, though this frees up space in your shared tablespace (so  
other tables can grow without it expanding), you still can't delete it  
or shrink it this way.  The only way to do that is to delete and  
recreate.


3.  What if I'm running multiple databases with innodb tables on  
one mysql server?  I guess if I use innodb_file_per_table it will  
help me keep the file sizes down and make it easier to drop and  
reload individual databases.  Am I correct here?


It will keep you from having a giant file (the shared tablespace)  
that you can't delete or shrink.  If you drop a table you actually  
free up space on the disk.


However, it's still not like myisam where you can grab the files and  

Re: Totally different join-behaviour in mysql 4 and 5?

2006-02-01 Thread Jan Kirchhoff


Comma separated JOINS strikes again!!! 
  

[...]

Here is where you will find this change documented in the manual:
http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-4-1.html
  
I read that page over and over again... probably too late at night.
Thanks for that info. Thanks to Peter, too.
I made that select out of a multi-table update that hung in the 
processlist. That query came out of a very old script
floating around... I found a few more and modified the updates; I 
think I shouldn't have that problem any more.
I think nobody over here ever used comma-joins except for those few 
times. At least I hope so. ;)
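For anyone hitting the same upgrade issue: in MySQL 5.0 the comma operator binds looser than JOIN, so in "FROM a, b JOIN c ON c.a_id = a.id" the ON clause can no longer see a's columns. Rewriting every comma as an explicit INNER JOIN sidesteps the precedence question entirely; a small sketch (hypothetical tables, using Python's sqlite3 only to show the rewritten query running):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (id INTEGER);
CREATE TABLE b (id INTEGER);
CREATE TABLE c (a_id INTEGER);
INSERT INTO a VALUES (1);
INSERT INTO b VALUES (1);
INSERT INTO c VALUES (1);
""")

# Old style: SELECT ... FROM a, b JOIN c ON c.a_id = a.id
# (breaks in MySQL 5.0, since JOIN now binds before the comma).
# Rewritten with explicit joins, the precedence is unambiguous:
rows = con.execute("""
    SELECT a.id
    FROM a
    INNER JOIN c ON c.a_id = a.id
    INNER JOIN b ON b.id = a.id
""").fetchall()
print(rows)  # [(1,)]
```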


But that was only part of my problems; my replication still hangs every 
now and then... I'd be glad if you could have a
look at my other posting, "5.0.18-max-log as a slave of a 
4.1.13-standard-log master problem - slave hangs".

Maybe you have any ideas what I could try to get this working...

thanks
Jan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



tmpdir option

2006-02-01 Thread Eamon Daly

This is wishful thinking, but I figured I'd ask anyway: the
manual states:

  Starting from MySQL 4.1, the --tmpdir option can be set
  to a list of several paths that are used in round-robin
  fashion.

Are these directories promotable at all? In other words, if
I specify an 8G partition and a 20G partition, and a
temporary table runs out of space in the former, will MySQL
attempt to move it to the latter? We've been considering the
purchase of a flash drive as a temporary disk for some of
our larger reports, but on occasion we'll run a Very Large
Report that would easily overrun the smaller disk.
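If it helps anyone else, the list form is colon-separated on Unix (semicolon on Windows). As far as I know the server only assigns new temporary files to the paths round-robin; it does not spill a table that outgrows one path over to the next. A my.cnf sketch (paths are hypothetical):

```ini
[mysqld]
# temp files alternate between these paths; a file that fills its
# partition errors out rather than migrating to the larger disk
tmpdir = /mnt/flash/mysqltmp:/mnt/bigdisk/mysqltmp
```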


Eamon Daly





Help please

2006-02-01 Thread Logg, Connie A.
Two days ago, a system that had been running fine started crashing. It could 
be for a variety of reasons, which I am researching. However (running mysql 
5.0.18) I notice the following from netstat:
tcp0  0 iepm-bw.slac.stanford.:1000 iepm-bw.slac.stanford:38672 
ESTABLISHED
tcp0  0 iepm-bw.slac.stanford.:1000 iepm-bw.slac.stanford:38775 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford.:1000 iepm-bw.slac.stanford:38781 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38780 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38781 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38782 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38783 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38776 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38777 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38778 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38779 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38772 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38773 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38774 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38768 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38769 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38770 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38771 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38764 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38765 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38766 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38760 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38761 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38762 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38763 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38756 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38757 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38758 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38759 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38752 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38753 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38754 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38755 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38748 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38749 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38750 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38751 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38744 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38745 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38746 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38747 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38742 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38743 iepm-bw.slac.stanford.:1000 
TIME_WAIT
tcp0  0 iepm-bw.slac.stanford:38672 iepm-bw.slac.stanford.:1000 
ESTABLISHED

One of the messages in /var/log/messages is 'too many orphaned sockets'.  Do the 
above indicate orphaned sockets? I logged into mysql as root and did a 'show full 
processlist' and there were only one or two mysql processes.  Can someone 
explain why there might be so many tcp sockets taken up to connect to mysql 
(which is running on port 1000)?

Thanks, Connie Logg
SLAC




Re: tmpdir option

2006-02-01 Thread Eamon Daly

That's exactly right (the "few writes" part, not the
"knowing what I'm doing" part): this particular machine only
runs reports a few times a day.


Eamon Daly



- Original Message - 
From: Jan Kirchhoff [EMAIL PROTECTED]

To: Eamon Daly [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, February 01, 2006 11:40 AM
Subject: Re: tmpdir option



Eamon Daly wrote:
 
We've been considering the

purchase of a flash drive as a temporary disk for some of
our larger reports, but on occasion we'll run a Very Large
Report that would easily overrun the smaller disk.

Eamon,

flash drives have a limited number of possible writes; I would not
use a flash drive for anything like swap/cache/log/tmp...
unless you know what you are doing and really do have few writes.

Jan






mysql-query-browser install RPM

2006-02-01 Thread hawat . thufir
Must the RPM be manually downloaded, or can the query browser be installed 
from yum?  I've tried searching for 'mysql-query-browser' and got no 
results--perhaps I need to add a repository?




[EMAIL PROTECTED] ~]$ mysql-query-browser
bash: mysql-query-browser: command not found
[EMAIL PROTECTED] ~]$ ll /opt/
total 0
[EMAIL PROTECTED] ~]$ mysql -u thufir -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13 to server version: 4.1.16

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> quit
Bye
[EMAIL PROTECTED] ~]$ date
Wed Feb  1 12:58:28 EST 2006
[EMAIL PROTECTED] ~]$

and

[EMAIL PROTECTED] ~]#
[EMAIL PROTECTED] ~]# yum -y groupinstall 'MySQL Database'
Setting up Group Process
Setting up repositories
Setting up repositories
Reading repository metadata in from local files
Passing package list to Install Process
Parsing package install arguments
Nothing to do
[EMAIL PROTECTED] ~]# date
Wed Feb  1 13:03:56 EST 2006
[EMAIL PROTECTED] ~]#


So, I know that I have the entire group 'MySQL Database'.


thanks,

Thufir




Re: Help please

2006-02-01 Thread Kishore Jalleda
No, I don't think this indicates orphaned sockets; having many sockets for
mysql in the TIME_WAIT state is quite normal. A socket has to be created for
every connection, and after a connection is closed the socket sits in
TIME_WAIT for a while (typically a minute or two, twice the maximum segment
lifetime) before the port can be reused. 'show processlist' only lists the
threads currently running, so recently closed connections won't appear there.
JFYI:
Socket: an end point for communication (IP + port)
Thread: quite similar to a process; a process can have multiple threads


Kishore Jalleda


On 2/1/06, Logg, Connie A. [EMAIL PROTECTED] wrote:

 Two days ago, a system that has been running fine started crashing...It
 could be for a variety of reasons which I am researchinig. However (running
 mysql 5.0.18) I notice the following from netstat:
 [netstat output snipped; identical to the original message above]

 One of the messages in /var/log/messages is too many orphaned sockets.  Do
 the above indicate orphaned sockets? I logged into mysql root and did a
 'show full processlist' and there were only one or two mysql processes.  Can
 someone explain why there might be so many tcp sockets taken up to connect
 to mysql (which is running on port 1000).

 

columns of one table are rows of another table

2006-02-01 Thread JC

Hi all,

I really need your help.

I have two tables:

table1:
id1|1|2|3|4|
000+a+b+c+d
001+e+f+g+h
.
.
.
and

table2:
id2|col1|col2|col3
  1+val1+val2+val3
  2+val4+val5+val6
  3
  4
  .
  .
  .

columns (1,2,3,4,...) in table1 are rows (id2) in table2.  I want to query 
rows in table2 such that id2 IN (all columns in table1 except the first 
column).


Is this possible to do in one statement?  I know how to do this in 
multiple queries; I just wonder whether anyone knows how to optimize it.


Thanks,
JC
--





Re: Dictionary

2006-02-01 Thread Peter of Pedsters Planet
I'd like to know too, if possible :)

On 01/02/06, Scott Hamm [EMAIL PROTECTED] wrote:
 I've been trying to google, to no avail, for an English dictionary (with
 definitions) in any format that I can download and import into MySQL.
 Does anyone know where I can find one?




Re: columns of one table are rows of another table

2006-02-01 Thread SGreen
JC [EMAIL PROTECTED] wrote on 02/01/2006 01:15:00 PM:

 [quoted message snipped; see JC's original above]

I am afraid you can't write that kind of query in SQL. The syntax of the 
language just doesn't allow one value to be compared across two or more 
columns without some kind of major hack or a bunch of typing (something 
like val=col1 OR val=col2 OR ... OR val=colN). May I suggest that you 
redesign table1 so that it looks like this:

id1|t2_id|value

Sure, you end up with more rows, but what you gain in flexibility should 
more than make up for the pittance of space you will need to store a bunch 
of extra row pointers. There is a relatively simple query pattern you can 
use to convert this new "vertical" design back to your original 
("pivoted") design:

SELECT 
id1, 
SUM(IF(t2_id = 1, value, 0)) AS `1`,
SUM(IF(t2_id = 2, value, 0)) AS `2`,
SUM(IF(t2_id = 3, value, 0)) AS `3`,
...
SUM(IF(t2_id = N, value, 0)) AS `N`
FROM table1
GROUP BY id1;
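To sanity-check the pivot pattern on toy data, here's a sqlite3 sketch (table and column names follow the proposed redesign; CASE stands in for MySQL's IF(), which SQLite lacks, and the data is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id1 TEXT, t2_id INTEGER, value INTEGER)")
con.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?)",
    [("000", 1, 10), ("000", 2, 20), ("001", 1, 30), ("001", 2, 40)],
)

# Pivot the vertical rows back into one row per id1.
rows = con.execute(
    """
    SELECT id1,
           SUM(CASE WHEN t2_id = 1 THEN value ELSE 0 END) AS c1,
           SUM(CASE WHEN t2_id = 2 THEN value ELSE 0 END) AS c2
    FROM table1
    GROUP BY id1
    ORDER BY id1
    """
).fetchall()
print(rows)  # [('000', 10, 20), ('001', 30, 40)]
```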

Perhaps if you described your situation more accurately, you could get a 
better response. We on the list are used to dealing with some rather 
complex designs and issues so don't feel at all like you need to dumb it 
down for us. The very fact that you tried to simplify your design 
actually made it harder to give you a decent answer.

Sorry!

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: Dictionary

2006-02-01 Thread SGreen
Peter of Pedsters Planet [EMAIL PROTECTED] wrote on 02/01/2006 
01:27:45 PM:

 I'd like to know too if posible :)
 
 On 01/02/06, Scott Hamm [EMAIL PROTECTED] wrote:
  I've been trying to google to no avail for English dictionary (with
  definitions) in any format that I can download and import into MySQL.
  Do anyone know where I can find it?
 

There is a lot of effort that goes into creating and maintaining all of 
the definitions for hundreds of thousands of words. I seriously doubt that 
any dictionary publisher is just going to give it all away for free, 
regardless of the language.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: mysql-query-browser install RPM

2006-02-01 Thread Dan Trainor

[EMAIL PROTECTED] wrote:
Must the RPM be manually downloaded, or can the query browser be 
installed from yum?  I've tried search 'mysql-query-browser' and no 
result--perhaps I need to add a repository?




[terminal output snipped; see original message above]



Hi -

I have never seen mysql-query-browser in yum.  I recently recompiled a 
.src.rpm against CentOS 4.2, which seems to work fine.  That's the best 
I could do.


Perhaps the MySQL team could consider posting a more recent build of it 
for people like yourself (I'd be happy to distribute, but I don't have 
enough bandwidth! ;) )


Thanks
-dant




Re: Query Speed

2006-02-01 Thread Peter Brawley




Hadi,
But it's very slow.
Do you have any suggestions to fast it?

Your query calls no aggregate functions, so what do you mean to achieve
by GROUP BY ... HAVING? For example this bit of logic extracted from
your query ...

SELECT * FROM table
GROUP BY pkcol
HAVING pkcol=MAX(pkcol)

is logically equivalent to ...

SELECT * FROM table
ORDER BY pkcol;

if pkcol is unique but orders of magnitude slower. If you want the
maximum time for each recipient_id, you need something like ...

SELECT 
  recipient_id, 
  ...,
  MAX(time)
...
GROUP BY recipient_id;

PB

-

سيد هادی راستگوی حقی wrote:

  Dear All,
I need your suggestions please.

I have two large tables with these schemas:

 Table: traffic_log
Create Table: CREATE TABLE `traffic_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `orig` varchar(13) default NULL,
  `dest` varchar(13) default NULL,
  `message` text,
  `account_id` int(11) NOT NULL default '0',
  `service_id` int(11) NOT NULL default '0',
  `dir` enum('IN','OUT') NOT NULL default 'IN',
  `plugin` varchar(30) NOT NULL default 'UNKNOWN',
  `date_entered` datetime NOT NULL default '0000-00-00 00:00:00',
  `replied` tinyint(4) default '0',
  KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
  KEY `account_id_2` (`account_id`,`date_entered`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

Table: status_log
Create Table: CREATE TABLE `status_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `status` smallint(5) NOT NULL default '0',
  `time` datetime NOT NULL default '0000-00-00 00:00:00',
  `smsc` varchar(20) NOT NULL default '',
  `priority` tinyint(2) unsigned NOT NULL default '0',
  `ack` varchar(30) NOT NULL default '',
  KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
  KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

I want to execute a query to find out each message's last status. So my
query is:

select * from traffic_log LEFT JOIN status_log ON
traffic_log.recipient_id=status_log.recipient_id and
traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
group by status_log.recipient_id HAVING time=max(time) order by time;

And MySQL explanation about this query is:
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: traffic_log
 type: ref
possible_keys: account_id,account_id_2
  key: account_id
  key_len: 4
  ref: const
 rows: 1049598
Extra: Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: status_log
 type: ref
possible_keys: recipient_id_2
  key: recipient_id_2
  key_len: 5
  ref: smse.traffic_log.recipient_id,smse.traffic_log.mobile_retry
 rows: 2
Extra:

as you see, the estimated rows examined are 1,049,598.

But it's very slow.

Do you have any suggestions to speed it up?

--
Sincerely,
Hadi Rastgou
Get Firefox! http://www.spreadfirefox.com/?q=affiliates&id=0&t=1
  
  

No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.1.375 / Virus Database: 267.15.0/248 - Release Date: 2/1/2006
  






RE: Dictionary

2006-02-01 Thread ISC Edwin Cruz
Have you had a look at aspell?

For example, in eventum:
http://lists.mysql.com/eventum-users/2683

This may be useful...


For example, in PHP there are some functions for aspell:

http://mx2.php.net/aspell

Regards!

Edwin.




-----Original Message-----
From: Scott Hamm [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 01, 2006 11:01 AM
To: 'Mysql '
Subject: Dictionary


I've been trying to google, to no avail, for an English dictionary (with
definitions) in any format that I can download and import into MySQL. 
Does anyone know where I can find one?

Thanks in advance,

Scott

--
Power to people, Linux is here.




RE: Dictionary

2006-02-01 Thread Dana Diederich
Check http://dict.org/

Cheers,
-Dana

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 01, 2006 12:39 PM
To: Peter of Pedsters Planet
Cc: Mysql
Subject: Re: Dictionary


 [quoted message snipped; see "Re: Dictionary" above]

-
**
This email and any files transmitted with it are confidential and
intended solely for the individual or entity to whom they are
addressed. If you have received this email in error destroy it
immediately.
**
Wal-Mart Confidential
**





Re: NOT IN vs IS NULL

2006-02-01 Thread Devananda

[EMAIL PROTECTED] wrote:

Devananda,

I have to support Peter on this one.  What he submitted to you is a 
perfectly appropriate solution. It seems as though you rejected his 
assistance before even trying to see if it would work. 
On the contrary, I have tried his suggestions and they do not work (see 
the email I just sent to the list). I have also tried, but obviously not 
done very well, to explain why they will not work in this particular 
case. The queries Peter has suggested actually take many minutes to 
return when run on the real data, whereas my initial subquery takes only 
a few seconds. I had initially hoped to find a way, using a JOIN, that 
would take less than a second.



d1 mysql> explain SELECT DISTINCT pt.offer_id FROM paytable AS pt INNER 
JOIN offers AS o USING (offer_id) LEFT JOIN publisher_advertiser_blocks 
AS pab1 ON pt.login_id=pab1.login_id LEFT JOIN 
publisher_advertiser_blocks AS pab2 ON 
o.advertiser_id=pab2.advertiser_id WHERE pt.login_id=1 AND 
pab1.login_id IS NULL OR pab2.advertiser_id IS NULL;

| id | select_type | table | type   | possible_keys | key     | key_len | ref                   | rows    | Extra                              |
|  1 | SIMPLE      | pt    | index  | PRIMARY       | PRIMARY | 5       | NULL                  | 1814973 | Using index; Using temporary       |
|  1 | SIMPLE      | o     | eq_ref | PRIMARY       | PRIMARY | 4       | affiliate.pt.offer_id | 1       | Using where; Distinct              |
|  1 | SIMPLE      | pab1  | ref    | PRIMARY       | PRIMARY | 4       | affiliate.pt.login_id | 7       | Using index; Distinct              |
|  1 | SIMPLE      | pab2  | index  | NULL          | PRIMARY | 8       | NULL                  | 62      | Using where; Using index; Distinct |



d1 mysql> explain SELECT offer_id FROM paytable WHERE login_id=1 AND 
offer_id NOT IN (  SELECT offer_id  FROM offers  LEFT JOIN 
publisher_advertiser_blocks AS pab USING (advertiser_id)  WHERE 
pab.login_id=1 );

| id | select_type        | table    | type   | possible_keys         | key     | key_len | ref                                  | rows    | Extra                    |
|  1 | PRIMARY            | paytable | index  | NULL                  | PRIMARY | 5       | NULL                                 | 1814973 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | offers   | eq_ref | PRIMARY,advertiser_id | PRIMARY | 4       | func                                 | 1       | Using where              |
|  2 | DEPENDENT SUBQUERY | pab      | eq_ref | PRIMARY               | PRIMARY | 8       | const,affiliate.offers.advertiser_id | 1       | Using where; Using index |

3 rows in set (0.03 sec)

There have been and 
continue to be SQL-driven databases around that have not had and do not 
have subquery support. They function perfectly well without them.  By your 
response, it appears to me you don't know how a LEFT JOIN is supposed to 
operate. That is one of the most basic tools of any relational DBA's 
arsenal.
  
Perhaps I didn't phrase my response well regarding the 1-to-1 relationship, 
but I do understand how to use a LEFT JOIN. I freely admit I don't have 
as much experience with all the types of joins, or in other DBMSs, as 
many of the folks on this list. Also, I _know_ that subqueries and IN 
lists are not nearly as efficient as JOINs. That is exactly _why_ I 
asked in the first place!!


:)

Regards,
Devananda


I implore everyone not comfortable with them to learn their JOINs (INNER 
JOIN, LEFT JOIN, RIGHT JOIN, CROSS JOIN, etc.) before you learn how to 
subquery (and I do NOT mean that comma-separated crap that poses as a 
CROSS JOIN). So many of the performance problems posted to this list are 
subquery-related when a simple, straightforward JOIN or two will solve the 
same problem usually with _much_ better response time. This post is a 
perfect example.
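The JOIN-instead-of-subquery rewrite being advocated (a NOT IN subquery expressed as a LEFT JOIN ... IS NULL anti-join) in miniature; a sqlite3 sketch with hypothetical tables, not the poster's real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE paytable (login_id INTEGER, offer_id INTEGER);
CREATE TABLE blocked  (offer_id INTEGER);
INSERT INTO paytable VALUES (1, 10), (1, 20), (1, 30);
INSERT INTO blocked  VALUES (20);
""")

# Anti-join: keep paytable rows that have NO matching blocked row,
# equivalent to "offer_id NOT IN (SELECT offer_id FROM blocked)".
rows = con.execute("""
    SELECT p.offer_id
    FROM paytable p
    LEFT JOIN blocked b ON b.offer_id = p.offer_id
    WHERE p.login_id = 1 AND b.offer_id IS NULL
    ORDER BY p.offer_id
""").fetchall()
print(rows)  # [(10,), (30,)]
```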


Explicit JOIN statements have the advantage of possibly using indexes at 
every layer of data aggregation. Subqueries lose that ability at the level 
of each 

Re: Query Speed

2006-02-01 Thread SGreen
سيد هادی راستگوی حقی [EMAIL PROTECTED] wrote on 02/01/2006 
11:07:49 AM:

 Dear All,
 I need your suggestions please.
 
 [schemas snipped; identical to the message quoted in "Re: Query Speed" above]
 
 select * from traffic_log LEFT JOIN status_log ON
 traffic_log.recipient_id=status_log.recipient_id and
 traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
 group by status_log.recipient_id HAVING time=max(time) order by time;
 
 [EXPLAIN output snipped; about 1,049,598 rows examined on traffic_log]
 
 But it's very slow.
 
 Do you have any suggestions to speed it up?
 
 --
 Sincerely,
 Hadi Rastgou
 Get Firefox! http://www.spreadfirefox.com/?q=affiliates&id=0&t=1

This is another variant of the "groupwise maximum" query pattern:

http://dev.mysql.com/doc/refman/4.1/en/example-maximum-column-group-row.html

My favorite way to write these kinds of queries is to make a temporary 
table (or more if needed) identifying the group and max-per-group then 
using that temp table to create the final query. In your case, the group 
is the recipient_id and the max-per-group will be MAX(`time`);

CREATE TEMPORARY TABLE tmpLastStatusTimes SELECT 
`recipient_id`
   ,MAX(`time`) as lastmsg
FROM status_log
GROUP BY `recipient_id`;

CREATE TEMPORARY TABLE tmpLastStatuses SELECT
sl.*
FROM status_log sl
INNER JOIN tmpLastStatusTimes lst
ON lst.`recipient_id` = sl.`recipient_id`
AND lst.lastmsg = sl.`time`;

SELECT * from traffic_log 
LEFT JOIN tmpLastStatuses 
ON traffic_log.recipient_id=tmpLastStatuses.recipient_id 
AND traffic_log.mobile_retry=tmpLastStatuses.mobile_retry 
WHERE  account_id = 32
order by time;

DROP TEMPORARY TABLE tmpLastStatuses, tmpLastStatusTimes;

In your specific sample, you were only looking for messages from a 
particular person (account 32) so we may be able to speed up my example 
even more if we change the first statement of this query to read:

CREATE TEMPORARY TABLE tmpLastStatusTimes SELECT 
sl.`recipient_id`
   ,MAX(sl.`time`) as lastmsg
FROM status_log sl
INNER JOIN traffic_log tl
on tl.`recipient_id` = sl.`recipient_id`
and tl.account_id = 32
GROUP BY sl.`recipient_id`;

That way we keep that table's contents within the scope of the actual 
desired results instead of computing the most recent statuses for ALL 
messages for EVERYONE.
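The groupwise-maximum pattern itself is easy to sanity-check on toy data; a sqlite3 sketch (hypothetical rows, with a derived table standing in for the explicit TEMPORARY tables above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE status_log (recipient_id INTEGER, status INTEGER, time TEXT);
INSERT INTO status_log VALUES
  (1, 100, '2006-01-01 10:00:00'),
  (1, 200, '2006-01-02 09:00:00'),
  (2, 100, '2006-01-01 11:00:00');
""")

# Step 1: max time per recipient; step 2: join back to pick the full row.
rows = con.execute("""
    SELECT sl.recipient_id, sl.status
    FROM status_log sl
    JOIN (SELECT recipient_id, MAX(time) AS lastmsg
          FROM status_log GROUP BY recipient_id) lst
      ON lst.recipient_id = sl.recipient_id AND lst.lastmsg = sl.time
    ORDER BY sl.recipient_id
""").fetchall()
print(rows)  # [(1, 200), (2, 100)]
```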

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: Error: server is running,with the --read-only option

2006-02-01 Thread BÁRTHÁZI András

Hi,

If you are really sure that your MySQL server is not in read-only mode, 
please post your my.cnf configuration, along with the mount output for all 
filesystem partitions, so we can be sure the MySQL data directory is not 
mounted read-only.


I can be sure, because usually these inserts are working well. And all 
the other databases and tables are working well, too. All the databases 
are on the same partition, in the same directory.


[mysqld]
user= mysql
server-id   = 1
pid-file= /var/run/mysqld/mysqld.pid
socket  = /var/run/mysqld/mysqld.sock
port= 3306
log = /var/log/mysql.log
log-bin = /backup/mysql-binlog/wish-bin
basedir = /usr
datadir = /data/database
tmpdir  = /tmp
language= /usr/local/mysql/share/mysql/english
skip-locking
query_cache_limit = 2M
query_cache_size = 64M
query_cache_type = 1
set-variable= key_buffer=512K
set-variable= sort_buffer=16K
set-variable= table_cache=2048
set-variable= read_buffer=8K
set-variable= thread_stack=64K
set-variable= max_connections=199
set-variable= flush_time=20
set-variable= net_buffer_length=128K
character-sets-dir=/usr/local/mysql/share/mysql/charsets
default-character-set = latin2
default-collation = latin2_hungarian_ci
set-variable= ft_min_word_len=2
set-variable= max_allowed_packet=2M

/dev/hdc1  /data   ext3defaults0   2

-rw-rw1 mysqlmysql8700 már  8  2005 sessions.frm
-rw-rw1 mysqlmysql 9177628 feb  1 20:26 sessions.MYD
-rw-rw1 mysqlmysql11057152 feb  1 20:26 sessions.MYI

Debian Linux, MySQL 5.0.18, PHP 4.4.2, Apache 1.3.34 (all the latest)

CREATE TABLE `sessions` (
  `uid` int(10) unsigned NOT NULL default '0',
  `sid` varchar(32) NOT NULL default '',
  `hostname` varchar(128) NOT NULL default '',
  `timestamp` int(11) NOT NULL default '0',
  `session` longtext,
  PRIMARY KEY  (`sid`),
  KEY `uid` (`uid`),
  KEY `sid` (`sid`(4)),
  KEY `timestamp` (`timestamp`)
) ENGINE=MyISAM DEFAULT CHARSET=latin2;

No memory (1GB+256MB swap) and no disk space problem (just 44% used).

Bye,
  Andras



[31-Jan-2006 17:25:59] PHP Fatal error:  The MySQL server is running
with the --read-only option so it cannot execute this statement
query: INSERT INTO sessions (sid, uid, hostname, timestamp) VALUES
('22177a73e9b93b88e376c2226d000f7b', 0, '68.142.250.172', 1138724759) in
/data/.../database.mysql.inc on line 66

[31-Jan-2006 18:29:02] PHP Fatal error:  The MySQL server is running
with the --read-only option so it cannot execute this statement
query: INSERT INTO sessions (sid, uid, hostname, timestamp) VALUES
('bf2de39170b0e02952cf1ab9cf7af6b0', 0, '68.142.250.46', 1138728542) in
/data/.../database.mysql.inc on line 66


The database is NOT read-only. I get this error every 10-60 minutes, 
but the MySQL server is used continuously (this particular web page gets 
~30,000 hits a day). It's a MASTER server in replication. I don't know 
of any LOCK situation (and I think if it were a lock problem, the error 
would be about a LOCK).


Any ideas why this happens? Google gave me no results.
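
A quick way to rule the obvious things in or out, assuming MySQL 5.0 as 
described above, is to ask the running server directly:

```sql
-- OFF means the server does not currently consider itself read-only.
-- Note that read_only is dynamic: any user with the SUPER privilege can
-- flip it at runtime, so it is worth re-checking right after a failure.
SHOW GLOBAL VARIABLES LIKE 'read_only';
```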


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Dictionary

2006-02-01 Thread Octavian Rasnita
Try searching on www.dict.org

Teddy

- Original Message - 
From: Peter of Pedsters Planet [EMAIL PROTECTED]
To: Mysql mysql@lists.mysql.com
Sent: Wednesday, February 01, 2006 8:27 PM
Subject: Re: Dictionary


I'd like to know too, if possible :)

On 01/02/06, Scott Hamm [EMAIL PROTECTED] wrote:
 I've been trying to google, to no avail, for an English dictionary (with
 definitions) in any format that I can download and import into MySQL.
 Does anyone know where I can find one?







Re: NOT IN vs IS NULL

2006-02-01 Thread SGreen
You are correct: the situation you describe was not clearly presented 
in your previous reply.  I humbly apologize for any offense.

Using the EXPLAIN you posted in your latest reply, you can translate your 
subquery into the following JOINed query:

SELECT p.offer_id
FROM paytable p
LEFT JOIN offers o
ON o.advertiser_id = p.advertiser_id
WHERE p.login_id=1
AND o.offer_id is null;

This is not what I think you were actually trying to write. The terms 
selecting values from publisher_advertiser_blocks disappeared because 
they are on the *right* side of a LEFT JOIN and played no part in actually 
limiting your final results.  Here is the requirements statement from your 
first post: 
The goal of these queries is to select all offer_id's from `paytable` 
for a known login_id where that login_id is not blocked from that 
offer_id.

 I would write it this way:

CREATE TEMPORARY TABLE tmpBlocks SELECT DISTINCT
o.offer_id
FROM offers o
INNER JOIN publisher_advertiser_blocks pab
ON pab.advertiser_id = o.advertiser_id
AND pab.login_id = 1;

ALTER TABLE  tmpBlocks ADD KEY(offer_id);

SELECT p.offer_id
FROM paytable p
LEFT JOIN tmpBlocks tb
ON tb.offer_id = p.offer_id
WHERE tb.offer_id IS NULL;

DROP TEMPORARY TABLE tmpBlocks;

One trick to working efficiently with larger datasets (millions of rows 
per table)  is to minimize the number of rows being joined at one time. By 
breaking this query into two statements we keep our JOIN combinations to a 
minimum so that at each successive stage we are working with smaller sets 
of data than if we had written it as a single statement.

The term pab.login_id=1 is in the ON clause because your index on 
publisher_advertiser_blocks is defined in such a way that it is better 
placed in the ON clause than in the WHERE clause (also information from 
your original post). There was a posting not long ago saying that the 
statistics of a temporary table's indexes are not updated if the indexes 
exist before you fill the table with data. That, plus the fact that it's 
faster to build an index on a populated table than to populate an indexed 
table, is why I added the index after filling the table with data.

Again, I apologize for any offense I may have caused,

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



Devananda [EMAIL PROTECTED] wrote on 02/01/2006 02:17:36 PM:

 [EMAIL PROTECTED] wrote:
  Devananda,
 
  I have to support Peter on this one.  What he submitted to you is a 
  perfectly appropriate solution. It seems as though you rejected his 
  assistance before even trying to see if it would work. 
 On the contrary, I have tried his suggestions and they do not work (see 
 the email I just sent to the list). I have also tried, but obviously not 

 done very well, to explain why they will not work in this particular 
 case. The queries Peter has suggested actually take many minutes to 
 return when run on the real data, whereas my initial subquery takes only 

 a few seconds. I had initially hoped to find a way, using a JOIN, that 
 would take less than a second.
 
 
 d1 mysql explain  SELECT DISTINCT pt.offer_id FROM paytable AS pt INNER 

 JOIN offers AS o USING (offer_id) LEFT JOIN publisher_advertiser_blocks 
 AS pab1   ON pt.login_id=pab1.login_id LEFT JOIN 
 publisher_advertiser_blocks AS pab2 ON 
 o.advertiser_id=pab2.advertiser_id WHERE pt.login_id=1   AND 
 pab1.login_id IS NULL OR pab2.advertiser_id IS NULL;
 +----+-------------+-------+--------+---------------+---------+---------+-----------------------+---------+------------------------------------+
 | id | select_type | table | type   | possible_keys | key     | key_len | ref                   | rows    | Extra                              |
 +----+-------------+-------+--------+---------------+---------+---------+-----------------------+---------+------------------------------------+
 |  1 | SIMPLE      | pt    | index  | PRIMARY       | PRIMARY | 5       | NULL                  | 1814973 | Using index; Using temporary       |
 |  1 | SIMPLE      | o     | eq_ref | PRIMARY       | PRIMARY | 4       | affiliate.pt.offer_id |       1 | Using where; Distinct              |
 |  1 | SIMPLE      | pab1  | ref    | PRIMARY       | PRIMARY | 4       | affiliate.pt.login_id |       7 | Using index; Distinct              |
 |  1 | SIMPLE      | pab2  | index  | NULL          | PRIMARY | 8       | NULL                  |      62 | Using where; Using index; Distinct |
 +----+-------------+-------+--------+---------------+---------+---------+-----------------------+---------+------------------------------------+
 
 
 
 d1 mysql explain SELECT offer_id FROM paytable WHERE login_id=1 AND 
 offer_id NOT IN (  SELECT offer_id  FROM offers  LEFT JOIN 
 publisher_advertiser_blocks AS pab USING (advertiser_id)  WHERE 
 pab.login_id=1 );
 +++--+
 +---+-+-
 

Problem storing long files

2006-02-01 Thread Celestino Gomez Cid
 

Dear All,

I am trying to store (using the C API) a long value in a LONGBLOB table
column. The statement is 2,361,408 bytes, and when I call
mysql_real_query() it returns an error saying that the MySQL server has
gone away. However, if I reduce the data to be stored by a factor of 4,
the data is stored without problems.

Does anybody know what is happening?
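
A statement of 2,361,408 bytes is larger than the 1 MB default of the 
server's max_allowed_packet setting, while a quarter of that fits under 
it, which matches the symptom; "MySQL server has gone away" is the usual 
sign of an oversized packet. A hedged sketch of the fix (the 16M value is 
just an example, not a recommendation for this schema):

```
[mysqld]
max_allowed_packet = 16M
```

Restart the server after changing my.cnf; mysql_real_query() should then 
accept the full statement, assuming the packet limit really is the cause.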

Thanks very much,

Celestino.

 


__
Este mensaje, y en su caso, cualquier fichero anexo al mismo,
 puede contener informacion clasificada por su emisor como confidencial
 en el marco de su Sistema de Gestion de Seguridad de la 
Informacion siendo para uso exclusivo del destinatario, quedando 
prohibida su divulgacion copia o distribucion a terceros sin la 
autorizacion expresa del remitente. Si Vd. ha recibido este mensaje 
 erroneamente, se ruega lo notifique al remitente y proceda a su borrado. 
Gracias por su colaboracion.
__
This message including any attachments may contain confidential 
information, according to our Information Security Management System,
 and intended solely for a specific individual to whom they are addressed.
 Any unauthorised copy, disclosure or distribution of this message
 is strictly forbidden. If you have received this transmission in error,
 please notify the sender immediately and delete it.
__


Re: Dictionary

2006-02-01 Thread Dan Baker
What you are looking for is called a Word List.
I would search for something like the following:
word list spellcheck

DanB


Scott Hamm [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
I've been trying to google, to no avail, for an English dictionary (with
definitions) in any format that I can download and import into MySQL.
Does anyone know where I can find one?

Thanks in advance,

Scott

--
Power to people, Linux is here.









MySQL installation failure

2006-02-01 Thread Hong Li
Hi 

I am investigating whether we can use MySQL in our project, but I got an
error message when I tried to install MySQL on a Unix server. Could you
please help me?

First I uncompressed the package with gunzip, then I used pkgadd to
install it, and I got the following error message:

group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/support-files/mysql.server.in failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/support-files/mysql.spec failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/support-files/ndb-config-2-node.ini failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/tests/auto_increment.res failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/tests/auto_increment.tst failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/big_record.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/export.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/fork2_test.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/fork_big.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/function.res
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/function.tst
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/grant.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/grant.res
failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/tests/insert_and_repair.pl failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/lock_test.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/lock_test.res
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/mail_to_db.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/tests/myisam-big-rows.tst failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/pmail.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/table_types.pl
failed
group name wheel not found in group table(s)
ERROR: attribute verification of
/opt/mysql/mysql/tests/test_delayed_insert.pl failed
group name wheel not found in group table(s)
ERROR: attribute verification of /opt/mysql/mysql/tests/udf_test.res
failed
group name wheel not found in group table(s)
ERROR: attribute verification of /usr/include/mysql failed
group name wheel not found in group table(s)
ERROR: attribute verification of /usr/info failed
group name wheel not found in group table(s)
## Executing postinstall script.
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/opt/mysql/mysql/bin/mysqladmin -u root password 'new-password'
/opt/mysql/mysql/bin/mysqladmin -u root -h zrc2s153 password
'new-password'
See the manual for more instructions.

Please report any problems with the ./bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at https://order.mysql.com

Installation of mysql partially failed.


Thanks,
Hong



Kinda OT: Book database question

2006-02-01 Thread John Meyer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi, I'm trying to develop my own book database, and I have a question
about ISBN: Is that number linked to a book or to a title?  That is, can
one title (say, Huckleberry Finn) have several ISBNs associated with it
through several book releases?
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFD4Sodj60GAoLuoDkRAmciAJ9NENzEBwW5vKNscbbKxIrYrGBLYQCdF4PN
n1hv7E1N51UJPLhVtxY+TOY=
=BNae
-END PGP SIGNATURE-




data entry GUI

2006-02-01 Thread THUFIR HAWAT
Just as there's MySQL Query Browser for queries and table design, is
there a similar GUI front end for data entry?  I'm not building
anything complex, just a simple database to muck with.  Is it easier
to do data entry on a spreadsheet, then import the spreadsheet?  Seems
a bit silly, but it seems to be either that or

mysql> insert into A (one, two, three) values (1, 2, 3);

Either option seems a bit much.  Perhaps there's something akin to
Access's form builder?


-Thufir




Re: data entry GUI

2006-02-01 Thread Imran Chaudhry
On 2/1/06, THUFIR HAWAT [EMAIL PROTECTED] wrote:
 Just as there's MySQL Query Browser for queries and table design, is
 there a similar GUI front end for data entry?  I'm not building
 anything complex, just a simple database to muck with.  Is it easier
 to do data entry on a spreadsheet, then import the spreadsheet?  Seems
 a bit silly, but it seems to be either that or

I can't recommend a GUI data entry tool but the way I would do it is
to use a spreadsheet and save out as Tab-delimited format. Depending
on the data I might also rustle up a Perl program to create the data
automatically.

I would then use LOAD DATA INFILE statement or mysqlimport to get the
data into my tables. These methods are extremely quick for inserts and
will disable key updating etc.
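
As a concrete sketch of that route (the table and file names here are 
made up for illustration):

```sql
-- books.txt: tab-delimited rows saved out of the spreadsheet
LOAD DATA LOCAL INFILE 'books.txt'
INTO TABLE books
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(title, author, isbn);
```

mysqlimport is the command-line wrapper around the same statement; it 
derives the table name from the file name.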

Regards,
Imran

--
http://www.ImranChaudhry.info
MySQL Database Management  Design Services




RE: data entry GUI

2006-02-01 Thread Ryan Stille
You can also install MyODBC and then hook an Excel spreadsheet into your
database.  Editing the spreadsheet will update data in your database.
This isn't a good solution if you are going to be creating new tables
often.  But for manipulating data in a known set of tables it's great.

-Ryan




Re: Dictionary

2006-02-01 Thread mysql

Well I have just done a google.co.uk search for 

english dictionary downloadable

and got the following results:

Web Results
1 - 10 of about 1,290,000 for english dictionary downloadable.

They may not be in the correct format to import directly.

But I'm sure it is possible to write a script to parse the 
downloaded dictionary, and build the query to insert the 
contents into a database yourself, or to convert the 
dictionary into another suitable format that would be 
compatible for loading directly into a database table.

HTH

Keith

In theory, theory and practice are the same;
In practice they are not. 

On Wed, 1 Feb 2006 [EMAIL PROTECTED] wrote:

 To: Peter of Pedsters Planet [EMAIL PROTECTED]
 From: [EMAIL PROTECTED]
 Subject: Re: Dictionary
 
 Peter of Pedsters Planet [EMAIL PROTECTED] wrote on 02/01/2006 
 01:27:45 PM:
 
  I'd like to know too if posible :)
  
  On 01/02/06, Scott Hamm [EMAIL PROTECTED] wrote:
   I've been trying to google, to no avail, for an English dictionary (with
   definitions) in any format that I can download and import into MySQL.
   Does anyone know where I can find one?
  
 
 There is a lot of effort that goes into creating and 
 maintaining all of the definitions for hundreds of 
 thousands of words. I seriously doubt that any dictionary 
 publisher is just going to give it all away for free, 
 regardless of the language.
 
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine




Re: Kinda OT: Book database question

2006-02-01 Thread Peter Brawley

John,

An ISBN is linked to a published book, sometimes just to a particular 
edition.


PB

John Meyer wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi, I'm trying to develop my own book database, and I have a question
about ISBN: Is that number linked to a book or to a title?  That is, can
one title (say, Huckleberry Finn) have several ISBNs associated with it
through several book releases?
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFD4Sodj60GAoLuoDkRAmciAJ9NENzEBwW5vKNscbbKxIrYrGBLYQCdF4PN
n1hv7E1N51UJPLhVtxY+TOY=
=BNae
-END PGP SIGNATURE-

  






Re: Kinda OT: Book database question

2006-02-01 Thread mysql

I think you will find a book ISBN is a reference 
to a particular publisher's version of a particular book.

So, in answer to your question: if several different 
publishers, or even the same publisher, have published 
several different books all entitled 'Huckleberry Finn', then 
to avoid ambiguity in identifying one particular book within 
that group of books, each book should have a unique 
identifier, which AFAIK is what the ISBN is all about.

The usual way to reference a book is by:

Title:
Author:
Publisher:
ISBN:
Date Published:
Edition:
Price:

plus any other attributes, such as hardback or paperback.

HTH

Keith


In theory, theory and practice are the same;
In practice they are not. 

On Wed, 1 Feb 2006, John Meyer wrote:

 To: mysql@lists.mysql.com
 From: John Meyer [EMAIL PROTECTED]
 Subject: Kinda OT: Book database question
 
 Hi, I'm trying to develop my own book database, and I have a question
 about ISBN: Is that number linked to a book or to a title?  That is, can
 one title (say, Huckleberry Finn) have several ISBNs associated with it
 through several book releases?




RE: data entry GUI

2006-02-01 Thread mysql

I have just caught the end of this topic, so I hope I'm not 
repeating something already mentioned.

What I do is enter my data into a plain text file, like 
this;

The questions are a bit dumb, just for testing purposes of 
course!


/* file: general-quizdata.sql */

/* data to populate general knowledge quiz tables */

use web_app_tester;

insert into question set
   question_text = 'What is the Capital of England?';

select @questionID := last_insert_id();

insert into answer set
   answer_text = 'London',
   status = 'right',
   questionID = @questionID;

insert into answer set
   answer_text = 'Paris',
   questionID = @questionID;

insert into answer set
   answer_text = 'Edinburgh',
   questionID = @questionID;


insert into question set
   question_text = 'How many yards are there in a mile?';

select @questionID := last_insert_id();

insert into answer set
   answer_text = '5000',
   questionID = @questionID;

insert into answer set
   answer_text = '1760',
   status = 'right',
   questionID = @questionID;

insert into answer set
   answer_text = '2500',
   questionID = @questionID;


insert into question set
   question_text = 'What are the 3 primary colors?';

select @questionID := last_insert_id();

insert into answer set
   answer_text = 'Red, Grey, Black',
   questionID = @questionID;

insert into answer set
   answer_text = 'Yellow, White, Blue',
   questionID = @questionID;

insert into answer set
   answer_text = 'Green, Blue, Red',
   status = 'right',
   questionID = @questionID;


insert into question set
   question_text = 'RAM is an acronym for?';

select @questionID := last_insert_id();

insert into answer set
   answer_text = 'Random Access Memory',
   status = 'right',
   questionID = @questionID;

insert into answer set
   answer_text = 'Read Access Memory',
   questionID = @questionID;

insert into answer set
   answer_text = 'Read And Memorise',
   questionID = @questionID;

/* data truncated here for brevity */

/* end of data */

and then load it into mysql from the mysql command 
prompt with:

mysql> \. general-quizdata.sql

This may seem like the long-winded version of LOAD DATA, but 
it does make the syntax easier to understand, plus you can 
put any other mysql commands in the file. Also you have the 
data and commands available in a file, in case you have to 
reload the table from scratch again.

HTH

Keith

In theory, theory and practice are the same;
In practice they are not. 

On Wed, 1 Feb 2006, Ryan Stille wrote:

 To: mysql@lists.mysql.com
 From: Ryan Stille [EMAIL PROTECTED]
 Subject: RE: data entry GUI
 
 You can also install MyODBC and then hook an Excel spreadsheet into your
 database.  Editing the spreadsheet will update data in your database.
 This isn't a good solution if you are going to be creating new tables
 often.  But for manipulating data in a known set of tables it's great.
 
 -Ryan




Differences between numbers of rows in tables

2006-02-01 Thread Dan Trainor

Hello, all -

Still kinda new to MySQL, so please forgive me if this is a somewhat dumb 
question...


When issuing 'SHOW TABLE STATUS', I can see clearly that under the Rows 
column for my table, I see 17333.  However, when issuing a 'SELECT COUNT(*) 
FROM table', I see 17203 - a difference of 130.


We're really pounding this database, but I wouldn't expect any sort of 
data to become out of sync.  Is this a corruption of tables?  We're 
using the InnoDB storage engine for this particular table, if that helps.


Thanks!
-dant




Re: MySQL installation failure

2006-02-01 Thread Greg 'groggy' Lehey
On Wednesday,  1 February 2006 at 15:01:15 -0600, Hong Li wrote:
 Hi

 I am doing investigation to determine whether we can use MySQL in
 our project.  But I got error message when I try to install MySQL on
 Unix server.  Could you please help me?

 group name wheel not found in group table(s)

This suggests that you're using a BSD-style package.  BSD systems have
group wheel, many UNIX systems don't.  Can you give details of your
system and what version of MySQL you're trying to install?

Greg
--
Greg Lehey, Senior Software Engineer
MySQL AB, http://www.mysql.com/
Echunga, South Australia
Phone: +61-8-8388-8286   Mobile: +61-418-838-708
VoIP:  sip:[EMAIL PROTECTED], sip:[EMAIL PROTECTED]

Are you MySQL certified?  http://www.mysql.com/certification/


pgpTPx3cLWuAB.pgp
Description: PGP signature


Re: Differences between numbers of rows in tables

2006-02-01 Thread Dan Trainor

Dan Trainor wrote:

Hello, all -

Still kinda new to MySQL, so please forgive me if this is a somewhat dumb 
question...


When issuing 'SHOW TABLE STATUS', I can see clearly that under the Rows 
column for my table, I see 17333.  However, when issuing a 'SELECT COUNT(*) 
FROM table', I see 17203 - a difference of 130.


We're really pounding this database, but I wouldn't expect any sort of 
data to become out of sync.  Is this a corruption of tables?  We're 
using the InnoDB storage engine for this particular table, if that helps.


Thanks!
-dant



Hi -

For the record -

http://bugs.mysql.com/bug.php?id=5755

I now understand the problem.

Thanks!
-dant




Re: Differences between numbers of rows in tables

2006-02-01 Thread Dan Trainor

Hi -

Anyone else get the following message with *every*single*post* to the 
list?  Anyone have a procmail filter established already? ;)


Thanks
-dant


[EMAIL PROTECTED] wrote:

[EMAIL PROTECTED]

If you really intend for this to get to me, remove the 'no' in the eMail address 
above  I'll send you my new one.

rs









Re: Differences between numbers of rows in tables

2006-02-01 Thread Dan Trainor

Dan Trainor wrote:

Hello, all -

Still kinda new to MySQL, so please forgive me if this is a somewhat dumb 
question...


When issuing 'SHOW TABLE STATUS', I can see clearly that under the Rows 
column for my table, I see 17333.  However, when issuing a 'SELECT COUNT(*) 
FROM table', I see 17203 - a difference of 130.


We're really pounding this database, but I wouldn't expect any sort of 
data to become out of sync.  Is this a corruption of tables?  We're 
using the InnoDB storage engine for this particular table, if that helps.


Thanks!
-dant



Hi -

So, now this has made me think here.  If 'SHOW TABLE STATUS' only shows 
an estimate of the number of rows contained in a table, how accurate 
is the rest of the data in 'SHOW TABLE STATUS'?


What I'm looking for is to try to find some exact numbers, such as 
physical data size on disk, queries/second, etc etc.  An estimate 
would not give me a very good idea of what I'm looking at.


Is there a more accurate way of getting table information, aside from 
'SHOW TABLE STATUS'?


THanks!
-dant




Unclear about key_len in EXPLAIN output

2006-02-01 Thread James Harvard
According to the manual, the value of key_len enables you to determine how 
many parts of a multiple-part key MySQL actually uses.

However, it doesn't specify quite how one can determine that. It _looks_ like 
the number of bytes in the key (or key part) that is used - is that correct?

TIA,
James Harvard




Re: Differences between numbers of rows in tables

2006-02-01 Thread James Harvard
I suppose this is an inherent limitation of transactional tables - you might 
see x rows, but at the same time a user who has just inserted some rows will see 
x+y rows.

However, I don't see that the numbers are going to be hugely inaccurate. After 
all, if the table were MyISAM and you got an exact number of rows, someone could 
still add another 20 rows in the second after you ran SHOW TABLE STATUS.
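
In concrete terms, for any InnoDB table t, the two numbers come from 
different places:

```sql
-- The Rows column is an estimate taken from InnoDB's index statistics:
-- cheap to read, but it can drift from reality at any moment
SHOW TABLE STATUS LIKE 't';

-- An exact count at the instant the statement runs, computed by scanning,
-- so it is accurate but costs a full traversal on InnoDB
SELECT COUNT(*) FROM t;
```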

James Harvard

At 5:54 pm -0700 1/2/06, Dan Trainor wrote:
So, now this has made me think here.  If 'SHOW TABLE STATUS' only shows an 
estimate of the number of rows contained in a table, how accurate is the 
rest of the data in 'SHOW TABLE STATUS'?

What I'm looking for is to try to find some exact numbers, such as physical 
data size on disk, queries/second, etc etc.  An estimate would not give me a 
very good idea of what I'm looking at.

Is there a more accurate way of getting table information, aside from 'SHOW 
TABLE STATUS'?




Re: Differences between numbers of rows in tables

2006-02-01 Thread Dan Trainor

James Harvard wrote:

I suppose this is an inherent limitation of transactional tables - you might 
see x rows, but at the same time a user who has just inserted some rows will see 
x+y rows.

However, I don't see that the numbers are going to be hugely inaccurate. After 
all, if the table were MyISAM and you got an exact number of rows, someone could 
still add another 20 rows in the second after you ran SHOW TABLE STATUS.

James Harvard

At 5:54 pm -0700 1/2/06, Dan Trainor wrote:


So, now this has made me think here.  If 'SHOW TABLE STATUS' only shows an 
estimate of the number of rows contained in a table, how accurate is the rest 
of the data in 'SHOW TABLE STATUS'?

What I'm looking for is to try to find some exact numbers, such as physical data size on 
disk, queries/second, etc etc.  An estimate would not give me a very good 
idea of what I'm looking at.

Is there a more accurate way of getting table information, aside from 'SHOW 
TABLE STATUS'?





Thanks for the response, James -


Well, I run these two calculations maybe 10 minutes after the 
last person has sent any data to the db.


Is there any way to show the physical disk space used by a table, or 
even a column?  Maybe there's a whole list of functions that I'm 
overlooking here that might explain this.


Thanks!
-dant




Re: Differences between numbers of rows in tables

2006-02-01 Thread Dan Trainor

Thanks for the prompt reply, Augusto -

I completely understand what you're saying.  Maintaining anything like a 
real-time, exact count of rows would be an incredible performance 
degradation, not to mention the overhead and the like.


I think I'm willing to accept the fact that while data is being sent to 
the database server, I won't get an exact reading of database/table/row 
size.  This makes complete sense.  However, what I'm after is how to 
get the exact size of a database/table/row when NO connections are being 
made.  Say, if I started MySQL with networking disabled.  That way, I could 
positively ensure that no data modification was taking place.  Is 
this possible?


Thanks for your wonderful insight.  Both of you.  It is much appreciated.

Thanks!
-dant

Augusto Bott wrote:

Dan,

I'm afraid I must quote James:


I suppose this is an inherent limitation of transactional tables



Think of it like this: some of my databases have transactional
tables several GB in size, with a lot of clients simultaneously connecting
and inserting/updating a lot of data in the same tables. Why would
your transaction see, at exactly the same moment, the data altered by
some other session?

I mean, if we can have different transaction isolation levels on
different storage engines, why couldn't the tables' 'status scoreboard'
itself be updated by 'polling' the storage engines? Why would it be
immediately updated at every transaction commit? The MyISAM storage
engine, with its very tight relationship to the server itself,
probably has a higher 'update rate' than, let's say, InnoDB (which also
may 'suffer' some of the same consequences James mentioned related to
transactions).


--
Augusto Bott
augusto.bott (at) gmail.com






error LNK2001: unresolved external symbol _mysql_real_escape_string

2006-02-01 Thread lakshmi.narasimharao


Hi,

  I am trying to use the mysql_real_escape_string() C API function to escape
special characters. When I try to compile the program with Visual
Studio, I get the following link error:

error LNK2001: unresolved external symbol _mysql_real_escape_string
fatal error LNK1120: 1 unresolved externals Error executing link.exe.

I linked against libmysql.lib and added the path to mysql.h to the include
files.

Please guide me here: what do I need to do to avoid the linking error? I
am using MySQL version 4.0.24.

Thanking you in advance.

Thanks,
Narasimha


-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]

Sent: Tuesday, January 24, 2006 4:23 PM
To: mysql@lists.mysql.com
Subject: Re: Storing a string value with slash(\)


Hello.

 If I retrieve the same value with the slash (\), I am not able to display

 it properly in the GUI. Do we need to use any MySQL-specific functions to

 select such values?

If you're retrieving the same values that you stored, then this is
a GUI problem rather than a MySQL one.

 Is there any way in MySQL so that I can enter any number of slashes

 in the name without escaping each with another slash?

It all depends on the way you're storing data in MySQL. You can pass
everything through mysql_real_escape_string(). See:
  http://dev.mysql.com/doc/refman/5.0/en/mysql-real-escape-string.html



[EMAIL PROTECTED] wrote:


 Hi,


  From the front end I need to use double slashes, i.e. (\\), to

 enter one slash (\) into the MySQL database. I.e., if I enter

 Gleb\Paha, it is stored in MySQL as GlebPaha; if I enter

 S\\Green, it is stored as S\Green in the database.


 Is there any way in MySQL so that I can enter any number of slashes

 in the name without escaping each with another slash?


 If I retrieve the same value with the slash (\), I am not able to

 display it properly in the GUI. Do we need to use any MySQL-specific

 functions to select such values?


 Please guide me to a solution here. Thanking you in advance.


 Thanks,
 Narasimha










 The information contained in this electronic message and any attachments
 to this message are intended for the exclusive use of the addressee(s) and
 may contain proprietary, confidential or privileged information. If you are
 not the intended recipient, you should not disseminate, distribute or copy
 this e-mail. Please notify the sender immediately and destroy all copies of
 this message and any attachments.

 WARNING: Computer viruses can be transmitted via email. The recipient
 should check this email and any attachments for the presence of viruses.
 The company accepts no liability for any damage caused by any virus
 transmitted by this email.

 www.wipro.com


--

For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







Re: Query Speed

2006-02-01 Thread سيد هادی راستگوی حقی
Thanks for your suggestion,
I forgot to mention that each message in traffic_log may have at least 2 statuses
in status_log, and I use two columns, recipient_id and mobile_retry,
to uniquely find each message's statuses.
Maybe I have to change my table structure; I don't know.

It's really important for me to show each message with its last status, so
I have to use GROUP BY, because otherwise a query such as

SELECT * FROM status_log
ORDER BY time;

returns all statuses in order of time, regardless of the multiple statuses per
message. So I think the query may be like this:

SELECT * FROM
(SELECT status, recipient_id, time, mobile_retry FROM status_log GROUP BY
recipient_id HAVING time=MAX(time)) AS sts*
JOIN traffic_log ON traffic_log.recipient_id=sts.recipient_id AND
traffic_log.mobile_retry=sts.mobile_retry

*sts -- to find last status of each message


On 2/1/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:



 سيد هادی راستگوی حقی [EMAIL PROTECTED] wrote on 02/01/2006 11:07:49
 AM:

  Dear All,
  I need your suggestions please.
 
  I have two large tables with these schemas:
 
   Table: traffic_log
  Create Table: CREATE TABLE `traffic_log` (
`recipient_id` int(11) NOT NULL default '0',
`retry` smallint(4) NOT NULL default '0',
`mobile_retry` tinyint(1) NOT NULL default '0',
`orig` varchar(13) default NULL,
`dest` varchar(13) default NULL,
`message` text,
`account_id` int(11) NOT NULL default '0',
`service_id` int(11) NOT NULL default '0',
`dir` enum('IN','OUT') NOT NULL default 'IN',
`plugin` varchar(30) NOT NULL default 'UNKNOWN',
    `date_entered` datetime NOT NULL default '0000-00-00 00:00:00',
`replied` tinyint(4) default '0',
KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
KEY `account_id_2` (`account_id`,`date_entered`)
  ) ENGINE=MyISAM DEFAULT CHARSET=utf8
 
  Table: status_log
  Create Table: CREATE TABLE `status_log` (
`recipient_id` int(11) NOT NULL default '0',
`retry` smallint(4) NOT NULL default '0',
`mobile_retry` tinyint(1) NOT NULL default '0',
`status` smallint(5) NOT NULL default '0',
    `time` datetime NOT NULL default '0000-00-00 00:00:00',
`smsc` varchar(20) NOT NULL default '',
`priority` tinyint(2) unsigned NOT NULL default '0',
`ack` varchar(30) NOT NULL default '',
KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
  ) ENGINE=MyISAM DEFAULT CHARSET=utf8
 
  I want to execute a query to find out each message's last status, so my
  query is:
 
  select * from traffic_log LEFT JOIN status_log ON
  traffic_log.recipient_id=status_log.recipient_id and
  traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
  group by status_log.recipient_id HAVING time=max(time) order by time;
 
  And MySQL explanation about this query is:
  *** 1. row ***
 id: 1
select_type: SIMPLE
  table: traffic_log
   type: ref
  possible_keys: account_id,account_id_2
key: account_id
key_len: 4
ref: const
   rows: 1049598
  Extra: Using temporary; Using filesort
  *** 2. row ***
 id: 1
select_type: SIMPLE
  table: status_log
   type: ref
  possible_keys: recipient_id_2
key: recipient_id_2
key_len: 5
ref: smse.traffic_log.recipient_id,
 smse.traffic_log.mobile_retry
   rows: 2
  Extra:
 
  as you can see, the estimated number of examined rows is 1049598.
 
  But it's very slow.
 
  Do you have any suggestions to speed it up?
 
  --
  Sincerely,
  Hadi Rastgou
  Get Firefox! http://www.spreadfirefox.com/?q=affiliates&id=0&t=1

 This is another variant of the groupwise-maximum query pattern:

 http://dev.mysql.com/doc/refman/4.1/en/example-maximum-column-group-row.html

 My favorite way to write these kinds of queries is to make a temporary
 table (or more if needed) identifying the group and max-per-group then using
 that temp table to create the final query. In your case, the group is the
 recipient_id and the max-per-group will be MAX(`time`);

 CREATE TEMPORARY TABLE tmpLastStatusTimes SELECT
 `recipient_id`
,MAX(`time`) as lastmsg
 FROM status_log
 GROUP BY `recipient_id`;

 CREATE TEMPORARY TABLE tmpLastStatuses SELECT
 sl.*
 FROM status_log sl
 INNER JOIN tmpLastStatusTimes lst
 ON lst.`recipient_id` = sl.`recipient_id`
 AND lst.lastmsg = sl.`time`;

 SELECT * from traffic_log
 LEFT JOIN tmpLastStatuses
 ON traffic_log.recipient_id=tmpLastStatuses.recipient_id
 AND traffic_log.mobile_retry=tmpLastStatuses.mobile_retry
 WHERE  account_id = 32
 order by time;

 DROP TEMPORARY TABLE tmpLastStatuses, tmpLastStatusTimes;

 In your specific sample, you were only looking for messages from a
 particular person (account 32) so 

Re: Query Speed

2006-02-01 Thread سيد هادی راستگوی حقی
Another question: if I run such CREATE TEMPORARY TABLE statements in my
query, can MySQL really do it fast?
Because this query may be run periodically!

