INNODB transaction log size

2004-06-28 Thread Michael Lee
Hi,
 
I would like to migrate my DB from Sybase ASE to MySQL InnoDB tables. The data has been 
extracted and stored as a file. I want to use LOAD DATA INFILE to insert 
the data into MySQL. However, some tables contain millions of rows. Can I control the 
batch size of the loading (e.g. commit the transaction after 5 rows inserted)?
 
If not, should I define a very large transaction log to handle the huge transaction? 
(Currently it is 5M.)
 
Any suggestion is welcomed.
 
TIA
Michael 

[EMAIL PROTECTED]

http://mobile.yahoo.com.hk/


Duplicate selection with change

2004-06-28 Thread Scott Haneda
4.0.18-standard

I need to duplicate a select of records, all in the same table, but also
make a change to one field.

Here is one table I need to do this on
+----------------+---------------+------+-----+---------+
| Field          | Type          | Null | Key | Default |
+----------------+---------------+------+-----+---------+
| id             | int(11)       |      | PRI | NULL    |
| user_id        | int(11)       |      |     | 0       |
| image_title    | varchar(48)   |      |     |         |
| image_filename | varchar(32)   |      |     |         |
| image_url      | varchar(255)  |      |     |         |
| image_width    | int(3)        |      |     | 0       |
| image_height   | int(3)        |      |     | 0       |
| type           | tinyint(1)    |      |     | 0       |
| updated        | timestamp(14) | YES  |     | NULL    |
| added          | timestamp(14) | YES  |     | 00      |
+----------------+---------------+------+-----+---------+

So, I need to basically:
SELECT image_title, image_filename, image_url, image_width, image_height
from images where user_id = '123';

Then, take that result and insert all those rows in the same table, setting
type='2' and added=NOW()

I think I need to do a subselect, the docs are really sparse on examples,
and I don't see how to change the 2 values I need to change.
-- 
-
Scott Haneda                Tel: 415.898.2602
http://www.newgeo.com       Fax: 313.557.5052
[EMAIL PROTECTED]           Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Duplicate selection with change

2004-06-28 Thread Scott Haneda
on 06/27/2004 11:31 PM, Scott Haneda at [EMAIL PROTECTED] wrote:

 So, I need to basically:
 SELECT image_title, image_filename, image_url, image_width, image_height
 from images where user_id = '123';
 
 Then, take that result and insert all those rows in the same table, setting
 type='2' and added=NOW()
 
 I think I need to do a subselect, the docs are really sparse on examples,
 and I don't see how to change the 2 values I need to change.

I may have just made this work in my simple testing; unless someone can tell
me a better way, this is what I came up with:

insert into images (user_id, image_title, image_filename, image_url, type,
added) select user_id, image_title, image_filename, image_url, '4', NOW()
from newsletter_images where user_id = '123';

So basically, since I need to select the SAME fields I am inserting, just
select the 2 that I need to change the data in, ignore what they may or may
not have, and drop in hard values in the select statement.
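A version that matches the description at the top of the thread (type='2', added=NOW(),
copying within the images table itself) would look roughly like this; the column list is
taken from the table definition above, so treat it as a sketch rather than tested code:

insert into images (user_id, image_title, image_filename, image_url, image_width,
image_height, type, added)
select user_id, image_title, image_filename, image_url, image_width, image_height,
'2', NOW()
from images where user_id = '123';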

-- 
-
Scott Haneda                Tel: 415.898.2602
http://www.newgeo.com       Fax: 313.557.5052
[EMAIL PROTECTED]           Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Mysql on Solaris vs Linux

2004-06-28 Thread Aamir Baig
Hello All,
We recently ran a few tests on MySQL in both Solaris and Linux 
environments. It turned out that MySQL was much faster and more responsive 
on Linux. After doing some digging, we found out one of the possible 
reasons is the fact that the MySQL source code uses malloc all over the 
place.

On Solaris, malloc calls block all threads and create heap contention. 
Indeed, Sun actually discourages the use of malloc. This would explain 
why we could never see our CPU utilization go up on Solaris.

Question #1) Are we right in our assumption that MySQL runs way faster 
on Linux rather than Solaris, or are we on drugs?
Question #2) If it does indeed run faster, is there a way, or a ported 
binary of MySQL, that runs equally fast on Solaris? Reason being I have 
two big Sun E3500s lying around that I would like to put to good use.

Thanks in advance,
Aamir
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


SQL challenge

2004-06-28 Thread John Mistler
O.K. you SQL gurus--

I have a difficult query for you that has me stumped.  The table has two
columns: rowID (MEDIUMINT-but *NOT* AUTO_INCREMENT) and theDate (DATETIME).
I need it to find:

1. The COUNT of REPEATED instances of the rowID in the last month.

- so if there are 5 rows with the same rowID in the last month, it would
return 4 (I can only seem to get it to return 10 WHERE t1.rowID =
t2.rowID AND t1.theDate < t2.theDate)

2. The AVERAGE number of REPEATED instances of the rowID per week (Monday
00:00:00 through Monday 00:00:00 one week later) in the last month.

If I need to add table columns I certainly can.

THANKS!

- John


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL challenge

2004-06-28 Thread Martijn Tonies
Hi John,


 O.K. you SQL gurus--

 I have a difficult query for you that has me stumped.  The table has two
 columns: rowID (MEDIUMINT-but *NOT* AUTO_INCREMENT) and theDate
(DATETIME).
 I need it to find:

 1. The COUNT of REPEATED instances of the rowID in the last month.

 - so if there are 5 rows with the same rowID in the last month, it would
 return 4 (I can only seem to get it to return 10 WHERE t1.rowID =
 t2.rowID AND t1.theDate < t2.theDate)

Dunno if it works for you ... but:

why not simply do a COUNT(ROWID) grouped by ROWID and
TheDate month and subtract 1 from the count?
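For instance, something along these lines (a rough sketch; the table name theTable
is only a placeholder, and MySQL lets you reuse the aliases in GROUP BY and HAVING):

SELECT rowID, DATE_FORMAT(theDate, '%Y-%m') AS yearmonth, COUNT(rowID) - 1 AS repeats
FROM theTable
GROUP BY rowID, yearmonth
HAVING repeats > 0;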

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL  MS SQL
Server.
Upscene Productions
http://www.upscene.com


 2. The AVERAGE number of REPEATED instances of the rowID per week (Monday
 00:00:00 through Monday 00:00:00 one week later) in the last month.

 If I need to add table columns I certainly can.

 THANKS!

 - John


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL challenge

2004-06-28 Thread Jigal van Hemert
From: John Mistler [EMAIL PROTECTED]
 I have a difficult query for you that has me stumped.  The table has two
 columns: rowID (MEDIUMINT-but *NOT* AUTO_INCREMENT) and theDate
(DATETIME).
 I need it to find:

 1. The COUNT of REPEATED instances of the rowID in the last month.

This is relatively easy:

SELECT rowID, COUNT( rowID ) - 1 AS countrepeat
FROM tablename
WHERE theDate < CONCAT(DATE_FORMAT(CURDATE(), '%Y-%m-'), '01') AND
theDate >= CONCAT(DATE_FORMAT(CURDATE(), '%Y-%m-'), '01') - INTERVAL 1 MONTH
GROUP BY rowID
HAVING countrepeat > 0

See if I have time for #2 later...

Regards, Jigal


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL challenge

2004-06-28 Thread Roger Baklund
* John Mistler
 I have a difficult query for you that has me stumped.  The table has two
 columns: rowID (MEDIUMINT-but *NOT* AUTO_INCREMENT) and theDate
 (DATETIME).
 I need it to find:

 1. The COUNT of REPEATED instances of the rowID in the last month.

 - so if there are 5 rows with the same rowID in the last month, it would
 return 4 (I can only seem to get it to return 10 WHERE t1.rowID =
 t2.rowID AND t1.theDate < t2.theDate)

I'm not sure if I understand, but have you tried something like this:

select rowID,COUNT(*) AS cnt
  from theTable
  where month(theDate) = month(curdate())
  group by rowID
  having cnt>1;

If by "last month" you meant the last month in the dataset, you could find that
month by issuing:

select @m:=month(max(theDate)) from theTable;

You say you want the answer 4 when the count is 5...? You can subtract one
from the count in the query:

select rowID,COUNT(*)-1 AS cnt
  from theTable
  where month(theDate) = @m
  group by rowID
  having cnt>0;

 2. The AVERAGE number of REPEATED instances of the rowID per week (Monday
 00:00:00 through Monday 00:00:00 one week later) in the last month.

You want to group by week, you can get the week using the week() function.
For weeks starting on monday, the second parameter should be 1. You want the
average of the counts... try using a temporary table, something like this:

create temporary table tmp1 select
  week(theDate,1) AS week, rowID, count(*)-1 AS cnt
  from theTable
  where month(theDate) = @m
  group by week,rowID
  having cnt>0;
select week,avg(cnt) from tmp1 group by week;

--
Roger


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [MySQL] Duplicate selection with change

2004-06-28 Thread Ashley M. Kirchner
Scott Haneda wrote:
So, I need to basically:
SELECT image_title, image_filename, image_url, image_width, image_height
from images where user_id = '123';
Then, take that result and insert all those rows in the same table, setting
type='2' and added=NOW()
I think I need to do a subselect, the docs are really sparse on examples,
and I don't see how to change the 2 values I need to change.
   Why can't you simply UPDATE the record?
   mysql> update images set type='2', added=NOW() where user_id='123';
--
W | I haven't lost my mind; it's backed up on tape somewhere.
 +
 Ashley M. Kirchner mailto:[EMAIL PROTECTED]   .   303.442.6410 x130
 IT Director / SysAdmin / WebSmith . 800.441.3873 x130
 Photo Craft Laboratories, Inc.. 3550 Arapahoe Ave. #6
 http://www.pcraft.com . .  ..   Boulder, CO 80303, U.S.A.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Error in JDBC retrieval of mediumint column (mysql4.1.2b-alpha-win and java-3.1.2-alpha driver)

2004-06-28 Thread Oliver Chua
To be able to use subqueries, I'm now using mysql-4.1.2b-alpha-win.zip
I'm using mysql-connector-java-3.1.2-alpha.zip as my jdbc driver to connect
to the db...

I'm selecting some columns from a table where the column is created using
mediumint.
Somehow, after creation, it becomes mediumint(9)
PreparedStatement.executeQuery throws SQLException.

I'm thinking maybe the datatype is somehow not recognized.

java.sql.SQLException: Unknown type '9 in column 0 of 3 in binary-encoded
result set.
 at com.mysql.jdbc.MysqlIO.unpackBinaryResultSetRow(MysqlIO.java:3888)
 at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1211)
 at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2036)
 at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:395)
 at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:1824)
 at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1278)
 at
com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement
.java:1283)
 at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStateme
nt.java:903)
 at
com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)
...

Any ideas why this is happening? And any workaround available?

Thanks in advance.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



SQLException when retrieving resultset containing mediumint (4.1.2b-alpha-win db and java-3.1.2-alpha driver)

2004-06-28 Thread Oliver Chua
I'm using mysql-4.1.2b-alpha-win.zip and mysql-connector-java-3.1.2-alpha.zip
to access db via JDBC.

The column is mediumint(9)
It was created using type mediumint, somehow it was changed to mediumint(9)
I think there may be a problem in the size...

java.sql.SQLException: Unknown type '9 in column 0 of 3 in binary-encoded result set.
 at com.mysql.jdbc.MysqlIO.unpackBinaryResultSetRow(MysqlIO.java:3888)
 at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1211)
 at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2036)
 at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:395)
 at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:1824)
 at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1278)
 at 
com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1283)
 at 
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:903)
 at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)

Any ideas to correct this? Or any workarounds?

triggers or stored procedures

2004-06-28 Thread Carlos J Souza
hello all

What is more important? triggers or stored procedures.  

I think that triggers are a lot more important than stored procedures, because stored 
procedures can be implemented in the front-end application.  

In version 5, triggers should be implemented instead of stored procedures.  


regards

Carlos J Souza
from brazil
[EMAIL PROTECTED]
2004-06-28


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



error replicating from mysql3.23.36 to mysql4.0.18

2004-06-28 Thread [EMAIL PROTECTED]
(excuse my English)

The whole process is OK: the replication starts, but a few minutes later MySQL 
shows me this error:
Last_error: Error 'Table 'eshablar_new.contadorZs' doesn't exist' on query 
'UPDATE contadorZs set indCont = indCont + 1 where idUser = 2394'. Default 
database: 'eshablar_new'

The problem is that the 'contadorZs' table has been created by mysqldump under the 
name 'contadorzs', and all queries that reference this table as 'contadorZs' 
crash...

In MySQL 3.23.36 the table names are not case-sensitive, while in 
MySQL 4.0.18 the names of tables are case-sensitive.

This is getting me into a lot of trouble.

For example:

Look at this on a 3.23.36 server:

=0
mysql> create table pepePepe( id int );
Query OK, 0 rows affected (0.00 sec)

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| pepepepe       |
+----------------+
1 row in set (0.00 sec)
=0

Note that I created the table with a capital 'P', but 'show 
tables' lists it with a lowercase 'p'.

Now, the same on a MySQL 4.0.18 server:

=0
mysql> create table pepePepe( id int );
Query OK, 0 rows affected (0.01 sec)

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| pepePepe       |
+----------------+
1 row in set (0.00 sec)
=0

So... I don't know whether I have an unsolvable problem or 
something like that.

I only wanted to comment on this. :D

thanks
d2clon
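One server option directly related to this behaviour is lower_case_table_names; a
minimal sketch for checking it on both servers (setting it in my.cnf is an assumption
about the setup, not something taken from the report above):

-- run on both the 3.23.36 master and the 4.0.18 slave
SHOW VARIABLES LIKE 'lower_case_table_names';
-- with lower_case_table_names=1 (set under [mysqld] in my.cnf before any tables
-- are created), the server stores and compares table names in lowercase, which
-- matches the behaviour shown for the 3.23.36 server above.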


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: triggers or stored procedures

2004-06-28 Thread Martijn Tonies
Hi Carlos,

 What is more important? triggers or stored procedures.

 I think that triggers they are a lot more important than stored proc.
because stored procs they can be implemented in the front end application.


Then they wouldn't be Stored Procedures anymore would they :-)

 In the version 5 should be implemented triggers instead of stored
procedures.

Both are very much alike - the difference is direct
calling compared to triggered calling. Anyway, the
language is the same.

IMO, they could/should be implemented both in 5.0

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL  MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: triggers or stored procedures

2004-06-28 Thread Josh Trutwin
On Mon, 28 Jun 2004 07:11:04 -0300
Carlos J Souza [EMAIL PROTECTED] wrote:

 hello all
 
 What is more important? triggers or stored procedures.  
 
 I think that triggers they are a lot more important than stored proc. because stored 
 procs they can be implemented in the front end application.  
 
 In the version 5 should be implemented triggers instead of stored procedures.  

What do you usually call with a trigger though?  A stored procedure. 

MySQL seems to have done just fine without these for many years.  I say if they can 
get these features in there without losing the characteristics that make MySQL so 
appealing, great, otherwise forget it.

Josh

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Rows Counter

2004-06-28 Thread Javier Diaz
Hi everyone


I need to have a row counter in a query but I'm not sure if there is any way to
do this. In essence, all I need is to get a result like this:

Counter  column-A  column-B
   1       A-1       B-1
   2       A-2       B-2
   :        :         :

where A and B are real columns and Counter is just a consecutive number for each row
in the query result.

Thanks
 Javier
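One common approach to this (a sketch using a MySQL user variable; the table and
column names are only placeholders) is:

SET @counter := 0;
SELECT @counter := @counter + 1 AS Counter, columnA AS 'column-A', columnB AS 'column-B'
FROM mytable;

The counter is per-connection and restarts from the SET each time the pair of
statements is run.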




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: triggers or stored procedures

2004-06-28 Thread Gilbert Wu
Hi,

I would agree with Martijn that both features are important. It is like arguing whether air 
conditioning is more important than an automatic gearbox when you buy a car. They are 
both useful, but which one you go for depends on where and how you drive the car. 
Personally, I would have both, and also leather seats and cruise control...

It may make an interesting hobby to build your own car, but I hate reinventing the 
wheel.

Gilbert
-Original Message-
From: Martijn Tonies [mailto:[EMAIL PROTECTED]
Sent: 28 June 2004 11:39
To: mysql
Subject: Re: triggers or stored procedures


Hi Carlos,

 What is more important? triggers or stored procedures.

 I think that triggers they are a lot more important than stored proc.
because stored procs they can be implemented in the front end application.


Then they wouldn't be Stored Procedures anymore would they :-)

 In the version 5 should be implemented triggers instead of stored
procedures.

Both are very much alike - the difference is direct
calling compared to triggered calling. Anyway, the
language is the same.

IMO, they could/should be implemented both in 5.0

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL  MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



slow query when searching database of over 2 million records

2004-06-28 Thread Aasef Iqbal
Hi,

I am working on a web project where one of my pages has to show a count of 
the total number of matches found and short text for a few of them, just like a 
search engine. 

I need to issue two queries: the first one fetches a count of total matches found, and the 
second query finds details for 10/20 of those results as specified by the user. The 
queries are:

select count(distinct CURL) as rc from tbl_directory where MATCH (CName,DSCR) AGAINST 
('software project management') and LinkType = 1
(Time Taken for Total Count :: 207.27026605606)

select distinct CURL,CName,DSCR, MATCH (CName,DSCR) AGAINST ('software project 
management') as rel from tbl_directory where MATCH (CName,DSCR) 
AGAINST ('software project management') and LinkType = 1 limit 0, 10 
(0.26 seconds)

The first query always takes much longer whenever I search for something new; in 
subsequent searches the result is quite satisfactory (as it comes from the query 
cache).

Is there any way to speed up the search? I'm querying some 2 million records.

Thanks in advance

Asif Iqbal 





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Flowing Text Into Multiple Columns

2004-06-28 Thread Parker Morse
On Sunday, Jun 27, 2004, at 12:31 US/Eastern, David Blomstrom wrote:
Suppose I want to display an entire field, but not in
one long column. Instead, I want to flow it evenly
into several columns.
[snip]
I haven't yet learned of a way to do this with PHP, so
I wondered if there's some sort of trick you can use
with MySQL to flow text into multiple columns.
Not with MySQL. I've done it with PHP; I'll send that off-list.
pjm
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: slow query when searching database of over 2 million records

2004-06-28 Thread Victor Pendleton
Have you run an EXPLAIN on the query to identify the execution path?

-Original Message-
From: Aasef Iqbal
To: [EMAIL PROTECTED]
Sent: 6/28/04 6:15 AM
Subject: slow query when searching database of over 2 million records

Hi,

I am working on a web project project where one of my pages has to show
a count of total number of matches found and short text for few of them,
just like a 
search engine. 

I need to issue two queries first one fetches a count  for total matches
found and second query finds detail to 10/20 of those results as
specified by the user. the 
quries are

select count(distinct CURL) as rc from tbl_directory where MATCH
(CName,DSCR) AGAINST (software project management ) and LinkType = 1
(Time Taken for Total Count :: 207.27026605606)

select distinct CURL,CName,DSCR, MATCH (CName,DSCR) AGAINST (software
project management ) as rel from tbl_directory where MATCH (CName,DSCR)

AGAINST (software project management ) and LinkType = 1 limit 0, 10 
(0.26 seconds)

first query always takes much longer time whenever I search for
something new, n in subsequent searches the result is quite satisfactory
(as it comes from query 
cache)

is there anyway to speed up the search, im querying some 2 million
records.

thanx in adavnce

Asif Iqbal 





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: SQLException when retrieving resultset containing mediumint ( 4.1.2b-alpha-win db and java-3.1.2-alpha driver)

2004-06-28 Thread Victor Pendleton
Can you post the code or sql that you are attempting to execute that is
throwing this error?

-Original Message-
From: Oliver Chua
To: [EMAIL PROTECTED]
Sent: 6/28/04 5:08 AM
Subject: SQLException when retrieving resultset containing mediumint
(4.1.2b-alpha-win db and java-3.1.2-alpha driver)

I'm using mysql-4.1.2b-alpha-win.zip and
mysql-connector-java-3.1.2-alpha.zip
to access db via JDBC.

The column is mediumint(9)
It was created using type mediumint, somehow it was changed to
mdiumint(9)
I think there may be a problem in the size...

java.sql.SQLException: Unknown type '9 in column 0 of 3 in
binary-encoded result set.
 at com.mysql.jdbc.MysqlIO.unpackBinaryResultSetRow(MysqlIO.java:3888)
 at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1211)
 at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2036)
 at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:395)
 at
com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:1824)
 at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1278)
 at
com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedState
ment.java:1283)
 at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedSta
tement.java:903)
 at
com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:162
2)

Any ideas to correct this? Or any workarounds?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Duplicate selection with change

2004-06-28 Thread SGreen

You got it!! That is the easiest way to duplicate data while
changing(setting) certain columns to new values.

WTG!
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


   

  From:    Scott Haneda [EMAIL PROTECTED]
  To:      MySql [EMAIL PROTECTED]
  Date:    06/28/2004 02:43 AM
  Subject: Re: Duplicate selection with change





on 06/27/2004 11:31 PM, Scott Haneda at [EMAIL PROTECTED] wrote:

 So, I need to basically:
 SELECT image_title, image_filename, image_url, image_width, image_height
 from images where user_id = '123';

 Then, take that result and insert all those rows in the same table,
setting
 type='2' and added=NOW()

 I think I need to do a subselect, the docs are really sparse on examples,
 and I don't see how to change the 2 values I need to change.

I may have just made this work in my simple testing, unless someone can
tell
me a better, way, this is what I came up with:

insert into images (user_id, image_title, image_filename, image_url, type,
added) select user_id, image_title, image_filename, image_url, '4', NOW()
from newsletter_images where user_id = '123';

So basically, since I need to select the SAME fields I am inserting, just
select the 2 that I need to change the data in, ignore what they may or may
not have, and drop in hard values in the select statement.

--
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: INNODB transaction log size

2004-06-28 Thread Victor Pendleton
I do not believe this is currently an option in the `load data infile`
syntax. One option would be to read the file programmatically and issue the
commits after `x` number of inserts.
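For example, if the extract is split into smaller files up front, each LOAD DATA
statement is committed on its own (the file names and chunk size below are only
illustrative):

-- e.g. after splitting the extract with something like `split -l 50000`
SET AUTOCOMMIT = 1;
LOAD DATA INFILE '/tmp/mytable.part00' INTO TABLE mytable;
LOAD DATA INFILE '/tmp/mytable.part01' INTO TABLE mytable;
-- ...one statement per chunk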

-Original Message-
From: Michael Lee
To: [EMAIL PROTECTED]
Sent: 6/28/04 1:21 AM
Subject: INNODB transaction log size

Hi,
 
I would like to migrate my DB from Sybase ASE to MySQL INNODB table.
Data has been extracted and stored as a file. I want to use the command
Load Data Infile to insert the data to MySQL. However, some table
contains millions of rows. Can i control the batch size of the loading
(e.g. commit the transaction after 5 rows inserted)?
 
If no, should i define a very large transaction log to handle the huge
transaction? (currently, it is 5M)
 
Any suggestion is welcomed.
 
TIA
Michael 

http://mobile.yahoo.com.hk/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: triggers or stored procedures

2004-06-28 Thread SGreen

Carlos,

I think you have a misunderstanding about what a trigger is or does.  A
trigger is, in its most basic form,  an automatically executed stored
procedure.

The ability to detect an event is pointless without the ability to do
something once that event occurs. We have to have the one (stored
procedures) in order to get the other (triggers). Be patient, I am sure
triggers are coming soon.

Respectfully,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


   

  From:    Carlos J Souza [EMAIL PROTECTED]
  To:      mysql [EMAIL PROTECTED]
  Date:    06/28/2004 07:11 AM
  Subject: triggers or stored procedures





hello all

What is more important? triggers or stored procedures.

I think that triggers they are a lot more important than stored proc.
because stored procs they can be implemented in the front end application.


In the version 5 should be implemented triggers instead of stored
procedures.


regards

Carlos J Souza]
from brazil
[EMAIL PROTECTED]
2004-06-28


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: slow query when searching database of over 2 million records

2004-06-28 Thread SGreen

You should only need to run your second query with the
SQL_CALC_FOUND_ROWS option, followed by a new query using SELECT FOUND_ROWS(), to minimize
the number of times you need to _actually_ search your database.

see: http://dev.mysql.com/doc/mysql/en/SELECT.html
and:  http://dev.mysql.com/doc/mysql/en/Information_functions.html

for more details on using these functions to save trips through your data.

I would also look at the EXPLAIN of your query to see if there are any hints
available there to optimize your query.
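Applied to the queries quoted below, that would look roughly like this (a sketch; the
search string is written here as a plain quoted literal):

SELECT SQL_CALC_FOUND_ROWS DISTINCT CURL, CName, DSCR,
       MATCH (CName,DSCR) AGAINST ('software project management') AS rel
FROM tbl_directory
WHERE MATCH (CName,DSCR) AGAINST ('software project management')
  AND LinkType = 1
LIMIT 0, 10;

SELECT FOUND_ROWS();   -- total number of matches, without re-running the search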

Respectfully,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




   

  From:     Aasef Iqbal [EMAIL PROTECTED]
  To:       [EMAIL PROTECTED]
  Reply-To: aneedz
  Date:     06/28/2004 07:15 AM
  Subject:  slow query when searching database of over 2 million records





Hi,

I am working on a web project project where one of my pages has to show a
count of total number of matches found and short text for few of them, just
like a
search engine.

I need to issue two queries first one fetches a count  for total matches
found and second query finds detail to 10/20 of those results as specified
by the user. the
quries are

select count(distinct CURL) as rc from tbl_directory where MATCH
(CName,DSCR) AGAINST (software project management ) and LinkType = 1
(Time Taken for Total Count :: 207.27026605606)

select distinct CURL,CName,DSCR, MATCH (CName,DSCR) AGAINST (software
project management ) as rel from tbl_directory where MATCH (CName,DSCR)
AGAINST (software project management ) and LinkType = 1 limit 0, 10
(0.26 seconds)

first query always takes much longer time whenever I search for something
new, n in subsequent searches the result is quite satisfactory (as it comes
from query
cache)

is there anyway to speed up the search, im querying some 2 million records.

thanx in adavnce

Asif Iqbal





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



character sets on client or server?

2004-06-28 Thread Marten Lehmann
Hello,
do I need the character sets on the client side or just on the server? 
Client and server are different servers, so I want to reduce the 
client-package to a minimum:

/usr/bin/mysql
/usr/bin/mysqldump
/usr/bin/mysqlshow
/usr/lib/mysql/libmysqlclient*
I don't need man-pages et cetera. So, if a client wants to use a 
different locale, does he need the character sets to be stored locally, 
or will they be sent from the server, so that he doesn't have to store 
them locally?

Regards
Marten
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MySQL Storage Engines and Table Types.....

2004-06-28 Thread Scott Fletcher
Hi!
 
How do I know which storage engine I am using or running for my
databases/tables?  
 
Thanks,
 Scott F.


RE: MySQL Storage Engines and Table Types.....

2004-06-28 Thread Victor Pendleton
Do a show variables from the MySQL monitor.
show variables; 

-Original Message-
From: Scott Fletcher
To: [EMAIL PROTECTED]
Sent: 6/28/04 10:27 AM
Subject: MySQL Storage Engines and Table Types.

Hi!
 
How do I know which of the storage engine am I using or running, or
whatever that use the databases/tables?  
 
Thanks,
 Scott F.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



select and update field in one query

2004-06-28 Thread darren
Hi all,

I need to select the top 1000 records based on the usage field and update
the checked field to '1'.

I am currently doing it in 2 statements; i.e. select first then update from
a MYSQL_ROW array. How do I do it in just a single statement?

Please advise on a better way. Also, will doing this in one statement be
more efficient from the server perspective?

Thanks in advance


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Storage Engines and Table Types.....

2004-06-28 Thread SGreen

You can see a list of the available storage types with  SHOW ENGINES

To see which engine is in use for any table you can SHOW TABLE STATUS or
SHOW CREATE TABLE tablename

All of these commands, and more, are documented at :
http://dev.mysql.com/doc/mysql/en/SHOW.html
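For example (the table name is just a placeholder; the Type/Engine column of SHOW
TABLE STATUS names the storage engine):

SHOW ENGINES;
SHOW TABLE STATUS LIKE 'mytable';
SHOW CREATE TABLE mytable;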

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


   

  From:    Scott Fletcher [EMAIL PROTECTED]
  To:      [EMAIL PROTECTED]
  Date:    06/28/2004 11:27 AM
  Subject: MySQL Storage Engines and Table Types.





Hi!

How do I know which of the storage engine am I using or running, or
whatever that use the databases/tables?

Thanks,
 Scott F.






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: select and update field in one query

2004-06-28 Thread Rhino

- Original Message - 
From: darren [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, June 28, 2004 11:42 AM
Subject: select and update field in one query


 Hi all,

 I need to select the top 1000 records based on the usage field and
update
 the checked field to '1'.

 I am currently doing it in 2 statements; i.e. select first then update
from
 a MYSQL_ROW array. How do I do it in just a single statement?

If you are selecting the 1000 rows so that you can display them and then
update them to keep your database up-to-date, you can't do both in one
statement. A SELECT statement can only display data and an UPDATE can only
update data; an UPDATE cannot display rows *and* change values on those
rows.

If you are selecting the rows simply so that you can update them but don't
display them, the update should be straightforward if you are using a
version of MySQL that supports subqueries (V4.1.x). Something like this:

update my_table
set checked = 1
where primary_key in
(select primary_key
from my_table
where
order by usage
limit 1000)

This query probably won't work as is; I'm not that fluent on MySQL's brand
of SQL yet. Other posters on this group can help you fine tune it so that it
works.

Rhino


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: select and update field in one query

2004-06-28 Thread SGreen

I would personally do it in three statements. The first one creates a
temporary table from the results of the (SELECT) below. The second is the
update with a JOIN to the temporary table. The third drops the temporary
table. Doing it that way avoids the need to scroll through your recordset
client-side.

I think this could(might) work for you as an all-in-one statement (not
tested):

UPDATE tablename INNER JOIN (SELECT tablename.ID from tablename WHERE
usage='somevalue' ORDER BY id LIMIT 1000) as rows_to_update ON
rows_to_update.id = tablename.id
SET checked=1

As I said, it's not tested. It also relies on the derived table
(anonymous view) feature of MySQL which may not be available in your
version.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  From:    darren [EMAIL PROTECTED]
  To:      [EMAIL PROTECTED]
  Date:    06/28/2004 11:42 AM
  Subject: select and update field in one query





Hi all,

I need to select the top 1000 records based on the usage field and update
the checked field to '1'.

I am currently doing it in 2 statements; i.e. select first then update from
a MYSQL_ROW array. How do I do it in just a single statement?

Please advice on a better way. Also...will doing this in one statement be
more efficient from the server perspective?

Thanks in advance


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



sql join statement that I do not understand

2004-06-28 Thread Joseph Norris


Group,

First of all thanks to anyone who can respond to this - I am really stumped.


I have been trying to figure this one out and maybe someone out there with a
little deep understanding of joins in sql can give me a hand.

I am working on a system that creates these sql statements on the fly and so
the table names and fields are really perl variables.

This the sql without the join:


select
cs_fld_cs_tbl_l.cs_type,field_name,name,type,type_sql,rl_table,cs_tbl from
cs_fld,  cs_fld_cs_tbl_l
 where cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
 and  cs_fld_cs_tbl_l.cs_tbl_id = '23'  AND cs_fld_cs_tbl_l.cs_type =
'basic'



+---------+------------+------------------+------------+-------------+----------+--------+
| cs_type | field_name | name             | type       | type_sql    | rl_table | cs_tbl |
+---------+------------+------------------+------------+-------------+----------+--------+
| basic   | status     | Status           | recordid   | int         | status   | [23]   |
| basic   | body       | Main Body        | textarea   | text        |          | [23]   |
| basic   | section    | Section          | recordid   | int         | demsect  | [23]   |
| basic   | title      | Title            | text       | varchar(50) |          | [23]   |
| basic   | assignu    | Assign to User:  | recordid   | int         | u        | [23]   |
| basic   | subsect    | Subsection       | recordlist | list        | subsect  | [23]   |
| basic   | assigngr   | Assign to Group: | recordid   | int         | cs_com   | [23]   |
| basic   | sorder     | Relative Order   | numeric    | int         |          | [23]   |
| basic   | con_type   | Content Type:    | recordid   | int         | con_type | [23]   |
+---------+------------+------------------+------------+-------------+----------+--------+
9 rows in set (0.01 sec)





This is the join:

select
cs_fld_cs_tbl_l.cs_type,field_name,name,type,type_sql,rl_table,cs_tbl from
cs_fld left join cs_fld_cs_tbl_l
 ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
 WHERE cs_fld_cs_tbl_l.cs_tbl_id = '23'  AND cs_fld_cs_tbl_l.cs_type =
'basic'


+---------+------------+------------------+------------+-------------+----------+--------+
| cs_type | field_name | name             | type       | type_sql    | rl_table | cs_tbl |
+---------+------------+------------------+------------+-------------+----------+--------+
| basic   | status     | Status           | recordid   | int         | status   | [23]   |
| basic   | body       | Main Body        | textarea   | text        |          | [23]   |
| basic   | title      | Title            | text       | varchar(50) |          | [23]   |
| basic   | subsect    | Subsection       | recordlist | list        | subsect  | [23]   |
| basic   | assigngr   | Assign to Group: | recordid   | int         | cs_com   | [23]   |
| basic   | section    | Section          | recordid   | int         | demsect  | [23]   |
| basic   | sorder     | Relative Order   | numeric    | int         |          | [23]   |
| basic   | assignu    | Assign to User:  | recordid   | int         | u        | [23]   |
| basic   | con_type   | Content Type:    | recordid   | int         | con_type | [23]   |
+---------+------------+------------------+------------+-------------+----------+--------+
9 rows in set (4.44 sec)



Notice the time difference?  I thought that the join statement was supposed
to be more efficient.


I did some dinking with the joins and I got this:

It appears that optimization of joins depends upon what table you are
joining to what:


This is the new sql:

mysql> select
    -> cs_fld_cs_tbl_l.cs_type,field_name,name,type,type_sql,rl_table,cs_tbl from
    -> cs_fld_cs_tbl_l left join cs_fld
    ->  ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
    ->  WHERE cs_fld_cs_tbl_l.cs_tbl_id = '23'  AND cs_fld_cs_tbl_l.cs_type = 'basic';
+---------+------------+------------------+------------+-------------+----------+--------+
| cs_type | field_name | name             | type       | type_sql    | rl_table | cs_tbl |
+---------+------------+------------------+------------+-------------+----------+--------+
| basic   | status     | Status           | recordid   | int         | status   | [23]   |
| basic   | body       | Main Body        | textarea   | text        |          | [23]   |
| basic   | section    | Section          | recordid   | int         | demsect  | [23]   |
| basic   | title      | Title            | text       | varchar(50) |          | [23]   |
| basic   | assignu    | Assign to User:  | recordid   | int         | u        | [23]   |
| basic   | subsect    | Subsection       | recordlist | list        | subsect  | [23]   |
| basic   | assigngr   | Assign to Group: | recordid   | int         | cs_com   | [23]   |
| basic   | sorder     | Relative Order   | numeric    | int         |          | [23]   |
| basic   | con_type   | Content Type:    | recordid   | int         | con_type | [23]   |
+---------+------------+------------------+------------+-------------+----------+--------+
9 rows in set (0.01 sec)
All I did was transpose the cs_fld 

UNION with INTO OUTFILE and ORDER BY

2004-06-28 Thread Josh Trutwin
Noticed something interesting about UNION and INTO OUTFILE

If I do this:

(SELECT a FROM b)
UNION
(SELECT a INTO OUTFILE 'out.txt' FROM c);

The query executes - no results printed to the screen but rather saved to the out.txt 
file, as intended.  But if I do:

(SELECT a FROM b)
UNION
(SELECT a INTO OUTFILE 'out.txt' FROM c)
ORDER BY a;

The query works but the results only print out to screen and do not get dumped into 
out.txt.  Actually nothing gets dumped to out.txt.

moving the INTO OUTFILE is invalid syntax:

(SELECT a FROM b)
UNION
(SELECT a FROM c)
INTO OUTFILE 'out.txt' 
ORDER BY a;

Is this normal/intentional?

MySQL 4.0.20 on Debian Linux.

Josh
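One workaround (a sketch using a temporary table, untested against that version) is to
materialise the union first and then dump it ordered:

CREATE TEMPORARY TABLE tmp_union SELECT a FROM b UNION SELECT a FROM c;
SELECT a INTO OUTFILE 'out.txt' FROM tmp_union ORDER BY a;
DROP TABLE tmp_union;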

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: slow query when searching database of over 2 million records

2004-06-28 Thread Amit_Wadhwa
Use MySQL Query Caching 

-Original Message-
From: Aasef Iqbal [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 28, 2004 4:46 PM
To: [EMAIL PROTECTED]
Subject: slow query when searching database of over 2 million records

Hi,

I am working on a web project project where one of my pages has to show
a count of total number of matches found and short text for few of them,
just like a search engine. 

I need to issue two queries first one fetches a count  for total matches
found and second query finds detail to 10/20 of those results as
specified by the user. the quries are

select count(distinct CURL) as rc from tbl_directory where MATCH
(CName,DSCR) AGAINST (software project management ) and LinkType = 1
(Time Taken for Total Count :: 207.27026605606)

select distinct CURL,CName,DSCR, MATCH (CName,DSCR) AGAINST (software
project management ) as rel from tbl_directory where MATCH (CName,DSCR)
AGAINST (software project management ) and LinkType = 1 limit 0, 10
(0.26 seconds)

first query always takes much longer time whenever I search for
something new, n in subsequent searches the result is quite satisfactory
(as it comes from query
cache)

is there anyway to speed up the search, im querying some 2 million
records.

thanx in adavnce

Asif Iqbal 





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Error in JDBC retrieval of mediumint column (mysql4.1.2b-alpha-win and java-3.1.2-alpha driver)

2004-06-28 Thread Mark Matthews

Oliver Chua wrote:

 To be able to use subqueries, I'm now using mysql-4.1.2b-alpha-win.zip
 I'm using mysql-connector-java-3.1.2-alpha.zip as my jdbc driver to
connect
 to the db...

 I'm selecting some columns from a table where the column is created using
 mediumint.
 Somehow, after creation, it becomes mediumint(9)
 PreparedStatement.executeQuery throws SQLException.

 I'm thinking maybe the datatype is somehow not recognized.

 java.sql.SQLException: Unknown type '9 in column 0 of 3 in binary-encoded
 result set.
  at com.mysql.jdbc.MysqlIO.unpackBinaryResultSetRow(MysqlIO.java:3888)
  at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1211)
  at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2036)
  at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:395)
  at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:1824)
  at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1278)
  at

com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement
 .java:1283)
  at

com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStateme
 nt.java:903)
  at
 com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)
 ...

 Any ideas why this is happening? And any workaround available?

 Thanks in advance.

Oliver,

I went ahead and turned this into a bug report
(http://bugs.mysql.com/bug.php?id=4311)

It's now fixed, and will be in the 3.1.3 release of Connector/J. You can
checkout a nightly snapshot build with the fix after 00:00 GMT on the
29th of June at http://downloads.mysql.com/snapshots.php

(in the meantime, you could turn off server-side prepared statements
with 'useServerPrepStmts=false' in your URL).

Sorry for any hassle this has caused.

-Mark

- --
Mr. Mark Matthews
MySQL AB, Software Development Manager, J2EE and Windows Platforms
Office: +1 708 332 0507
www.mysql.com

MySQL Guide to Lower TCO
http://www.mysql.com/it-resources/white-papers/tco.php

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sql join statement that I do not understand

2004-06-28 Thread SGreen

Joseph,

You actually had 3 joined statements. Here are your queries isolated from
the rest of your posting (and slightly reformatted):

1) an INNER JOIN (aka an 'equijoin') using the comma format

select cs_fld_cs_tbl_l.cs_type
  ,field_name
  ,name
  ,type
  ,type_sql
  ,rl_table
  ,cs_tbl
from cs_fld,  cs_fld_cs_tbl_l
where
  cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
  and  cs_fld_cs_tbl_l.cs_tbl_id = '23'
  AND cs_fld_cs_tbl_l.cs_type = 'basic'

2) LEFT JOIN #1

select cs_fld_cs_tbl_l.cs_type
  ,field_name
  ,name
  ,type
  ,type_sql
  ,rl_table
  ,cs_tbl
from cs_fld
left join cs_fld_cs_tbl_l
  ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
WHERE cs_fld_cs_tbl_l.cs_tbl_id = '23'
  AND cs_fld_cs_tbl_l.cs_type = 'basic'

3) LEFT JOIN #2

select cs_fld_cs_tbl_l.cs_type
  ,field_name
  ,name
  ,type
  ,type_sql
  ,rl_table
  ,cs_tbl
from cs_fld_cs_tbl_l
left join cs_fld
  ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
WHERE cs_fld_cs_tbl_l.cs_tbl_id = '23'
  AND cs_fld_cs_tbl_l.cs_type = 'basic';

With all JOINS the query engine must construct an internal table that
contains every possible combination of the rows from one table with the
rows from the other table. Fortunately we are allowed to specify (with the
ON clause) exactly which rows out of all those combinations we really want
to deal with. If you had 1000 rows in tablea and 500 rows in tableb and
said :

FROM tablea INNER JOIN tableb

and you didn't specify an ON clause, you would be forcing the engine to
compile (500 x 1000 = 500,000) rows of data into a temporary table. It's
that temporary table that the WHERE clause must operate against in order to
complete your query. If you specify an ON clause that restricted your
temporary table to just a few dozen rows, the WHERE clause would execute
much faster.

In query 1 you are only going to get a record from cs_fld if a record
exists in cs_fld_cs_tbl_l so that cs_fld_cs_tbl_l.cs_fld_id =
cs_fld.cs_rid is true.  Your temporary table is composed of just the
matching rows of the two tables which is probably only a few rows. You
WHERE restrictions take practically no time at all to apply to such a small
set.

In query 2 and 3 you are doing LEFT JOINS which means that the results will
consist of __all__ rows from the table in the direction of the JOIN and
only those rows from the table in the opposite direction of the JOIN that
satisfy your ON clause.

In Query 2 you build an internal dataset that consists of ALL of the rows
of cs_fld (the table on the LEFT of the statement). The WHERE clause must
apply its restrictions to all of those rows in order to complete your
query.

In Query 3, you reversed the tables. Now cs_fld_cs_tbl_l is the driving
table in the query. I would have to guess that it is much smaller than
cs_fld so the temporary table is much smaller so your WHERE clause takes
less time to apply. Or, there was an index that the engine could apply to
cs_fld_cs_tbl_l that it wasn't able to use in Query 2.

You can play with these kinds of queries (queries with JOINed tables) by
moving terms from the WHERE clause into the ON clause in attempts to
minimize the size of the temporary table that is the result of your JOINS.
In fact, this is one way to rewrite query 3:

select cs_fld_cs_tbl_l.cs_type
  ,field_name
  ,name
  ,type
  ,type_sql
  ,rl_table
  ,cs_tbl
from cs_fld_cs_tbl_l
left join cs_fld
  ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
  AND cs_fld_cs_tbl_l.cs_tbl_id = '23'
  AND cs_fld_cs_tbl_l.cs_type = 'basic';


I eliminated the WHERE clause entirely yet I still have a valid query. This
technique becomes more useful if you have multiple JOINS in a single query.
Moving a restriction into the ON clause does not guarantee better
performance, you must test, test, and re-test in order to determine the
best response for your particular data and index structures.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  From:    Joseph Norris [EMAIL PROTECTED]
  To:      [EMAIL PROTECTED]
  Date:    06/28/2004 12:32 PM
  Subject: sql join statement that I do not understand

Re: select and update field in one query

2004-06-28 Thread Michael Stassen
I'm assuming you want the rows with the top 1000 usage values.  Why not just 
do it in 1 UPDATE statement?

  UPDATE yourtable SET checked = 1 ORDER BY usage DESC LIMIT 1000;
This should work for any mysql 4.x.x, according to the manual 
http://dev.mysql.com/doc/mysql/en/UPDATE.html.

Michael
Rhino wrote:
- Original Message - 
From: darren [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, June 28, 2004 11:42 AM
Subject: select and update field in one query


Hi all,
I need to select the top 1000 records based on the usage field and
update
the checked field to '1'.
I am currently doing it in 2 statements; i.e. select first then update
from
a MYSQL_ROW array. How do I do it in just a single statement?
If you are selecting the 1000 rows so that you can display them and then
update them to keep your database up-to-date, you can't do both in one
statement. A SELECT statement can only display data and an UPDATE can only
update data; an UPDATE cannot display rows *and* change values on those
rows.
If you are selecting the rows simply so that you can update them but don't
display them, the update should be straightforward if you are using a
version of MySQL that supports subqueries (V4.1.x). Something like this:
update my_table
set checked = 1
where primary_key in
(select primary_key
from my_table
where
order by usage
limit 1000)
This query probably won't work as is; I'm not that fluent on MySQL's brand
of SQL yet. Other posters on this group can help you fine tune it so that it
works.
Rhino


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Performance issues

2004-06-28 Thread Jeremy Zawodny
On Tue, Jun 22, 2004 at 01:34:39PM -0400, Aram Mirzadeh wrote:
 
 We have an internal SNMP monitoring system that is monitoring about
 10,000 devices.  Each device is pinged then pulled for about an
 average of 25-30 elements.  Each of the ping results and elements
 are then stored in text file, then another system picks them up
 (NFS) and inserts them into a MyISAM (3.23.54) database.  The data
 is kept for 13 weeks.
 
 The database system is a Xeon 4 way, 12GB of ram with a striped raid
 array dedicated to the database files and its indexes and such.
 
 Every 5 minutes another process goes through the last set of inserts
 and compares them for any threshold breaches, so the entire last set
 of data is looked at.
 
 We're falling behind on the inserts because the system can't seem to
 handle the amount of inserts, the front end that generates the web
 pages based on the previous records is dogging down.
 
 I have read the regular optimizations papers and have done as much
 as I felt safe, are there any huge database optimization papers?
 Anything I should be looking at?

I'd consider bulking up the INSERTs, performing multi-row INSERTs
rather than doing them one by one.  That can speed things up quite a
bit in my experience.
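For example (table and column names are only illustrative):

INSERT INTO element_readings (device_id, element, value, polled_at) VALUES
  (1001, 'ifInOctets',  123456789, NOW()),
  (1001, 'ifOutOctets',  98765432, NOW()),
  (1002, 'sysUpTime',      360000, NOW());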

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance issues

2004-06-28 Thread SGreen

Have you thought about using Merge tables?

If you have a sliding 5 minute monitoring window that you need to query
frequently you could create a smaller MERGE table to hold to 6 minutes
worth of data composed of six tables of one minute's data each.

At the end of each minute, you create a new table, change the merge
definition, then archive the old minute (the one that just left the
monitoring window) into a larger static table. Your indexes will be small
(only 1 minutes worth of data). The tables you need to query are smaller
(just 6 minutes worth) and you still keep all of your historical data.  You
could even hold off archiving the old tables until you have some free
time if you needed to.

You could also run tiers of tables. One weekly Merge table containing 7
daily tables. The most recent daily table could be a merge table of up
to 24 hourly tables. The most recent Hourly table could have the
results of archiving off your old minutes for the current hour. I
didn't see anything about NOT nesting  merge tables but I would assume it
could cause some serious headaches if you went overboard doing it

You probably want to review: http://dev.mysql.com/doc/mysql/en/MERGE.html
and: http://dev.mysql.com/doc/mysql/en/MERGE_table_problems.html
for details. (especially read about MERGE tables using lots of file
handles!!!)

I am sure if you thought about it you could break down your storage into
something more manageable than I described.
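A rough sketch of the sliding-window idea (the table names, columns and TYPE= clauses
below are assumptions used to illustrate the shape, not something tested on 3.23.54):

-- one small MyISAM table per minute of data
CREATE TABLE readings_min_05 (
  device_id INT NOT NULL,
  element   VARCHAR(32) NOT NULL,
  value     BIGINT,
  polled_at DATETIME NOT NULL,
  KEY (device_id, polled_at)
) TYPE=MyISAM;

-- the monitoring window is a MERGE table over the last six minute-tables
CREATE TABLE readings_window (
  device_id INT NOT NULL,
  element   VARCHAR(32) NOT NULL,
  value     BIGINT,
  polled_at DATETIME NOT NULL,
  KEY (device_id, polled_at)
) TYPE=MERGE UNION=(readings_min_00, readings_min_01, readings_min_02,
                    readings_min_03, readings_min_04, readings_min_05);

-- each minute: create the next minute-table, redefine the window (drop and
-- recreate the MERGE table, or use ALTER TABLE ... UNION=(...) where the
-- server supports it), and archive the minute that just fell out of the window.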

Best Wishes,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  From:     Jeremy Zawodny [EMAIL PROTECTED]
  To:       Aram Mirzadeh [EMAIL PROTECTED]
  cc:       [EMAIL PROTECTED]
  Reply-To: mysql
  Date:     06/28/2004 02:24 PM
  Subject:  Re: Performance issues





On Tue, Jun 22, 2004 at 01:34:39PM -0400, Aram Mirzadeh wrote:

 We have an internal SNMP monitoring system that is monitoring about
 10,000 devices.  Each device is pinged then pulled for about an
 average of 25-30 elements.  Each of the ping results and elements
 are then stored in text file, then another system picks them up
 (NFS) and inserts them into a MyISAM (3.23.54) database.  The data
 is kept for 13 weeks.

 The database system is a Xeon 4 way, 12GB of ram with a striped raid
 array dedicated to the database files and its indexes and such.

 Every 5 minutes another process goes through the last set of inserts
 and compares them for any threshold breaches, so the entire last set
 of data is looked at.

 We're falling behind on the inserts because the system can't seem to
 handle the amount of inserts, the front end that generates the web
 pages based on the previous records is dogging down.

 I have read the regular optimizations papers and have done as much
 as I felt safe, are there any huge database optimization papers?
 Anything I should be looking at?

I'd consider bulking up the INSERTs, performing multi-row INSERTs
rather than doing them one by one.  That can speed things up quite a
bit in my experience.

Jeremy
--
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Job announcement?

2004-06-28 Thread Michael Halligan
Greetings.

My company has an immediate opening in SF for a Sr. Mysql/DB
architect. I was wondering if this would be the appropriate list to
post such an announcement?

Michael

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Help

2004-06-28 Thread SGreen

You need to flip the business table around your join so that you get all of
the businesses listed and check for the appropriate NULL values in the
other tables.

This will give you all of the businesses that neither have a record in 2004
nor are part of package 16:

SELECT *
FROM business AS b
LEFT JOIN records AS r
  on b.b_id=r.r_b_id
  AND r_date = '2004'
LEFT JOIN packages AS p
  on p.p_id=r.r_p_id
  AND p_id='16'
WHERE p.p_ID is null and r.r_p_id is null

If you wanted to see businesses that did not have a record in 2004 but were
part of package 16 then do this

SELECT *
FROM business AS b
LEFT JOIN records AS r
  on b.b_id=r.r_b_id
LEFT JOIN packages AS p
  on p.p_id=r.r_p_id
  AND p_id='16'
WHERE p.p_ID is not null and r.r_date <> '2004'

Or businesses not part of package 16, for any year other than 2004:

SELECT *
FROM business AS b
LEFT JOIN records AS r
  on b.b_id=r.r_b_id
LEFT JOIN packages AS p
  on p.p_id=r.r_p_id
WHERE p.p_ID is not null
  AND p_id <> '16'
  and r.r_date <> '2004'

(because a record in P, regardless of package, will not exist unless there
is a record in R, the existence of a record in P implies a record in R so
we do not have to check for AND r.r_date IS NOT NULL.)

Once you start dealing in negative queries, you can get yourself into a
lot of logical trouble very quickly. Queries generally perform better
when you are looking _for_ something and not looking for _everything but_
something. What makes this so hard to get right is that any NULL
comparisons will always be FALSE or NULL.

localhost.warehouse2> select (null <> '2004'), ('2003' <> '2004'), ('2004' <> '2004');
+------------------+--------------------+--------------------+
| (null <> '2004') | ('2003' <> '2004') | ('2004' <> '2004') |
+------------------+--------------------+--------------------+
| NULL             |                  1 |                  0 |
+------------------+--------------------+--------------------+
1 row in set (0.00 sec)

+---------------------------------------------------+
| if((null<>'2004'),'null not match', 'null match')  |
+---------------------------------------------------+
| null match                                         |
+---------------------------------------------------+
1 row in set (0.01 sec)

And once you start using LEFT and RIGHT JOINS, you are potentially
introducing a lot of NULL values into the data. This is like dealing with
anti-matter, you must work carefully to make sure you don't wipe out the
wrong rows of data from your results.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  Michael Baerwolf [EMAIL PROTECTED]s.com
  To: [EMAIL PROTECTED]
  06/25/2004 09:22 PM
  Subject: Query Help

   

   





Hello,

I'm having some problems with a join I've been trying. Here's the table
structure

CREATE TABLE `business` (
   `b_id` int(5) NOT NULL auto_increment,
   `b_name` varchar(100) default NULL,
   `b_contact` varchar(100) default NULL,
   `b_address` varchar(100) default NULL,
   `b_city` varchar(50) default NULL,
   `b_state` char(2) default NULL,
   `b_zip` varchar(25) default NULL,
   `b_phone` varchar(20) default NULL,
   `b_fax` varchar(20) default NULL,
   `b_dcn` varchar(10) default NULL,
   PRIMARY KEY  (`b_id`)
) TYPE=MyISAM;

CREATE TABLE `packages` (
   `p_id` int(5) NOT NULL auto_increment,
   `p_name` varchar(75) default NULL,
   PRIMARY KEY  (`p_id`)
) TYPE=MyISAM;

CREATE TABLE `records` (
   `r_id` int(5) NOT NULL auto_increment,
   `r_b_id` int(5) default NULL,
   `r_p_id` int(5) default NULL,
   `r_sold` tinyint(1) default NULL,
   `r_date` year(4) default NULL,
   PRIMARY KEY  (`r_id`)
) TYPE=MyISAM;

I've worked out this to retrieve records based on a package id and year.

SELECT * FROM records AS r
 LEFT JOIN business AS b on (b.b_id=r.r_b_id)
 LEFT JOIN packages AS p on (p.p_id=r.r_p_id)
 WHERE p_id='16' AND r_date = '2004';

Here's where I'm having trouble. I also need to retrieve all the
businesses 

Re: SQL challenge

2004-06-28 Thread John Mistler
Wow, that was it!  I changed the WHERE to (because I wasn't clear):

 . . . WHERE theDate BETWEEN SUBDATE(CURDATE(), INTERVAL 1 MONTH) AND
CURDATE() . . .

Now, I have just one more that I still am stumped by, if anyone (Roger or
other) has a second:

Given theTable with 2 columns: rowID (MEDIUMINT-but *NOT* AUTO_INCREMENT)
and theDate (DATETIME)

I need to find:

1. The average TIME ELAPSED between consecutive REPEATED instances of the
rowID (GROUP BY rowID, I assume) between one month ago and now.

- So, if there are 3 rows with rowID = 1 and 2 rows with rowID = 2, it would
return the average time (total seconds, or HH:MM:SS) of ((time elapsed
between row1 and row2 where rowID = 1, row 2 and row 3 where rowID = 1) AND
(time elapsed between row1 and row2 where rowID = 2)).  *Note that it would
not use the time elapsed between row 1 and row 3 where rowID = 1 for the
average calculation.

2. The average time elapsed between REPEATED instances of the rowID PER WEEK
between one month ago and now. (This one might be as easy as using the
WEEK() function as before . . .)

Thanks,

- John
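One way to attack question 1 without subqueries (a sketch only, reusing the
theTable/rowID/theDate names; it assumes no two rows share both the same rowID
and the same theDate) is to pair each row with the next later row for the same
rowID via a self-join into a temporary table, then average the gaps:

  CREATE TEMPORARY TABLE gaps
  SELECT t1.rowID, t1.theDate, MIN(t2.theDate) AS nextDate
  FROM theTable t1, theTable t2
  WHERE t1.rowID = t2.rowID
    AND t2.theDate > t1.theDate
    AND t1.theDate BETWEEN SUBDATE(CURDATE(), INTERVAL 1 MONTH) AND CURDATE()
    AND t2.theDate BETWEEN SUBDATE(CURDATE(), INTERVAL 1 MONTH) AND CURDATE()
  GROUP BY t1.rowID, t1.theDate;

  SELECT rowID,
         SEC_TO_TIME(AVG(UNIX_TIMESTAMP(nextDate) - UNIX_TIMESTAMP(theDate))) AS avg_gap
  FROM gaps
  GROUP BY rowID;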

on 6/28/04 2:37 AM, Roger Baklund at [EMAIL PROTECTED] wrote:

 * John Mistler
 I have a difficult query for you that has me stumped.  The table has two
 columns: rowID (MEDIUMINT-but *NOT* AUTO_INCREMENT) and theDate
 (DATETIME).
 I need it to find:
 
 1. The COUNT of REPEATED instances of the rowID in the last month.
 
 - so if there are 5 rows with the same rowID in the last month, it would
 return 4 (I can only seem to get it to return 10 WHERE t1.rowID =
 t2.rowID AND t1.theDate < t2.theDate)
 
 I'm not sure if I understand, but have you tried something like this:
 
 select rowID,COUNT(*) AS cnt
 from theTable
 where month(theDate) = month(curdate())
 group by rowID
 having cnt>1;
 
 If you by last month meant the last in the dataset, you could find the
 month by issuing:
 
 select @m:=month(max(theDate)) from theTable;
 
 You say you want the answer 4 when the count is 5...? You can subtract one
 from the count in the query:
 
 select rowID,COUNT(*)-1 AS cnt
 from theTable
 where month(theDate) = @m
 group by rowID
 having cnt>0;
 
 2. The AVERAGE number of REPEATED instances of the rowID per week (Monday
 00:00:00 through Monday 00:00:00 one week later) in the last month.
 
 You want to group by week, you can get the week using the week() function.
 For weeks starting on monday, the second parameter should be 1. You want the
 average of the counts... try using a temporary table, something like this:
 
 create temporary table tmp1 select
 week(theDate,1) AS week, rowID, count(*)-1 AS cnt
 from theTable
 where month(theDate) = @m
 group by week,rowID
 having cnt>0;
 select week,avg(cnt) from tmp1 group by week;
 
 --
 Roger
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance issues

2004-06-28 Thread Andrew Pattison
By default MySQL flushes keys to disk with every INSERT, hence the
performance degradation when performing several single INSERTs one after the
other. The following extract from the MySQL documentation hints at one way
of changing this on a per-table basis:

- Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes
index updates faster because they are not flushed to disk until the table is
closed. The downside is that if something kills the server while such a
table is open, you should ensure that they are okay by running the server
with the --myisam-recover option, or by running myisamchk before restarting
the server. (However, even in this case, you should not lose anything by
using DELAY_KEY_WRITE, because the key information can always be generated
from the data rows.)

There is also a way of getting MySQL to do lazy writing of indexes on a
global basis but I couldn't find a quick reference to that.
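For example (hypothetical table name), the per-table option can be set either
at creation time or afterwards:

  CREATE TABLE snmp_results (
    device INT,
    element VARCHAR(32),
    value INT,
    INDEX (device)
  ) TYPE=MyISAM DELAY_KEY_WRITE=1;

  ALTER TABLE snmp_results DELAY_KEY_WRITE=1;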

Cheers

Andrew.

- Original Message - 
From: Jeremy Zawodny [EMAIL PROTECTED]
To: Aram Mirzadeh [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, June 28, 2004 7:24 PM
Subject: Re: Performance issues


 On Tue, Jun 22, 2004 at 01:34:39PM -0400, Aram Mirzadeh wrote:
 
  We have an internal SNMP monitoring system that is monitoring about
  10,000 devices.  Each device is pinged then pulled for about an
  average of 25-30 elements.  Each of the ping results and elements
  are then stored in text file, then another system picks them up
  (NFS) and inserts them into a MyISAM (3.23.54) database.  The data
  is kept for 13 weeks.
 
  The database system is a Xeon 4 way, 12GB of ram with a striped raid
  array dedicated to the database files and its indexes and such.
 
  Every 5 minutes another process goes through the last set of inserts
  and compares them for any threshold breaches, so the entire last set
  of data is looked at.
 
  We're falling behind on the inserts because the system can't seem to
  handle the amount of inserts, the front end that generates the web
  pages based on the previous records is dogging down.
 
  I have read the regular optimizations papers and have done as much
  as I felt safe, are there any huge database optimization papers?
  Anything I should be looking at?

 I'd consider bulking up the INSERTs, performing multi-row INSERTs
 rather than doing them one by one.  That can speed things up quite a
 bit in my experience.

 Jeremy
 -- 
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

 [book] High Performance MySQL -- http://highperformancemysql.com/

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance issues

2004-06-28 Thread Jeremy Zawodny
On Mon, Jun 28, 2004 at 09:21:04PM +0100, Andrew Pattison wrote:
 By default MySQL flushes keys to disk with every INSERT, hence the
 performance degradation when performing several single INSERTs one after the
 other. The following extract from the MySQL documentation hints at one way
 of changing this on a per-table basis:
 
 - Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes
 index updates faster because they are not flushed to disk until the table is
 closed. The downside is that if something kills the server while such a
 table is open, you should ensure that they are okay by running the server
 with the --myisam-recover option, or by running myisamchk before restarting
 the server. (However, even in this case, you should not lose anything by
 using DELAY_KEY_WRITE, because the key information can always be generated
 from the data rows.)
 
 There is also a way of getting MySQL to do lazy writing of indexes on a
 global basis but I couldn't find a quick reference to that.

Delayed Key Writes:

  http://dev.mysql.com/doc/mysql/en/CREATE_TABLE.html

Search that page for delay and you'll find it.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Help

2004-06-28 Thread SGreen


Here's another variant:

All businesses with a record in any year but 2004 that have a package other
than 16. (This will not find any businesses who have records ONLY in 2004
or  that ONLY have package 16.)

SELECT *
FROM business AS b
INNER JOIN records AS r
  on b.b_id=r.r_b_id
  AND r_date <> '2004'
INNER JOIN packages AS p
  on p.p_id=r.r_p_id
  AND p_id <> '16'


Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


   

  [EMAIL PROTECTED]
  To: Michael Baerwolf [EMAIL PROTECTED]
  cc: [EMAIL PROTECTED]
  06/28/2004 04:01 PM
  Subject: Re: Query Help

   






You need to flip the business table around your join so that you get all of
the businesses listed and check for the appropriate NULL values in the
other tables.

This will give you all of the businesses that neither have a record in 2004
nor are part of package 16:

SELECT *
FROM business AS b
LEFT JOIN records AS r
  on b.b_id=r.r_b_id
  AND r_date = '2004'
LEFT JOIN packages AS p
  on p.p_id=r.r_p_id
  AND p_id='16'
WHERE p.p_ID is null and r.r_p_id is null

If you wanted to see businesses that did not have a record in 2004 but were
part of package 16 then do this

SELECT *
FROM business AS b
LEFT JOIN records AS r
  on b.b_id=r.r_b_id
LEFT JOIN packages AS p
  on p.p_id=r.r_p_id
  AND p_id='16'
WHERE p.p_ID is not null and r.r_date <> '2004'

Or businesses not part of package 16, for any year other than 2004:

SELECT *
FROM business AS b
LEFT JOIN records AS r
  on b.b_id=r.r_b_id
LEFT JOIN packages AS p
  on p.p_id=r.r_p_id
WHERE p.p_ID is not null
  AND p_id <> '16'
  and r.r_date <> '2004'

(because a record in P, regardless of package, will not exist unless there
is a record in R, the existence of a record in P implies a record in R so
we do not have to check for AND r.r_date IS NOT NULL.)

Once you start dealing in negative queries, you can get yourself into a
lot of logical trouble very quickly. Queries generally perform better
when you are looking _for_ something and not looking for _everything but_
something. What makes this so hard to get right is that any NULL
comparisons will always be FALSE or NULL.

localhost.warehouse2> select (null <> '2004'), ('2003' <> '2004'), ('2004' <> '2004');
+------------------+--------------------+--------------------+
| (null <> '2004') | ('2003' <> '2004') | ('2004' <> '2004') |
+------------------+--------------------+--------------------+
| NULL             |                  1 |                  0 |
+------------------+--------------------+--------------------+
1 row in set (0.00 sec)

+---------------------------------------------------+
| if((null<>'2004'),'null not match', 'null match')  |
+---------------------------------------------------+
| null match                                         |
+---------------------------------------------------+
1 row in set (0.01 sec)

And once you start using LEFT and RIGHT JOINS, you are potentially
introducing a lot of NULL values into the data. This is like dealing with
anti-matter, you must work carefully to make sure you don't wipe out the
wrong rows of data from your results.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




  Michael Baerwolf [EMAIL PROTECTED]s.com
  To: [EMAIL PROTECTED]
  06/25/2004 09:22 PM
  Subject: Query Help







Hello,

I'm having some problems with a join I've been trying. Here's the table
structure

CREATE TABLE `business` (
   `b_id` int(5) NOT NULL auto_increment,
   `b_name` varchar(100) default NULL,
   `b_contact` varchar(100) default NULL,
   `b_address` varchar(100) default NULL,
   `b_city` varchar(50) default NULL,
   `b_state` char(2) default NULL,
   `b_zip` varchar(25) default NULL,
   `b_phone` varchar(20) default NULL,
   `b_fax` varchar(20) default NULL,
   `b_dcn` varchar(10) default NULL,
   PRIMARY KEY  (`b_id`)
) TYPE=MyISAM;

CREATE TABLE `packages` (
   `p_id` int(5) NOT NULL auto_increment,
   `p_name` varchar(75) default NULL,
   PRIMARY KEY  (`p_id`)
) TYPE=MyISAM;

CREATE TABLE `records` (
   `r_id` int(5) NOT NULL 

Question of Multi Character Set

2004-06-28 Thread Fei Jiangnan
Hi there,
I have MySQL 4.1.1 running on Red Hat Linux 9; both of them are English
versions.

Now, I need a table into which my users can input both Chinese and English, so I
created a DB as follows:

DROP database IF EXISTS user;
CREATE DATABASE IF NOT EXISTS user CHARACTER SET utf8;
use user;
DROP TABLE IF EXISTS password;
CREATE TABLE password (
customerid CHAR(20) NOT NULL,
password CHAR(50) not null,
role CHAR(10) DEFAULT 'customer',
PRIMARY KEY (customerid)
);
I can input Chinese characters, but when I retrieve the table, the characters
look like question marks.

Is something wrong on the server side? How can I fix it? My mysql server info is
as follows; any help is appreciated.

Arthur Niu
---
mysql  Ver 14.3 Distrib 4.1.1-alpha, for pc-linux (i686)
Connection id:  2
Current database:   mysql
Current user:   [EMAIL PROTECTED]
SSL:Not in use
Current pager:  stdout
Using outfile:  ''
Using delimiter:;
Server version: 4.1.1-alpha-standard
Protocol version:   10
Connection: Localhost via UNIX socket
Client characterset:latin1_swedish_ci
Server characterset:latin1_swedish_ci
UNIX socket:/var/lib/mysql/mysql.sock
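Not a definitive fix, but one thing to check (a sketch, assuming the 4.1
SET NAMES syntax): the connection above is still latin1, so declaring it utf8
and inspecting the column character set may help:

  SET NAMES utf8;
  SHOW VARIABLES LIKE 'character_set%';
  SHOW CREATE TABLE password;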
_
MSN Premium helps eliminate e-mail viruses. Get 2 months FREE*  
http://join.msn.com/?pgmarket=en-capage=byoa/premxAPID=1994DI=1034SU=http://hotmail.com/encaHL=Market_MSNIS_Taglines

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


reading past bogus data in log event error in binary log in 4.0.18 ?

2004-06-28 Thread Bill Earl
Hello everyone,

We're having a problem with replication. The servers are all running 4.0.18 under SuSE 
Linux 9.1 Professional.

I noticed that the two slaves were both showing a status of I/O thread not running.  
In the mysqld.log file for both slaves I found the following entry:

040628 12:59:08 Error reading packet from server: bogus data in log event 
(server_errno=1236)
040628 12:59:08 Got fatal error 1236: 'bogus data in log event' from master when 
reading data from binary log
040628 12:59:08 Slave I/O thread exiting, read up to log 'daredevil1-bin.001', 
position 427847536

On the master server, when I run mysqlbinlog on the daredevil1-bin.001 log, I get the 
following error: 
ERROR: Error in Log_event::read_log_event(): 'Event too small', data_len: 0, 
event_type: 0

I've tried using the -f option with mysqlbinlog, but I can't read past this point in 
the log.  However, the master server is running along just fine, writing new info into 
this log file just fine.

Is there any way to get the slaves past this bad record, short of manually resyncing 
everything again?

Thanks for your help!

Bill Earl
Network Admin
www.bestbill.com


Re: Job announcement?

2004-06-28 Thread David Griffiths
I've seen job announcements posted on other lists (and I think this one 
as well). I think it's relevant, and shouldn't offend anyone.

David
Michael Halligan wrote:
Greetings.
My company has an immediate opening in SF for a Sr. Mysql/DB
architect. I was wondering if this would be the appropriate list to
post such an announcement?
Michael
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysql client connectivity and orphaned connections

2004-06-28 Thread Jim
Hi,
I'm just wondering if anyone can tell me exactly what happens when one 
is using the mysql client on a remote server and the connection to that 
server (in this case over ssh) gets dropped.

In this case, I was in the midst of a long OPTIMIZE TABLE.  Stepped out 
to get some coffee and the ssh connection timed out.  When I log back in 
again, and log in to mySQL using the client, I can see that the first 
mysql session is still in effect, and show processlist shows that the 
state of the orphaned user is Repair with keycache.  But, this is 
taking too long (over 45 minutes) I think.
+-----+------+-----------+--------+---------+------+----------------------+---------------------+
| Id  | User | Host      | db     | Command | Time | State                | Info                |
+-----+------+-----------+--------+---------+------+----------------------+---------------------+
|  30 | root | localhost | master | Query   | 2963 | Repair with keycache | OPTIMIZE TABLE post |
| 435 | root | localhost | NULL   | Query   |    0 | NULL                 | show processlist    |
+-----+------+-----------+--------+---------+------+----------------------+---------------------+


Is it possible that the first process stops doing anything when the ssh 
connection is dropped?  Is there any way, other than show processlist 
to tell whether something is actually going on?  Whether there is actual 
progress?

Thanks!
Jim
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysql client connectivity and orphaned connections

2004-06-28 Thread Eric Bergen
When a connection is dropped, most read-only queries will be immediately
dropped; write queries such as your OPTIMIZE TABLE will continue until
they complete, so as not to corrupt any table data or leave anything
unfinished.

-Eric

On Mon, 28 Jun 2004 17:29:22 -0400, Jim [EMAIL PROTECTED] wrote:
 
 Hi,
 
 I'm just wondering if anyone can tell me exactly what happens when one
 is using the mysql client on a remote server and the connection to that
 server (in this case over ssh) gets dropped.
 
 In this case, I was in the midst of a long OPTIMIZE TABLE.  Stepped out
 to get some coffee and the ssh connection timed out.  When I log back in
 again, and log in to mySQL using the client, I can see that the first
 mysql session is still in effect, and show processlist shows that the
 state of the orphaned user is Repair with keycache.  But, this is
 taking too long (over 45 minutes) I think.
 +-----+------+-----------+--------+---------+------+----------------------+---------------------+
 | Id  | User | Host      | db     | Command | Time | State                | Info                |
 +-----+------+-----------+--------+---------+------+----------------------+---------------------+
 |  30 | root | localhost | master | Query   | 2963 | Repair with keycache | OPTIMIZE TABLE post |
 | 435 | root | localhost | NULL   | Query   |    0 | NULL                 | show processlist    |
 +-----+------+-----------+--------+---------+------+----------------------+---------------------+
 
 Is it possible that the first process stops doing anything when the ssh
 connection is dropped?  Is there any way, other than show processlist
 to tell whether something is actually going on?  Whether there is actual
 progress?
 
 Thanks!
 
 Jim
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



time in tenths of seconds

2004-06-28 Thread Matthias Eireiner
Hi there,

I'm currently writing a tool for managing times during a competition. My
problem is that I want to save the time including the fraction of a second.
As far as I know, MySQL accepts something like D HH:MM:SS.fraction
but doesn't store the fraction. Is there anything that could fix that
problem?
I thought about an extra integer column only for the fraction, but I would
love to use the MySQL built-in sub and add functions.

Any help is very appreciated.
thanks in advance

- Matthias
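One workaround (a sketch only; table and column names are made up) is to store
the elapsed time as plain seconds with a decimal fraction and convert only for
display, so ordinary arithmetic replaces the date functions:

  CREATE TABLE lap_times (
    runner  INT NOT NULL,
    elapsed DECIMAL(10,1) NOT NULL   -- seconds, with tenths
  );

  -- plain arithmetic keeps the fraction:
  SELECT MAX(elapsed) - MIN(elapsed) FROM lap_times WHERE runner = 1;

  -- format for display as HH:MM:SS.t:
  SELECT CONCAT(SEC_TO_TIME(FLOOR(elapsed)), '.',
                ROUND((elapsed - FLOOR(elapsed)) * 10)) AS pretty
  FROM lap_times;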


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Job Announcement, Sr Database Developer - San Francisco

2004-06-28 Thread Michael Halligan
The company I'm hiring for is looking for somebody very senior to help
architect a very modular MySQL, Linux, and Java-based infrastructure.
The official posting is below, but what we need is:

- Very senior with MySQL development
- Oracle experience to help facilitate a conversion project
- MySQL (a great amount of relevant experience architecting large
MySQL clusters) and InnoDB
- Experience architecting large multi-master replication environments
  in a high-transaction environment
- Open-minded when it comes to systems design
- Shell/Perl scripting
- CVS


Pluses : 

- Java
- eXtreme Programming experience
- Subversion

It's a rather fun project. We're basically moving off a
monolithic six-year-old Oracle/Solaris system and rebuilding it to be
modular and fault-tolerant on a Linux/MySQL
platform.

We're located off the Embarcadero, so being near enough to SF to commute
is a must.

Michael T. Halligan
Infrastructure Architect
MyPoints.com

Please send resume to [EMAIL PROTECTED] 


Now for the official version.



MyPoints is a leading developer of Internet direct marketing and
loyalty infrastructure. We are a leading provider of Internet direct
marketing, incentive and loyalty products and serve more than ten
million consumers and 800 advertisers and partners through our
double-opt-in programs.

Here is a breathtaking opportunity! We're building a major new system,
from scratch, and we need a database administrator who can help us
architect and implement the system. You will have hands-on access to
the latest technology, with a heavy dose of open-source platform
software and plenty of room for innovation.

We continue to grow and are looking for a motivated Database Developer
(Sr. Engineer) who wants to be part of a leading Internet e-commerce
company. This position will be located in our San Francisco
Headquarters.

Job Duties and Key Responsibilities:
# Design, develop and maintain a relational database schema for a
major new system development effort
# Design, develop, test and maintain stored procedures for the new system
# Develop application-specific fault-tolerant distributed database mechanisms

Required Experience and Background:
# Intimately familiar with relational database scheme development
# Intimately familiar with stored procedure development, SQL query
development and optimization
# Intimately familiar with database performance and capacity planning
and measurement
# Min. BS Computer Science or equivalent
# Min. 5 years experience developing and supporting large, complex
database applications in a client-server or N-tier environment

Preferred Experience:
# Proven experience with distributed relational database design issues
# Proven experience with open source/linux development environment
# Proven experience with MySQL
# Familiarity with object-oriented design techniques


The excitement of a growth company is only one of the reasons to join
our world-class organization. You'll receive a competitive
compensation package that allows you to share in the value you'll help
create for our company.

MyPoints.com is an equal opportunity employer committed to workforce diversity.


No agency submittals please. No paid relocation is offered for this position.

EOE

www.MyPoints.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replication corruption and 64 bit mysql

2004-06-28 Thread Matthew Kent
After several long days trying to fix this I'm running out of ideas.

Master: RedHat 7.3 kernel 2.4, MySQL 4.0.20 32 bit (mysql.com rpm) -
Slave: Fedora Core 2 64 bit kernel 2.6.5, MySQL-Max-4.0.20-0 64 bit
(mysql.com rpm)

In a varying amount of time after a few hundred thousand queries
replication dies with 

snippy
040625 16:19:12  Error in Log_event::read_log_event(): 'Event too
small', data_len: 0, event_type: 0
040625 16:19:12  Error reading relay log event: slave SQL thread aborted
because of I/O error
/snipped

Using instructions from Sasha Pachev
http://groups.google.ca/groups?hl=enlr=ie=UTF-8selm=c400pk%245pd%241%
40FreeBSD.csie.NCTU.edu.tw I've looked at the binlog on the slave and
can indeed verify a large chunk of empty space and that query is indeed
logged on the master.

Fun part is that it does work when I point our 32-bit master to a
different 32-bit slave. So I know it's not a problem with our old
servers, just this fancy new one.

So far I've 

- Tried a different master (we have a pool of 5 similar servers to use
as a master).
- Tried 32-bit server instead of 64-bit Max on the slave (couldn't get
64 bit non-Max to start at all, would just dump).
- Tried swapping nic to a different brand.
- Used tcpdump to attempt to spot any network level issues.
- Tried pointing the binlogs on the master to another local disk
separate from the data. 
- Examined the changelogs for the nic drivers.
- Googled this to no end.

With no luck.

I'm open for suggestions. 

I suppose the next step is to install core 2 32-bit and try again.

Thanks,

Matthew Kent \ SA \ bravenet.com \ 1-250-954-3203 ext 108

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Storage Engines and Table Types.....

2004-06-28 Thread Eric Bergen
Show table status\G
will give you a list of tables; the Type: field for each table is the
storage engine. Your installation probably defaults to MyISAM.

-Eric 

On Mon, 28 Jun 2004 11:51:40 -0400, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 
 You can see a list of the available storage types with  SHOW ENGINES
 
 To see which engine is in use for any table you can SHOW TABLE STATUS or
 SHOW CREATE TABLE tablename
 
 All of these commands, and more, are documented at :
 http://dev.mysql.com/doc/mysql/en/SHOW.html
 
 Yours,
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine
 
   Scott Fletcher [EMAIL PROTECTED]com
   To: [EMAIL PROTECTED]
   06/28/2004 11:27 AM
   Subject: MySQL Storage Engines and Table Types.
 
 
 
 
 Hi!
 
 How do I know which of the storage engine am I using or running, or
 whatever that use the databases/tables?
 
 Thanks,
  Scott F.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: reading past bogus data in log event error in binary log in 4.0.18 ?

2004-06-28 Thread Eric Bergen
I had this same problem. This isn't foolproof or recommended, but what
I ended up doing was calling mysqlbinlog over and over again,
incrementing --position (or --offset, I can't remember), until it picked
up a good record again and kept going. Chances are you will lose
queries, but it does work.

-Eric

On Mon, 28 Jun 2004 13:51:00 -0700, Bill Earl [EMAIL PROTECTED] wrote:
 
 Hello everyone,
 
 We're having a problem with replication. The servers are all running 4.0.18 under 
 SuSE Linux 9.1 Professional.
 
 I noticed that the two slaves were both showing a status of I/O thread not running.  
 In the mysqld.log file for both slaves I found the following entry:
 
 040628 12:59:08 Error reading packet from server: bogus data in log event 
 (server_errno=1236)
 040628 12:59:08 Got fatal error 1236: 'bogus data in log event' from master when 
 reading data from binary log
 040628 12:59:08 Slave I/O thread exiting, read up to log 'daredevil1-bin.001', 
 position 427847536
 
 On the master server, when I run mysqlbinlog on the daredevil1-bin.001 log, I get 
 the following error:
 ERROR: Error in Log_event::read_log_event(): 'Event too small', data_len: 0, 
 event_type: 0
 
 I've tried using the -f option with mysqlbinlog, but I can't read past this point in 
 the log.  However, the master server is running along just fine, writing new info 
 into this log file just fine.
 
 Is there any way to get the slaves past this bad record, short of manually resyncing 
 everything again?
 
 Thanks for your help!
 
 Bill Earl
 Network Admin
 www.bestbill.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sql join statement that I do not understand

2004-06-28 Thread Garth Webb
On Mon, 2004-06-28 at 09:32, Joseph Norris wrote:
 Group,
 
 First of all thanks to anyone who can respond to this - I am really stumped.
 
 
 I have been trying to figure this one out and maybe someone out there with a
 little deep understanding of joins in sql can give me a hand.

I think you're confusing the purpose of JOINs.  The first statement you
gave DOES do a join even though the SQL doesn't explicitly have the word
'join' in it.  It's implicit and it's called an 'inner join'.

Your second query that has 'left join' in it is called an 'outer join'. 
Outer joins can return different datasets than inner joins and neither
have anything to do with speed optimizations.  In your last two examples
switching which tables are on which side of the join will have major
implications for what data you will get back.  The fact that they all
return you the same data for the query you are running is coincidence.

If you have a book on SQL I'd suggest you check out the chapter on joining.
Also, I googled this:

http://www.w3schools.com/sql/sql_join.asp

which seems to do a decent job of explaining things.
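For a tiny illustration of the difference (made-up customers/orders tables): an
inner join keeps only the rows that match on both sides, while a LEFT JOIN keeps
every row from the left table and fills the right side with NULLs when there is
no match:

  -- inner join: only customers that have at least one order
  SELECT c.name, o.id
  FROM customers c, orders o
  WHERE o.customer_id = c.id;

  -- left join: every customer, NULL order columns when there is no match
  SELECT c.name, o.id
  FROM customers c
  LEFT JOIN orders o ON o.customer_id = c.id;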

 I am working on a system that creates these sql statements on the fly and so
 the table names and fields are really perl variables.
 
 This the sql without the join:
 
 
 select
 cs_fld_cs_tbl_l.cs_type,field_name,name,type,type_sql,rl_table,cs_tbl from
 cs_fld,  cs_fld_cs_tbl_l
  where cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
  and  cs_fld_cs_tbl_l.cs_tbl_id = '23'  AND cs_fld_cs_tbl_l.cs_type =
 'basic'
 
 
 
 +---------+------------+------------------+------------+-------------+----------+--------+
 | cs_type | field_name | name             | type       | type_sql    | rl_table | cs_tbl |
 +---------+------------+------------------+------------+-------------+----------+--------+
 | basic   | status     | Status           | recordid   | int         | status   | [23]   |
 | basic   | body       | Main Body        | textarea   | text        |          | [23]   |
 | basic   | section    | Section          | recordid   | int         | demsect  | [23]   |
 | basic   | title      | Title            | text       | varchar(50) |          | [23]   |
 | basic   | assignu    | Assign to User:  | recordid   | int         | u        | [23]   |
 | basic   | subsect    | Subsection       | recordlist | list        | subsect  | [23]   |
 | basic   | assigngr   | Assign to Group: | recordid   | int         | cs_com   | [23]   |
 | basic   | sorder     | Relative Order   | numeric    | int         |          | [23]   |
 | basic   | con_type   | Content Type:    | recordid   | int         | con_type | [23]   |
 +---------+------------+------------------+------------+-------------+----------+--------+
 9 rows in set (0.01 sec)
 
 
 
 
 
 This is the join:
 
 select
 cs_fld_cs_tbl_l.cs_type,field_name,name,type,type_sql,rl_table,cs_tbl from
 cs_fld left join cs_fld_cs_tbl_l
  ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
  WHERE cs_fld_cs_tbl_l.cs_tbl_id = '23'  AND cs_fld_cs_tbl_l.cs_type =
 'basic'
 
 
 +---------+------------+------------------+------------+-------------+----------+--------+
 | cs_type | field_name | name             | type       | type_sql    | rl_table | cs_tbl |
 +---------+------------+------------------+------------+-------------+----------+--------+
 | basic   | status     | Status           | recordid   | int         | status   | [23]   |
 | basic   | body       | Main Body        | textarea   | text        |          | [23]   |
 | basic   | title      | Title            | text       | varchar(50) |          | [23]   |
 | basic   | subsect    | Subsection       | recordlist | list        | subsect  | [23]   |
 | basic   | assigngr   | Assign to Group: | recordid   | int         | cs_com   | [23]   |
 | basic   | section    | Section          | recordid   | int         | demsect  | [23]   |
 | basic   | sorder     | Relative Order   | numeric    | int         |          | [23]   |
 | basic   | assignu    | Assign to User:  | recordid   | int         | u        | [23]   |
 | basic   | con_type   | Content Type:    | recordid   | int         | con_type | [23]   |
 +---------+------------+------------------+------------+-------------+----------+--------+
 9 rows in set (4.44 sec)
 
 
 
 Notice the time difference?  I thought that the join statement was supposed
 to be more efficient.
 
 
 I did some dinking with the joins and I got this:
 
 It appears that optimization of joins depends upon what table you are
 joining to what:
 
 
 This is the new sql:
 
 mysql select
 - cs_fld_cs_tbl_l.cs_type,field_name,name,type,type_sql,rl_table,cs_tbl
 from
 - cs_fld_cs_tbl_l left join cs_fld
 -  ON cs_fld_cs_tbl_l.cs_fld_id = cs_fld.cs_rid
 -  WHERE cs_fld_cs_tbl_l.cs_tbl_id = '23'  AND cs_fld_cs_tbl_l.cs_type
 = 'basic';
 +---------+------------+------------------+------------+-------------+----------+--------+
 | cs_type | field_name | name             | type       | type_sql    | rl_table | cs_tbl |
 

Select compare to current date

2004-06-28 Thread Robb Kerr
Need help with a SQL Select statement. I've got a table that consists of a
list of years (1930-2014). I need to create a drop-down list on my page
that consists of a list of years between 1930 and the current year. How do
I construct this SELECT? See below...

SELECT * FROM table WHERE 'year-field' <= year of current date

What's the syntax for year of current date?

Thanx in advance for your help,
Robb Kerr

Robb Kerr
Digital IGUANA
Helping Digital Artists Achieve their Dreams

http://www.digitaliguana.com
http://www.cancerreallysucks.org

 



RE: Select compare to current date

2004-06-28 Thread David Rodman
]=
]=What's the syntax for year of current date?

YEAR(NOW())

(rtfm..:-)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Migrating Access Tables -- Empty Columns, Date and Time

2004-06-28 Thread Robert L Cochran
I'm migrating a Microsoft Access 2002 (Service Pack 3) table constructed
by my wife to a corresponding table in MySQL 4.0.20. Some columns in
most of the 3000+ rows are empty. Some of these are contiguous empty
columns. I don't know if Access considers them NULL or not, but when you
export an Access row containing empty columns to a comma separated
values file, the empty column will be represented by a sequence of
placeholder commas. Here is a part of the first table row exported by
Access:

WEEKEND,,8,1,,0,,,at,,,2/12/1998 0:00:00,11/27/1998
0:00:00,,MB

Based on recent experience with loading a simpler Access table, these
empty columns will be imported as is by both mysqlimport and LOAD DATA
LOCAL INFILE, but with warnings. I have 2 questions associated with
this: 

1) How do I make mysqlimport or LOAD DATA LOCAL tell me the text of each
warning? By default they print a summary count of warnings but don't
issue actual warning messages. The default log files show nothing.
mysqlimport -v does not do it.

2) When consecutive commas (meaning at least 1 empty column, sometimes
several) are seen, what does mysqlimport/LOAD DATA do to the
corresponding column entries? Will it set them to NULL? Or to the
default specified in the CREATE TABLE statement? Should I explicitly set
these to NULL where permitted by the column type?

Last of all, look at this date and time stamp exported by Access:

,2/12/1998 0:00:00,

Will mysqlimport choke on this, since MySQL likes dates to be in
ccyy-mm-dd format? Will I need to reformat the date with a sed script?
 

Thanks

Bob Cochran
Greenbelt, Maryland, USA





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Select compare to current date

2004-06-28 Thread John Mistler
This one I can help you with:

SELECT year-field FROM table WHERE YEAR(year-field) <= YEAR(CURDATE());

- John

on 6/28/04 4:49 PM, Robb Kerr at [EMAIL PROTECTED] wrote:

 Need help with a SQL Select statement. I've got a table that consists of a
 list of years (1930-2014). I need to create a drop-down list on my page
 that consists of a list of years between 1930 and the current year. How do
 I construct this SELECT? See below...

 SELECT * FROM table WHERE 'year-field' <= year of current date

 What's the syntax for year of current date?

 Thanx in advance for your help,
 Robb Kerr

 Robb Kerr
 Digital IGUANA
 Helping Digital Artists Achieve their Dreams

 http://www.digitaliguana.com
 http://www.cancerreallysucks.org
 
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: INNODB transaction log size

2004-06-28 Thread Michael Lee
Victor,
 
Thanks for your reply.
 
Actually, I cannot find such an option and want to see if I have missed something.
 
Referring to the InnoDB transaction log, I did some more searching and would like to
confirm what I found on the web (this information is not available in the MySQL
documentation):
 
- The transaction log file is the redo log and can be overwritten by a single
transaction.
 
- The undo log is stored in the InnoDB tablespace and used for rollback of uncommitted
transactions. According to the quote below (from "ACID Transactions in MySQL with
InnoDB" by Arjen Lentz):
 
Writing uncommitted data to the tablespace (with checkpoints) ensures that transaction 
size is not limited by memory or log file size, but simply by the size of the 
tablespace

Therefore, I do not need to increase the log file size even though a single Load Data
statement inserts millions of rows.
 
 
For the InnoDB experts: please tell me whether my conclusion is right or wrong.
 
Thanks & Regards,
Michael
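For reference, the current log settings can be checked from SQL (the values
themselves live in my.cnf, and changing innodb_log_file_size requires a clean
shutdown and removal of the old ib_logfile* files before restart):

  SHOW VARIABLES LIKE 'innodb_log%';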

Victor Pendleton [EMAIL PROTECTED] wrote:
I do not believe this is currently an option in the `load data infile`
syntax. One option would be to read the file programmatically and issue the
commits after `x` number of inserts.

-Original Message-
From: Michael Lee
To: [EMAIL PROTECTED]
Sent: 6/28/04 1:21 AM
Subject: INNODB transaction log size

Hi,

I would like to migrate my DB from Sybase ASE to MySQL INNODB table.
Data has been extracted and stored as a file. I want to use the command
Load Data Infile to insert the data to MySQL. However, some table
contains millions of rows. Can i control the batch size of the loading
(e.g. commit the transaction after 5 rows inserted)?

If no, should i define a very large transaction log to handle the huge
transaction? (currently, it is 5M)

Any suggestion is welcomed.

TIA
Michael 

http://mobile.yahoo.com.hk/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


[EMAIL PROTECTED]

http://mobile.yahoo.com.hk/


Full text practices

2004-06-28 Thread East Bay Technologies
I am looking for information on the proper way to code full text queries and more 
information on how they work.
I am especially interested in how to code for multi word queries...

for example : 

Searching for : 'today is the day'

Select * from table1 where match field1 against ('today is the day' IN BOOLEAN MODE) 

Select * from table1 where match field1 against ('today or is or the or day' IN 
BOOLEAN MODE) 

What is the difference between using or not using IN BOOLEAN MODE?
What is the best way to allow people to also search for "today is the day" in
quotes --- literal string...

Thanks in advance for any help.
Craig Stadler



Re: Full text practices

2004-06-28 Thread mos
At 10:24 PM 6/28/2004, you wrote:
I am looking for information on the proper way to code full text queries 
and more information on how they work.
http://dev.mysql.com/doc/mysql/en/Fulltext_Search.html
I am especially interested in how to code for multi word queries...
for example :
Searching for : 'today is the day'
Select * from table1 where match field1 against ('today is the day' IN 
BOOLEAN MODE) 
Select * from table1 where match field1 against ('today or is or the or 
day' IN BOOLEAN MODE) 

What is the difference between using or not using IN BOOLEAN MODE?
http://dev.mysql.com/doc/mysql/en/Fulltext_Boolean.html
With boolean mode you control which words are required (+) or excluded (-);
row results are not ranked.
If not using boolean mode, it will rank the results, with the rows that have
the most word occurrences at the top of the list.

What is the best way to allow people to also search for "today is the day"
in quotes --- literal string...
If it has to be an exact match then:
select * from table where memo like '%today is the day%';
Unfortunately this is the slowest way to search. It's fine if you have only
a few hundred rows.  Fulltext search will choose records where the words
appear in any order, so it will find "the day is today".

BTW, I suppose you know MySQL by default does not index words that have
fewer than 4 letters. MySQL 4.1 (4.0?) has a configuration option that
allows you to change this min word size. Otherwise in your example it will
only find "today" because "is", "the" and "day" are too small. See
ft_min_word_len. Besides, these words are very common and may appear in
the exclude word list. This can also be changed in 4.1, but indexing common
words will of course create a much larger index.
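For the quoted-phrase case, boolean mode also accepts a phrase in double quotes
(MySQL 4.0.1 and up; the min-word-length and stopword caveats above still
apply), for example:

  -- both words required, any order:
  SELECT * FROM table1
  WHERE MATCH (field1) AGAINST ('+today +day' IN BOOLEAN MODE);

  -- literal phrase, words in this order:
  SELECT * FROM table1
  WHERE MATCH (field1) AGAINST ('"today is the day"' IN BOOLEAN MODE);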

Mike 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Where is the utf8_general_ci collation in sources?

2004-06-28 Thread Dainis Polis
Hi  All!
Where to find  subj.?
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Where is the utf8_general_ci collation in sources?

2004-06-28 Thread Paul DuBois
At 7:53 +0300 6/29/04, Dainis Polis wrote:
Hi  All!
Where to find  subj.?
strings/ctype-utf8.c, I believe.
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: GUID storage

2004-06-28 Thread Ray Kiddy
Nowhere in this discussion was this question asked:
Is it in the plans to have a 128 bit numeric column type for MySQL? If 
so, in what kind of time frame? If not, why not?

Character arrays are an obvious option, but they are probably not the best way
to get optimal performance. Native support for a 128 bit number, usable
as a UUID, would be better.

- ray
On May 12, 2004, at 3:07 PM, Jeremy Zawodny wrote:
On Wed, May 12, 2004 at 02:50:55PM -0700, Larry Lowry wrote:
Well I'm trying to move to MySQL from the MS SQL Server
world.  Most data elements are easy except for the uniqueidentifier.
In the MySQL world what is the preferred/best way to store a
uniqueidentifier?  The easiest would just be a char(36).
If you have unique ids that are 36 characters, then use a char(36).
That seems like the obvious thing to do.
Jeremy
--
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
[book] High Performance MySQL -- http://highperformancemysql.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


C API -- huge result sets slowin me down

2004-06-28 Thread Matt Eaton
Hi all,

I was hoping this was the right place for a question about the C API.
I've been grabbing result sets from tables in the C API for a few years
now, but I'm starting to work with result sets that are big enough to
bog me down.  Of course, the result sets aren't insanely big, so I was
wondering why it was taking so long for me to suck them in to C,
especially when I can run the same query from the command line using the
binaries and they can cache it to a file on the hard disk pretty much
instantly.  So, basically, I was just hoping that I've been doing
something wrong, or at least that there was something I could do better,
to make my database communication as fast as the mysql command line
tools.  I've checked out their source and nothing obvious jumps out at
me.  Here's a non-functional sample of my code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mysql.h>

int main(int argc, char *argv[] ) {
        int uid;
        int sid;
        char sqlBuff[4000];
        int err = 0;
        int i;
        int numRows, tempq, tempP;   /* presumably declared in the snipped section */
        // Setup the database communications space:
        MYSQL dbase;
        MYSQL_RES *result;
        MYSQL_ROW row;

        float **genAttrib;           /* allocated in the snipped section */

        //... snip ...

        // Connect to the database:
        if (mysql_init(&dbase) == NULL) err = 1;
        else {
                if (mysql_real_connect(&dbase, "localhost", "login", "pass", "test",
                                       0, NULL, CLIENT_FOUND_ROWS) == NULL) {
                        err = 1;
                        fprintf(stderr, "Failed to connect to database: Error: %s\n",
                                mysql_error(&dbase));
                }
        }

        // If the connection couldn't be established:
        if (err) {
                printf("db connection failed!\n");
                exit(1);
        }

        //... snip ...

        // This query could have as many as a million rows returned, but
        // the query itself runs quite fast.  It seems to just be
        // sucking it into C that can take up to four seconds on our
        // dual Xeon server.
        sprintf(sqlBuff, "SELECT A.* FROM `attribs` as A, login AS L"
                         " WHERE A.guid=L.guid AND L.isActive=1 AND L.sid=%d AND"
                         " A.guid!=%d", sid, uid);
        if (mysql_real_query(&dbase, sqlBuff, strlen(sqlBuff))) {
                printf("Pool Attributes Select Failed... dumbass\n");
                fprintf(stderr, "Error: %s\n", mysql_error(&dbase));
                exit(1);
        }

        result = mysql_store_result(&dbase);
        numRows = (int) mysql_num_rows(result);
        for (i = 0; i < numRows; i++) {
                row = mysql_fetch_row(result);
                tempq = atoi(row[1]);
                tempP = atoi(row[0]);
                genAttrib[tempP][tempq] = atoi(row[2]);
        }

        return 0;
}

So, if someone sees something that I could change to speed things up, or
I should direct this question elsewhere... thanks for your help and
thanks for reading this far!

Thanks again,
Matt

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: GUID storage

2004-06-28 Thread Dan Nelson
In the last episode (Jun 28), Ray Kiddy said:
 Is it in the plans to have a 128 bit numeric column type for MySQL?
 If so, in what kind of time frame? If not, why not?

I don't know of any 32-bit compiler that provides a 128-bit integer
type, which means for most platforms mysql would have to require a
bignum library (openssl or libgmp) to support it in any meaningful
fashion, and it wouldn't be very fast.

 Character arrays are an obvious option, but they are probably not the best
 way to get optimal performance. Native support for a 128 bit number,
 usable as a UUID, would be better.

You're not doing math on those UUIDs in mysql, are you?  A CHAR(16)
BINARY field to store a raw UUID in sounds pretty optimal to me.
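A sketch of that layout (hypothetical table; the client packs the 32 hex digits
of the UUID into 16 raw bytes before inserting, and HEX() on strings needs
4.0.1+):

  CREATE TABLE things (
    id   CHAR(16) BINARY NOT NULL PRIMARY KEY,  -- raw 128-bit UUID
    name VARCHAR(64)
  ) TYPE=InnoDB;

  -- readable form for debugging:
  SELECT HEX(id), name FROM things;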

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]