Re: Full text search

2003-01-04 Thread Qunfeng Dong
Full-text search is different from pattern matching. If you
want a partial term like "stef" to return matches, you have
to use a pattern match.

Qunfeng
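
To spell that out: MySQL's FULLTEXT index matches whole words only (and by default ignores words shorter than four characters), which is why 'steffa' finds nothing while the full word 'steffan' does. A hedged sketch against Steffan's contacts table — the second query assumes an ordinary index on firstname exists if you want it to be fast:

```sql
-- Full-text search matches whole words: 'steffa' is not the word 'steffan'.
SELECT firstname FROM contacts
WHERE MATCH(firstname, lastname) AGAINST ('steffan');

-- A prefix pattern matches any leading fragment, and (with no leading %)
-- can use an ordinary index on firstname if one exists.
SELECT firstname FROM contacts
WHERE firstname LIKE 'steffa%';
```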

--- "Steffan A. Cline" <[EMAIL PROTECTED]> wrote:
> Am I missing something on mysql full text search?
> 
> 
> I was using a simple statement like
> select firstname from contacts where
> match(firstname,lastname) against
> ('steffa');
> 
> I am actually looking for "steffan" but wanted to
> see what it would return.
> Now, if I search for the full name "steffan" it
> finds it ok. Is there
> something I am missing for it to return any matches
> containing "steff" or
> "steffa" or even "stef" 
> 
> 
> Thanks
> 
> Steffan
> 
>
---
> T E L  6 0 2 . 5 7 9 . 4 2 3 0 | F A X  6 0 2 . 9 7
> 1 . 1 6 9 4
> Steffan A. Cline
> [EMAIL PROTECTED]
> Phoenix, Az
> http://www.ExecuChoice.net  
>USA
> AIM : SteffanC  ICQ : 57234309
> The Executive's Choice in Lasso driven Internet
> Applications
>
---
> 
> 
> 
> 


__
Do you Yahoo!?
Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
http://mailplus.yahoo.com

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Pattern Match on 3.23

2002-12-31 Thread Qunfeng Dong
We are running Red Hat Linux 7.3 with 4 GB RAM and dual
1.2 GHz Pentium III CPUs, using my-huge.cnf. The MySQL
database resides on a SCSI disk. We do pattern searches on
varchar or text fields of a table with about 2.6 million
records and growing (it also joins with other smaller
tables). Hope this helps.

Qunfeng
 
--- Frank Peavy <[EMAIL PROTECTED]> wrote:
> Qunfeng,
> ..millions of records.. seems like a lot...
> Would you be kind enough to provide me with your
> hardware configuration?
> Thanks.
> 
> 
> At 07:57 AM 12/31/02 -0800, Qunfeng Dong wrote:
> >If you are searching with %pattern%, your speed depends
> >on the speed of a table scan. The speed of a table scan
> >depends on your my.cnf settings (increase the record
> >buffer size?) and how big your records are. I am using
> >pattern search here with millions of records and the
> >performance is not terribly bad. Maybe you can still use
> >fulltext search for general cases, and use pattern match
> >ONLY when you are searching for a 3-character term. You
> >should be able to make such a "switch" through your
> >interface.
> >
> >Qunfeng
> >
> >
> >--- Frank Peavy <[EMAIL PROTECTED]> wrote:
> > > Qunfeng,
> > > Thanks for the feedback, I surely appreciate it.
> > >
> > > I asked the pattern match question, because I am
> > > using a hosting service
> > > that hosts MySQL 3.23. Since I have a need to
> search
> > > on terms less than 3
> > > characters long and I can not re-compile, I was
> > > looking for another
> > > solution. I thought that I might be able to use
> > > pattern matching as a
> > > substitute, but it sounds like performance might
> be
> > > an issue with large tables.
> > >
> > > If you have any other recommendations on how I
> could
> > > approach my problem, I
> > > would surely appreciate them.
> > >
> > >
> > > At 07:35 AM 12/31/02 -0800, Qunfeng Dong wrote:
> > > >It can perform pattern match on text field. The
> > > only
> > > >draw back is the speed (especially if you are
> using
> > > >%pattern% to do the search) when you tables are
> > > >getting huge, since there is no index to help.
> > > >
> > > >Qunfeng
> > > >
> > > >--- Frank Peavy <[EMAIL PROTECTED]> wrote:
> > > > > I would like to use pattern matching as a
> > > substitute
> > > > > to fulltext search on
> > > > > MySQL 3.23.
> > > > > Is this a good alternative?
> > > > >
> > > > > Are there any limits, like not being able to
> > > perform
> > > > > a pattern match on a
> > > > > 'text' field, etc., that I need to be aware
> of?
> > > > >
> > > > > Thanks.
> > > > >
> > > > >
> > > > >

Re: Pattern Match on 3.23

2002-12-31 Thread Qunfeng Dong
If you are searching with %pattern%, your speed depends on
the speed of a table scan. The speed of a table scan
depends on your my.cnf settings (increase the record buffer
size?) and on how big your records are. I am using pattern
search here with millions of records, and the performance
is not terribly bad. Maybe you can still use full-text
search for general cases, and use pattern match ONLY when
you are searching for a 3-character term. You should be
able to make such a "switch" through your interface. 

Qunfeng 
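
The "switch" suggested above could look like this in application-issued SQL; the table and column names (docs, body, id) are hypothetical:

```sql
-- Terms of 4+ characters: use the FULLTEXT index (fast, word-based).
SELECT id FROM docs WHERE MATCH(body) AGAINST ('protein');

-- Shorter terms fall below the default minimum indexed word length,
-- so fall back to a pattern match, accepting a table scan.
SELECT id FROM docs WHERE body LIKE '%ab%';
```

The application inspects the length of the search term and issues one query or the other.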


--- Frank Peavy <[EMAIL PROTECTED]> wrote:
> Qunfeng,
> Thanks for the feedback, I surely appreciate it.
> 
> I asked the pattern match question, because I am
> using a hosting service 
> that hosts MySQL 3.23. Since I have a need to search
> on terms less than 3 
> characters long and I can not re-compile, I was
> looking for another 
> solution. I thought that I might be able to use
> pattern matching as a 
> substitute, but it sounds like performance might be
> an issue with large tables.
> 
> If you have any other recommendations on how I could
> approach my problem, I 
> would surely appreciate them.
> 
> 
> At 07:35 AM 12/31/02 -0800, Qunfeng Dong wrote:
> >It can perform pattern match on text field. The
> only
> >draw back is the speed (especially if you are using
> >%pattern% to do the search) when you tables are
> >getting huge, since there is no index to help.
> >
> >Qunfeng
> >
> >--- Frank Peavy <[EMAIL PROTECTED]> wrote:
> > > I would like to use pattern matching as a
> substitute
> > > to fulltext search on
> > > MySQL 3.23.
> > > Is this a good alternative?
> > >
> > > Are there any limits, like not being able to
> perform
> > > a pattern match on a
> > > 'text' field, etc., that I need to be aware of?
> > >
> > > Thanks.
> > >
> > >
> > >
> 
> 






Re: Pattern Match on 3.23

2002-12-31 Thread Qunfeng Dong
It can perform a pattern match on a text field. The only
drawback is the speed (especially if you are using
%pattern% to do the search) when your tables are getting
huge, since there is no index to help.

Qunfeng
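
The reason the leading wildcard matters: an index keeps values in sorted order, so a pattern anchored at the start of the value can be turned into an index range scan, while '%term%' can begin anywhere in the value and forces a scan of every row. A sketch with a hypothetical table t and indexed column col:

```sql
SELECT * FROM t WHERE col LIKE 'term%';   -- can use an index on col
SELECT * FROM t WHERE col LIKE '%term%';  -- cannot; scans every row
```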

--- Frank Peavy <[EMAIL PROTECTED]> wrote:
> I would like to use pattern matching as a substitute
> to fulltext search on 
> MySQL 3.23.
> Is this a good alternative?
> 
> Are there any limits, like not being able to perform
> a pattern match on a 
> 'text' field, etc., that I need to be aware of?
> 
> Thanks.
> 
> 






replace ... select (why showing duplicates?)

2002-12-30 Thread Qunfeng Dong
Hi, 

When I do "REPLACE INTO table2 SELECT (cols) FROM table1
WHERE clause" to update some records in table2, I sometimes
get something like the following, indicating that duplicate
records were detected. 

---
Query OK, 122259 rows affected (1 min 18.28 sec)
Records: 122259  Duplicates: 122254  Warnings: 0
---

Can I safely ignore such "warnings"? The thing that bothers
me is that the records selected from table1 are NOT
duplicates of the existing records in table2, but it still
shows "Duplicates". They are NOT duplicates, because
count(*) with the WHERE clause returns 0 records before the
above "replace ... select":
mysql> select count(*) from table2 where clause;
+----------+
| count(*) |
+----------+
|        0 |
+----------+


I am using MySQL 3.23.49 on Red Hat Linux 7.3.

Thanks!

Qunfeng Dong  
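
One possible explanation (an assumption, not verifiable from the output above): the Duplicates counter also fires when the SELECTed rows collide with each other, i.e. table1 yields several rows sharing the same unique-key value, so each later row replaces one inserted by the same statement, even if table2 held none of those keys beforehand. A hypothetical check, using a placeholder key column key_col and the same WHERE clause as the REPLACE ... SELECT:

```sql
SELECT key_col, COUNT(*) AS copies
FROM table1
-- WHERE <same clause as in the REPLACE ... SELECT>
GROUP BY key_col
HAVING copies > 1;   -- any rows returned means the source rows collide
```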





Thanks! Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-19 Thread Qunfeng Dong
Well, thanks for all of your great help! I was able to
speed up the query {select count(*) from NEW_Sequence s
left join NEW_Sequence_Homolog h on s.Seq_ID = h.Seq_ID;}
from 1 min 52.61 sec down to 20.62 sec. The only thing I
changed so far was the Seq_ID column, from varchar to
bigint. Seq_ID was not all numeric across the different
types of sequences, but I managed to assign numeric codes
to the non-numeric ones. 

Qunfeng
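
The change described above can be sketched as two ALTER TABLE statements (table and column names taken from the query; the exact NOT NULL attributes are assumptions):

```sql
-- A fixed-width integer key makes index entries smaller and key
-- comparisons cheaper than varchar(50) during the join.
ALTER TABLE NEW_Sequence         MODIFY Seq_ID BIGINT NOT NULL;
ALTER TABLE NEW_Sequence_Homolog MODIFY Seq_ID BIGINT NOT NULL;
```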

> > CREATE TABLE NewSequence
> > (
> > Seq_ID  varchar(50) NOT NULL,
> > GenBank_Acc varchar(10),
> > Organismvarchar(50) NOT NULL,
> > Seq_Type  enum("EST","GSS","EST
> Contig","EST
> > Singlet","GSS Contig","GSS Singlet","GSS Plasmid
> > Contig","Protein") NOT NULL,
> > Seq_Length  int NOT NULL,
> > Seq_Title   textNOT NULL,
> > Comment text,
> > Entry_Date  dateNOT NULL,
> > PRIMARY KEY (Seq_ID),
> > UNIQUE  (GenBank_Acc),
> > INDEX (Seq_Type),
> > INDEX (Organism)
> > );
> >
> > This NewSequence table is used to track some
> general
> > info about sequence. Notice I have to use text
> > datatype to describe "Comment" and "Seq_Title"
> fields;
> > therefore I have to use varchar for other string
> > fields. In addition, the Seq_ID is not numerical.
> > BTW, I found indexing on Seq_Type. Organism which
> are
> > very repeative still helps with accessing. This
> table
> > has 2676711 rows.
> >
> >
> > CREATE TABLE NewSequence_Homolog
> > (
> > Seq_ID  varchar(50) NOT NULL,
> > Homolog_PID int NOT NULL,
> > Homolog_Descvarchar(50) NOT NULL,
> > Homolog_Species varchar(50),
> > PRIMARY KEY (Seq_ID, Homolog_PID)
> > );
> >
> > This NewSequence_Homolog table is to track which
> > protein sequences (homolog) are similar to the
> > sequence I store in the NewSequence table. This
> table
> > has 997654 rows.
> >
> > mysql> select count(*) from NewSequence s left join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
> > +----------+
> > | count(*) |
> > +----------+
> > |  3292029 |
> > +----------+
> > 1 row in set (1 min 30.50 sec)
> >
> > So a simple left join took about 1 min and half.
> > First, is this slow or I am too picky?
> >
> > This is the "Explain".
> > mysql> explain select count(*) from NewSequence s left join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
> > +-------+-------+---------------+---------+---------+----------+---------+-------------+
> > | table | type  | possible_keys | key     | key_len | ref      | rows    | Extra       |
> > +-------+-------+---------------+---------+---------+----------+---------+-------------+
> > | s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index |
> > | h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index |
> > +-------+-------+---------------+---------+---------+----------+---------+-------------+
> >
> >
> > I am running MySQL 3.23.49 on RedHat linux 7.3 on
> a
> > dedicated server with 4 GB memory. The only
> setting I
> > changed is to copy the my-huge.cnf into
> /etc/my.cnf.
> >
> > Qunfeng
> >
> > --- "Michael T. Babcock" <[EMAIL PROTECTED]>
> > wrote:
> > > Qunfeng Dong wrote:
> > >
> > > >not-so-good performance (join on tables much
> > > smaller
> > > >than yours takes minutes even using index) and
> I
> > > seem
> > > >to read all the docs I could find on the web
> about
> > > how
> > > >to optimize but they are not working for me (I
> am
> > > >
> > >
> > > Have you stored a slow query log to run them
> through
> > > 'explain' and see
> > > why they're slow?  Do you want to post some of
> them
> > > here so we can
> > > suggest what might be done to make them faster?
> 
=== message truncated ===






Re: Missing values

2002-12-19 Thread Qunfeng Dong
Represent missing values (NULL) as \N in your .txt file.
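
A minimal sketch of the load, assuming a tab-delimited file and a hypothetical table name; the key point is that a \N field is loaded as SQL NULL, whereas an empty field in a numeric column becomes 0:

```sql
LOAD DATA INFILE '/tmp/data.txt'
INTO TABLE mytable
FIELDS TERMINATED BY '\t';   -- fields containing \N arrive as NULL
```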

--- Gianluca Carnabuci <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I've been trying to import a huge .txt file into a
> MySql table. In the .txt file, missing values are
> recorded as empty cells (it might be that there's
> some hidden character instead, but I wouldn't know
> how to figure that out). When I LOAD DATA INFILE,
> MySql writes the missing values as zeros, rather
> than nulls. I can't UPDATE these zeros as nulls
> after loading the data, because some of the data are
> actually zeros in the original .txt file. 
> Do you have any suggestions?
> 
> Gianluca 
> 






Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Qunfeng Dong
Boy, you guys are die-hard MySQL fans :-) I think your
strong defense convinced us that MySQL can handle 120
million records :-) But there are ordinary users out there
like me who are not experts at tuning MySQL performance
(they did send me private emails saying they encountered
similar slow-join problems). So please help us keep the
faith.  

We are trying to develop a simple biology database to
maintain some DNA Sequence information. My problem is
coming from the following two tables:

CREATE TABLE NewSequence
(
Seq_ID      varchar(50) NOT NULL,
GenBank_Acc varchar(10),
Organism    varchar(50) NOT NULL,
Seq_Type    enum("EST","GSS","EST Contig","EST Singlet","GSS Contig","GSS Singlet","GSS Plasmid Contig","Protein") NOT NULL,
Seq_Length  int  NOT NULL,
Seq_Title   text NOT NULL,
Comment     text,
Entry_Date  date NOT NULL,
PRIMARY KEY (Seq_ID),
UNIQUE  (GenBank_Acc),
INDEX (Seq_Type),
INDEX (Organism)
);

This NewSequence table is used to track general info about
each sequence. Notice I have to use the text datatype for
the "Comment" and "Seq_Title" fields; therefore I have to
use varchar for the other string fields. In addition,
Seq_ID is not numeric.
BTW, I found that indexing on Seq_Type and Organism, which
are very repetitive, still helps with access. This table
has 2676711 rows.


CREATE TABLE NewSequence_Homolog
(
Seq_ID          varchar(50) NOT NULL,
Homolog_PID     int NOT NULL,
Homolog_Desc    varchar(50) NOT NULL,
Homolog_Species varchar(50),
PRIMARY KEY (Seq_ID, Homolog_PID)
);

This NewSequence_Homolog table tracks which protein
sequences (homologs) are similar to the sequences I store
in the NewSequence table. This table has 997654 rows. 

mysql> select count(*) from NewSequence s left join
NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+----------+
| count(*) |
+----------+
|  3292029 |
+----------+
1 row in set (1 min 30.50 sec)

So a simple left join took about a minute and a half.
First, is this slow, or am I too picky?

This is the "Explain".
mysql> explain select count(*) from NewSequence s left join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+-------+-------+---------------+---------+---------+----------+---------+-------------+
| table | type  | possible_keys | key     | key_len | ref      | rows    | Extra       |
+-------+-------+---------------+---------+---------+----------+---------+-------------+
| s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index |
| h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index |
+-------+-------+---------------+---------+---------+----------+---------+-------------+


I am running MySQL 3.23.49 on Red Hat Linux 7.3, on a
dedicated server with 4 GB of memory. The only setting I
changed was to copy my-huge.cnf to /etc/my.cnf.

Qunfeng

--- "Michael T. Babcock" <[EMAIL PROTECTED]>
wrote:
> Qunfeng Dong wrote:
> 
> >not-so-good performance (join on tables much
> smaller
> >than yours takes minutes even using index) and I
> seem
> >to read all the docs I could find on the web about
> how
> >to optimize but they are not working for me (I am
> >
> 
> Have you stored a slow query log to run them through
> 'explain' and see 
> why they're slow?  Do you want to post some of them
> here so we can 
> suggest what might be done to make them faster?
> 
> -- 
> Michael T. Babcock
> C.T.O., FibreSpeed Ltd.
> http://www.fibrespeed.net/~mbabcock
> 
> 







RE: Can MySQL handle 120 million records? - Impressive! How do you guys do that?

2002-12-18 Thread Qunfeng Dong
I am very encouraged to hear all these success stories. I
do want to stick with MySQL (we are using it to develop a
biology database). But I am indeed seeing not-so-good
performance (a join on tables much smaller than yours takes
minutes, even using an index), and I seem to have read all
the docs I could find on the web about optimization, but
they are not working for me (I am going to order Jeremy
Zawodny's "Advanced MySQL" and see if I am missing
anything). Am I one of the few who are encountering these
problems? What are your secrets to successfully running
such large databases with MySQL? How much time have you
spent fine-tuning the performance?

Qunfeng

--- Peter Vertes <[EMAIL PROTECTED]> wrote:
> Hi,
> 
>   I've been using MySQL intercompany for a while now
> with great results.  Even the diehard MSSQL people
> are amazed at how fast it can be at time.  One of
> the things I use it for is to store syslog events in
> it.  I wrote a backend that parses a syslog file as
> data is being written into it and does multiple
> things with each syslog entry depending what the
> entry contains.  When I'm done with it the syslog
> entry goes into a MySQL database where I can store
> the data and let the operations team access it
> through a PHP enabled webpage to see either what is
> going on in the system real-time of be able to do
> queries about certain hosts, processes or show some
> stats (what happened to machine x on date y and what
> processes were running on it, etc...).
>   The MySQL database is being hosted on a Dell
> Precisions 540 workstation box.  It's a P4 1.7GHz
> Xeon with 512MB of ram and a 40GB IDE disc running
> Windows 2000 Server.  That MySQL database is also
> being used for other things (nothing too intensive)
> and I muck around with it also and use it as a test
> db.  The machine also handles webserving chores and
> runs backup chores and other operations related
> tasks.
>   The database only holds about 1 months worth of
> data in it, the rest we don't really need but we
> keep around for a while outside of the db zipped up.
>  As of when I'm writing this there were about 18.7
> million entries in that table:
> 
> mysql> select count(*) from notifications;
> +----------+
> | count(*) |
> +----------+
> | 18711190 |
> +----------+
> 1 row in set (0.00 sec)
> 
> All these entries have been accumulated from
> December 1, 2002 till present day:
> 
> mysql> select distinct syslogdate from notifications
> order by syslogdate;
> +------------+
> | syslogdate |
> +------------+
> | 2002-12-01 |
> | 2002-12-02 |
> | 2002-12-03 |
> | 2002-12-04 |
> | 2002-12-05 |
> | 2002-12-06 |
> | 2002-12-07 |
> | 2002-12-08 |
> | 2002-12-09 |
> | 2002-12-10 |
> | 2002-12-11 |
> | 2002-12-12 |
> | 2002-12-13 |
> | 2002-12-14 |
> | 2002-12-15 |
> | 2002-12-16 |
> | 2002-12-17 |
> | 2002-12-18 |
> +------------+
> 18 rows in set (12.95 sec)
> 
>   Notice it took almost 13 seconds to complete that
> last query.  I tried this on a MSSQL server and
> after 2 minutes I turned the query off.  That kind
> of performance was unacceptable for a webapp that
> uses a database that does real time queries.  I'm
> quite happy with the performance of MySQL and I just
> love to see the MSSQL guys retreat when I show off
> how fast some queries can be (they always strike
> back with transactional stuff, blah, blah, blah :) 
> Anyway, I would suggest you use Linux for your
> dbserver with some kind of journaling file system. 
> I would go with ReiserFS because if memory serves
> correctly it can handle files up to 4 terabytes but
> you might want to double check since I'm quite
> forgetful with facts like that :)  I would also
> recommend the fastest SCSI drives you can find. 
> When I do queries in any 10 million+ database I
> barely get any CPU activity but I get A LOT of disk
> activity and I think this IDE drive is holding MySQL
> back.  When I have time I'm thinking about moving
> this database/webapp beast onto a SCSI Linux box and
> see how well it performs.  I think you'll be very
> pleased with the performance you'll get out of
> MySQL.
> 
> -Pete
> 
> P.S.: Thanks again MySQL team :)
> 



Re: Can MySQL handle 120 million records?

2002-12-17 Thread Qunfeng Dong
I am not sure. Does anyone know of real examples of MySQL
handling a huge database while still performing well? I
have been having performance problems with MySQL LEFT JOIN
recently. A big table (about 2.5 million records)
left-joined to a small table (about 350K records) generally
takes 2 minutes to finish. I checked the EXPLAIN output,
and the primary key index on the small table was indeed
used for the join. My system is Red Hat Linux 7.3 with 4 GB
of memory. I also tried replacing the default my.cnf with
my-huge.cnf. It didn't help at all. 

Another thing: on some Linux systems there is a size limit
on files. MySQL seems to store each of its tables as a
single file, so you need to choose a file system without
that limit. 

Qunfeng Dong
--- "B.G. Mahesh" <[EMAIL PROTECTED]>
wrote:
> 
> hi
> 
> We are evaluating few databases for developing an
> application with
> following specs,
> 
> 1.OS not very important. Leaning towards Linux
> 
> 2.Currently the database has about 5 million
> records but it will grow
> to 120 million records.
> 
> 3.The tables will have billing information for a
> telecom company.
> Nothing complex.
> 
> 4.Back office staff will use the data in the
> database to create
> invoices to be sent to customers. This data is not
> connected to the
> live telecom system [e.g. switches etc]. We get the
> data every day
> from the telecom company.
> 
> 5.Staff may perform queries on the database to get
> reports like
> "busiest hour of the day" etc etc. I don't see too
> many concurrent
> users using the system, however the system needs to
> be stable.
> 
> 6.Need to create excel, pdf files from the data in
> the database. This
> I think has nothing to do with the database, however
> this is a requirement.
> 
> 7.Needless to say, good security is a must which
> will also be built
> into the front end application.
> 
> We are considering the following databases,
> 
> 1.MYSQL
> 2.Postgres
> 3.Oracle
> 4.MSQL
> 
> If MYSQL or Postgres can do the job I prefer not to
> spend the money on
> Oracle/MSQL. However, if Oracle/MSQL are required
> for getting good
> reports and scalability, so be it. We will use
> Oracle/MSQL.
> 
> Any pointers/advice is appreciated
> 
> 
> -- 
> -- 
> B.G. Mahesh
> mailto:[EMAIL PROTECTED]
> http://www.indiainfo.com/
> India's first ISO certified portal
> 






Re: Can not write to a file in /home/.....

2002-12-17 Thread Qunfeng Dong
Run chown -R mysql:mysql /home/medic/ so that the directory
is writable by the server process (the server, not the
client, writes the file), and make sure there is no
outfile.txt already in that directory.
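
For context (stated as general behavior, not specific to Sam's setup): the file is created by the mysqld server process, so the target directory must be writable by the Unix user the server runs as, the MySQL account needs the FILE privilege, and the server refuses to overwrite an existing file:

```sql
-- Fails with Errcode 13 if the server's Unix user cannot write to
-- /home/medic/, and with a different error if outfile.txt already exists.
SELECT * INTO OUTFILE '/home/medic/outfile.txt' FROM fool;
```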

--- sam <[EMAIL PROTECTED]> wrote:
> I executed the following statement:
> 
> SELECT * INTO OUTFILE '/home/medic/outfile.txt' FROM
> fool;
> 
> I get the error meassge
> 
> " Can't create/write to file
> '/home/medic/outfile.txt' (Errcode:13) "
> 
> I followed the solutions  from the manual "A.3.3
> Problems with File
> Permissions"
> and still have a problem.
> 
> I created a my.cnf and when I put in the line:
> set-varibale = UMASK=0777 in the {mysqld} section -
> the deamon will not
> start.
> 
> I created a script as follows:
> 
> UMASK=0777
> export UMASK
> UMASK_DIR=0777
> export UMASK_DIR
> etc/bin/safe_mysqld &
> 
> Can anyone help
> 
> sam,
> 
> 






Re: mailing list problem?

2002-12-16 Thread Qunfeng Dong
I also got that message, and I do seem to have received all
the emails from the list. 

Qunfeng 

--- Bill Rausch <[EMAIL PROTECTED]> wrote:
> I just got a message from the ezmlm program telling
> me that the
> mysql digests have been bouncing and it is going to
> remove me from the list.
> 
> In fact, the digests have been coming through just
> fine.  Any ideas
> on what it could be referring to?  Did anyone else
> get such a message?
> 
> Bill
> 
> -- 
> Bill Rausch, Software Development, UNIX, Mac,
> Windows
> Numerical Applications, Richland, WA  509-943-0861
> x302



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: How can I speed up the Left Join on big tables?

2002-12-16 Thread Qunfeng Dong
Dear Stefan, 

Thanks for your help. I didn't know MySQL doesn't
automatically create index on primary key (I probably
should create UNIQUE index on them now). 

About not mixing char and varchar in one table, I can't
find that info in the online documentation, and I can
successfully create a test table:
create table testTable(
 Seq_ID char(20),
 Title varchar(100)
);
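
A note on the test above: the CREATE does succeed, but if I
remember the 3.23 behavior correctly, MySQL silently converts
CHAR columns longer than three characters to VARCHAR whenever
the table also contains variable-length columns, which is what
the quoted advice refers to. DESCRIBE shows it (a sketch):

```sql
-- After the CREATE above, inspect what was actually stored.
DESCRIBE testTable;
-- Seq_ID is expected to show as varchar(20), not char(20),
-- because Title makes the row format variable-length
-- (MySQL's "silent column change").
```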

Qunfeng

 
--- "Stefan Hinz, iConnect (Berlin)"
<[EMAIL PROTECTED]> wrote:
> Qunfeng,
> 
> > A simple left join on two big table took 5 mins to
> > finish.
> 
> These lines tell about the cause of the problem:
> 
> > | table | type   | possible_keys | key |
> key_len |
> > | s | index  | NULL  | PRIMARY | 
> 50 |
> 
> MySQL has no key (index) which it can use to speed
> up the search on the
> first table, newSequence (alias s). So, it has to
> scan all of the rows:
> 
> > ref  | rows| Extra   |
> > NULL | 2684094 | Using index |
> 
> MySQL will still use the primary key, _trying_ to be
> faster than without.
> 
> Does the Seq_ID have to be VARCHAR? This column type
> isn't very easy to
> index, especially without a length specification.
> 
> As you cannot have CHAR (> 3) and VARCHAR in one
> table, I would suggest you
> split up table newSequence into two tables (one
> fixed-length (i.e. without
> VARCHAR/TEXT columns), the other variable-length).
> This will speed up
> count() queries (and others) amazingly.
> 
> If you can use something like INT instead of CHAR,
> it's even faster.
> 
> If, for any reason, you have to stick to VARCHAR,
> you should index the
> column separately. Leave the primary key as is, but
> add another key (index)
> like that:
> 
>  CREATE INDEX make_it_fast ON newSequence
> (Seq_ID(10));
> 
> This will only make sense if the first 10 characters
> can tell the difference
> between different records. If not, you can
> experiment setting the index size
> to 20, 30, ...
> 
> I hope this will give you some ideas on how you can
> improve performance.
> 
> Regards,
> --
>   Stefan Hinz <[EMAIL PROTECTED]>
>   CEO / Geschäftsleitung iConnect GmbH
> <http://iConnect.de>
>   Heesestr. 6, 12169 Berlin (Germany)
>   Telefon: +49 30 7970948-0  Fax: +49 30 7970948-3
> 
> 
> - Original Message -
> From: "Qunfeng Dong" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, December 16, 2002 6:42 PM
> Subject: How can I speed up the Left Join on big
> tables?
> 
> 
> > Hi,
> >
> > A simple left join on two big table took 5 mins to
> > finish.
> >
> > [snip: EXPLAIN output and table definitions, quoted in
> > full in the original message]

How can I speed up the Left Join on big tables?

2002-12-16 Thread Qunfeng Dong
Hi, 

A simple left join on two big tables took 5 minutes to
finish.

Here is the "explain"
mysql> explain select count(*) from newSequence s left
join newSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+-------+--------+---------------+---------+---------+----------+---------+-------------+
| table | type   | possible_keys | key     | key_len | ref      | rows    | Extra       |
+-------+--------+---------------+---------+---------+----------+---------+-------------+
| s     | index  | NULL          | PRIMARY | 50      | NULL     | 2684094 | Using index |
| h     | eq_ref | PRIMARY       | PRIMARY | 50      | s.Seq_ID | 1       | Using index |
+-------+--------+---------------+---------+---------+----------+---------+-------------+
2 rows in set (0.00 sec)

Here are the two tables' definitions:
mysql> describe newSequence;
+-------------+----------------------------------------------------+------+-----+------------+-------+
| Field       | Type                                               | Null | Key | Default    | Extra |
+-------------+----------------------------------------------------+------+-----+------------+-------+
| Seq_ID      | varchar(50)                                        |      | PRI |            |       |
| GenBank_Acc | varchar(10)                                        | YES  | MUL | NULL       |       |
| Organism    | varchar(50)                                        |      | MUL |            |       |
| Seq_Type    | enum('EST','GSS','EST Contig','EST Singlet',       |      | MUL | EST        |       |
|             | 'GSS Contig','GSS Singlet','GSS Plasmid Contig',   |      |     |            |       |
|             | 'Protein')                                         |      |     |            |       |
| Seq_Length  | int(11)                                            |      |     | 0          |       |
| Seq_Title   | text                                               |      | MUL |            |       |
| Comment     | text                                               | YES  | MUL | NULL       |       |
| Entry_Date  | date                                               |      |     | 0000-00-00 |       |
+-------------+----------------------------------------------------+------+-----+------------+-------+
8 rows in set (0.00 sec)

There are 2,684,094 records in this table.

mysql> describe newSequence_Homolog;
+------------------+-------------+------+-----+---------+-------+
| Field            | Type        | Null | Key | Default | Extra |
+------------------+-------------+------+-----+---------+-------+
| Seq_ID           | varchar(50) |      | PRI |         |       |
| Homolog1_PID     | varchar(20) | YES  | MUL | NULL    |       |
| Homolog1_Desc    | varchar(50) | YES  | MUL | NULL    |       |
| Homolog1_Species | varchar(50) | YES  |     | NULL    |       |
| Homolog2_PID     | varchar(20) | YES  | MUL | NULL    |       |
| Homolog2_Desc    | varchar(50) | YES  | MUL | NULL    |       |
| Homolog2_Species | varchar(50) | YES  |     | NULL    |       |
| Homolog3_PID     | varchar(20) | YES  | MUL | NULL    |       |
| Homolog3_Desc    | varchar(50) | YES  | MUL | NULL    |       |
| Homolog3_Species | varchar(50) | YES  |     | NULL    |       |
+------------------+-------------+------+-----+---------+-------+
10 rows in set (0.00 sec)
There are 357,944 records in this table.

I've already copied
/usr/share/doc/mysql-server-3.23.49/my-huge.cnf as
/etc/my.cnf

Is there anything else I can do to improve the speed
of the join? I really hate to merge the two tables
together. I am running MySQL 3.23.49 on Red Hat
Linux 7.3. My MySQL server has 4 GB of memory.

Eventually, I need to do SELECT * instead of the above
SELECT count(*):
mysql> explain select * from newSequence s left join
newSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+-------+--------+---------------+---------+---------+----------+---------+-------+
| table | type   | possible_keys | key     | key_len | ref      | rows    | Extra |
+-------+--------+---------------+---------+---------+----------+---------+-------+
| s     | ALL    | NULL          | NULL    | NULL    | NULL     | 2684094 |       |
| h     | eq_ref | PRIMARY       | PRIMARY | 50      | s.Seq_ID | 1       |       |
+-------+--------+---------------+---------+---------+----------+---------+-------+
2 rows in set (0.00 sec)
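
The difference from the count(*) plan earlier in this message
is the covering index: count(*) was answered from the index
alone ("Using index"), while SELECT * has to fetch full rows
for all 2.6 million records. One commonly suggested mitigation
(a sketch; the surrogate-key column is hypothetical and would
need to be populated first) is to join on a short fixed-width
key instead of a varchar(50):

```sql
-- Hypothetical: add an INT surrogate key to both tables so the
-- join compares 4-byte integers instead of 50-byte strings.
ALTER TABLE newSequence ADD COLUMN Seq_NumID INT NOT NULL;
ALTER TABLE newSequence_Homolog ADD COLUMN Seq_NumID INT NOT NULL;
-- (populate both columns from Seq_ID, then index them:)
CREATE INDEX idx_seq_numid ON newSequence (Seq_NumID);
CREATE INDEX idx_hom_numid ON newSequence_Homolog (Seq_NumID);
```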


Thanks!

Qunfeng Dong 


Re: Tuning MySQL Server Parameter

2002-12-06 Thread Qunfeng Dong
Thanks! I copied
/usr/share/doc/mysql-server-3.23.49/my-huge.cnf into
/etc/my.cnf and restarted mysqld from
/etc/rc.d/init.d/mysqld

but it's not improving anything.

My join query is very simple:

select count(B.columnb) from B left join A on
B.columnb = A.columna;

Both columna and columnb are varchar(11) and indexed. 
Table B has about 34,000 records and Table A has about
2,500,000 records. The above query took about 3 hours
to finish. Something is just not right. 
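
Before tuning server buffers, it may be worth confirming the
indexes are actually used; a sketch with the column names from
this message:

```sql
-- If 'type' comes back as ALL with possible_keys NULL for a
-- table, that table is being scanned rather than read via index.
EXPLAIN SELECT count(B.columnb)
FROM B LEFT JOIN A ON B.columnb = A.columna;
-- Also worth checking: columna and columnb should have identical
-- declared types; a mismatch can prevent index use in 3.23.
```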

Qunfeng Dong

--- David Bordas <[EMAIL PROTECTED]> wrote:
> > I wish to tune our MySQL Server Parameter to
> increase
> > the speed of Join. I was trying to do a simple
> join
> > with two tables. One is big (~2,500,000 records);
> the
> > other one is small. The current join seems to take
> > forever to finish even on the indexed attribute.
> >
> > I am trying to learn from
> > http://www.mysql.com/doc/en/Server_parameters.html
> but
> > not confident enough to play with our server yet.
> Any
> > advice will be much appreciated. I am running
> > mysql3.23.49 on linux7.3 with 4 GB memory. So I
> want
> > to try the following from that doc:
> >
> > shell> safe_mysqld -O key_buffer=64M -O
> > table_cache=256 -O sort_buffer=4M -O
> > read_buffer_size=1M &
> >
> > My questions: if I run the above command (as
> root),
> > should I run it every time when the server starts?
> If
> > so, how can I set the above option automatically
> when
> > server starts. Thanks!
> 
> Modify you my.cnf to add or change this parameter
> and mysql will "normally"
> read this cnf file each time you launch it via
> mysql.server script ...
> 
> David
> 



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: mysql NULl value in perl problem

2002-12-05 Thread Qunfeng Dong
In Perl DBI, a SQL NULL comes back as undef, so check it
with defined() in your perl script (e.g. defined $row->[0]).
'\N' is only how NULL is written in the text files used by
LOAD DATA INFILE and SELECT ... INTO OUTFILE, not what a
query returns.

Qunfeng 

--- David Wu <[EMAIL PROTECTED]> wrote:
> Hi guys,
> 
> Running into a frustrating problem. When I have a
> empty table in mysql 
> database, i tried run a select statement in my perl
> script and 
> supposing get a NULl return value. Is the NULL
> returned from mysql is 
> described as string in perl or as undef in perl?..
> As there is no the 
> word NULL in perl
> Thank you very much guys
> 



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Tuning MySQL Server Parameter

2002-12-05 Thread Qunfeng Dong
Hi, 

I wish to tune our MySQL server parameters to increase
the speed of joins. I was trying to do a simple join
with two tables. One is big (~2,500,000 records); the
other one is small. The current join seems to take
forever to finish, even on the indexed attribute.

I am trying to learn from
http://www.mysql.com/doc/en/Server_parameters.html but
am not confident enough to play with our server yet. Any
advice will be much appreciated. I am running MySQL
3.23.49 on Red Hat Linux 7.3 with 4 GB of memory. I want
to try the following from that doc:

shell> safe_mysqld -O key_buffer=64M -O
table_cache=256 -O sort_buffer=4M -O
read_buffer_size=1M &

My questions: if I run the above command (as root),
do I have to run it every time the server starts? If
so, how can I set those options so they are applied
automatically at server startup? Thanks!
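
The safe_mysqld -O options above can be made permanent in
/etc/my.cnf, which mysqld reads at every start (a sketch;
3.23-era set-variable syntax, same values as the command line):

```ini
# /etc/my.cnf fragment equivalent to the -O flags above
[mysqld]
set-variable = key_buffer=64M
set-variable = table_cache=256
set-variable = sort_buffer=4M
set-variable = read_buffer_size=1M
```

With this in place there is nothing to re-run by hand; the init
script picks the values up on every restart.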

Qunfeng Dong




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




multiple mysql query in PHP mysql_query?

2002-11-08 Thread Qunfeng Dong
Hi, can anybody give me a simple example of using
PHP's mysql_query() to perform multiple MySQL queries? I
am using MySQL 3.23, trying to use CREATE TEMPORARY
TABLE and INSERT ... SELECT to overcome the lack of a
UNION operation in that version of MySQL.
Thanks! Qunfeng Dong
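
mysql_query() in PHP executes a single statement per call, so
the workaround is simply consecutive calls, one per statement.
The SQL side, sketched with hypothetical table and column names:

```sql
-- Emulating (SELECT ...) UNION (SELECT ...) on MySQL 3.23,
-- one statement per mysql_query() call:
CREATE TEMPORARY TABLE tmp_union SELECT name FROM t1;
INSERT INTO tmp_union SELECT name FROM t2;
SELECT DISTINCT name FROM tmp_union;  -- DISTINCT mimics UNION dedup
```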


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php