RE: Update databases, the unanswered question

2002-12-18 Thread Dan
http://worldcommunity.com/opensource/utilities/mysql_backup.html

I've heard great things about this script and I'm just about to start using
it myself.

-Dan

-Original Message-
From: Scott Haneda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 11:31 PM
To: MySql
Subject: Update databases, the unanswered question


Hello all, I have dug through the archives to no avail. The archives point
to many ways to back up your databases, and I have found a way that works
for me, but it is nowhere near as automated as I want it to be, so I
thought I would ask a few people here what the best approach would be.

To start with, my first question:
mysqldump has an option to dump all databases to a file. This logically
seems the simplest way and is close to what I want to do; however, it dumps
all databases to one file.  My question is, in the event I had to restore a
database from this type of dump, would I also be restoring ALL databases?
If so, this is not the option I am looking for: I anticipate a client
calling to tell me they accidentally deleted a whole bunch of records and
want a restore from a few days ago.  If I have to restore ALL databases, I
would be messing up a whole lot of clients' databases.  When you restore,
does mysql ask you which database you want to restore, even if the dump
file has tons of other information in it?  If not, what then?

The approach I have been using now is this...
I have a small shell script that cron calls, in it is
mysqldump -u backup my_database | gzip > /path_to_storage_$newtime.sql.gz

I have many entries like this, where I replace "my_database" with the
correct database name.

You will also notice I do not have a password in there; is that safe?  I
have created a user with select-only privileges just for backups.  What is
the best way to pass a username and password for these types of purposes?
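One common alternative, since a password typed on the command line can be visible to other local users via `ps`: put the credentials in a mode-600 option file that the client programs read automatically. A sketch (the user name and password here are placeholders):

```shell
#!/bin/sh
# Write a [client] option file so mysqldump needs no -u/-p flags.
# The user name and password below are placeholders, not real values.
cnf="$HOME/.my.cnf"
cat > "$cnf" <<'EOF'
[client]
user = backup
password = changeme
EOF
chmod 600 "$cnf"   # readable by the owner only
```

With this in place, the cron line needs no -u or -p flags at all, since mysql and mysqldump both read the [client] section.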

And finally, the main question,
mysql> show databases;
+---+
| Database  |
+---+
| dsadsadsad|
| ewrererrewrerr|
| 324effdfdfd   |
| liffdfdsfewfdsake |
| mysql |
+---+

As you can see, it is rather simple to get a really up-to-date list of all
databases, but it comes bordered by bars, +'s and -'s.  Is there any way to
issue SHOW DATABASES with a flag that tells it to return the database names
as a plain list, perhaps comma-separated or, better yet, \r-separated?
That way I could write a shell script that loops through all the databases,
and when I add new databases I need never worry about touching my
backup script.
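For the last question: the mysql client's `-B` (batch mode) drops the `+---+` border art and `-N` suppresses the `Database` header row, so `SHOW DATABASES` comes back as plain newline-separated names a shell loop can walk. A sketch only; the `backup` user and target directory are placeholders:

```shell
#!/bin/sh
# Back up every database to its own gzipped dump file.
backup_dir=${backup_dir:-/var/backups/mysql}

# -B: batch output (no borders); -N: skip the column-name header.
list_dbs() {
    mysql -u backup -N -B -e 'SHOW DATABASES'
}

backup_all() {
    stamp=$(date +%Y%m%d)
    for db in $(list_dbs); do
        mysqldump -u backup "$db" | gzip \
            > "$backup_dir/${db}_${stamp}.sql.gz"
    done
}
```

Call `backup_all` from cron; new databases are picked up automatically, and each one restores independently of the others.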

Any suggestions of pre-made scripts that would work under OS X would be much
appreciated.

-
Scott Haneda            Tel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]       Novato, CA U.S.A.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php







Update databases, the unanswered question

2002-12-18 Thread Scott Haneda
Hello all, I have dug through the archives to no avail. The archives point
to many ways to back up your databases, and I have found a way that works
for me, but it is nowhere near as automated as I want it to be, so I
thought I would ask a few people here what the best approach would be.

To start with, my first question:
mysqldump has an option to dump all databases to a file. This logically
seems the simplest way and is close to what I want to do; however, it dumps
all databases to one file.  My question is, in the event I had to restore a
database from this type of dump, would I also be restoring ALL databases?
If so, this is not the option I am looking for: I anticipate a client
calling to tell me they accidentally deleted a whole bunch of records and
want a restore from a few days ago.  If I have to restore ALL databases, I
would be messing up a whole lot of clients' databases.  When you restore,
does mysql ask you which database you want to restore, even if the dump
file has tons of other information in it?  If not, what then?

The approach I have been using now is this...
I have a small shell script that cron calls, in it is
mysqldump -u backup my_database | gzip > /path_to_storage_$newtime.sql.gz

I have many entries like this, where I replace "my_database" with the
correct database name.

You will also notice I do not have a password in there; is that safe?  I
have created a user with select-only privileges just for backups.  What is
the best way to pass a username and password for these types of purposes?

And finally, the main question,
mysql> show databases;
+---+
| Database  |
+---+
| dsadsadsad|
| ewrererrewrerr|
| 324effdfdfd   |
| liffdfdsfewfdsake |
| mysql |
+---+

As you can see, it is rather simple to get a really up-to-date list of all
databases, but it comes bordered by bars, +'s and -'s.  Is there any way to
issue SHOW DATABASES with a flag that tells it to return the database names
as a plain list, perhaps comma-separated or, better yet, \r-separated?
That way I could write a shell script that loops through all the databases,
and when I add new databases I need never worry about touching my
backup script.

Any suggestions of pre-made scripts that would work under OS X would be much
appreciated.

-
Scott Haneda            Tel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]       Novato, CA U.S.A.






GUI for the server part of MySQL

2002-12-18 Thread Stan Sebastian

Is there any GUI for the server part of MySQL for Windows? I mean, not a
client for MySQL like SQLyog or MySqlFront.

Something better than MySqlAdmin?

--
  Sebastian Stan
 [EMAIL PROTECTED]
###






Re: win2k update

2002-12-18 Thread Iikka Meriläinen
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

> Is there a patch update from .3 to .4? So that I don't have to reinstall it
> on my win2k box?   If I have to reinstall it do I need to first uninstall
> .3 first?
>
You have to reinstall, but that isn't a big problem: after the new
installation, copy the data and configuration files into the new server and
get going. You don't have to uninstall .3 (hmm... which major version do
you mean?) before installing .4. Don't install the new version into the
same directory; if something goes wrong, you can always revert to the
previous, working installation.

Iikka



 Iikka Meriläinen
 Vaala, Finland
 E-mail: [EMAIL PROTECTED]
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)

iD8DBQE+AWq9CVGYZ+r4ZncRAi+cAJ4xixt3NX3C7tVsM9kBfsmZONOiGwCfWurj
UKhCXx7Zs/mfAzANQ8IcfD0=
=MW3d
-END PGP SIGNATURE-






Login problems

2002-12-18 Thread Mere Komane


Hi, I'm Mere!

I've just installed MySQL for use with Bugzilla. Logins worked fine all
along, and then suddenly my "super user" root can't access the databases,
especially mysql. In short, root can't access mysql even when I'm using the
password I usually log in with. The surprise is that the bugs user, which
logs in with a password different from root's, also can't access mysql.

How do I get the password restored and regain full access to mysql?

Please help!!





RE: SELECT and UPDATE at the same time?

2002-12-18 Thread Paul DuBois
At 23:43 -0500 12/18/02, Dan Cumpian wrote:

>Jeff,
>
>Not if your outer loop is in a separate query. In that case, your query
>is essentially a cursor and is static once OPENed. As you move from
>record to record, what you are seeing is the records at the time the
>query was opened. Now, if you were to update records that you haven't
>processed yet, then they may show up as modified in your outer query

Your basis for saying that?

>because (depending on how you are connecting to the database) the server
>only returns several hundred rows at a time and caches them as you go

Your basis for saying that?

>through the record set. But that doesn't sound like it will impact you.
>
>HTH,
>Dan Cumpian


-Original Message-
From: Jeff Snoxell [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 12:56 PM
To: [EMAIL PROTECTED]
Subject: SELECT and UPDATE at the same time?

Hello again,

I'm selecting a group of records from my database. I then loop through the
selected records and do some work based on what I find. But what I also
want to do as I interrogate each record is update some of its fields with
new values... but won't that screw up the outer loop? I mean, if I try to
execute a query whilst looping around the result set of a former query,
will I not screw up the result set that I'm looping through?

Also, is it possible to update specific fields of certain records within a
SELECT query? i.e. can I do something like this:

SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"

Ta,

Jeff







RE: Suitability of mySQL for desktop use?

2002-12-18 Thread Dan Cumpian
Frank,

I am using MySQL for a PC desktop application. It runs fine, but you have
to be careful how you structure your database, because (depending on how
much data you will be storing and processing) you may be putting a lot of
stress on the PC that the application and MySQL are running on.

I am using Delphi 6 and MySQL 4.0.5 running as a service on WinXP and
Win2000. The Win2000 box is a generic (cheap) Dell with IDE disks. It bogs
down occasionally, but recovers fairly quickly. The other system I am
running it on is a 2.2 GHz machine with dual 15,000 RPM SCSI disks and
1 GB RAM. On that machine, I can't even tell that MySQL is running.

HTH,
Dan Cumpian

-Original Message-
From: Frank Marousek [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, December 18, 2002 3:57 PM
To: [EMAIL PROTECTED]
Subject: Suitability of mySQL for desktop use?

How suitable is mySQL for use in a desktop application, i.e. one that would
be installed completely by the user and run on a standalone PC? My
application is written in Delphi and currently uses dBase files, but as
Delphi support for accessing dBase files is being phased out, I am looking
at possible alternatives. My application already makes heavy use of SQL,
and moving to an SQL database would otherwise seem like a good fit. mySQL
is also appealing as I may at some point port this application to the web,
in which case mySQL seems like it would be a good platform.

Comments?






RE: Severe performace problem linking tables with mysql

2002-12-18 Thread Dan Cumpian
Joseph,

I've noticed this as well. MySQL seems to do most types of queries
extremely well, but CERTAIN joins are very slow. I ended up having to
denormalize my data structures somewhat in order to maintain good
performance with MySQL. What I don't know, since I do not have access to
a Linux box, is whether the problem is OS related. In any case, I found
that I couldn't create data structures in the same way as on MSSQL.

HTH,
Dan Cumpian


-Original Message-
From: Joseph Dietz [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, December 18, 2002 1:40 PM
To: [EMAIL PROTECTED]
Subject: Severe performace problem linking tables with mysql


PLATFORM: 3.23.52-max-nt with Windows 2000 Professional (default table
type)

I have discovered a performance issue when joining several tables together.
The performance is extremely poor when performing select queries using the
WHERE clause and joining the tables with pk_media_id = fk_media_id, etc.
I guess this is what people might think about when considering using
MySQL. With SQL Server 2000 and Access, there was no loss in performance.
4 of my tables are cross-reference tables, as such:

pk=primary key, fk= foreign key

Medias          MediaAuthors                    Authors

pk_media_id   fk_media_id, fk_author_id   pk_author_id

(Many authors for each media)








Re: replication issues

2002-12-18 Thread Michael T. Babcock
Dan wrote:


not attempting to write to the index field. I also attempted to have it skip
the entry with the SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; command.  This
command failed.  It did not fail when I eliminated the GLOBAL word in the
command like so: SET SQL_SLAVE_SKIP_COUNTER=1;  However, still no luck with
 


This won't help your situation much, but the manual fails to mention on
that page that GLOBAL is a new feature of the 4.x series.  Could the MySQL
team take a hint from the PHP documentation team and put such version
notices in the margins where appropriate?
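To make the version difference concrete, here is a sketch of the two forms as run from a shell (credentials are assumed to come from an option file; `SLAVE START` restarts the slave thread afterwards):

```shell
#!/bin/sh
# Skip one event in the slave's relay log, then resume replication.

# On 3.23 slaves the GLOBAL keyword is not recognized:
skip_one_event_323() {
    mysql -e 'SET SQL_SLAVE_SKIP_COUNTER=1; SLAVE START;'
}

# On 4.x slaves the variable is set with GLOBAL:
skip_one_event_4x() {
    mysql -e 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; SLAVE START;'
}
```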

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






RE: SELECT and UPDATE at the same time?

2002-12-18 Thread Dan Cumpian
Jeff,

Not if your outer loop is in a separate query. In that case, your query
is essentially a cursor and is static once OPENed. As you move from
record to record, what you are seeing is the records at the time the
query was opened. Now, if you were to update records that you haven't
processed yet, then they may show up as modified in your outer query
because (depending on how you are connecting to the database) the server
only returns several hundred rows at a time and caches them as you go
through the record set. But that doesn't sound like it will impact you.

HTH,
Dan Cumpian


-Original Message-
From: Jeff Snoxell [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, December 18, 2002 12:56 PM
To: [EMAIL PROTECTED]
Subject: SELECT and UPDATE at the same time?

Hello again,

I'm selecting a group of records from my database. I then loop through the
selected records and do some work based on what I find. But what I also
want to do as I interrogate each record is update some of its fields with
new values... but won't that screw up the outer loop? I mean, if I try to
execute a query whilst looping around the result set of a former query,
will I not screw up the result set that I'm looping through?

Also, is it possible to update specific fields of certain records within a
SELECT query? i.e. can I do something like this:

SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"

Ta,


Jeff
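A combined statement like the one above is not valid SQL; the conventional approach is a separate UPDATE followed by the SELECT. A sketch from the shell, reusing the poster's table and column names:

```shell
#!/bin/sh
# Mark matching rows first, then read them back in a second statement.
mark_and_fetch_old() {
    mysql my_db -e 'UPDATE my_table SET Status = "OLD" WHERE Age > 50'
    mysql my_db -e 'SELECT * FROM my_table WHERE Status = "OLD"'
}
```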






Re: really slow query results --- SOLVED

2002-12-18 Thread Dan Nelson
In the last episode (Dec 19), Joseph Dietz said:
> Someone helped me solve the problem with my slow query results! I was
> missing the index which I should have added to my cross reference
> tables.. AND I should have used the INNER JOIN clause instead of
> joining my table with the WHERE clause. In the future I will be far
> more careful designing my queries.

INNER JOIN and WHERE do the same thing:

   * `INNER JOIN' and `,' (comma) are semantically equivalent.  Both do
 a full join between the tables used. Normally, you specify how the
 tables should be linked in the WHERE condition.
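So the speed-up came from the added indexes rather than the join syntax. For illustration, here are the two equivalent spellings wrapped in shell functions (schema names borrowed from the earlier post in this thread; treat the whole thing as a sketch):

```shell
#!/bin/sh
# Two semantically equivalent spellings of the same inner join.
join_with_on() {
    mysql my_db -e 'SELECT * FROM Medias m
                    INNER JOIN MediaAuthors ma ON ma.fk_media_id = m.pk_media_id
                    INNER JOIN Authors a ON a.pk_author_id = ma.fk_author_id'
}

join_with_where() {
    mysql my_db -e 'SELECT * FROM Medias m, MediaAuthors ma, Authors a
                    WHERE ma.fk_media_id = m.pk_media_id
                      AND a.pk_author_id = ma.fk_author_id'
}
```

Either way, the decisive factor for speed is an index on each join column.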

-- 
Dan Nelson
[EMAIL PROTECTED]





RE: Can MySQL handle 120 million records?

2002-12-18 Thread JamesD
I like MySQL, but it has a long way to go to gain the level of 'confidence'
that Oracle, DB2, MSSQL or Sybase have when it comes to frontline
mission-critical stuff. I think it will in time... that's why I stick with
it. Besides, confidence is often just a synonym for 'knowledge'.

select count(*) from x (0.00 seconds)

If you dig into the source, you will probably find this common select is
built in and running all the time, and we are all just getting a pre-filled
variable returned from the method "select count(*) from (x)".

I hope someone can prove me wrong...

I agree, in many common queries it is a heck of a lot faster than MSSQL.

Jim

-Original Message-
From: Peter Vertes [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 6:47 AM
To: [EMAIL PROTECTED]
Subject: RE: Can MySQL handle 120 million records?


Hi,

I've been using MySQL within the company for a while now with great
results. Even the diehard MSSQL people are amazed at how fast it can be at
times.  One of the things I use it for is to store syslog events.  I wrote
a backend that parses a syslog file as data is being written into it and
does multiple things with each syslog entry depending on what the entry
contains.  When I'm done with it, the syslog entry goes into a MySQL
database where I can store the data and let the operations team access it
through a PHP-enabled webpage, either to see what is going on in the system
in real time or to run queries about certain hosts or processes, or show
some stats (what happened to machine x on date y, what processes were
running on it, etc.).
The MySQL database is hosted on a Dell Precision 540 workstation: a P4
1.7 GHz Xeon with 512 MB of RAM and a 40 GB IDE disc running Windows 2000
Server.  That MySQL database is also being used for other things (nothing
too intensive), and I muck around with it and use it as a test db.  The
machine also handles webserving chores and runs backup jobs and other
operations-related tasks.
The database only holds about one month's worth of data; the rest we
don't really need, but we keep it around for a while outside of the db,
zipped up.  As of this writing there were about 18.7 million entries in
that table:

mysql> select count(*) from notifications;
+--+
| count(*) |
+--+
| 18711190 |
+--+
1 row in set (0.00 sec)

All these entries have been accumulated from December 1, 2002 till present
day:

mysql> select distinct syslogdate from notifications order by syslogdate;
++
| syslogdate |
++
| 2002-12-01 |
| 2002-12-02 |
| 2002-12-03 |
| 2002-12-04 |
| 2002-12-05 |
| 2002-12-06 |
| 2002-12-07 |
| 2002-12-08 |
| 2002-12-09 |
| 2002-12-10 |
| 2002-12-11 |
| 2002-12-12 |
| 2002-12-13 |
| 2002-12-14 |
| 2002-12-15 |
| 2002-12-16 |
| 2002-12-17 |
| 2002-12-18 |
++
18 rows in set (12.95 sec)

Notice it took almost 13 seconds to complete that last query.  I tried this
on a MSSQL server and killed the query after 2 minutes.  That kind of
performance is unacceptable for a webapp that runs real-time queries
against the database.  I'm quite happy with the performance of MySQL, and I
just love to see the MSSQL guys retreat when I show off how fast some
queries can be (they always strike back with transactional stuff, blah,
blah, blah :)
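If that 13-second DISTINCT query matters, one likely fix (an assumption, since the post doesn't show the table's indexes) is an index on syslogdate, so MySQL can read the index instead of scanning all 18.7 million rows. A sketch with a placeholder database name:

```shell
#!/bin/sh
# Adding an index lets the DISTINCT-date query walk the index
# rather than scanning every row of the table.
add_date_index() {
    mysql syslog_db -e 'ALTER TABLE notifications ADD INDEX (syslogdate)'
}
```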
Anyway, I would suggest you use Linux for your db server with some kind of
journaling file system.  I would go with ReiserFS because, if memory serves
correctly, it can handle files up to 4 terabytes, but you might want to
double-check since I'm quite forgetful with facts like that :)  I would
also recommend the fastest SCSI drives you can find.  When I run queries on
any 10-million-plus-row table I barely get any CPU activity but A LOT of
disk activity, and I think this IDE drive is holding MySQL back.  When I
have time I'm thinking about moving this database/webapp beast onto a SCSI
Linux box to see how well it performs.  I think you'll be very pleased with
the performance you get out of MySQL.

-Pete

P.S.: Thanks again MySQL team :)





Re: $Variables and Select statements

2002-12-18 Thread C. Reeve
Thanks for the information; I will try that in this situation. However, I
have done this in other select statements and it has worked fine. I will
follow up on the PHP list if I still have problems.

Please note that since I assumed this was a MySQL problem, I posted it to
this group. Since these two programs are so closely interlinked, it is hard
for us newbies to tell where the problem may lie. I have in the past posted
a question to both groups when I wasn't sure either. These lists are for
those who know and those who don't, and each has to be tolerant of the
other.

Beauford

- Original Message -
From: "Basil Hussain" <[EMAIL PROTECTED]>
To: "C. Reeve" <[EMAIL PROTECTED]>; "MySQL List"
<[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 11:57 AM
Subject: RE: $Variables and Select statements


> Hi,
>
> > while ($line = mysql_fetch_array($result, MYSQL_ASSOC)) {
> >
> > echo "<tr><td ALIGN=left>".$line['name']."</td><td ALIGN=right>".$line['$wk']."</td></tr>";
> >
> > }
>
> Here is your problem. Your are indeed substituting $wk into the query
> correctly, but when you go to output the result rows you are not.
>
> You should simply remove the quote marks from:
>
> $line['$wk']
>
> so that it is:
>
> $line[$wk]
>
> PHP does not interpret variable names in single-quoted strings - only in
> double-quoted strings. With your current code, PHP is only looking for an
> array element called '$wk', which doesn't exist!
>
> In future, may I suggest you that post questions like this to the PHP
> mailing list, as this is nothing to do with MySQL.
>
> Regards,
>
> Basil Hussain
> ---
> Internet Developer, Kodak Weddings
> E-Mail: [EMAIL PROTECTED]
>
>
>







How can I update an entire row at once?

2002-12-18 Thread David T McWilliams
I am using php and mysql to read a row from a table, store it as an array
called $user_data, then manipulate the array.

$id = 123;
$result = mysql_query(" SELECT * FROM users WHERE id='$id' ");
$user_data = mysql_fetch_assoc($result);
$user_data = ProcessData($user_data);

Now I want to save all the changes in the $user_data array back to the
table.  What is the most efficient way to do this?

-David McWilliams







RE: MySQL 3.23.54-max GLIBC errors

2002-12-18 Thread Nicholas Gaugler
Lenz,

Thanks, I downloaded it and it all works fine now, the whole bin directory
is statically linked.


Nickg

-Original Message-
From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 2:46 PM
To: Nick
Cc: [EMAIL PROTECTED]
Subject: Re: MySQL 3.23.54-max GLIBC errors


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 13 Dec 2002, Nick wrote:

> I am currently running MySQL3.23.53-max with no problems, well
> my_print_default complains about the same thing that MySQL-3.23.54
> complains about, but MySQLd runs fine.  I am attempting to upgrade to
> MySQL3.23.54 (mysql-max-3.23.54-pc-linux-i686.tar.gz) and MySQLd does
> not run at all, below are the errors I receive:

3.23.54a-Max for Linux accidentally was linked dynamically instead of
statically. Could you please give this binary below a try? I will publish
it on our download pages as well soon:

http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i686.tar.gz

Sorry for the inconvenience.

Bye,
LenZ
- --
For technical support contracts, visit https://order.mysql.com/?ref=mlgr
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Lenz Grimmer <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer
/_/  /_/\_, /___/\___\_\___/ Hamburg, Germany
   <___/   www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)
Comment: For info see http://quantumlab.net/pine_privacy_guard/

iD8DBQE+AN6GSVDhKrJykfIRArA/AJ9ihw5wZYzV9NZh/T0/Z7ptbV1aKACfRIWx
t3of7OZosnw4Q1s8PPgt5Tw=
=CRxn
-END PGP SIGNATURE-






Re: Converting many tables into MyISAM

2002-12-18 Thread Benjamin Pflugmann
Hello.

For changing one table, the recommended way is

  ALTER TABLE foo TYPE=MYISAM;

There is no built-in way to change several tables at once. I regularly
need to apply the same command to a number of tables. If you use some
UNIX shell, you can do something like this:

  mysql your_db -t -e "SHOW TABLE STATUS" | awk '/ ISAM / {print $2}'

That will give a list of tables of type ISAM. When you are sure that
you get the tables you want, change the awk command to

  ... | awk '/ ISAM / {print "ALTER TABLE "$2" TYPE=MYISAM;"}' 

When you are satisfied with the output (and maybe tested one line by
copy&paste), just append a call to the command line client after awk:

  ... | mysql your_db 


So the complete command line would be:

  mysql your_db -t -e "SHOW TABLE STATUS" \
  | awk '/ ISAM / {print "ALTER TABLE "$2" TYPE=MYISAM;"}' \
  | mysql your_db

HTH,

Benjamin.


On Wed 2002-12-18 at 18:58:04 -, [EMAIL PROTECTED] wrote:
> I have about 300 database tables that are mostly ISAM and some MyISAM
> format. I would like to move them all into MyISAM - what's the
> easiest/quickest way? I am running 3.23.54.
> 
> I was thinking of doing mysqldump, then using a search/replace in the file
> CREATE TABLE .. TYPE=, then recreating the database files using mysql, but
> it seems a bit heavyhanded to me ;)
[...]

-- 
[EMAIL PROTECTED]





replication issues

2002-12-18 Thread Dan
Hey guys,

I followed the MySQL replication instructions in the manual to a T
(http://www.mysql.com/doc/en/Replication_HOWTO.html), but replication
immediately fails.  I tailed the logs and noticed that it failed on an
insert query on key1, claiming "duplicate entry".  Now, I've read online
that duplicate-entry problems are often caused by auto-increment fields.
The query was writing to a table that had an auto-increment index; however,
it was not attempting to write to the index field. I also attempted to have
it skip the entry with the SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; command.
This command failed.  It did not fail when I eliminated the GLOBAL keyword,
like so: SET SQL_SLAVE_SKIP_COUNTER=1;  However, still no luck with
replication: it failed on the very next insert query.

FYI: I am using Linux (Redhat 7.3), and the simplest possible replication
scheme, with one master and one slave on separate machines, both using the
latest stable version of mysql 3.23.54a.

Any ideas?  Is there a better how-to on the web that any of you know of?
Have any of you seen this issue?

I really appreciate any help you have to offer.  Thanks much.


-Dan






Upgrading from 4.0.1 Alpha to 4.0.5

2002-12-18 Thread Andrew Kuebler
Currently I'm using 4.0.1 Alpha and I'm thinking of upgrading to 4.0.5
as I want to be able to issue UPDATE queries that span multiple tables.
Is there anything in particular I should know of or be aware of before
installing 4.0.5? What is the proper procedure for upgrading? Should I
first uninstall 4.0.1 and then install 4.0.5?

I do not see any upgrade instructions for 4.0.5 here like I do for
other versions:
http://www.mysql.com/doc/en/Upgrading-from-4.0.html

Thank you in advance.

Andrew







Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Ryan Fox

- Original Message -
From: "Qunfeng Dong" <[EMAIL PROTECTED]>
>
> We are trying to develop a simple biology database to
> maintain some DNA Sequence information. My problem is
> coming from the following two tables:


Making indexes smaller will help.  Does it need to be varchar(50)?
Also, I'd consider creating a numeric auto_increment primary key on your
NewSequence table, and using it to relate the 2 tables together.  It may
make for some more complex SQL statements to describe the relationship, but
you'll gain the time back in performance.  Consider using the following.

CREATE TABLE NewSequence
(
    id          int(11) NOT NULL auto_increment,
    Seq_ID      varchar(50) NOT NULL,
    GenBank_Acc varchar(10),
    Organism    varchar(50) NOT NULL,
    Seq_Type    enum("EST","GSS","EST Contig","EST Singlet","GSS Contig",
                     "GSS Singlet","GSS Plasmid Contig","Protein") NOT NULL,
    Seq_Length  int NOT NULL,
    Seq_Title   text NOT NULL,
    Comment     text,
    Entry_Date  date NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (Seq_ID),
    UNIQUE (GenBank_Acc),
    INDEX (Seq_Type),
    INDEX (Organism)
);

CREATE TABLE NewSequence_Homolog
(
    id              int(11) NOT NULL,
    Homolog_PID     int NOT NULL,
    Homolog_Desc    varchar(50) NOT NULL,
    Homolog_Species varchar(50),
    PRIMARY KEY (id, Homolog_PID)
);

This would make your example query:

select count(*) from NewSequence s left join NewSequence_Homolog h
on s.id = h.id;

And this would run much quicker: instead of searching through 50-character
index entries for each table, the join only has to compare compact integer
keys.  Much quicker.
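
One way to check the gain (a hedged sketch; the exact plan depends on your data) is to compare EXPLAIN output before and after the change:

```sql
EXPLAIN select count(*)
from NewSequence s left join NewSequence_Homolog h on s.id = h.id;
-- key_len should now show 4 (the int key) rather than 50 (the varchar key)
```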

Ryan





win2k update

2002-12-18 Thread John Chang
Is there a patch to update from .3 to .4, so that I don't have to reinstall
it on my Win2k box?  If I do have to reinstall it, do I need to uninstall
.3 first?





RE: Can MySQL handle 120 million records?

2002-12-18 Thread RBRoa
I am proud to be a user of MySQL, a rock-solid database engine. I use it as
my backend data store in every form, from traffic analysis to subscriber
administration, and I have never experienced downtime due to a bug. So if
you are planning to use open source as your billing handler, think carefully
and decide which option is handy for you. And if you are convinced of the
solidity of this engine, then I think you had better move everything over
to it.



R R--





Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Jocelyn Fournier
Hi,

I think you'd better add a unique ID, defined as an int, to both tables
(corresponding to each Seq_ID), and then do the join on this ID rather than
on Seq_ID; a join on varchar is far from the fastest solution :) (unless
Seq_ID could be converted into an int directly?)

(But it takes time, even for me (dual Athlon MP 2200+):


mysql> SELECT COUNT(*) FROM searchmainhardwarefr7 LEFT JOIN
searchjoinhardwarefr7 ON
searchjoinhardwarefr7.numreponse=searchmainhardwarefr7.numreponse;
+--+
| COUNT(*) |
+--+
| 39396361 |
+--+
1 row in set (3 min 23.15 sec)

mysql> EXPLAIN SELECT COUNT(*) FROM searchmainhardwarefr7 LEFT JOIN
searchjoinhardwarefr7 ON
searchjoinhardwarefr7.numreponse=searchmainhardwarefr7.numreponse;
+----+-------------+-----------------------+--------+---------------+------------+---------+----------------------------------+----------+-------------+
| id | select_type | table                 | type   | possible_keys | key        | key_len | ref                              | rows     | Extra       |
+----+-------------+-----------------------+--------+---------------+------------+---------+----------------------------------+----------+-------------+
|  1 | SIMPLE      | searchmainhardwarefr7 | index  | NULL          | numreponse |       4 | NULL                             | 39396576 | Using index |
|  1 | SIMPLE      | searchjoinhardwarefr7 | eq_ref | numreponse    | numreponse |       4 | searchmainhardwarefr7.numreponse |        1 | Using index |
+----+-------------+-----------------------+--------+---------------+------------+---------+----------------------------------+----------+-------------+
)


Regards,
  Jocelyn
- Original Message -
From: "Qunfeng Dong" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 9:17 PM
Subject: Re: Can MySQL handle 120 million records? - Ok, If you guys really
can handle tens of millions records, you have to help me to enjoy MySQL too
:-)


> Boy, you guys are die-hard MySQL fans :-) I think your
> strong defending convinced us MySQL can handle 120
> million records :-) But I know some ordinary users out
> there like me who are not experts on tuning the MySQL
> performance (they did send me private emails saying
> they encountered the similar slow join problem). So
> please help us to keep the faith.
>
> We are trying to develop a simple biology database to
> maintain some DNA Sequence information. My problem is
> coming from the following two tables:
>
> CREATE TABLE NewSequence
> (
>     Seq_ID      varchar(50) NOT NULL,
>     GenBank_Acc varchar(10),
>     Organism    varchar(50) NOT NULL,
>     Seq_Type    enum("EST","GSS","EST Contig","EST Singlet","GSS Contig",
>                      "GSS Singlet","GSS Plasmid Contig","Protein") NOT NULL,
>     Seq_Length  int NOT NULL,
>     Seq_Title   text NOT NULL,
>     Comment     text,
>     Entry_Date  date NOT NULL,
>     PRIMARY KEY (Seq_ID),
>     UNIQUE (GenBank_Acc),
>     INDEX (Seq_Type),
>     INDEX (Organism)
> );
>
> This NewSequence table is used to track some general
> info about sequence. Notice I have to use text
> datatype to describe "Comment" and "Seq_Title" fields;
> therefore I have to use varchar for other string
> fields. In addition, the Seq_ID is not numerical.
> BTW, I found that indexing on Seq_Type and Organism,
> which are very repetitive, still helps with access.
> This table has 2676711 rows.
>
>
> CREATE TABLE NewSequence_Homolog
> (
>     Seq_ID          varchar(50) NOT NULL,
>     Homolog_PID     int NOT NULL,
>     Homolog_Desc    varchar(50) NOT NULL,
>     Homolog_Species varchar(50),
>     PRIMARY KEY (Seq_ID, Homolog_PID)
> );
>
> This NewSequence_Homolog table is to track which
> protein sequences (homolog) are similar to the
> sequence I store in the NewSequence table. This table
> has 997654 rows.
>
> mysql> select count(*) from NewSequence s left join
> NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
> +--+
> | count(*) |
> +--+
> |  3292029 |
> +--+
> 1 row in set (1 min 30.50 sec)
>
> So a simple left join took about 1 min and half.
> First, is this slow or I am too picky?
>
> This is the "Explain".
> mysql> explain select count(*) from NewSequence s left
> join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
>
> +-------+-------+---------------+---------+---------+----------+---------+-------------+
> | table | type  | possible_keys | key     | key_len | ref      | rows    | Extra       |
> +-------+-------+---------------+---------+---------+----------+---------+-------------+
> | s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index |
> | h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index |
> +-------+-------+---------------+---------+---------+----------+---------+-------------+
>
>
> I am running M

RE: 3.23.54a Instability

2002-12-18 Thread Kees Hoekzema
Hey David,

> -Original Message-
> From: Lopez David E-r9374c [mailto:[EMAIL PROTECTED]]
>
> kees
>
> How do you measure spiked queries/s?
>
> All I see is average queries/s from the status command.
>
> I can see the calculation based on uptime in seconds and
> total queries in that time. But that's average. My boss
> wants avg and skipped on a web site.

We (a colleague, actually) wrote a script that runs every 5 minutes and
stores the 'Questions' counter from MySQL. After all the servers are polled
(3 boxes running MySQL) it makes some nice graphs:

All servers, Wednesday 18 December:
http://www.tweakers.net/stats?Action=Generator&Mode=Serverstats&Time=1040166000&Tijd=00%3A00&Dagen=1&Server=&Col=SQLQueries&x=38&y=10

One server, Wednesday 18 December:
http://www.tweakers.net/stats?Action=Generator&Mode=Serverstats&Time=1040166000&Tijd=00%3A00&Dagen=1&Server=Alicia&Col=SQLQueries&x=50&y=14

All servers, one week:
http://www.tweakers.net/stats?Action=Generator&Mode=Serverstats&Time=1039561200&Tijd=00%3A00&Dagen=7&Server=&Col=SQLQueries&x=52&y=8

Just play a bit with the "Grafiekgenerator" to get different graphs. Note:
the site is in Dutch, but most English speakers shouldn't have much trouble
with it; there are a lot of English terms ;)

The script was written in-house, so don't expect an open-source project (the
source is way too messy, and my colleague doesn't want to open-source it),
but it isn't difficult to recreate this with mrtg / rrdtool.
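
For anyone wanting to recreate this without mrtg/rrdtool, the core of such a sampler is just reading the cumulative Questions counter on a schedule and dividing the delta by the interval. A minimal sketch (the log table and its columns are hypothetical, not taken from the original script):

```sql
-- Run from cron every 5 minutes and record the result:
SHOW STATUS LIKE 'Questions';

-- Store each sample in a log table, e.g.:
CREATE TABLE qps_log (
    sampled_at DATETIME NOT NULL,
    questions  BIGINT   NOT NULL   -- cumulative value of 'Questions'
);

-- Average queries/sec between two consecutive samples a and b:
--   (b.questions - a.questions) / seconds between the samples
-- e.g. for 5-minute samples: (b.questions - a.questions) / 300
```

Graphing the per-interval rates (rather than the lifetime average from uptime) is what makes the spikes visible.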

>
> David
- kees







really slow query results --- SOLVED

2002-12-18 Thread Joseph Dietz
Someone helped me solve the problem with my slow query results!
I was missing the indexes I should have added to my cross-reference
tables, AND I should have used the INNER JOIN clause instead of joining my
tables in the WHERE clause. In the future I will be far more careful
designing my queries.

Example:
pk = primary key
fk = foreign key

each media has multiple authors:

pk_media_id --- fk_media_id, fk_author_id --- pk_author_id

each media has multiple tissues:

pk_media_id --- fk_media_id, fk_tissue_id --- pk_tissue_id

All I had to do was add an index to the fk_author_id and fk_tissue_id using 
the following syntax:

ALTER TABLE MediaAuthors ADD KEY (fk_author_id);
ALTER TABLE MediaTissues ADD KEY (fk_tissue_id);




EXPLAIN SELECT DISTINCT Medias.pk_media_id, Organisms.common_name, 
Tissues.type As tiss_type, Cells.type As cell_type,

Organelles.type As org_type, Macromolecules.type As macro_type,
Authors.last_name, Authors.organization, Medias.file_name, Medias.format, 
Medias.label
FROM Macromolecules
INNER JOIN MediaMacromolecules ON fk_macromolecule_id = pk_macromolecule_id
INNER JOIN Medias ON pk_media_id = MediaMacromolecules.fk_media_id
INNER JOIN MediaAuthors ON pk_media_id = MediaAuthors.fk_media_id
INNER JOIN Authors ON fk_author_id = pk_author_id
INNER JOIN MediaTissues ON pk_media_id = MediaTissues.fk_media_id
INNER JOIN Tissues ON fk_tissue_id = pk_tissue_id
INNER JOIN MediaCells ON pk_media_id = MediaCells.fk_media_id
INNER JOIN Cells ON fk_cell_id = pk_cell_id
INNER JOIN MediaOrganelles ON pk_media_id = MediaOrganelles.fk_media_id
INNER JOIN Organelles ON fk_organelle_id = pk_organelle_id
INNER JOIN Organisms ON fk_organism_id = pk_organism_id
INNER JOIN Techniques ON fk_technique_id = pk_technique_id
INNER JOIN Admin ON Medias.fk_admin_id = pk_admin_id
WHERE pk_macromolecule_id = 1

Thank you all! Joseph:):)








RE: 3.23.54a Instability

2002-12-18 Thread Lopez David E-r9374c
kees

How do you measure spiked queries/s? 

All I see is average queries/s from the status command.

I can see the calculation based on uptime in seconds and
total queries in that time. But that's the average. My boss
wants both average and peak ("spiked") queries/s on a web site.

David

> -Original Message-
> From: Kees Hoekzema [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 18, 2002 1:54 PM
> To: Lenz Grimmer
> Cc: [EMAIL PROTECTED]
> Subject: RE: 3.23.54a Instability
> 
> 
> > From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
> > Sent: Wednesday, December 18, 2002 9:43 PM
> 
> > OK, as I already assumed, the Max binary was not linked 
> statically. Doh!
> > Interesting, that this also causes the load to spike, even 
> though it's not
> > statically linked against an unpatched glibc.
> Maybe it is the version of glibc i'm using? it is unstable, 
> the load spikes
> to +50 in 2 minutes.
> (it is the glibc package form Slackware 8.1)
> 
> 
> > Could you please give this one a try? If this one solves the
> > problem, I will publish it on our download pages ASAP:
> >
> > 
> http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i6
> 86.tar.gz
> 
> Ok, this one works without the spikes;
> normally the load would jump from ~1 to +50, now it is stable
> at 1 again.
> And if this one works for me, it'll work for almost everyone ;)
> (kinda loaded server, +1000 queries/s spikes and avg of +350 q/s)
> 
> -kees
> 
> 





RE: Re: SELECT and UPDATE at the same time?

2002-12-18 Thread Mike Hillyer
Two ideas:

First, the TIMESTAMP datatype automatically updates its value with every
modification to the row. You may want to use it to record the time of your
update automatically. Second, you may want to look into using a TEMPORARY
table. Make a temporary table, fill it with the primary keys of the records
you want to update by selecting into it from the main table, then iterate
through those for processing, with the updates going to the
main table.
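
As a sketch of the temporary-table idea (table and column names are borrowed from the thread; note the multi-table UPDATE form requires MySQL 4.0+, so treat that step as an assumption if you are on 3.23):

```sql
-- Snapshot the keys of the rows to process:
CREATE TEMPORARY TABLE old_ids (id INT NOT NULL PRIMARY KEY);
INSERT INTO old_ids SELECT id FROM my_table WHERE Age > 50;

-- ... do the long-winded processing against the ids in old_ids ...

-- Then update exactly the snapshotted rows, not rows added in the
-- meantime (multi-table UPDATE, MySQL 4.0+):
UPDATE my_table, old_ids
SET    my_table.Status = 'OLD'
WHERE  my_table.id = old_ids.id;
```

Because the snapshot fixes the set of keys up front, rows inserted by other clients during the processing are untouched.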

Mike Hillyer


-Original Message-
From: Jeff Snoxell [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 1:54 PM
To: [EMAIL PROTECTED]
Subject: Re: Re: SELECT and UPDATE at the same time?



>>
>>SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"
>
>No. That's goofy anyway.  Why wouldn't you just use a regular
>UPDATE query?
>
>UPDATE mytable SET Status = "OLD" WHERE Age > 50;

Cos I want to do a fairly long-winded process on the records of those who
are Age>50 and subsequently update all of their records with the Date/Time
that the change took place.

If I do the long-winded process and _then_ do the UPDATE query as you
suggest I'm bound to update records which have been added in the meantime
(by other clients, it's a busy database) which I've not yet processed.

I could make a list of all primary key values I've processed and then
update them... or something equally ugly. I just figured there'd be a
cleaner way to do it.

I really want to avoid grabbing all the results as my server is not located
on my local machine.

Oh, I could update all the records where age>50 with a known date/time then
do my select query to select only those records with that "time-stamp", and
then do my long-winded processing.

Can't help thinking there must be a nicer way to do it.

Thanks,

Jeff








pthread error on HPUX 11.23 IPF

2002-12-18 Thread Zengfa Gao
Hi,

I am trying to compile MySQL 3.23 on HPUX 11.23 IPF.

I am using gcc 3.0. gcc 2.95 is not available on HPUX
11.23.


CC=gcc \
CXX=g++ \
CFLAGS="-D_HPUX_SOURCE -D__hpux__ -D_REENTRANT" \
CXXFLAGS="-D_HPUX_SOURCE -D__hpux__ -D_REENTRANT" \
CPPFLAGS="-D_HPUX_SOURCE -D__hpux__ -D_REENTRANT" \
LDFLAGS="-L/usr/local/lib" \
./configure  \
 --prefix=/usr/local/mysql \
 --exec-prefix=/usr/local/mysql \
 --with-libwrap=/usr/local \
 --with-low-memory


gmake

gcc -DHAVE_CONFIG_H -I. -I. -I.. -I./../include
-I../include  -D_POSIX_C_SOURCE=199506L -D_HPUX_SOURCE
-D__hpux__ -D_REENTRANT  -O3 -DDBUG_OFF
-D_POSIX_C_SOURCE=199506L -D_HPUX_SOURCE -D__hpux__
-D_REENTRANT  -DHAVE_BROKEN_PREAD -DDONT_USE_FINITE
-DHAVE_BROKEN_GETPASS -DNO_FCNTL_NONBLOCK
-DDO_NOT_REMOVE_THREAD_WRAPPERS
-DHAVE_BROKEN_PTHREAD_COND_TIMEDWAIT -c strxmov.c
In file included from /usr/include/pthread.h:7,
 from ../include/global.h:97,
 from strxmov.c:33:
/usr/include/sys/pthread.h:1039: parse error before
'*' token
gmake[2]: *** [strxmov.o] Error 1
gmake[2]: Leaving directory
`/var/mysql-3.23.52/strings'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/var/mysql-3.23.52'
gmake: *** [all-recursive-am] Error 2


Does anyone have some ideas?

Thanks!

Zengfa







Re: SELECT and UPDATE at the same time?

2002-12-18 Thread Gelu Gogancea
Hi,
You can use REPLACE...SELECT

Regards,

Gelu
_
G.NET SOFTWARE COMPANY

Permanent e-mail address : [EMAIL PROTECTED]
  [EMAIL PROTECTED]
- Original Message -
From: "Jeff Snoxell" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 7:56 PM
Subject: SELECT and UPDATE at the same time?


> Hello again,
>
> I'm selecting a group of records from my database. I then loop through the
> selected records and do some work based on what I find. But what I also
> want to do as I interrogate each record is update some of its fields with
> new values... but won't that screw up the outer loop? I mean if I try to
> execute a query whilst looping around the result set of a former query
will
> I not screw up my result set that I'm looping through?
>
> Also, is it possible to update specific fields of certain records within a
> SELECT query? ie can I do something like this:
>
> SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"
>
> Ta,
>
>
> Jeff
>
>
>
>






re: Accessing last_insert_id problem.

2002-12-18 Thread Paul DuBois
At 15:31 + 12/18/02, Jeff Snoxell wrote:

Hi,

I've got that sussed now and am happily using the acquired ref to
subsequently play with the record etc. One other related problem and
I reckon I'm sorted:

How do I reset the auto-increment value?

I'm working with MySQL 3.23.36 so, according to "MySQL, Paul Dubois, 
New Riders" running the query:

"DELETE FROM my_table_name"

should reset it... but it doesn't.

What SQL do I use to reset the value?

You can't believe anything that book says.  The author didn't even include
anything about TRUNCATE TABLE my_table_name.
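
For reference, the two usual ways to reset the counter (the ALTER TABLE form is offered as a hedged alternative; support in the oldest 3.23 builds may vary):

```sql
-- Empties the table AND resets the auto_increment counter:
TRUNCATE TABLE my_table_name;

-- Or reset the counter explicitly without relying on DELETE's behavior:
ALTER TABLE my_table_name AUTO_INCREMENT = 1;
```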



Ta,

Jeff







Re: Accessing last_insert_id problem.

2002-12-18 Thread Paul DuBois
At 12:21 + 12/18/02, Jeff Snoxell wrote:

Hi,

I'm adding records to a db using the Perl DBI. Subsequent to adding 
a record I need to know the value of the auto-incrementing 'Ref' 
field so that I can place a copy of the relavent details into a log 
file.

I could query for the "LAST_INSERT_ID" but what if another process 
has added another record in the interim?

Doesn't matter.



Is this a job for table locking?


No, it's probably more of a job for reading the section on LAST_INSERT_ID()
in the manual again, particularly the part about "cannot be changed by
another client." :-)

http://www.mysql.com/doc/en/Miscellaneous_functions.html
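
The point of that section: LAST_INSERT_ID() is maintained per connection, so another client's inserts cannot change the value your connection sees. A minimal sketch (the table name is hypothetical):

```sql
CREATE TABLE log_demo (
    Ref  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    note VARCHAR(50)
);

INSERT INTO log_demo (note) VALUES ('first');

-- Returns the id generated by THIS connection's last insert,
-- regardless of what other clients inserted in the interim:
SELECT LAST_INSERT_ID();
```

So no table locking is needed, provided you call LAST_INSERT_ID() on the same connection that performed the insert.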



Many thanks,

Jeff







PASSWORD() function not working

2002-12-18 Thread Matt Parlane
>Description:
   The password function does not work as intended with latest 4.1 tree 
(today's).  It returns a string of LENGTH() = 45 starting with a * 
(asterisk) and 44 alphanumeric characters /[a-z0-9]/
>How-To-Repeat:
   Using latest 4.1 BK tree, SELECT PASSWORD('something');
>Fix:
   Not known.

>Submitter-Id:
>Originator:
>Organization:
>MySQL support: none
>Synopsis: PASSWORD() function not working
>Severity: serious
>Priority: high
>Category:  mysql
>Class: sw-bug
>Release:   mysql-4.1.0-alpha (Source distribution)

>Environment:
   Slackware Linux 8.1 - 2.4.20 - 256MB - GCC 2.95.3 - PIII 933MHz 
(single processor)
System: Linux homer 2.4.20 #2 SMP Mon Dec 2 15:14:02 NZDT 2002 i686 unknown
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc 
/usr/bin/cc
GCC: Reading specs from /usr/lib/gcc-lib/i386-slackware-linux/2.95.3/specs
gcc version 2.95.3 20010315 (release)
Compilation info: CC='gcc'  CFLAGS=''  CXX='g++'  CXXFLAGS=''  
LDFLAGS=''  ASFLAGS=''
LIBC:
lrwxrwxrwx1 root root   13 Jul 20 14:34 /lib/libc.so.6 
-> libc-2.2.5.so
-rwxr-xr-x1 root root  5029105 May 18  2002 /lib/libc-2.2.5.so
-rw-r--r--1 root root 24991240 May 18  2002 /usr/lib/libc.a
-rw-r--r--1 root root  178 May 18  2002 /usr/lib/libc.so
Configure command: ./configure '--prefix=/usr/local/mysql-4.1-2002-12-19'
'--enable-assembler' '--with-mysqld-ldflags=-all-static' '--without-isam'
'--without-innodb'






Password function not working with latest 4.1 tree

2002-12-18 Thread matt
>Description:
with the latest 4.1 tree (from today) the PASSWORD() function returns random 
alpha-numeric text /[a-f0-9]/
45 characters in length (which is too long for a password string). The string 
always starts with a * (asterisk).
example:*95144feaa0f433f3f62c29382697a1e631b283f860f0

>How-To-Repeat:
Using latest BK 4.1 tree, SELECT PASSWORD('something');
>Fix:
Not known.

>Submitter-Id:  
>Originator:
>Organization:
 
>MySQL support: none
>Synopsis:  Password function does not work as intended
>Severity:  serious
>Priority:  high
>Category:  mysql
>Class: sw-bug
>Release:   mysql-4.1.0-alpha (Source distribution)

>Environment:
Slackware Linux 2.4.20 gcc 2.95.3
System: Linux homer 2.4.20 #2 SMP Mon Dec 2 15:14:02 NZDT 2002 i686 unknown
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Reading specs from /usr/lib/gcc-lib/i386-slackware-linux/2.95.3/specs
gcc version 2.95.3 20010315 (release)
Compilation info: CC='gcc'  CFLAGS=''  CXX='g++'  CXXFLAGS=''  LDFLAGS=''  ASFLAGS=''
LIBC: 
lrwxrwxrwx1 root root   13 Jul 20 14:34 /lib/libc.so.6 -> libc-2.2.5.so
-rwxr-xr-x1 root root  5029105 May 18  2002 /lib/libc-2.2.5.so
-rw-r--r--1 root root 24991240 May 18  2002 /usr/lib/libc.a
-rw-r--r--1 root root  178 May 18  2002 /usr/lib/libc.so
Configure command: ./configure '--prefix=/usr/local/mysql-4.1-2002-12-02' 
'--enable-assembler' '--with-mysqld-ldflags=-all-static' '--without-isam' 
'--without-innodb'






Re: MySQL security flaws uncovered

2002-12-18 Thread Stefan Hinz, iConnect \(Berlin\)
Csongor, Michael:

>> Maybe they're taking the MS route.

> I second this. These vulnerabilities are serious, they must be given
> more attention. Apache, PHP, RedHat and so on and so on are very careful
> with issues like this, all vulnerabilities/exploits are immediately
> published through all possible channels.

Have a look at the website of the person who uncovered the security flaw:


Vendor Response
03. December 2002 Vendor was contacted by email.
04. December 2002 Vendor informs me that bugs are fixed and that they
started building new packages.
12. December 2002 Vendor has released MySQL 3.23.54 which fixes these
vulnerabilities.


Doesn't look like the "MS way" to me. See for yourselves:

http://security.e-matters.de/advisories/042002.html

Regards,
--
  Stefan Hinz <[EMAIL PROTECTED]>
  CEO / Geschäftsleitung iConnect GmbH 
  Heesestr. 6, 12169 Berlin (Germany)
  Telefon: +49 30 7970948-0  Fax: +49 30 7970948-3


- Original Message -
From: "Csongor Fagyal" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 11:33 AM
Subject: Re: MySQL security flaws uncovered


> Michael She wrote:
>
> > It's bad for business : )
> >
> > Maybe they're taking the MS route.
>
> I second this. These vulnerabilities are serious, they must be given
> more attention. Apache, PHP, RedHat and so on and so on are very careful
> with issues like this, all vulnerabilities/exploits are immediately
> published through all possible channels. Yes, it is always a pain to
> find out something like this, obviously the MySQL team just would like
> to forget this once and for all, but doing troublesome
> reinstalls/upgrades and so on is still better then getting our system
> hacked.
>
> - Cs.
>
> >
> >
> >
> > At 12:19 AM 12/18/2002 -0500, Michael Bacarella wrote:
> >
> >> A good question posted to another list..
> >>
> >>  forwarded message follows 
> >>
> >> > Several vulnerabilities have been found in the MySQL database system,
a
> >> > light database package commonly used in Linux environments but
> >> which runs
> >> > also on Microsoft platforms, HP-Unix, Mac OS and more.
> >> > http://zdnet.com.com/2100-1104-977958.html
> >>
> >> So why no mention on the MySQL.COM site?  That rather bugs me.  In
> >> contrast,
> >> sites for products like Apache or Bind are very clear about
current/past
> >> security issues.
> >>
> >> Is MySQL.COM the wrong place?
> >






Re: Severe performace problem linking tables with mysql

2002-12-18 Thread Joseph Dietz

Here are my queries. I have an entity relationship diagram but I don't think 
I can attach it here! Thank you for all your help. If you would like my 
entity relationship diagram, please email me.
Sincerely,
Joseph


explain SELECT distinct Medias.pk_media_id, Organisms.common_name, 
Tissues.type As tiss_type, Cells.type As cell_type,
Organelles.type As org_type, Macromolecules.type As macro_type, 
Authors.last_name, Authors.organization, Medias.file_name, Medias.format, 
Medias.label FROM Medias, Organisms, MediaAuthors, MediaTissues, MediaCells, 
MediaOrganelles, MediaMacromolecules,
Authors, Techniques, Tissues, Cells, Organelles, Macromolecules, Admin
WHERE Macromolecules.pk_macromolecule_id = 1 and Medias.fk_organism_id = 
Organisms.pk_organism_id and Medias.fk_technique_id = 
Techniques.pk_technique_id
and Medias.pk_media_id = MediaAuthors.fk_media_id and 
MediaAuthors.fk_author_id = Authors.pk_author_id
and Medias.pk_media_id = MediaTissues.fk_media_id and 
MediaTissues.fk_tissue_id = Tissues.pk_tissue_id
and Medias.pk_media_id = MediaCells.fk_media_id and MediaCells.fk_cell_id = 
Cells.pk_cell_id
and Medias.pk_media_id = MediaOrganelles.fk_media_id and 
MediaOrganelles.fk_organelle_id = Organelles.pk_organelle_id
and Medias.pk_media_id = MediaMacromolecules.fk_media_id and 
MediaMacromolecules.fk_macromolecule_id = Macromolecules.pk_macromolecule_id 
and Medias.fk_admin_id = Admin.pk_admin_id

+---------------------+--------+------------------------------------------------+-------------+---------+-----------------------------------------+------+-------------------------+
| table               | type   | possible_keys                                  | key         | key_len | ref                                     | rows | Extra                   |
+---------------------+--------+------------------------------------------------+-------------+---------+-----------------------------------------+------+-------------------------+
| Macromolecules      | const  | PRIMARY,pk_macromolecule_id,i_macromolecule_id | PRIMARY     |       8 | const                                   |    1 | Using temporary         |
| MediaCells          | index  | fk_media_id,i_mediacell_id                     | fk_media_id |      16 | NULL                                    |    2 | Using index             |
| Authors             | ALL    | PRIMARY,pk_author_id,i_author_id               | NULL        |    NULL | NULL                                    |    2 |                         |
| Tissues             | ALL    | PRIMARY,pk_tissue_id,i_tissue_id               | NULL        |    NULL | NULL                                    |    2 |                         |
| Medias              | eq_ref | PRIMARY,pk_media_id,i_media_id                 | PRIMARY     |       8 | MediaCells.fk_media_id                  |    1 |                         |
| MediaTissues        | eq_ref | fk_media_id,i_mediatissue_id                   | fk_media_id |      16 | Medias.pk_media_id,Tissues.pk_tissue_id |    1 | Using index             |
| MediaOrganelles     | index  | fk_media_id,i_mediaorganelle_id                | fk_media_id |      16 | NULL                                    |    2 | where used; Using index |
| MediaMacromolecules | eq_ref | fk_media_id,i_mediamacromolecule_id            | fk_media_id |      16 | Medias.pk_media_id,const                |    1 | Using index             |
| Organisms           | eq_ref | PRIMARY,pk_organism_id,i_organism_id           | PRIMARY     |       4 | Medias.fk_organism_id                   |    1 |                         |
| Techniques          | eq_ref | PRIMARY,pk_technique_id,i_technique_id         | PRIMARY     |       4 | Medias.fk_technique_id                  |    1 | Using index             |
| MediaAuthors        | eq_ref | fk_media_id,i_mediaauthor_id                   | fk_media_id |      12 | Medias.pk_media_id,Authors.pk_author_id |    1 | Using index             |
| Cells               | eq_ref | PRIMARY,pk_cell_id,i_cell_id                   | PRIMARY     |       8 | MediaCells.fk_cell_id                   |    1 |                         |
| Organelles          | eq_ref | PRIMARY,pk_organelle_id,i_organelle_id         | PRIMARY     |       8 | MediaOrganelles.fk_organelle_id         |    1 |                         |
| Admin               | eq_ref | PRIMARY,pk_admin_id,i_admin_id                 | PRIMARY     |       4 | Medias.fk_admin_id                      |    1 | Using index; Distinct   |
+---------------------+--------+------------------------------------------------+-------------+---------+-----------------------------------------+------+-------------------------+
14 rows in set (19.55 sec)





This is better:

EXPLAIN SELECT DISTINCT Medias.pk_media_id, Organisms.common_name, 
Tissues.type As tiss_type, Cells.type As cell_type, Organelles.type As 
org_type, Macromolecules.type As macro_type,
Authors.last_name, Authors.organization, Medias.file_name, Medias.format, 
Medias.label
FROM Macromolecules
INNER JOIN MediaMacromolecules ON fk_macromolecule_id = pk_macromolecule_id
INNER JOIN Medias ON pk_media_id = Medias.pk_media_id
INNER JOIN MediaAuthors ON pk_media_id 

Re: SELECT and UPDATE at the same time?

2002-12-18 Thread Paul DuBois
At 21:40 +0200 12/18/02, Gelu Gogancea wrote:

Hi,
You can use REPLACE...SELECT


Not in the case described below.  You cannot replace into the same
table from which you're selecting.
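In 3.23 one workaround is to stage the rows in a temporary table first. A rough sketch (untested; table and column names borrowed from Jeff's example below):

```sql
-- Sketch of a 3.23-era workaround: stage the rows in a temporary table,
-- since REPLACE cannot select from its own target table.
CREATE TEMPORARY TABLE tmp_old
    SELECT * FROM my_table WHERE Age > 50;

UPDATE tmp_old SET Status = "OLD";

-- Write the modified copies back; REPLACE matches rows via my_table's
-- PRIMARY KEY or a UNIQUE index, which the table must have.
REPLACE INTO my_table SELECT * FROM tmp_old;

DROP TABLE tmp_old;
```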



Regards,

Gelu
_
G.NET SOFTWARE COMPANY

Permanent e-mail address : [EMAIL PROTECTED]
  [EMAIL PROTECTED]
- Original Message -
From: "Jeff Snoxell" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 7:56 PM
Subject: SELECT and UPDATE at the same time?



 Hello again,

 I'm selecting a group of records from my database. I then loop through the
 selected records and do some work based on what I find. But what I also
 want to do as I interrogate each record is update some of its fields with
 new values... but won't that screw up the outer loop? I mean if I try to
 execute a query whilst looping around the result set of a former query

will

 I not screw up my result set that I'm looping through?

 Also, is it possible to update specific fields of certain records within a
 SELECT query? ie can I do something like this:

 SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"

 Ta,



 > Jeff



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




FW: Can MySQL handle 120 million records? - Impressive! How do you guys do that?

2002-12-18 Thread R. Hannes Niedner

-- Forwarded Message
From: Joseph D <[EMAIL PROTECTED]>
Date: Wed, 18 Dec 2002 13:55:47 -0800 (PST)
To: "R. Hannes Niedner" <[EMAIL PROTECTED]>
Subject: Re: Can MySQL handle 120 million records? - Impressive! How do you
guys do that?

Thanks. I actually can't post anything to the newsgroup because I'm using
Yahoo mail and there seems to be an attachment error.

Here is my table structure and WHERE clause.

explain SELECT distinct Medias.pk_media_id, Organisms.common_name,
Tissues.type As tiss_type, Cells.type As cell_type,
Organelles.type As org_type, Macromolecules.type As macro_type,
Authors.last_name, Authors.organization, Medias.file_name, Medias.format,
Medias.label FROM Medias, Organisms, MediaAuthors, MediaTissues, MediaCells,
MediaOrganelles, MediaMacromolecules,
Authors, Techniques, Tissues, Cells, Organelles, Macromolecules, Admin
WHERE Macromolecules.pk_macromolecule_id = 1 and Medias.fk_organism_id =
Organisms.pk_organism_id and Medias.fk_technique_id =
Techniques.pk_technique_id
and Medias.pk_media_id = MediaAuthors.fk_media_id and
MediaAuthors.fk_author_id = Authors.pk_author_id
and Medias.pk_media_id = MediaTissues.fk_media_id and
MediaTissues.fk_tissue_id = Tissues.pk_tissue_id
and Medias.pk_media_id = MediaCells.fk_media_id and MediaCells.fk_cell_id =
Cells.pk_cell_id 
and Medias.pk_media_id = MediaOrganelles.fk_media_id and
MediaOrganelles.fk_organelle_id = Organelles.pk_organelle_id
and Medias.pk_media_id = MediaMacromolecules.fk_media_id and
MediaMacromolecules.fk_macromolecule_id = Macromolecules.pk_macromolecule_id
and Medias.fk_admin_id = Admin.pk_admin_id



//--------------------------------------------------------------------
// Entity-relationship summary
//--------------------------------------------------------------------
// Medias.fk_organism_id  --> Organisms.pk_organism_id
// Medias.fk_technique_id --> Techniques.pk_technique_id
// Medias.fk_admin_id     --> Admin.pk_admin_id
//
// Cross-reference tables linking Medias to its attributes:
// MediaAuthors:        fk_media_id --> Medias.pk_media_id,
//                      fk_author_id --> Authors.pk_author_id
// MediaTissues:        fk_media_id --> Medias.pk_media_id,
//                      fk_tissue_id --> Tissues.pk_tissue_id
// MediaCells:          fk_media_id --> Medias.pk_media_id,
//                      fk_cell_id --> Cells.pk_cell_id
// MediaOrganelles:     fk_media_id --> Medias.pk_media_id,
//                      fk_organelle_id --> Organelles.pk_organelle_id
// MediaMacromolecules: fk_media_id --> Medias.pk_media_id,
//                      fk_macromolecule_id --> Macromolecules.pk_macromolecule_id
//
// StudentComments (pk_comment_id): fk_student_id (not unique) --> Students.pk_student_id
// AdminComments (pk_comment_id):   fk_admin_id --> Admin.pk_admin_id
// AdminMessages (pk_message_id)
//--------------------------------------------------------------------

Re: Can MySQL handle 120 million records?

2002-12-18 Thread Csongor Fagyal



What you need to have is a _good_ install, and then MySQL is superb. But 
to have a "good install" is not as easy as it sounds.
   


Can you list the elements of a good install?
 

Well...

One which does not make mysqld hang once in every hour (or minute).

Seriously speaking, this is what I meant. (I have previously posted a 
message on this list about this problem I am still facing.) A good 
install is one which works - if it works for a day, it will work 
basically forever. Usually it is even easy to install MySQL. However, if 
you start to twist it and/or you have some sort of incompatibility (say, 
in your gcc or some library you don't even know about), you are 
screwed. (At least I am.)

Currently I have two MySQL servers running on the same machine, on different 
ports, under different users, etc. I just could not set this up using 
one binary (it looks like some paths get compiled into the 
binaries) - and even when it worked, it crashed after a few minutes. 
Right now I have an RPM and a source install - this way the two MySQL 
instances are not messing up each other (but both of them are randomly 
crashing). I think this has something to do with the fact that I am using RedHat 
8... I will compile 3.23.54a tomorrow (and refresh the RPM... brrr!) to 
see what develops.

- Cs.






Re: Converting many tables into MyISAM

2002-12-18 Thread Paul DuBois
At 18:58 + 12/18/02, John P wrote:

I have about 300 database tables that are mostly ISAM and some MyISAM
format. I would like to move them all into MyISAM - what's the
easiest/quickest way? I am running 3.23.54.

I was thinking of doing mysqldump, then using a search/replace in the file
CREATE TABLE .. TYPE=, then recreating the database files using mysql, but
it seems a bit heavyhanded to me ;)

Thanks,
John



Any single table can be converted using ALTER TABLE:

ALTER TABLE tbl_name TYPE = MyISAM;


For all tables in a given database, you can convert them like this, at
least under Unix:


% mysql -N -e 'SHOW TABLES' db_name \
| sed -e 's/.*/ALTER TABLE & TYPE = MyISAM;/' \
| mysql db_name
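If only the ISAM tables should be touched, the same pipeline can filter on SHOW TABLE STATUS output. A sketch (untested), assuming the 3.23 layout with the table name in column 1 and the table type in column 2:

```shell
# Sketch: convert only tables whose current type is ISAM.  Re-ALTERing
# a table that is already MyISAM is harmless, so the filter is purely
# cosmetic, but it avoids needless table rebuilds.
mysql -N -e 'SHOW TABLE STATUS' db_name \
  | awk '$2 == "ISAM" { print "ALTER TABLE " $1 " TYPE = MyISAM;" }' \
  | mysql db_name
```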





Re: OT: Spam Filter (again)

2002-12-18 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 02:37:14PM -0500, Michael T. Babcock wrote:
>
> Just for anyone who cares (or cares to do the same), when I reply to
> messages at my average 80wpm and hit send, I don't check if
> SQL,QUERY,etc. is in the message 90% of the time.  Now, besides
> thanking the crew for adding a couple more keywords to the filter,
> it still ticks me off to get a bounce to a legitimate response.  If
> I get such bounces from now on, I'm ignoring them (that includes the
> last 3 messages I've sent).  I'm not going to bother to retype /
> reformat / forward (because its ugly) the messages.  I always
> reply-to-all, so the author of the previous message gets their copy.

I implemented a very simple trick.

My mailer is configured to notice when I'm posting to a MySQL list.
When I am, it auto-appends the real-time stats from our master server
below my sig file.  That's enough to make the filter happy.

But, yeah, I agree.  There are better ways to do this on the mail
server side.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 145,631,476 queries (454/sec. avg)





Re: OT: Spam Filter (again)

2002-12-18 Thread B. van Ouwerkerk
Allow postings from members only.
or
I think it's on the PHP lists: if a non-member sends a message, he/she gets 
a message back. After replying to that message, the original one hits the 
list.
mysql, query


B.

At 14:37 18-12-2002 -0500, Michael T. Babcock wrote:
Just for anyone who cares (or cares to do the same), when I reply to 
messages at my average 80wpm and hit send, I don't check if SQL,QUERY,etc. 
is in the message 90% of the time.  Now, besides thanking the crew for 
adding a couple more keywords to the filter, it still ticks me off to get 
a bounce to a legitimate response.  If I get such bounces from now on, I'm 
ignoring them (that includes the last 3 messages I've sent).  I'm not 
going to bother to retype / reformat / forward (because its ugly) the 
messages.  I always reply-to-all, so the author of the previous message 
gets their copy.

PS, it wouldn't be that hard (depending on your mail distribution method) 
to allow subscribers to post without going through the filter and only 
filter unsubscribed people.

PPS, you could always make it database-driven and add people who post 
successfully more than once a week to the no-spam-check list instead.






Re: Converting many tables into MyISAM

2002-12-18 Thread Chris . Kulish

I just did this sort of thing in the InnoDB direction.  What I did was a
mysqldump, a global search-and-replace of MyISAM to InnoDB in the dump DDL
file, dropped and recreated the database, then restored.

-
Chris Kulish
Systems Engineer
ING Advisors Network
Ph. (515) 698-7583
Fx. (515) 698-3583

"... if you must mount the gallows, give a jest to the crowd, a coin to the
hangman, and make the drop with a smile on your lips."
--Birgitte
-


   
  
  "John P" 
  
  <[EMAIL PROTECTED]To:   [EMAIL PROTECTED] 
  
  .co.uk>  cc:   (bcc: Chris 
Kulish/BDN/ING-FSI-NA)  
   Subject:  Converting many tables into 
MyISAM  
  12/18/2002 12:58 
  
  PM   
  
   
  
   
  




I have about 300 database tables that are mostly ISAM and some MyISAM
format. I would like to move them all into MyISAM - what's the
easiest/quickest way? I am running 3.23.54.

I was thinking of doing mysqldump, then using a search/replace in the file
CREATE TABLE .. TYPE=, then recreating the database files using mysql, but
it seems a bit heavyhanded to me ;)

Thanks,
John
















RE: 3.23.54a Instability

2002-12-18 Thread Kees Hoekzema
> -Original Message-
> From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
> Thanks for confirming! I will publish the binary right away. Sorry for the
> trouble.
Just a small thing I noticed:
version 3.23.54-max-log

missing the 'c' in the version string ;)

- kees






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Aaron Clausen
On Wed, 18 Dec 2002, Gerald Jensen wrote:

> Joe is right ... we switched from another SQL server to MySQL in 1999, and
> have never looked back.
>
> MySQL has been rock solid for our applications, the MySQL development team
> is great to work with, and our customers like it.
>

That's been my experience as well.  We have an in-house billing system which
I originally wrote to work with MS-Access.  I converted the whole thing over
to MySQL via ODBC in June 2000, and it has worked flawlessly ever since.  We
run it under Win2k, though I'm seriously thinking of moving the database
server over to Linux in the next six months.  But MySQL has been rock solid.
I have lost no data, save through my own stupidity, at any point.  I would
recommend it without reservations.

-- 
Aaron Clausen






Re: Severe performance problem linking tables with mysql

2002-12-18 Thread Benjamin Pflugmann
Hi.

On Wed 2002-12-18 at 18:40:04 +, [EMAIL PROTECTED] wrote:
> 
> PLATFORM: 3.23.52-max-nt with Windows 2000 professional (default table type)
> 
> I have discovered a performance issue when joining several tables together. 
> The performance is extremely poor when performing select queries using the 
> WHERE clause and joining the tables with pk_media_id = fk_media_id 
> etc... I guess this is what people might think about when considering using 
> mysql. With SQL 2000 Windows Server, and ACCESS, there was no loss in 
> performance. 4 of my tables are cross reference tables as such:

Access? You aren't serious, are you? If you really experience
Microsoft Access being faster than MySQL, you are doing something
*seriously* wrong. Probably keys missing or not used for whatever
reason.

Please post the result of EXPLAIN for the query in question and the
result of SHOW INDEX. Also SHOW CREATE TABLE, if posting that
information is OK with you. And whatever information you consider to
be of interest.

Regards,

Benjamin.


-- 
[EMAIL PROTECTED]





Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Qunfeng Dong
Boy, you guys are die-hard MySQL fans :-) I think your
strong defense has convinced us MySQL can handle 120
million records :-) But I know there are some ordinary users out
there like me who are not experts at tuning MySQL
performance (they did send me private emails saying
they encountered similar slow join problems). So
please help us to keep the faith.

We are trying to develop a simple biology database to
maintain some DNA Sequence information. My problem is
coming from the following two tables:

CREATE TABLE NewSequence
(
Seq_ID      varchar(50) NOT NULL,
GenBank_Acc varchar(10),
Organism    varchar(50) NOT NULL,
Seq_Type    enum("EST","GSS","EST Contig","EST Singlet","GSS Contig","GSS Singlet","GSS Plasmid Contig","Protein") NOT NULL,
Seq_Length  int         NOT NULL,
Seq_Title   text        NOT NULL,
Comment     text,
Entry_Date  date        NOT NULL,
PRIMARY KEY (Seq_ID),
UNIQUE (GenBank_Acc),
INDEX (Seq_Type),
INDEX (Organism)
);

This NewSequence table is used to track some general
info about sequence. Notice I have to use text
datatype to describe "Comment" and "Seq_Title" fields;
therefore I have to use varchar for other string
fields. In addition, Seq_ID is not numerical.
BTW, I found that indexing on Seq_Type and Organism, which are
very repetitive, still helps with access. This table
has 2676711 rows.


CREATE TABLE NewSequence_Homolog
(
Seq_ID          varchar(50) NOT NULL,
Homolog_PID     int         NOT NULL,
Homolog_Desc    varchar(50) NOT NULL,
Homolog_Species varchar(50),
PRIMARY KEY (Seq_ID, Homolog_PID)
);

This NewSequence_Homolog table is to track which
protein sequences (homolog) are similar to the
sequence I store in the NewSequence table. This table
has 997654 rows. 

mysql> select count(*) from NewSequence s left join
NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+--+
| count(*) |
+--+
|  3292029 |
+--+
1 row in set (1 min 30.50 sec)

So a simple left join took about 1 min and half.
First, is this slow or I am too picky?

This is the "Explain".
mysql> explain select count(*) from NewSequence s left
join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;

table | type  | possible_keys | key     | key_len | ref      | rows    | Extra
------|-------|---------------|---------|---------|----------|---------|------------
s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index
h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index


I am running MySQL 3.23.49 on RedHat linux 7.3 on a
dedicated server with 4 GB memory. The only setting I
changed is to copy the my-huge.cnf into /etc/my.cnf.
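One thing worth trying (a suggestion from this edit, not from the thread): the join above compares a varchar(50) key on every row, as the key_len of 50 in the EXPLAIN shows. An integer surrogate key shrinks the join key to 4 bytes. A sketch, with the hypothetical column name seq_no and table names NewSequence2/NewSequence_Homolog2:

```sql
-- Sketch only: join on a 4-byte integer surrogate key instead of the
-- varchar(50) Seq_ID.  seq_no is a hypothetical column name.
CREATE TABLE NewSequence2
(
    seq_no      int         NOT NULL AUTO_INCREMENT,
    Seq_ID      varchar(50) NOT NULL,   -- original id, kept for lookups
    -- ...remaining NewSequence columns unchanged...
    PRIMARY KEY (seq_no),
    UNIQUE (Seq_ID)
);

CREATE TABLE NewSequence_Homolog2
(
    seq_no      int NOT NULL,
    Homolog_PID int NOT NULL,
    PRIMARY KEY (seq_no, Homolog_PID)
);

-- The join key drops from key_len 50 to 4, so far more of the index
-- fits in the key buffer:
select count(*)
from NewSequence2 s
left join NewSequence_Homolog2 h on s.seq_no = h.seq_no;
```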

Qunfeng

--- "Michael T. Babcock" <[EMAIL PROTECTED]>
wrote:
> Qunfeng Dong wrote:
> 
> >not-so-good performance (join on tables much
> smaller
> >than yours takes minutes even using index) and I
> seem
> >to read all the docs I could find on the web about
> how
> >to optimize but they are not working for me (I am
> >
> 
> Have you stored a slow query log to run them through
> 'explain' and see 
> why they're slow?  Do you want to post some of them
> here so we can 
> suggest what might be done to make them faster?
> 
> -- 
> Michael T. Babcock
> C.T.O., FibreSpeed Ltd.
> http://www.fibrespeed.net/~mbabcock
> 
> 


__
Do you Yahoo!?
Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
http://mailplus.yahoo.com





RE: 3.23.54a Instability

2002-12-18 Thread Lenz Grimmer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 18 Dec 2002, Kees Hoekzema wrote:

> Maybe it is the version of glibc i'm using? it is unstable, the load spikes
> to +50 in 2 minutes.
> (it is the glibc package form Slackware 8.1)

I'm not sure. It also happens if I link against the default glibc 2.2.4
that ships with my build host. If I link against (a patched) glibc 2.2.5,
things seem to be fine. We've been able to reproduce this phenomenon, but
have not been able to spot the actual cause for this.

> > Could you please give this one a try? If this one solves the
> > problem, I will publish it on our download pages ASAP:
> >
> > http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i686.tar.gz
>
> Ok, this one works without the spikes,
> normally the load would jump from ~1 to +50, now it is stable at 1 again.
> And if this one works for me, it'll work for almost everyone ;)
> (kinda loaded server, +1000 queries/s spikes and avg of +350 q/s)

Thanks for confirming! I will publish the binary right away. Sorry for the
trouble.

Bye,
LenZ
- -- 
For technical support contracts, visit https://order.mysql.com/?ref=mlgr
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Lenz Grimmer <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer
/_/  /_/\_, /___/\___\_\___/ Hamburg, Germany
   <___/   www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)
Comment: For info see http://quantumlab.net/pine_privacy_guard/

iD8DBQE+AOTjSVDhKrJykfIRAmuYAJ9AGZNh9odg1Zio7nwrEI+AENGNxACfUDNi
UszV6fvk7mmn8CMaVsGNQZw=
=6Dx2
-END PGP SIGNATURE-







RE: 3.23.54a Instability

2002-12-18 Thread Kees Hoekzema
> From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 18, 2002 9:43 PM

> OK, as I already assumed, the Max binary was not linked statically. Doh!
> Interesting, that this also causes the load to spike, even though it's not
> statically linked against an unpatched glibc.
Maybe it is the version of glibc i'm using? it is unstable, the load spikes
to +50 in 2 minutes.
(it is the glibc package form Slackware 8.1)


> Could you please give this one a try? If this one solves the
> problem, I will publish it on our download pages ASAP:
>
> http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i686.tar.gz

Ok, this one works without the spikes,
normally the load would jump from ~1 to +50, now it is stable at 1 again.
And if this one works for me, it'll work for almost everyone ;)
(kinda loaded server, +1000 queries/s spikes and avg of +350 q/s)

-kees






Suitability of mySQL for desktop use?

2002-12-18 Thread Frank Marousek
How suitable is mySQL for use in a desktop application i.e. one that would
be installed completely by the user and run on a standalone PC? My
application is written in Delphi and currently uses dBase files, but as
Delphi support for accessing dBase files is being phased out, I am looking
at possible alternatives. My application already makes heavy use of SQL, and
moving to an SQL database would seem like a good fit otherwise. mySQL is
also appealing as I may at some point port this application to the web, in
which case mySQL seems like it would be a good platform.

Comments?






Re: Re: SELECT and UPDATE at the same time?

2002-12-18 Thread Jeff Snoxell



SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"


No. That's goofy anyway.  Why wouldn't you just use a regular
UPDATE query?

UPDATE mytable SET Status = "OLD" WHERE Age > 50;


Cos I want to do a fairly long-winded process on the records of those who 
are Age>50 and subsequently update all of their records with the Date/Time 
that the change took place.

If I do the long-winded process and _then_ do the UPDATE query as you 
suggest I'm bound to update records which have been added in the meantime 
(by other clients, it's a busy database) which I've not yet processed.

I could make a list of all primary key values I've processed and then 
update them... or something equally ugly. I just figured there'd be a 
cleaner way to do it.

I really want to avoid grabbing all the results as my server is not located 
on my local machine.

Oh, I could update all the records where age>50 with a known date/time then 
do my select query to select only those records with that "time-stamp", and 
then do my long-winded processing.

Can't help thinking there must be a nicer way to do it.

Thanks,

Jeff 
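Jeff's mark-then-process idea from his own message could look roughly like this (the batch_tag column, and the literal timestamp used as its value, are hypothetical additions to his table):

```sql
-- Rough sketch of the mark-then-process approach described above.
-- Rows inserted by other clients mid-run keep batch_tag NULL and so
-- stay out of this batch.
UPDATE my_table
SET batch_tag = '2002-12-18 23:00:00'
WHERE Age > 50 AND batch_tag IS NULL;

-- The long-winded processing then reads exactly the marked rows:
SELECT * FROM my_table WHERE batch_tag = '2002-12-18 23:00:00';

-- Afterwards the same tag identifies which rows were touched.
```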





Re: MySQL 3.23.54-max GLIBC errors

2002-12-18 Thread Lenz Grimmer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 13 Dec 2002, Nick wrote:

> I am currently running MySQL3.23.53-max with no problems, well
> my_print_default complains about the same thing that MySQL-3.23.54
> complains about, but MySQLd runs fine.  I am attempting to upgrade to
> MySQL3.23.54 (mysql-max-3.23.54-pc-linux-i686.tar.gz) and MySQLd does
> not run at all, below are the errors I receive:

3.23.54a-Max for Linux accidentally was linked dynamically instead of
statically. Could you please give this binary below a try? I will publish
it on our download pages as well soon:

http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i686.tar.gz

Sorry for the inconvenience.

Bye,
LenZ
- -- 
For technical support contracts, visit https://order.mysql.com/?ref=mlgr
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Lenz Grimmer <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer
/_/  /_/\_, /___/\___\_\___/ Hamburg, Germany
   <___/   www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)
Comment: For info see http://quantumlab.net/pine_privacy_guard/

iD8DBQE+AN6GSVDhKrJykfIRArA/AJ9ihw5wZYzV9NZh/T0/Z7ptbV1aKACfRIWx
t3of7OZosnw4Q1s8PPgt5Tw=
=CRxn
-END PGP SIGNATURE-






RE: 3.23.54a Instability

2002-12-18 Thread Lenz Grimmer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 18 Dec 2002, Kees Hoekzema wrote:

> Ok, here you go:
> # ldd /usr/local/mysql-max-3.23.54a-pc-linux-i686/bin/mysqld
> libdl.so.2 => /lib/libdl.so.2 (0x4002)
> libpthread.so.0 => /lib/libpthread.so.0 (0x40024000)
> libz.so.1 => /usr/lib/libz.so.1 (0x4003a000)
> libcrypt.so.1 => /lib/libcrypt.so.1 (0x40049000)
> libnsl.so.1 => /lib/libnsl.so.1 (0x40077000)
> libm.so.6 => /lib/libm.so.6 (0x4008c000)
> libc.so.6 => /lib/libc.so.6 (0x400ae000)
> libnss_files.so.2 => /lib/libnss_files.so.2 (0x401c1000)
> libnss_dns.so.2 => /lib/libnss_dns.so.2 (0x401cb000)
> libresolv.so.2 => /lib/libresolv.so.2 (0x401cf000)
> /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)
>
> While:
> # ldd /usr/local/mysql-max-3.23.53a-pc-linux-gnu-i686/bin/mysqld
> not a dynamic executable
> (doh, that one worked ;))

OK, as I already assumed, the Max binary was not linked statically. Doh!
Interesting, that this also causes the load to spike, even though it's not
statically linked against an unpatched glibc.

> > What version of Linux are you using?
> Running Slackware 8.1 GNU/Linux, kernel 2.4.17, GNU libc 2.2.3
>
> Have fun recompiling ;)

Thanks. Could you please give this one a try? If this one solves the
problem, I will publish it on our download pages ASAP:

http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i686.tar.gz

Sorry about that - it seems like getting out that security update as fast
as possible caused too many glitches here...

Bye,
LenZ
- -- 
For technical support contracts, visit https://order.mysql.com/?ref=mlgr
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Lenz Grimmer <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer
/_/  /_/\_, /___/\___\_\___/ Hamburg, Germany
   <___/   www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)
Comment: For info see http://quantumlab.net/pine_privacy_guard/

iD8DBQE+AN3nSVDhKrJykfIRAooHAJ44IRyZ9/NqDsyRF0HGJb5+rhU16QCfSV+G
Nfxo7DLBr+NygDwZDT7b/3M=
=vVjF
-END PGP SIGNATURE-






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Michael She wrote:


I guess you can say I'm a follower.  Other DB systems have been in use 
for years, so their reliability has been generally proven through 
use.  It's good to know that a lot of people have had success with 
MySQL, but considering MySQL is the newcomer, I'm still a little tepid!


MySQL may be new compared to Oracle, for example, but many other in-use 
DBs are in fact fairly new designs.  They just happen to be written by* 
large companies you recognize every day.

*written by  = written by programmers that were hired by and/or written 
by a company that was purchased by ...

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael She
I guess you could say I'm a follower.  Other DB systems have been in use for 
years, so their reliability has been generally proven through use.  It's 
good to know that a lot of people have had success with MySQL, but 
considering MySQL is the newcomer, I'm still a little hesitant!


At 01:22 PM 12/18/2002 -0500, Michael T. Babcock wrote:
Michael She wrote:


2. Some of the comments in the MySQL manual... people losing data doing 
routine stuff like table optimizations, adding keys, etc.  If a database 
is reliable, things like that shouldn't happen.  Comments like those in 
the MySQL manual scared me.


1) Do you believe this never happens with other 'enterprise' level 
DB systems?
2) What do you think Microsoft's pages would look like if they allowed 
arbitrary user comments? (Read their newsgroups...)
3) Those reports should be filed as bugs to be fixed, not added as 
comments in the manual.

--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/






RE: 3.23.54a Instability

2002-12-18 Thread Kees Hoekzema
Hey Lenz,

> Can you please try "ldd /path/to/mysqld" and check whether it
> requires shared libraries or is a static binary?

Ok, here you go:
# ldd /usr/local/mysql-max-3.23.54a-pc-linux-i686/bin/mysqld
libdl.so.2 => /lib/libdl.so.2 (0x4002)
libpthread.so.0 => /lib/libpthread.so.0 (0x40024000)
libz.so.1 => /usr/lib/libz.so.1 (0x4003a000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x40049000)
libnsl.so.1 => /lib/libnsl.so.1 (0x40077000)
libm.so.6 => /lib/libm.so.6 (0x4008c000)
libc.so.6 => /lib/libc.so.6 (0x400ae000)
libnss_files.so.2 => /lib/libnss_files.so.2 (0x401c1000)
libnss_dns.so.2 => /lib/libnss_dns.so.2 (0x401cb000)
libresolv.so.2 => /lib/libresolv.so.2 (0x401cf000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)

While:
# ldd /usr/local/mysql-max-3.23.53a-pc-linux-gnu-i686/bin/mysqld
not a dynamic executable
(doh, that one worked ;))

> 
> What version of Linux are you using?
Running Slackware 8.1 GNU/Linux, kernel 2.4.17, GNU libc 2.2.3

Have fun recompiling ;)
 -kees





RE: Can MySQL handle 120 million records?

2002-12-18 Thread Adam Nelson
The only thing wrong with MySQL is what it doesn't do.
Everything it does do, it does fantastically.

> -Original Message-
> From: Michael T. Babcock [mailto:[EMAIL PROTECTED]] 
> Sent: Wednesday, December 18, 2002 10:12 AM
> To: Michael She
> Cc: Qunfeng Dong; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Can MySQL handle 120 million records?
> 
> 
> Michael She wrote:
> 
> > I agree.  MySQL is a great database, but I wouldn't call it 
> enterprise 
> > grade.  Considering that the database is used to store billing 
> > information... one has to be wary about losing all the 
> records due to 
> > a bug or deficiency in
> 
> 
> Besides actual additional features (management software, SNMP 
> support, 
> stored-procedures and the like), what would lead you to decide that 
> MySQL isn't ready "for the big time", assuming a site didn't need the 
> above?
> 
> -- 
> Michael T. Babcock
> C.T.O., FibreSpeed Ltd.
> http://www.fibrespeed.net/~mbabcock
> 
> 
> 






Re: Severe performance problem linking tables with mysql

2002-12-18 Thread Michael T. Babcock

Joseph Dietz wrote:


Medias         MediaAuthors                   Authors

pk_media_id    fk_media_id, fk_author_id      pk_author_id



In your table definition, is there an index on each of the keys shown 
above?  What kind of performance 'degradation'?  How many values are you 
checking?  What WHERE or ORDER BY clauses are you using (try with none 
to test)?  If possible, pass along the original query that doesn't run 
as quickly as expected.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






RE: Severe performance problem linking tables with mysql

2002-12-18 Thread Jennifer Goodie
Have you tried optimizing your query?  It is more likely that the problem is
a poorly optimized query and/or a poorly tuned server than that it is MySQL.
Joins will be slow if you do not take the time to figure out the best way to
do what you are trying to accomplish.  Forcing the table order can really
speed up joins, as can making sure the tables are properly indexed and that
the indices are actually used by the queries.  You also might want to think
about tuning your server variables to achieve optimum performance.

The easiest place to start is running an explain on your query to see what
you can change to make it quicker.
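For example, assuming the Medias/MediaAuthors/Authors schema from the original post (the column names are the poster's; the literal author id is made up), EXPLAIN shows how each table in the join will be accessed:

```sql
-- Ask the optimizer how it plans to execute the join; check the
-- "type", "possible_keys", "key" and "rows" columns of the output.
EXPLAIN
SELECT Medias.*
FROM Medias, MediaAuthors, Authors
WHERE MediaAuthors.fk_media_id  = Medias.pk_media_id
  AND MediaAuthors.fk_author_id = Authors.pk_author_id
  AND Authors.pk_author_id      = 42;
```

A "type" of ALL on a large table means a full table scan; that is usually the row to fix with an index.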

-Original Message-
From: Joseph Dietz [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 10:40 AM
To: [EMAIL PROTECTED]
Subject: Severe performance problem linking tables with mysql

PLATFORM: 3.23.52-max-nt with Windows 2000 Professional (default table type)

I have discovered a performance issue when joining several tables together.
Performance is extremely poor when performing SELECT queries using a WHERE
clause and joining the tables on pk_media_id = fk_media_id, etc. I guess
this is what people might think about when considering using MySQL. With
SQL Server 2000 and Access, there was no loss in performance. 4 of my
tables are cross-reference tables, as such:

pk=primary key, fk= foreign key

Medias         MediaAuthors                   Authors

pk_media_id    fk_media_id, fk_author_id      pk_author_id

(Many authors for each media)






RE: MySQL security flaws uncovered

2002-12-18 Thread Adam Nelson
The real problem is the lack of a central knowledge base.  Is there one
that I'm not aware of?  Even if there is, it should be very obvious from
the front page of the website.

> -Original Message-
> From: Csongor Fagyal [mailto:[EMAIL PROTECTED]] 
> Sent: Wednesday, December 18, 2002 5:34 AM
> To: [EMAIL PROTECTED]
> Subject: Re: MySQL security flaws uncovered
> 
> 
> Michael She wrote:
> 
> > It's bad for business : )
> >
> > Maybe they're taking the MS route.
> 
> I second this. These vulnerabilities are serious, they must be given 
> more attention. Apache, PHP, RedHat and so on and so on are 
> very careful 
> with issues like this, all vulnerabilities/exploits are immediately 
> published through all possible channels. Yes, it is always a pain to 
> find out something like this, obviously the MySQL team just 
> would like 
> to forget this once and for all, but doing troublesome 
> reinstalls/upgrades and so on is still better than getting our system 
> hacked.
> 
> - Cs.
> 
> >
> >
> >
> > At 12:19 AM 12/18/2002 -0500, Michael Bacarella wrote:
> >
> >> A good question posted to another list..
> >>
> >>  forwarded message follows 
> >>
> >> > Several vulnerabilities have been found in the MySQL 
> database system, a
> >> > light database package commonly used in Linux environments but 
> >> which runs
> >> > also on Microsoft platforms, HP-Unix, Mac OS and more.
> >> > http://zdnet.com.com/2100-1104-977958.html
> >>
> >> So why no mention on the MySQL.COM site?  That rather bugs me.  In 
> >> contrast,
> >> sites for products like Apache or Bind are very clear 
> about current/past
> >> security issues.
> >>
> >> Is MySQL.COM the wrong place?
> >
> 
> 
> 
> 






Re: 3.23.54a Instability

2002-12-18 Thread Lenz Grimmer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 18 Dec 2002, Kees Hoekzema wrote:

> Recently I tried to upgrade MySQL to a newer version. I have
> 3.23.53a-max running, but that is vulnerable to a bug. So I went to
> mysql.com and grabbed the .54a-max binary.
>
> First thing I noticed was the small size, only 6M while .53a was 10M.
> Next thing, I started .54a... it took the load on the server sky-high...
> hey, does that sound familiar? Yes, it was the same problem I had with
> 3.23.50-3.23.52 and which was fixed in 3.23.53.
>
> I would very much like to use the .54 version on my (highly loaded)
> servers, but for now it isn't possible. Will there be another stable
> version in the near future? Or do I have to stick with the (exploitable)
> old MySQL?

I need a vacation. Looks like the 3.23.54a-max Linux binary was not linked
statically. Can you please try "ldd /path/to/mysqld" and check whether it
requires shared libraries or is a static binary?

What version of Linux are you using?

Bye,
LenZ
- -- 
For technical support contracts, visit https://order.mysql.com/?ref=mlgr
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Lenz Grimmer <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer
/_/  /_/\_, /___/\___\_\___/ Hamburg, Germany
   <___/   www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)
Comment: For info see http://quantumlab.net/pine_privacy_guard/

iD8DBQE+ANenSVDhKrJykfIRAsnaAJ99Qv6s3UesdjRZ6O6XYL57mNoZXQCdHiL7
dqOrZaAkEOs4fTz95PJDMLE=
=2KT0
-END PGP SIGNATURE-






RE: Can MySQL handle 120 million records?

2002-12-18 Thread Wayne Lewis
Use "iostat -x" while the query is running. You are likely I/O bound doing a
table scan on the protected (BIG) table.

There has been a lot of discussion about RAM and CPU on this thread regarding
performance, but nothing regarding disk I/O. If you're going to put tens of
millions of records in a database and expect fast random access to them
(i.e. not just reading the most recently inserted records), then you need
lots of disks and a good volume manager. Don't try to compensate for lack of
disk with lots of RAM. Eventually the DB will be too big to budget for that
much RAM.

Plan the disk requirements before anything else. A good rule of thumb is
that a 10K SCSI disk can do about 200 random I/O per second. You can verify
your disks using bonnie (http://www.textuality.com/bonnie/) or the like
(make sure you are looking at RANDOM I/O not sequential I/O).

Next, compute the rate of I/O needed. If you are using InnoDB (which you
should; otherwise MyISAM's coarse-grained locking will introduce
non-linearities into the performance that are too hard to model), then you
know I/O is done in 16K pages. How many such pages need to be read per
second? Look at all your queries and determine the access plans. Focus on
the ones that will dominate I/O requirements. Understanding how InnoDB lays
out data is critical for modeling. Each secondary index is in its own B-tree
with leaves containing the value of the primary key. The data itself is in
another B-tree keyed off the primary key. (Very similar to Oracle with
index-organized tables.)

Unless you can benefit from clustering around the primary key, it is safest
to assume one I/O per leaf item (secondary or primary) as the tables and
indexes get large. For back-of-the-envelope calculations it's okay to ignore
the non-leaf pages and assume they are cached.

Hence for a table with primary and secondary keys named PK and SK
respectively:
"SELECT * FROM FOO WHERE PK = ?" = 1 I/O
"SELECT * FROM FOO WHERE SK = ?" = 2 I/O
"SELECT FOO.* FROM FOO, BAR WHERE FOO.SK = BAR.PK AND BAR.SK = ?" = 3 I/O

Now take all the queries and determine the number of their occurrences for
some unit of work. That unit of work should correspond to some externally
observable event, like "user logins". Say you have 3 queries in the system
like this:

Query    # I/O    Occurrences per unit of work
  Q1       3        3
  Q2       4        2
  Q3       1        3

Now you can say that on average it takes 20 I/Os per unit of work. With one
disk you can do 200/20 = 10 units of work / second / disk.
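The arithmetic above is easy to script. A minimal sketch, using the illustrative per-query I/O counts and occurrence rates from the table (not measured values):

```python
# Back-of-the-envelope throughput estimate, following the method above.
DISK_RANDOM_IOPS = 200  # rule of thumb for one 10K RPM SCSI disk

# (I/Os per execution, executions per unit of work) for Q1..Q3
queries = [(3, 3), (4, 2), (1, 3)]

io_per_unit = sum(io * n for io, n in queries)           # 3*3 + 4*2 + 1*3 = 20
units_per_sec_per_disk = DISK_RANDOM_IOPS / io_per_unit  # 200 / 20 = 10.0

print(io_per_unit, units_per_sec_per_disk)  # 20 10.0
```

Scaling is then a matter of striping so that N disks deliver roughly N times the random IOPS.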

To ensure that performance scales with disks, stripe across the disks. This
is where volume management becomes key: MySQL/InnoDB fills each datafile
sequentially, so don't just put each datafile on its own disk; that will just
create hotspots and you will be bound by the performance of a single disk.

This is the general idea of planning with any database. The preceding only
covers equality matches via an index, but you can figure out the difference
for table scans (consider how many rows fit into a page) and range scans
(consider the effect of the primary key as a clustered index).

-Wayne


-Original Message-


I am not sure. Does anyone know any real examples of
MySQL handling a huge database and still performing
well? I am having problems with the performance of the
MySQL LEFT JOIN recently. A big table (about 2.5 million
records) left-joined to a small table (about 350K
records) generally takes 2 minutes to finish. I checked
the EXPLAIN output, and the primary key index on the
small table was indeed used for the join. My system is
Red Hat Linux 7.3 with 4 GB memory. I also tried
replacing the default my.cnf with my-huge.cnf. It didn't
help at all.

Another thing: on some Linux systems there is a size
limit for files. MySQL seems to store each of its tables
as a single file. You need to choose a file system
without that limit.

Qunfeng Dong
--- "B.G. Mahesh" <[EMAIL PROTECTED]>
wrote:
>
> hi
>
> We are evaluating few databases for developing an
> application with
> following specs,
>
> 1.OS not very important. Leaning towards Linux
>
> 2.Currently the database has about 5 million
> records but it will grow
> to 120 million records.
>
> 3.The tables will have billing information for a
> telecom company.
> Nothing complex.
>
> 4.Back office staff will use the data in the
> database to create
> invoices to be sent to customers. This data is not
> connected to the
> live telecom system [e.g. switches etc]. We get the
> data every day
> from the telecom company.
>
> 5.Staff may perform queries on the database to get
> reports like
> "busiest hour of the day" etc etc. I don't see too
> many concurrent
> users using the system, however the system needs to
> be stable.
>
> 6.Need to create excel, pdf files from the data in
> the database. This
> I think has nothing to do with the database, however
> this is a requirement.
>
> 7.Needless to say, good security is a must which
> will also be built
> into the front end application.
>
>

SELECT and UPDATE at the same time?

2002-12-18 Thread Jeff Snoxell
Hello again,

I'm selecting a group of records from my database. I then loop through the 
selected records and do some work based on what I find. But what I also 
want to do as I interrogate each record is update some of its fields with 
new values... won't that screw up the outer loop? I mean, if I try to 
execute a query while looping through the result set of a former query, 
won't I screw up the result set I'm looping through?

Also, is it possible to update specific fields of certain records within a 
SELECT query? I.e., can I do something like this:

SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"

Ta,


Jeff





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Joe Stump wrote:


As previous posters have pointed out, if users were given the same freedom
within Oracle's online documentation, you'd have to believe there would be
horror stories outlining loss of data.


The most significant factor I've ever seen in people liking Oracle for 
their sites is the speed with which an Oracle DBA will show up and fix 
everything for them.  As I understand it, the MySQL team gives quite 
good response times as well, and you get to deal with the actual 
programmers if you pay for the privilege (and even on this list).

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Recover deleted records

2002-12-18 Thread Benjamin Pflugmann
Hello.

On Wed 2002-12-18 at 14:49:12 +, [EMAIL PROTECTED] wrote:
> Hi,
> 
> I posted this a few days ago, but haven't seen a reply as yet, and I'm 
> getting quite desperate now!!

You probably got no reply because there is no real solution to your
problem.

> I've managed to delete all the records in a table (well, haven't we all done 
> it sometime!), and was wondering whether it's possible to recover
> them. I notice that the .MYD and .MYI files still appear to be showing the 
> original data.

You are right, the data is still there (some data is lost if dynamic
row storage is used), but there is no tool ready to recover the data.
You are expected to have backups. Recovering the data "by hand" will
be a *lot* of work.

Before doing anything else, make sure to create a backup of the files,
in case anything else goes wrong.

If you need the data desperately enough to invest in the recovering,
you may want to consider contacting the MySQL people directly and ask
if they are able to help.

If you have update-logs enabled, you could re-run the queries to
recreate the tables.

Regards,

Benjamin.

-- 
[EMAIL PROTECTED]





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Michael Bacarella wrote:


We've never lost data. Our database server has crashed hard from OS failures
and suffered plenty of unclean shutdowns. MySQL/InnoDB always recovers
perfectly.
 


Running a slave off-site rounds out crash recovery almost 100%.  We run a 
backup of our clients' data to multiple (unfortunately) MySQL instances 
on our server over SSH links to their sites.  If their sites were dead 
for some reason or actually lost data, we have it all safe and live.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






MySQL acces from WinXP

2002-12-18 Thread Gyetvan Andras
Hi!

I've just started to use MySQL on a Linux box. I didn't do anything special, 
just set it up via Webmin (create DB, add user). It is a Red Hat 8.0 box and 
I use the MySQL included in the original distribution. From the Linux box 
everything seems to work, but when I try to connect to the MySQL server 
from a Windows XP machine using the latest MyODBC, it complains 
about "Lost connection while connecting to the server". I got the same 
error when I tried to use MySQLGUI (on XP).

There is no firewall enabled on the Linux box.

So what did I do wrong, and what should I check?

Thanks,
Andras






Re: Can MySQL handle 120 million records?

2002-12-18 Thread W. D.
At 13:05 12/18/2002, Csongor Fagyal, wrote:
>What you need to have is a _good_ install, and then MySQL is superb. But 
>to have a "good install" is not as easy as it sounds.

Can you list the elements of a good install?



Start Here to Find It Fast!© -> http://www.US-Webmasters.com/best-start-page/






OT: Spam Filter (again)

2002-12-18 Thread Michael T. Babcock

Just for anyone who cares (or cares to do the same): when I reply to 
messages at my average 80 wpm and hit send, I don't check whether 
SQL, QUERY, etc. is in the message 90% of the time.  Now, besides thanking 
the crew for adding a couple more keywords to the filter, it still ticks 
me off to get a bounce to a legitimate response.  If I get such bounces 
from now on, I'm ignoring them (that includes the last 3 messages I've 
sent).  I'm not going to bother to retype / reformat / forward (because 
it's ugly) the messages.  I always reply-to-all, so the author of the 
previous message gets their copy.

PS, it wouldn't be that hard (depending on your mail distribution 
method) to allow subscribers to post without going through the filter 
and only filter unsubscribed people.

PPS, you could always make it database-driven and add people who post 
successfully more than once a week to the no-spam-check list instead.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Paul DuBois
At 14:28 -0500 12/18/02, Michael T. Babcock wrote:

Paul DuBois wrote:


And take a look at the last few items in the list, pertaining to page
size, max number of pages, etc.



4 billion * 16kB = max table size = 64TB

Correct?  Sounds pretty serious ;-)


That's what it looks like to me!




--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock


sql, query





Re: Can MySQL handle 120 million records?

2002-12-18 Thread W. D.


At 10:40 12/18/2002, Jocelyn Fournier wrote:
>Hi,
>
>I'm using MySQL on a database with 134 million rows (10.9 GB) (some
>tables contain more than 40 million rows) under quite high stress
>(about 500 queries/sec avg), using HEAP, MyISAM and InnoDB tables.
>I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
>using 4.1 on this production server ;)).
>So for me MySQL is ready for a mission critical environment :)

Wow!  What kind of hardware?  What OS?

Start Here to Find It Fast!© -> http://www.US-Webmasters.com/best-start-page/






Re: SELECT and UPDATE at the same time?

2002-12-18 Thread Paul DuBois
At 17:56 + 12/18/02, Jeff Snoxell wrote:

Hello again,

I'm selecting a group of records from my database. I then loop 
through the selected records and do some work based on what I find. 
But what I also want to do as I interrogate each record is update 
some of its fields with new values... but won't that screw up the 
outer loop? I mean if I try to execute a query whilst looping around 
the result set of a former query will I not screw up my result set 
that I'm looping through?

No, for two reasons:

If your retrieval is based on the mysql_store_result() model, all the rows
in the result set will have been pulled to the client side by the time you
see any of them.  So when you perform your updates, you're not going to be
affecting the actual rows that you're looking at.

If your retrieval is based on the mysql_use_result() model, then you pull
each row from the server one at a time, and you *do* see it as you retrieve
it.  But this also leaves the table read-locked for the duration of the
retrieval, and so you can't update it anyway.  (And besides, the nature
of the client/server protocol in this case prohibits you from issuing any
queries until you terminate your retrieval operation.)

mysql_store_result() and mysql_use_result() are C API functions that you
can read about in the MySQL manual.  Most other APIs are built on the C API
and some of them allow you to retrieve in one mode or the other.
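The store-result pattern looks like this in a scripting language. This sketch uses Python's sqlite3 module purely as a stand-in for a MySQL connection; the table and values are invented for illustration:

```python
# Sketch of the "store result" model: materialize the whole result set
# client-side first, then issue UPDATEs safely while iterating over it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, Age INT, Status TEXT)")
conn.executemany("INSERT INTO my_table (Age, Status) VALUES (?, ?)",
                 [(30, "YOUNG"), (55, "YOUNG"), (70, "YOUNG")])

# fetchall() pulls every row to the client (the mysql_store_result()
# model), so updating the table does not disturb the loop.
rows = conn.execute("SELECT id, Age FROM my_table").fetchall()
for row_id, age in rows:
    if age > 50:
        conn.execute("UPDATE my_table SET Status = 'OLD' WHERE id = ?", (row_id,))

print(sorted(r[0] for r in conn.execute("SELECT Status FROM my_table")))
# ['OLD', 'OLD', 'YOUNG']
```

With a use-result-style cursor you would instead stream rows one at a time, and issuing the UPDATE mid-stream would not be allowed.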



Also, is it possible to update specific fields of certain records 
within a SELECT query? ie can I do something like this:

SELECT * FROM my_table WHERE Age > 50 AND UPDATE Status = "OLD"

No. That's goofy anyway.  Why wouldn't you just use a regular
UPDATE query?

UPDATE my_table SET Status = "OLD" WHERE Age > 50;



Ta,

Jeff







Re: Can MySQL handle 120 million records? - Impressive! How do youguys do that?

2002-12-18 Thread Michael T. Babcock

Qunfeng Dong wrote:


not-so-good performance (join on tables much smaller
than yours takes minutes even using index) and I seem
to read all the docs I could find on the web about how
to optimize but they are not working for me (I am



Have you captured a slow query log so you can run the queries through 
EXPLAIN and see why they're slow?  Do you want to post some of them here 
so we can suggest what might be done to make them faster?

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records? - Impressive! How doyou guys do that?

2002-12-18 Thread R. Hannes Niedner
On 12/18/02 9:48 AM, "Qunfeng Dong" <[EMAIL PROTECTED]> wrote:

> But I am indeed seeing
> not-so-good performance (join on tables much smaller
> than yours takes minutes even using index) and I seem
> to read all the docs I could find on the web about how
> to optimize but they are not working for me

Why don't you just post the table structures and the join query that you
have "trouble" with? There are enough experts here on this list who are
happy to help you further optimize your database if possible.

/h








Re: Parser seems to have problems with '-' (dash).

2002-12-18 Thread Paul DuBois
At 16:51 +0100 12/18/02, [EMAIL PROTECTED] wrote:

Hello

I have had several problems with the character '-' (dash, minus, or
whatever you want to call it); regardless of quoting, the parser seems to
treat it as a subtraction operator.


Give us an example of where you use it within a quoted name and get a
failure.  Dashes are legal, at least as long as you quote with backticks:

mysql> create table `a-b` (`c-d` int);
Query OK, 0 rows affected (0.00 sec)

mysql> show create table `a-b`;
+-------+-------------------------------------------------------------+
| Table | Create Table                                                |
+-------+-------------------------------------------------------------+
| a-b   | CREATE TABLE `a-b` (
  `c-d` int(11) default NULL
) TYPE=MyISAM |
+-------+-------------------------------------------------------------+
1 row in set (0.00 sec)


 Does anybody recognize this?  Have I just
missed the part where the characters allowed in names and passwords are
specified?

antispam device: sql, query, queries, smallint

Merry Christmas/Happy Hannuka/Happy holidays & A Happy New Year

--
 &ers


-
Anders Nygård







Re: Can MySQL handle 120 million records?

2002-12-18 Thread Paul DuBois
At 10:10 -0500 12/18/02, Michael T. Babcock wrote:

Qunfeng Dong wrote:


Another thing: on some Linux systems there is a size
limit for files. MySQL seems to store each of its tables
as a single file. You need to choose a file system
without that limit.


Just use InnoDB tables for these files and you won't have a problem 
AFAIK; you can have multiple 2G files that are used to create one 
big table if you like (any InnoDB people want to comment on actual 
limits?)

Check out:

http://www.innodb.com/ibman.html#InnoDB_restrictions

And take a look at the last few items in the list, pertaining to page
size, max number of pages, etc.



--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock


sql, query





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Csongor Fagyal
W. D. wrote:


At 10:40 12/18/2002, Jocelyn Fournier wrote:
 

Hi,

I'm using MySQL on a database with 134 Millions of rows (10.9 GB) (some
tables contains more than 40 millions of rows) under quite high stress
(about 500 queries/sec avg). (using HEAP, MyISAM and InnoDB tables)
I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
using 4.1 on this production server ;)).
So for me MySQL is ready for a mission critical environment :)
   


Wow!  What kind of hardware?  What OS?


We have a DB an order of magnitude smaller, with around 5M rows (0.5GB data).

STATUS:
Threads: 15  Questions: 171147358  Slow queries: 2388  Opens: 22690  
Flush tables: 1  Open tables: 119 Queries per second avg: 68.198

The thing is, it runs on a commercial P3 server (733MHz) with 512M RAM.
It is 3.23.53a on RedHat 7.3 (or 7.1, I don't remember :-)),
with one 60GB 7200rpm IDE HDD. We peak at around 300-500 queries/sec
(30% of those are inserts or updates). This machine also runs a very
busy Apache server, yet the load is under 1.0 most of the time (it goes up
to 3-4 when we do the batch processing). So MySQL can be very efficient
- we have not encountered any data loss whatsoever in the last few
months. (But that is MyISAM - we had performance problems with InnoDB.)

What you need to have is a _good_ install, and then MySQL is superb. But 
to have a "good install" is not as easy as it sounds.

- Csongor





Can I build a web page to extract data?

2002-12-18 Thread Gary Hostetler
I would like to create a web page that will extract the data from mysql so
it looks exactly like the form that the end user sees on their end.

Also, a little bit off topic, but I have a situation where it would be great
to have the web page form and its data sent to an email address. I already
have a program that will send the data but not the form. Anyone know of any
scripts that will do that?
Thanks
Gary






RE: Parser seems to have problems with '-' (dash).

2002-12-18 Thread Mike Grabski
I believe I ran into this problem before, and I'm pretty sure that a back
tick will work.

NOTE: a backtick is not the same thing as a single quote

Using the backtick will also help you handle (and hopefully rename) tables
and so forth that accidentally or unknowingly use a reserved keyword.

Using mysql-front, it had a bad tendency to enclose table names in
backticks, so table creation would succeed, but when I tried to access the
table from the command line I got all sorts of errors, and using the
backticks again worked.

So try the back tick (`table-with-dash`) and see if that works.

Mike

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 10:52 AM
To: [EMAIL PROTECTED]
Subject: Parser seems to have problems with '-' (dash).


Hello

I have had several problems with the character '-' (dash, minus, or
whatever you want to call it): regardless of quoting, the parser seems to
treat it as a subtraction operator. Does anybody recognize this, or have I
just missed the part where the characters allowed in names and passwords
are specified?


Merry Christmas/Happy Hannuka/Happy holidays & A Happy New Year

--
 &ers


-
Anders Nygård







RE: Can MySQL handle 120 million records?

2002-12-18 Thread Greg_Cope
> From: Michael She [mailto:[EMAIL PROTECTED]]
> > > I have no problems using MySQL as a lightweight database for simple
> > > chores, but I'm a bit wary about putting it into a mission critical
> > > environment.
> >
> >Why, exactly?
> 
> 
> Mainly for 2 reasons:
> 
> 1. MySQL hasn't been "proven" yet in the corporate environment

Works for me and my past and present clients (I am a contractor), and it
would appear others.

> 2. Some of the comments in the MySQL manual... people losing data doing
> routine stuff like table optimizations, adding keys, etc.  If a database is
> reliable, things like that shouldn't happen.  Comments like those in the
> MySQL manual scared me.

Cannot really comment about this, but like others I have never lost any data
that I could blame MySQL for.

I suggest you try it ...

Greg Cope
JITC


> 



This message and any attachment has been virus checked by
Pfizer Corporate Information Technology, Sandwich.







Converting many tables into MyISAM

2002-12-18 Thread John P
I have about 300 database tables that are mostly ISAM and some MyISAM
format. I would like to move them all into MyISAM - what's the
easiest/quickest way? I am running 3.23.54.

I was thinking of doing a mysqldump, then using search/replace on the
CREATE TABLE .. TYPE= lines in the file, then recreating the database using
mysql, but it seems a bit heavy-handed to me ;)

Thanks,
John
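For what it's worth, a lighter-weight route than dump/edit/reload is to issue one ALTER TABLE per table. The sketch below only generates the statements (the `mydb` database name and the final mysql pipeline are hypothetical), so the generator itself can be tried without a server:

```shell
# Emit one ALTER TABLE statement per table name read from stdin.
# With a live server you would feed it the real table list, e.g.:
#   mysql -N -e 'SHOW TABLES' mydb | gen_alters | mysql mydb
gen_alters() {
  while read -r tbl; do
    # Backtick-quote the name in case it contains dashes or keywords.
    printf 'ALTER TABLE `%s` TYPE=MyISAM;\n' "$tbl"
  done
}

printf 'users\norders\n' | gen_alters
```

ALTER TABLE rewrites each table in its new format, so no intermediate dump file is needed; it does lock each table while it runs.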







Re: Mysql - Binary vs. Source

2002-12-18 Thread Michael T. Babcock

Lenz Grimmer wrote:


Sorry, but that would not be true. We actually *do* distribute most
binaries in gzipped tar archives. tar.gz is not limited to contain source
files only.
 


I posted a self-correction.  I still think the download page is
unnecessarily misleading in some ways.  Perhaps fixing the first link is
more important.  The page currently shows:

   * MySQL database server & standard clients:
     o MySQL 3.23 -- Stable release (recommended); latest version is
       3.23.54a
       + 3.23 -- Default package (use this if you are unsure)
       + 3.23 Max -- Package with maximum features
     o MySQL 4.0 -- Beta release (use this for new development); latest
       version is 4.0.5
       + 4.0 Standard -- Default package with all the features necessary
         for typical usage, including the InnoDB transactional table
         handler (use this if you are unsure)
       + 4.0 Max -- Package with maximum features (SSL, RAID, BDB)
       + 4.0 Debug -- Packages compiled with debugging enabled (for bug
         tracing only)
     o Older releases -- older releases (only recommended for special needs)

The source download should be linked for those versions under the binary 
links.

3.23 ...
3.23 max ...
3.23 source ...

Just IMHO of course :)

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Gerald Jensen
Joe is right ... we switched from another SQL server to MySQL in 1999, and
have never looked back.

MySQL has been rock solid for our applications, the MySQL development team
is great to work with, and our customers like it.

Gerald Jensen

- Original Message -
From: "Joe Stump" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "Michael She" <[EMAIL PROTECTED]>
Cc: "Qunfeng Dong" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 1:55 PM
Subject: RE: Can MySQL handle 120 million records?


> Without trying to sound like a troll or a rant I'd like to chime in on the
> side of Jeremy.
>
> I've worked with MySQL on sites that serve up over a million hits a day.
> We hit the 2gb file limit in Linux (NOT a MySQL problem) and moved to
> Solaris without incident.
>
> A friend of mine had over a billion rows in a few of his tables
> (statistical data mostly).
>
> As Jeremy points out, all DBs have their problems, shortcomings, etc. If
> you have specific complaints fill out a feature request; if you've got
> problems fill out a bug report, but don't knock MySQL as
> not-worthy-of-enterprise-status because it doesn't *work* like Oracle, etc.
>
> Overall, in my many experiences, it is more than sufficient for web apps.
>
> --Joe
>
>
> --
> Joe Stump <[EMAIL PROTECTED]>
> http://www.joestump.net
>
>
> -Original Message-
> From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 18, 2002 8:09 AM
> To: Michael She
> Cc: Qunfeng Dong; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Can MySQL handle 120 million records?
>
>
> On Wed, Dec 18, 2002 at 02:37:07AM -0500, Michael She wrote:
> >
> > I agree.  MySQL is a great database, but I wouldn't call it
> > enterprise grade.
>
> Since you haven't told us what "enterprise grade" means to you, that
> doesn't tell us much.  What is it lacking to become "enterprise grade"
> in your mind?
>
> > Considering that the database is used to store billing
> > information... one has to be wary about losing all the records due
> > to a bug or deficiency in MySQL.
>
> That's true of any database server, right?  It's not really a
> MySQL-specific complaint.
>
> > I was searching through some of the MySQL help documentation, and
> > there have been more than a few examples in the comments where
> > people have lost data due to wacky functions on databases greater
> > than a couple of GBs...
>
> Imagine what you'd read if Oracle were open enough to allow comments in
> their on-line docs.  Seriously.  I've heard pretty nasty stories about
> Oracle, Microsoft SQL Server, and so on.
>
> Jeremy
> --
> Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
> <[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/
>
> MySQL 3.23.51: up 3 days, processed 136,618,914 queries (457/sec. avg)
>






Re: Mysql - Binary vs. Source

2002-12-18 Thread Lenz Grimmer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 18 Dec 2002, Michael T. Babcock wrote:

> This binary/source confusion comes from the download page; it says
> "Binary packages (tar.gz)" for the source download and "Linux RPM
> packages (rpm)" for the binary package.  Could someone please change
> these to say "Source distribution (tar.gz)" and "Binary format (Linux
> RPM)"?

Sorry, but that would not be true. We actually *do* distribute most
binaries in gzipped tar archives. tar.gz is not limited to contain source
files only.

Bye,
LenZ
- -- 
For technical support contracts, visit https://order.mysql.com/?ref=mlgr
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Lenz Grimmer <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer
/_/  /_/\_, /___/\___\_\___/ Hamburg, Germany
   <___/   www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.0 (GNU/Linux)
Comment: For info see http://quantumlab.net/pine_privacy_guard/

iD8DBQE+AMErSVDhKrJykfIRAi1ZAJ9tl9d+XUp9mDnFImd8rb4Yvb/4RwCfUwva
FJhSavvM8XNawtXz9PL1VZw=
=+d40
-END PGP SIGNATURE-






Severe performace problem linking tables with mysql

2002-12-18 Thread Joseph Dietz

PLATFORM: 3.23.52-max-nt with Windows 2000 professional (default table type)

I have discovered a performance issue when joining several tables together.
Performance is extremely poor when running SELECT queries that use a WHERE
clause to join the tables on pk_media_id = fk_media_id, etc. I guess this
is what people might think about when considering using MySQL. With SQL
Server 2000 on Windows and Access, there was no loss in performance. 4 of
my tables are cross-reference tables, as such:

pk = primary key, fk = foreign key

Medias        MediaAuthors                 Authors
-----------   --------------------------   ------------
pk_media_id   fk_media_id, fk_author_id    pk_author_id

(Many authors for each media)







Re: Problem inserting data

2002-12-18 Thread Michael T. Babcock

Octavian Rasnita wrote:


Could you tell me if the following script is OK?
Note: It uses DBD::mysqlPP and not DBD::mysql, because only this module can
be used under Perl 5.8, but it should work the same.
 


I'm using DBD::mysql under PERL 5.8 right now ... FYI.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock







Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jocelyn Fournier
Hi,

I assume you are speaking about this comment:

"+---------------------+----------+----------+----------------------+
| Table               | Op       | Msg_type | Msg_text             |
+---------------------+----------+----------+----------------------+
| database.table_name | optimize | error    | 28 when fixing table |
| database.table_name | optimize | status   | Operation failed     |
+---------------------+----------+----------+----------------------+
2 rows in set (40.91 sec)

I typed it in again thinking it had some sort of
rollback or perhaps just a machine glitch but
then I typed it in again and got the following

mysql> optimize table table_name;
+---------------------+----------+----------+-------------------------------------------------+
| Table               | Op       | Msg_type | Msg_text                                        |
+---------------------+----------+----------+-------------------------------------------------+
| database.table_name | optimize | error    | Can't open file: 'table_name.MYD'. (errno: 144) |
+---------------------+----------+----------+-------------------------------------------------+

and lo all my data is lost... thank god for mysqldump."

Well, error 28 means there is no space left on the device.
When you run an OPTIMIZE TABLE statement, MySQL locks the main table and
recreates the index file in parallel.
As MySQL failed to recreate the index file, the table was marked as crashed
(errno: 144), but in no case was data lost (the data file is not altered
during an optimize):

He just has to execute a REPAIR TABLE statement to get all his records
back.

Take a look at what happens during an optimize of the following :

Before OPTIMIZE TABLE searchmainhardwarefr8:

[root@forum] /home/mysql/Hardwarefr> l searchmainhardwarefr8.*    <19:28:52
-rw-rw----   1 mysql  mysql  27589205 Dec 18 19:25 searchmainhardwarefr8.MYD
-rw-rw----   1 mysql  mysql  16257024 Dec 18 19:25 searchmainhardwarefr8.MYI
-rw-rw----   1 mysql  mysql      8596 Oct 18 17:03 searchmainhardwarefr8.frm

During OPTIMIZE TABLE searchmainhardwarefr8:

[root@forum] /home/mysql/Hardwarefr> l searchmainhardwarefr8.*    <19:29:21
-rw-rw----   1 mysql  mysql  27589205 Dec 18 19:25 searchmainhardwarefr8.MYD
-rw-rw----   1 mysql  mysql  16257024 Dec 18 19:25 searchmainhardwarefr8.MYI
-rw-rw----   1 mysql  mysql   6696960 Dec 18 19:29 searchmainhardwarefr8.TMM
-rw-rw----   1 mysql  mysql      8596 Oct 18 17:03 searchmainhardwarefr8.frm

After OPTIMIZE TABLE searchmainhardwarefr8:

[root@forum] /home/mysql/Hardwarefr> l searchmainhardwarefr8.*    <19:29:22
-rw-rw----   1 mysql  mysql  27589205 Dec 18 19:25 searchmainhardwarefr8.MYD
-rw-rw----   1 mysql  mysql  16257024 Dec 18 19:29 searchmainhardwarefr8.MYI
-rw-rw----   1 mysql  mysql      8596 Oct 18 17:03 searchmainhardwarefr8.frm


As you can see, only the MYI file (index file) has changed; the data file
remains untouched.
Since you can completely recreate the MYI file from the MYD file, there is
no possibility of data loss, even if the hard disk is full.
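Since error 28 is ENOSPC ("No space left on device"), a quick check of free space under the data directory before a large OPTIMIZE avoids the failed rebuild in the first place. A minimal sketch; /tmp here stands in for the real datadir (e.g. /var/lib/mysql):

```shell
# OPTIMIZE TABLE rebuilds the index into a temporary file, so make sure
# the data directory has room first.  /tmp is a stand-in path; point
# datadir at your actual MySQL data directory.
datadir=/tmp
avail_kb=$(df -Pk "$datadir" | awk 'NR==2 {print $4}')
echo "free space under $datadir: ${avail_kb} KB"
```

df -P gives the POSIX single-line-per-filesystem format, so the awk one-liner reliably picks the "Available" column.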


Regards,
  Jocelyn


- Original Message -
From: "Michael She" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "Muruganandam" <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 5:16 PM
Subject: Re: Can MySQL handle 120 million records?


> At 08:06 AM 12/18/2002 -0800, Jeremy Zawodny wrote:
>
>
> > > I have no problems using MySQL as a lightweight database for simple
> > > chores, but I'm a bit wary about putting it into a mission critical
> > > environment.
> >
> >Why, exactly?
>
>
> Mainly for 2 reasons:
>
> 1. MySQL hasn't been "proven" yet in the corporate environment
> 2. Some of the comments in the MySQL manual... people losing data doing
> routine stuff like table optimizations, adding keys, etc.  If a database is
> reliable, things like that shouldn't happen.  Comments like those in the
> MySQL manual scared me.
> --
> Michael She  : [EMAIL PROTECTED]
> Mobile   : (519) 589-7309
> WWW Homepage : http://www.binaryio.com/
>
>
>






RE: Can MySQL handle 120 million records?

2002-12-18 Thread Joe Stump
> 1. MySQL hasn't been "proven" yet in the corporate environment

Is Yahoo! proven enough? Seriously, how many large corporations have to use
a DB in order for it to be "proven"? Is Access "proven" because every
company on the planet uses it at some level?

> 2. Some of the comments in the MySQL manual... people losing data doing
> routine stuff like table optimizations, adding keys, etc.  If a database is
> reliable, things like that shouldn't happen.  Comments like those in the
> MySQL manual scared me.

Like previous posters have pointed out. If given the same freedom within
Oracle's online documentation you'd have to believe there would be horror
stories outlining loss of data.

The main questions you have to ask yourself are as follows:

- What type of application am I programming?
- What type of features do I need in my DB?
- How much data am I storing?
- What type of data am I storing?
- How much traffic will this database see?
- How much money can I spend?

If you're storing massive amounts of video and sound within the actual DB
then maybe Oracle is the choice for you. If you need seamless integration
with VB then MS SQL might be the best candidate. If all you need is data and
speed then any DB on the planet would work (if your traffic was low enough
XML in flat files would work). We aren't saying MySQL is the best DB for
every task, but what we are saying is MySQL is proven to be more than
adequate for all sorts of tasks - it just depends on which task you need
your DB to perform very well in.

--Joe

--
Joe Stump <[EMAIL PROTECTED]>
http://www.joestump.net










Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael Bacarella
> > > I have no problems using MySQL as a lightweight database for simple
> > > chores, but I'm a bit wary about putting it into a mission critical
> > > environment.

> 1. MySQL hasn't been "proven" yet in the corporate environment

We run a periodic billing system backed with MySQL, in addition to
the rest of a web site that is hammered with approximately 300 million
hits per month. These all go against a single MySQL instance running on
modest x86 hardware.

> 2. Some of the comments in the MySQL manual... people losing data doing
> routine stuff like table optimizations, adding keys, etc.  If a database is
> reliable, things like that shouldn't happen.  Comments like those in the
> MySQL manual scared me.

We've never lost data. Our database server has crashed hard from OS failures
and suffered plenty of unclean shutdowns. MySQL/InnoDB always recovers
perfectly.

-- 
Michael Bacarella  | Netgraft Corp
   | 545 Eighth Ave #401
 Systems Analysis  | New York, NY 10018
Technical Support  | 212 946-1038 | 917 670-6982
 Managed Services  | http://netgraft.com/






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 12:16:00PM -0500, Michael She wrote:
> At 08:06 AM 12/18/2002 -0800, Jeremy Zawodny wrote:
> 
> 1. MySQL hasn't been "proven" yet in the corporate environment

You mean in your corporate environment?

It works well in ours. :-) And we use it to store data that we serve
to millions of users daily.

> 2. Some of the comments in the MySQL manual... people losing data
> doing routine stuff like table optimizations, adding keys, etc.  If
> a database is reliable, things like that shouldn't happen.  Comments
> like those in the MySQL manual scared me.

You can find horror stories about any database product.  You just need
to know where to look or who to ask.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 141,006,515 queries (459/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Michael She wrote:


2. Some of the comments in the MySQL manual... people losing data 
doing routine stuff like table optimizations, adding keys, etc.  If a 
database is reliable, things like that shouldn't happen.  Comments 
like those in the MySQL manual scared me.


1) Do you believe this doesn't ever happen with other 'enterprise' level 
DB systems?
2) What do you think Microsoft's pages would look like if they allowed 
arbitrary user comments? (Read their newsgroups ...)
3) Those reports should be filed as bugs to be fixed, not added as 
comments in the manual.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






RE: Can MySQL handle 120 million records?

2002-12-18 Thread David Brodbeck


> -Original Message-
> From: Joe Stump [mailto:[EMAIL PROTECTED]]

> We hit the 2gb file limit in Linux (NOT a MySQL problem) and 
> moved to Solaris without incident.

This appears to have been largely fixed in Linux, too, if you use a recent
kernel and glibc.  I recently tried creating a 3 gigabyte file on one of my
ext2fs partitions and it worked fine.  I'm not sure what the new limit is.
The system is RedHat 7.0 with all the current bugfix updates (including
glibc 2.2.4), plus a custom-compiled 2.4.17 kernel.

This has been discussed a lot on the samba mailing list.  It seems to still
be a problem with some distributions and not with others, and no one's quite
sure what the deciding factor is.  It's easy enough to test by dd'ing a few
billion bytes from /dev/zero into a file, though.
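A cheap variant of that dd test uses a sparse file, so it costs almost no real disk space: seek past the 2GB mark and write one block, which either succeeds or fails right at the boundary. A sketch (the probe path is arbitrary):

```shell
# Probe large-file support by seeking past 2 GB and writing one 1 MB
# block.  The file is sparse, so this takes almost no disk space; on a
# system without large-file support, dd fails at the 2 GB boundary.
f=/tmp/bigfile-probe
dd if=/dev/zero of="$f" bs=1048576 seek=2049 count=1 2>/dev/null
wc -c < "$f"      # a value above 2147483648 means the write succeeded
rm -f "$f"
```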





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jocelyn Fournier
Previous hardware was a dual PIII-733 with 786 MB of RAM and 1 SCSI drive,
under Linux (kernel 2.4.18).
It worked fine, with some occasional slowdowns, mainly because of the hard
drive.
Now the server is a dual Athlon MP 2200+ with 2 GB of RAM and Maxtor Atlas
10K3 SCSI 320 drives (RAID-5) (still kernel 2.4.18).
The load average is roughly 0.7.

Regards,
  Jocelyn

- Original Message -
From: "W. D." <[EMAIL PROTECTED]>
To: "Jocelyn Fournier" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "Michael She" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>; "Muruganandam" <[EMAIL PROTECTED]>
Sent: Wednesday, December 18, 2002 5:38 PM
Subject: Re: Can MySQL handle 120 million records?




At 10:40 12/18/2002, Jocelyn Fournier wrote:
>Hi,
>
>I'm using MySQL on a database with 134 Millions of rows (10.9 GB) (some
>tables contains more than 40 millions of rows) under quite high stress
>(about 500 queries/sec avg). (using HEAP, MyISAM and InnoDB tables)
>I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
>using 4.1 on this production server ;)).
>So for me MySQL is ready for a mission critical environment :)

Wow!  What kind of hardware?  What OS?

Start Here to Find It Fast!© ->
http://www.US-Webmasters.com/best-start-page/










Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Joe Stump wrote:


As Jeremy points out all DB's have their problems, shortcomings, etc. If you
have specific complaints fill out a feature request, if you've got problems
fill out a bug report, but don't knock MySQL as
 


There's a nice point on the MySQL site somewhere that if you really want 
a feature, you could always call them and pay for the development of 
that feature.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






RE: Can MySQL handle 120 million records? - Impressive! How do you guys do that?

2002-12-18 Thread Qunfeng Dong
I am very encouraged to hear all these success
stories. I do want to stick with MySQL (we are using it
to develop a biology database). But I am indeed seeing
not-so-good performance (a join on tables much smaller
than yours takes minutes, even using an index), and I seem
to have read all the docs I could find on the web about how
to optimize, but they are not working for me (I am
going to order Jeremy Zawodny's "Advanced MySQL" and
see if I am missing anything). Am I one of the few who
are encountering these problems? What's your secret to
successfully running such large databases with MySQL? How
much time have you spent on fine-tuning the performance?

Qunfeng

--- Peter Vertes <[EMAIL PROTECTED]> wrote:
> Hi,
> 
>   I've been using MySQL intercompany for a while now
> with great results.  Even the diehard MSSQL people
> are amazed at how fast it can be at times.  One of
> the things I use it for is to store syslog events in
> it.  I wrote a backend that parses a syslog file as
> data is being written into it and does multiple
> things with each syslog entry depending on what the
> entry contains.  When I'm done with it the syslog
> entry goes into a MySQL database where I can store
> the data and let the operations team access it
> through a PHP enabled webpage to see either what is
> going on in the system real-time or be able to do
> queries about certain hosts, processes or show some
> stats (what happened to machine x on date y and what
> processes were running on it, etc...).
>   The MySQL database is being hosted on a Dell
> Precisions 540 workstation box.  It's a P4 1.7GHz
> Xeon with 512MB of ram and a 40GB IDE disc running
> Windows 2000 Server.  That MySQL database is also
> being used for other things (nothing too intensive)
> and I muck around with it also and use it as a test
> db.  The machine also handles webserving chores and
> runs backup chores and other operations related
> tasks.
>   The database only holds about 1 months worth of
> data in it, the rest we don't really need but we
> keep around for a while outside of the db zipped up.
>  As of when I'm writing this there were about 18.7
> million entries in that table:
> 
> mysql> select count(*) from notifications;
> +--+
> | count(*) |
> +--+
> | 18711190 |
> +--+
> 1 row in set (0.00 sec)
> 
> All these entries have been accumulated from
> December 1, 2002 till present day:
> 
> mysql> select distinct syslogdate from notifications
> order by syslogdate;
> ++
> | syslogdate |
> ++
> | 2002-12-01 |
> | 2002-12-02 |
> | 2002-12-03 |
> | 2002-12-04 |
> | 2002-12-05 |
> | 2002-12-06 |
> | 2002-12-07 |
> | 2002-12-08 |
> | 2002-12-09 |
> | 2002-12-10 |
> | 2002-12-11 |
> | 2002-12-12 |
> | 2002-12-13 |
> | 2002-12-14 |
> | 2002-12-15 |
> | 2002-12-16 |
> | 2002-12-17 |
> | 2002-12-18 |
> ++
> 18 rows in set (12.95 sec)
> 
>   Notice it took almost 13 seconds to complete that
> last query.  I tried the same thing on an MSSQL server
> and killed the query after 2 minutes.  That kind
> of performance is unacceptable for a webapp that
> does real-time queries against the database.  I'm
> quite happy with the performance of MySQL, and I just
> love to see the MSSQL guys retreat when I show off
> how fast some queries can be (they always strike
> back with transactional stuff, blah, blah, blah :)
> Anyway, I would suggest you run Linux on your
> db server with some kind of journaling file system.
> I would go with ReiserFS because, if memory serves
> correctly, it can handle files up to 4 terabytes, but
> you might want to double-check since I'm quite
> forgetful with facts like that :)  I would also
> recommend the fastest SCSI drives you can find.
> When I run queries on any 10 million+ row table I
> barely get any CPU activity but A LOT of disk
> activity, and I think this IDE drive is holding MySQL
> back.  When I have time I'm thinking about moving
> this database/webapp beast onto a SCSI Linux box to
> see how well it performs.  I think you'll be very
> pleased with the performance you'll get out of
> MySQL.
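
A hedged guess at why that DISTINCT takes 13 seconds: if `notifications`
has no index on `syslogdate`, MySQL has to scan all 18.7 million rows.
An untested sketch of the fix, assuming the table layout shown above
(the index name is made up):

```sql
-- Hypothetical fix: with an index, MySQL can answer the DISTINCT from
-- the index alone instead of scanning 18.7 million rows.
-- (Building the index on a table this size will itself take a while.)
ALTER TABLE notifications ADD INDEX idx_syslogdate (syslogdate);

-- The same query should then avoid the full table scan:
SELECT DISTINCT syslogdate FROM notifications ORDER BY syslogdate;
```

Whether the optimizer actually uses the index for DISTINCT depends on
the MySQL version; EXPLAIN on the query before and after would confirm.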
> 
> -Pete
> 
> P.S.: Thanks again MySQL team :)
> 
> -
> Before posting, please check:
>    http://www.mysql.com/manual.php   (the manual)
>    http://lists.mysql.com/           (the list archive)
> 
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
> 



Re: ERROR ON CONNECTION

2002-12-18 Thread Rafal Jank
On Wed, 18 Dec 2002 16:36:15 +
ANGELO CARMO <[EMAIL PROTECTED]> wrote:

> When I try to connect from a Windows XP machine to a Red Hat 8.0 Linux
> MySQL server (4.0.5-max), I get the following error message:
> 
> "ERROR 2013: Lost connection to MySQL server during query".
> 
> But I can connect from Linux to the Windows XP MySQL server (4.0.5-max).
> 
> Does anyone know the problem?
> 
Download and install version 4.0.5a


-- 
_/_/  _/_/_/  - Rafał Jank [EMAIL PROTECTED] -
 _/  _/  _/  _/   _/ Wirtualna Polska SA   http://www.wp.pl 
  _/_/_/_/  _/_/_/ul. Traugutta 115c, 80-237 Gdansk, tel/fax. (58)5215625
   _/  _/  _/ ==*  http://szukaj.wp.pl *==--





Re: Re: MySQL Download Types (clarification)

2002-12-18 Thread Michael T. Babcock
heheh ... nice to see:




Re-clarification of my own message: the .tar.gz described as a
binary is in fact a binary; my mistake.  Again, since I found it
confusing, I assume others will (and have) found it confusing.  The
source distribution is at the very end of the page, of course.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Problem inserting data

2002-12-18 Thread Octavian Rasnita
Hello all,

I'm trying to load a table from a text file with columns delimited by the "|"
character, and I can't get it right.

I've written a Perl script that reads the file, parses it, and inserts the
values into the table, but the problem is that the text contains
characters like " ' ?

I've tried using the $sth->quote() function, but it doesn't quote
characters like "?".
For each question mark (?) found, MySQL replaces it with "null".

I also tried using $sth->execute("insert into table values(?,?,?)");
and $sth->execute($var1, $var2, $var3); etc., but it only gives me errors.

Could you tell me if the following script is OK?
Note: It uses DBD::mysqlPP and not DBD::mysql, because only this module can
be used under Perl 5.8, but it should work the same.

#!/perl/bin/perl -w

use strict;
use DBI;

# Note: DBI->connect returns a database handle, so $sth here is really
# a $dbh; quote() and do() are database-handle methods.
my $sth = DBI->connect("dbi:mysqlPP:database=test;host=localhost",
    'teddy', undef, {'RaiseError' => 1});

my $file = "f:/teddy/data/search/index_search.txt";
open (IN, $file) or die "Can't open $file: $!";

while (<IN>) {
    chomp;
    my ($url, $time, $titlet, $titlew, $desct, $descw, $keysw, $bodyw)
        = split /\|/, $_;
    $sth->do("insert into search values(" . $sth->quote($url) . "," .
        $sth->quote($time) . "," . $sth->quote($titlet) . "," .
        $sth->quote($titlew) . "," . $sth->quote($desct) . "," .
        $sth->quote($descw) . "," . $sth->quote($keysw) . "," .
        $sth->quote($bodyw) . ")");
}
close(IN);


#Thank you.
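
For what it's worth, the usual DBI pattern is to prepare the statement
once with placeholders and execute it per row; the driver then handles
all quoting, and a literal "?" inside the data is just data, never a
placeholder.  A sketch under the same assumptions as the script above
(same file layout, an 8-column `search` table; untested with
DBD::mysqlPP specifically):

```perl
#!/usr/bin/perl -w
use strict;
use DBI;

# Database handle; placeholders make quote() calls unnecessary.
my $dbh = DBI->connect("dbi:mysqlPP:database=test;host=localhost",
    'teddy', undef, { RaiseError => 1 });

# Prepare once; each ? is bound at execute time and quoted by the driver.
my $sth = $dbh->prepare("INSERT INTO search VALUES (?,?,?,?,?,?,?,?)");

my $file = "f:/teddy/data/search/index_search.txt";
open(IN, $file) or die "Can't open $file: $!";
while (<IN>) {
    chomp;
    # The limit of 8 keeps trailing empty fields; assumes every line
    # carries all eight "|"-delimited columns.
    $sth->execute(split /\|/, $_, 8);
}
close(IN);
$dbh->disconnect;
```

The errors you saw likely came from passing the SQL string to execute()
instead of prepare(); execute() only takes the bind values.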



Teddy,
Teddy's Center: http://teddy.fcc.ro/
Email: [EMAIL PROTECTED]







Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael She
At 08:06 AM 12/18/2002 -0800, Jeremy Zawodny wrote:



> I have no problems using MySQL as a lightweight database for simple
> chores, but I'm a bit wary about putting it into a mission-critical
> environment.

Why, exactly?



Mainly for two reasons:

1. MySQL hasn't been "proven" yet in the corporate environment.
2. Some of the comments in the MySQL manual... people losing data doing 
routine stuff like table optimizations, adding keys, etc.  If a database is 
reliable, things like that shouldn't happen.  Comments like those in the 
MySQL manual scared me.
--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/






Implementation of 'Views'

2002-12-18 Thread Michael T. Babcock
Is there anything on the books for implementing 'Views', both for read 
and write (esp. based on key relationships and joins)?

I know temporary tables give a form of view, but they aren't "live" 
unless updated manually, and one can't insert into them and get the 
expected result.

-- Pseudo? SQL --

CREATE VIEW
   CustomerPhoneNumberView Customer.Name as Name,
   PhoneNumbers.Number as Number,
   CustomerPhoneNumber.Type as Type
FROM Customer
LEFT JOIN CustomerPhoneNumber ON CustomerID
LEFT JOIN PhoneNumbers ON PhoneNumberID;

SELECT * FROM CustomerPhoneNumberView;
Name   Number       Type
Joe    901212       Phone
Bob    4165551313   Fax

INSERT INTO CustomerPhoneNumberView(Name, Number, Type) VALUES ('Gary', 
'701414', 'Phone');

Just wondering :-)
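
Until views exist, the workaround is to run the underlying statements
directly.  A sketch using the hypothetical tables from the pseudo-SQL
above, with the join keys spelled out (column names are assumptions):

```sql
-- Read side of the would-be CustomerPhoneNumberView, as a plain query:
SELECT c.Name   AS Name,
       p.Number AS Number,
       cp.Type  AS Type
FROM Customer AS c
LEFT JOIN CustomerPhoneNumber AS cp ON cp.CustomerID   = c.CustomerID
LEFT JOIN PhoneNumbers        AS p  ON p.PhoneNumberID = cp.PhoneNumberID;

-- Write side: the single INSERT into the view becomes two inserts
-- linked via LAST_INSERT_ID(); assumes an auto-increment key on
-- PhoneNumbers and that 'Gary' already exists in Customer.
INSERT INTO PhoneNumbers (Number) VALUES ('701414');
INSERT INTO CustomerPhoneNumber (CustomerID, PhoneNumberID, Type)
SELECT CustomerID, LAST_INSERT_ID(), 'Phone'
FROM Customer WHERE Name = 'Gary';
```

This is exactly the bookkeeping a writable view would hide, which is
why write-through views based on joins are the hard part.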

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Mysql - Binary vs. Source

2002-12-18 Thread Michael T. Babcock
Santiago Alba wrote:


> I installed binary version (source distribution)... not with rpm


This binary/source confusion comes from the download page; it says 
"Binary packages (tar.gz)" for the source download and "Linux RPM 
packages (rpm)" for the binary package.  Could someone please change 
these to say "Source distribution (tar.gz)" and "Binary format (Linux RPM)"?

Just an FYI / request.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock





