RE: myisamchk question (important)

2003-08-21 Thread Paul DuBois
I should qualify my answer, to indicate something that may not apply
to the situation you have in mind.
If you're using myisamchk only to *check* tables, it operates in read-only
fashion.  The problems occur if you're using it to repair tables, because
then if you have both myisamchk and the server writing to the table
files, you're not going to like the results.
However, even with checking, if you don't flush the tables, you
may see spurious warnings of inconsistencies. (Due to unflushed changes,
for example.) These do not result in table damage.
At 14:37 -0400 8/21/03, Luc Foisy wrote:
If you read http://www.mysql.com/doc/en/Maintenance_regimen.html 
they say something interesting that would contradict other places 
in the documentation.
They actually recommend running myisamchk on a running instance of 
mysqld. The method on that page (a method they use themselves) would 
not even allow a FLUSH TABLES to be called beforehand.

Strange... How many answers can I find to this, does anyone know?
I have several conflicting sources now
-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 20, 2003 10:06 PM
To: Luc Foisy; MYSQL-List (E-mail)
Subject: Re: myisamchk question (important)
At 9:54 -0400 8/19/03, Luc Foisy wrote:
Would anything happen to the database if I ran myisamchk --silent
/usr/data/mysql/*/*.MYI when I haven't run FLUSH TABLES first?
Yes, you may have unflushed changes still in the server's buffers.
Running myisamchk in that case can make the tables *in*consistent
because the tables won't have in them what the server thinks.
Don't do it.

As it states in the documentation:
If mysqld is running, you must force a sync/close of all tables with
FLUSH TABLES and ensure that no one is using the tables while you
are running myisamchk. In MySQL Version 3.23 the easiest way to
avoid this problem is to use CHECK TABLE instead of myisamchk to
check tables.
Or is that only to avoid seeing:
myisamchk: warning: 1 clients is using or hasn't closed the table properly
It's very important that I get an answer soon... Please and Thank you


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


'0' instead of no entry with GROUP BY

2003-08-21 Thread Olaf van Zandwijk
Hi everyone,

I've got a very simple table which contains records of events. It's just 
a table with 2 columns: an id and a timestamp. Every time this event 
occurs, I insert a timestamp in the table.
I use this query to extract the number of events on each separate day:

SELECT COUNT(id) AS number, 
FROM_UNIXTIME(UNIX_TIMESTAMP(stamp),'%d-%m-%Y') AS date, 
FROM_UNIXTIME(UNIX_TIMESTAMP(stamp),'%Y%m%d') AS sort FROM koffiestats 
GROUP BY date ORDER BY sort ASC

This gives me something like this:
+++--+
| number | date   | sort |
+++--+
|  5 | 02-07-2003 | 20030702 |
|  7 | 03-07-2003 | 20030703 |
|  6 | 04-07-2003 | 20030704 |
|  5 | 07-07-2003 | 20030707 |
|  2 | 08-07-2003 | 20030708 |
|  5 | 09-07-2003 | 20030709 |
+++--+
In this set of records, you see that there are days on which nothing 
happened (usually weekends and holidays). The problem is that 
I want to see a '0' when no event occurred.

Can I achieve this with a different query?

Kind regards,
\Olaf
--
 __XXX__
  (0-0)
+--ooO--(_)--Ooo---+
|  |
| Olaf van Zandwijk|
|  |
| ICQ# 30231605|
| PGP Public Key: http://www.vanzandwijk.net/pgp.txt   |
+--+


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Doing Differential backup

2003-08-21 Thread Dathan Vance Pattishall
Veritas might be a good solution. I'm trying to find out if a mounted
Veritas file system that stores only the snapshots while the data
resides locally is possible. If so then backing up the data could happen
every 5 min if necessary in my environment.

Does anyone know if the Veritas software allows remote snapshots, meaning
the data and the differential are on two totally separate servers?

--
DVP
 

---Original Message-
--From: Helgi Örn Helgason [mailto:[EMAIL PROTECTED]
--Sent: Wednesday, August 20, 2003 11:51 PM
--To: [EMAIL PROTECTED]
--Subject: Re: Doing Differential backup
--
--On 2003-08-20, Miguel Perez wrote:
--
-- Hi list,
--
-- Does anyone know how to do differential backups, or does a tool exist
-- that does this kind of backup?
--
-- I have mysql 4.0.12 and use innodb tables, mysql is running on
-- redhat 7.3
--
--Hi Miguel,
--take a look in the MySQL manual 4.4.1 Database Backups:
--http://www.mysql.com/doc/en/Backup.html
--and 7.5.7 Backing up and Recovering an InnoDB Database
--http://www.mysql.com/doc/en/Backing_up.html
--Google search on: mysql database tables backup, gives a lot of good
--hints.
--
--Cheers,
--Helgi Örn
--

--/// Helgi Örn Helgason, Registered Linux User: #189958 \\\
--\\\ ~~ SuSE 8.2, MySQL 3.2, PHP 4.3.1 Apache 1.3.27 ~~ ///
--

--MySQL General Mailing List
--For list archives: http://lists.mysql.com/mysql
--To unsubscribe:
--http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Master-Slave Replication

2003-08-21 Thread Sanya Shaik
Hi all,
 
 Thanks for the answers about Master-Master replication. Right now I want to try the 
Master-Slave replication first and then do a circular replication. 
 
Unfortunately, I am facing problems with getting the slave to update automatically. 
 
I started the slave and loaded the data from the master, but any later changes made to 
the master are not reflected on the slave. 
 
Please HELP me in this regard. 


-
Do you Yahoo!?
The New Yahoo! Search - Faster. Easier. Bingo.

Re: Slow results with simple, well-indexed query

2003-08-21 Thread Hans van Harten
Cybot wrote:
 Jesse Sheidlower wrote:
 An example of a query is to get all the words (the cg.cw
 field) in a particular alphabetical range that have been
 added in some timespan (the sref.cd field). The cg table
 has about 3M rows, and the sref table about 70,000; the
 intervening tables are all indexed on the relevant id
 fields:
 mysql> SELECT cg.cw FROM cg,q,cit,sref
 -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id =
 sref.id
 -> AND cg.cw BETWEEN 't' AND 'tzzz'
 -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
 move your DATE before cw
 AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
 AND cg.cw BETWEEN 't' AND 'tzzz'
I'd try a changed table list after FROM: sref, cit, q, cg.
Your query started by selecting a few rows out of 3M, while a few out of 70k would still
be fewer. Just my rule of thumb: for a far faster response, keep intermediate
results small by joining in the largest table last.

I prefer to write out all joins ...
SELECT cg.cw FROM sref JOIN cit ON sref.id = cit.sref_id
 JOIN q ON cit.id = q.cit_id
 JOIN cg ON q.id = cg.q_id
WHERE sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
AND cg.cw BETWEEN 't' AND 'tzzz'
ORDER BY cg.cw
LIMIT 1000,10;
... don't think there is any gain in it on MySQL.

HansH


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: myisamchk question (important)

2003-08-21 Thread Luc Foisy
Thank you Paul.
Whew! So just checking will not do anything to the database, in any case?
I don't really care if it returns the '# users still connected' or 'table not closed 
properly' warning. I am not really looking for that.
I am looking for warnings telling me that tables are corrupted...



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 3:08 PM
To: Luc Foisy; MYSQL-List (E-mail)
Subject: RE: myisamchk question (important)


I should qualify my answer, to indicate something that may not apply
to the situation you have in mind.

If you're using myisamchk only to *check* tables, it operates in read-only
fashion.  The problems occur if you're using it to repair tables, because
then if you have both myisamchk and the server writing to the table
files, you're not going to like the results.

However, even with checking, if you don't flush the tables, you
may see spurious warnings of inconsistencies. (Due to unflushed changes,
for example.) These do not result in table damage.

At 14:37 -0400 8/21/03, Luc Foisy wrote:
If you read http://www.mysql.com/doc/en/Maintenance_regimen.html 
they say something interesting that would contradict other places 
in the documentation
They actually recommend running myisamchk on a running instance of 
mysqld. The method on that page (a method they use themselves) would 
not even allow a flush tables to be called before.

Strange... How many answers can I find to this, does anyone know?
I have several conflicting sources now

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 20, 2003 10:06 PM
To: Luc Foisy; MYSQL-List (E-mail)
Subject: Re: myisamchk question (important)


At 9:54 -0400 8/19/03, Luc Foisy wrote:
Would anything happen to the database if I ran myisamchk --silent
/usr/data/mysql/*/*.MYI when I haven't run FLUSH TABLES first?

Yes, you may have unflushed changes still in the server's buffers.
Running myisamchk in that case can make the tables *in*consistent
because the tables won't have in them what the server thinks.

Don't do it.


As it states in the documentation:
If mysqld is running, you must force a sync/close of all tables with
FLUSH TABLES and ensure that no one is using the tables while you
are running myisamchk. In MySQL Version 3.23 the easiest way to
avoid this problem is to use CHECK TABLE instead of myisamchk to
check tables.

Or is that only to avoid seeing:
myisamchk: warning: 1 clients is using or hasn't closed the table properly

It's very important that I get an answer soon... Please and Thank you


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

Are you MySQL certified?  http://www.mysql.com/certification/


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


-- 
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

Are you MySQL certified?  http://www.mysql.com/certification/


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow results with simple, well-indexed query

2003-08-21 Thread Allen Weeks
The only thing I can add is: check your hardware and OS platform.

Cheers

-Original Message-
From: Jesse Sheidlower [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 11:44 AM
To: Cybot
Cc: [EMAIL PROTECTED]
Subject: Re: Slow results with simple, well-indexed query



On Thu, Aug 21, 2003 at 06:01:31PM +0200, Cybot wrote:
 Jesse Sheidlower wrote:

 I'm struggling with speed issues on some queries that
 I would have expected to be relatively fast. Perhaps
 even more frustratingly, when I've tried to break
 these down into their components, they still execute
 very slowly. I've looked over all the relevant suggestions
 for optimization and so forth, and there's nothing I can
 tell that I'm missing.
 
 An example of a query is to get all the words (the cg.cw
 field) in a particular alphabetical range that have been
 added in some timespan (the sref.cd field). The cg table
 has about 3M rows, and the sref table about 70,000; the
 intervening tables are all indexed on the relevant id
 fields:
 
 -
  mysql> SELECT cg.cw FROM cg,q,cit,sref
  -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id =
 sref.id
  -> AND cg.cw BETWEEN 't' AND 'tzzz'
  -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)

 move your DATE before cw

  AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
 AND cg.cw BETWEEN 't' AND 'tzzz'

  because sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH) should be faster
  than cg.cw BETWEEN 't' AND 'tzzz', and so the total rows are already
  limited when cg.cw BETWEEN 't' AND 'tzzz' is executed

I assume that the optimizer would take care of this, but in any
case I gave it a try and it made no difference.

  also you can try an index with a length of 2 or 3 over cg.cw; this will
  result in a smaller index and possibly speed things up

I also tried this (the current index is 25 characters on a 100-character
field), and if anything it made things slower.

Anyone have any other ideas or analysis?

Thanks very much.

Jesse Sheidlower

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Master-Slave Replication

2003-08-21 Thread Miguel Perez
Here is a good URL, maybe it can help you to deploy your Master-Slave 
solution.
URL: http://mysql.us.themoes.org/doc/en/Replication_HOWTO.html

Greetings

Mikel

From: Sanya Shaik [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Master-Slave Replication
Date: Thu, 21 Aug 2003 12:50:31 -0700 (PDT)

Hi all,

 Thanks for the answers for Master-Master replication. Right now i want to 
try the Master-Slave replication first and then do a circular replication.

Unfortunately, I am facing problems with updating slave automatically.

I started the slave and loaded the data from the master, later any changes 
made to master are not reflected on slave.

Please HELP me in this regard.

-
Do you Yahoo!?
The New Yahoo! Search - Faster. Easier. Bingo.
_
MSN Photos: the easiest way to share and print photos.  
http://photos.msn.es/support/worldwide.aspx

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Slow results with simple, well-indexed query

2003-08-21 Thread John Griffin
Can you post your DDL to go along with your DML?

-Original Message-
From: Allen Weeks [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 4:51 PM
To: Jesse Sheidlower; Cybot
Cc: [EMAIL PROTECTED]
Subject: RE: Slow results with simple, well-indexed query


The only thing I can add is: check your hardware and OS platform.

Cheers

-Original Message-
From: Jesse Sheidlower [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 11:44 AM
To: Cybot
Cc: [EMAIL PROTECTED]
Subject: Re: Slow results with simple, well-indexed query



On Thu, Aug 21, 2003 at 06:01:31PM +0200, Cybot wrote:
 Jesse Sheidlower wrote:

 I'm struggling with speed issues on some queries that
 I would have expected to be relatively fast. Perhaps
 even more frustratingly, when I've tried to break
 these down into their components, they still execute
 very slowly. I've looked over all the relevant suggestions
 for optimization and so forth, and there's nothing I can
 tell that I'm missing.
 
 An example of a query is to get all the words (the cg.cw
 field) in a particular alphabetical range that have been
 added in some timespan (the sref.cd field). The cg table
 has about 3M rows, and the sref table about 70,000; the
 intervening tables are all indexed on the relevant id
 fields:
 
 -
  mysql> SELECT cg.cw FROM cg,q,cit,sref
  -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id =
 sref.id
  -> AND cg.cw BETWEEN 't' AND 'tzzz'
  -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)

 move your DATE before cw

  AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
 AND cg.cw BETWEEN 't' AND 'tzzz'

  because sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH) should be faster
  than cg.cw BETWEEN 't' AND 'tzzz', and so the total rows are already
  limited when cg.cw BETWEEN 't' AND 'tzzz' is executed

I assume that the optimizer would take care of this, but in any
case I gave it a try and it made no difference.

 also you can try an index with a length of 2 or 3 over cg.cw, this will
 result in smaller index and possible speed up things

I also tried this (the current index is 25 characters on a 100-character
field), and if anything it made things slower.

Anyone have any other ideas or analysis?

Thanks very much.

Jesse Sheidlower

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doing differential backups

2003-08-21 Thread Miguel Perez
Thanks for the URLs, I'll check them out.

Greetings

Mikel

_
Chat with your friends online using MSN Messenger:  
http://messenger.microsoft.com/es

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: myisamchk question (important)

2003-08-21 Thread Paul DuBois
At 16:32 -0400 8/21/03, Luc Foisy wrote:
Thank you Paul.
Whew! So just checking will not do anything to the database, in any case?
I don't really care if it returns the '# users still connected' or 
'table not closed properly' warning. I am not really looking for that.
I am looking for warnings telling me that tables are corrupted...
Well, the thing is, if you don't tell the server to flush changes, I believe
there may be partially flushed changes that can cause myisamchk to get
confused and believe that there is corruption.  I don't recall that I have
ever seen this in practice when trying to see what breaks by performing
this unrecommended procedure :-) -- but I'm reluctant to say that it can
*never* happen.
Any reason not to use CHECK TABLE? You may find it a better alternative.
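For example, a minimal sketch of the server-side approach (the table names
here are made up, not from your setup):

CHECK TABLE mydb.mytable;
CHECK TABLE mydb.t1, mydb.t2 QUICK;    # several tables, lighter pass
CHECK TABLE mydb.mytable EXTENDED;     # slower, more thorough pass

Because CHECK TABLE runs inside the server, no FLUSH TABLES is needed and
there is no way for an external program and mysqld to step on the same
table files.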



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 3:08 PM
To: Luc Foisy; MYSQL-List (E-mail)
Subject: RE: myisamchk question (important)
I should qualify my answer, to indicate something that may not apply
to the situation you have in mind.
If you're using myisamchk only to *check* tables, it operates in read-only
fashion.  The problems occur if you're using it to repair tables, because
then if you have both myisamchk and the server writing to the table
files, you're not going to like the results.
However, even with checking, if you don't flush the tables, you
may see spurious warnings of inconsistencies. (Due to unflushed changes,
for example.) These do not result in table damage.
At 14:37 -0400 8/21/03, Luc Foisy wrote:
If you read http://www.mysql.com/doc/en/Maintenance_regimen.html
they say something interesting that would contradict other places
in the documentation
They actually recommend running myisamchk on a running instance of
mysqld. The method on that page (a method they use themselves) would
not even allow a flush tables to be called before.
Strange... How many answers can I find to this, does anyone know?
I have several conflicting sources now
-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 20, 2003 10:06 PM
To: Luc Foisy; MYSQL-List (E-mail)
Subject: Re: myisamchk question (important)
At 9:54 -0400 8/19/03, Luc Foisy wrote:
Would anything happen to the database if I ran myisamchk --silent
/usr/data/mysql/*/*.MYI when I haven't run FLUSH TABLES first?
Yes, you may have unflushed changes still in the server's buffers.
Running myisamchk in that case can make the tables *in*consistent
because the tables won't have in them what the server thinks.
Don't do it.

As it states in the documentation:
If mysqld is running, you must force a sync/close of all tables with
FLUSH TABLES and ensure that no one is using the tables while you
are running myisamchk. In MySQL Version 3.23 the easiest way to
avoid this problem is to use CHECK TABLE instead of myisamchk to
check tables.
Or is that only to avoid seeing:
myisamchk: warning: 1 clients is using or hasn't closed the table properly
 It's very important that I get an answer soon... Please and Thank you


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MySQL running out of date

2003-08-21 Thread Hans van Harten
Some people make a laughing stock of MySQL with this code:
create database data_test ;
use data_test;
create table test3 (a date);
insert into test3 values (-1);
insert into test3 values ('1996-02-31');
insert into test3 values ('1996-67-31');
select * from test3;

I ran it on MYSQL-4.10-max and was not amused.
Anyone to comment??

HansH

Re: myisamchk question (important)

2003-08-21 Thread Sergei Golubchik
Hi!

On Aug 21, Luc Foisy wrote:
 Thank you Paul.
 whew! So just checking will not do anything to the database, in any case?

Yes, but ONLY if you run myisamchk with the --read-only flag.

 I don't really care if it returns the '# users still connected' or
 'table not closed properly' warning. I am not really looking for that.
 I am looking for warnings telling me that tables are corrupted...

You can get false positives - myisamchk may report the table as crashed
even if it is not. Just imagine myisamchk is checking the table, and when
it's halfway done MySQL writes something to the table.
The table will most definitely *look* corrupted.
 
Regards,
Sergei

-- 
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: RAID or not?

2003-08-21 Thread Per Andreas Buer
[EMAIL PROTECTED] (Lefevre, Steven) writes:

 I say go with RAID 5, on a controller card.
 ..

 You get better performance than mirroring or regular drive, because
 the data is spread out over your drives. It's not as good as disk
 striping, though.

Ehh. Wrong. That is not how it works. If you have RAID5 with 4 disks, as
we have here, one single write() will have the following effect:
1. The controller will have to read the whole stripe off the array: 3
reads from 3 different discs.
2. Calculate the new checksum for the stripe.
3. Write the modified block back to the disk where it was changed.
4. Update the checksum.

This works OK for multimedia and file storage, where your write() call
might be the size of a stripe or bigger. Then you can skip step 1 on
the list above.

Ask any DBA; they will all tell you to _never_ use RAID 5 for databases
with dynamic content. Just don't.

As for performance, seek times tend to be higher on a RAID5 array than
on a mirror. The only thing which is good with RAID5 is read throughput,
which might be important for full table scans, but not much else.


 So, all in all, RAID 5 gives fault tolerance and better performance.

Why do you think people use RAID1? 

 You can have the OS do the RAID, but that puts a lot of burden on the
 processor and OS. 

CPU is almost never an issue anymore - not for database servers, anyway.
The increase in CPU usage is seldom noticeable. I've seen software RAID
(on Linux 2.4) outrun $2000+ RAID cards. CPUs are many times faster than
the puny i960 or StrongARM CPUs which are put on RAID controllers.

 I recommend getting a RAID card, and not a cheap one, either. Plan on
 spending ~$500.

If you get one, get one with a battery-backed write-back cache. It will
give you kick-ass performance for those pesky fsyncs.

-- 
Per Andreas Buer

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL running out of date

2003-08-21 Thread Rajesh Kumar
Hans van Harten unknowingly asked us:
Some make the laughing stock of MySQL with this code:
create database data_test ;
use data_test;
create table test3 (a date);
insert into test3 values (-1);
insert into test3 values ('1996-02-31');
insert into test3 values ('1996-67-31');
select * from test3;
I ran it on MYSQL-4.10-max and was not amused.
Anyone to comment??
Yeah, what's wrong with this? Absolutely expected results.

--
[ Rajesh Kumar ]
__
Meet the guy at http://www.meetRajesh.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Need help optimizing query, awfully slow on only 20000 records

2003-08-21 Thread Apollo (Carmel Entertainment)
Absolutely! I have multiple indexes. I think it might be a problem with ODBC

 Are your tables indexed? http://www.mysql.com/doc/en/MySQL_indexes.html
 
 Saqib Ali
 -
 http://www.xml-dev.com
 
 


-
Visit CARMEL MUSIC  ENTERTAINMENT website http://carmelme.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MySQL running out of date

2003-08-21 Thread Adam Clauss
Same here.

Got 0's for the invalid dates, correct date for the other.  What is wrong?

Adam Clauss
[EMAIL PROTECTED]


 -Original Message-
 From: Rajesh Kumar [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, August 21, 2003 1:35 PM
 To: Hans van Harten
 Cc: [EMAIL PROTECTED]
 Subject: Re: MySQL running out of date
 
 
 Hans van Harten unknowingly asked us:
  Some make the laughing stock of MySQL with this code:
  create database data_test ;
  use data_test;
  create table test3 (a date);
  insert into test3 values (-1);
  insert into test3 values ('1996-02-31');
  insert into test3 values ('1996-67-31');
  select * from test3;
  
  I ran it on MYSQL-4.10-max and was not amused.
  Anyone to comment??
 
 Yeah, what's wrong with this? Absolutely expected results.
 
 -- 
 [ Rajesh Kumar ]
 __
 Meet the guy at http://www.meetRajesh.com/
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql? [EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Seeking advice on best table structure

2003-08-21 Thread Roger Baklund
* Rajesh Kumar
 Roger Baklund unknowingly asked us:

 What would be a good way to deal with the following...

No, I did not. Scott Haneda asked the question. I replied. :) And I
recommended considering the SET column.

 But for a couple of reasons, SET is not recommend.

 1. It introduces fragmentation, and indexes are pretty hard.

Fragmentation?

 2. It defies the universal rule of a normalised table:

No. This is not a universal rule, and the SET column type _should_ be used
when it is the best choice. IMO.

I suppose the rule you are referring to is the first normal form (1NF).

 You cannot, no matter what, and should not, store more than one

Either you can but shouldn't, or you can't... ;)

 value in a single cell of a database table.

And how would you store a string? A single char per row? ;)

This would depend on how you define "one value". It is useful to define a
string as one value, and not as a list of characters, because the latter
would be in conflict with 1NF. Another example is names: you would often
store "Baklund, Roger" in a single column, even if it is (at least) two
values: family name and given name. Similarly, a product number "X-23/b4"
may contain multiple values, but it is still useful to store it in a
single column, and define it as the single value "product#". Maybe the "X-"
prefix means this product is in some special category; you would still keep
a 'prod_cat' column, thus breaking the rules of redundancy. This is
normal, most databases of some size have such redundancies, it won't prevent
you from taking advantage of normalization, and you would still call the
database normalized. Normalization is not an exact science in the real
world; you have to use what works best.

If the five checkboxes are static, i.e. they will never change, then I see no
reason not to use a SET column, unless, of course, he needs to index on
this. And I don't think he does, because he said what he needed was
statistics for the whole table, thus he needs to read the entire table
anyway.

To stay within 1NF you can simply define a new term "checkboxvalue", which
is an integer between 0 and 31, representing all possible combinations of
the five checkboxes. This integer can be stored in a SET column, which also
gives you a nice string interface to the bit manipulation arithmetic you
otherwise would need to perform. And you get both: you can still use bit
arithmetic if you like.

You _can_ use an index on a SET column, but only for exact matches and
ranges, not when searching for anyone who has checked 'car'. An exception
to this is the last item in the set, because that is assigned the highest
value. In the example in this thread, 'beetle' is the last value,
representing 16, so an indexed range check could be used to find anyone who
has or has not checked 'beetle': WHERE choiceflag >= 16 or WHERE
choiceflag < 16.
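As a quick illustration (the table and most of the set members below are
invented; only 'car' and 'beetle' come from this thread):

CREATE TABLE user_choices_set (
  userid INT NOT NULL,
  choiceflag SET('car','bike','boat','plane','beetle'),
  INDEX (choiceflag)
);

# exact matches and ranges can use the index:
SELECT userid FROM user_choices_set WHERE choiceflag = 'car,beetle';
SELECT userid FROM user_choices_set WHERE choiceflag >= 16;  # 'beetle' checked
SELECT userid FROM user_choices_set WHERE choiceflag < 16;   # 'beetle' not checked
# a membership test on any other member cannot use the index:
SELECT userid FROM user_choices_set WHERE FIND_IN_SET('car',choiceflag) > 0;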

 Even if you did, it is going to be really hard later on to change the
 names of the SETS (this would contradict with the user's choice), and is
 going to be still harder to add another choice to your list.

Well, ALTER TABLE is not very hard... but not needing to is even easier, of
course. :) He said the list would not change.

 The best I would recommend, though it takes up more space in the table,
 is to have a separate column in another table with a one-to-one join,
 and each column as each choice, and each column with the ENUM type of
 true/false.

one-to-one?  why not five columns in the original table? And why not go all
the way with two new tables:

choices (choiceid tinyint,choice varchar(255))
user_choices (userid int,choiceid tinyint)

 Think before you choose, for you don't want to be sorry later.

That is always good advice. :)

--
Roger


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MySQL running out of date

2003-08-21 Thread Brian Austin
As per the manual, that is correct.
The only correct date will be 1996-02-31.
What is the problem?

-Original Message-
From: Hans van Harten [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 4:09 PM
To: [EMAIL PROTECTED]
Subject: MySQL running out of date 


Some make the laughing stock of MySQL with this code:
create database data_test ;
use data_test;
create table test3 (a date);
insert into test3 values (-1);
insert into test3 values ('1996-02-31');
insert into test3 values ('1996-67-31');
select * from test3;

I ran it on MYSQL-4.10-max and was not amused.
Anyone to comment??

HansH

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL running out of date

2003-08-21 Thread Hans van Harten
Adam Clauss and Rajesh Kumar wrote:
 Hans van Harten unknowingly asked us:
LOL
 Some make the laughing stock of MySQL with this code:
 create database data_test ;
 use data_test;
 create table test3 (a date);
 insert into test3 values (-1);
 insert into test3 values ('1996-02-31');
 insert into test3 values ('1996-67-31');
 select * from test3;
 I ran it on MYSQL-4.10-max and was not amused.
 Anyone to comment??
 Yeah, what's wrong with this? Absolutely expected results.
 Same here.
 Got 0's for the invalid dates, correct date for the other.  What is
 wrong?

I'd say NONE of them is correct ... February _31_st!!
Then I'd expect at least a warning, or rather an error, on each of the
samples!

HansH



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: '0' instead of no entry with GROUP BY

2003-08-21 Thread Yves Goergen
From what I understand of this problem, it is not possible in an easy way.
I once found an article on this on the internet, though I don't remember where 
(neither devshed nor phpbuilder).
It was about JOINing the result with an 'integers table' that has all the needed 
values from 1 to, say, 31, so you can ensure the appearance of any day value from 1 to 
31.
Of course, this won't work by itself with 30-day months or even Februaries and leap 
years; in that case I suggest you cover the holes in your processing application logic, 
when drawing a graph or whatever.
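Roughly along those lines, one month at a time (untested, and the helper
table name 'ints' is made up):

CREATE TABLE ints (i TINYINT NOT NULL PRIMARY KEY);
INSERT INTO ints VALUES (1),(2),(3),(4),(5);  # ...and so on up to (31)

# every day of July 2003, with 0 on days that had no events
SELECT ints.i AS day, COUNT(k.id) AS number
FROM ints
LEFT JOIN koffiestats k
  ON DAYOFMONTH(k.stamp) = ints.i
  AND k.stamp >= '2003-07-01' AND k.stamp < '2003-08-01'
GROUP BY ints.i
ORDER BY ints.i;

COUNT(k.id) only counts matched rows, so days without events come out as 0;
restricting the query to a single month in the ON clause means the only
leftover issue is a trailing zero row for day 29/30/31 in shorter months.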

--
Yves Goergen
[EMAIL PROTECTED]
Please don't CC me (causes double mails)


-----Original Message----- 
From: Olaf van Zandwijk [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 9:38 PM
Subject: '0' instead of no entry with GROUP BY


 Hi everyone,
 
 I've got a very simple table which contains records of events. It's just 
 a table with 2 columns: an id and a timestamp. Every time this event 
 occurs, I insert a timestamp in the table.
 I use this query to extract the number of events on each separate day:
 
 SELECT COUNT(id) AS number, 
 FROM_UNIXTIME(UNIX_TIMESTAMP(stamp),'%d-%m-%Y') AS date, 
 FROM_UNIXTIME(UNIX_TIMESTAMP(stamp),'%Y%m%d') AS sort FROM koffiestats 
 GROUP BY date ORDER BY sort ASC
 
 This gives me something like this:
 +++--+
 | number | date   | sort |
 +++--+
 |  5 | 02-07-2003 | 20030702 |
 |  7 | 03-07-2003 | 20030703 |
 |  6 | 04-07-2003 | 20030704 |
 |  5 | 07-07-2003 | 20030707 |
 |  2 | 08-07-2003 | 20030708 |
 |  5 | 09-07-2003 | 20030709 |
 +++--+
 
 In this set of records, you see that there are days on which nothing 
 happened (usually weekends and holidays). The problem is that 
 I want to see a '0' when no event occurred.
 
 Can I achieve this with a different query?
 
 Kind regards,
 \Olaf
 
 -- 
   __XXX__
(0-0)
 +--ooO--(_)--Ooo---+
 |  |
 | Olaf van Zandwijk|
 |  |
 | ICQ# 30231605|
 | PGP Public Key: http://www.vanzandwijk.net/pgp.txt   |
 +--+
 
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Slow results with simple, well-indexed query

2003-08-21 Thread Steven Roussey
 Executing just the search on the word table, with no joins to the
 table with the dates, is still slow:

Then it is not worthwhile to focus on anything else until you fix that.
Are the contents of this field always in lower case?

If so, then change the column to a binary type. The explain says:

  rows: 318244
 Extra: Using where; Using filesort

That means that it is sorting all 318,244 (est.) records first, then going
down to the 3000th and giving you five records. Just a guess. See if
that helps, then we can move on to the join.

--steve-



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MySQL running out of date

2003-08-21 Thread Kevin Fries
I agree it's unfortunate that the dates get stored.  But some do seem to
prefer it this way.

To quote the manual at the bottom of:
http://www.mysql.com/doc/en/Using_DATE.html

If the date cannot be converted to any reasonable value, a 0 is
stored in the DATE field, which will be retrieved as 0000-00-00. This is
both a speed and convenience issue as we believe that the database's
responsibility is to retrieve the same date you stored (even if the data
was not logically correct in all cases). We think it is up to the
application to check the dates, and not the server. 

Interestingly, use of date_add() and date_sub() on 'odd' dates such as
Feb 31 does produce sane results.
Subtract one from 2000 Feb 31, and you'll get 2000-03-01.

To test the format of an input date, you might try:

mysql> select date_add('2000-11-31', interval 0 day);
+-----------------------------------------+
| date_add('2000-11-31', interval 0 day)  |
+-----------------------------------------+
| 2000-12-01                              |
+-----------------------------------------+
1 row in set (0.00 sec)

You'll be able to tell whether you've got a good date by comparing the
results.  And you can correct a weird-but-valid date by updating it
with this technique.

We may not like the way it works, but at least it's correct per the
manual.

 -Original Message-
 From: Hans van Harten [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, August 21, 2003 3:05 PM
 To: [EMAIL PROTECTED]
 Subject: Re: MySQL running out of date
 
 
 Adam Clauss and Rajesh Kumar wrote:
  Hans van Harten unknowingly asked us:
 LOL
  Some make the laughing stock of MySQL with this code:
  create database data_test ;
  use data_test;
  create table test3 (a date);
  insert into test3 values (-1);
  insert into test3 values ('1996-02-31');
  insert into test3 values ('1996-67-31');
  select * from test3;
  I ran it on MYSQL-4.10-max and was not amused.
  Anyone to comment??
  Yeah, what's wrong with this? Absolutely expected results.
  Same here.
  Got 0's for the invalid dates, correct date for the other.  What is 
  wrong?
 
 I'ld say, NONE of them is correct ... Februari _31_st !!
 Then I'ld expect at least a warning, rather an error, on each 
 of the samples!
 
 HansH
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql? [EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How to create a stop word file?

2003-08-21 Thread Steven Roussey
 how to separate each stop word in the list

A different word on each line.
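Roughly (the table name below is made up): the file is just plain text with
one stop word per line; point the server at it with the ft_stopword_file
option and rebuild your FULLTEXT indexes afterwards, e.g.:

SHOW VARIABLES LIKE 'ft_stopword_file';  # see what the server uses now
REPAIR TABLE articles QUICK;             # rebuild FULLTEXT indexes after the change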

-steve-



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to create a stop word file?

2003-08-21 Thread Paul DuBois
At 15:53 -0300 8/21/03, Cleber Hostalácio de Melo wrote:
Hi,

I need to change the stop word file used by the MySQL in fulltext seach.

The documentation (www.mysql.com/doc/en/Fulltext_Fine-tuning.html)
explains how to inform to MySQL the new stop word file through the
ft_stopword_file variable.
That section has a link to:

http://www.mysql.com/doc/en/SHOW_VARIABLES.html

which, under the ft_stopword_file entry, describes the file format.

But I could not find any reference to the layout of this file, for
example, how to separate each stop word in the list?
Does someone have any experience on that?

Thanks

Cleber.


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Slow results with simple, well-indexed query

2003-08-21 Thread Jesse Sheidlower
On Thu, Aug 21, 2003 at 03:36:54PM -0700, Steven Roussey wrote:
  Executing just the search on the word table, with no joins to the
  table with the dates, is still slow:
 
 Then it is not worth while to focus on anything else until you fix that.
 Are the contents of this field always in lower case?
 
 Is so, then change the column to a binary type. The explain says:
 
   rows: 318244
  Extra: Using where; Using filesort
 
 That means that is sorting all 318,244 (est) records first, then going
 down to the 3000th and giving you five records. Just a guess. See if
 that helps then we can move on to the join.

No, the contents can be of mixed case. Where does that leave things?

In a working environment I'd never be querying this table alone;
it would always be joined to other tables that would limit things
in some way, but these don't seem to be affecting things. The suggestions
other people have made to try to get it to do the smaller queries first
don't seem to be having much effect, unfortunately.

Jesse Sheidlower

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL running out of date

2003-08-21 Thread Peter Brawley
Interestingly, use of date_add() and date_sub() on 'odd' dates such as
Feb 31 does produce sane results.
Subtract one from 2000 Feb 31, and you'll get 2000-03-01.

This is sane!!??

PB

[mysql]




Re: MySQL running out of date

2003-08-21 Thread Hans van Harten
Kevin Fries wrote:
 I agree it's unfortunate that the dates get stored.  But some do seem
 to prefer it this way.

 To quote the manual at the bottom of:
 http://www.mysql.com/doc/en/Using_DATE.html

 If the date cannot be converted to any reasonable value, a 0 is
 stored in the DATE field, which will be retrieved as -00-00. This
 is both a speed and convenience issue as we believe that the
 database's responsibility is to retrieve the same date you stored
 (even if the data was not logically correct in all cases). We think
 it is up to the application to check the dates, and not the server.
It was my understanding that a server should validate the date against its type.

 Interestingly, use of date_add() and date_sub() on 'odd' dates such as
 Feb 31 does produce sane results.
Only partially ...
  insert into test3 values ( date_add('1996-02-30', interval 0 day) );
  insert into test3 values ( date_add('1996-02-61', interval 0 day) );
  insert into test3 values ( date_add('1996-67-31', interval 0 day) );
  select * from test3;
Returns
  1996-03-01
  NULL  expected: April 1, 1996 ...
  NULL  expected: July 31, 2001 ...

 We may not like the way it works, but at least it's correct per the
manual.
Does it comply with ANSI or SQL92 or ...??

HansH

...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL running out of date

2003-08-21 Thread Rajesh Kumar
Peter Brawley unknowingly asked us:
Interestingly, use of date_add() and date_sub() on 'odd' dates such as
Feb 31 does produce sane results.
Subtract one from 2000 Feb 31, and you'll get 2000-03-01.
This is sane!!??
This is where Unix Timestamps come into action (and perhaps rescue)!

To be sure that we're storing a sane value in our DB, we could use:

SELECT unix_timestamp('2002-02-31');

which provides a correct result.

Some interesting results:

mysql> select from_unixtime(unix_timestamp('2000-11-31'),'%Y-%m-%d');
+---------------------------------------------------------+
| from_unixtime(unix_timestamp('2000-11-31'),'%Y-%m-%d')  |
+---------------------------------------------------------+
| 2000-12-01                                              |
+---------------------------------------------------------+
mysql> select from_unixtime(unix_timestamp('2000-02-31'),'%Y-%m-%d');
+---------------------------------------------------------+
| from_unixtime(unix_timestamp('2000-02-31'),'%Y-%m-%d')  |
+---------------------------------------------------------+
| 2000-03-02                                              |
+---------------------------------------------------------+
And lastly, I don't know how this happens:

mysql> select unix_timestamp('2002-102-31');
+-------------------------------+
| unix_timestamp('2002-102-31') |
+-------------------------------+
|                    1037026951 |
+-------------------------------+
We could then probably eliminate all timestamps stored as 1969-12-31 as 
we know that they're illegal.

--
[ Rajesh Kumar ]
__
Meet the guy at http://www.meetRajesh.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Slow results with simple, well-indexed query

2003-08-21 Thread Steven Roussey
 No, the contents can be of mixed case. Where does that leave things?

**Index the length of the entire column.** It then should not need to
do the filesort. Actually the binary option would not have
really helped. The explain should say 'Using Index'. Get back to me on
this and tell me the results.

 In a working environment I'd never be querying on this table alone,
 it would always be joined in to other tables that would limit things
 in some way, but these don't seem to be affecting things. The
suggestions
 other people have made to try to get it to do the smaller queries
first
 don't seem to be having much effect, unfortunately.

Optimize the join once you know how to optimize its parts. One thing at
a time.

--steve-


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Imran Javed/Kamino is out of the office.

2003-08-21 Thread Imran Javed




I will be out of the office starting  21/08/2003 and will not return until
08/09/2003.

I will respond to your message when I return. If you have any urgent
queries please contact Stuart Mclean

==
Disclaimer Notice

This message (including attachments) is legally privileged and/or
confidential. If you are not the intended recipient, you are hereby
notified that any unauthorised disclosure, copying, distribution or use of
this information is strictly prohibited. If you have received this e-mail
in error, please notify us immediately by telephone or by e-mail. It is the
responsibility of the recipient(s) to ensure that this message is virus
free and we accept no liability for any loss or damage arising from its
receipt or use.
==


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Copying distinct data to a new table

2003-08-21 Thread Dan Jones
I'm attempting to normalize a database that was originally created as a
flat file.  I want to extract distinct values from a table and insert
them as new entries into a new table.  Unless I'm missing something,
INSERT doesn't allow you to SELECT data from another table for
insertion, and UPDATE doesn't allow you to create new rows.

I could do a SELECT DISTINCT INTO OUTFILE, then LOAD DATA INFILE but
that seems rather cumbersome.  Is there a straightforward way to do
this?




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to create a stop word file?

2003-08-21 Thread daniel
Sorry to be vague, but what is the ft_stopword_file? I haven't been able to get
up to speed on that variable.

 At 15:53 -0300 8/21/03, Cleber Hostalácio de Melo wrote:
Hi,

I need to change the stop word file used by the MySQL in fulltext
seach.

The documentation (www.mysql.com/doc/en/Fulltext_Fine-tuning.html)
explains how to inform to MySQL the new stop word file through the
ft_stopword_file variable.

 That section has a link to:

 http://www.mysql.com/doc/en/SHOW_VARIABLES.html

 which, under the ft_stopword_file entry, describes the file format.


But I could not find any reference to the layout of this file, for
example, how to separate each stop word in the list?

Does someone have any experience on that?

Thanks

Cleber.


 --
 Paul DuBois, Senior Technical Writer
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com

 Are you MySQL certified?  http://www.mysql.com/certification/


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



subselect doesnt work

2003-08-21 Thread Daniel Rossi
Hi there, I am trying to remove values from a list menu if the join table doesn't have 
keys when a key is selected, for instance:
locations
locationID

locations_join
locationID shotlistID

SELECT SQL_NO_CACHE l.locationID , l.location FROM locations l LEFT JOIN 
locations_join lj ON l.locationID = lj.locationID WHERE l.locationID NOT IN (select 
locationID FROM locations_join WHERE shotlistID IN (5069)) ORDER BY l.location ASC

So when a shotlistID is selected, all the keys from locations_join joined to that 
shotlistID would be removed from the locations list. Please help; I'm trying to do this 
in one query, to save getting all the keys into an array and then checking whether the 
values aren't in the array when generating the list.
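One caveat: a subquery like that NOT IN (...) only works in MySQL 4.1 and
up. On 3.23/4.0 the usual rewrite is a LEFT JOIN with an IS NULL test,
roughly like this (untested sketch of the same query):

SELECT SQL_NO_CACHE l.locationID, l.location
FROM locations l
LEFT JOIN locations_join lj
  ON l.locationID = lj.locationID AND lj.shotlistID = 5069
WHERE lj.locationID IS NULL
ORDER BY l.location ASC;

The shotlistID restriction has to stay in the ON clause; moving it to the
WHERE clause would turn the LEFT JOIN back into an inner join and drop the
unmatched rows you are after.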


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Copying distinct data to a new table

2003-08-21 Thread Rajesh Kumar
Dan Jones unknowingly asked us:
I'm attempting to normalize a database that was originally created as a
flat file.  I want to extract distinct values from a table and insert
them as new entries into a new table.  Unless I'm missing something,
INSERT doesn't allow you to SELECT data from another table for
insertion, and UPDATE doesn't allow you to create new rows.
I could do a SELECT DISTINCT INTO OUTFILE, then LOAD DATA INFILE but
that seems rather cumbersome.  Is there a straightforward way to do
this?
Yes, INSERT does allow you to SELECT data from another table.

http://www.mysql.com/doc/en/INSERT_SELECT.html

ex:

INSERT INTO tblTemp2 (fldID) SELECT tblTemp1.fldOrder_ID FROM tblTemp1 
WHERE tblTemp1.fldOrder_ID > 100;

--
[ Rajesh Kumar ]
__
Meet the guy at http://www.meetRajesh.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Copying distinct data to a new table

2003-08-21 Thread Dan Jones
On Thu, 2003-08-21 at 20:09, Dan Jones wrote:
 I'm attempting to normalize a database that was originally created as a
 flat file.  I want to extract distinct values from a table and insert
 them as new entries into a new table.  Unless I'm missing something,
 INSERT doesn't allow you to SELECT data from another table for
 insertion, and UPDATE doesn't allow you to create new rows.
 
 I could do a SELECT DISTINCT INTO OUTFILE, then LOAD DATA INFILE but
 that seems rather cumbersome.  Is there a straightforward way to do
 this?

Sheesh.  Nevermind.

INSERT INTO table (column) SELECT DISTINCT column2 FROM table2;

If you want to figure something out yourself, post a question to a
mailing list.  You'll find the answer within 30 seconds of hitting
SEND.






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



PHP API Question.

2003-08-21 Thread Rajesh Kumar
Hello,

I'm using a PHP API with MySql.

I can use the mysql_num_rows() function to find out the number of rows 
in the resultset.

But does anyone know of a way to find out the number of seconds it took 
to execute a particular query using PHP?

Why I need this is because I would like to execute the BENCHMARK() 
function using PHP, but there is no point to this, as I am not able to 
figure out the number of seconds it took.

Thanks.

--
[ Rajesh Kumar ]
__
Meet the guy at http://www.meetRajesh.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Master-Slave Replication

2003-08-21 Thread Dathan Vance Pattishall
Type SHOW SLAVE STATUS to figure out what the problem is on the slave.

Type SHOW FULL PROCESSLIST on the master to see if the slave is
connected and waiting for binlog updates.

Make sure the master is replicating.

Etc.
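A rough sketch of the checks (the status fields named here are the usual
ones reported by SHOW SLAVE STATUS):

SHOW SLAVE STATUS;       # on the slave: Slave_IO_Running and Slave_SQL_Running
                         # should both say Yes, and Last_Error should be empty
SHOW MASTER STATUS;      # on the master: binary logging on, current file/position
SHOW FULL PROCESSLIST;   # on the master: the slave's I/O thread should appear,
                         # waiting for binlog updates
SLAVE START;             # on the slave: restart the threads if they have stopped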

---Original Message-
--From: Sanya Shaik [mailto:[EMAIL PROTECTED]
--Sent: Thursday, August 21, 2003 12:51 PM
--To: [EMAIL PROTECTED]
--Subject: Master-Slave Replication
--
--Hi all,
--
-- Thanks for the answers for Master-Master replication. Right now i
want
--to try the Master-Slave replication first and then do a circular
--replication.
--
--Unfortunately, I am facing problems with updating slave
automatically.
--
--I started the slave and loaded the data from the master, later any
--changes made to master are not reflected on slave.
--
--Please HELP me in this regard.
--
--
---
--Do you Yahoo!?
--The New Yahoo! Search - Faster. Easier. Bingo.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: PHP API Question.

2003-08-21 Thread Paul DuBois
At 17:56 -0400 8/21/03, Rajesh Kumar wrote:
Hello,

I'm using a PHP API with MySql.

I can use the mysql_num_rows() function to find out the number of 
rows in the resultset.

But does anyone know of a way to find out the number of seconds it 
took to execute a particular query using PHP?

Why I need this is because, I would like to execute the BENCHMARK() 
function using PHP, but there is not point to this, as I am not able 
to figure out the number of seconds it took.
BENCHMARK() returns 0, always.  The way to use it is to check the time
before issuing your query, issue the query, and check the time after the
query finishes.  Then take the difference between the two times.
This is how the mysql program assesses query times.
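For example, in the mysql command-line client (the expression inside
BENCHMARK() is arbitrary, MD5() is just an illustration):

SELECT BENCHMARK(1000000, MD5('some string'));
# the statement itself returns 0; the number you want is the
# "1 row in set (x.xx sec)" line the client prints afterwards
# -- in PHP, take the time before and after mysql_query() instead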

Thanks.

--
[ Rajesh Kumar ]
__
Meet the guy at http://www.meetRajesh.com/


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: PHP mysql_connect randomly failing

2003-08-21 Thread Jon Drukman
Antony Dovgal wrote:
mysql_error is not set when mysql_connect fails, because there is no 
actual mysql resource to get the error message from.
yes, there is no mysql resource at this moment.
just don't specify it and mysql_error() will say something like Can't connect to local 
MySQL server through socket '/tmp/mysql.sock' (111).
That is true, but in this case neither $php_errmsg nor mysql_error() 
returns anything.  I print them both out when the connect fails and they 
are both just blank.

-jsd-



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAID or not?

2003-08-21 Thread Jon Drukman
Lefevre, Steven wrote:

I say go with RAID 5, on a controller card.

Mirroring just gives you backup, and you lose half your diskspace. It offers
no performance benefit, and actually the computer might have to work harder
to make sure the drives are in sync.
that is not true.  mirroring gives you double the read speed and half 
the write speed.  RAID5 gives you less than half the write speed.

that's why i said if your database app is mostly selects go for 
mirroring.  the OP said his app is about 50% select, so i say mirroring 
is a good choice.

and hey, what's wrong with having a backup?  the computer doesn't work 
any harder, it's all handled through the RAID controller card anyway.

-jsd-



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: How to create a stop word file?

2003-08-21 Thread Paul DuBois
At 10:12 +1000 8/22/03, [EMAIL PROTECTED] wrote:
sorry to be vague but what is the ft_stopword_file i havent been able to be
up to speed on that variable.
Read the page referenced below:

http://www.mysql.com/doc/en/SHOW_VARIABLES.html

It describes what the variable is for.


 At 15:53 -0300 8/21/03, Cleber Hostalácio de Melo wrote:
Hi,

I need to change the stop word file used by the MySQL in fulltext
seach.
The documentation (www.mysql.com/doc/en/Fulltext_Fine-tuning.html)
explains how to inform to MySQL the new stop word file through the
ft_stopword_file variable.
 That section has a link to:

  http://www.mysql.com/doc/en/SHOW_VARIABLES.html
 which, under the ft_stopword_file entry, describes the file format.

But I could not find any reference to the layout of this file, for
example, how to separate each stop word in the list?
Does someone have any experience on that?

Thanks

 Cleber.


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Copying distinct data to a new table

2003-08-21 Thread Martin Gainty
Glad that helped!
-M
- Original Message - 
From: Dan Jones [EMAIL PROTECTED]
To: MySQL Mailing List [EMAIL PROTECTED]
Sent: Thursday, August 21, 2003 5:46 PM
Subject: Re: Copying distinct data to a new table


 On Thu, 2003-08-21 at 20:09, Dan Jones wrote:
  I'm attempting to normalize a database that was originally created as a
  flat file.  I want to extract distinct values from a table and insert
  them as new entries into a new table.  Unless I'm missing something,
  INSERT doesn't allow you to SELECT data from another table for
  insertion, and UPDATE doesn't allow you to create new rows.
  
  I could do a SELECT DISTINCT INTO OUTFILE, then LOAD DATA INFILE but
  that seems rather cumbersome.  Is there a straightforward way to do
  this?
 
 Sheesh.  Nevermind.
 
 INSERT INTO table1 (column1) SELECT DISTINCT column2 FROM table2;
 
 If you want to figure something out yourself, post a question to a
 mailing list.  You'll find the answer within 30 seconds of hitting
 SEND.
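 
 For anyone finding this in the archives, a slightly fuller sketch of the same
 pattern; the table and column names here (flat_table, category, category_name)
 are made up for illustration, not Dan's actual schema:
 
 -- new lookup table for the normalized values
 CREATE TABLE category (
   id INT UNSIGNED NOT NULL AUTO_INCREMENT,
   name VARCHAR(100) NOT NULL,
   PRIMARY KEY (id),
   UNIQUE KEY (name)          -- guards against loading the same value twice
 ) TYPE=MyISAM;
 
 -- pull the distinct values out of the old flat table
 INSERT INTO category (name)
   SELECT DISTINCT category_name FROM flat_table;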
 
 
 
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow results with simple, well-indexed query

2003-08-21 Thread Jesse Sheidlower
On Thu, Aug 21, 2003 at 04:58:47PM -0700, Steven Roussey wrote:
  No, the contents can be of mixed case. Where does that leave things?
 
 **Index the length of the entire column.** It then should not need
 to do the filesort. Actually the binary option would not have
 really helped. The explain should say 'Using Index'. Get back to me on
 this and tell me the results.

Huh, I was told the exact opposite, that if most of the
entries are smaller than the maximum length of the field, you
should use an index about the size you expect most entries to
be. Why would you ever use a shorter index than the full column
length if it led to such performance degradation?

In any case, I reindexed cg.cw to the length of the entire column,
and the result is...

GOD! OK, sorry, I wasn't quite expecting this:

mysql> SELECT cw FROM cg WHERE cw BETWEEN 's' AND 'szzz' ORDER BY cw LIMIT 3000,5;
+---------------------+
| cw                  |
+---------------------+
| sacrifice hit       |
| sacrifice play      |
| sacrifice the earth |
| sacrifice throw     |
| sacrifice to        |
+---------------------+
5 rows in set (0.02 sec)

Wow!

But what's the explanation for this huge improvement? Again, I
was always told the opposite, and the Manual itself says: "If
it's very likely that a column has a unique prefix on the
first number of characters, it's better to only index this
prefix. MySQL supports an index on a part of a character
column. Shorter indexes are faster not only because they take
less disk space but also because they will give you more hits
in the index cache and thus fewer disk seeks." (At sec. 5.4.2.)
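
For the record, the reindexing step amounts to something like the following
(the index name cw comes from the cg definition posted later in the thread;
the old prefix length of 20 is only an assumed example, since the original
length wasn't shown):

ALTER TABLE cg DROP INDEX cw;
ALTER TABLE cg ADD INDEX cw (cw);        -- index the full varchar(100) column
-- the earlier prefix-style index would have looked something like:
-- ALTER TABLE cg ADD INDEX cw (cw(20));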

  In a working environment I'd never be querying on this table alone;
  it would always be joined to other tables that would limit things
  in some way, but these don't seem to be affecting things. The
  suggestions other people have made to try to get it to do the
  smaller queries first don't seem to be having much effect,
  unfortunately.
 
 Optimize the join once you know how to optimize its parts. One thing at
 a time.

Hmm. When I returned to the multiple-table query that started
this thread, but with the full-column index, it took a
staggering 1m 15s; rerunning it brought it down to 3.51 sec
(the original was 7.30 sec), but still nothing like the
improvement that the single-table change made just above. The
explain looks like this:

mysql> EXPLAIN SELECT cg.cw FROM cg,q,cit,sref
    -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id
    -> AND cg.cw BETWEEN 't' AND 'tzzz'
    -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
    -> ORDER BY cg.cw
    -> LIMIT 1000,10;
+-------+--------+-----------------+---------+---------+-------------+--------+-------------+
| table | type   | possible_keys   | key     | key_len | ref         | rows   | Extra       |
+-------+--------+-----------------+---------+---------+-------------+--------+-------------+
| cg    | range  | q_id,cw         | cw      |     101 | NULL        | 190550 | Using where |
| q     | eq_ref | PRIMARY,cit_id  | PRIMARY |       4 | cg.q_id     |      1 |             |
| cit   | eq_ref | PRIMARY,sref_id | PRIMARY |       4 | q.cit_id    |      1 |             |
| sref  | eq_ref | PRIMARY,cd      | PRIMARY |       4 | cit.sref_id |      1 | Using where |
+-------+--------+-----------------+---------+---------+-------------+--------+-------------+
4 rows in set (0.00 sec)

Where do I go from here?

And thanks for all the thought people have been putting into this.

Jesse Sheidlower


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow results with simple, well-indexed query

2003-08-21 Thread Steven Roussey
 GOD! OK, sorry, I wasn't quite expecting this:
 Wow!

:)

 
 But what's the explanation for this huge improvement? Again, I
 was always told the opposite, and the Manual itself says: ...

Yes, and it is true (usually). But your EXPLAIN showed a filesort, and
that is bad. What happens is that the resultset comes back ordered only
on the first few characters of that column (the indexed prefix). Since
you requested an ORDER BY, it had to go back and fully sort the
resultset. If you have the index cover the whole column, then this step
is not needed.

Even better, thanks to the LIMIT it can go right to the part of the
index it needs, and once it gets the 5 rows it is done (rather than
reading all of them for the sort step). Better still, in this particular
case all the information needed is in the index (the .MYI file), so it
did not even need to read the data file (.MYD). Less disk access is a
good thing...
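
A quick way to confirm both effects on the single-table query is to run
EXPLAIN on it again; with the full-column index the Extra column should say
"Using index" and there should be no "Using filesort" (this is just the
earlier query wrapped in EXPLAIN, nothing new):

mysql> EXPLAIN SELECT cw FROM cg
    -> WHERE cw BETWEEN 's' AND 'szzz'
    -> ORDER BY cw LIMIT 3000,5;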

 Hmm. When I returned to the multiple-table query that started
 this thread, 

And it was slow. Yeah, one thing at a time. It makes it easier for
people reading this list now or in the future (if it comes up in a
search result) if we go over things one item at a time.

Since I never saw the whole table definitions (the indexes in
particular), I'll have to try and guess through it. So try this:

ALTER TABLE cg add index(q_id,cw);

Tell me how that works and send the EXPLAIN.

The point here is that now you are doing a join and you are using both
columns to qualify the resultset. So we should use a composite index
rather than have individual ones (of which MySQL will choose only one).

Also, you can change the line
AND cg.cw BETWEEN 't' AND 'tzzz'
to
AND cg.cw LIKE 't%'
for better readability (how many zzz's are enough, eh?). Personal
preference.

--steve-


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to create a stop word file?

2003-08-21 Thread daniel
ft_stopword_file: The file from which to read the list of stopwords for full-
text searches. All the words from the file will be used; comments are not
honored. By default, the built-in list of stopwords is used (as defined in
`myisam/ft_static.c'). Setting this parameter to an empty string ('') will
disable stopword filtering. Note: FULLTEXT indexes must be rebuilt after
changing this variable. (This option is new for MySQL 4.0.10.)

Yes, but what exactly is in ft_static.c? I don't have root access to find
the file; the list should at least be on the fulltext search page. Can it help
with my punctuation problem? :\
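
In practice a custom stopword file is just a plain word list plus a server
option; here is a minimal sketch (the path, the words, and the table name are
all made-up examples, and the one-word-per-line layout simply follows the
"all words in the file are used, comments are not honored" rule quoted above):

# in my.cnf, under [mysqld]
ft_stopword_file = /etc/mysql/ft_stopwords.txt

# contents of /etc/mysql/ft_stopwords.txt, one word per line
the
and
with

-- then rebuild the FULLTEXT indexes on the affected tables, e.g.:
REPAIR TABLE articles QUICK;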



 At 10:12 +1000 8/22/03, [EMAIL PROTECTED] wrote:
Sorry to be vague, but what is the ft_stopword_file? I haven't been able
to get up to speed on that variable.

 Read the page referenced below:

 http://www.mysql.com/doc/en/SHOW_VARIABLES.html

 It describes what the variable is for.


  At 15:53 -0300 8/21/03, Cleber Hostalácio de Melo wrote:
Hi,

I need to change the stop word file used by MySQL in fulltext
search.

The documentation (www.mysql.com/doc/en/Fulltext_Fine-tuning.html)
explains how to point MySQL at the new stop word file through the
ft_stopword_file variable.

  That section has a link to:

   http://www.mysql.com/doc/en/SHOW_VARIABLES.html

  which, under the ft_stopword_file entry, describes the file format.


But I could not find any reference to the layout of this file; for
example, how are the stop words in the list separated?

Does someone have any experience on that?

Thanks

  Cleber.


 --
 Paul DuBois, Senior Technical Writer
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com

 Are you MySQL certified?  http://www.mysql.com/certification/


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: PHP mysql_connect randomly failing

2003-08-21 Thread Daniel Rossi
Funny, I was just having the same problemo with MySQL 3.22 on PHP 3, the most painful
system to debug (I hate it to bits). Anyway, mysql_error() doesn't return anything on
permission or database connection errors. I wonder if that is sorted out in MySQL 4?
Maybe this is a php.net question.

 Jon Drukman [EMAIL PROTECTED] 08/22/03 11:18am 
Antony Dovgal wrote:
mysql_error() is not set when mysql_connect() fails, because there is no
actual mysql resource to get the error message from.

 Yes, there is no mysql resource at this moment.
 Just don't specify it and mysql_error() will return something like "Can't connect to
 local MySQL server through socket '/tmp/mysql.sock' (111)".

That is true, but in this case, neither $php_errmsg nor mysql_error()
returns anything.  I print them both out when the connect fails and they
are both just blank.

-jsd-




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: RAID or not?

2003-08-21 Thread Dan Nelson
In the last episode (Aug 21), Jon Drukman said:
 Lefevre, Steven wrote:
 I say go with RAID 5, on a controller card.
  Mirroring just gives you backup, and you lose half your diskspace.
  It offers no performance benefit, and actually the computer might
  have to work harder to make sure the drives are in sync.
 
 that is not true.  mirroring gives you double the read speed and half
 the write speed.  RAID5 gives you less than half the write speed.

Software RAID5 gives you around 1/4 the write speed (each small write
turns into two reads and two writes: old data, old parity, new data, new
parity).  Hardware RAID5 with a battery-backed cache can completely
remove the penalty, by either waiting for an entire stripe of data to
flush in one operation, or delaying the extra I/O operations until the
disk head happens to be near that block anyway (or until the disk is
otherwise idle).  Make sure you max out the RAM in your RAID card; it's
cheap.

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow results with simple, well-indexed query

2003-08-21 Thread Jesse Sheidlower
On Thu, Aug 21, 2003 at 06:58:29PM -0700, Steven Roussey wrote:
Jesse Sheidlower wrote:
 
  Hmm. When I returned to the multiple-table query that started
  this thread, 
 
 And it was slow. Yeah, one thing at a time. It makes it easier for
 people reading this list now or in the future (if it comes up in a
 search result) if we go over things one item at a time.
 
 Since I never saw the whole table definitions (the indexes in
 particular), I'll have to try and guess through it. So try this:

All the indexes were single-column indexes, partly because I haven't
yet made the effort to understand composite indexes. I guess it's
time ;-).

Here's the CREATEs, somewhat edited to remove parts not relevant
to this discussion, to save space:

CREATE TABLE `cg` (
  `q_id` int(10) unsigned NOT NULL default '0',
  `cw` varchar(100) default NULL,
  `exp` text,
  KEY `q_id` (`q_id`),
  KEY `cw` (`cw`),
  KEY `q_id_2` (`q_id`,`cw`),
  FULLTEXT KEY `exp` (`exp`)
) TYPE=MyISAM

CREATE TABLE `q` (
  `id` int(10) unsigned NOT NULL default '0',
  `cit_id` int(10) unsigned NOT NULL default '0',
  `q_tag` enum('q','qau','qca','qna','qsa') default NULL,
  `qt` text,
  PRIMARY KEY  (`id`),
  KEY `cit_id` (`cit_id`),
  FULLTEXT KEY `qt` (`qt`)
) TYPE=MyISAM 

CREATE TABLE `cit` (
  `id` int(10) unsigned NOT NULL default '0',
  `sref_id` int(10) unsigned NOT NULL default '0',
  `w` varchar(200) default NULL,
  PRIMARY KEY  (`id`),
  KEY `sref_id` (`sref_id`),
  FULLTEXT KEY `w` (`w`)
) TYPE=MyISAM 

CREATE TABLE `sref` (
  `id` int(10) unsigned NOT NULL default '0',
  `rdr` varchar(30) default NULL,
  `kbd` varchar(20) default NULL,
  `cd` date default NULL,
  PRIMARY KEY  (`id`),
  KEY `rdr` (`rdr`),
  KEY `kbd` (`kbd`),
  KEY `cd` (`cd`)
) TYPE=MyISAM

 ALTER TABLE cg add index(q_id,cw);

I did this, as is reflected in the CREATE above.

 Tell me how that works and send the EXPLAIN.

Unfortunately, it made no difference--the first execution was
about 1 m 15 sec, and one immediately thereafter was about 3.5 sec,
as before. The EXPLAIN shows:

mysql> EXPLAIN SELECT cg.cw FROM cg,q,cit,sref
    -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id
    -> AND cg.cw LIKE 't%'
    -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
    -> ORDER BY cg.cw
    -> LIMIT 1000,10;
+-------+--------+-----------------+---------+---------+-------------+--------+-------------+
| table | type   | possible_keys   | key     | key_len | ref         | rows   | Extra       |
+-------+--------+-----------------+---------+---------+-------------+--------+-------------+
| cg    | range  | q_id,cw,q_id_2  | cw      |     101 | NULL        | 190550 | Using where |
| q     | eq_ref | PRIMARY,cit_id  | PRIMARY |       4 | cg.q_id     |      1 |             |
| cit   | eq_ref | PRIMARY,sref_id | PRIMARY |       4 | q.cit_id    |      1 |             |
| sref  | eq_ref | PRIMARY,cd      | PRIMARY |       4 | cit.sref_id |      1 | Using where |
+-------+--------+-----------------+---------+---------+-------------+--------+-------------+

 The point here is that now you are doing a join and you are using both
 columns to qualify the resultset. So we should use a composite index
 rather than have individual ones (of which MySQL will choose only one).

What does this mean for regular searching? In most cases, there will be
some criteria entered that need to be searched on, and the id fields 
will also be needed for the joins. For example, in the database, one
might want to search based on cg.exp (fulltext), sref.rdr, sref.cd
(the date field), sref.kbd, cit.w, and various other ones I've edited
out of this display to save space, and often a combination of several
of these at once. How should I set up indexes for the potential 
searches that might be executed? 

(I should mention that this is a read-only database; it's built from
a parsed SGML file and is never added to directly, if that's an
issue.)

 Also, you can change the line
   AND cg.cw BETWEEN 't' AND 'tzzz'
 to
   AND cg.cw LIKE 't%'
 for better readability (how many zzz's are enough, eh?). Personal
 preference.

No, I agree, and it was originally LIKE 't%' and is still like
that in the actual code being generated by the query form. I changed
it to the BETWEEN because in some playing around it seemed to be 
faster this way, and in fact I was worried about having to
generate the 

BETWEEN \'$val\' AND \' . $val . \'

thing in my program. If this was a glitch of my badly-indexed
original, I'm glad to fix it.

Thanks again for taking the time to look this over.

Best,

Jesse

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



When to use NULL as default value

2003-08-21 Thread Charles Kline
Hi all,

I pretty much set a field's default value to NULL if it is not a required
field in my app. Just wondering if this is the way to go, since I just
sort of guessed at this method ;)
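
A minimal illustration of that convention (the table and column names here
are invented, not from Charles's schema):

CREATE TABLE person (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(50) NOT NULL,             -- required in the app, so NULL is disallowed
  nickname VARCHAR(50) DEFAULT NULL,     -- optional; NULL simply means "not supplied"
  PRIMARY KEY (id)
) TYPE=MyISAM;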

- Charles

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


How many records can a single MySQL table hold?

2003-08-21 Thread Rupak Banerjee
Hi,
  We are using MySQL version 3.27.53 on a Red Hat Linux platform, version
7.2. For the past couple of months we have noticed that the performance
of the server has gone down very badly. Every single insertion is
taking a hell of a lot of time. The particular table has only 150,000 records.
Is there any remedy for this problem?
Thanks and regards,
Rupak Banerjee.


-- 
Visit http://www.brandemataram.com
Login To The Future




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



linking databases

2003-08-21 Thread Robert Morgan
Hi, is it possible to link databases within MySQL? I have an Access database that I
need to import into MySQL; this db consists of 4 other dbs linked together, sharing a
common switchboard. Can I link the tables in one db to tables in another db?
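
For what it's worth, MySQL does let a single statement reference tables from
more than one database on the same server by qualifying them as
db_name.table_name; a made-up example with hypothetical databases db1 and db2:

SELECT a.name, b.total
FROM db1.customers AS a, db2.orders AS b
WHERE a.id = b.customer_id;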

thanks 
 Bob

Need For SPEED

2003-08-21 Thread Creigh Shank
We're using an Apache/PHP/MySQL/Linux (Red Hat 8.0) solution, PHPList, to create 
an e-mailing list for our 5.6 million book club members.  Unfortunately, 
the import speed is very low (at record number 150,000 the rate 
is about 2,000 records per hour).  We're running on the following:

P4 (1.5 GHz), 1.2 Gbytes RAM (650 Mbytes RAM disk using ramfs), IDE drive 
(7,200 rpm)

So far we've moved the MySQL data files (var/lib/mysql), PHP /tmp and 
upload directories and PHPlist web site files to RAM Disk (still just 
testing - not yet dealing with data safety issues).  With all of this 
tuning we're still at only 2,000 records per hour for uploading.

We need to be at 100,000 records per hour (uploading and sending seem to 
run at about the same rate - we need to be able to send to all book club 
members in the same week).  Any suggestions?

Creigh

(We're planning to run the system on a server with dual Opterons, 8 Gbytes 
RAM and RAID-5 SCSI drives, but I don't think the additional system 
horsepower will solve our problem.)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: [PHP-DB] Need For SPEED

2003-08-21 Thread John Stoffel
 Creigh == Creigh Shank [EMAIL PROTECTED] writes:

Creigh Using an Apache/PHP/MySQL/Linux (Redhat 8.0) solution,
Creigh PHPList, to create an e-mailing list for our 5.6 million book
Creigh club members.  Unfortunately, the import speed for importing
Creigh records (at record number 150,000 the rate is about 2,000
Creigh records per hour).  We're running on the following:

Creigh P4 (1.5 Ghz), 1.2 Gbytes RAM (650 Mbytes RAM Disk using
Creigh ramfs), IDE drive (72,00 rpm)

Creigh So far we've moved the MySQL data files (var/lib/mysql), PHP
Creigh /tmp and upload directories and PHPlist web site files to RAM
Creigh Disk (still just testing - not yet dealing with data safety
Creigh issues).  With all of this tuning we're still at only 2,000
Creigh records per hour for uploading.

Creigh We need to be at 100,000 records per hour (uploading and
Creigh sending seem to run at about the same rate - we need to be
Creigh able to send to all book club members in the same week).  Any
Creigh suggestions?

What is your data source for the records?  Are they in a bunch of flat
files?  If so, cut out the overhead of Apache/PHP and use Perl with
the DBI modules to insert the records directly into your database.
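
If the records really are in flat files, the SQL-side shortcut (not something
John spelled out here, but the usual bulk-load route) is LOAD DATA INFILE; the
file name and column list below are invented, and the DISABLE/ENABLE KEYS
lines apply only to MyISAM on MySQL 4.0 or later, so skip them on 3.23 or on
InnoDB:

ALTER TABLE members DISABLE KEYS;     -- defer non-unique index maintenance during the load
LOAD DATA INFILE '/tmp/members.csv'
  INTO TABLE members
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (email, first_name, last_name);
ALTER TABLE members ENABLE KEYS;      -- build the deferred indexes in one pass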

Also, make sure you have the proper indexes for your database; that can
be a killer.  And for that size of DB, I'd make sure you're using
InnoDB tables.

What are your mysql settings like as well?  You can tune them up quite
a bit since your import will be the big problem at first, but then it
will mostly be just reads on the tables when you send out emails.

John
   John Stoffel - Senior Unix Systems Administrator - Lucent Technologies
 [EMAIL PROTECTED] - http://www.lucent.com - 978-952-7830

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


