MyISAM and InnoDB co-existence

2005-09-13 Thread PRASHANT N
hi,

I need to have two databases in our development/production environment which 
together require both the InnoDB and MyISAM engines. This is because DB1 
contains spatial/geometry data, which has to be on the MyISAM engine, and DB2, 
which needs transactional support, has to be InnoDB. 

How can I get the MyISAM and InnoDB engines to coexist on the same server?

regards
shann





Re: MyISAM and InnoDB co-existence

2005-09-13 Thread Jasper Bryant-Greene

PRASHANT N wrote:

hi,

I need to have 2 DB in our development/production environment which
require both InnoDB and MyISAM engines. This is because DB1 contains
spatial/geom data which has to be on MyISAM engine and DB2 which
needs transactional support has to be InnoDB.

Now how to achieve this co-existence of both the MyISAM and InnoDB
engines on the same server?


You don't need to do anything special. As long as you enable InnoDB 
support (MyISAM is built in by default), just use the TYPE=InnoDB option 
on CREATE TABLE, or select InnoDB when creating the table in your chosen 
GUI admin tool. Tables for which you don't select an engine will be MyISAM.
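
For example, a minimal sketch of the two kinds of table living on one 
server (the database, table, and column names here are made up):

  CREATE TABLE db1.places (
      id       INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      location GEOMETRY NOT NULL,
      SPATIAL INDEX (location)
  ) TYPE = MyISAM;

  CREATE TABLE db2.orders (
      id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      amount DECIMAL(10,2) NOT NULL
  ) TYPE = InnoDB;

The engine is chosen per table, not per database, so both can coexist on 
the same server.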


--
Jasper Bryant-Greene
Freelance web developer
http://jasper.bryant-greene.name/

If you find my advice useful, please consider donating to a poor
student! You can choose whatever amount you think my advice was
worth to you. http://tinyurl.com/7oa5s




Re: MyISAM and InnoDB co-existence

2005-09-13 Thread PRASHANT N
hi,
thanks for the speedy reply. I already have both of the DBs on my server. Is 
there any way/method by which DB2, which needs transaction support, can be 
converted from MyISAM to InnoDB format without any hitch?

regards
shann





Re: MyISAM and InnoDB co-existence

2005-09-13 Thread Jasper Bryant-Greene

PRASHANT N wrote:

thanks for the speedy reply. I already have both the DBs in my
server. is there any way/method that the DB2 which needs transaction
support can be converted from myisam to innodb format without any
hitch


http://dev.mysql.com/doc/mysql/en/alter-table.html

I would strongly recommend that if you are running a MySQL server in a 
production environment you familiarise yourself with the above manual, 
or you *will* run into problems.
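
For reference, a minimal sketch of the conversion (the table name is 
hypothetical; take a backup first):

  -- Convert one table from MyISAM to InnoDB
  -- (MySQL 4.x also accepts the older TYPE = InnoDB spelling)
  ALTER TABLE db2.orders ENGINE = InnoDB;

  -- Verify the engine afterwards
  SHOW TABLE STATUS FROM db2 LIKE 'orders';

Each table has to be altered individually; there is no per-database switch.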


--
Jasper Bryant-Greene
Freelance web developer
http://jasper.bryant-greene.name/




Re: Benchmark of MyISAM vs InnoDB vs InnoDB without FKs?!

2005-09-13 Thread Gleb Paharenko
Hello.



Usually only benchmarks will show the real picture for you. Create the 
foreign keys and perform some tests, then temporarily disable FKs using 
SET FOREIGN_KEY_CHECKS=0 and repeat the performance measurement. Super 
Smack is a good tool for this kind of analysis. See:

  http://dev.mysql.com/doc/mysql/en/custom-benchmarks.html
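
A minimal sketch of the idea in SQL (the table and column names are made 
up; the timings themselves would come from a tool such as Super Smack):

  -- Run the same insert workload once with FK checks on, once with them off
  SET FOREIGN_KEY_CHECKS = 0;
  INSERT INTO child (id, parent_id) VALUES (1, 42);   -- parent_id is no longer validated
  SET FOREIGN_KEY_CHECKS = 1;

The setting is per session, so it only affects the connection that issues it.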





Kevin Burton [EMAIL PROTECTED] wrote:

 Here's something I've been thinking about.

 I want to use InnoDB without FKs. I don't need or want referential
 integrity in my app (due to a schema and performance issue).

 Basically I just create FKs in my OR layer and my app enforces the rules.
 The column is still an _ID column so I visually know a FK when I see one,
 but InnoDB doesn't have to do any runtime checks on insert.

 My question is whether InnoDB will be faster without them, and if so, by
 how much. If it's close to the speed of MyISAM then I'll be a happy camper.

 Thoughts?

 Kevin






Re: Error msg: ERROR 1030 (HY000) at line 52: Got error -1 from storage engine

2005-09-13 Thread Gleb Paharenko
Hello.



There is not enough information to make a conclusion. Please provide the 
OS and MySQL version, and include the problematic query and the 
definitions of the tables affected by the query.





Logg, Connie A. [EMAIL PROTECTED] wrote:

 I am trying to reload/rebuild a table from the backup and I get this
 message. I cannot find any references for it.

 [EMAIL PROTECTED] mysql-backup]# mysql -u root -p iepm < /home/iepm/privatedir/mysql-backup/iepm-bw.cern.ch.iepm-abwedata.08-16-2005
 Enter password:
 ERROR 1030 (HY000) at line 52: Got error -1 from storage engine
 [EMAIL PROTECTED] mysql-backup]#

 Connie Logg, Network Analyst
 Stanford Linear Accelerator Center
 ph: 650-926-2879
 Happiness is found along the way, not at the end of the road, and 'IF'
 is the middle word in life.






MySQL 4.0 examples of data with different encodings

2005-09-13 Thread Bastian Balthazar Bux
Hi all,
I want to test an upgrade path for a MySQL database from 4.0 to 4.1.
Could someone provide a link to, or a small slice of a dump from, a 4.0
server that uses encodings other than latin1?

Thanks in advance,
Francesco R.




Major Difference in response times when using Load Infile utility

2005-09-13 Thread Sujay Koduri
hi ,
 
I am using the Load Infile utility to load data from a file into a MySQL DB.
When trying to load different amounts of data, I observed a notable
difference in the time taken.
 
Test 1
 
Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.
 
Test2
 
Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.
 
Can someone explain why this difference occurs? Also, it would be great if
someone could suggest how we can improve the performance of the first test.
 
Thank you
sujay


Re: Major Difference in response times when using Load Infile utility

2005-09-13 Thread Peter J Milanese
Are there indexes on the table? Could be that.

--Original Message--
From: Sujay Koduri
To: mysql
Sent: Sep 13, 2005 5:24 AM
Subject: Major Difference in response times when using Load Infile utility

hi ,

I am using the Load Infile utility to load data from file to MySQL DB.
When trying to load different amounts of data, I observed a notable
difference in the time taken by that.

Test 1

Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.

Test2

Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.

Can some one explain why this difference is coming. Also it will be great
if
someone can suggest how we can improve the performance of the first test.

Thank you
sujay


-
Sent from my NYPL BlackBerry Handheld.







RE: Major Difference in response times when using Load Infile utility

2005-09-13 Thread Sujay Koduri

Yes, there are indexes on the table. Do you mean the indexes are the
culprit?

sujay 

-Original Message-
From: Peter J Milanese [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 13, 2005 4:58 PM
To: Sujay Koduri; mysql
Subject: Re: Major Difference in response times when using Load Infile
utility

Are there indexes on the table? Could be that.

--Original Message--
From: Sujay Koduri
To: mysql
Sent: Sep 13, 2005 5:24 AM
Subject: Major Difference in response times when using Load Infile utility

hi ,

I am using the Load Infile utility to load data from file to MySQL DB.
When trying to load different amounts of data, I observed a notable
difference in the time taken by that.

Test 1

Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.

Test2

Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.

Can some one explain why this difference is coming. Also it will be great if
someone can suggest how we can improve the performance of the first test.

Thank you
sujay


-
Sent from my NYPL BlackBerry Handheld.






Re: Major Difference in response times when using Load Infile utility

2005-09-13 Thread Brent Baisley
Indexing can play a big role in the load time differential. When
loading 0.45 million rows, MySQL may be able to handle the sorting for
indexes in memory. With 5.5 million, it's more likely to have to go
to a temp file. You could then end up thrashing your disk, causing big
slowdowns. If you think about it, you'll be reading the import file
from disk, MySQL will be writing to disk as it loads the database,
and then the indexing will be reading and writing to disk for sorting.
That's a lot of activity.


Disable or remove your indexes when doing a huge import, then enable  
or add them when you're done.
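
A minimal sketch of that approach for a MyISAM table (the table and file
names are made up):

  ALTER TABLE stats DISABLE KEYS;

  LOAD DATA INFILE '/tmp/stats.csv'
      INTO TABLE stats
      FIELDS TERMINATED BY ',';

  ALTER TABLE stats ENABLE KEYS;   -- rebuilds the non-unique indexes in one pass

Note that DISABLE KEYS only affects non-unique indexes on MyISAM tables;
unique and primary keys are still maintained row by row during the load.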



On Sep 13, 2005, at 7:24 AM, Sujay Koduri wrote:


hi ,

I am using the Load Infile utility to load data from file to MySQL DB.
When trying to load different amounts of data, I observed a notable
difference in the time taken by that.

Test 1

Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.

Test2

Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.

Can some one explain why this difference is coming. Also it will be  
great if
someone can suggest how we can improve the performance of the first  
test.


Thank you
sujay



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577






RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread Ryan Stille
Josh Chamas wrote:
 Ryan Stille wrote:
 Has anyone ever used this MSSQL-to-MySQL converter?  It's pretty
 reasonable at $40, and the demo output I got looked pretty good.  But
 I wanted to see if there is anything I should be wary about.
 http://www.convert-in.com/mss2sql.htm
 
 
 I have done a test migration before with it and it seemed fine.
 
 I would also stay tuned for our MySQL Migration Toolkit to
 support MSSQL migrations.  You can find it here:
 
http://www.mysql.com/products/tools/migration-toolkit/

I did go ahead and purchase the product. It worked pretty well, though I had
to modify the output a little.  When exporting the data using this tool, it
added ALTER TABLE lines that changed the DATETIME fields to DATE fields.
I guess it saw that the time for all the dates was 0:00 and decided this
information was not necessary.

I did try the MySQL Migration Toolkit.  Some of the definitions it
generated were incorrect.  One thing I remember for sure is that it was
turning my TEXT(16) fields (which are pretty long) into VARCHAR(16),
which is pretty short.  But I understand it is still Alpha software.

Thanks,
-Ryan




RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread SGreen
Ryan Stille [EMAIL PROTECTED] wrote on 09/13/2005 08:42:05 AM:

 Josh Chamas wrote:
  Ryan Stille wrote:
  Has anyone ever used this MSSQL-to-MySQL converter?  It's pretty
  reasonable at $40, and the demo output I got looked pretty good.  But
  I wanted to see if there is anything I should be wary about.
  http://www.convert-in.com/mss2sql.htm
  
  
  I have done a test migration before with it and it seemed fine.
  
  I would also stay tuned for our MySQL Migration Toolkit to
  support MSSQL migrations.  You can find it here:
  
 http://www.mysql.com/products/tools/migration-toolkit/
 
 I did go ahead and purchase the product. It worked pretty well, I had to
 modify the output a little.  When exporting the data using this tool, it
 added ALTER TABLE lines that changed the DATETIME fields to DATE fields.
 I guess it saw that the time for all the dates was 0:00 and thought this
 information was not necessary.
 
 I did try the MySQL Migration Toolkit.  Some of the definitions it
 generated were incorrect.  One thing I remember for sure is that it was
 turning my TEXT(16) fields (which are pretty long) into VARCHAR(16),
 which is pretty short.  But I understand it is still Alpha software.
 
 Thanks,
 -Ryan
 

OK, you confused me. How can 16 be bigger than 16? For character-based 
fields, the number in the parentheses shows the length of the field, does 
it not? The first part, the "text" or "char" or "varchar" or ..., tells
the engine what kind of data a field will contain.

How can TEXT(16) hold more data than VARCHAR(16) ?



RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread Ryan Stille
 OK, you confused me. How can 16 be bigger than 16? For character-based
 fields, the number in the parentheses shows the length of the field,
 does it not? The first part, the text or char or varchar or ...,
 tells the engine what kind of data a field will contain. 

 How can TEXT(16) hold more data than VARCHAR(16) ? 

I'm not sure how the (16) is used on a TEXT field.  MS SQL has TEXT
fields and VARCHAR fields.  A VARCHAR(16) in MSSQL would be the same as
a VARCHAR(16) in MySQL.   But in my MSSQL database I have data with
thousands of characters in a TEXT(16) field.

-Ryan







Newbie MySQL/PHP question re output formatting

2005-09-13 Thread Bill Whitacre

I can get this to work just fine:



<?php
$number = 23999.39;
print "$";
print number_format($number, 2);

?>



Comes out $23,999.39

I'd like to use the number_format() thingie on values in the array returned
from a mysql_query.


My current program snippet looks like:


$res = mysql_query("SELECT org, COUNT(*), SUM(annual_cost) AS cost FROM a05
                    GROUP BY org ORDER BY cost DESC", $dbh);

if (!$res) {
    echo mysql_errno() . ": " . mysql_error();
    return 0;
}

print "<table border=1>";

while ($thearray = mysql_fetch_array($res)) {

    printf("<tr><td> {$thearray[org]} </td>
            <td align=right> {$thearray["COUNT(*)"]} </td>
            <td align=right> $ {$thearray[cost]} </td></tr>");

}

print "</table>";



and works fine -- see http://ibbmonitor.com/sked_1.php, 3rd block  
of stuff down.


If I replace

{$thearray[cost]}

with

number_format({$thearray[cost]}, 2)

I get

$ number_format(7842554.24, 2)

in the cell where I would expect to get

$ 7,842,554.24

Any idea what I'm doing wrong?

Clearly, I don't understand arrays very well.

Thanks VERY much for any help on this.

bw

---
Bill Whitacre
[EMAIL PROTECTED]





RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread emierzwa
I believe the (16) is the size of the pointer used to locate the text
data elsewhere on disk. They are all 16 bytes... Sybase does this as
well, and it is an odd (OK, it's stupid) notation. This is one of those
things you need to adjust by hand when converting from one DB to another.
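
On the MySQL side that hand adjustment is typically just redefining the
column; a minimal sketch (the table and column names are made up):

  -- An MSSQL TEXT(16) column should become a MySQL TEXT type, not VARCHAR(16)
  ALTER TABLE articles MODIFY body TEXT;

MEDIUMTEXT or LONGTEXT may be needed instead if the source data can exceed
64 KB per value.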

Ed

-Original Message-
From: Ryan Stille [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 13, 2005 7:38 AM
To: [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Subject: RE: Intelligent Converters product: MSSQL-to-MySQL

 OK, you confused me. How can 16 be bigger than 16? For character-based
 fields, the number in the parentheses shows the length of the field,
 does it not? The first part, the text or char or varchar or ...,
 tells the engine what kind of data a field will contain. 

 How can TEXT(16) hold more data than VARCHAR(16) ? 

I'm not sure how the (16) is used on a TEXT field.  MS SQL has TEXT
fields and VARCHAR fields.  A VARCHAR(16) in MSSQL would be the same as
a VARCHAR(16) in MySQL.   But in my MSSQL database I have data with
thousands of characters in a TEXT(16) field.

-Ryan




Re: Newbie MySQL/PHP question re output formatting

2005-09-13 Thread Jigal van Hemert

Bill Whitacre wrote:

printf("<tr><td> {$thearray[org]} </td>
        <td align=right> {$thearray["COUNT(*)"]} </td>
        <td align=right> $ {$thearray[cost]} </td></tr>");

If I replace
{$thearray[cost]}
with
number_format({$thearray[cost]}, 2)


Although this is a MySQL mailing list and your problem is not MySQL
related but a PHP question, I'll give you a brief answer.


PHP does not evaluate functions inside a double quoted string, so you 
should use:

"<tr><td> " . number_format($thearray['cost'], 2) . " </td><td align...

Furthermore, using $thearray[cost] is not advisable; PHP will try to 
find a constant named 'cost' and if it can't find one it will use the 
string itself as the value. Use a real string (quoted) instead of 
relying on this feature:

$thearray['cost'] or $thearray["cost"]

Also you can use aliases in your query to avoid things like
$thearray["COUNT(*)"];

Use something like this in your query:
SELECT COUNT(*) AS `count` FROM ...
and you can use $thearray['count'] instead
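
Applied to the original query, that might look like this (a sketch reusing
the poster's table and columns; the alias name org_count is arbitrary):

  SELECT org,
         COUNT(*)         AS org_count,
         SUM(annual_cost) AS cost
  FROM a05
  GROUP BY org
  ORDER BY cost DESC;

so the PHP side can simply read $thearray['org_count'] and $thearray['cost'].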

As a last pointer, printf() is pretty much useless here since you don't
use any of its format-specifier features. print() or echo() are more
suitable in this case.


Regards, Jigal




RE: cluster or replication

2005-09-13 Thread Alan Williamson
 I have 10 servers in different locations and I want to broadcast changes
 between them. I would like the data to always be synchronized across this
 network; each database has the same tables and the same structure. Each
 insert or update should be executed on every server within this broadcast.
 What's the best choice?

At the moment, replication is your best option.  Replication works very
nicely over wide-area networks, where the bandwidth between each node
could be dramatically different.

The only constraint you'll have here is that there can only be one MASTER,
which has to accept all the INSERT/UPDATE/DELETE statements (or anything
that will make the data change).  All other nodes would be considered
SLAVEs and be READ-ONLY.
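
For reference, a minimal sketch of the slave side (the hostname,
credentials, and log coordinates below are placeholders, not values from
this thread):

  -- On each read-only slave, after setting server-id (and optionally
  -- read_only) in my.cnf:
  CHANGE MASTER TO
      MASTER_HOST     = 'master.example.com',
      MASTER_USER     = 'repl',
      MASTER_PASSWORD = '***',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS  = 4;
  START SLAVE;

All writes still have to be sent to the single master.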

Hope this helps.

a

-- 
 Alan Williamson, Technology Evangelist
 SpikeSource Inc.
 Daily OS News @ http://compiledby.spikesource.com/




Re: Newbie MySQL/PHP question re output formatting

2005-09-13 Thread Nuno Pereira

Bill Whitacre wrote:

I can get this to work just fine:



<?php
$number = 23999.39;
print "$";
print number_format($number, 2);

?>



Comes out $23,999.39

I'd like use the number_format() thingie on an array returned from a  
mysql_query.


My current program snippet looks like:


$res = mysql_query("SELECT org, COUNT(*), SUM(annual_cost) AS cost FROM a05
                    GROUP BY org ORDER BY cost DESC", $dbh);

if (!$res) {
    echo mysql_errno() . ": " . mysql_error();
    return 0;
}

print "<table border=1>";

while ($thearray = mysql_fetch_array($res)) {

    printf("<tr><td> {$thearray[org]} </td>
            <td align=right> {$thearray["COUNT(*)"]} </td>
            <td align=right> $ {$thearray[cost]} </td></tr>");

}

print "</table>";



and works fine -- see http://ibbmonitor.com/sked_1.php, 3rd block  of 
stuff down.


If I replace

{$thearray[cost]}

with

number_format({$thearray[cost]}, 2)

I get

$ number_format(7842554.24, 2)


The issue is that PHP replaces $thearray[cost] with the contents of
that variable (the fact that it is an array element doesn't matter). In
the second case it again replaces only $thearray[cost] with the contents
of the variable, but what you want placed there is the result of the
function.

To do that, change the line from

printf("<tr><td> number_format({$thearray[cost]}, 2) </td>

to

printf("<tr><td> " . number_format({$thearray[cost]}, 2) . " </td>


in the cell where I would expect to get

$ 7,842,554.24

Any idea what I'm doing wrong?

Clearly, I don't understand arrays very well.

Thanks VERY much for any help on this.

bw

---
Bill Whitacre
[EMAIL PROTECTED]




--
Nuno Pereira




RE: Major Difference in response times when using Load Infile utility

2005-09-13 Thread Alan Williamson
 Test 1
 Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.
  
 Test2
 Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.

Is this an InnoDB database by any chance?  If it is, and it is a clean
import, then disable the FOREIGN_KEY_CHECKS.  

  SET AUTOCOMMIT = 0;
  SET FOREIGN_KEY_CHECKS=0;

This is a small tip I picked up from a comment someone had left in the
MySQL documentation, and it has to date been one of those tips that has
literally saved DAYS of my life.

a

ps Remember to put them back on again after you finish the import

  SET AUTOCOMMIT = 1;
  SET FOREIGN_KEY_CHECKS=1;
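
Put together, a minimal sketch of a clean InnoDB import (the file and
table names are made up):

  SET AUTOCOMMIT = 0;
  SET FOREIGN_KEY_CHECKS = 0;

  LOAD DATA INFILE '/tmp/stats.csv'
      INTO TABLE stats
      FIELDS TERMINATED BY ',';

  COMMIT;

  SET FOREIGN_KEY_CHECKS = 1;
  SET AUTOCOMMIT = 1;

With autocommit off, the COMMIT at the end is what actually makes the
loaded rows permanent.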

-- 
 Alan Williamson, Technology Evangelist
 SpikeSource Inc.
 Daily OS News @ http://compiledby.spikesource.com/




RE: LVM-snapshot + mysqldump -- is this a reasonable backup strategy?

2005-09-13 Thread James G. Sack (jim)
quoting Alan Williamson [EMAIL PROTECTED] ..

 This recipe is intended to minimize the impact on ongoing database
 operations by inhibiting writes only during a relatively speedy
 operation (creating a snapshot). The long dump operation can ...

This seems to be a rather long winded way of doing this.  Why not 
replicate the database and therefore not have to bring it down ever.

I wrote a blog entry about this very thing, and had some interesting 
comments back on
  http://blog.spikesource.com/mysql_hotbackup.htm

Hope this helps,


Thanks for the input. 

It does seem more complicated than it ought to be, doesn't it?
The reason for trying to do it that way
 (e.g.:
    flush tables with read lock, create snapshot, unlock;
    mount the snapshot & run a 2nd mysqld, mysqldump;
    kill the 2nd mysqld, umount the snap, remove the snap;
    whew!)
is that I have only one server, and believe that the snapshot was
invented just for such a purpose -- namely: quickly capture a point-in-
time image of the database for a leisurely backup. 

In more words, locking is deemed required to achieve a consistent db
state for a backup operation, but the lock need only be held during the
short time required to create the snapshot. Then the much longer backup
(dump) operation can proceed by reading from the (effectively static) db
contained in the snapshot. 
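
For reference, the SQL side of that window is tiny (the snapshot command
in the middle is whatever your LVM tooling provides, shown only as a
comment):

  FLUSH TABLES WITH READ LOCK;   -- quiesce writes, flush table caches
  -- (from another shell, while this connection stays open:
  --  create the LVM snapshot of the volume holding the data directory)
  UNLOCK TABLES;                 -- writes resume; the snapshot stays frozen

The mysqldump then runs against the second mysqld pointed at the mounted
snapshot, so the lock is held only for the seconds the snapshot takes to
create.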

Your suggestion is appreciated, Alan, but if I have only one server, it
seems that my question still remains: Am I missing something? Is there
something silly or unwise about this strategy? Do I have misconceptions
about making the database consistent in preparation for backup?

..jim





RE: Major Difference in response times when using Load Infile utility

2005-09-13 Thread Sujay Koduri

Hi

I am using InnoDB tables only (I have only one big table).
Autocommit I have already disabled.
And the table doesn't have any foreign keys on it. Do I still need to set
FOREIGN_KEY_CHECKS to '0'?

sujay
-Original Message-
From: Alan Williamson [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 13, 2005 8:48 PM
To: mysql@lists.mysql.com
Subject: RE: Major Difference in response times when using Load Infile
utility

 Test 1
 Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.
  
 Test2
 Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.

Is this an InnoDB database by any chance?  If it is, and it is a clean
import, then disable the FOREIGN_KEY_CHECKS.  

  SET AUTOCOMMIT = 0;
  SET FOREIGN_KEY_CHECKS=0;

This is a small tip i picked up on the MySQL documentation that someone had
left in the comments and has been to date one of those tips that has
literally saved DAYS of my life.

a

ps Remember to put them back on again after you finish the import

  SET AUTOCOMMIT = 1;
  SET FOREIGN_KEY_CHECKS=1;

--
 Alan Williamson, Technology Evangelist
 SpikeSource Inc.
 Daily OS News @ http://compiledby.spikesource.com/




Migration 4.0 to 4.1 performance problem

2005-09-13 Thread Benjamin
Hi,

I host a big French website including a forum (phpbb), a search engine
(mnogo), an album (smartor), blogs... around 780 MB in MySQL.

Everything worked fine with my old configuration:
- One SQL/mail server, dual Xeon 2.6 GHz, 3 GB RAM:
FC4, MySQL 4.0, Postfix-MySQL, Courier-IMAP MySQL
- One front server, dual Xeon 2.2 GHz, 2 GB RAM:
...FC2, Apache 1.33, PHP 4.4.0

Yesterday I decided to update from MySQL 4.0 to MySQL 4.1.
I simply used rpm -Uv on the MySQL-server rpm.

Since this update, I see many "too many connections" errors from MySQL,
so I decided to increase that setting in my my.cnf from 500 to 1000.

But I don't know why MySQL is so slow to return queries.
The server keeps a low load average, around 1-2, and CPU usage around 20%.

I also tried the Fedora RPM, but the problem is still present.

Please help me, my visitors are going crazy ;-)

Below I paste my various data:
Thanks.
Ben


### /etc/my.cnf
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache = 60
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
#log=/home/logs/mysql_log
max_connections = 1200
log_error = /var/lib/mysql/mysql_error_log
old_passwords=1


### mysqlreport
MySQL 4.1.12 uptime 0 0:47:27 Tue Sep 13 00:06:40 2005

__ Key _
Buffer usage 18.51M of 384.00M %Used: 4.82
Write ratio 0.42
Read ratio 0.00

__ Questions ___
Total 2.39M 838.96/s
Slow 0 0.00/s %Total: 0.00 %DMS: 0.00
DMS 381.82k 134.11/s 15.99

__ Table Locks _
Waited 18.69k 6.57/s %Total: 3.88
Immediate 462.66k 162.51/s

__ Tables __
Open 512 of 512 %Cache: 100.00
Opened 895 0.31/s

__ Connections _
Max used 704 of 1200 %Max: 58.67
Total 275.89k 96.91/s

__ Created Temp 
Disk table 5.98k 2.10/s
Table 17.80k 6.25/s
File 7 0.00/s


 ### ./mysqlreport -dms -com 5
MySQL 4.1.12 uptime 0 9:29:17 Tue Sep 13 08:48:30 2005

__ Key _
Buffer usage 29.91M of 384.00M %Used: 7.79
Write ratio 0.33
Read ratio 0.00

__ Questions ___
Total 11.72M 343.26/s
Slow 0 0.00/s %Total: 0.00 %DMS: 0.00
DMS 1.80M 52.80/s 15.38
UPDATE 997.17k 29.19/s 8.50 55.29
SELECT 750.53k 21.97/s 6.40 41.61
INSERT 33.51k 0.98/s 0.29 1.86
DELETE 22.35k 0.65/s 0.19 1.24
REPLACE 0 0.00/s 0.00 0.00
Com_ 1.37M 39.98/s 11.65
change_db 1.36M 39.92/s 11.63
set_option 437 0.01/s 0.00
show_status 280 0.01/s 0.00
begin 274 0.01/s 0.00
commit 274 0.01/s 0.00

__ Table Locks _
Waited 36.09k 1.06/s %Total: 1.58
Immediate 2.25M 65.85/s

__ Tables __
Open 512 of 512 %Cache: 100.00
Opened 1.06k 0.03/s

__ Connections _
Max used 704 of 1200 %Max: 58.67
Total 1.28M 37.46/s

__ Created Temp 
Disk table 28.27k 0.83/s
Table 87.37k 2.56/s
File 19 0.00/s


### top
top - 00:02:46 up 5:38, 3 users, load average: 1.84, 1.71, 1.50
Tasks: 165 total, 1 running, 164 sleeping, 0 stopped, 0 zombie
Cpu(s): 9.8% us, 2.4% sy, 0.0% ni, 87.0% id, 0.4% wa, 0.1% hi, 0.3% si
Mem: 3115044k total, 3036040k used, 79004k free, 545064k buffers
Swap: 2031608k total, 0k used, 2031608k free, 620516k cached








Re: cluster or replication

2005-09-13 Thread Cory Robin
Is it possible to have both a cluster and replication? (i.e. clusters at each
location and replication to sync each location)


Alan Williamson wrote:

I've 10 server in differents locations, I want to make a broadcast, I 
would like to have always datas synchronized between this network, each 
database have the same tables and same structure. Each insert or update 
in database will be executed for each server within this broadcast, 
what's the best choice ?
   



At the moment, replication is your best option.  Replication works very
nicely over wide-area-networks, where the bandwidth between each node
could be dramatically different.

The only constraint you'll have here is that there can only be one MASTER
that has to accept all the INSERT/UPDATE/DELETE (or anything that will
make the data change).  All other nodes would be considered SLAVEs and
be READ-ONLY.

Hope this helps.

a

 







RE: Major Difference in response times when using Load Infile utility

2005-09-13 Thread Sujay Koduri

Hi all,

I found that LOAD DATA INFILE should not take this much time (6 hrs) to load
5.5 million rows. Some people are saying it should not even take more than
10 minutes, so I think I am doing something wrong in my my.cnf file. I am
using MySQL 4.1.13 and a 2.4.20 kernel on RH9. I am including the my.cnf as
well as the 'show table status' output here. Please let me know if any extra
info is needed.

Machine config: 2 GB RAM and dual CPU, 32-bit.

mysql> show table status\G
*************************** 1. row ***************************
           Name: stats
         Engine: InnoDB
        Version: 9
     Row_format: Dynamic
           Rows: 4237002
 Avg_row_length: 1028
    Data_length: 4357881856
Max_data_length: NULL
   Index_length: 150749184
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2005-09-12 23:16:21
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 16188416 kB
1 row in set (2.41 sec)

I am running the load file utility from the server itself (server config:
2 GB RAM, dual CPU, RH9, 2.4.20 kernel, allotted 20 GB of disk space, with
both logs and data files on the same partition). I am including the my.cnf
file which I am using now. Can someone please tell me if I am setting
anything seriously wrong?

$$cat /etc/my.cnf
[mysqld]
user=mysql
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
default-table-type=innodb
init_connect='SET AUTOCOMMIT=0'
transaction-isolation = READ-COMMITTED
key_buffer = 250M
wait_timeout = 10
max_connections = 400
connect_timeout = 50
table_cache = 1024
max_allowed_packet = 4M
sort_buffer_size = 2M
read_buffer_size = 2M
binlog_cache_size = 1M
max_heap_table_size = 64M
max_user_connections = 1000
thread_concurrency = 4
query_cache_type = 0
query_cache_limit = 2M
query_cache_size = 32M
#thread_stack = 96k
tmp_table_size = 32M
log_error
log_warnings = 2
log_slow_queries
long_query_time = 2
log_long_format
tmpdir = /tmp

# *** INNODB Specific options ***
#This conf uses 65%(Between 50 and 80%) of RAM, so glibc crash #should not
be a problem.
innodb_buffer_pool_size = 700M
innodb_data_home_dir =
innodb_data_file_path =
/data/data1:2G;/data/data2:2G;/data/data3:2G;/data/data4:2G;/data/data5:2G;/
data/data6:2G;/data/data7:2G;/data/data8:2G;/data/data9:2G;/data/data10:2G:a
utoextend
innodb_fast_shutdown = 1
innodb_file_io_threads = 4
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size = 8M
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 90
innodb_log_file_size = 100M
innodb_log_files_in_group =5
innodb_log_group_home_dir = /logs

[mysql.server]
user=mysql
basedir=/var/lib

[mysqldump]
quick
max_allowed_packet=16M

[safe_mysqld]
default-table-type=innodb
init_connect='SET AUTOCOMMIT=0'
err-log=/var/log/mysqld.log

[mysqld_safe]
default-table-type=innodb
init_connect='SET AUTOCOMMIT=0'
err-log=/var/log/mysqld.log
open-files-limit = 4096

Thank you
sujay

-Original Message-
From: Alan Williamson [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 13, 2005 8:48 PM
To: mysql@lists.mysql.com
Subject: RE: Major Difference in response times when using Load Infile
utility

 Test 1
 Amount of data - 5.5 million rows. Time Taken - 6+hrs Approximately.
  
 Test2
 Amount of data - 0.45 million rows. Time Taken - 2 mins approximately.

Is this an InnoDB database by any chance?  If it is, and it is a clean
import, then disable the FOREIGN_KEY_CHECKS.  

  SET AUTOCOMMIT = 0;
  SET FOREIGN_KEY_CHECKS=0;

This is a small tip i picked up on the MySQL documentation that someone had
left in the comments and has been to date one of those tips that has
literally saved DAYS of my life.

a

ps Remember to put them back on again after you finish the import

  SET AUTOCOMMIT = 1;
  SET FOREIGN_KEY_CHECKS=1;

--
 Alan Williamson, Technology Evangelist
 SpikeSource Inc.
 Daily OS News @ http://compiledby.spikesource.com/


Re: serial primary key produces two indexes

2005-09-13 Thread Gleb Paharenko
Hello.



SERIAL is an alias for BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE,
so you are really specifying two keys (primary and unique). See:

  http://dev.mysql.com/doc/mysql/en/news-4-1-0.html
  http://bugs.mysql.com/bug.php?id=13140
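
A small sketch of the difference (the table names are made up):

  CREATE TABLE ttt_serial (
      id  SERIAL PRIMARY KEY,   -- SERIAL already implies a UNIQUE key
      txt TEXT
  );
  SHOW INDEX FROM ttt_serial;   -- PRIMARY plus a separate unique index on id

  CREATE TABLE ttt_plain (
      id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      txt TEXT
  );
  SHOW INDEX FROM ttt_plain;    -- only PRIMARY

Spelling out the column type yourself avoids the redundant unique index.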







Kemin Zhou [EMAIL PROTECTED] wrote:

 I recently discovered that the following

 create table ttt (
    id serial primary key,
    txt text
 );

 show index from ttt

 is telling me that there is a primary key on the id column with BTREE,
 and at the same time there is another unique index on the id column.

 This is redundant.

 If the id column had been specified as

 id integer auto_increment primary key,

 then there is only one primary key.

 So it looks like there is a bug in the mysql source code.
 Could some expert please confirm my opinion?

 I am using version 4.1

 Kemin

 

 

 






Re: cluster or replication

2005-09-13 Thread Lorenzo De Vito
This isn't just a constraint, it's a limitation; the goal is bi-directional
replication, but I've not found a solution to my problem using MySQL. I have
an access-control system with 9 mini-towers which control people's access
via RFID. Each mini-tower runs Linux and MySQL, synchronized with a main
server; each machine is identical to the others and can stay alive without
connecting to the main server if any hardware or software problem cuts its
connection to the rest of the network, then re-align its data when the
connection comes back up, something like a survivor system. So now you can
understand that in this scenario MySQL's current MASTER/SLAVE replication
isn't a solution. (Perhaps I'll think about modifying the sources :)) )

The only constraint you'll have here is that there can only be one MASTER
that has to accept all the INSERT/UPDATE/DELETE (or anything that will
make the data change).  All other nodes would be considered SLAVEs and
be READ-ONLY.

Hope this helps.

a

  






Re: Migration 4.0 to 4.1 performance problem

2005-09-13 Thread Gleb Paharenko
Hello.



Sometimes after an upgrade it is necessary to rebuild the indexes in the
tables (I'm not sure whether it always produces an error if you haven't
done so). In general, the usual recommendations for optimizing queries
apply in your case: use the slow query log to find slow queries, and
check that indexes are used properly.
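
A minimal sketch of both suggestions (the table name and query are made up):

  -- Rebuild a MyISAM table's indexes after the upgrade
  REPAIR TABLE forum_posts QUICK;
  -- or rebuild the whole table, data and indexes:
  ALTER TABLE forum_posts ENGINE = MyISAM;

  -- Then verify that a slow query still uses its index
  EXPLAIN SELECT * FROM forum_posts WHERE topic_id = 123;

The slow query log itself is enabled with log_slow_queries and
long_query_time in my.cnf.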





Benjamin [EMAIL PROTECTED] wrote:

 Hi,

 

 I host a big french website including forum (phpbb), search engine (mnogo),

 album (smartor), blogs... around 780Mo in MySQL

 

 Everything work fine with my old configuration :

 - One SQL/mail server bi-Xeon, 2.6Ghz, 3Go :

 FC4, MySQL 4.0, Postfix-Mysql, Courier-imap Mysql

 - One front server vi-Xeon, 2.2Gh, 2Go :

 ...FC2, Apache 1.33, php 4.4.0

 

 Yersterday, I decided to update from mysql 4.0 to mysql 4.1.

 I simply use rpm -Uv MySQL-serverrpm

 

 Since this update, I see many mysql_too_many_connection.

 So I deciede to increase this setting in my my.cnf from 500 to 1000.

 

 But I don't know why Mysql is so slow to return query.

 The server keep a low load average around 1-2 and CPU's usage around 20%.

 

 I also try to use Fedora RPM, But problem is still present.

 

 Please help me, my visitors become crazy ;-)

 

 Below, I paste my different data :

 Thanks.

 Ben

 

 

 ### /etc/my.cnf

 [mysqld]

 port = 3306

 socket = /var/lib/mysql/mysql.sock

 skip-locking

 key_buffer = 384M

 max_allowed_packet = 1M

 table_cache = 512

 sort_buffer_size = 2M

 read_buffer_size = 2M

 read_rnd_buffer_size = 8M

 myisam_sort_buffer_size = 64M

 thread_cache = 60

 query_cache_size = 32M

 # Try number of CPU's*2 for thread_concurrency

 thread_concurrency = 8

 #log=/home/logs/mysql_log

 max_connections = 1200

 log_error = /var/lib/mysql/mysql_error_log

 old_passwords=1

 

 

 ### mysqlreport

 MySQL 4.1.12 uptime 0 0:47:27 Tue Sep 13 00:06:40 2005

 

 __ Key _

 Buffer usage 18.51M of 384.00M %Used: 4.82

 Write ratio 0.42

 Read ratio 0.00

 

 __ Questions ___

 Total 2.39M 838.96/s

 Slow 0 0.00/s %Total: 0.00 %DMS: 0.00

 DMS 381.82k 134.11/s 15.99

 

 __ Table Locks _

 Waited 18.69k 6.57/s %Total: 3.88

 Immediate 462.66k 162.51/s

 

 __ Tables __

 Open 512 of 512 %Cache: 100.00

 Opened 895 0.31/s

 

 __ Connections _

 Max used 704 of 1200 %Max: 58.67

 Total 275.89k 96.91/s

 

 __ Created Temp 

 Disk table 5.98k 2.10/s

 Table 17.80k 6.25/s

 File 7 0.00/s

 

 

 ### ./mysqlreport -dms -com 5

 MySQL 4.1.12 uptime 0 9:29:17 Tue Sep 13 08:48:30 2005

 

 __ Key _

 Buffer usage 29.91M of 384.00M %Used: 7.79

 Write ratio 0.33

 Read ratio 0.00

 

 __ Questions ___

 Total 11.72M 343.26/s

 Slow 0 0.00/s %Total: 0.00 %DMS: 0.00

 DMS 1.80M 52.80/s 15.38

 UPDATE 997.17k 29.19/s 8.50 55.29

 SELECT 750.53k 21.97/s 6.40 41.61

 INSERT 33.51k 0.98/s 0.29 1.86

 DELETE 22.35k 0.65/s 0.19 1.24

 REPLACE 0 0.00/s 0.00 0.00

 Com_ 1.37M 39.98/s 11.65

 change_db 1.36M 39.92/s 11.63

 set_option 437 0.01/s 0.00

 show_status 280 0.01/s 0.00

 begin 274 0.01/s 0.00

 commit 274 0.01/s 0.00

 

 __ Table Locks _

 Waited 36.09k 1.06/s %Total: 1.58

 Immediate 2.25M 65.85/s

 

 __ Tables __

 Open 512 of 512 %Cache: 100.00

 Opened 1.06k 0.03/s

 

 __ Connections _

 Max used 704 of 1200 %Max: 58.67

 Total 1.28M 37.46/s

 

 __ Created Temp 

 Disk table 28.27k 0.83/s

 Table 87.37k 2.56/s

 File 19 0.00/s

 

 

 ### top

 top - 00:02:46 up 5:38, 3 users, load average: 1.84, 1.71, 1.50

 Tasks: 165 total, 1 running, 164 sleeping, 0 stopped, 0 zombie

 Cpu(s): 9.8% us, 2.4% sy, 0.0% ni, 87.0% id, 0.4% wa, 0.1% hi, 0.3% si

 Mem: 3115044k total, 3036040k used, 79004k free, 545064k buffers

 Swap: 2031608k total, 0k used, 2031608k free, 620516k cached

 

 

 

 

 






Re: Newbie MySQL/PHP question re output formatting

2005-09-13 Thread Jasper Bryant-Greene

Nuno Pereira wrote:

To do that, change the line from

printf("<tr><td> number_format({$thearray[cost]}, 2) </td>

to

printf("<tr><td> " . number_format({$thearray[cost]}, 2) . " </td>


I think you mean:

print( "<tr><td> " . number_format( $thearray['cost'], 2 ) . " </td>" );

As stated by a previous poster to this thread, you shouldn't rely on 
PHP's automagical conversion of nonexistent constants to strings. The 
extra curly braces { } would also likely cause an error in the function 
params, and are extraneous even if they don't.


--
Jasper Bryant-Greene
Freelance web developer
http://jasper.bryant-greene.name/





error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (61)'

2005-09-13 Thread Ed Collins
Description:
I get the following error message: 'Can't connect to local MySQL
server through socket '/tmp/mysql.sock' (61)'.  Everything had run
fine for a long time.  This is running on OS X Server 10.3, and ever
since I ran the last software update MySQL has not been available.
I have tried reinstalling the OS X installer package (Mac OS X
v10.3)  Standard 4.1.14 darwin 7.9.0-powerpc.dmg and that did not
help.

my .err files show the following info

050912 23:15:37  mysqld started
050912 23:15:39 Warning: Setting lower_case_table_names=2 because file
system f$
050912 23:16:16  InnoDB: Operating system error number 13 in a file
operation.
InnoDB: See http://dev.mysql.com/doc/mysql/en/InnoDB.html
InnoDB: for installation help.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
050912 23:16:19  mysqld ended

How-To-Repeat:
The problem is continuous; I have not been able to fix it.
Fix:
None
Submitter-Id:  submitter ID
Originator:EdCollins
Organization:  Collins Consulting
 organization of PR author (multiple lines)
MySQL support: [none |
Synopsis:  mysql server not starting up OS x 10.3 server



-- 
**
Ed Collins
Collins Consulting
Authorized Apple Service Provider+ (AASP+)
Authorized Hewlett Packard Sales & Service
PO Box 194
Colby, KS 67701

Phone (785) 462-8352
email: [EMAIL PROTECTED]
web: http://www.collins.net
**






Re: error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (61)'

2005-09-13 Thread Michael Stassen

Ed Collins wrote:

Description:


I get the following  message error: 'Can't connect to local MySQL
server through socket '/tmp/mysql.sock' (61)'


This is the error the client gives when it cannot connect to the server, 
usually because the server isn't running.



Everything has run
fine for along time.  This is running on OS X server 10.3 and ever
since the ran the last software update mysql has not be available.


Some OS X updates in the past have incorrectly changed permissions on /tmp. 
 That's a possibility here.



 I have tried reinstalling the os x Installer package (Mac OS X
v10.3)  Standard 4.1.14 darwin 7.9.0-powerpc.dmg and that did not
help.


This shouldn't be necessary.


my .err files show the following info

050912 23:15:37  mysqld started
050912 23:15:39 Warning: Setting lower_case_table_names=2 because file
system f$
050912 23:16:16  InnoDB: Operating system error number 13 in a file
operation.


  perror 13
  OS error code  13:  Permission denied

Mysql does not have permission to access its data directory, perhaps because 
of the reinstall.



InnoDB: See http://dev.mysql.com/doc/mysql/en/InnoDB.html
InnoDB: for installation help.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
050912 23:16:19  mysqld ended



How-To-Repeat:


The problem is continuious  I have not been able to fix the problem


Fix:


None


Set correct permissions on /tmp:

  cd /tmp
  sudo chmod 1777 .

Now make sure the data directory has the correct permissions:

  cd /usr/local/mysql
  sudo chown -R mysql:mysql data

Then try to start again.


Submitter-Id:  submitter ID
Originator:EdCollins
Organization:  Collins Consulting


 organization of PR author (multiple lines)


MySQL support: [none |
Synopsis:  mysql server not starting up OS x 10.3 server


Michael




About database criterion

2005-09-13 Thread 王 旭

Hello, everyone.

Is there a current set of criteria for evaluating a database's
functionality?





query

2005-09-13 Thread N. Kavithashree

hello

I have one query, as explained below:

Table FIRST contains

This is
What is this
how
whom
color
white

Table SECOND contains

WHOM
THIS IS
WHAT
WHITE AND BLACK

I want the result as:
This is
what is this
what
whom
white
WHITE AND BLACK


I want a query which searches both tables for exact matches as well as
partial matches. Even if only part of a string matches, it should be
displayed in the result.

thanks in advance
kavitha
