MySQL Master and Slave Database Server

2008-04-11 Thread Kaushal Shriyan
Hi,

What are the different test cases for testing a MySQL master and slave
replication database server?

Thanks and Regards

Kaushal
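
A hedged sketch of the basic checks most replication test plans start from (database and table names below are placeholders, not from the original question):

```sql
-- On the master: note the current binlog coordinates.
SHOW MASTER STATUS;

-- On the slave: Slave_IO_Running and Slave_SQL_Running should both be
-- 'Yes', and Seconds_Behind_Master should stay small and stable.
SHOW SLAVE STATUS\G

-- End-to-end test: write on the master...
CREATE TABLE test.repl_check (id INT PRIMARY KEY, note VARCHAR(32));
INSERT INTO test.repl_check VALUES (1, 'ping');

-- ...then confirm the row appears on the slave.
SELECT * FROM test.repl_check;
```

Other cases worth exercising: STOP SLAVE / START SLAVE cycles, restarting either server, large transactions, and a deliberate conflicting write on the slave to see how the SQL thread surfaces errors.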


Re: MySQL 5.0.51a and SHOW ENGINES

2008-04-11 Thread Martijn Tonies
Hello Jim,


 On Thu, Apr 10, 2008 at 09:32:43AM +0200, Martijn Tonies wrote:
  It seems that SHOW ENGINES fails on MySQL 5.0.51a (community edition
  checked).
 
  Instead of returning the full data, the first two columns are cut off at
3
  characters,
  while the comment column is cut off at 26 characters.

 sounds like you are using the wrong value for the length of a utf8
 field, where the number of characters is being divided by the max
 character length. (10 / 3 = 3, 80 / 3 = 26)

 or it could be the server returning the wrong length. use mysql
 --column-type-info to see what it is returning.

That doesn't work with 5.0 as far as I can tell.
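
For what it's worth, the arithmetic in Jim's diagnosis matches the reported truncation exactly; in SQL terms:

```sql
-- A 10-byte and an 80-byte utf8 column, divided by the
-- 3-byte maximum character length, give the observed widths.
SELECT 10 DIV 3 AS name_cols, 80 DIV 3 AS comment_col;
-- 3 and 26
```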

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle &
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Data transfer

2008-04-11 Thread Krishna Chandra Prajapati
Hi,

Whatever queries are executed on 5 MySQL servers with multiple databases
(more than one database on each MySQL server), I have to reflect all the
changes on 1 MySQL server (a development server for developers).

Initially, I thought to take the queries from the bin-log and execute them on
the development server. But from the bin-log listing, I am not able to
identify which queries belong to which database.

Content of bin-log (multiple database)
# at 266708750
#080410  4:04:27 server id 1  end_log_pos 148   Query
thread_id=460143exec_time=0 error_code=0
SET TIMESTAMP=1207825467;
UPDATE user_time_stamp_1 SET TIME_TOTAL='-999' WHERE USER_ID='8394042' AND
TASK='2.30';
# at 266708898
#080410  4:04:27 server id 1  end_log_pos 266708925 Xid = 22953623
COMMIT;
# at 266708925
#080410  4:04:27 server id 1  end_log_pos 162   Query
thread_id=460143exec_time=0 error_code=0
SET TIMESTAMP=1207825467;
UPDATE user_cookie SET VALUE = '2.30_1207830154' WHERE USER_ID = '8394042'
AND PARAM = 'CURRENTTASK';
# at 266709087
#080410  4:04:27 server id 1  end_log_pos 266709114 Xid = 22953624
COMMIT;

What are the possible ways to perform this task.

--Thanks,
Krishna Chandra Prajapati
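
One way to approach this, sketched with the stock mysqlbinlog client (the file and schema names here are placeholders, and the exact filter semantics should be checked against the 5.0 manual):

```sh
# Extract only statements whose default database was 'shop'
# and replay them against the development server.
mysqlbinlog --database=shop mysql-bin.000042 \
  | mysql -h dev-host -u devuser -p
```

Note that --database filters on the default database in effect when each statement was logged, so fully qualified cross-database statements can still slip through.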


Two MySql servers, but very different performances for a SELECT JOIN

2008-04-11 Thread tmarly
Hi,

I have 2 MySQL server instances: one is 5.0.27/Debian, the other
5.0.32/Solaris.

Both instances have the same data in the database.

And I'm doing a select:
SELECT media.* FROM media,country,content WHERE country.id='Germany' AND
country.detail_tid=content.tid AND content.id=media.content_id;

This request takes less than half a second on one server, and 70 seconds
on the other.

The EXPLAIN results are attached to this mail. They show that the query is
interpreted very differently on each server.

I checked the database structure, which is exactly identical on both instances.

Moreover, one could think that the optimizer does not interpret the join in
the right order in version 5.0.32 compared to 5.0.27, but I'm pretty sure that
this application worked in the past (good performance on 5.0.27).

Do you know how I could find some clues?

 Tristan


-- 
Tristan Marly
06.16.84.57.43
http://www.linkedin.com/in/tristanmarly
mysql> explain select media.* from media,country,content where
    -> country.id='Germany' and country.detail_tid=content.tid and
    -> content.id=media.content_id;

on the 'fast' server:

+----+-------------+---------+-------+---------------+------------+---------+------------------------+------+-------------+
| id | select_type | table   | type  | possible_keys | key        | key_len | ref                    | rows | Extra       |
+----+-------------+---------+-------+---------------+------------+---------+------------------------+------+-------------+
|  1 | SIMPLE      | country | const | PRIMARY       | PRIMARY    | 383     | const                  |    1 |             |
|  1 | SIMPLE      | content | ALL   | PRIMARY       | NULL       | NULL    | NULL                   |  140 | Using where |
|  1 | SIMPLE      | media   | ref   | media_FI_2    | media_FI_2 | 5       | integration.content.id |  279 | Using where |
+----+-------------+---------+-------+---------------+------------+---------+------------------------+------+-------------+

on the 'slow' server:
+----+-------------+---------+--------+---------------+---------+---------+------------------------------+--------+-------------+
| id | select_type | table   | type   | possible_keys | key     | key_len | ref                          | rows   | Extra       |
+----+-------------+---------+--------+---------------+---------+---------+------------------------------+--------+-------------+
|  1 | SIMPLE      | country | ref    | PRIMARY       | PRIMARY | 383     | const                        |      1 | Using where |
|  1 | SIMPLE      | media   | ALL    | media_FI_2    | NULL    | NULL    | NULL                         | 180443 |             |
|  1 | SIMPLE      | content | eq_ref | PRIMARY       | PRIMARY | 4       | integration.media.content_id |      1 | Using where |
+----+-------------+---------+--------+---------------+---------+---------+------------------------------+--------+-------------+



Re: Data transfer

2008-04-11 Thread Ananda Kumar
Does your development server have only one database or multiple databases?

regards
anandkl


On 4/11/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

 Hi,

 What ever queries are executed on 5 mysql server with multiple database
 (more than one database on each mysql server). I have to reflect all the
 changes on 1 mysql server (developement server for developers)

 Initially, I thought to take take the queries from bin-log and execute on
 development server. But, from the bin-log list. I am not able to identify
 that which queries belong to which database.

 Content of bin-log (multiple database)
 # at 266708750
 #080410  4:04:27 server id 1  end_log_pos 148   Query
 thread_id=460143exec_time=0 error_code=0
 SET TIMESTAMP=1207825467;
 UPDATE user_time_stamp_1 SET TIME_TOTAL='-999' WHERE USER_ID='8394042' AND
 TASK='2.30';
 # at 266708898
 #080410  4:04:27 server id 1  end_log_pos 266708925 Xid = 22953623
 COMMIT;
 # at 266708925
 #080410  4:04:27 server id 1  end_log_pos 162   Query
 thread_id=460143exec_time=0 error_code=0
 SET TIMESTAMP=1207825467;
 UPDATE user_cookie SET VALUE = '2.30_1207830154' WHERE USER_ID = '8394042'
 AND PARAM = 'CURRENTTASK';
 # at 266709087
 #080410  4:04:27 server id 1  end_log_pos 266709114 Xid = 22953624
 COMMIT;

 What are the possible ways to perform this task.

 --Thanks,
 Krishna Chandra Prajapati



Query Confusion!

2008-04-11 Thread Barry
I am confused (nothing new there); what I thought was a simple search
is proving not to be so.


Can anyone tell me why this query for the word 'plus':
mysql> SELECT *
    -> FROM booktitles
    -> WHERE MATCH (category , publisher , bookTitle , author)
    -> AGAINST (CONVERT( _utf8 'plus' USING latin1 )
    -> IN BOOLEAN MODE)
    -> ORDER BY category;
Empty set (0.00 sec)

returns an empty result set, when this query:

mysql> SELECT *
    -> FROM `booklist`.`booktitles`
    -> WHERE `id` LIKE '%plus%'
    -> OR `category` LIKE CONVERT( _utf8 '%plus%'
    -> USING latin1 )
    -> COLLATE latin1_swedish_ci
    -> OR `publisher` LIKE CONVERT( _utf8 '%plus%'
    -> USING latin1 )
    -> COLLATE latin1_swedish_ci
    -> OR `bookTitle` LIKE CONVERT( _utf8 '%plus%'
    -> USING latin1 )
    -> COLLATE latin1_swedish_ci
    -> OR `author` LIKE CONVERT( _utf8 '%plus%'
    -> USING latin1 )
    -> COLLATE latin1_swedish_ci
    -> OR `publishDate` LIKE '%plus%';
+-----+----------+---------------+---------------------------------------------------------------------------+-------------------------------------+-------------+
| id  | category | publisher     | bookTitle                                                                 | author                              | publishDate |
+-----+----------+---------------+---------------------------------------------------------------------------+-------------------------------------+-------------+
|  39 | C++      | SAMS          | C++ Primer Plus Fourth Edition                                            | Stephen Prata                       | 2001        |
| 162 | Linux    | Wiley         | Ubuntu Linux Toolbox 1000 plus Commands for Ubuntu and Debian Power Users | Christopher Negus François Caen     | 2007        |
| 496 | C++      | Prentice Hall | C Plus Plus GUI Programming With Qt 4 2nd Edition                         | Jasmin Blanchette, Mark Summerfield | 2008        |
+-----+----------+---------------+---------------------------------------------------------------------------+-------------------------------------+-------------+
3 rows in set (0.00 sec)

provides the correct answer?

Thinking that the first query wasn't picking up a four-letter term, I
ran this search for the word 'real':


mysql> SELECT *
    -> FROM booktitles
    -> WHERE MATCH (category , publisher , bookTitle , author)
    -> AGAINST (CONVERT( _utf8 'real' USING latin1 )
    -> IN BOOLEAN MODE)
    -> ORDER BY category;
+-----+----------+---------------+---------------------------------------------------------+----------------------+-------------+
| id  | category | publisher     | bookTitle                                               | author               | publishDate |
+-----+----------+---------------+---------------------------------------------------------+----------------------+-------------+
| 134 | Linux    | Prentice Hall | Embedded Linux Primer: A Practical, Real-World Approach | Christopher Hallinan | 2006        |
+-----+----------+---------------+---------------------------------------------------------+----------------------+-------------+
1 row in set (0.00 sec)

and as you can see it came up with the correct result.

Thanks for looking

Barry
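
A few full-text settings commonly produce results like this; the diagnostics below are a hedged sketch (no claim about which, if any, applies to Barry's table):

```sql
-- Words shorter than ft_min_word_len (default 4) are never indexed.
SHOW VARIABLES LIKE 'ft_min_word_len';

-- An explicit boolean operator rules out operator-parsing surprises,
-- and dropping the CONVERT() rules out a charset mismatch with the index.
SELECT * FROM booktitles
WHERE MATCH (category , publisher , bookTitle , author)
AGAINST ('+plus' IN BOOLEAN MODE);

-- After any change to ft_min_word_len, the FULLTEXT index
-- must be rebuilt for it to take effect.
REPAIR TABLE booktitles QUICK;
```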




RE: Data transfer

2008-04-11 Thread Muthukumar-Fijilive
Hi,

Option 1

If you are using PHP, you can do this very simply with a cron task.

Add a flag field that records whether your first operation succeeded (e.g.
update the field value to 1). Run a cron job at a regular interval; if the
field is set to 1, run your second operation's query and reset the field
value to 0.


Option 2

If you are using MySQL 5.0, check the trigger examples to perform the ON
UPDATE ...

Follow the same idea: update a field value to 1 and write a trigger for the
update.


I hope this gives you an idea.


Thanks & regards,
Muthukumar Selvarasu,
Project Manager (Web Development),
Webmasters Ltd.
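
Option 2 in the reply above can be made concrete with a hedged sketch (table and column names are invented for illustration; a trigger cannot modify the table it fires on, so the change is recorded in a separate queue table):

```sql
DELIMITER //
CREATE TRIGGER trg_flag_queue
AFTER UPDATE ON source_table
FOR EACH ROW
BEGIN
  -- When the flag is raised, remember which row changed so a
  -- cron job can later apply it to the development server.
  IF NEW.sync_flag = 1 THEN
    INSERT INTO sync_queue (row_id, queued_at) VALUES (NEW.id, NOW());
  END IF;
END//
DELIMITER ;
```

A cron job can then drain `sync_queue` and replay the corresponding changes on the development server.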




-Original Message-
From: Krishna Chandra Prajapati [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 11, 2008 6:59 PM
To: mysql; [EMAIL PROTECTED]
Subject: Data transfer

Hi,

Whatever queries are executed on 5 MySQL servers with multiple databases
(more than one database on each MySQL server), I have to reflect all the
changes on 1 MySQL server (a development server for developers).

Initially, I thought to take the queries from the bin-log and execute them on
the development server. But from the bin-log listing, I am not able to
identify which queries belong to which database.

Content of bin-log (multiple database)
# at 266708750
#080410  4:04:27 server id 1  end_log_pos 148   Query
thread_id=460143exec_time=0 error_code=0
SET TIMESTAMP=1207825467;
UPDATE user_time_stamp_1 SET TIME_TOTAL='-999' WHERE USER_ID='8394042' AND
TASK='2.30';
# at 266708898
#080410  4:04:27 server id 1  end_log_pos 266708925 Xid = 22953623
COMMIT;
# at 266708925
#080410  4:04:27 server id 1  end_log_pos 162   Query
thread_id=460143exec_time=0 error_code=0
SET TIMESTAMP=1207825467;
UPDATE user_cookie SET VALUE = '2.30_1207830154' WHERE USER_ID = '8394042'
AND PARAM = 'CURRENTTASK';
# at 266709087
#080410  4:04:27 server id 1  end_log_pos 266709114 Xid = 22953624
COMMIT;

What are the possible ways to perform this task.

--Thanks,
Krishna Chandra Prajapati






MySQL 5.0 optimization problem

2008-04-11 Thread Victor Danilchenko

Hi,

	I am trying to optimize our DB server. We have one table which has 1.3M 
entries, and the keys are GUIDs (so the key space is large). However, I 
have it all indexed. The performance was iffy, though, so I increased 
memory allocation, and the searches on the indexed fields seem to be OK 
now. Still, a simple count(field) on this table still takes around 2 
minutes! I am guessing I am missing something obvious, but I read 
through a few MySQL optimization guides, and it seems like I am covering 
my bases.


Here is the relevant slice of my my.cnf:

#
# * Fine Tuning
#
key_buffer  = 256M
max_allowed_packet  = 64M
thread_stack= 512K
thread_cache_size   = 32
#
# * Query Cache Configuration
#
query_cache_limit   = 32M
query_cache_size= 256M
query_cache_type= 1
table_cache = 512
sort_buffer_size= 32M


	I am running Ubuntu 7.04, with 1GB of RAM. The relevant fields are all 
indexed, but I can't imagine why a simple count() would take so long, 
when the actual query by value on the same field is effectively instant 
(after my cache setting expansion).


	Does anyone have an idea of what I am missing? Also, if you think any 
of the above settings seem wrong for a server with 1GB of RAM, please 
let me know.
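
One detail worth checking, sketched here under the assumption the table is MyISAM (the table name `leads` comes from later in this thread): `COUNT(*)` with no WHERE clause is served from MyISAM's stored row count, while `COUNT(field)` must test values for NULL.

```sql
-- Effectively instant on MyISAM: answered from the stored row count.
SELECT COUNT(*) FROM leads;

-- Must check each value for NULL; fast only if 'email1' is itself
-- indexed, so an index scan can replace the full table scan.
SELECT COUNT(email1) FROM leads;
```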


--
Victor Danilchenko
Senior Software Engineer, AskOnline.net
[EMAIL PROTECTED] - 617-273-0119




Re: Query runs very sloooow

2008-04-11 Thread Johan Solve
A late followup on this, so I top post to keep the history intact.

 The composite primary key was the problem. Or rather, the missing
 individual indexes for tag_id and ad_id.

 We also changed to INNER JOINs instead, but that didn't affect the performance.

 Thanks for all suggestions!




 On Wed, Mar 5, 2008 at 6:10 PM, Rob Wultsch [EMAIL PROTECTED] wrote:
  Also you have a composite primary key in tag_ad_map.
   ad_id should probably be a separate index for
   LEFT JOIN tag_ad_map tm2 ON tm1.ad_id = tm2.ad_id to join well. The
   index will be ignored because the left-most portion of the index
   is not used...
 
 
 
   On 3/5/08, Rob Wultsch [EMAIL PROTECTED] wrote:
 From a brief glance:
 1 *  seems odd to me. Is this an attempt at some sort of cast?
 ORDER BY viktatantal DESC, RAND()
 LIMIT 80
 How many results would this return without the limit? The ORDER BY RAND()
 will never help a query. All the possible results have to be computed...

 Do you mean LEFT JOIN, or do you really mean INNER JOIN? If any of
 those LEFT joins fails, the result is not excluded by the join
 until the WHERE conditions come into play.

 I would rebuild the query with inner joins and the first table being t2; I
 would then rerun the EXPLAIN and consider composite indexes.

 MySQL is not known for self-joining tables well.
 --
 Rob Wultsch



   I need help to optimize the following query. It runs very slowly and I
 can't find any direct errors in it.

 SELECT
   1 * t1.termfreq AS viktatantal,
   t1.tag, t1.url
 FROM tag_keys t1
 LEFT JOIN tag_ad_map tm1 ON t1.id = tm1.tag_id
 LEFT JOIN tag_ad_map tm2 ON tm1.ad_id = tm2.ad_id
 LEFT JOIN tag_keys t2 ON t2.id = tm2.tag_id
 WHERE
   t2.url = 'motor' AND t1.url != 'motor'
 GROUP BY t1.id
 ORDER BY viktatantal DESC, RAND()
 LIMIT 80

 Any help would be very appreciated!
   

   CREATE TABLE structure looks like this:
   
CREATE TABLE `tag_ad_map` (
 `ad_id` int(11) NOT NULL default '0',
 `tag_id` int(11) NOT NULL default '0',
 `termfreq` int(11) NOT NULL default '0',
 PRIMARY KEY  (`tag_id`,`ad_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
   
CREATE TABLE `tag_keys` (
 `id` int(11) NOT NULL auto_increment,
 `tag` varchar(32) collate utf8_swedish_ci NOT NULL default '',
 `idf` double NOT NULL default '0',
 `url` varchar(64) collate utf8_swedish_ci NOT NULL default '',
 `termfreq` int(11) NOT NULL default '0',
 PRIMARY KEY  (`id`),
 KEY `url` (`url`)
) ENGINE=MyISAM AUTO_INCREMENT=11374 DEFAULT CHARSET=utf8
COLLATE=utf8_swedish_ci


-- 
Mvh
Johan Sölve

Montania System AB
Halmstad Stockholm Malmö
http://www.montania.se

Johan Sölve
Mobil +46 709-51 55 70
[EMAIL PROTECTED]

Kristinebergsvägen 17, S-302 41 Halmstad, Sweden
Telefon +46 35-136800 | Fax +46 35-136801




Re: MySQL 5.0 optimization problem

2008-04-11 Thread Ben Clewett

Are you using MyIsam or InnoDB?  Or something else?

In either case the speed of a COUNT() is largely down to the speed 
of your disks and the size of your disk caches.  A COUNT() forces the system to 
read every row in order to count them, and any large table is probably 
larger than your caches.


In some ways this is not so important, since it is unusual for a query 
to want to read every row of a table.  (I have 250GB tables which have 
excellent performance but would take minutes to count every row :)


It might be better to consider the type of queries you will be using, 
and the type of table, and optimise for that...


Ben

Victor Danilchenko wrote:

Hi,

I am trying to optimize our DB server. We have one table which has 
1.3M entries, and the keys are GUIDs (so the key space is large). 
However, I have it all indexed. The performance was iffy, though, so I 
increased memory allocation, and the searches on the indexed fields seem 
to be OK now. Still, a simple count(field) on this table still takes, 
like, 2 minutes! I am guessing i am missing something obvious, but I 
read through a few MySQL optimization guides, and it seems like i am 
covering my bases.


Here is the relevant slice of my my.cnf:

#
# * Fine Tuning
#
key_buffer  = 256M
max_allowed_packet  = 64M
thread_stack= 512K
thread_cache_size   = 32
#
# * Query Cache Configuration
#
query_cache_limit   = 32M
query_cache_size= 256M
query_cache_type= 1
table_cache = 512
sort_buffer_size= 32M


I am running Ubuntu 7.04, with 1GB of RAM. The relevant fields are 
all indexed, but I can't imagine why a simple count() would take so 
long, when the actual query by value on the same field is effectively 
instant (after my cache setting expansion).


Does anyone have an idea of what I am missing? Also, if you think 
any of the above settings seem wrong for a server with 1GB of RAM, 
please let me know.







Re: MySQL 5.0 optimization problem

2008-04-11 Thread Wm Mussatto
On Fri, April 11, 2008 06:47, Ben Clewett wrote:
 Are you using MyIsam or InnoDB?  Or something else?

 In either case the speed to get a COUNT() is largely down to the speed
 if your disks and size of disk caching.  A COUNT() forces the system to
 read every row in order to count them, and any large table is probably
 larger than your caches.

 In some ways this is not so important, since it is unusual for a query
 to want to read every row of a table.  (I have 250GB tables which have
 excellent performance but would take minutes to count every row :)

 It might be better to consider the type of queries you will be using,
 and the type of table, and optimise for that...

 Ben

 Victor Danilchenko wrote:
 Hi,

 I am trying to optimize our DB server. We have one table which has
 1.3M entries, and the keys are GUIDs (so the key space is large).
 However, I have it all indexed. The performance was iffy, though, so I
 increased memory allocation, and the searches on the indexed fields seem
 to be OK now. Still, a simple count(field) on this table still takes,
 like, 2 minutes! I am guessing i am missing something obvious, but I
 read through a few MySQL optimization guides, and it seems like i am
 covering my bases.

 Here is the relevant slice of my my.cnf:

 #
 # * Fine Tuning
 #
 key_buffer  = 256M
 max_allowed_packet  = 64M
 thread_stack= 512K
 thread_cache_size   = 32
 #
 # * Query Cache Configuration
 #
 query_cache_limit   = 32M
 query_cache_size= 256M
 query_cache_type= 1
 table_cache = 512
 sort_buffer_size= 32M


 I am running Ubuntu 7.04, with 1GB of RAM. The relevant fields are
 all indexed, but I can't imagine why a simple count() would take so
 long, when the actual query by value on the same field is effectively
 instant (after my cache setting expansion).

 Does anyone have an idea of what I am missing? Also, if you think
 any of the above settings seem wrong for a server with 1GB of RAM,
 please let me know.
If the field you are counting is the first field in an index, I would think
it would go much faster (the system will just use the index).  If some of your
counts are fast and some are slow (you said iffy), that might explain the
difference.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Why is this delete so slow? (> 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille
I'm trying to delete some orphaned records from a table that has about 
150K records.  Here is my delete:


CREATE TEMPORARY TABLE deleteids AS (
  SELECT subTable.ID from subTable
  LEFT OUTER JOIN parentTable ON subTable.ID = parentTable.ID
  WHERE parentTable.ID IS NULL
);
DELETE FROM subTable WHERE ID IN (SELECT ID FROM deleteids);
DROP TABLE deleteids;


The DELETE statement itself is what's taking so long.  About 90 seconds 
per 100 records.  Should it be so slow? That's almost 1 second per 
record!  There are 10K abandoned records I need to delete.  Here is the 
table definition:


CREATE TABLE `subTable` (
 `ID` int(11) unsigned NOT NULL auto_increment,
 `DonorID` int(10) unsigned NOT NULL default '0',
 `MedConID` int(11) unsigned NOT NULL default '0',
 `MedConSubID` int(11) unsigned NOT NULL default '0',
 `FamilyID` int(11) unsigned NOT NULL default '0',
 `cbResult` tinyint(1) unsigned NOT NULL default '0',
 `deleted` tinyint(1) unsigned NOT NULL default '0',
 PRIMARY KEY  (`ID`),
 KEY `MedConID` (`MedConID`),
 KEY `MedConSubID` (`MedConSubID`),
 KEY `FamilyID` (`FamilyID`),
 KEY `DonorID` (`DonorID`),
 KEY `deleted` (`deleted`)
) ENGINE=MyISAM AUTO_INCREMENT=292088 DEFAULT CHARSET=latin1;






Re: MySQL 5.0 optimization problem

2008-04-11 Thread Ben Clewett

Hi,

Can you please post your query?  I also need to know your table type, as 
different settings affect different table types.


You are right that a SELECT COUNT(*) WHERE field = 'value' should hit 
the index, but it does depend on your query.


You might also try EXPLAIN before your query, which will show the 
approximate number of rows you are hitting.  Look at SHOW TABLE STATUS, 
which will give an idea of the size of the rows and indexes.


Marry one to the other and it will give an idea of the cache settings to 
get the query into cache.  But it still depends a lot on the table type!


Ben


Wm Mussatto wrote:

On Fri, April 11, 2008 06:47, Ben Clewett wrote:

Are you using MyIsam or InnoDB?  Or something else?

In either case the speed to get a COUNT() is largely down to the speed
if your disks and size of disk caching.  A COUNT() forces the system to
read every row in order to count them, and any large table is probably
larger than your caches.

In some ways this is not so important, since it is unusual for a query
to want to read every row of a table.  (I have 250GB tables which have
excellent performance but would take minutes to count every row :)

It might be better to consider the type of queries you will be using,
and the type of table, and optimise for that...

Ben

Victor Danilchenko wrote:

Hi,

I am trying to optimize our DB server. We have one table which has
1.3M entries, and the keys are GUIDs (so the key space is large).
However, I have it all indexed. The performance was iffy, though, so I
increased memory allocation, and the searches on the indexed fields seem
to be OK now. Still, a simple count(field) on this table still takes,
like, 2 minutes! I am guessing i am missing something obvious, but I
read through a few MySQL optimization guides, and it seems like i am
covering my bases.

Here is the relevant slice of my my.cnf:

#
# * Fine Tuning
#
key_buffer  = 256M
max_allowed_packet  = 64M
thread_stack= 512K
thread_cache_size   = 32
#
# * Query Cache Configuration
#
query_cache_limit   = 32M
query_cache_size= 256M
query_cache_type= 1
table_cache = 512
sort_buffer_size= 32M


I am running Ubuntu 7.04, with 1GB of RAM. The relevant fields are
all indexed, but I can't imagine why a simple count() would take so
long, when the actual query by value on the same field is effectively
instant (after my cache setting expansion).

Does anyone have an idea of what I am missing? Also, if you think
any of the above settings seem wrong for a server with 1GB of RAM,
please let me know.

If the field you are counting is the first field in an index I would think
it would go much faster (system will just use the index).  If some of your
counts are fast and some are slow (you said iffy) that might explain the
difference.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL 5.0 optimization problem

2008-04-11 Thread Rob Wultsch
On Fri, Apr 11, 2008 at 6:47 AM, Ben Clewett [EMAIL PROTECTED] wrote:
 A COUNT() forces the system to read every row in order to count them...
That is not strictly the case.
A count(field) can use an index scan rather than a sequential scan,
which may or may not be faster. Also, some count(field) queries can be
optimized away if the field is NOT NULL and the type is MyISAM.

Posting the output of
SHOW STATUS;
might be useful.

DDL is always helpful...
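
Rob's index-scan point can be sketched against the table from this thread (assuming MyISAM and a nullable email1 column, as Victor describes):

```sql
-- Without an index on email1, COUNT(email1) must scan every row to
-- test for NULL; with one, a much smaller index scan can serve it.
ALTER TABLE leads ADD INDEX idx_email1 (email1);

EXPLAIN SELECT COUNT(email1) FROM leads;
-- 'Using index' in the Extra column confirms the index scan.
```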
-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)




Re: Why is this delete so slow? (> 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille
Also the table seems to be locked while running this delete... that's not 
going to be very good when I need to run it on production.   Is there a 
way to have it not be locked during this delete?


I'm thinking of creating a script to delete in 10-row increments until 
they are all gone.


Thinking about it now... it's probably the IN clause, isn't it?  I've 
heard those are slow.  Hopefully someone will have a better idea.


Thanks,
-Ryan
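
The incremental approach Ryan describes can be sketched with a bounded DELETE repeated until no rows remain (keeping his temp-table setup; the batch size is arbitrary):

```sql
-- Run repeatedly until it reports 0 rows affected; each pass
-- holds the MyISAM table lock only for a short burst.
DELETE FROM subTable
WHERE ID IN (SELECT ID FROM deleteids)
LIMIT 500;
```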





Re: Why is this delete so slow? (> 90 seconds per 100 records)

2008-04-11 Thread Rob Wultsch
On Fri, Apr 11, 2008 at 8:49 AM, Ryan Stille [EMAIL PROTECTED] wrote:
 I'm trying to delete some orphaned records from a table that has about 150K
 records.  Here is my delete:

  |CREATE TEMPORARY TABLE deleteids AS (
   SELECT subTable.ID from subTable
   LEFT OUTER JOIN parentTable ON subTable.ID = parentTable.ID
   WHERE  parentTable.ID IS NULL
  );
  DELETE FROM subTable WHERE ID IN (SELECT ID FROM deleteids);
  DROP TABLE deleteids;|


  The DELETE statement itself is whats taking so long.   About 90 seconds per
 100 records.  Should it be so slow? Thats almost 1 second per record!  There
 are 10K abandoned records I need to delete.  Here is the table definition:

  CREATE TABLE `subTable` (
   `ID` int(11) unsigned NOT NULL auto_increment,
   `DonorID` int(10) unsigned NOT NULL default '0',
   `MedConID` int(11) unsigned NOT NULL default '0',
   `MedConSubID` int(11) unsigned NOT NULL default '0',
   `FamilyID` int(11) unsigned NOT NULL default '0',
   `cbResult` tinyint(1) unsigned NOT NULL default '0',
   `deleted` tinyint(1) unsigned NOT NULL default '0',
   PRIMARY KEY  (`ID`),
   KEY `MedConID` (`MedConID`),
   KEY `MedConSubID` (`MedConSubID`),
   KEY `FamilyID` (`FamilyID`),
   KEY `DonorID` (`DonorID`),
   KEY `deleted` (`deleted`)
  ) ENGINE=MyISAM AUTO_INCREMENT=292088 DEFAULT CHARSET=latin1;

Sub-selects suck in MySQL. I would rewrite this query to not use the
temp table, probably using the multi-table DELETE syntax. I
imagine a query like

DELETE subTable
FROM subTable
LEFT OUTER JOIN parentTable ON subTable.ID = parentTable.ID
WHERE parentTable.ID IS NULL

should be quite a bit faster. I have had issues with the multi-table
delete syntax, so YMMV.
-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)




Re: Two MySql servers, but very different performances for a SELECT JOIN

2008-04-11 Thread Rob Wultsch
2008/4/11  [EMAIL PROTECTED]:
 Hi,

  I have 2 MySql server instances. One which is 5.0.27/Debian, another
  5.0.32/Solaris.

  Both instances have the same data in the database.

  And I'm doing a select:
  SELECT media.* FROM media,country,content WHERE country.id='Germany' AND
  country.detail_tid=content.tid AND content.id=media.content_id;

  This  request takes less than a half second on one server, and takes 70 
 seconds
  on another server.

  The EXPLAIN results are attached to this mail. Its shows that there are
  interpreted very differently on each server.

  I checked the database structure, wich is exactly identical on both instance.

  Moreover, one could think that this is the optimizer which does not interpret
  the joint request in the right order in the version 5.0.32 compared to 
 5.0.27,
  but I'm pretty sure that this application had worked in the past (good
  performance on 5.0.27).

  Do you know how could I found some clues ?

Shot in the dark, run
ANALYZE TABLE
on all the relevant tables on the slow server. If you just imported
them the indexes may not be well distributed. Please post ddl and
relevant SHOW INDEX info.
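
Rob's suggestion spelled out for the tables in this query (a sketch; ANALYZE TABLE briefly read-locks a MyISAM table while it runs):

```sql
-- Refresh index statistics so the optimizer on the slow server
-- has accurate row-distribution data for choosing a join order.
ANALYZE TABLE media, country, content;

-- Compare cardinality figures between the two servers.
SHOW INDEX FROM media;
```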

-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)




RE: Why is this delete so slow? (> 90 seconds per 100 records)

2008-04-11 Thread Jerry Schwartz
-Original Message-
From: Ryan Stille [mailto:[EMAIL PROTECTED]
Subject: Re: Why is this delete so slow? (> 90 seconds per 100 records)

Also the table seems to be locked while running this delete... thats not
going to be very good when I need to run it on production.   Is there a
way to have it not be locked during this delete?
[JS] Sorry, that violates the requirements of database consistency. What
would happen if somebody tried to update a record that you had just deleted?
You'd have to program a recovery mechanism into your application to handle
that eventuality.

I don't know if MySQL will let you delete without locking; I haven't delved
into locking because my predecessor used MyISAM tables. His solution was to
disallow deleting anything from the user interface.
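
One common mitigation for MyISAM's statement-wide table lock is to delete in small batches, so other sessions can interleave between statements (a sketch; the table and column names are hypothetical):

```sql
-- Repeat this statement until it reports 0 rows affected;
-- each run holds the table lock only briefly.
DELETE FROM someTable
 WHERE created < '2008-01-01'
 LIMIT 1000;
```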





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQl 5.0 optimization problem

2008-04-11 Thread Victor Danilchenko

First of all, my bad -- I forgot to mention that I use MyISAM.

mysql> show table status from example like 'leads'\G
*************************** 1. row ***************************
   Name: leads
 Engine: MyISAM
Version: 10
 Row_format: Dynamic
   Rows: 1267995
 Avg_row_length: 224
Data_length: 284349972
Max_data_length: 281474976710655
   Index_length: 201081856
  Data_free: 0
 Auto_increment: NULL
Create_time: 2008-04-11 14:03:14
Update_time: 2008-04-11 14:04:26
 Check_time: 2008-04-11 14:07:51
  Collation: utf8_general_ci
   Checksum: NULL
 Create_options:
Comment:


Ben Clewett wrote:

Hi,

Can you please post your query?  I also need to know your table type, as 
different settings affect different table types.


The query is simply:

 select count(email1) from leads;

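For comparison: on MyISAM, COUNT(*) with no WHERE clause is answered from the table's cached row count, while COUNT(email1) has to examine every email1 value to skip NULLs, so the two can differ wildly in cost:

```sql
SELECT COUNT(*)      FROM leads;  -- metadata lookup on MyISAM, effectively instant
SELECT COUNT(email1) FROM leads;  -- scans the email1 index (or table) to skip NULLs
```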
The table structure is as follows:

mysql> describe leads;
+----------------------------+--------------+------+-----+---------------------+-------+
| Field                      | Type         | Null | Key | Default             | Extra |
+----------------------------+--------------+------+-----+---------------------+-------+
| id                         | varchar(36)  | NO   | PRI |                     |       |
| deleted                    | tinyint(1)   | NO   |     | 0                   |       |
| converted                  | tinyint(1)   | NO   |     | 0                   |       |
| date_entered               | datetime     | NO   |     | 0000-00-00 00:00:00 |       |
| date_modified              | datetime     | NO   |     | 0000-00-00 00:00:00 |       |
| modified_user_id           | varchar(36)  | YES  |     | NULL                |       |
| assigned_user_id           | varchar(36)  | YES  | MUL | NULL                |       |
| created_by                 | varchar(36)  | YES  |     | NULL                |       |
| salutation                 | varchar(5)   | YES  |     | NULL                |       |
| first_name                 | varchar(25)  | YES  |     | NULL                |       |
| last_name                  | varchar(25)  | YES  | MUL | NULL                |       |
| title                      | varchar(100) | YES  |     | NULL                |       |
| refered_by                 | varchar(100) | YES  |     | NULL                |       |
| lead_source                | varchar(100) | YES  |     | NULL                |       |
| lead_source_description    | mediumtext   | YES  |     | NULL                |       |
| status                     | varchar(100) | YES  |     | NULL                |       |
| status_description         | mediumtext   | YES  |     | NULL                |       |
| department                 | varchar(100) | YES  |     | NULL                |       |
| reports_to_id              | varchar(36)  | YES  |     | NULL                |       |
| do_not_call                | char(3)      | YES  |     | 0                   |       |
| phone_home                 | varchar(25)  | YES  |     | NULL                |       |
| phone_mobile               | varchar(25)  | YES  |     | NULL                |       |
| phone_work                 | varchar(25)  | YES  |     | NULL                |       |
| phone_other                | varchar(25)  | YES  |     | NULL                |       |
| phone_fax                  | varchar(25)  | YES  |     | NULL                |       |
| email1                     | varchar(100) | YES  | MUL | NULL                |       |
| email2                     | varchar(100) | YES  | MUL | NULL                |       |
| email_opt_out              | char(3)      | YES  |     | 0                   |       |
| primary_address_street     | varchar(150) | YES  |     | NULL                |       |
| primary_address_city       | varchar(100) | YES  |     | NULL                |       |
| primary_address_state      | varchar(100) | YES  |     | NULL                |       |
| primary_address_postalcode | varchar(20)  | YES  |     | NULL                |       |
| primary_address_country    | varchar(100) | YES  |     | NULL                |       |
| alt_address_street         | varchar(150) | YES  |     | NULL                |       |
| alt_address_city           | varchar(100) | YES  |     | NULL                |       |
| alt_address_state          | varchar(100) | YES  |     | NULL                |       |
| alt_address_postalcode     | varchar(20)  | YES  |     | NULL                |       |
| alt_address_country        | varchar(100) | YES  |     | NULL                |       |
| description                | mediumtext   | YES  |     | NULL                |       |
| account_name               | varchar(150) | YES  |     | NULL                |       |
| account_description        | mediumtext   | YES  |     | NULL                |       |
| contact_id                 | varchar(36)  | YES  | MUL | NULL                |       |
| account_id                 | varchar(36)  | YES  | MUL | NULL                |       |
| opportunity_id             | varchar(36)  | YES  | MUL | NULL                |       |
| opportunity_name           | varchar(255) | YES  |     | NULL                |       |
| opportunity_amount         | varchar(50)  | YES  |     | NULL                |       |
| campaign_id                | varchar(36)  | YES  |     | NULL                |       |
| portal_name                | varchar(255) | YES  |     | NULL                |       |
| portal_app                 | varchar(255) | YES  |     | NULL                |       |
+----------------------------+--------------+------+-----+---------------------+-------+

Re: Why is this delete so slow? ( 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille
Thanks for the help, I rewrote it as a subselect and it deleted all 10K 
records in two seconds.


DELETE subTable FROM subTable LEFT OUTER JOIN parentTable ON
subTable.parentID = parentTable.ID WHERE parentTable.ID IS NULL
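
For reference, the multi-table DELETE above is the anti-join rewrite of the subquery form, which MySQL 5.0 typically executes far more slowly because the NOT IN subquery is re-evaluated per row:

```sql
-- Equivalent, but usually much slower in MySQL 5.0:
DELETE FROM subTable
 WHERE parentID NOT IN (SELECT ID FROM parentTable);
```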

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why is this delete so slow? ( 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille

Ryan Stille wrote:
Thanks for the help, I rewrote it as a subselect and it deleted all 
10K records in two seconds.


DELETE subTable FROM subTable LEFT OUTER JOIN parentTable ON
subTable.parentID = parentTable.ID WHERE parentTable.ID IS NULL

-Ryan




Whoops, I meant that I rewrote it as a JOIN.  It originally was a subselect.

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



SQL query question for GROUP BY

2008-04-11 Thread Victor Danilchenko

Hi all,

	I am trying to run a query where, after doing a UNION on two different 
SELECTs, I need to sort the result by username and log_date fields, and 
then grab the last entry for each username ('last' as determined by the 
ordering of the log_date field, which is a datetime).


	GROUP BY seems like an obvious choice; 'GROUP BY username', to be 
exact. However, this seems to produce not the last row's values, but 
ones from a random row in the group.


	I don't think the fact that I am doing this on a subquery is relevant, 
but just in case, I am including this info.


Here is what the query looks like, abridged:


SELECT id,username,log_date,event_type
FROM (SELECT * FROM a
  UNION
  SELECT * from b) as h
GROUP BY username


	Basically, what I need is the chronologically last event_type value for 
each user. I can achieve something similar by doing SELECT 
MAX(event_type) -- but I need the equivalent of SELECT LAST(event_type); 
last row instead of max-field-value row.


	I keep having a feeling that I am making this way more complicated than 
it has to be, and that there's a very simple way to return only the last 
row for each username; but I am at a loss as to how to do it.



--
Victor Danilchenko
Senior Software Engineer, AskOnline.net
[EMAIL PROTECTED] - 617-273-0119

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL query question for GROUP BY

2008-04-11 Thread Rob Wultsch
On Fri, Apr 11, 2008 at 11:46 AM, Victor Danilchenko
[EMAIL PROTECTED] wrote:
 GROUP BY seems like an obvious choice; 'GROUP BY username', to be
 exact. However, this seems to produce not the last row's values, but ones
 from a random row in the group.
Under most databases your query is erroneous. Take a look at
http://lists.mysql.com/mysql/212084 .

 I don't think the fact that I am doing this on a subquery is
 relevant, but just in case, I am including this info.

 Here is what the query looks like, abridged:


  SELECT id,username,log_date,event_type
  FROM (SELECT * FROM a
   UNION
   SELECT * from b) as h
  GROUP BY username
Read 
http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/


You are probably going to end up with a fairly ugly query (mostly
because of the union), in which a derived table joins against a and b
again.

SELECT if(a2.id is NULL, b2.id, a2.id) ,a1.username ...
FROM
(SELECT username, MAX(log_date) as mlog_date
FROM (SELECT * FROM a
  UNION
SELECT * from b) as h
GROUP BY username) AS a1
LEFT JOIN a AS a2  ON a1.mlog_date = a2.log_date AND username...
LEFT JOIN b AS b2 ...

Anyone have a suggestion for how to do this in a way that is not ugly as heck?
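
For illustration, the pattern from the linked article adapted to this thread's tables (a sketch only; it assumes log_date is unique per username, and MySQL 5.0 will materialize the UNION derived table twice):

```sql
-- Last row per username: keep only the rows whose log_date equals
-- that username's maximum log_date across both source tables.
SELECT h.id, h.username, h.log_date, h.event_type
  FROM (SELECT * FROM a UNION SELECT * FROM b) AS h
 WHERE h.log_date = (SELECT MAX(h2.log_date)
                       FROM (SELECT * FROM a UNION SELECT * FROM b) AS h2
                      WHERE h2.username = h.username);
```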
-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL query question for GROUP BY

2008-04-11 Thread Victor Danilchenko
	Oooh, this looks evil. It seems like such a simple thing. I guess 
creating max(log_date) as a field, and then joining on it, is a solution 
-- but my actual query (not the abridged version) is already half a page 
long.


	I think at this point, unless someone else suggests a better solution, 
this would be easier to do programmatically -- skip the group altogether, 
and instead simply order the rows, and grab the last one for each 
username in code.


	I guess another alternative would be to use a View for the UNIONized 
query, but doesn't MySQL 'fake' views in 5.0 somehow?



Rob Wultsch wrote:

On Fri, Apr 11, 2008 at 11:46 AM, Victor Danilchenko
[EMAIL PROTECTED] wrote:

GROUP BY seems like an obvious choice; 'GROUP BY username', to be
exact. However, this seems to produce not the last row's values, but ones
from a random row in the group.

Under most databases your query is erroneous. Take a look at
http://lists.mysql.com/mysql/212084 .


I don't think the fact that I am doing this on a subquery is
relevant, but just in case, I am including this info.

Here is what the query looks like, abridged:


 SELECT id,username,log_date,event_type
 FROM (SELECT * FROM a
  UNION
  SELECT * from b) as h
 GROUP BY username

Read 
http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/


You are probably going to end up with a fairly ugly query (mostly
because of the union), in which a derived table joins against a and b again.

SELECT if(a2.id is NULL, b2.id, a2.id) ,a1.username ...
FROM
(SELECT username, MAX(log_date) as mlog_date
FROM (SELECT * FROM a
  UNION
SELECT * from b) as h
GROUP BY username) AS a1
LEFT JOIN a AS a2  ON a1.mlog_date = a2.log_date AND username...
LEFT JOIN b AS b2 ...

Anyone have a suggestion for how to do this in a way that is not ugly as heck?



--
Victor Danilchenko
Senior Software Engineer, AskOnline.net
[EMAIL PROTECTED] - 617-273-0119

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL query question for GROUP BY

2008-04-11 Thread Rob Wultsch
On Fri, Apr 11, 2008 at 1:01 PM, Victor Danilchenko
[EMAIL PROTECTED] wrote:
 Oooh, this looks evil. It seems like such a simple thing. I guess
 creating max(log_date) as a field, and then joining on it, is a solution --
 but my actual query (not the abridged version) is already half a page long.

 I think at this point, unless someone else suggests a better
 solution, this would be easier to do programmatically -- skip the group
 altogether, and instead simply order the rows, and grab the last one for
 each username in code.

 I guess another alternative would be to use a View for the UNIONized
 query, but doesn't MySQL 'fake' views in 5.0 somehow?

I have used views with good results; however, I have read some unfavorable
things about them. I would not be surprised if they worked well for this use.

I would also not be surprised if the merge storage engine was a better
option for you.

Possibly interesting:
http://www.mysqlperformanceblog.com/2007/08/12/mysql-view-as-performance-troublemaker/

-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Question about mysql procedures

2008-04-11 Thread Joshua D. Drake
Hello,

Can MySQL functions/stored procedures access database data?

Joshua D. Drake

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Question about mysql procedures

2008-04-11 Thread Rob Wultsch
On Fri, Apr 11, 2008 at 2:15 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
 Hello,

  Can MySQL functions/stored procedures access database data?

  Joshua D. Drake

Yes. Is there something in particular you are looking to do?


-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Question about mysql procedures

2008-04-11 Thread Paul DuBois

At 2:15 PM -0700 4/11/08, Joshua D. Drake wrote:

Hello,

Can MySQL functions/stored procedures access database data?


Yes, with some limitations.  You will want to read this
section to see whether what you want to do is restricted:

http://dev.mysql.com/doc/refman/5.0/en/routine-restrictions.html

For example, in a stored function, you cannot modify a table
if the table is referenced by the statement that invokes the
function.
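
A minimal sketch of a stored procedure that reads table data (the table name t and the procedure name here are illustrative only):

```sql
DELIMITER //
CREATE PROCEDURE row_total(OUT total INT)
BEGIN
  -- Stored routines may freely SELECT from tables
  SELECT COUNT(*) INTO total FROM t;
END//
DELIMITER ;

CALL row_total(@n);
SELECT @n;
```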

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Some way/tool to do this with database scripts backups

2008-04-11 Thread dr_pompeii

hello

let me explain further

consider a table like this one

cliente is customer/client in English


 Field             Type         Null  Key  Default  Extra
 ----------------  -----------  ----  ---  -------  -----
 idCliente         varchar(11)  NO    PRI
 razonCliente      varchar(80)  NO
 direccionCliente  varchar(80)  NO
 rucCliente        varchar(11)  NO
 dniCliente        varchar(9)   NO
 telefonoCliente   varchar(15)  NO
 celularCliente    varchar(10)  NO
 emailCliente      varchar(50)  NO
 fechaNacimiento   date         NO
 ciudad            varchar(20)  NO


ok
i would have 1 to N rows already inserted in the db,

and i can do a simple backup with some script/administration tool for any
database
(mysql/postgresql)


for instance A.sql
which would have this content


INSERT INTO `cliente`
(`idCliente`,`razonCliente`,`direccionCliente`,`rucCliente`,`dniCliente`,`telefonoCliente`,`celularCliente`,`emailCliente`,`fechaNacimiento`,`ciudad`)
VALUES 
 ('000273999','JOZEP E. VAN DEN OUWELAND','JIRON CONDE DE LEMOS # 226 -
PUNO','','000273999','','','','2008-02-15','PUNO'),
 ('00454507','ALBERTO JESUS MARTIN LAZO FERNANDEZ','URB. SEÑORIAL H-4 CAYMA
- AREQUIPA','','00454507','','','','2008-01-29','AREQUIPA'),
 .


the point is that, as you can see, table cliente has 10 columns in each
insertion written by the backup

the problem and my requirement is to create a new sql file (called B.sql)

with this content


UPDATE cliente SET razonCliente='JOZEP E. VAN DEN OUWELAND', ciudad='PUNO'
WHERE idCliente='000273999'
UPDATE cliente SET razonCliente='ALBERTO JESUS MARTIN LAZO FERNANDEZ',
ciudad='AREQUIPA' WHERE idCliente='00454507'
.


so for each INSERT statement created in A.sql i need to generate an UPDATE
statement in B.sql
with the columns that i want to update, in this case
razonCliente and ciudad

why do i need to create B.sql?
the reason is to have a batch with a lot of UPDATE statements,
so i can open the file and edit a specific row and column that i need

for instance

UPDATE cliente SET razonCliente='JOZEP E. VAN DEN OUWELAND', ciudad='OTHER
CITY' WHERE idCliente='000273999'
UPDATE cliente SET razonCliente='I HAVE A NEW NAME', ciudad='AREQUIPA' WHERE
idCliente='00454507'
.


maybe i would need to create a C.sql
like this

UPDATE cliente SET ciudad='PUNO' WHERE idCliente='000273999'
UPDATE cliente SET ciudad='AREQUIPA' WHERE idCliente='00454507'
.

so that i only have the option to edit the column ciudad for some rows

i hope you see my point now

thanks in advance
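
One way to generate such a B.sql directly from the data, rather than rewriting the INSERT dump, is to build the UPDATE statements with CONCAT and QUOTE (a sketch; the output path is illustrative and writing it requires the FILE privilege):

```sql
-- Emit one UPDATE per row, with values safely quoted
SELECT CONCAT('UPDATE cliente SET razonCliente=', QUOTE(razonCliente),
              ', ciudad=', QUOTE(ciudad),
              ' WHERE idCliente=', QUOTE(idCliente), ';')
  FROM cliente
  INTO OUTFILE '/tmp/B.sql';
```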
-- 
View this message in context: 
http://www.nabble.com/Some-way-tool-to-do-this-with-database-scripts-backups-tp16496072p16633902.html
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Question about mysql procedures

2008-04-11 Thread Joshua D. Drake
On Fri, 11 Apr 2008 14:52:30 -0700
Rob Wultsch [EMAIL PROTECTED] wrote:

 On Fri, Apr 11, 2008 at 2:15 PM, Joshua D. Drake
 [EMAIL PROTECTED] wrote:
  Hello,
 
   Can MySQL functions/stored procedures access database data?
 
   Joshua D. Drake
 
 Yes. Is there something in particular you are looking to do?
 

Just gathering information :) Thanks!

Joshua D. Drake

 


-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



' . . ' can't insert into a date field, but update can

2008-04-11 Thread wang shuming
Hi,
  table table1: int1 int auto_increment,
 date date not null default '0000-00-00'

 1) insert into table1 (date) values('    .  .  ')

 // shows: Incorrect date value: '    .  .  ' for column 'date' at row 1
// where '    .  .  ' == space(4)+'.'+space(2)+'.'+space(2)

2) update table1 set date='    .  .  '

  // this succeeds.

I hope I can insert a '    .  .  ' value into a date field.
MySQL 5.0.51

Regards!

Shuming Wang
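
Whether an invalid date literal is rejected or coerced depends on the server's SQL mode; if the intent is a "blank" date, one workaround (a sketch) is to store the zero date and format it only on output:

```sql
INSERT INTO table1 (`date`) VALUES ('0000-00-00');

-- Render the sentinel as blanks-and-dots only when displaying:
SELECT IF(`date` = '0000-00-00',
          '    .  .  ',
          DATE_FORMAT(`date`, '%Y.%m.%d')) AS d
  FROM table1;
```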