Re: Problem with having

2013-09-23 Thread Ananda Kumar
select recipe_id,max(maxdatetime) from data_csmeta group by recipe_id
having recipe_id=19166;


On Mon, Sep 23, 2013 at 4:15 PM, shawn green shawn.l.gr...@oracle.com wrote:

 Hi Larry,


 On 9/23/2013 3:58 PM, Larry Martell wrote:

 On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
 narula...@gmail.comwrote:

  Hi,

 In your second query, you seem to have MIN(date_time), but you are
 talking about maximum. So your group by query is actually pulling the
 minimum date for this recipe.


 I pasted the wrong query in. I get the same results regardless of if I
 have
 MIN or MAX - I get the id of the max, but the date_time of the min.



  On Mon, Sep 23, 2013 at 3:35 PM, Larry Martell larry.mart...@gmail.com wrote:

  I want to find the rows from a table that have the max date_time for
 each
 recipe. I know I've done this before with group by and having, but I
 can't
 seem to get it to work now. I get the correct row id, but not the
 correct
 date_time. I'm sure I'm missing something simple.

 For purposes of showing an example, I'll use one recipe, 19166.


 For that recipe here's the row I would want:

 mysql> select id, MAX(date_time) from data_cstmeta where recipe_id = 19166;
 +---------+---------------------+
 | id      | MAX(date_time)      |
 +---------+---------------------+
 | 1151701 | 2013-02-07 18:38:13 |
 +---------+---------------------+
 1 row in set (0.01 sec)

 I would think this query would give me that - it gives me the correct
 id,
 but not the correct date_time:

 mysql> SELECT id, date_time as MaxDateTime FROM data_cstmeta where
 recipe_id = 19166 group by recipe_id HAVING MIN(date_time);
 +---------+---------------------+
 | id      | MaxDateTime         |
 +---------+---------------------+
 | 1151701 | 2010-12-13 16:16:55 |
 +---------+---------------------+
 1 row in set (0.01 sec)

 How can I fix this?

 Thanks!
 -larry


 You have to do a two-stage match. One stage to find the MAX() of a value
 for each recipe_id, the other to match that MAX() to one or more rows to
 give you the best ID values.

 Here's a subquery method of doing it.  There are many many others (google
 for groupwise maximum)

 SELECT a.id, b.MaxDateTime
 FROM data_cstmeta a
 INNER JOIN (
 SELECT MAX(date_time) MaxDateTime
 FROM data_cstmeta
 WHERE recipe_id = 19166
 ) b
   on b.MaxDateTime = a.date_time
 WHERE recipe_id = 19166;
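
 One of those other patterns, sketched here as a correlated subquery against
 the same table (dropping the outer WHERE makes it return the newest row for
 every recipe_id):

 SELECT id, date_time
 FROM data_cstmeta t
 WHERE recipe_id = 19166
   AND date_time = (SELECT MAX(date_time)
                    FROM data_cstmeta
                    WHERE recipe_id = t.recipe_id);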

 Yours,
 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: replication newbie questions

2013-08-28 Thread Ananda Kumar
Why don't u try snapshot backups, where the lock is held for a shorter duration? Or
can't u take mysql dumps during the night when there is less db activity?

On Thursday, August 29, 2013, Ed L. mysql@bluepolka.net wrote:

 Mysql newbie here, looking for some help configuring 5.0.45 master-slave
replication.  Here's my scenario...

 We have a heavily loaded 30gb 5.0.45 DB we need to replicate via
master-slave configuration to a new, beefier server running same mysql
5.0.45, and then cutover to the new server.  Due to extreme SAN congestion
and a grossly overloaded master server, our DB dumps take 5.5 hours.  But
we cannot afford that much downtime or locking during the replication
transition; we can manage 10-15 minutes, but more is very problematic.

 I understand that FLUSH TABLES WITH READ LOCK will lock the tables for
the duration of the 5.5 hour dump.  Is this true?

 If so, we'd like to dump/initialize/sync slave WITHOUT any locking
anything the master for more than a few seconds if at all possible.  Will
this give us the dump we need?

 mysqldump --single-transaction --master-data --all-databases

 Thank you in advance for any help.

 Ed
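
 For InnoDB tables, --single-transaction dumps from a consistent snapshot inside
 a transaction instead of holding FLUSH TABLES WITH READ LOCK for the whole run,
 and --master-data writes the matching binary log coordinates into the dump as a
 CHANGE MASTER TO statement (it does take the read lock briefly to get them).
 Non-transactional tables such as MyISAM are not protected by it. A sketch of
 what is typically run on the new server after the dump has been loaded; host,
 user and coordinates below are placeholders only:

 CHANGE MASTER TO
   MASTER_HOST='old-master.example.com',
   MASTER_USER='repl',
   MASTER_PASSWORD='...',
   MASTER_LOG_FILE='mysql-bin.000123',
   MASTER_LOG_POS=4;
 START SLAVE;
 SHOW SLAVE STATUS\G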



Re: replication newbie questions

2013-08-28 Thread Ananda Kumar
if u have LVM's then the lock is held only for the duration of taking the
snapshot, which would be a few minutes if there is very little activity on the db.
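
A minimal sketch of the usual sequence, assuming the data directory sits on an
LVM volume and the snapshot itself is taken from another shell while the lock
is held:

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note File and Position if the copy will seed a slave
-- create the filesystem snapshot here (outside MySQL), then:
UNLOCK TABLES;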


On Wed, Aug 28, 2013 at 3:08 PM, Ed L. mysql@bluepolka.net wrote:

  On 8/28/13 2:00 PM, Ananda Kumar wrote:


 Why don't u try snapshot backups, where the lock is held for a shorter duration?
 Or can't u take mysql dumps during the night when there is less db activity?


 I neglected to mention these systems are both CentOS linux systems.

 Unfortunately, the 5.5 hour dumps are already done during the least busy
 times.

 Regarding snapshots, how long are snapshot locks held?  These are ext4
 filesystems.  Assuming the lock is not held for long, what's the
 recommended way to do snapshots on ext4?

 Thanks,
 Ed



 On Thursday, August 29, 2013, Ed L. mysql@bluepolka.net wrote:
 
  Mysql newbie here, looking for some help configuring 5.0.45 master-slave
 replication.  Here's my scenario...
 
  We have a heavily loaded 30gb 5.0.45 DB we need to replicate via
 master-slave configuration to a new, beefier server running same mysql
 5.0.45, and then cutover to the new server.  Due to extreme SAN congestion
 and a grossly overloaded master server, our DB dumps take 5.5 hours.  But
 we cannot afford that much downtime or locking during the replication
 transition; we can manage 10-15 minutes, but more is very problematic.
 
  I understand that FLUSH TABLES WITH READ LOCK will lock the tables for
 the duration of the 5.5 hour dump.  Is this true?
 
  If so, we'd like to dump/initialize/sync slave WITHOUT any locking
 anything the master for more than a few seconds if at all possible.  Will
 this give us the dump we need?
 
  mysqldump --single-transaction --master-data --all-databases
 
  Thank you in advance for any help.
 
  Ed
 





Re: Temporary Tables with Triggers Problem

2013-05-29 Thread Ananda Kumar
can you please share the code of the trigger. Are you getting any kind of error?


On Wed, May 29, 2013 at 6:49 PM, Neil Tompkins neil.tompk...@googlemail.com
 wrote:

 Hi,

 I've a trigger that writes some data to a temporary table; and at the end
 of the trigger writes all the temporary table data in one insert to our
 normal Innodb table.

 However, for some reason the trigger isn't copying the table from the
 temporary table to the Innodb table.  If I write in the trigger the inserts
 to the Innodb table, it works fine.

 Any ideas why.  I'm running MySQL 5.6.

 Thanks
 Neil



Re: Temporary Tables with Triggers Problem

2013-05-29 Thread Ananda Kumar
did u check if data is getting inserted into tempHotelRateAvailability


On Wed, May 29, 2013 at 7:21 PM, Neil Tompkins neil.tompk...@googlemail.com
 wrote:

 This is my Trigger which doesn't seem to work; but doesn't cause a error

 DROP TEMPORARY TABLE IF EXISTS tempHotelRateAvailability;

 CREATE TEMPORARY TABLE tempHotelRateAvailability(AuditTrailId
 varchar(36),UserId bigint(20),ActionType
 enum('INSERT','UPDATE','DELETE'),TableName varchar(36),RowKey
 varchar(255),FieldName varchar(36),OldValue text,NewValue text);
 IF NEW.RoomsToSell <> OLD.RoomsToSell THEN
 INSERT INTO tempHotelRateAvailability VALUES
 (UUID(),NEW.LastChangedBy,'UPDATE','HotelRateAvailability',
 CONCAT(OLD.RoomID,'|',OLD.Day),'RoomsToSell',OLD.RoomsToSell,NEW.RoomsToSell);
 END IF;

 IF SELECT COUNT(*) FROM tempHotelRateAvailability > 0 THEN
 INSERT INTO AuditTrail
 SELECT tempHotelRateAvailability.* FROM tempHotelRateAvailability;

 END IF;

 DROP TEMPORARY TABLE tempHotelRateAvailability;

 However if I use this call in the Trigger and change a value in the table
 it works fine;

 INSERT INTO AuditTrail
 (AuditTrailId,UserId,ActionType,TableName,RowKey,FieldName,OldValue,NewValue,
 LoggedOn)
  VALUES (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',1,2,
 NOW());


 On Wed, May 29, 2013 at 2:49 PM, Ananda Kumar anan...@gmail.com wrote:

 can you please share the code of the trigger. Any kind of error your
 getting


 On Wed, May 29, 2013 at 6:49 PM, Neil Tompkins 
 neil.tompk...@googlemail.com wrote:

 Hi,

 I've a trigger that writes some data to a temporary table; and at the end
 of the trigger writes all the temporary table data in one insert to our
 normal Innodb table.

 However, for some reason the trigger isn't copying the table from the
 temporary table to the Innodb table.  If I write in the trigger the
 inserts
 to the Innodb table, it works fine.

 Any ideas why.  I'm running MySQL 5.6.

 Thanks
 Neil






Re: Temporary Tables with Triggers Problem

2013-05-29 Thread Ananda Kumar
But, does it work inside the trigger? If not, then based on the logic,
there will not be any data, and the data does not get inserted from the temp table
to the innodb table
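
A sketch of how the two conditionals are usually written in a MySQL trigger
body, assuming the intent is to log only changed values and to copy only when
the temporary table actually has rows:

IF NEW.RoomsToSell <> OLD.RoomsToSell THEN
  INSERT INTO tempHotelRateAvailability VALUES
    (UUID(), NEW.LastChangedBy, 'UPDATE', 'HotelRateAvailability',
     CONCAT(OLD.RoomID,'|',OLD.Day), 'RoomsToSell', OLD.RoomsToSell, NEW.RoomsToSell);
END IF;

IF (SELECT COUNT(*) FROM tempHotelRateAvailability) > 0 THEN
  INSERT INTO AuditTrail SELECT * FROM tempHotelRateAvailability;
END IF;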


On Wed, May 29, 2013 at 7:29 PM, Neil Tompkins neil.tompk...@googlemail.com
 wrote:

 I took the following lines of code slightly modified and it returned some
 data using a normal Query Editor

 CREATE TEMPORARY TABLE tempHotelRateAvailability(AuditTrailId
 varchar(36),UserId bigint(20),ActionType varchar(36),TableName
 varchar(36),RowKey varchar(255),FieldName varchar(36),OldValue
 varchar(255),NewValue varchar(255), LoggedOn TIMESTAMP);
 INSERT INTO tempHotelRateAvailability VALUES
 (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',1,2, NOW());
 INSERT INTO tempHotelRateAvailability VALUES
 (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',4,4, NOW());
 SELECT * FROM tempHotelRateAvailability;


 On Wed, May 29, 2013 at 2:57 PM, Ananda Kumar anan...@gmail.com wrote:

 did u check if data is getting inserted into tempHotelRateAvailability


 On Wed, May 29, 2013 at 7:21 PM, Neil Tompkins 
 neil.tompk...@googlemail.com wrote:

 This is my Trigger which doesn't seem to work; but doesn't cause a error

  DROP TEMPORARY TABLE IF EXISTS tempHotelRateAvailability;

 CREATE TEMPORARY TABLE tempHotelRateAvailability(AuditTrailId
 varchar(36),UserId bigint(20),ActionType
 enum('INSERT','UPDATE','DELETE'),TableName varchar(36),RowKey
 varchar(255),FieldName varchar(36),OldValue text,NewValue text);
  IF NEW.RoomsToSell <> OLD.RoomsToSell THEN
 INSERT INTO tempHotelRateAvailability VALUES
 (UUID(),NEW.LastChangedBy,'UPDATE','HotelRateAvailability',
 CONCAT(OLD.RoomID,'|',OLD.Day),'RoomsToSell',OLD.RoomsToSell,NEW.RoomsToSell);
 END IF;

  IF SELECT COUNT(*) FROM tempHotelRateAvailability > 0 THEN
 INSERT INTO AuditTrail
 SELECT tempHotelRateAvailability.* FROM
 tempHotelRateAvailability;
 END IF;

 DROP TEMPORARY TABLE tempHotelRateAvailability;

 However if I use this call in the Trigger and change a value in the
 table it works fine;

 INSERT INTO AuditTrail
 (AuditTrailId,UserId,ActionType,TableName,RowKey,FieldName,OldValue,NewValue,
 LoggedOn)
  VALUES (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',1,2,
 NOW());


 On Wed, May 29, 2013 at 2:49 PM, Ananda Kumar anan...@gmail.com wrote:

 can you please share the code of the trigger. Any kind of error your
 getting


 On Wed, May 29, 2013 at 6:49 PM, Neil Tompkins 
 neil.tompk...@googlemail.com wrote:

 Hi,

 I've a trigger that writes some data to a temporary table; and at the
 end
 of the trigger writes all the temporary table data in one insert to our
 normal Innodb table.

 However, for some reason the trigger isn't copying the table from the
 temporary table to the Innodb table.  If I write in the trigger the
 inserts
 to the Innodb table, it works fine.

 Any ideas why.  I'm running MySQL 5.6.

 Thanks
 Neil








Re: Mesaure query speed and InnoDB pool

2013-04-16 Thread Ananda Kumar
Does your query use proper indexes?
Does your query scan a small number of blocks/rows?
Can you share the explain plan of the sql?
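
One way to get a number that does not depend on what happens to be cached,
a sketch (Handler_% counters are per-session; some Innodb_% counters are
server-wide, so a quiet server helps):

FLUSH STATUS;
SELECT ...;                -- the query being measured goes here
SHOW SESSION STATUS LIKE 'Handler%';
SHOW SESSION STATUS LIKE 'Innodb_rows_read';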


On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich 
ilya.kazakev...@jetbrains.com wrote:

 Hello,

 I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it reads
 data from disk and about 2 seconds when data already exists in pool. And it
 may take 10 seconds when _some_ pages are on disk and some are in pool.
 So, what is the best way to test query performance? I have several ideas:
 * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual time
 * Set pool as small as possible to reduce its effect on query speed
 * Set pool larger than my db and run query to load all data into pool and
 measure speed then

 How do you measure your queries' speed?

 Ilya Kazakevich


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Retrieve most recent of multiple rows

2013-03-14 Thread Ananda Kumar
not all the rows, only the distinct q_id,


On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 --

 *From: *Ananda Kumar anan...@gmail.com
 *Subject: *Re: Retrieve most recent of multiple rows



 select qid,max(atimestamp) from kkk where qid in (select distinct qid from
 kkk) group by qid;


 What use is that where statement? It just says to use all the rows in the
 table.


 --
 Unhappiness is discouraged and will be corrected with kitten pictures.



Re: Re: Retrieve most recent of multiple rows

2013-03-14 Thread Ananda Kumar
Mr. Stefan,
Thanks for letting me know about the basics of sql.
But if you see my earlier response, i have given the correct sql

select * from tab where answer_timestamp in (select max(answer_timestamp)
from tab where q_id in (select distinct q_id from tab) group by q_id);

You missed reading my earlier email, where my basics of sql are good enough.
I wrongly typed the sql and missed one where condition in my later
response.

Also, please understand, people on this forum are trying to help each
other. Some might give the required solution with exact requirement, while
some give just hints, which helps the requester in solving the issue.

Please be appreciative of the people trying to provide some kind of
solution.

hope this does not sound too harsh on u




On Thu, Mar 14, 2013 at 3:44 PM, Stefan Kuhn stef...@web.de wrote:

  not all the rows, only the distinct q_id,
 The subquery will give all distinct q_ids, but the select from
 in(subquery) will give everything, because each row must have a q_id which
 is returned by the subquery. The query after all says select all rows
 where the q_id is one of the q_ids occurring in the table and this, of
 course, gives everything.
 Also your select qid, max(atimestamp) is not doing what I think you want
 to do. I think you want all rows where atimestamp is somehow a maximum. Now
 in your query max(atimestamp) will return one single value and this will be
 used as a sort of constant. So if max(atimestamp) is e.g. 5, you get
 effectively select qid, 5 from ... You will get all rows with the constant
 becoming a row. You want to filter rows, so your condition must be in where
 (this is a basic concept of sql). Somehow (this is not correct sql, just a
 hint) it must be like select qid from kkk where atimestamp=max(atimestamp).
 If you want to filter, the condition must be in where; a function on a
 column behind select will not filter.
 I think you need to get a basic understanding of sql first, sorry if that
 sounds harsh
 Stefan


 On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman vegiv...@tuxera.be
 wrote:

  --
 
  *From: *Ananda Kumar anan...@gmail.com
  *Subject: *Re: Retrieve most recent of multiple rows
 
 
 
  select qid,max(atimestamp) from kkk where qid in (select distinct qid
 from
  kkk) group by qid;
 
 
  What use is that where statement? It just says to use all the rows in the
  table.
 
 
  --
  Unhappiness is discouraged and will be corrected with kitten pictures.
 



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Retrieve most recent of multiple rows

2013-03-13 Thread Ananda Kumar
this is the data i tested on
AID  QID  ANS   SCRID  ATIMESTAMP
---  ---  ----  -----  -------------------------
  1   10  male      3  13-MAR-13 02.03.20.00 PM
  2   10  male      3  13-MAR-13 02.03.28.00 PM
  3   11  male      3  13-MAR-13 02.03.33.00 PM
  4   10  male      3  13-MAR-13 02.03.36.00 PM
  5   11  male      3  13-MAR-13 02.03.41.00 PM
  6   12  male      3  13-MAR-13 02.03.48.00 PM
  6   11  male      3  13-MAR-13 02.04.04.00 PM

7 rows selected.

select qid,max(atimestamp) from kkk where qid in (select distinct qid from
kkk) group by qid;

QID  MAX(ATIMESTAMP)
---  -------------------------
 11  13-MAR-13 02.04.04.00 PM
 10  13-MAR-13 02.03.36.00 PM
 12  13-MAR-13 02.03.48.00 PM




On Wed, Mar 13, 2013 at 7:28 PM, Ananda Kumar anan...@gmail.com wrote:

 can you please share the sql that you executed to fetch the above data


 On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 - Original Message -
  From: Norah Jones nh.jone...@gmail.com
  Subject: Retrieve most recent of multiple rows
 
  4  10Male3  1363091019
  5  11Male3  1363091020
  6  12Male3  1363091020
  7  11Male3  1363091025
 
 
  4  10Male3  1363091019
  6  12Male3  1363091020
  7  11Male3  1363091020

 The data in your result does not match the data in your source. Please
 clarify :-)


 --
 Unhappiness is discouraged and will be corrected with kitten pictures.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





Re: Retrieve most recent of multiple rows

2013-03-13 Thread Ananda Kumar
select * from tab where answer_timestamp in (select max(answer_timestamp)
from tab where q_id in (select distinct q_id from tab) group by q_id);
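
A variant that ties each maximum back to its q_id, so a timestamp that repeats
under a different question cannot pull in extra rows (column and table names
taken from the sample below):

SELECT t.*
FROM tab t
JOIN (SELECT q_id, MAX(answer_timestamp) AS max_ts
      FROM tab
      GROUP BY q_id) m
  ON m.q_id = t.q_id
 AND m.max_ts = t.answer_timestamp;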


On Wed, Mar 13, 2013 at 6:48 PM, Norah Jones nh.jone...@gmail.com wrote:

 I have a table which looks like this:

 answer_id  q_id  answer  qscore_id  answer_timestamp
 1          10    Male    3          1363091016
 2          10    Male    3          1363091017
 3          11    Male    3          1363091018
 4          10    Male    3          1363091019
 5          11    Male    3          1363091020
 6          12    Male    3          1363091020
 7          11    Male    3          1363091025

 So I have multiple answers for the same questions (q_id). I want to be
 able to retrieve only ONE answer per question and that be the most recent
 answer.
 There should be THREE rows returned, which are all the most recent
 answered for that q_id:

 4          10    Male    3          1363091019
 6          12    Male    3          1363091020
 7          11    Male    3          1363091020




Re: Retrieve most recent of multiple rows

2013-03-13 Thread Ananda Kumar
can you please share the sql that you executed to fetch the above data


On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 - Original Message -
  From: Norah Jones nh.jone...@gmail.com
  Subject: Retrieve most recent of multiple rows
 
  4  10Male3  1363091019
  5  11Male3  1363091020
  6  12Male3  1363091020
  7  11Male3  1363091025
 
 
  4  10Male3  1363091019
  6  12Male3  1363091020
  7  11Male3  1363091020

 The data in your result does not match the data in your source. Please
 clarify :-)


 --
 Unhappiness is discouraged and will be corrected with kitten pictures.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: How to verify mysqldump files

2012-11-07 Thread Ananda Kumar
you can use a checksum to make sure there is no corruption in the file

On Wed, Nov 7, 2012 at 6:39 PM, Claudio Nanni claudio.na...@gmail.com wrote:

 Gary,

 It is always a good practice to test the whole solution backup/restore.
 So nothing is better than testing a restore, actually it should be a
 periodic procedure.
 As for the validity of the file usually is delegated to the operating
 system.
 If you want to check it yourself you may create an algorithm that analyses
 some patterns in the dump file to recognize that it is correct,
 starting may be from one that is working as 'valid' sample.

 Cheers

 Claudio



 2012/11/7 Gary listgj-my...@yahoo.co.uk

  Can anyone suggest how I could verify that the files created by
  mysqldump are okay? They are being created for backup purposes, and
  the last thing I want to do is find out that the backups themselves are
  in some way corrupt.
 
  I know I can check the output of the command itself, but what if.. I
  don't know... if there are problems with the disc it writes to, or
  something like that. Is there any way to check whether the output file
  is valid in the sense that it is complete and syntactically correct?
 
  --
  GaryPlease do NOT send me 'courtesy' replies off-list.
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 


 --
 Claudio



Re: Dumping drupal databases

2012-10-30 Thread Ananda Kumar
why dont u create a softlink

On Tue, Oct 30, 2012 at 11:05 PM, Tim Johnson t...@akwebsoft.com wrote:

 * Reindl Harald h.rei...@thelounge.net [121030 08:49]:
  The drupal mysql datafiles are located at
   /Applications/drupal-7.15-0/mysql/data
  
   as opposed to /opt/local/var/db/mysql5 for
   'customary' mysql.
 
  this crap is outside your mysqldata
  I don't know what you mean by crap. Sorry.
  Actually I do ... you're about to embark on a rant about Mac.
  Go for it! :)

  and that is why it was suggested that you teporary change
  the datadir of your mysqld to /Applications/drupal-7.15-0/mysql/data
  or if this are only MyISAM-atbles FOR SURE you can simply move
  the database folders in your running mysqld-datadir
   What would you suggest as a fool-proof method to accomplish this?
   I hate to tweak the 'original' my.cnf, I could add a ~/.my.cnf
   but how would I do that without clobbering my 'non-drupal' mysql?

   I want to reiterate that cp -p -r might actually be the solution,
   but have not tried that approach in the past

  typical MacOSX-style to throw things around the whole
  machine without any logical structure like applications
  in the userhome.. this is the price someone has to
  pay for a poor operating system without package management
   Oh don't get me started! Grr! If truth be told, I'm a long-time
   linux user and new to OS X.

   You are preaching to the converted my friend.

   I'm not certain I like the way acquaire did the setup. I
   didn't have this problem when I played with drupal on ubuntu.

 
 Great minds run in the same gutter.
   -- Alaskus Curmudgeous
 

 --
 Tim
 tim at tee jay forty nine dot com or akwebsoft dot com
 http://www.akwebsoft.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: error 13

2012-10-19 Thread Ananda Kumar
i mean ...do u connect just like

mysql -u username -p

or

mysql -u username -p password -h hostname

On Thu, Oct 18, 2012 at 11:07 PM, kalin ka...@el.net wrote:


 not sure i follow. what do you mean connect?!

 there is a server (mysqld) and a client (mysql). the client gets to the
 server via the socket. like on any other unix machine.

 how did i connect mysql to what exactly?



 On 10/18/12 6:42 AM, Ananda Kumar wrote:

 how did u connect mysql on your laptop

  On Thu, Oct 18, 2012 at 1:19 AM, kalin ka...@el.net wrote:


 thanks amanda...  the local worked for some reason...

 in about 10 years of using mysql i have never had to do that.

 i still wonder why would the thing not stat it.

 file and dir are both 777 and owned by mysql...

 one thing i noticed thou is that the actual user is not mysql but
 _mysql. that's how is stored in passwd and that's how it's
 represented by ls -al.

 the process thou was started with --user=mysql...

 but i still don't get the necessity of local. i have never used it
 before.

 this is all on os x - 10.8.2...




 On 10/17/12 1:25 PM, Ananda Kumar wrote:

 also try using load data local infile 'file path' and see if it
 works

  On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar anan...@gmail.com wrote:

  does both directory have permission 777


   On Wed, Oct 17, 2012 at 9:27 PM, Rick James rja...@yahoo-inc.com wrote:

  SELinux ?

-Original Message-
    From: Lixun Peng [mailto:pengli...@gmail.com]
Sent: Tuesday, October 16, 2012 9:03 PM
To: kalin
    Cc: Michael Dykman; mysql@lists.mysql.com
Subject: Re: error 13
   
Hi,
   
you can switch to mysql user su - mysql, and then
 stat this
  file.
if you can't access, mysql user have not permissions.
   
    On Wed, Oct 17, 2012 at 11:49 AM, kalin ka...@el.net wrote:
   



 On 10/16/12 10:49 PM, Michael Dykman wrote:

 is your mysql client on the same host as the
 mysql server?
  if not,
 google the docs for 'local infile'


 yes. my laptop.





 On 2012-10-16 10:45 PM, Lixun Peng pengli...@gmail.com wrote:

 Hi,

 What's the dir permissions?

 For example, if we have a file in /a/b/file, dir
 a is 644,
  even
 though file is 777, we can't access file, too.


 On Wed, Oct 17, 2012 at 10:31 AM, kalin ka...@el.net wrote:



 hi all..

 this hasn't happe...

 --
 Senior MySQL Developer @ Taobao.com

 Mobile Phone: +86 18658156856 (Hangzhou)
 Gtalk: penglixun(at)gmail.com http://gmail.com
 http://gmail.com

 Twitter: http://www.twitter.com/plinux
 Blog: http://www.penglixun.com


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql


   
   
--
Senior MySQL Developer @ Taobao.com
   
Mobile Phone: +86

Re: error 13

2012-10-18 Thread Ananda Kumar
how did u connect mysql on your laptop

On Thu, Oct 18, 2012 at 1:19 AM, kalin ka...@el.net wrote:


 thanks amanda...  the local worked for some reason...

 in about 10 years of using mysql i have never had to do that.

 i still wonder why would the thing not stat it.

 file and dir are both 777 and owned by mysql...

 one thing i noticed thou is that the actual user is not mysql but _mysql.
 that's how is stored in passwd and that's how it's represented by ls -al.

 the process thou was started with --user=mysql...

 but i still don't get the necessity of local. i have never used it
 before.

 this is all on os x - 10.8.2...




 On 10/17/12 1:25 PM, Ananda Kumar wrote:

 also try using load data local infile 'file path' and see if it works

 On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar anan...@gmail.com wrote:

 does both directory have permission 777


 On Wed, Oct 17, 2012 at 9:27 PM, Rick James rja...@yahoo-inc.com wrote:

 SELinux ?

   -Original Message-
   From: Lixun Peng [mailto:pengli...@gmail.com]
   Sent: Tuesday, October 16, 2012 9:03 PM
   To: kalin
   Cc: Michael Dykman; mysql@lists.mysql.com
   Subject: Re: error 13
  
   Hi,
  
   you can switch to mysql user su - mysql, and then stat this
 file.
   if you can't access, mysql user have not permissions.
  
   On Wed, Oct 17, 2012 at 11:49 AM, kalin ka...@el.net wrote:
  
   
   
   
On 10/16/12 10:49 PM, Michael Dykman wrote:
   
is your mysql client on the same host as the mysql server?
 if not,
google the docs for 'local infile'
   
   
yes. my laptop.
   
   
   
   
   
    On 2012-10-16 10:45 PM, Lixun Peng pengli...@gmail.com wrote:
   
Hi,
   
What's the dir permissions?
   
For example, if we have a file in /a/b/file, dir a is 644,
 even
though file is 777, we can't access file, too.
   
   
    On Wed, Oct 17, 2012 at 10:31 AM, kalin ka...@el.net wrote:
   
   
   
hi all..
   
this hasn't happe...
   
--
Senior MySQL Developer @ Taobao.com
   
Mobile Phone: +86 18658156856 (Hangzhou)
    Gtalk: penglixun(at)gmail.com

Twitter: http://www.twitter.com/plinux
Blog: http://www.penglixun.com
   
   
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
   
   
  
  
   --
   Senior MySQL Developer @ Taobao.com
  
   Mobile Phone: +86 18658156856 (Hangzhou)
   Gtalk: penglixun(at)gmail.com

   Twitter: http://www.twitter.com/plinux
   Blog: http://www.penglixun.com






Re: error 13

2012-10-17 Thread Ananda Kumar
does both directory have permission 777

On Wed, Oct 17, 2012 at 9:27 PM, Rick James rja...@yahoo-inc.com wrote:

 SELinux ?

  -Original Message-
  From: Lixun Peng [mailto:pengli...@gmail.com]
  Sent: Tuesday, October 16, 2012 9:03 PM
  To: kalin
  Cc: Michael Dykman; mysql@lists.mysql.com
  Subject: Re: error 13
 
  Hi,
 
  you can switch to mysql user su - mysql, and then stat this file.
  if you can't access, mysql user have not permissions.
 
  On Wed, Oct 17, 2012 at 11:49 AM, kalin ka...@el.net wrote:
 
  
  
  
   On 10/16/12 10:49 PM, Michael Dykman wrote:
  
   is your mysql client on the same host as the mysql server? if not,
   google the docs for 'local infile'
  
  
   yes. my laptop.
  
  
  
  
  
   On 2012-10-16 10:45 PM, Lixun Peng pengli...@gmail.com wrote:
  
   Hi,
  
   What's the dir permissions?
  
   For example, if we have a file in /a/b/file, dir a is 644, even
   though file is 777, we can't access file, too.
  
  
   On Wed, Oct 17, 2012 at 10:31 AM, kalin ka...@el.net wrote:
  
  
  
   hi all..
  
   this hasn't happe...
  
   --
   Senior MySQL Developer @ Taobao.com
  
   Mobile Phone: +86 18658156856 (Hangzhou)
   Gtalk: penglixun(at)gmail.com
   Twitter: http://www.twitter.com/plinux
   Blog: http://www.penglixun.com
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
 
 
  --
  Senior MySQL Developer @ Taobao.com
 
  Mobile Phone: +86 18658156856 (Hangzhou)
  Gtalk: penglixun(at)gmail.com
  Twitter: http://www.twitter.com/plinux
  Blog: http://www.penglixun.com



Re: error 13

2012-10-17 Thread Ananda Kumar
also try using load data local infile 'file path' and see if it works
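
The difference between the two forms, sketched with an illustrative path and
table name: the plain form reads a file on the server host and needs the FILE
privilege, the LOCAL form reads the file on the client side and needs
local_infile enabled on both client and server.

LOAD DATA INFILE '/path/to/data.csv' INTO TABLE t
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

LOAD DATA LOCAL INFILE '/path/to/data.csv' INTO TABLE t
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';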

On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar anan...@gmail.com wrote:

 does both directory have permission 777


 On Wed, Oct 17, 2012 at 9:27 PM, Rick James rja...@yahoo-inc.com wrote:

 SELinux ?

  -Original Message-
  From: Lixun Peng [mailto:pengli...@gmail.com]
  Sent: Tuesday, October 16, 2012 9:03 PM
  To: kalin
  Cc: Michael Dykman; mysql@lists.mysql.com
  Subject: Re: error 13
 
  Hi,
 
  you can switch to mysql user su - mysql, and then stat this file.
  if you can't access, mysql user have not permissions.
 
  On Wed, Oct 17, 2012 at 11:49 AM, kalin ka...@el.net wrote:
 
  
  
  
   On 10/16/12 10:49 PM, Michael Dykman wrote:
  
   is your mysql client on the same host as the mysql server? if not,
   google the docs for 'local infile'
  
  
   yes. my laptop.
  
  
  
  
  
   On 2012-10-16 10:45 PM, Lixun Peng pengli...@gmail.com wrote:
  
   Hi,
  
   What's the dir permissions?
  
   For example, if we have a file in /a/b/file, dir a is 644, even
   though file is 777, we can't access file, too.
  
  
   On Wed, Oct 17, 2012 at 10:31 AM, kalin ka...@el.net wrote:
  
  
  
   hi all..
  
   this hasn't happe...
  
   --
   Senior MySQL Developer @ Taobao.com
  
   Mobile Phone: +86 18658156856 (Hangzhou)
   Gtalk: penglixun(at)gmail.com
   Twitter: http://www.twitter.com/plinux
   Blog: http://www.penglixun.com
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
 
 
  --
  Senior MySQL Developer @ Taobao.com
 
  Mobile Phone: +86 18658156856 (Hangzhou)
  Gtalk: penglixun(at)gmail.com
  Twitter: http://www.twitter.com/plinux
  Blog: http://www.penglixun.com





Re: Temporary table creation fails

2012-09-10 Thread Ananda Kumar
try this command and see if you can get more info about the error

show innodb status\G

On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail 
machiel.richa...@gmail.com wrote:

 Hi All

 I am hoping someone can point me in the right direction.

 We have a mysql 5.0 database which is giving us hassles when trying to
 create a temporary table.

 The creation works perfectly on the slave machine as well as all our
 other db's, however on the one specific main server we are getting the
 message ERROR 1005 (HY000): Can't create table
 '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)


  We have been trying to figure this out for more than a week now with
 no luck.

  When looking in the mysql error.log file, the innodb states that
 there is either another mysqld running (confirmed and not the case) or it
 is a filesystem permissions problem.


   We have checked several times and the permissions on the /tmp
 filesystem is the same as on all servers.


 Any suggestions would be greatly appreciated as google didnt help much
 either.


 Regards


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Temporary table creation fails

2012-09-10 Thread Ananda Kumar
can you try setting sort_buffer_size to a big value at your session level
and create the table

On Mon, Sep 10, 2012 at 2:54 PM, Machiel Richards - Gmail 
machiel.richa...@gmail.com wrote:

  Hi

 We tried that as well, however the databases are quite busy and either
 other transactions overwrite the info, or there is nothing logged.

 We even tried running the create statement and immediately running
 Show innodb status, but nothing for that statement.

 Regards





 On 09/10/2012 11:05 AM, Ananda Kumar wrote:

 try this command and see if you can get more info about the error

  show innodb status\G

 On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail 
 machiel.richa...@gmail.com wrote:

 Hi All

 I am hoping someone can point me in the right direction.

 We have a mysql 5.0 database which is giving us hassles when trying
 to create a temporary table.

 The creation works perfectly on the slave machine as well as all our
 other db's, however on the one specific main server we are getting the
 message ERROR 1005 (HY000): Can't create table
 '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)


  We have been trying to figure this out for more than a week now with
 no luck.

  When looking in the mysql error.log file, the innodb states that
 there is either another mysqld running (confirmed and not the case) or it
 is a filesystem permissions problem.


   We have checked several times and the permissions on the /tmp
 filesystem is the same as on all servers.


 Any suggestions would be greatly appreciated as google didnt help
 much either.


 Regards


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql






Re: Temporary table creation fails

2012-09-10 Thread Ananda Kumar
start with 500MB and try

On Mon, Sep 10, 2012 at 3:31 PM, Machiel Richards - Gmail 
machiel.richa...@gmail.com wrote:

  Hi, the sort_buffer_size was set to 8Mb as well as 32M  for the session
 (currently 1M) and retried with same result.





 On 09/10/2012 11:55 AM, Ananda Kumar wrote:

 can you trying setting sort_buffer_size to big value at your session level
 and create the table

 On Mon, Sep 10, 2012 at 2:54 PM, Machiel Richards - Gmail 
 machiel.richa...@gmail.com wrote:

  Hi

 We tried that as well, however the databases are quite busy and
 either other transactions overwrite the info, or there is nothing logged.

 We even tried running the create statement and immediately running
 Show innodb status, but nothing for that statement.

 Regards





 On 09/10/2012 11:05 AM, Ananda Kumar wrote:

 try this command and see if you can get more info about the error

  show innodb status\G

 On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail 
 machiel.richa...@gmail.com wrote:

 Hi All

 I am hoping someone can point me in the right direction.

 We have a mysql 5.0 database which is giving us hassles when trying
 to create a temporary table.

 The creation works perfectly on the slave machine as well as all our
 other db's, however on the one specific main server we are getting the
 message ERROR 1005 (HY000): Can't create table
 '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)


  We have been trying to figure this out for more than a week now
 with no luck.

  When looking in the mysql error.log file, the innodb states that
 there is either another mysqld running (confirmed and not the case) or it
 is a filesystem permissions problem.


   We have checked several times and the permissions on the /tmp
 filesystem is the same as on all servers.


 Any suggestions would be greatly appreciated as google didnt help
 much either.


 Regards


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql








Re: Temporary table creation fails

2012-09-10 Thread Ananda Kumar
this temp table will hold how many rows, what would be its size.

On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail 
machiel.richa...@gmail.com wrote:

 Hi,
 We confirmed that the /tmp directory permissions is set to rwxrwxrwxt
 and is owned by root , the same as all our other servers.

 There is also about 60Gb of free space on the filesystem where /tmp
 resides.

 Regards





 On 09/10/2012 01:11 PM, Rik Wasmus wrote:

 the message ERROR 1005 (HY000): Can't create table
 '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)

 Basics first:
 1) Is the /tmp directory write  readable for the user mysql runs as?
 2) Has the /tmp directory enough free space?



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Temporary table creation fails

2012-09-10 Thread Ananda Kumar
did u check if there are any firewall settings forbidding you to create files;
check if SELinux is disabled

On Mon, Sep 10, 2012 at 6:08 PM, Machiel Richards - Gmail 
machiel.richa...@gmail.com wrote:

  Hi


 at the moment this does not really matter to us.

 we have even tried to create a temp table with only one field in order
 to insert one row for testing, but we are currently not able to create any
 temporary tables whatsoever as even the simplest form of table still gives
 the same error.

 Regards



 On 09/10/2012 02:33 PM, Ananda Kumar wrote:

 this temp table will hold how many rows, what would be its size.

 On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail 
 machiel.richa...@gmail.com wrote:

 Hi,
 We confirmed that the /tmp directory permissions is set to rwxrwxrwxt
 and is owned by root , the same as all our other servers.

 There is also about 60Gb of free space on the filesystem where /tmp
 resides.

 Regards





 On 09/10/2012 01:11 PM, Rik Wasmus wrote:

  the message ERROR 1005 (HY000): Can't create table
 '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)

 Basics first:
 1) Is the /tmp directory write  readable for the user mysql runs as?
 2) Has the /tmp directory enough free space?



 --
   MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql






Re: Temporary table creation fails

2012-09-10 Thread Ananda Kumar
 please share the command ur using to create the temp table
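
A minimal test that leaves the application statement out of it entirely;
the table name and column here are made up for the test:

CREATE TEMPORARY TABLE tmp_check (id INT) ENGINE=InnoDB;
INSERT INTO tmp_check VALUES (1);
SELECT * FROM tmp_check;
DROP TEMPORARY TABLE tmp_check;
SHOW VARIABLES LIKE 'tmpdir';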

On Mon, Sep 10, 2012 at 6:11 PM, Machiel Richards - Gmail 
machiel.richa...@gmail.com wrote:

  no selinux , checked this as well.

 We generally dont use selinux and disable it completely from installation.

 I have also gone through the firewall settings and that is only rules for
 connections.





 On 09/10/2012 02:40 PM, Ananda Kumar wrote:

 did u check if there any firewall settings, forbidding you to create
 files, check if  SELinux  is disabled

 On Mon, Sep 10, 2012 at 6:08 PM, Machiel Richards - Gmail 
 machiel.richa...@gmail.com wrote:

  Hi


 at the moment this does not really matter to us.

 we have even tried to create a temp table with only one field in
 order to insert one row for testing, but we are currently not able to
 create any temporary tables whatsoever as even the simplest form of table
 still gives the same error.

 Regards



 On 09/10/2012 02:33 PM, Ananda Kumar wrote:

 this temp table will hold how many rows, what would be its size.

 On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail 
 machiel.richa...@gmail.com wrote:

 Hi,
 We confirmed that the /tmp directory permissions is set to
 rwxrwxrwxt and is owned by root , the same as all our other servers.

 There is also about 60Gb of free space on the filesystem where /tmp
 resides.

 Regards





 On 09/10/2012 01:11 PM, Rik Wasmus wrote:

  the message ERROR 1005 (HY000): Can't create table
 '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)

 Basics first:
 1) Is the /tmp directory write  readable for the user mysql runs as?
 2) Has the /tmp directory enough free space?



 --
   MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql








Re: Replication setup hep

2012-08-02 Thread Ananda Kumar
if the server is offline, what kind of operation happens on it?

On Thu, Aug 2, 2012 at 11:31 AM, Pothanaboyina Trimurthy 
skd.trimur...@gmail.com wrote:

 Hi everyone
  i have 4 mysql servers out of those one server will
 be online always and the remaining will be offline and online. the
 operations takes place at the offline servers i want to replicate the
 data from the offline servers to the online server. for this can any
 one help me which kind of replication suits for this kind of
 situation.

 thanks in advance

 --
 3murthy

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: restrict Connect Time

2012-07-23 Thread Ananda Kumar
you can set this in the application server.
You can also set this parameter in my.cnf
wait_timeout=120 in seconds.
But the above parameter is only for inactive session
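
A sketch of the two ways to set it at runtime (values are in seconds; the
global value only applies to connections opened after the change):

SET SESSION wait_timeout = 120;
SET GLOBAL wait_timeout = 120;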


On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de wrote:

 Hi list,
 is there a switch where i can restrict the connect/execution time for a
 query ?

 re,
  wh

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: restrict Connect Time

2012-07-23 Thread Ananda Kumar
you can check the slow query log, this will give you all the sql's which
are taking more time to execute
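
On 5.1 and later the slow query log can be switched on without a restart,
a sketch with an illustrative threshold:

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 5;
SHOW VARIABLES LIKE 'slow_query_log_file';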

On Mon, Jul 23, 2012 at 7:38 PM, walter harms wha...@bfs.de wrote:



 Am 23.07.2012 15:47, schrieb Ananda Kumar:
  you can set this is in application server.
  You can also set this parameter in my.cnf
  wait_timeout=120 in seconds.
  But the above parameter is only for inactive session
 


 acutualy i want to catch scripts running wild.

 re,
  wh

 
  On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de wrote:
 
  Hi list,
  is there a switch where i can restrict the connect/execution time for a
  query ?
 
  re,
   wh
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 



Re: restrict Connect Time

2012-07-23 Thread Ananda Kumar
why dont u setup a staging env, which is very much similar to your
production and tune all long running sql

On Mon, Jul 23, 2012 at 8:02 PM, walter harms wha...@bfs.de wrote:



 Am 23.07.2012 16:10, schrieb Ananda Kumar:
  you can check the slow query log, this will give you all the sql's which
  are taking more time to execute
 

 Yes but you will see the results only when the query is finished.
 my first idea was to use something like this:
 select * from information_schema.processlist where state like 'executing'
 and time > 1000;

 unfortunately time i cumulative and i would kill long running processes
 that we have also.
 i guess i will make some assumptions about the statement and kill the rest.

 re,
  wh


  On Mon, Jul 23, 2012 at 7:38 PM, walter harms wha...@bfs.de wrote:
 
 
 
  Am 23.07.2012 15:47, schrieb Ananda Kumar:
  you can set this is in application server.
  You can also set this parameter in my.cnf
  wait_timeout=120 in seconds.
  But the above parameter is only for inactive session
 
 
 
  acutualy i want to catch scripts running wild.
 
  re,
   wh
 
 
  On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de wrote:
 
  Hi list,
  is there a switch where i can restrict the connect/execution time for
 a
  query ?
 
  re,
   wh
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 



Re: restrict Connect Time

2012-07-23 Thread Ananda Kumar
so. its more of inactive connections, right.
What do you mean by NEVER LOGOUT

On Mon, Jul 23, 2012 at 8:17 PM, walter harms wha...@bfs.de wrote:



 Am 23.07.2012 16:37, schrieb Ananda Kumar:
  why dont u setup a staging env, which is very much similar to your
  production and tune all long running sql
 

 They are tuned and they are fast :) but the never logout and therefore
 the time get accumulated.

 re,
  wh


  On Mon, Jul 23, 2012 at 8:02 PM, walter harms wha...@bfs.de wrote:
 
 
 
  Am 23.07.2012 16:10, schrieb Ananda Kumar:
  you can check the slow query log, this will give you all the sql's
 which
  are taking more time to execute
 
 
  Yes but you will see the results only when the query is finished.
  my first idea was to use something like this:
  select * from information_schema.processlist where state like
 'executing'
  and time > 1000;
 
  unfortunately time i cumulative and i would kill long running processes
  that we have also.
  i guess i will make some assumptions about the statement and kill the
 rest.
 
  re,
   wh
 
 
  On Mon, Jul 23, 2012 at 7:38 PM, walter harms wha...@bfs.de wrote:
 
 
 
  Am 23.07.2012 15:47, schrieb Ananda Kumar:
  you can set this is in application server.
  You can also set this parameter in my.cnf
  wait_timeout=120 in seconds.
  But the above parameter is only for inactive session
 
 
 
  acutualy i want to catch scripts running wild.
 
  re,
   wh
 
 
  On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de wrote:
 
  Hi list,
  is there a switch where i can restrict the connect/execution time
 for
  a
  query ?
 
  re,
   wh
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 
 
 



Re: restrict Connect Time

2012-07-23 Thread Ananda Kumar
not very clear. You use connection pooling, right.
If the same session/thread is held for long time, it means the thread is
not getting closed even after doing its job, and adding to execution time.

On Mon, Jul 23, 2012 at 9:05 PM, walter harms wha...@bfs.de wrote:



 Am 23.07.2012 16:58, schrieb Ananda Kumar:
  so. its more of inactive connections, right.
  What do you mean by NEVER LOGOUT
 

 The programms watch certain states in the database,
 the connect automatic at db startup, disconnecting
 is an error case.

 re,
  wh


  On Mon, Jul 23, 2012 at 8:17 PM, walter harms wha...@bfs.de wrote:
 
 
 
  Am 23.07.2012 16:37, schrieb Ananda Kumar:
  why dont u setup a staging env, which is very much similar to your
  production and tune all long running sql
 
 
  They are tuned and they are fast :) but the never logout and therefore
  the time get accumulated.
 
  re,
   wh
 
 
  On Mon, Jul 23, 2012 at 8:02 PM, walter harms wha...@bfs.de wrote:
 
 
 
  Am 23.07.2012 16:10, schrieb Ananda Kumar:
  you can check the slow query log, this will give you all the sql's
  which
  are taking more time to execute
 
 
  Yes but you will see the results only when the query is finished.
  my first idea was to use something like this:
  select * from information_schema.processlist where state like
  'executing'
  and time > 1000;
 
  unfortunately time i cumulative and i would kill long running
 processes
  that we have also.
  i guess i will make some assumptions about the statement and kill the
  rest.
 
  re,
   wh
 
 
  On Mon, Jul 23, 2012 at 7:38 PM, walter harms wha...@bfs.de wrote:
 
 
 
  Am 23.07.2012 15:47, schrieb Ananda Kumar:
  you can set this is in application server.
  You can also set this parameter in my.cnf
  wait_timeout=120 in seconds.
  But the above parameter is only for inactive session
 
 
 
  acutualy i want to catch scripts running wild.
 
  re,
   wh
 
 
  On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de
 wrote:
 
  Hi list,
  is there a switch where i can restrict the connect/execution time
  for
  a
  query ?
 
  re,
   wh
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 
 
 
 
 



Re: Finding Rows With Common Items

2012-07-12 Thread Ananda Kumar
SQL> select * from orddd;

   ORDERID     PRODID
---------- ----------
         2          5
         1          3
         1          2
         2          7
         1          5

SQL> select prodid,count(*) from orddd group by PRODID having count(*) > 1;

    PRODID   COUNT(*)
---------- ----------
         5          2
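
A self-join sketch on the same sample table that goes one step further and
pairs up the orders sharing two or more products:

SELECT a.orderid AS order1, b.orderid AS order2, COUNT(*) AS shared
FROM orddd a
JOIN orddd b
  ON b.prodid = a.prodid
 AND b.orderid > a.orderid
GROUP BY a.orderid, b.orderid
HAVING shared >= 2;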



On Thu, Jul 12, 2012 at 11:42 PM, Shawn Green shawn.l.gr...@oracle.com wrote:

 On 7/12/2012 1:49 PM, John Nichel wrote:

 Hi all,

 Lets say I have a table with two colums: 'orderid' and 'productid'.  This
 table contains line items for purchases in a store where a purchase could
 be 1 or more line items (1 or more rows).  A typical order may look like
 this:

 orderid | productid
 12345   | 9876
 12345   | 6789
 12345   | 7698

 Is there a simple way to query the table to pull orders which have 2 or
 more products in common?  For example, if order 12345 has 5 line items and
 order 12348 has 7 line items, I would like the query to return these two
 orders if they have products 9876 and 6789 in common, and I would also
 like it to return orders 23456, 65432 and 34567 where they have 8796, 6789
 and 4456 in common.  I hope I'm explaining this well...I know what I'm
 trying to accomplish in my head, but we all know that doesn't always
 translate too well in an email.  :)  For the record, this is a MySQL
 4.1.22 db.  TIA


 Basically you want an index on (productid, orderid) on the main table to
 do the lookup faster. As you may want to do this query only once, make a
 list of all of the products that have particpated in more than one order
 then periodically update this list.

 One way to create this list is with the following query:

 CREATE TABLE multipleordprods (key (productid,ordercount), key(ordercount,
 productid))
 SELECT productid, count(orderid) ordercount
 FROM your table name here
 GROUP BY productid HAVING ordercount > 1;

 Now we have a list of all productid values that participated in more than
 1 order. You can now sort this table by number of orders or by product. You
 can join this table back to your original table to get a list of the
 orderid for any one product. This summary table is the key to drilling into
 your data.

 You can also add more columns to this table or create other summary tables
 using combinations of time or price or any other dimensions you want to use
 to slice and dice your data. This is the core principal to designing a data
 warehouse for online analytical processing (OLAP).

 Yours,
 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: why does select * from table oder by indexed_field not use key?

2012-07-11 Thread Ananda Kumar
column used in the order by clause should be the first column in the
select statement to make the index work

On Wed, Jul 11, 2012 at 3:16 PM, Reindl Harald h.rei...@thelounge.net wrote:



 Am 11.07.2012 11:43, schrieb Ewen Fortune:
  Hi,
 
  On Wed, Jul 11, 2012 at 10:31 AM, Reindl Harald h.rei...@thelounge.net
 wrote:
  the mysql query optimizer is somehow stupid
 
  Its not stupid - remember its not trying to find the best index,
  rather its trying to find the least costly plan
  to return the data in the quickest manner.
 
  For the optimizer in this case it believes its faster to do a full
  table scan with filesort rather than read from the index
  and have to scan the entire table anyway.
 
  Quick test shows it is indeed faster to do a full table scan.
 
  mysql> show profiles;
  +----------+------------+-------------------------------------------------------------------------------------------+
  | Query_ID | Duration   | Query                                                                                     |
  +----------+------------+-------------------------------------------------------------------------------------------+
  |        1 | 0.32261700 | SELECT SQL_NO_CACHE * FROM cms1_quickbar_groups force key (qbq_key) ORDER BY qg_sort ASC |
  |        2 | 0.24592100 | SELECT SQL_NO_CACHE * FROM cms1_quickbar_groups ORDER BY qg_sort ASC                     |
  +----------+------------+-------------------------------------------------------------------------------------------+
  2 rows in set (0.00 sec)

 this may be true for small data where it does not matter at all
 but if this would be a large table it would cause a lot of I/O
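
 When the large-table case matters, one option (a sketch, assuming an index on
 qg_sort exists) is to select only columns the index covers, so the rows come
 back in index order without a filesort:

 EXPLAIN SELECT qg_sort FROM cms1_quickbar_groups ORDER BY qg_sort;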




Re: Query take too long time - please help!

2012-07-10 Thread Ananda Kumar
can u show the explain plan for your query

On Tue, Jul 10, 2012 at 2:59 PM, Darek Maciera darekmaci...@gmail.com wrote:

 Hello,

 I have table:

 mysql> DESCRIBE books;

 | Field | Type      | Null | Key | Default           | Extra          |
 | id    | int(255)  | NO   | PRI | NULL              | auto_increment |
 | idu   | int(255)  | NO   | MUL | NULL              |                |
 | ksd   | char(15)  | YES  |     | NULL              |                |
 | idn   | int(1)    | NO   | MUL | NULL              |                |
 | text1 | text      | NO   |     | NULL              |                |
 | ips   | int(1)    | NO   | MUL | NULL              |                |
 | ips2  | int(1)    | NO   | MUL | NULL              |                |
 | date  | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
 | date2 | date      | NO   |     | NULL              |                |
 | text2 | text      | NO   |     | NULL              |                |
 | text3 | text      | NO   |     | NULL              |                |

 I have indexes in this table:

 mysql> SHOW INDEX FROM uzytkownicy_strona_kody;

 | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
 | books |          0 | PRIMARY  |            1 | id          | A         |      369625 | NULL     | NULL   |      | BTREE      |
 | books |          1 | idx_idu  |            1 | idu         | A         |      184812 | NULL     | NULL   |      | BTREE      |
 | books |          1 | idx_id   |            1 | id          | A         |      369625 | NULL     | NULL   |      | BTREE      |
 | books |          1 | idx_idn  |            1 | idn         | A         |           8 | NULL     | NULL   |      | BTREE      |
 | books |          1 | idx_ips  |            1 | ips         | A         |           8 | NULL     | NULL   |      | BTREE      |
 | books |          1 | idx_ips2 |            1 | ips2        | A         |           8 | NULL     | NULL   |      | BTREE      |

 Every books have unique: 'ksd'. There are about 370.000 records in this
 table.

 But this SELECT take too long time:

 mysql> SELECT * FROM books WHERE ksd ='A309CC47B7';
 1 row in set (2.59 sec)


 Table is in InnoDB engine. I added to my.cnf:   innodb_buffer_pool_size =
 512MB

 Any suggestions? Help, please..

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Query take too long time - please help!

2012-07-10 Thread Ananda Kumar
you are using a function, LOWER(), which will not make use of the unique key
index on ksd.
Mysql does not support function-based indexes, hence your query is doing a
FULL TABLE scan and taking more time.
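
A sketch of the usual fix, assuming ksd really is unique per book and the
column uses one of the default case-insensitive (_ci) collations, so the
LOWER() calls can simply be dropped:

ALTER TABLE books ADD UNIQUE INDEX idx_ksd (ksd);
SELECT * FROM books WHERE ksd = 'A309CC47B7';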

On Tue, Jul 10, 2012 at 4:46 PM, Darek Maciera darekmaci...@gmail.com wrote:

 2012/7/10 Ananda Kumar anan...@gmail.com:
  can u show the explain plan for your query
 

 Thanks, for reply!

 Sure:

 mysql> EXPLAIN SELECT * FROM books WHERE LOWER(ksd)=LOWER('4204661375');
 +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
 | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
 +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
 |  1 | SIMPLE      | books | ALL  | NULL          | NULL | NULL    | NULL | 378241 | Using where |
 +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
 1 row in set (0.00 sec)

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Indexing about 40 Billion Entries

2012-06-20 Thread Ananda Kumar
looks like the value that you gave for myisam_max_sort_file_size is not enough
for the index creation and hence it is doing a REPAIR WITH KEYCACHE

Use the below query to get the minimum value required for myisam_max_sort_file_size
to avoid repair with keycache


select
a.index_name as index_name,
ceil(
(select count(*) from `db_name`.`table_name`) *(
@@global.myisam_data_pointer_size +
sum(ifnull(1 + `a`.`sub_part` *
(`b`.`character_octet_length`/`b`.`character_maximum_length`),
ifnull(1 + `b`.`character_octet_length`,
case
when `b`.`data_type` = 'tinyint' then 1
when `b`.`data_type` = 'smallint' then 2
when `b`.`data_type` = 'mediumint' then 3
when `b`.`data_type` = 'int' then 4
when `b`.`data_type` = 'bigint' then 8
when `b`.`data_type` = 'float' then 4
when `b`.`data_type` = 'double' then 8
when `b`.`data_type` = 'real' then 8
when `b`.`data_type` = 'bit' then 8
when `b`.`data_type` = 'date' then 3
when `b`.`data_type` = 'datetime' then 8
when `b`.`data_type` = 'timestamp' then 4
when `b`.`data_type` = 'time' then 3
when `b`.`data_type` = 'year' then 1
when `b`.`data_type` = 'enum' then 2
when `b`.`data_type` = 'set' then 8
when `b`.`data_type` = 'decimal' then 8

end
)

)
+ if(`a`.`nullable`='YES',1,0)
+ if(`b`.`character_octet_length` >= 255,2,0)
))/1048576)*1048576  as `index_size`
from
`information_schema`.`statistics` `a`,
`information_schema`.`columns` `b`
where
`a`.`table_name`=`b`.`table_name` and
`a`.`table_schema`=`b`.`table_schema` and
`a`.`column_name`=`b`.`column_name` and
`a`.`table_schema`='db_name' and
`a`.`table_name`='table_name'

group by `a`.`index_name`
order by `index_size` desc limit 1;
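
A sketch of applying the result before rebuilding the indexes; the numbers
below are placeholders for whatever the query returns, and the index names are
made up (myisam_max_sort_file_size is global-only, myisam_sort_buffer_size can
be raised per session):

SET GLOBAL myisam_max_sort_file_size = 1099511627776;
SET SESSION myisam_sort_buffer_size = 1073741824;
ALTER TABLE l4_link ADD INDEX idx_ruid1 (ruid1), ADD INDEX idx_ruid2 (ruid2);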

On Wed, Jun 20, 2012 at 3:15 PM, Christian Koetteritzsch 
ckoetteritz...@e-humanities.net wrote:

 Hi guys,

 As the title says I'm trying to index 40 billion entries with two indexes
 on a server with 16 cores and 128GB RAM. The table is the one below and it
 is a myisam table. The *.myd file is about 640GB

 DROP TABLE IF EXISTS `l4_link`;
 CREATE TABLE  `l4_link` (
  `ruid1` int NOT NULL,
  `ruid2` int NOT NULL,
  `overlap` int NOT NULL
 );

 I need an index for ruid1 and for ruid2.

 The status for this table is the following:

   Name: l4_link
 Engine: MyISAM
Version: 10
 Row_format: Fixed
   Rows: 39806500262
  Avg_row_length: 17
Data_length: 676710504454
 Max_data_length: 4785074604081151
   Index_length: 1024
  Data_free: 0
  Auto_increment: NULL
Create_time: 2012-06-19 14:51:29
Update_time: 2012-06-19 16:26:35
 Check_time: NULL
  Collation: utf8_general_ci
   Checksum: NULL
  Create_options:
Comment:

 The variables for myisam are the following:
 mysql> show global variables like '%myisam%';
 +----------------------------+----------------------+
 | Variable_name              | Value                |
 +----------------------------+----------------------+
 | myisam_data_pointer_size   | 6                    |
 | myisam_max_sort_file_size  | 9223372036853727232  |
 | myisam_mmap_size           | 18446744073709551615 |
 | myisam_recover_options     | BACKUP               |
 | myisam_repair_threads      | 1                    |
 | myisam_sort_buffer_size    | 8388608              |
 | myisam_stats_method        | nulls_unequal        |
 | myisam_use_mmap            | OFF                  |
 +----------------------------+----------------------+
 8 rows in set (0.00 sec)

 The temp folder has about 16tb free space.

 When I start the indexing process, it copies the 640Gb into a temp file
 and then starts with repair with keycache.
 On the internet I found that if it says repair with keycache you shold
 increase the myisam_max_sort_file_size, but this didn't work.
 It still says repair with keycache after it copied the data to the temp
 dir.

 I hope you have any idea how to fix this.

 Thanks in advance.

 Christian


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: mysqldump not escaping single quotes in field data

2012-06-16 Thread Ananda Kumar
 mysqldump --databases test --tables ananda > test.dmp
mysql show create table ananda\G;
*** 1. row ***
   Table: ananda
Create Table: CREATE TABLE `ananda` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)


On Sat, Jun 16, 2012 at 3:36 AM, Rick James rja...@yahoo-inc.com wrote:

 Are you using an abnormal CHARACTER SET or COLLATION?
 SHOW CREATE TABLE
 Show us the args to mysqldump.

  -Original Message-
  From: James W. McNeely [mailto:j...@newcenturydata.com]
  Sent: Friday, June 15, 2012 10:19 AM
  To: mysql@lists.mysql.com
  Subject: mysqldump not escaping single quotes in field data
 
  My backups from a mysqldump process are useless, because the dump files
  are not escaping single quotes in the data in the fields.
 
  So, O'Brien kills it - instead of spitting out 'O\'Brien'
  it spits out
  'O'Brien'
 
  I don't see anywhere in the documentation about mysqldump where you can
  tweak this kind of thing.
 
  We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.
 
  I tried to enter this into the Oracle support center but all of the
  navigational and SR tabs are gone. Maybe our accounting dept. forgot to
  pay the bill or something.
 
  Thanks,
 
  Jim McNeely
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: console input

2012-06-15 Thread Ananda Kumar
Did you try using the IGNORE keyword with the LOAD DATA INFILE command?
This will skip duplicate rows instead of aborting, and proceed further.
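
A minimal sketch of that (file path, table and column names are made up):

  LOAD DATA INFILE '/tmp/records.csv'
  IGNORE INTO TABLE mytable
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (col1, col2, col3);
  -- duplicate-key rows are skipped; SHOW WARNINGS reports how many were ignored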

On Fri, Jun 15, 2012 at 11:05 AM, Keith Keller 
kkel...@wombat.san-francisco.ca.us wrote:

 On 2012-06-14, Gary Aitken my...@dreamchaser.org wrote:
 
  So...  I wanted to read the data line at a time and use a plain INSERT
 statement.  That way I could check for duplicate keys and discover where
 the duplicate records are.  However, I can't find a way to read input from
 the console or a file.  What am I missing?  I know I could write a java or
 C++ program to do this, but it seems like overkill for what should be a
 trivial task.

 I don't know of any stock mysql program that does such a thing.  You
 could write a Perl or Python program for this task; this program would
 probably be fairly short.

 --keith


 --
 kkel...@wombat.san-francisco.ca.us



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: mysqldump not escaping single quotes in field data

2012-06-15 Thread Ananda Kumar
I have MySQL 5.5.
I am able to use mysqldump to export data containing quotes, and the dump has
the escape character, as seen below:

LOCK TABLES `ananda` WRITE;
/*!40000 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
(1,'ananda'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(5,'O\'Brien');
/*!40000 ALTER TABLE `ananda` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;


Import it back to database

Database changed
mysql drop table ananda;
Query OK, 0 rows affected (0.00 sec)

mysql --database test < test.dmp

mysql select * from ananda;
+--+-+
| id   | name|
+--+-+
|1 | ananda  |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|5 | O'Brien |
+--+-+

Maybe you want to upgrade your database.

On Fri, Jun 15, 2012 at 10:48 PM, James W. McNeely
j...@newcenturydata.comwrote:

 My backups from a mysqldump process are useless, because the dump files
 are not escaping single quotes in the data in the fields.

 So, O'Brien kills it - instead of spitting out
 'O\'Brien'
 it spits out
 'O'Brien'

 I don't see anywhere in the documentation about mysqldump where you can
 tweak this kind of thing.

 We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.

 I tried to enter this into the Oracle support center but all of the
 navigational and SR tabs are gone. Maybe our accounting dept. forgot to pay
 the bill or something.

 Thanks,

 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: NoSQL help

2012-06-14 Thread Ananda Kumar
Can you share the SQL, explain plan, indexes etc.,
and the SHOW FULL PROCESSLIST output while the SQLs are running?

On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. 
manivanna...@spanservices.com wrote:

 I tried with myisam engine also. But it also taking more time to generate
 the report. In my database I am having 8 innodb tables and at the same time
 I am joining 4 tables to get the report.

 I am maintaining 60days records because the user will try to generate the
 report out of 60 days in terms of second, minute, hourly, weekly and
 Monthly report also.

 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 12:32 AM
 To: Rick James
 Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
 Subject: Re: NoSQL help

 Did you try with myisam tables.
 They are supposed to be good for reporting requirement
 On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.commailto:
 rja...@yahoo-inc.com wrote:
 I'll second Johan's comments.

 Count the disk hits!

 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE over
 any time range via
 SUM(sum_foo) / SUM(count_foo)

 Switching between MySQL and Mongo requires rewriting _all_ of the relevant
 code.

  <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count
 the disk hits!

 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of reports.
  The prototype showed that most reports would take about an hour to run.
  Not good.  The final product, with summary tables, lets the reports be run
 on-demand and online and each takes only a few seconds.  By careful use of
 MEMORY tables, LOAD DATA, etc, the ingestion takes 5 minutes (each hour)
 for the raw data and 2 minutes (total) for the 7 summary tables.
  PARTITIONing was vital for the design.  Once an hour a new partition is
 populated; once a day, 24 hourly partitions are rolled into a new daily
 partition and the 90-day old partition is DROPped.


  -Original Message-
  From: Johan De Meersman [mailto:vegiv...@tuxera.bemailto:
 vegiv...@tuxera.be]
  Sent: Wednesday, June 13, 2012 6:20 AM
  To: Manivannan S.
  Cc: mysql@lists.mysql.commailto:mysql@lists.mysql.com
  Subject: Re: NoSQL help
 
 
  - Original Message -
   From: Manivannan S. manivanna...@spanservices.commailto:
 manivanna...@spanservices.com
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am, I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often applied
  even when implementing NoSQL solutions, too) it is usually a good first
  step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever you
  need for your reports; and your job also saves the last record IDs it's
  processed; then on the next run, you only read the new records and
  update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
  ID | Val
  
   1 1
   2 7
   3 5
   413
 
  I want to report the average on a daily basis, and calculating that
  over those rows is unbearably slow because I'm running the process on a
  wristwatch from 1860 :-)
 
  So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
  gets a record saying this:
 
  Avg | elementCount | lastSeen
  -
  6.5  4  4
 
  Now, over the course of the day, the elements 4, 17 and 2 get added
  with sequential row numbers. Instead of calculating
  (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the already
  summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
  4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
  like this:
 
  Avg | elementCount | lastSeen
  -
7  7  7
 
  This is of course a stupid example, but it saves you a lot of time if
  you already have the summary of several thousand elements and only need
  to update it for a handful. Similar tricks are possible for a lot of
  typical reporting stuff - you don't need to re-calculate data for past
  months over and over again, for instance - and that's what makes your
  reports run fast.
 
 
  Just my 2 cents :-)
  /johan
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 
  --
  MySQL General Mailing List
  For list archives

Re: NoSQL help

2012-06-14 Thread Ananda Kumar
As seen below,
a full table scan is happening on table ibf.
Can you share the indexes on this table and also the complete SQL?

On Thu, Jun 14, 2012 at 4:17 PM, Manivannan S. 
manivanna...@spanservices.com wrote:

 +----+-------------+-------+------+-------------------------------------------------------------------------+------------------------+---------+------------------------------+--------+--------------------------+
 | id | select_type | table | type | possible_keys                                                           | key                    | key_len | ref                          | rows   | Extra                    |
 +----+-------------+-------+------+-------------------------------------------------------------------------+------------------------+---------+------------------------------+--------+--------------------------+
 |  1 | SIMPLE      | ibf   | ALL  | ibf_MsgId                                                               | \N                     | \N      | \N                           | 160944 |                          |
 |  1 | SIMPLE      | pl    | ref  | idx_unique_key_ib_msg,index_message_id,index_message_processing_status | idx_unique_key_ib_msg  | 180     | reports.ibf.Message_Id,const |      1 | Using where; Using index |
 |  1 | SIMPLE      | tl    | ref  | idx_unique_key_ib_text,index_message_id                                 | idx_unique_key_ib_text | 153     | reports.pl.Message_Id        |      1 |                          |
 |  1 | SIMPLE      | xl    | ref  | idx_unique_key_ib_xml,index_message_id                                  | idx_unique_key_ib_xml  | 153     | reports.pl.Message_Id        |      1 | Using where              |
 +----+-------------+-------+------+-------------------------------------------------------------------------+------------------------+---------+------------------------------+--------+--------------------------+


 Sorry for the previous mail….. this is my execution plan for 1.5 million
 records….


 *From:* Ananda Kumar [mailto:anan...@gmail.com]
 *Sent:* Thursday, June 14, 2012 3:33 PM

 *To:* Manivannan S.
 *Cc:* mysql@lists.mysql.com
 *Subject:* Re: NoSQL help


 can u share the sql, explain plan, indexes etc,

 show full processlist out put when the sql's are running

 On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. 
 manivanna...@spanservices.com wrote:

 I tried with myisam engine also. But it also taking more time to generate
 the report. In my database I am having 8 innodb tables and at the same time
 I am joining 4 tables to get the report.

 I am maintaining 60days records because the user will try to generate the
 report out of 60 days in terms of second, minute, hourly, weekly and
 Monthly report also.

 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 12:32 AM
 To: Rick James
 Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
 Subject: Re: NoSQL help


 Did you try with myisam tables.
 They are supposed to be good for reporting requirement

 On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.commailto:
 rja...@yahoo-inc.com wrote:
 I'll second Johan's comments.

 Count the disk hits!

 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE over
 any time range via
 SUM(sum_foo) / SUM(count_foo)

 Switching between MySQL and Mongo requires rewriting _all_ of the relevant
 code.

  <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count
 the disk hits!

 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of reports.
  The prototype showed that most reports would take about an hour to run.
  Not good.  The final product, with summary tables, lets the reports be run
 on-demand and online and each takes only a few seconds.  By careful use of
 MEMORY tables, LOAD DATA, etc, the ingestion takes 5 minutes (each hour)
 for the raw data and 2 minutes (total) for the 7 summary tables.
  PARTITIONing was vital for the design.  Once an hour a new partition is
 populated; once a day, 24 hourly partitions are rolled into a new daily
 partition and the 90-day old partition is DROPped.


  -Original Message-

  From: Johan De Meersman [mailto:vegiv...@tuxera.bemailto:
 vegiv...@tuxera.be]
  Sent: Wednesday, June 13, 2012 6:20 AM
  To: Manivannan S.

  Cc: mysql@lists.mysql.commailto:mysql@lists.mysql.com
  Subject: Re: NoSQL help
 
 
  - Original Message -

   From: Manivannan S. manivanna...@spanservices.commailto:
 manivanna...@spanservices.com
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am, I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often applied
  even when implementing NoSQL solutions, too) it is usually a good first
  step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever you
  need for your reports; and your job also saves the last record IDs it's
  processed; then on the next run, you only read the new records and
  update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
  ID | Val
  
   1 1
   2 7
   3 5
   413
 
  I want to report the average on a daily basis, and calculating that
  over those

Re: Aborted clients

2012-06-13 Thread Ananda Kumar
Is the iptables service running on the DB server? If yes, try stopping it and
check again.

On Wed, Jun 13, 2012 at 5:04 PM, Claudio Nanni claudio.na...@gmail.comwrote:

 2012/6/13 Johan De Meersman vegiv...@tuxera.be

 
  - Original Message -
   From: Claudio Nanni claudio.na...@gmail.com
  
   @Johan, you say I'm having trouble with clients aborting, but for
   some reason they don't get logged.
 
  Ah, it *did* start logging, now, and they come from multiple
 applications,
  too.
 
  120612 12:19:09 [Warning] Aborted connection 13019149 to db: 'music'
 user:
  'music' host: 'viaprod1' (Got an error reading communication packets)
  120612 13:13:52 [Warning] Aborted connection 13020111 to db: 'epg' user:
  'epg' host: 'viaprod1' (Got timeout reading communication packets)
  120612 14:21:10 [Warning] Aborted connection 13021624 to db: 'music'
 user:
  'music' host: 'viaprod1' (Got an error reading communication packets)
 
  Am I wrong in thinking this looks more like a hardware/network problem?
 
 
 
 Not at all.
 Just to close completely the code 'option', are you sure the codebase is
 completely different? since they still come from the same host.
 In this way so we can totally exclude code 'bad' habit.

 Then network can be a problem for sure, usually when there are firewalls in
 between,
 also when I had similar problems a network change took place, like changing
 switches or some configuration.

 Can you count the hops between MySQL and the app server?

 Dank Je Wel ;)

 Claudio






  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 



 --
 Claudio



Re: NoSQL help

2012-06-13 Thread Ananda Kumar
Did you try with MyISAM tables?
They are supposed to be good for reporting requirements.

On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:

 I'll second Johan's comments.

 Count the disk hits!

 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE over
 any time range via
 SUM(sum_foo) / SUM(count_foo)

 Switching between MySQL and Mongo requires rewriting _all_ of the relevant
 code.

  <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count
 the disk hits!

 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of reports.
  The prototype showed that most reports would take about an hour to run.
  Not good.  The final product, with summary tables, lets the reports be run
 on-demand and online and each takes only a few seconds.  By careful use of
 MEMORY tables, LOAD DATA, etc, the ingestion takes 5 minutes (each hour)
 for the raw data and 2 minutes (total) for the 7 summary tables.
  PARTITIONing was vital for the design.  Once an hour a new partition is
 populated; once a day, 24 hourly partitions are rolled into a new daily
 partition and the 90-day old partition is DROPped.


  -Original Message-
  From: Johan De Meersman [mailto:vegiv...@tuxera.be]
  Sent: Wednesday, June 13, 2012 6:20 AM
  To: Manivannan S.
  Cc: mysql@lists.mysql.com
  Subject: Re: NoSQL help
 
 
  - Original Message -
   From: Manivannan S. manivanna...@spanservices.com
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am, I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often applied
  even when implementing NoSQL solutions, too) it is usually a good first
  step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever you
  need for your reports; and your job also saves the last record IDs it's
  processed; then on the next run, you only read the new records and
  update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
  ID | Val
  
   1 1
   2 7
   3 5
   413
 
  I want to report the average on a daily basis, and calculating that
  over those rows is unbearably slow because I'm running the process on a
  wristwatch from 1860 :-)
 
  So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
  gets a record saying this:
 
  Avg | elementCount | lastSeen
  -
  6.5  4  4
 
  Now, over the course of the day, the elements 4, 17 and 2 get added
  with sequential row numbers. Instead of calculating
  (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the already
  summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
  4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
  like this:
 
  Avg | elementCount | lastSeen
  -
7  7  7
 
  This is of course a stupid example, but it saves you a lot of time if
  you already have the summary of several thousand elements and only need
  to update it for a handful. Similar tricks are possible for a lot of
  typical reporting stuff - you don't need to re-calculate data for past
  months over and over again, for instance - and that's what makes your
  reports run fast.
 
 
  Just my 2 cents :-)
  /johan
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql




Re: Aborted clients

2012-06-12 Thread Ananda Kumar
Is there anything you can see in /var/log/messages?

On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni claudio.na...@gmail.comwrote:

 Johan,

 Print out warnings such as Aborted connection... to the error log.
 the dots are not telling if they comprise Aborted clients as well.
 I find the MySQL error log extremely poor, as far as I know it is one of
 the MySQL features (like authentication) stuck to the dawn of MySQL times.
 Very hard to debug non basic things like your issue.
 From what I have experienced usually Aborted connection  means wrong
 credentials while Aborted clients means the client (typically PHP) did not
 close the connection properly.
 Do you have any chance to check if the code is closing the connections to
 the mysql database?

 Cheers

 Claudio

 2012/6/12 Johan De Meersman vegiv...@tuxera.be

  Yo,
 
  I'm having trouble with clients aborting, but for some reason they don't
  get logged.
 
  The documentation at http://preview.tinyurl.com/27w9a4x clearly states
  If a client successfully connects but later disconnects improperly or is
  terminated, the server increments the Aborted_clients status variable,
 and
  logs an Aborted connection message to the error log.
 
  The log_warnings variable has been set; originally to 1 and later to 2
  because another bit of the doc says  If the value is greater than 1,
  aborted connections are written to the error log.
 
  The error.log I'm looking at is the one that is currently opened by the
  MySQL daemon, as shown by lsof - and does have entries about
  non-replication-safe queries I'd been doing several days ago.
 
  And, yet, I see the Aborted_clients counter increase, but never find any
  entries in the errorlog - which is annoying, because now I don't even
 know
  which application is misbehaving.
 
  This is MySQL 5.1.50-community-log on Suse 11.1 64-bit.
 
  Does anyone have an idea why the aborted clients don't get logged, and
 how
  to fix it?
 
  thx,
  Johan
 
  --
 
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 



 --
 Claudio



Re: Aborted clients

2012-06-12 Thread Ananda Kumar
Or you can check the application logs to see why the client lost connectivity
on the application side.

On Tue, Jun 12, 2012 at 5:12 PM, Ananda Kumar anan...@gmail.com wrote:

 is there anything you can see in /var/log/messages


 On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni claudio.na...@gmail.comwrote:

 Johan,

 Print out warnings such as Aborted connection... to the error log.
 the dots are not telling if they comprise Aborted clients as well.
 I find the MySQL error log extremely poor, as far as I know it is one of
 the MySQL features (like authentication) stuck to the dawn of MySQL times.
 Very hard to debug non basic things like your issue.
 From what I have experienced usually Aborted connection  means wrong
 credentials while Aborted clients means the client (typically PHP) did not
 close the connection properly.
 Do you have any chance to check if the code is closing the connections to
 the mysql database?

 Cheers

 Claudio

 2012/6/12 Johan De Meersman vegiv...@tuxera.be

  Yo,
 
  I'm having trouble with clients aborting, but for some reason they don't
  get logged.
 
  The documentation at http://preview.tinyurl.com/27w9a4x clearly states
  If a client successfully connects but later disconnects improperly or
 is
  terminated, the server increments the Aborted_clients status variable,
 and
  logs an Aborted connection message to the error log.
 
  The log_warnings variable has been set; originally to 1 and later to 2
  because another bit of the doc says  If the value is greater than 1,
  aborted connections are written to the error log.
 
  The error.log I'm looking at is the one that is currently opened by the
  MySQL daemon, as shown by lsof - and does have entries about
  non-replication-safe queries I'd been doing several days ago.
 
  And, yet, I see the Aborted_clients counter increase, but never find any
  entries in the errorlog - which is annoying, because now I don't even
 know
  which application is misbehaving.
 
  This is MySQL 5.1.50-community-log on Suse 11.1 64-bit.
 
  Does anyone have an idea why the aborted clients don't get logged, and
 how
  to fix it?
 
  thx,
  Johan
 
  --
 
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 



 --
 Claudio





Re: i need advice on redundancy of mysql server.

2012-06-11 Thread Ananda Kumar
When you say redundancy:
do you just want replication like master-slave, which will be active-passive,
or
master-master, which will be active-active?

Master-slave works just as a DR setup: when your current master fails you can
fail over to the slave, with NO load balancing.

Master-master allows load balancing.
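
A rough sketch of wiring two servers into master-master (host names, user and
password are placeholders; each server also needs a unique server-id and
log-bin enabled in my.cnf before any of this works):

  -- on server A, avoid auto-increment collisions with server B
  SET GLOBAL auto_increment_increment = 2;
  SET GLOBAL auto_increment_offset    = 1;   -- server B would use offset 2

  -- still on server A, point it at server B
  CHANGE MASTER TO
      MASTER_HOST     = 'serverB',
      MASTER_USER     = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS  = 4;
  START SLAVE;

  -- repeat the same two steps on server B, pointing back at server A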

On Mon, Jun 11, 2012 at 7:56 PM, Joey L mjh2...@gmail.com wrote:

 I am running a site with about 50gig myisam databases which are the
 backend to different websites.
 I can not afford any downtime and the data is realtime.

 What is the best method for this setup? master-master or master-slave?

 What are the best utilities to create and maintain this setup?  as far
 as load balancing between the two physical servers that i am running.
 I am currently working with percona utilities - is there something better ?
 what would you use to load balance mysql ? what would you use to load
 balance apache.


 thanks

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Architectural Help

2012-05-24 Thread Ananda Kumar
is the central database server just ONE server, to which all your 50 data
center app connects

On Thu, May 24, 2012 at 2:47 PM, Anupam Karmarkar
sb_akarmar...@yahoo.comwrote:

 Hi All,


 I need architectural help for our requirement,


 We have nearly 50 data centre through out different cities from these data
 center application connect to central database server currently, there are
 conectivity and nework flcutions issues for different data center, so we
 comeup with solution each data center we should have local database server
 which will keep syncing with other server so that application doesnt fail ,
 User data can be updated in any of server and should reflect in every
 server.  Application consists of write/read/delete operations,


 Current writes each day central server 1million.


 Only 1/1000 need to be distrubuted acrross servce rest need to be in
 central server.


 How can we achive this ? solution needs very much real time data accepting
 nework lags.


 Solution

 Collect all changes in other 49 server into 1 central server(How can we
 collect data)


 49 keeps updating data into local database from central server(Using
 Repliation Can be done)



 --Anupam



Re: Need help for performance tuning with Mysql

2012-05-23 Thread Ananda Kumar
Hi,
However much tuning you do in my.cnf, it will not help much if you do not
tune your SQL.

Your first priority should be to tune the SQL, which will give you good
performance even with modest memory allocations and other settings.

regards
anandkl

On Wed, May 23, 2012 at 3:45 PM, Andrew Moore eroomy...@gmail.com wrote:

 Yu,

 The upgrade to 5.5 that Jonny advises should NOT your first action. If
 MySQL is mis-configured on 5.0 it will likely be misconfigured on 5.1 and
 5.5. Test your application thoroughly on the new version before heeding
 that advice. Read the change logs and known bugs. Running the upgrade might
 seem painless but if you have some legacy feature in place then things will
 not work how you may expect them to.

 Review your needs and see if a switch to innodb storage engine will give
 you any performance gain. The locking differences alone might make this
 worthwhile. TEST it.

 You did not state your data and index size. You will benefit from having
 enough RAM so that your 'working' data set fits to memory. This isn't
 possible/practical for large data but if you have a 5G dataset and 8G
 available memory you might not need to rush out and spend money.

 If you're heavily using MyISAM, review and tune the MyISAM related buffers.
 If you are working mostly with InnoDB tune those variables. Measure, change
 measure again. It might be an iterative process but you will learn lots
 along the way.

 Good luck.

 Andy

 On Wed, May 23, 2012 at 5:44 AM, Tsubasa Tanaka yoku0...@gmail.com
 wrote:

  Hello, Yu-san,
  (Apologies for my shaky English.)
 
  May I assume that you have already tuned the indexes on the tables?
  If not yet, please create appropriate indexes first.

  MyISAM caches only the index, not the data.
  The ways I use to reduce disk seeks:
   1) Create more indexes on the tables, if the tables are not updated very
  often.
Forcibly include the needed data columns in the index itself.
This makes inserts and updates slower, and it is a dirty idea, I think.
 (I end up including frequently SELECTed columns in the index.
 However, this cannot be used when the table is mostly queried with
 SELECT * FROM, and it is a rather inelegant trick.)
   2) Tune the filesystem and disk drive parameters for the datadir.
MyISAM table data is cached only in the filesystem cache.
But I regret that I don't have much knowledge about filesystems.

  Sorry I could not be of more help.
 
  regards,
 
 
  ts. tanaka//
 
  2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
   Hello Tsubasa.
  
   Thank you for the reply.
  
   Our high-load DBs are both InnoDB and MyISAM.
   Especially on MyISAM.

   I will consider tuning innodb_buffer_pool_size as well.

   Do you know any tips for how to tune disk access for MyISAM?
  
   Thanks,
   Yu
  
    Tsubasa Tanaka wrote:
  Hello,
  
  I seem your mysqld doesn't use enough memory.
  
  Date   Time  CPU%  RSS VSZ
  2012/5/22  21:00:39  109   294752  540028
  
  if your mysqld uses InnoDB oftenly,
  edit innodb_buffer_pool_size in you my.cnf.
  
  
 
 http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size
  
  
  table_cache
  thread_cache_size
  tmp_table_size
  max_heap_table_size
  
  but made not much difference.
  
  It is solution for only sql's large result,i think.
  if you doesn't recognize that problem causes large result,
  you should approach other way,too.
  
  regards,
  
  
  ts. tanaka//
  
  2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
   Also following is the free command result.
  
   total   used   free sharedbuffers
  cached
   Mem:   81623807843676 318704  0  95632
   5970892
   -/+ buffers/cache:17771526385228
   Swap:  8032492  235608008932
  
   Thanks,
   Yu
  
  
    Yu Watanabe wrote:
  Hello all.
  
  I would like to ask for advice with performance tuning with MySQL.
  
  Following are some data for my server.
  
  CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
  Memory : 8GB
  OS : RHEL 4.4 x86_64
  MySQL  : MySQL 5.0.50sp1-enterprise
  
  Attached file
  # my.cnf.txt  : my.cnf information
  # mysqlext_20120522131034.log : variable and status information from
  mysqladmin
  
  I have 2 database working with high load.
  
  I wanted to speed up my select and update queries not by
  optimizing the query itself but tuning the my.cnf.
  
  I have referred to following site,
  http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html
  
  and read Hiperformance Mysql vol.2 ,
  and increased the following values,
  
  table_cache
  thread_cache_size
  tmp_table_size
  max_heap_table_size
  
  but made not much difference.
  
  According to the ps and sar result
  
  *1 PS result
  Date   Time  CPU%  RSS VSZ
  2012/5/22  21:00:39  109   294752  540028
  
  *2 SAR
  Average CPU user 25%
  sys  5%
  io   3%
  
  I assume that MySQL can work more but currently not.
  
  I am considersing to off load 1 high load database to
  seperate process and make MySQL work in multiple process.
  

Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
Do you have one file per table, or just one system tablespace datafile?
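
You can check with:

  SHOW VARIABLES LIKE 'innodb_file_per_table';
  SHOW VARIABLES LIKE 'innodb_data_file_path';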

On Tue, May 22, 2012 at 2:20 PM, Kishore Vaishnav
kish...@railsfactory.orgwrote:

 Thanks for the reply, but in my case the datafile is growing 1 GB per day
 with only 1 DB (apart from mysql / information_schema / test) and the size
 of the DB is just 600MB, where records get updated / deleted / added and on
 an average it maintains 600MB only. Now the datafile is increased to 30GB
 from the past 30 days, do you have any idea how to reduce this ?

 Also just wondering what does the datafile contains actually and why can't
 it gets decreased ?

 *thanks  regards,
 __*
 Kishore Kumar Vaishnav
 *
 *
 On Tue, May 22, 2012 at 1:40 PM, Claudio Nanni claudio.na...@gmail.com
 wrote:

  Kishore,
  No, as already explained, it is not possible, Innodb datafiles *never*
  shrink.
 
  Cheers
 
  Claudio
  On May 22, 2012 10:05 AM, Kishore Vaishnav kish...@railsfactory.org
  wrote:
 
  Hi,
 
  I understand that if I set the  innodb_file_per_table then once the
 table
  is drop the datafile will also be lost. But is there a way where I
  truncate
  the table and the datafile shrinks itself ?
 
  *thanks  regards,
  __*
  Kishore Kumar Vaishnav
  *
 
  *
  On Mon, May 21, 2012 at 6:43 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
 
   - Original Message -
From: Manivannan S. manivanna...@spanservices.com
   
How to reduce the ibdata1 file size in both LINUX and WINDOWS
machine.
  
   This is by design - you cannot reduce it, nor can you remove added
   datafiles.
  
   If you want to shrink the ibdata files, you must stop all connections
 to
   the server, take a full backup, stop the server, remove the datafiles
  (and
   maybe change the config), restart the server (will take time to
 recreate
   emtpy datafiles) and then import the backup.
  
   For new tables, you can turn on the option innodb_file_per_table -
 then
   every (new) table gets it's own datafile; and when you drop the table,
  that
   datafile also gets deleted.
  
  
   --
   Bier met grenadyn
   Is als mosterd by den wyn
   Sy die't drinkt, is eene kwezel
   Hy die't drinkt, is ras een ezel
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
 
 



Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
Yes,
Looks like there is a lot of fragmentation.
Is this production or dev/qa?

You need to take a dump,
drop the database, change the parameter file to use one file per table, and
restore the dump, so that each table gets its own .ibd file and you
can easily manage the space.
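
A rough sketch of those steps (the database name mydb and the file name are
placeholders, and it assumes you can afford the downtime of a full dump and
reload):

  mysqldump --routines --databases mydb > mydb.dmp
  -- add innodb_file_per_table = 1 under [mysqld] in my.cnf and restart mysqld
  mysql> DROP DATABASE mydb;
  mysql> SOURCE mydb.dmp;
  -- note: the already-grown ibdata1 itself only shrinks if the server is
  -- rebuilt from a dump of all databases with the old ibdata files removed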

regards
anandkl

On Tue, May 22, 2012 at 2:58 PM, Kishore Vaishnav
kish...@railsfactory.orgwrote:

 Right now one tablespace datafile. But does it matters if i have one file
 per table.

 *thanks  regards,
 __*
 Kishore Kumar Vaishnav
 *
 *
  On Tue, May 22, 2012 at 2:56 PM, Ananda Kumar anan...@gmail.com wrote:

 do u have one file per table or just one system tablespace datafile.

 On Tue, May 22, 2012 at 2:20 PM, Kishore Vaishnav 
 kish...@railsfactory.org wrote:

 Thanks for the reply, but in my case the datafile is growing 1 GB per day
 with only 1 DB (apart from mysql / information_schema / test) and the
 size
 of the DB is just 600MB, where records get updated / deleted / added and
 on
 an average it maintains 600MB only. Now the datafile is increased to 30GB
 from the past 30 days, do you have any idea how to reduce this ?

 Also just wondering what does the datafile contains actually and why
 can't
 it gets decreased ?

 *thanks  regards,
 __*
 Kishore Kumar Vaishnav
 *
 *
 On Tue, May 22, 2012 at 1:40 PM, Claudio Nanni claudio.na...@gmail.com
 wrote:

  Kishore,
  No, as already explained, it is not possible, Innodb datafiles *never*
  shrink.
 
  Cheers
 
  Claudio
  On May 22, 2012 10:05 AM, Kishore Vaishnav kish...@railsfactory.org
 
  wrote:
 
  Hi,
 
  I understand that if I set the  innodb_file_per_table then once the
 table
  is drop the datafile will also be lost. But is there a way where I
  truncate
  the table and the datafile shrinks itself ?
 
  *thanks  regards,
  __*
  Kishore Kumar Vaishnav
  *
 
  *
  On Mon, May 21, 2012 at 6:43 PM, Johan De Meersman 
 vegiv...@tuxera.be
  wrote:
 
   - Original Message -
From: Manivannan S. manivanna...@spanservices.com
   
How to reduce the ibdata1 file size in both LINUX and WINDOWS
machine.
  
   This is by design - you cannot reduce it, nor can you remove added
   datafiles.
  
   If you want to shrink the ibdata files, you must stop all
 connections to
   the server, take a full backup, stop the server, remove the
 datafiles
  (and
   maybe change the config), restart the server (will take time to
 recreate
   emtpy datafiles) and then import the backup.
  
   For new tables, you can turn on the option innodb_file_per_table -
 then
   every (new) table gets it's own datafile; and when you drop the
 table,
  that
   datafile also gets deleted.
  
  
   --
   Bier met grenadyn
   Is als mosterd by den wyn
   Sy die't drinkt, is eene kwezel
   Hy die't drinkt, is ras een ezel
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
 
 






Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
A single shared file will not release the space to the OS; it will continue to hold
the space freed by DROP or TRUNCATE and reuse it only for the database.

On Tue, May 22, 2012 at 3:20 PM, Kishore Vaishnav
kish...@railsfactory.orgwrote:

 Hi Reindl Harald,

 Does this means that if we have a single tablespace with file per table and
 doing the optimization will reduce the size of the datafile size ? If yes,
 then why this not possible on the datafile (one single file) too ?
 *
 *
 *thanks  regards,*
 *__*
 Kishore Kumar Vaishnav
 *
 *
 On Tue, May 22, 2012 at 3:07 PM, Reindl Harald h.rei...@thelounge.net
 wrote:

  as multiple answered, yes it matters!
 
  there is no way to reduce the size of a single tablespace
  with file per table you can shrink the files with
  optimize table tblname which is in fact a ALTER TABLE
  without real changes
 
  Am 22.05.2012 11:28, schrieb Kishore Vaishnav:
   Right now one tablespace datafile. But does it matters if i have one
 file
   per table.
  
   On Tue, May 22, 2012 at 2:56 PM, Ananda Kumar anan...@gmail.com
 wrote:
  
   do u have one file per table or just one system tablespace datafile.
  
   On Tue, May 22, 2012 at 2:20 PM, Kishore Vaishnav 
   kish...@railsfactory.org wrote:
  
   Thanks for the reply, but in my case the datafile is growing 1 GB per
  day
   with only 1 DB (apart from mysql / information_schema / test) and the
  size
   of the DB is just 600MB, where records get updated / deleted / added
  and
   on
   an average it maintains 600MB only. Now the datafile is increased to
  30GB
   from the past 30 days, do you have any idea how to reduce this ?
  
   Also just wondering what does the datafile contains actually and why
  can't
   it gets decreased ?
  
   *thanks  regards,
   __*
   Kishore Kumar Vaishnav
   *
   *
   On Tue, May 22, 2012 at 1:40 PM, Claudio Nanni 
  claudio.na...@gmail.com
   wrote:
  
   Kishore,
   No, as already explained, it is not possible, Innodb datafiles
 *never*
   shrink.
  
   Cheers
  
   Claudio
   On May 22, 2012 10:05 AM, Kishore Vaishnav 
  kish...@railsfactory.org
   wrote:
  
   Hi,
  
   I understand that if I set the  innodb_file_per_table then once the
   table
   is drop the datafile will also be lost. But is there a way where I
   truncate
   the table and the datafile shrinks itself ?
  
   *thanks  regards,
   __*
   Kishore Kumar Vaishnav
   *
  
   *
   On Mon, May 21, 2012 at 6:43 PM, Johan De Meersman 
  vegiv...@tuxera.be
   wrote:
  
   - Original Message -
   From: Manivannan S. manivanna...@spanservices.com
  
   How to reduce the ibdata1 file size in both LINUX and WINDOWS
   machine.
  
   This is by design - you cannot reduce it, nor can you remove added
   datafiles.
  
   If you want to shrink the ibdata files, you must stop all
   connections to
   the server, take a full backup, stop the server, remove the
  datafiles
   (and
   maybe change the config), restart the server (will take time to
   recreate
   emtpy datafiles) and then import the backup.
  
   For new tables, you can turn on the option innodb_file_per_table -
   then
   every (new) table gets it's own datafile; and when you drop the
   table,
   that
   datafile also gets deleted.
  
  
   --
   Bier met grenadyn
   Is als mosterd by den wyn
   Sy die't drinkt, is eene kwezel
   Hy die't drinkt, is ras een ezel
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
  
  
  
  
  
  
 
  --
 
  Reindl Harald
  the lounge interactive design GmbH
  A-1060 Vienna, Hofmühlgasse 17
  CTO / CISO / Software-Development
  p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
  icq: 154546673, http://www.thelounge.net/
 
  http://www.thelounge.net/signature.asc.what.htm
 
 



Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
Yes, there are some new features you can use to improve performance.
If you are using MySQL 5.5 and above, with file-per-table, you can enable the
BARRACUDA file format, which in turn provides data compression
and the dynamic row format, which will reduce I/O.

For more benefits, read the documentation.
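
A minimal sketch, using 5.5-era syntax (the table definition is made up):

  SET GLOBAL innodb_file_format    = 'Barracuda';
  SET GLOBAL innodb_file_per_table = 1;

  CREATE TABLE archive_data (
      id   BIGINT PRIMARY KEY,
      body TEXT
  ) ENGINE=InnoDB
    ROW_FORMAT=COMPRESSED
    KEY_BLOCK_SIZE=8;   -- trades some CPU for less I/O, as discussed in the follow-ups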

On Tue, May 22, 2012 at 4:45 PM, Johan De Meersman vegiv...@tuxera.bewrote:


 - Original Message -
  From: Pothanaboyina Trimurthy skd.trimur...@gmail.com
 
  hi sir,

 Please keep the list in CC, others may benefit from your questions, too.

   can we see any  performance related improvements if we use
  innodb_file_per_table other than using a single ibdatafile for all
  innodb databases. Please let me know the difference sir.

 No, that shouldn't yield any particular performance benefit, as far as I'm
 aware.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
Yes, Barracuda is limited to file-per-table.

Yes, true, there is a CPU cost, but it is quite small.
To gain some, you have to lose some.

On Tue, May 22, 2012 at 5:07 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 --

 *From: *Ananda Kumar anan...@gmail.com


 yes, there some new features you can use to improve performance.
 If you are using mysql 5.5 and above, with files per table, you can enable
 BARACUDA file format, which in turn provides data compression
 and dynamic row format, which will reduce IO.


 True, but Barracuda isn't limited to file-per-table, is it?

 Also, while it's true that compression will lighten your I/O load, do not
 forget that it does so at the cost of additional CPU load. There's no such
 thing as a free lunch :-)



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
Is your system READ intensive or WRITE intensive?
If you have enabled compression for WRITE-intensive data, then the CPU cost will
be higher.

On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman vegiv...@tuxera.bewrote:



 - Original Message -
  From: Reindl Harald h.rei...@thelounge.net
 
  interesting because i have here a dbmail-server with no CPU load and
  innodb with compression enabled since 2009 (innodb plugin in the past)

 Ah, this is a mixed-use server that also receives data from several Cacti
 installs.

  [--] Data in InnoDB tables: 6G (Tables: 49)
 [--] Data in InnoDB tables: 17G (Tables: 276)

  [--] Up for: 5d 0h 44m 10s (455M q [1K qps], 50K conn, TX: 36B, RX: 13B)
 [--] Up for: 11d 23h 27m 20s (200M q [193.511 qps], 8M conn, TX: 132B, RX:
 35B)

  [--] Reads / Writes: 90% / 10%
 [--] Reads / Writes: 18% / 82%

 I guess it's reasonable that I get a lot more CPU overhead from
 compression, as you get a lot of reads from decompressed blocks in the
 cache :-)



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Reducing ibdata1 file size

2012-05-22 Thread Ananda Kumar
Or it could be that your buffer pool size is too small, so MySQL is spending a lot
of CPU time compressing and uncompressing pages.

On Tue, May 22, 2012 at 5:45 PM, Ananda Kumar anan...@gmail.com wrote:

 Is you system READ intensive or WRITE intensive.
 If you have enable compression for WRITE intensive data, then CPU cost
 will be more.


 On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman vegiv...@tuxera.bewrote:



 - Original Message -
  From: Reindl Harald h.rei...@thelounge.net
 
  interesting because i have here a dbmail-server with no CPU load and
  innodb with compression enabled since 2009 (innodb plugin in the past)

 Ah, this is a mixed-use server that also receives data from several Cacti
 installs.

  [--] Data in InnoDB tables: 6G (Tables: 49)
 [--] Data in InnoDB tables: 17G (Tables: 276)

  [--] Up for: 5d 0h 44m 10s (455M q [1K qps], 50K conn, TX: 36B, RX: 13B)
 [--] Up for: 11d 23h 27m 20s (200M q [193.511 qps], 8M conn, TX: 132B,
 RX: 35B)

  [--] Reads / Writes: 90% / 10%
 [--] Reads / Writes: 18% / 82%

 I guess it's reasonable that I get a lot more CPU overhead from
 compression, as you get a lot of reads from decompressed blocks in the
 cache :-)



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





Re: Foreign key and uppercase / lowercase values

2012-05-16 Thread Ananda Kumar
Why are you not using any WHERE condition in the UPDATE statement?

On Wed, May 16, 2012 at 1:24 PM, GF gan...@gmail.com wrote:

 Good morning,
 I have an application where the user ids were stored lowercase.
 Some batch import, in the user table some users stored  a uppercase
 id, and for some applicative logic, in other tables that have a
 foreign key to the user table, their user ids are stored lowercase.
 MySQL didn't throw any error probalby because the collation used is
 case insensitive.
 My problem is that the application is Java and java strings are case
 sensitive, so now I want to set user ids to lowercase EVERYWHERE.

 I supposed that I could execute with ease these commands:
 - update mytable1 set USER_ID = LOWER(USER_ID);
 - update mytable2 set USER_ID = LOWER(USER_ID);
 - update mytable3 set USER_ID = LOWER(USER_ID);

 But for some tables I got some Foreign key constraint to throw an
 error. (butwhy they didn't throw an error on the insert but just on
 the update???)
 And if I try to disable foreign key checks during these updates, I get
 some duplicate key errors where USER_ID is a part of composite key
 with other columns. (but I don't have any data that might cause a real
 duplicate key error just changing the case of one column)

 Have you any idea how to solve this situation without
 stopping/recreating the DB? (it's a production environment)
 Thanks

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Mysql is toying me... why sometimes an insert or update can be slow!? I getting bald cuz this

2012-05-14 Thread Ananda Kumar
Is accountid a numeric or a varchar column?

On Sat, May 12, 2012 at 7:38 PM, Andrés Tello mr.crip...@gmail.com wrote:

 While doning a batch process...

 show full processlist show:

 | 544 | prod | 90.0.0.51:51262 | tmz2012 | Query   |6 |
 end  | update `account` set `balance`= 0.00 +
 '-4000' where accountid='2583092'

 No other process, lo locking no nothing...

 so you take this same query... run it isolated, and the mufu... is
 just...f fast!


 update `account` set `balance`= 0.00 + '-4000' where accountid='2583092'
 Query OK, 0 rows affected (0.00 sec)
 Rows matched: 1  Changed: 0  Warnings: 0


 ARRRG!  I have seen this type of query take as long as 100+ seconds.. and I
 don't have a F*** clue...



Re: Mysql is toying me... why sometimes an insert or update can be slow!? I getting bald cuz this

2012-05-14 Thread Ananda Kumar
If it is numeric, then why are you using quotes? With quotes, MySQL will ignore the
index and do a full table scan.

On Mon, May 14, 2012 at 7:31 PM, Andrés Tello mr.crip...@gmail.com wrote:



 Yes, I'm using indexes, accountid is the primary key, and is numeric and
 autoincrement.  The process doing the deadlock is no longer done...

 The structure of the inserted database has changed.
 Originaly it was a single table with 219millions rows, now I partitioned
 the hable in... 60 tables, 1 for each month for 5 years..

 I gain a lot of speed with the partitioning and sql adjustments, but I
 think I lost a lot of speed at the insert...

 The database stats are like 95% reading 5% writting...but that 5% is
 mostly done in a batch process who happends to be done inside a time
 window...

 I know I need to do more profiling, but... at least for now dev team is
 updating the batch process from long secuencial process with huge slow
 inserts, to small parallel task with burst of inserts...




 On Mon, May 14, 2012 at 8:18 AM, Ananda Kumar anan...@gmail.com wrote:

 is accountid a number or varchar column


 On Sat, May 12, 2012 at 7:38 PM, Andrés Tello mr.crip...@gmail.comwrote:

 While doning a batch process...

 show full processlist show:

 | 544 | prod | 90.0.0.51:51262 | tmz2012 | Query   |6 |
 end  | update `account` set `balance`= 0.00 +
 '-4000' where accountid='2583092'

 No other process, lo locking no nothing...

 so you take this same query... run it isolated, and the mufu... is
 just...f fast!


 update `account` set `balance`= 0.00 + '-4000' where accountid='2583092'
 Query OK, 0 rows affected (0.00 sec)
 Rows matched: 1  Changed: 0  Warnings: 0


 ARRRG!  I have seen this type of query take as long as 100+ seconds..
 and I
 don't have a F*** clue...






Re: Mysql is toying me... why sometimes an insert or update can be slow!? I getting bald cuz this

2012-05-14 Thread Ananda Kumar
I used to have these issues in mysql version 5.0.41.



On Mon, May 14, 2012 at 8:13 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 - Original Message -
  From: Ananda Kumar anan...@gmail.com
 
  If numeric, then why are u using quotes. With quotes, mysql will
  ignore the index and do a full table scan

 Will it? Common sense dictates that it would convert to the column's
 native type before comparing; and a quick explain seems to confirm this.

 That being said, it *is* better to use the actual column type from the
 start, simply to avoid the cost of implicit conversions.


 mysql desc user;

 +-+--+--+-+---+-+
 | Field   | Type | Null | Key | Default   | Extra
   |

 +-+--+--+-+---+-+
 | id  | int(11)  | NO   | PRI | NULL  |
 auto_increment  |
 [...]
 16 rows in set (0.04 sec)

 mysql explain select * from user where id = 1;

 ++-+---+---+---+-+-+---+--+---+
 | id | select_type | table | type  | possible_keys | key |
 key_len | ref   | rows | Extra |

 ++-+---+---+---+-+-+---+--+---+
 |  1 | SIMPLE  |  user | const | PRIMARY   | PRIMARY |
 4   | const |1 |   |

 ++-+---+---+---+-+-+---+--+---+
 1 row in set (0.07 sec)

 mysql explain select * from user where id = '1';

 ++-+---+---+---+-+-+---+--+---+
 | id | select_type | table | type  | possible_keys | key |
 key_len | ref   | rows | Extra |

 ++-+---+---+---+-+-+---+--+---+
 |  1 | SIMPLE  |  user | const | PRIMARY   | PRIMARY |
 4   | const |1 |   |

 ++-+---+---+---+-+-+---+--+---+
 1 row in set (0.00 sec)


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: Why is creating indexes faster after inserting massive data rows?

2012-05-07 Thread Ananda Kumar
Which version of MySQL are you using?

Is this a secondary index?



On Mon, May 7, 2012 at 12:07 PM, Zhangzhigang zzgang_2...@yahoo.com.cnwrote:

 hi all:

 I have a question:

 Creating indexes after inserting massive data rows is faster than before
 inserting data rows.
 Please tell me why.



Re: update query

2012-04-30 Thread Ananda Kumar
Do you just want to replace the current value in the client column with 'NEW'?
You can write a stored proc with a cursor, loop through the cursor, and
update each table.
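
A hedged sketch of that idea, driven off information_schema (the schema name
mydb, the procedure name and the literal 'NEW' value are placeholders):

  DELIMITER //
  CREATE PROCEDURE update_client_everywhere()
  BEGIN
    DECLARE done  INT DEFAULT 0;
    DECLARE tname VARCHAR(64);
    DECLARE cur CURSOR FOR
      SELECT table_name FROM information_schema.columns
      WHERE table_schema = 'mydb' AND column_name = 'client';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    read_loop: LOOP
      FETCH cur INTO tname;
      IF done = 1 THEN LEAVE read_loop; END IF;
      -- build and run one UPDATE per table that has the client column
      SET @sql = CONCAT('UPDATE `mydb`.`', tname, '` SET client = ''NEW''');
      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
  END //
  DELIMITER ;

  CALL update_client_everywhere();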

regards
anandkl

On Mon, Apr 30, 2012 at 2:47 PM, Pothanaboyina Trimurthy 
skd.trimur...@gmail.com wrote:

 Hi all,
  i have one database with 120 tables and each table contains one
 common column that is client now i want to update all the tables
 column client = NEW. is it possible to write a single query to
 update this one.

 please help me.

 thanks in advance

 Thanks  Kind Regards,
 Trimurthy.p

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Deleting Records in Big tables

2011-11-04 Thread Ananda Kumar
Why don't you create a new table holding only the rows where id  2474 (i.e. the
rows you want to keep), and rename the original table to _old and the new table
to the actual table name (a sketch follows below)?

Or
you can write a stored proc to loop through the rows and delete them, which will
be faster.

Doing just a single plain DELETE statement over such huge data will take
ages.
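
A sketch of the first (new-table-and-rename) approach; adjust the id condition
to whichever rows must be kept:

  CREATE TABLE metadata_new LIKE metadata;
  INSERT INTO metadata_new SELECT * FROM metadata WHERE id <= 2474;
  RENAME TABLE metadata TO metadata_old, metadata_new TO metadata;
  -- verify the new table, then: DROP TABLE metadata_old;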

regards
anandkl

On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.comwrote:


 Dear all,

 Today I need to delete some records in  70 GB tables.
 I have 4 tables in mysql database.

 my delete command is :-

 delete from metadata where id2474;

 but it takes hours to complete.

 One of my table structure is as :-

 CREATE TABLE `metadata` (
  `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `id` bigint(20) DEFAULT NULL,
  `url` varchar(800) DEFAULT NULL,
  `meta_field` varchar(200) DEFAULT NULL,
  `meta_value` varchar(2000) DEFAULT NULL,
  `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`meta_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;


 Please let me know any quickest way to do this.
 I tried to create indexes in these tables on id, but this too takes time.



 Thanks

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?**unsub=anan...@gmail.comhttp://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Deleting Records in Big tables

2011-11-04 Thread Ananda Kumar
-- A purge procedure of this shape (table_name is a placeholder, and the
-- comparison in the cursor must be adjusted to the rows you want deleted):
DELIMITER //
CREATE PROCEDURE qrtz_purge()
BEGIN
   DECLARE l_id BIGINT(20);
   DECLARE NO_DATA INT DEFAULT 0;
   DECLARE LST_CUR CURSOR FOR SELECT id FROM table_name WHERE id > 123;
   DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
   OPEN LST_CUR;
   SET NO_DATA = 0;
   FETCH LST_CUR INTO l_id;
   WHILE NO_DATA = 0 DO
      DELETE FROM table_name WHERE id = l_id;
      COMMIT;
      SET NO_DATA = 0;
      FETCH LST_CUR INTO l_id;
   END WHILE;
   CLOSE LST_CUR;
END //
DELIMITER ;

On Fri, Nov 4, 2011 at 2:40 PM, Reindl Harald h.rei...@thelounge.netwrote:



 Am 04.11.2011 08:22, schrieb Adarsh Sharma:
  delete from metadata where id2474;
  but it takes hours to complete.
 
  CREATE TABLE `metadata` (
   `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
   `id` bigint(20) DEFAULT NULL,
   `url` varchar(800) DEFAULT NULL,
   `meta_field` varchar(200) DEFAULT NULL,
   `meta_value` varchar(2000) DEFAULT NULL,
   `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY (`meta_id`)
  ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;
 
  Please let me know any quickest way to do this.
  I tried to create indexes in these tables on id, but this too takes time

 well i guess you have to sit out add the key
 wrong table design having an id-column without a key or
 something weird in the application not using the primary
 key for such operations





Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
Hi,
Why don't you use a stored procedure to update the rows, committing every 1k
or 10k rows?
This will be much faster than your single long-running update statement.

regards
anandkl

On Thu, Sep 22, 2011 at 8:24 PM, Hank hes...@gmail.com wrote:

 That is what I'm doing. I'm doing a correlated update on 200 million
 records. One UPDATE statement.

 Also, I'm not asking for a tutorial when not to use LOCK TABLES.  I'm
 trying
 to figure out why, despite what the documentation says, using LOCK TABLES
 hinders performance for large update statements on MYISAM tables when it is
 supposed to increase performance on exactly the type of queries I am
 performing.

 If you can't help answer *that* question, please stop lecturing me on the
 reasons not to use LOCK TABLES. Thanks.

 -Hank


 On Thu, Sep 22, 2011 at 10:19 AM, Antony T Curtis
 antonycur...@verizon.netwrote:

  Even for MyISAM tables, LOCK TABLES is not usually the best solution for
  increasing performance. When there is little to no contention, LOCK
 TABLES
  doesn't offer much value.
 
  MyISAM works best when you can get more work done in a statement: Instead
  of executing a bunch of insert statements, combine them into a single
  multi-row insert statement, as an example.
 
 
  On 22 Sep 2011, at 06:13, Hank wrote:
 
  Thanks for your reply.  I failed to mention that these are MYISAM tables,
  so no transactions.  And like I said, this is not a production box nor is
  there any application running, so there's no contention for the tables
 being
  locked.  I'm trying to update a database design on two tables with 200
  million records each, so anything I can do to increase the performance of
  these long running queries will shorten the migration running time.
 
  What I was referring to was that in the documentation,  that when using
  LOCK TABLES, mysql does not update the key cache until the lock is
 released,
  versus when not using LOCK TABLES it does update the key cache on each
  insert/update/delete.
 
  see: http://tuxradar.com/practicalphp/18/2/22
 
  In my testing, I'm seeing a slow down when I use LOCK TABLES versus
 running
  the same queries without it.  I'm just trying to find a reason why that
  might be the case.
 
  -Hank
 
 
  On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis 
  antonycur...@verizon.net wrote:
 
  LOCK TABLES...WRITE is very likely to reduce performance if you are
 using
  a transactional storage engine, such as InnoDB/XtraDB or PBXT. The
 reason is
  that only one connection is holding the write lock and no other
 concurrent
  operation may occur on the table.
 
  LOCK TABLES is only really useful for non-transactional tables and maybe
 a
  few specialized operations where it has its advantages but for 99.9% of
  cases, it should not be used.
 
  What does increase performance is the proper use of transactions with
  appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE MODE.
 
  Regards,
 
  Antony.
 
 
 
  On 21 Sep 2011, at 20:34, Hank wrote:
 
   According to everything I've read, using LOCK TABLES...WRITE for
 updates,
  inserts and deletes should improve performance of mysql server, but I
  think
  I've been seeing the opposite effect.
 
  I've been doing quite a bit of testing on a 64bit install of CentOS 5.5
  installed as a guest OS on a VMWare ESXi 4.0 hypervisor on a Dell R610.
  There are no other VMs on this box, and there are no other users or
  threads
  running on the OS. Just me.  I'm using this box strictly for testing of
  large database migration scripts.
 
  It seems like when I execute some of these long running statements
  without
  locking the tables, the code runs quite a bit faster than when I do
 lock
  the
  tables.  And before testing each run, I do restart the server so there
 is
  no
  query caching and I also use FLUSH TABLES between each test run.
 
  All I'm asking is this:  Can anything think of a scenario on a single
  user-box and mysql instance, that locking tables would cause these DML
  statements to slow down compared to not locking the tables?
 
  Thanks,
 
  -Hank
 
 
 
 
 



Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
No,
Use a cursor (selecting the column to be used in the WHERE condition of the
update statement), and loop through it, doing one update per fetched row.

regards
anandkl
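
A rough sketch of the kind of cursor-driven update being suggested here. The
table and column names below are placeholders, not the poster's actual schema,
and on MyISAM the COMMIT is a no-op; it only matters for transactional engines:

DELIMITER //
CREATE PROCEDURE batch_update()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_key BIGINT;
  DECLARE v_seq BIGINT;
  -- source drives the update of dest, one row at a time
  DECLARE cur CURSOR FOR SELECT key_col, seq FROM source;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  upd_loop: LOOP
    FETCH cur INTO v_key, v_seq;
    IF done = 1 THEN LEAVE upd_loop; END IF;
    UPDATE dest SET seq = v_seq WHERE key_col = v_key;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;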

On Thu, Sep 22, 2011 at 11:36 PM, Hank hes...@gmail.com wrote:


 Actually, that would be orders of magnitude slower.

   I'm using MYISAM tables, so there's no commit.




 On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar anan...@gmail.com wrote:

 Hi,
 Why dont u use a stored proc to update rows ,where u commit for every 1k
 or 10k rows.
 This will be much faster than ur individual update stmt.

 regards
 anandkl

 On Thu, Sep 22, 2011 at 8:24 PM, Hank hes...@gmail.com wrote:

 That is what I'm doing. I'm doing a correlated update on 200 million
 records. One UPDATE statement.

 Also, I'm not asking for a tutorial when not to use LOCK TABLES.  I'm
 trying
 to figure out why, despite what the documentation says, using LOCK TABLES
 hinders performance for large update statements on MYISAM tables when it
 is
 supposed to increase performance on exactly the type of queries I am
 performing.

 If you can't help answer *that* question, please stop lecturing me on the
 reasons not to use LOCK TABLES. Thanks.

 -Hank


 On Thu, Sep 22, 2011 at 10:19 AM, Antony T Curtis
 antonycur...@verizon.netwrote:

  Even for MyISAM tables, LOCK TABLES is not usually the best solution
 for
  increasing performance. When there is little to no contention, LOCK
 TABLES
  doesn't offer much value.
 
  MyISAM works best when you can get more work done in a statement:
 Instead
  of executing a bunch of insert statements, combine them into a single
  multi-row insert statement, as an example.
 
 
  On 22 Sep 2011, at 06:13, Hank wrote:
 
  Thanks for your reply.  I failed to mention that these are MYISAM
 tables,
  so no transactions.  And like I said, this is not a production box nor
 is
  there any application running, so there's no contention for the tables
 being
  locked.  I'm trying to update a database design on two tables with 200
  million records each, so anything I can do to increase the performance
 of
  these long running queries will shorten the migration running time.
 
  What I was referring to was that in the documentation,  that when using
  LOCK TABLES, mysql does not update the key cache until the lock is
 released,
  versus when not using LOCK TABLES it does update the key cache on each
  insert/update/delete.
 
  see: http://tuxradar.com/practicalphp/18/2/22
 
  In my testing, I'm seeing a slow down when I use LOCK TABLES versus
 running
  the same queries without it.  I'm just trying to find a reason why that
  might be the case.
 
  -Hank
 
 
  On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis 
  antonycur...@verizon.net wrote:
 
  LOCK TABLES...WRITE is very likely to reduce performance if you are
 using
  a transactional storage engine, such as InnoDB/XtraDB or PBXT. The
 reason is
  that only one connection is holding the write lock and no other
 concurrent
  operation may occur on the table.
 
  LOCK TABLES is only really useful for non-transactional tables and
 maybe a
  few specialized operations where it has its advantages but for 99.9%
 of
  cases, it should not be used.
 
  What does increase performance is the proper use of transactions with
  appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE
 MODE.
 
  Regards,
 
  Antony.
 
 
 
  On 21 Sep 2011, at 20:34, Hank wrote:
 
   According to everything I've read, using LOCK TABLES...WRITE for
 updates,
  inserts and deletes should improve performance of mysql server, but I
  think
  I've been seeing the opposite effect.
 
  I've been doing quite a bit of testing on a 64bit install of CentOS
 5.5
  installed as a guest OS on a VMWare ESXi 4.0 hypervisor on a Dell
 R610.
  There are no other VMs on this box, and there are no other users or
  threads
  running on the OS. Just me.  I'm using this box strictly for testing
 of
  large database migration scripts.
 
  It seems like when I execute some of these long running statements
  without
  locking the tables, the code runs quite a bit faster than when I do
 lock
  the
  tables.  And before testing each run, I do restart the server so
 there is
  no
  query caching and I also use FLUSH TABLES between each test run.
 
  All I'm asking is this:  Can anything think of a scenario on a single
  user-box and mysql instance, that locking tables would cause these
 DML
  statements to slow down compared to not locking the tables?
 
  Thanks,
 
  -Hank
 
 
 
 
 






Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
Maybe if you can show the audience a snippet of your SQL, someone can help you.

On Thu, Sep 22, 2011 at 11:43 PM, Hank hes...@gmail.com wrote:


 Sorry, but you do not understand my original issue or question.

 -Hank



 On Thu, Sep 22, 2011 at 2:10 PM, Ananda Kumar anan...@gmail.com wrote:

 No,
 Use a cursor(select column_name to be used in where condition of update
 stmt), loop through it for each update.

 regards
 anandkl


 On Thu, Sep 22, 2011 at 11:36 PM, Hank hes...@gmail.com wrote:


 Actually, that would be orders of magnitude slower.

   I'm using MYISAM tables, so there's no commit.




 On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar anan...@gmail.com wrote:

 Hi,
 Why dont u use a stored proc to update rows ,where u commit for every 1k
 or 10k rows.
 This will be much faster than ur individual update stmt.

 regards
 anandkl

 On Thu, Sep 22, 2011 at 8:24 PM, Hank hes...@gmail.com wrote:

 That is what I'm doing. I'm doing a correlated update on 200 million
 records. One UPDATE statement.

 Also, I'm not asking for a tutorial when not to use LOCK TABLES.  I'm
 trying
 to figure out why, despite what the documentation says, using LOCK
 TABLES
 hinders performance for large update statements on MYISAM tables when
 it is
 supposed to increase performance on exactly the type of queries I am
 performing.

 If you can't help answer *that* question, please stop lecturing me on
 the
 reasons not to use LOCK TABLES. Thanks.

 -Hank


 On Thu, Sep 22, 2011 at 10:19 AM, Antony T Curtis
 antonycur...@verizon.netwrote:

  Even for MyISAM tables, LOCK TABLES is not usually the best solution
 for
  increasing performance. When there is little to no contention, LOCK
 TABLES
  doesn't offer much value.
 
  MyISAM works best when you can get more work done in a statement:
 Instead
  of executing a bunch of insert statements, combine them into a single
  multi-row insert statement, as an example.
 
 
  On 22 Sep 2011, at 06:13, Hank wrote:
 
  Thanks for your reply.  I failed to mention that these are MYISAM
 tables,
  so no transactions.  And like I said, this is not a production box
 nor is
  there any application running, so there's no contention for the
 tables being
  locked.  I'm trying to update a database design on two tables with
 200
  million records each, so anything I can do to increase the
 performance of
  these long running queries will shorten the migration running time.
 
  What I was referring to was that in the documentation,  that when
 using
  LOCK TABLES, mysql does not update the key cache until the lock is
 released,
  versus when not using LOCK TABLES it does update the key cache on
 each
  insert/update/delete.
 
  see: http://tuxradar.com/practicalphp/18/2/22
 
  In my testing, I'm seeing a slow down when I use LOCK TABLES versus
 running
  the same queries without it.  I'm just trying to find a reason why
 that
  might be the case.
 
  -Hank
 
 
  On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis 
  antonycur...@verizon.net wrote:
 
  LOCK TABLES...WRITE is very likely to reduce performance if you are
 using
  a transactional storage engine, such as InnoDB/XtraDB or PBXT. The
 reason is
  that only one connection is holding the write lock and no other
 concurrent
  operation may occur on the table.
 
  LOCK TABLES is only really useful for non-transactional tables and
 maybe a
  few specialized operations where it has its advantages but for 99.9%
 of
  cases, it should not be used.
 
  What does increase performance is the proper use of transactions
 with
  appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE
 MODE.
 
  Regards,
 
  Antony.
 
 
 
  On 21 Sep 2011, at 20:34, Hank wrote:
 
   According to everything I've read, using LOCK TABLES...WRITE for
 updates,
  inserts and deletes should improve performance of mysql server, but
 I
  think
  I've been seeing the opposite effect.
 
  I've been doing quite a bit of testing on a 64bit install of CentOS
 5.5
  installed as a guest OS on a VMWare ESXi 4.0 hypervisor on a Dell
 R610.
  There are no other VMs on this box, and there are no other users or
  threads
  running on the OS. Just me.  I'm using this box strictly for
 testing of
  large database migration scripts.
 
  It seems like when I execute some of these long running statements
  without
  locking the tables, the code runs quite a bit faster than when I do
 lock
  the
  tables.  And before testing each run, I do restart the server so
 there is
  no
  query caching and I also use FLUSH TABLES between each test run.
 
  All I'm asking is this:  Can anything think of a scenario on a
 single
  user-box and mysql instance, that locking tables would cause these
 DML
  statements to slow down compared to not locking the tables?
 
  Thanks,
 
  -Hank
 
 
 
 
 








Re: query optimization

2011-09-22 Thread Ananda Kumar
Your subquery, select cpe_mac,max(r3_dt) from rad_r3cap, is doing a full
table scan. You might want to check this and add a WHERE condition so that an
indexed column can be used.
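
For reference, a sketch of the usual groupwise-maximum rewrite as a
derived-table join (column names taken from the question below; the 30-day
filter is assumed to apply to both parts), which lets the cpe_mac and r3_dt
indexes do the work instead of repeatedly scanning the whole table:

SELECT c.rec_num, c.cpe_mac, c.req_status
FROM rad_r3cap c
JOIN (
    SELECT cpe_mac, MAX(r3_dt) AS max_dt
    FROM rad_r3cap
    WHERE r3_type = 'Access'
      AND r3_dt >= SUBDATE(NOW(), INTERVAL 30 DAY)
    GROUP BY cpe_mac
) m ON m.cpe_mac = c.cpe_mac AND m.max_dt = c.r3_dt
WHERE c.r3_type = 'Access';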

On Fri, Sep 23, 2011 at 12:14 AM, supr_star suprstar1...@yahoo.com wrote:



  I have a table with 24 million rows, I need to figure out how to optimize
 a query.  It has to do with mac addresses and radius packets - I want to see
 the # of connections and the min/max date. So I basically want all this
 data:

   select cpe_mac,count(*) c,min(r3_dt) mindt,max(r3_dt) maxdt, max(rec_num)
 recn
   from radiuscap
  where r3_dt >= SUBDATE(NOW(),INTERVAL 30 DAY)
 and r3_type='Access'
   group by cpe_mac order by cpe_mac
 ;

 This piece of the query takes 30 seconds to run and produces 3500 rows.  I
 have r3_dt indexed.  I also want a status field of the row with the highest
 r3_dt:

 select rec_num,cpe_mac,req_status
 from rad_r3cap
 where r3_type='Access'
   and (cpe_mac,r3_dt) in (select cpe_mac,max(r3_dt) from rad_r3cap)
 ;

 This piece of the query takes forever,  I let it run for an hour and it
 still didn't finish, it's obviously not using indexes.  I have no idea how
 far along it got.  I wrote a php script to run the 1st query, then do 3500
 individual lookups for the status using the max(rec_num) field in the 1st
 query, and I can get the data in 31 seconds.  So I CAN produce this data,
 but very slowly, and not in 1 sql query.  I want to consolidate this into 1
 sql so I can make a view.

 If anyone can point me in the right direction, I'd appreciate it!



 mysql desc rad_r3cap;
 +-+-+--+-+-++
 | Field   | Type| Null | Key | Default | Extra  |
 +-+-+--+-+-++
 | rec_num | int(11) | NO   | PRI | NULL| auto_increment |
 | r3_dt   | datetime| YES  | MUL | NULL||
 | r3_micros   | int(11) | YES  | | NULL||
 | r3_type | varchar(16) | YES  | | NULL||
 | req_status  | varchar(16) | YES  | | NULL||
 | req_comment | varchar(64) | YES  | | NULL||
 | asn_ip  | varchar(16) | YES  | MUL | NULL||
 | asn_name| varchar(16) | YES  | | NULL||
 | bsid| varchar(12) | YES  | MUL | NULL||
 | cpe_ip  | varchar(16) | YES  | | NULL||
 | cpe_mac | varchar(12) | YES  | MUL | NULL||
 | filename| varchar(32) | YES  | | NULL||
 | linenum | int(11) | YES  | | NULL||
 | r3_hour | datetime| YES  | MUL | NULL||
 | user_name   | varchar(64) | YES  | | NULL||
 +-+-+--+-+-++

 mysql show indexes in rad_r3cap;

 +---++--+--+-+---+-+--++--++-+
 | Table | Non_unique | Key_name | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |

 +---++--+--+-+---+-+--++--++-+
 | rad_r3cap |  0 | PRIMARY  |1 | rec_num | A
   |23877677 | NULL | NULL   |  | BTREE  | |
 | rad_r3cap |  0 | r3cap_dt |1 | r3_dt   | A
   |NULL | NULL | NULL   | YES  | BTREE  | |
 | rad_r3cap |  0 | r3cap_dt |2 | r3_micros   | A
   |NULL | NULL | NULL   | YES  | BTREE  | |
 | rad_r3cap |  1 | r3cap_bsid   |1 | bsid| A
   | 346 | NULL | NULL   | YES  | BTREE  | |
 | rad_r3cap |  1 | r3cap_asnip  |1 | asn_ip  | A
   |  55 | NULL | NULL   | YES  | BTREE  | |
 | rad_r3cap |  1 | r3cap_cpemac |1 | cpe_mac | A
   |4758 | NULL | NULL   | YES  | BTREE  | |
 | rad_r3cap |  1 | r3cap_date   |1 | r3_hour | A
   |1548 | NULL | NULL   | YES  | BTREE  | |

 +---++--+--+-+---+-+--++--++-+
 7 rows in set (0.00 sec)



Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
Do you have an index on dest.key?

On Fri, Sep 23, 2011 at 12:21 AM, Hank hes...@gmail.com wrote:


 Like I said, the problem is not just one particular SQL statement. It is
 several dozen statements operating on tables with several hundred million
 records.  The problem is that I am finding that when I use LOCK TABLES,
 these queries run slower (please read my ORIGINAL post with all this
 information).  I am trying to find a logical or reasonable explanation WHY
 this would be the case, despite the fact that the documentation states
 otherwise (see: Right here:
 http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html )

 But if seeing some SQL will make you happy, here is just one example:

 UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

 for 140 million records in dest and  220 million records in source.
  Source is indexed by key+seq (key is primary key, but seq is included as a
 covering index). There is no index on dest.seq -- that index is built once
 the update is complete.  This query takes about 3.5 hours when I don't use
 LOCK TABLES, and over 4 hours when I do use LOCK TABLES.

 -Hank



 On Thu, Sep 22, 2011 at 2:18 PM, Ananda Kumar anan...@gmail.com wrote:

 May be if u can let the audience know a sip-net of ur sql, some can help u


 On Thu, Sep 22, 2011 at 11:43 PM, Hank hes...@gmail.com wrote:


 Sorry, but you do not understand my original issue or question.

 -Hank



 On Thu, Sep 22, 2011 at 2:10 PM, Ananda Kumar anan...@gmail.com wrote:

 No,
 Use a cursor(select column_name to be used in where condition of
 update stmt), loop through it for each update.

 regards
 anandkl


 On Thu, Sep 22, 2011 at 11:36 PM, Hank hes...@gmail.com wrote:


 Actually, that would be orders of magnitude slower.

   I'm using MYISAM tables, so there's no commit.




 On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar anan...@gmail.comwrote:

 Hi,
 Why dont u use a stored proc to update rows ,where u commit for every
 1k or 10k rows.
 This will be much faster than ur individual update stmt.

 regards
 anandkl

 On Thu, Sep 22, 2011 at 8:24 PM, Hank hes...@gmail.com wrote:

 That is what I'm doing. I'm doing a correlated update on 200 million
 records. One UPDATE statement.

 Also, I'm not asking for a tutorial when not to use LOCK TABLES.  I'm
 trying
 to figure out why, despite what the documentation says, using LOCK
 TABLES
 hinders performance for large update statements on MYISAM tables when
 it is
 supposed to increase performance on exactly the type of queries I am
 performing.

 If you can't help answer *that* question, please stop lecturing me on
 the
 reasons not to use LOCK TABLES. Thanks.

 -Hank


 On Thu, Sep 22, 2011 at 10:19 AM, Antony T Curtis
 antonycur...@verizon.netwrote:

  Even for MyISAM tables, LOCK TABLES is not usually the best
 solution for
  increasing performance. When there is little to no contention, LOCK
 TABLES
  doesn't offer much value.
 
  MyISAM works best when you can get more work done in a statement:
 Instead
  of executing a bunch of insert statements, combine them into a
 single
  multi-row insert statement, as an example.
 
 
  On 22 Sep 2011, at 06:13, Hank wrote:
 
  Thanks for your reply.  I failed to mention that these are MYISAM
 tables,
  so no transactions.  And like I said, this is not a production box
 nor is
  there any application running, so there's no contention for the
 tables being
  locked.  I'm trying to update a database design on two tables with
 200
  million records each, so anything I can do to increase the
 performance of
  these long running queries will shorten the migration running time.
 
  What I was referring to was that in the documentation,  that when
 using
  LOCK TABLES, mysql does not update the key cache until the lock is
 released,
  versus when not using LOCK TABLES it does update the key cache on
 each
  insert/update/delete.
 
  see: http://tuxradar.com/practicalphp/18/2/22
 
  In my testing, I'm seeing a slow down when I use LOCK TABLES versus
 running
  the same queries without it.  I'm just trying to find a reason why
 that
  might be the case.
 
  -Hank
 
 
  On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis 
  antonycur...@verizon.net wrote:
 
  LOCK TABLES...WRITE is very likely to reduce performance if you
 are using
  a transactional storage engine, such as InnoDB/XtraDB or PBXT. The
 reason is
  that only one connection is holding the write lock and no other
 concurrent
  operation may occur on the table.
 
  LOCK TABLES is only really useful for non-transactional tables and
 maybe a
  few specialized operations where it has its advantages but for
 99.9% of
  cases, it should not be used.
 
  What does increase performance is the proper use of transactions
 with
  appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE
 MODE.
 
  Regards,
 
  Antony.
 
 
 
  On 21 Sep 2011, at 20:34, Hank wrote:
 
   According to everything I've read, using LOCK TABLES...WRITE for
 updates

Re: Questions about Mysqldump

2011-09-15 Thread Ananda Kumar
or u can use for loop, have only the database to be exported and use that
variable in --database and do mysqldump of each database.

On Thu, Sep 15, 2011 at 6:27 PM, Carsten Pedersen cars...@bitbybit.dkwrote:

 On 15-09-2011 10:31, Chris Tate-Davies wrote:

 Adarsh,

 1)

 When restoring a mysqldump you have the option of which database to
 restore.

  mysql database1 < backup.sql


 Admittedly, it's been a few years since I last used mysqldump, but I
 suspect that it will contain USE commands - as such, it will restore to
 whatever database data was dumped from.

 You'll want to have --one-database on the cmd line too.



 2)

 You might be able to use the --ignore-table command. I'm not sure if
 this would work

  mysqldump --all-databases -q --single-transaction
  --ignore-table=databasetoignore.* | gzip >
  /media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


 or create a short script that asks mysql for all databases, greps away
 those you don't want to dump, and runs mysqldump on the rest.

 / Carsten



 3)

 The docs are here for mysqldump, might be worth a read:

  http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html


 HTH, Chris



 On 15/09/11 06:29, Adarsh Sharma wrote:

 Dear all,

 Today i backup my all databases (25) by using the below command :-

  mysqldump --all-databases -q --single-transaction | gzip >
  /media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


 Now I have some doubts or problems that I need to handle in future :

 1. Is there any option in restore command ( I use mysql  backup.sql )
 to store only specific 1 or 2 databases out of this big backup file.
 2. While taking mysqldump of all databases , is there any way to leave
 specific databases , I know there is --databases option , but we have
 to name other 23 databases then.
 3. What are the settings that are need to changed in my.cnf to make
 backup  restore faster.


 Thanks




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
  http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: optimising for 100000 entries

2011-09-14 Thread Ananda Kumar
Dr. Doctor,
What kind of 100,000 entries? Are they inserts, updates, deletes, etc.?

regards
anandkl

On Wed, Sep 14, 2011 at 6:30 PM, The Doctor doc...@doctor.nl2k.ab.cawrote:

 Question:

 How can you optimise MySQL for 100,000 entries?

 Just running osCommerce and it is slow to pull up a whole catalogue.

 --
 Member - Liberal International  This is doc...@nl2k.ab.ca Ici
 doc...@nl2k.ab.ca
 God, Queen and country! Never Satan President Republic! Beware AntiChrist
 rising!
 https://www.fullyfollow.me/rootnl2k
 Ontario, Nfld, and Manitoba boot the extremists out and vote Liberal!

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: optimising for 100000 entries

2011-09-14 Thread Ananda Kumar
So,
Do you want to have 100,000 buttons for 100,000 entries, or just one filter
field that allows you to specify any type of WHERE condition?

regards
anandkl
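
Arthur's alphabet-button suggestion quoted below boils down to a procedure
that takes a single letter as its parameter. A minimal sketch, using the table
and column names from his example as placeholders:

DELIMITER //
CREATE PROCEDURE clients_by_letter(IN p_letter CHAR(1))
BEGIN
  SELECT * FROM Clients
  WHERE ClientName LIKE CONCAT(p_letter, '%')
  ORDER BY ClientName;
END //
DELIMITER ;

CALL clients_by_letter('A');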

On Wed, Sep 14, 2011 at 7:17 PM, Arthur Fuller fuller.art...@gmail.comwrote:

 Forgive  my bluntness, but IMO it is silly to attempt to retrieve a 100,000
 rows, except for reporting purposes, and in that case, said reports ought to
 run against a replica, not the OLTP instance.

 Far better, IMO, is to present (in the UI) an alphabet as buttons, plus a
 textbox for refinements. The alphabet buttons cause the recordSource to
 change to something like SELECT * FROM Clients WHERE ClientName LIKE 'A*'.
 Click the B button and the RecordSource changes to SELECT * FROM Clients
 WHERE ClientName LIKE 'B*'. IMO, such an interface gives the user all the
 power she needs, and costs the system as little as possible.

 To accomplish this, all you need is a sproc that accepts one parameter,
 that being the letter corresponding to the letter-button the user pressed.

 I have implemented exactly this solution on a table with only half the
 number of rows you cite, but it works beautifully and it is quick as
 lightning.

 HTH,
 Arthur


 On Wed, Sep 14, 2011 at 9:24 AM, Ananda Kumar anan...@gmail.com wrote:

 Dr. Doctor,
 What kind of 10 entries? Is it insert,update delete etc.

 regards
 anandkl

 On Wed, Sep 14, 2011 at 6:30 PM, The Doctor doc...@doctor.nl2k.ab.ca
 wrote:

  Question:
 
  How can you optimise MySQL for 10 entires?
 
  Just running OSCemmerce and it is slow to pull up a who catalogue.
 
  --
  Member - Liberal International  This is doc...@nl2k.ab.ca Ici
  doc...@nl2k.ab.ca
  God, Queen and country! Never Satan President Republic! Beware
 AntiChrist
  rising!
  https://www.fullyfollow.me/rootnl2k
  Ontario, Nfld, and Manitoba boot the extremists out and vote Liberal!
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com
 
 





Re: strange mysql update ..

2011-09-08 Thread Ananda Kumar
Can you let us know what the output of the following is?

select * from user_info where user_id=16078845;
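
For what it's worth, the accidental "and" turns the whole thing into one
boolean expression assigned to login_date. A sketch of how the mis-typed
statement is effectively parsed (illustration only, not something to run):

UPDATE user_info
SET login_date = ('2011-08-05 04:15:05' AND (user_id = 16078845))
LIMIT 1;

The date string is cast to a number for the AND (hence the "Truncated
incorrect DOUBLE value" warning), the expression evaluates to 0 or 1, and
whichever single row LIMIT 1 picked gets a zero/invalid login_date rather
than the intended value, which is why no row matches '2011-08-05%' afterwards.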

On Thu, Sep 8, 2011 at 1:02 PM, umapathi b umapath...@gmail.com wrote:

 I wanted to change the login_date of one user . The original data of that
 user is like this ..

 select * from user_info where user_id = 16078845 \G
 *** 1. row ***
   user_id: 16078845
   drivers_license: TEST1140DL
login_date: 2011-06-19 11:20:07
 course_id: 1011
  regulator_id: 10840
 test_info: 
   completion_date: 2011-06-19 11:37:16
print_date: NULL
  password: test1140dl
 certificate_number: NULL
 login: test114...@1140dl.com

 I fired the update statement in a wrong way ..like this ..

 update user_info set login_date='2011-08-05 04:15:05' and user_id =16078845
 limit 1 ;
 ( I forgot to use where . instead of where I used and )
 update user_info set login_date='2011-08-05 04:15:05' where user_id
 =16078845 limit 1 ; ( this is the query intended )

 after the update ..I got this message ..
 mysql update user_info set login_date='2011-08-05 04:15:05' and user_id
 =16078845 limit 1;
 Query OK, 1 row affected, 1 warning (0.02 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

 It shows that one record is affected and one row changed ..
 I did show warnings ..the output is like this ..

 mysql show warnings;

 +---------+------+----------------------------------------------------------+
 | Level   | Code | Message                                                  |
 +---------+------+----------------------------------------------------------+
 | Warning | 1292 | Truncated incorrect DOUBLE value: '2011-08-05 04:15:05' |
 +---------+------+----------------------------------------------------------+

 But I could not get any record in the table with the updated login_date ..
 mysql select * from user_info where login_date like '2011-08-05%' ;
 Empty set (0.67 sec)

 So my question is what happened exactly ?
 Why no records updated ?

 Help is highly appreciated in this regard ..

 - Umapathi
 umapath...@gmail.com



Re: Schema list

2011-07-27 Thread Ananda Kumar
Is this a production setup?
If not, take a complete dump of all the databases.
Drop the xYZ database and see if you can still see all the objects under XYZ.

Since the xYZ database could be created at all, it is obvious that names are
case sensitive, and it should not show objects from XYZ when you are using xYZ.

Can you please paste the results that you are seeing.

regards
anandkl
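
One way to check which schema the routines actually belong to before dropping
anything, a sketch using information_schema (schema names as in the question):

SELECT ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
FROM information_schema.ROUTINES
WHERE ROUTINE_SCHEMA IN ('XYZ', 'xYZ');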



On Wed, Jul 27, 2011 at 3:09 PM, Anupam Karmarkar
sb_akarmar...@yahoo.comwrote:

 There is one database name 'XYZ' on linux system, some one has mistakenly
 created new
 database name with 'xYZ', database is created when I use 'xYZ' database,
 there are no
 tables with SHOW TABLES, but when i run SHOW PROCEDURE STATUS it show
 procedures from
 'XYZ' database.
 My concern is that if i drop new database created 'xYZ' will it drop
 procedures also
 listed from 'XYZ' database



Re: Backup onle one procedure

2011-02-10 Thread Ananda Kumar
Sorry, my bad.

It's -R (--routines), not -p.

regards
anandkl
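
For a single routine, the SHOW CREATE PROCEDURE route mentioned further down
the thread can also be captured straight from the mysql client, e.g. (database
and procedure names here are placeholders):

SHOW CREATE PROCEDURE mydb.my_procedure\G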

On Thu, Feb 10, 2011 at 1:58 PM, Singer X.J. Wang w...@singerwang.comwrote:

 Remember that procedure is defined per database,

 mysqldump -u[user] -p[pass] --where="db='whatyouwant' AND name='whatyouwant'" mysql proc


 On Thu, Feb 10, 2011 at 02:55, Ananda Kumar anan...@gmail.com wrote:

 there is -p option please used that.

 On Thu, Feb 10, 2011 at 12:47 PM, Johan De Meersman vegiv...@tuxera.be
 wrote:

  On Thu, Feb 10, 2011 at 7:43 AM, Adarsh Sharma 
 adarsh.sha...@orkash.com
  wrote:
 
   I am researching all the ways to backup in mysql and donot able to
 find a
   command that take individual backup of only one procedure in mysql.
  
 
  Have a look at the SHOW CREATE PROCEDURE syntax. It's not mysqldump, but
 it
  will yield a statement that can recreate the given procedure from
 scratch.
 
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 


 --
 The best compliment you could give Pythian for our service is a referral.





Re: Backup onle one procedure

2011-02-09 Thread Ananda Kumar
There is a -p option, please use that.

On Thu, Feb 10, 2011 at 12:47 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 On Thu, Feb 10, 2011 at 7:43 AM, Adarsh Sharma adarsh.sha...@orkash.com
 wrote:

  I am researching all the ways to backup in mysql and donot able to find a
  command that take individual backup of only one procedure in mysql.
 

 Have a look at the SHOW CREATE PROCEDURE syntax. It's not mysqldump, but it
 will yield a statement that can recreate the given procedure from scratch.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: Table creation

2011-01-31 Thread Ananda Kumar
What does 'show create table Teste2' show?

2011/1/31 João Cândido de Souza Neto j...@consultorweb.cnt.br

 Please, give us some information about your server.

 --
 João Cândido de Souza Neto

 M. Rodrigo Monteiro fale...@rodrigomonteiro.net wrote in message
 news:AANLkTikw2rDzhZU2+SmVeiPnVCYB-Q=vce5nufa7o...@mail.gmail.com...
 Hi!

 2011/1/31 João Cândido de Souza Neto j...@consultorweb.cnt.br:
  CREATE TABLE `DATABASE`.`Teste2`(`id` SMALLINT(2) , `nome`
  VARCHAR(10)) ENGINE=InnoDB;
 

 I forget to put in the e-mail. I tryied with ENGINE and TYPE, samething.


 Regards,
 Rodrigo.

 --
 M. Rodrigo Monteiro

 Free as in Freedom, not free as in free beer
 As we are liberated from our own fear, our presence automatically
 liberates others
 Linux User # 403730




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Help needed with what appears to be a corrupted innodb db

2011-01-09 Thread Ananda Kumar
Pito,

Can you show us the InnoDB parameters in your my.cnf file?

regards
anandkl

On Sat, Jan 8, 2011 at 10:31 PM, Pito Salas r...@salas.com wrote:

 I am very new to trying to solve a problem like this and have searched
 and searched the web for a useful troubleshooting guide but I am
 honestly stuck. I wonder if anyone out there could take a look at this
 bit of mysqld log. Any help at all would be greatly appreciated!!!

 Pito

 110107 15:07:15  mysqld started
 110107 15:07:15  InnoDB: Database was not shut down normally!
 InnoDB: Starting crash recovery.
 InnoDB: Reading tablespace information from the .ibd files...
 InnoDB: Restoring possible half-written data pages from the doublewrite
 InnoDB: buffer...
 110107 15:07:15  InnoDB: Starting log scan based on checkpoint at
 InnoDB: log sequence number 35 515914826.
 InnoDB: Doing recovery: scanned up to log sequence number 35 515915839
 InnoDB: 1 transaction(s) which must be rolled back or cleaned up
 InnoDB: in total 1 row operations to undo
 InnoDB: Trx id counter is 0 1697553664
 110107 15:07:15  InnoDB: Starting an apply batch of log records to the
 database...
 InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38
 39 40 41 42 43 44
 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67
 68 69 70 71 72 73
 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98
 99
 InnoDB: Apply batch completed
 InnoDB: Starting rollback of uncommitted transactions
 InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undoInnoDB:
 Error: trying to
 access page number 3522914176 in space 0,
 InnoDB: space name ./ibdata1,
 InnoDB: which is outside the tablespace bounds.
 InnoDB: Byte offset 0, len 16384, i/o type 10
 110107 15:07:15InnoDB: Assertion failure in thread 3086403264 in file
 fil0fil.c line
 3922
 InnoDB: We intentionally generate a memory trap.
 InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
 InnoDB: If you get repeated assertion failures or crashes, even
 InnoDB: immediately after the mysqld startup, there may be
 InnoDB: corruption in the InnoDB tablespace. Please refer to
 InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
 InnoDB: about forcing recovery.
 mysqld got signal 11;

 This could be because you hit a bug. It is also possible that this binary
 or one of the libraries it was linked against is corrupt, improperly built,
 or misconfigured. This error can also be caused by malfunctioning hardware.
 We will try our best to scrape up some info that will hopefully help
 diagnose
 the problem, but since we have already crashed, something is definitely
 wrong
 and this may fail.

 key_buffer_size=0
 read_buffer_size=131072
 max_used_connections=0
 max_connections=100
 threads_connected=0
 It is possible that mysqld could use up to
 key_buffer_size + (read_buffer_size +
 sort_buffer_size)*max_connections = 217599 K
 bytes of memory
 Hope that's ok; if not, decrease some variables in the equation.

 thd=(nil)
 Attempting backtrace. You can use the following information to find out
 where mysqld died. If you see no messages after this, something went
 terribly wrong...
 Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct.
 Stack range sanity check OK, backtrace follows:
 0x8139eec
 0x83721d5
 0x833d897
 0x833db71
 0x832aa38
 0x835f025
 0x835f7a3
 0x830a77e
 0x8326b57
 0x831c825
 0x8317b8d
 0x82a9e66
 0x8315732
 0x834fc9a
 0x828d7c3
 0x81c29dd
 0x81b5620
 0x813d9fe
 0x40fdf3
 0x80d5ff1
 New value of fp=(nil) failed sanity check, terminating stack trace!

  Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html
 and follow
 instructions on how to resolve the stack trace. Resolved
 stack trace is much more helpful in diagnosing the problem, so please do
 resolve it The manual page at 
 http://www.mysql.com/doc/en/Crashing.htmlcontains
 information that should help you find out what is causing the crash.
 110107 15:07:15  mysqld ended

 --
 Check out http://www.salas.com and http://www.blogbridge.com/look

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Loading date takes a long time after replication

2011-01-02 Thread Ananda Kumar
What is the output of SHOW FULL PROCESSLIST?

On Mon, Jan 3, 2011 at 1:10 PM, Sairam Krishnamurthy kmsram...@gmail.comwrote:

 Hi all,

 I set up replication some days back. Because of some internal issues we
 canceled the replication and reset the master to be a normal database
 server. But after this revert, the database is unusually slow.

 It was slow because of the discussion below when replication was on. But I
 wonder why it is slow now after the master has been reset. By reset I mean,
 commenting out the master setup lines in the my.cnf file, removal of logs
 and restarting the database.

 Thanks in advance


 Thanks,
 Sairam Krishnamurthy
 +1 612 859 8161


 On 12/06/2010 04:47 AM, Ananda Kumar wrote:

 Also, make sure your /tmp folder is on a separate and fast disk.
 We had similar issues and we moved /tmp folder from Local to SAN storage
 and it was quite fast.

 regards
 anandkl

 On Mon, Dec 6, 2010 at 4:10 PM, Johan De Meersman vegiv...@tuxera.bemailto:
 vegiv...@tuxera.be wrote:

Are you saying that mass inserts go much slower now that you've set up
replication? In that case, I suspect you have your binlogs on the
same disk
as your data.

Put the binary logs on separate disks, and you'll notice a
dramatic increase
in performance.


On Mon, Dec 6, 2010 at 7:17 AM, Sairam Krishnamurthy
kmsram...@gmail.com mailto:kmsram...@gmail.comwrote:


 All,

 I have a situation. We have a very large database(gigabytes).
When we load
 the data in to the table, it will be huge again (20 million
rows). So in
 order to get good backup we set up a replication(master-slave
setup). But
 now, because of this setup, loading into the table takes hours
which will be
 generally done in a few minutes otherwise.

 Is there a workaround for this? We really don't want to wait for
hours for
 the table to be loaded. Should we abandon replication for this?
Any pointers
 ?

 --
 Thanks,
 Sairam Krishnamurthy
 +1 612 859 8161




--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel





Re: Error while running Mysql

2010-12-23 Thread Ananda Kumar
Have you over-allocated RAM/processes?

regards
anandkl

On Thu, Dec 23, 2010 at 6:15 PM, Glyn Astill glynast...@yahoo.co.uk wrote:

 I've no idea of the status of dtrace on linux, as I've never tried, but
 failing that you could run it through gdb to get some insight into the
 issue.

 --- On Thu, 23/12/10, Johan De Meersman vegiv...@tuxera.be wrote:

  From: Johan De Meersman vegiv...@tuxera.be
  Subject: Re: Error while running Mysql
  To: Adarsh Sharma adarsh.sha...@orkash.com
  Cc: mysql@lists.mysql.com
  Date: Thursday, 23 December, 2010, 12:39
  Probably one for the guys with the
  compilers, but have you tried running it
  with dtrace and seeing where it explodes ?
 
  On Thu, Dec 23, 2010 at 1:38 PM, Adarsh Sharma adarsh.sha...@orkash.com
 wrote:
 
   Dear all,
  
   I am able o successfully build Mysql 5.5.8 from its
  source code on CentOS
   but when I issued the following command , it doesn't
  shows the mysql
   prompt, there is no error in log file.
  
  
   [r...@ws-test mysql-5.5.8]# bin/mysql -p
   Enter password:
   Welcome to the MySQL monitor.  Commands end with
  ; or \g.
   Your MySQL connection id is 2
   Server version: 5.5.8 Source distribution
  
   Copyright (c) 2000, 2010, Oracle and/or its
  affiliates. All rights
   reserved.
  
   Oracle is a registered trademark of Oracle Corporation
  and/or its
   affiliates. Other names may be trademarks of their
  respective
   owners.
  
   Segmentation fault
   [r...@ws-test mysql-5.5.8]#
  
   Don't know what to do.Please help.
  
  
   Thanks  Regards
  
   Adarsh Sharma
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
 http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
  
  
 
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Kill DELETE Query

2010-12-16 Thread Ananda Kumar
If you have used a stored procedure to delete the rows, committing frequently,
then the kill will happen faster.
If you have just used a plain delete from table_name where condition, then it
will take too much time to roll back all the deleted but not yet committed
rows.

Regards
anandkl
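
For reference, the kill itself is just a matter of finding the connection id;
a sketch (12345 is a placeholder for the id shown by the processlist). Note
that for InnoDB the server still has to roll back the uncommitted deletes
after the kill, which is where the time goes:

SHOW FULL PROCESSLIST;   -- note the Id of the long-running DELETE
KILL QUERY 12345;        -- stops the statement but keeps the connection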

On Fri, Dec 17, 2010 at 8:37 AM, Willy Mularto sangpr...@gmail.com wrote:

 Hi List,
 I run a delete query to delete around 1 million rows in innodb table, It's
 been hours and still unfinish. Is it safe to kill that delete query process
 while the table is also inserting and updating other rows? Thanks.




 sangprabv
 sangpr...@gmail.com
 http://www.petitiononline.com/froyo/



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Kill DELETE Query

2010-12-16 Thread Ananda Kumar
No...it will not.

On Fri, Dec 17, 2010 at 11:26 AM, Willy Mularto sangpr...@gmail.com wrote:

 Thanks for the reply. I used non stored procedure approach. Another
 question is if I kill the process will it crash the table? Thanks.



 sangprabv
 sangpr...@gmail.com
 http://www.petitiononline.com/froyo/


 On Dec 17, 2010, at 12:06 PM, Ananda Kumar wrote:

 If u have used a stored proc to delete the rows, and commting freqently,
 then the kill will happen faster.
 If you have just used delete from table_name where condition, then it
 would take toot much time to rollback all the deleted but not commited rows.

 Regards
 anandkl

 On Fri, Dec 17, 2010 at 8:37 AM, Willy Mularto sangpr...@gmail.comwrote:

 Hi List,
 I run a delete query to delete around 1 million rows in innodb table, It's
 been hours and still unfinish. Is it safe to kill that delete query process
 while the table is also inserting and updating other rows? Thanks.




 sangprabv
 sangpr...@gmail.com
 http://www.petitiononline.com/froyo/



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com






Re: Moving of databases from one server to another

2010-12-08 Thread Ananda Kumar
Copy the /etc/init.d/mysql file from your old machine to the new one and try
the start/stop again.

regards
anandkl
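
Another thing worth checking, assuming the usual cause of this error: after
restoring the mysql database from the old machine, the debian-sys-maint
password stored in the new machine's /etc/mysql/debian.cnf no longer matches
the restored account. A sketch of re-aligning them (substitute the password
actually found in that file):

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost'
    IDENTIFIED BY 'password-from-debian.cnf'
    WITH GRANT OPTION;
FLUSH PRIVILEGES;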

On Wed, Dec 8, 2010 at 2:21 PM, Machiel Richards machi...@rdc.co.za wrote:

 HI All

I am hoping someone has had this before as this one is baffling me
 entirely.

We did a MySQL database move from one machine to another one last
 night.

The O/S versions are the same and so are the database version
 (5.1.22).

The database was installed and configured on the new machine and
 was running perfectly. no issues.

Seeing that both versions would be exactly the same, we decided
 to do a full backup including the mysql database in order to preserve
 all permissions etc... and then restore this to the new machine.

The backup and restore went through without any issues
 whatsoever.

However when I used the /etc/init.d/mysql script to restart the
 database, I started getting errors during shutdown and startup
 procedure.

Despite this the database did start up and was accessible,
 however even when running /etc/init.d/mysql status command I get errors
 as below (these are the same received during startup):




 /etc/init.d/mysql status
/usr/bin/mysqladmin: connect to
server at 'localhost' failed
error: 'Access denied for user
'debian-sys-maint'@'localhost'
 (using password: YES)'
 *

I have tried to look on the net for answers but nothing was
 found to resolve this. I have reset the passwords for this user (also
 getting the same error for r...@localhost).
One site also suggested correcting the password in
 the /etc/mysql/debian.cnf which I did, but still getting these messages.


This is a production system and I am a bit concerned to the
 fact that the startup script does not work properly.


  Is there anyone who has had this issue before and have
 been able to resolve it?

Thank you in advance for any help on this matter.


 Regards
 Machiel








Re: Loading date takes a long time after replication

2010-12-06 Thread Ananda Kumar
Also, make sure your /tmp folder is on a separate and fast disk.
We had similar issues; we moved the /tmp folder from local disk to SAN storage
and it became quite fast.

regards
anandkl

On Mon, Dec 6, 2010 at 4:10 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 Are you saying that mass inserts go much slower now that you've set up
 replication? In that case, I suspect you have your binlogs on the same disk
 as your data.

 Put the binary logs on separate disks, and you'll notice a dramatic
 increase
 in performance.


 On Mon, Dec 6, 2010 at 7:17 AM, Sairam Krishnamurthy kmsram...@gmail.com
 wrote:

  All,
 
  I have a situation. We have a very large database(gigabytes). When we
 load
  the data in to the table, it will be huge again (20 million rows). So in
  order to get good backup we set up a replication(master-slave setup). But
  now, because of this setup, loading into the table takes hours which will
 be
  generally done in a few minutes otherwise.
 
  Is there a workaround for this? We really don't want to wait for hours
 for
  the table to be loaded. Should we abandon replication for this? Any
 pointers
  ?
 
  --
  Thanks,
  Sairam Krishnamurthy
  +1 612 859 8161
 
 


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: question about restoring...

2010-11-12 Thread Ananda Kumar
If you just need specific records, you can use the -w (--where) option of
mysqldump to extract only the specific records.
Then you can load the resulting dump file into another database.

regards
anandkl

On Fri, Nov 12, 2010 at 2:35 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 From the OP:

  I have a copy of the INNODB files for these two tables - is there a way
  to extract the table contents from these files short of a full import?
 

 I have to agree, that's quite ambiguous. Andy, is it a copy of the innoDB
 datafiles, or a database dump that you have ?

 In the latter case, it's reasonably simple to extract what you need; in the
 former case you're gonna have to try attaching them to a new instance -
 good
 luck with that.



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: clone a database on the same machine

2010-09-20 Thread Ananda Kumar
The dump file has to be edited to replace the old database name with the new
database name.

regards
anandkl

On Mon, Sep 20, 2010 at 3:00 PM, Uwe Brauer o...@mat.ucm.es wrote:

 Hello

 I would like to clone a database db_org to db_clone on the same machine.
 Could
 I use the dump command for this? Should the user of both db
 be the same?
 Something like
 # mysqladmin create db_clone -u DB_user

  mysqldump -p db_org > clone.sql
  mysql -p -D db_clone < clone.sql


 thanks

 Uwe Brauer


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: clone a database on the same machine

2010-09-20 Thread Ananda Kumar
With the method you mentioned, you need to have the new database already
created.
You can do it this way, or by editing the dump file.

regards
anandkl

On Mon, Sep 20, 2010 at 3:19 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 Not the way he does it :-)

 If you use --databases, mysqldump will add create database and use
 database statements. if you specify the db without that parameter, it
 won't.


 On Mon, Sep 20, 2010 at 11:34 AM, Ananda Kumar anan...@gmail.com wrote:

 The dump file has to be edited to replace old db name to the new db name.

 regards
 anandkl

 On Mon, Sep 20, 2010 at 3:00 PM, Uwe Brauer o...@mat.ucm.es wrote:

  Hello
 
  I would like to clone a database db_org to db_clone on the same machine.
  Could
  I use the dump command for this? Should the user of both db
  be the same?
  Something like
  # mysqladmin create db_clone -u DB_user
 
  mysqldump -p db_org  clone.sql
  mysql -p -D db_clone  clone.sql
 
 
  thanks
 
  Uwe Brauer
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com
 
 




 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: clone a database on the same machine

2010-09-20 Thread Ananda Kumar
Oops, sorry, I did not see that line.

On Mon, Sep 20, 2010 at 3:44 PM, Johan De Meersman vegiv...@tuxera.bewrote:

 He did suggest doing mysqladmin create :-p


 On Mon, Sep 20, 2010 at 11:58 AM, Ananda Kumar anan...@gmail.com wrote:

 With the method you mentioned, you need to have the new db name already
 present.
 You can do it this way or the edit dump file way.

 regards
 anandkl

   On Mon, Sep 20, 2010 at 3:19 PM, Johan De Meersman 
 vegiv...@tuxera.bewrote:

 Not the way he does it :-)

 If you use --databases, mysqldump will add create database and use
 database statements. if you specify the db without that parameter, it
 won't.


 On Mon, Sep 20, 2010 at 11:34 AM, Ananda Kumar anan...@gmail.comwrote:

 The dump file has to be edited to replace old db name to the new db
 name.

 regards
 anandkl

 On Mon, Sep 20, 2010 at 3:00 PM, Uwe Brauer o...@mat.ucm.es wrote:

  Hello
 
  I would like to clone a database db_org to db_clone on the same
 machine.
  Could
  I use the dump command for this? Should the user of both db
  be the same?
  Something like
  # mysqladmin create db_clone -u DB_user
 
  mysqldump -p db_org  clone.sql
  mysql -p -D db_clone  clone.sql
 
 
  thanks
 
  Uwe Brauer
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/mysql?unsub=anan...@gmail.com
 
 




 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel





 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: how to recover crashed db: only the db or the whole directory /var/lib/mysql

2010-09-16 Thread Ananda Kumar
It depends on what you want to recover and on the storage engine.

If you want to recover just a specific database and you have a dump of it,
simply restore that dump (this is the safe route for InnoDB).

For MyISAM, you can just restore the files belonging to that database.

regards
anandkl

On Thu, Sep 16, 2010 at 3:27 PM, Uwe Brauer o...@mat.ucm.es wrote:

 Hello

 I have so far tried (unsuccessfully) to recover the database
 from the crashed Mac. (I am using Kubuntu 9.04)

 I tried 2 tactics.

-  shutdown the server (service mysql shutdown)
   copy the maqwiki folder into /var/lib/mysql
   restart the server. The server came up some tables
   show their content.

-  shutdown the server
   replace the whole myslq directory which in the case
   of the Mac is somewhere in /Application/MAMP/db/mysql
   or something like this
   to /var/lib/mysql
   then the server did not come up. I looked for a log
   file, for example in /var/log, but the mysql.log is
   empty and the syslog does not show anything with
   respect  to mysql.

 Any ideas

 Thanks

 Uwe Brauer



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Does putting a LIMIT on a DELETE clause make any difference?

2010-09-09 Thread Ananda Kumar
Vincent,

Since the column is indexed, it would use the index during the delete.

regards
anandkl
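
On the "can't EXPLAIN a DELETE" point from the question below: in server
versions where EXPLAIN does not accept DML statements, the plan can be
approximated by explaining the equivalent SELECT, e.g.:

EXPLAIN SELECT * FROM fault_impact_has_fault_system_impact
WHERE id_fault_impact = 2495;

If that shows the fault_impact_key index being used, the LIMIT adds nothing;
the delete stops as soon as the matching index range is exhausted.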

On Thu, Sep 9, 2010 at 5:47 AM, Daevid Vincent dae...@daevid.com wrote:

 I am curious about something.

 I have a glue or hanging table like so:

 CREATE TABLE `fault_impact_has_fault_system_impact` (
  `id_fault_impact` int(10) unsigned NOT NULL,
  `id_fault_system_impact` smallint(5) unsigned NOT NULL,
  KEY `fault_impact_key` (`id_fault_impact`,`id_fault_system_impact`),
  KEY `id_fault_system_impact` (`id_fault_system_impact`),
  CONSTRAINT `fault_impact_has_fault_system_impact_ibfk_1` FOREIGN KEY
 (`id_fault_impact`) REFERENCES `fault_impact` (`id_fault_impact`) ON DELETE
 CASCADE ON UPDATE CASCADE,
  CONSTRAINT `fault_impact_has_fault_system_impact_ibfk_2` FOREIGN KEY
 (`id_fault_system_impact`) REFERENCES `fault_system_impact`
 (`id_fault_system_impact`) ON DELETE CASCADE ON UPDATE CASCADE
 )

 And a lookup table like this:

 CREATE TABLE `fault_system_impact` (
  `id_fault_system_impact` smallint(5) unsigned NOT NULL auto_increment,
  `fault_sytem_impact_name` varchar(50) NOT NULL,
  PRIMARY KEY  (`id_fault_system_impact`)
 )

 I have a bunch of checkboxes in a FORM and so in order to update
 properly, I wipe out all the PK IDs and then start inserting. It looks like
 this:

 UPDATE `fault_impact` SET `bite_code` = '12', `bite_code_type` = '32',
 `bite_subcode` = '21', `description_text` = 'Some random fault description
 here.', `fault_id` = '11-1', `fault_impact_other_explain` = '',
 `id_fault_area_impact` = '3', `symptom_lru_id` = '232', `symptom_lru_subid`
 = '34', `sys_perf_affected` = '', `update_date` = '2010-09-09 00:04:29'
 WHERE id_fault_impact = '2495' LIMIT 1;

 DELETE FROM fault_impact_has_fault_system_impact WHERE id_fault_impact =
 2495;

 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 1);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 3);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 2);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 7);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 10);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 14);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 9);
 INSERT INTO
 fault_impact_has_fault_system_impact(`id_fault_impact`,`id_fault_system_imp
 act`) VALUES(2495, 4);

 Given that I know there can only be a maximum of id_fault_system_impact IDs
 -- currently there are 17 rows in the fault_system_impact table -- and
 they're unique to any given id_fault_impact, would it benefit me to change
 my DELETE statement to something like this:

 DELETE FROM fault_impact_has_fault_system_impact WHERE id_fault_impact =
 2495 LIMIT 17;

 Since the fault_impact_has_fault_system_impact table could have thousands
 of rows and it seems that mySQL would do a table scan? Unfortunately, you
 can't EXPLAIN on a DELETE to see what it might be doing. :( OR is mySQL
 smart enough to know that the id_fault_impact is an index and therefore it
 will just be right quick and stop after deleting those 8 rows above?





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: Query SUM help

2010-09-09 Thread Ananda Kumar
Did you try using LIMIT after the ORDER BY?

On Thu, Sep 9, 2010 at 1:27 PM, Tompkins Neil
neil.tompk...@googlemail.comwrote:

 Any help would be really appreciated ?



 -- Forwarded message --
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: Wed, Sep 8, 2010 at 5:30 PM
 Subject: Query SUM help
 To: [MySQL] mysql@lists.mysql.com


 Hi

 I've the following query :

 SELECT total_team_rating, my_teams_id
 FROM
 (SELECT players.teams_id AS my_teams_id, SUM(players_master.rating) AS
 total_team_rating
 FROM players
 INNER JOIN players_master ON players.players_id = players_master.players_id
 WHERE players.worlds_id = 1 AND players.red_cards = 0 AND
 players.injury_duration_remaining = 0
 GROUP BY players.teams_id) s1
 ORDER BY s1.total_team_rating DESC

 This gives me the total of players_master.rating for each players.teams_id.
  However, I'm wanting to only base the players_master.rating on the top 11
 records in the players table for each team.  How can I modify my query to
 achieve this ?

 Thanks
 Neil



Re: Query SUM help

2010-09-09 Thread Ananda Kumar
Try ranking the rows and limiting the result set,

something like: select * from table order by rank desc limit 11; this will
get you the top 11 rows.

regards
anandkl
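
MySQL of this vintage has no built-in RANK() window function, so here is a hedged,
untested sketch of the usual correlated-subquery approach to "top 11 rows per team",
reusing the column names from Neil's query:

  SELECT p.teams_id AS my_teams_id, SUM(pm.rating) AS total_team_rating
  FROM players p
  INNER JOIN players_master pm ON p.players_id = pm.players_id
  WHERE p.worlds_id = 1 AND p.red_cards = 0 AND p.injury_duration_remaining = 0
    AND ( SELECT COUNT(*)
          FROM players p2
          INNER JOIN players_master pm2 ON p2.players_id = pm2.players_id
          WHERE p2.teams_id = p.teams_id
            AND p2.worlds_id = 1 AND p2.red_cards = 0
            AND p2.injury_duration_remaining = 0
            AND pm2.rating > pm.rating ) < 11   -- fewer than 11 teammates rate higher
  GROUP BY p.teams_id
  ORDER BY total_team_rating DESC;

Ties on rating can let more than 11 rows through for a team; add a tie-breaker
(for example on players_id) in the subquery if that matters.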

On Thu, Sep 9, 2010 at 1:55 PM, Tompkins Neil
neil.tompk...@googlemail.comwrote:

 Yes, but it doesn't work.  Basically I want the SUM(players_master.rating)
 only to SUM the top 11 players from each team.  Any suggestions ?

 Cheers
 Neil


 On Thu, Sep 9, 2010 at 9:17 AM, Ananda Kumar anan...@gmail.com wrote:

 did u try to use  LIMIT after ORDER BY


 On Thu, Sep 9, 2010 at 1:27 PM, Tompkins Neil 
 neil.tompk...@googlemail.com wrote:

 Any help would be really appreciated ?



 -- Forwarded message --
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: Wed, Sep 8, 2010 at 5:30 PM
 Subject: Query SUM help
 To: [MySQL] mysql@lists.mysql.com


 Hi

 I've the following query :

 SELECT total_team_rating, my_teams_id
 FROM
 (SELECT players.teams_id AS my_teams_id, SUM(players_master.rating) AS
 total_team_rating
 FROM players
 INNER JOIN players_master ON players.players_id =
 players_master.players_id
 WHERE players.worlds_id = 1 AND players.red_cards = 0 AND
 players.injury_duration_remaining = 0
 GROUP BY players.teams_id) s1
 ORDER BY s1.total_team_rating DESC

 This gives me the total of players_master.rating for each
 players.teams_id.
  However, I'm wanting to only base the players_master.rating on the top
 11
 records in the players table for each team.  How can I modify my query to
 achieve this ?

 Thanks
 Neil






Re: Trying to remove a filesort.

2010-09-09 Thread Ananda Kumar
Have you set sort_buffer_size at the session level or in my.cnf?
Setting a high value in my.cnf will cause MySQL to run out of memory and start
paging, because the buffer can be allocated for every session.

regards
anandkl
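
A sketch of raising it only for the current session instead (the 8 MB figure is just
an assumed example value):

  SET SESSION sort_buffer_size = 8 * 1024 * 1024;
  SHOW VARIABLES LIKE '%sort%';            -- confirm the session value
  -- run the GROUP BY / ORDER BY query here, then optionally:
  SET SESSION sort_buffer_size = DEFAULT;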

On Fri, Sep 10, 2010 at 1:10 AM, Phil freedc@gmail.com wrote:

 Even prior to the group by it's still not likely to ever be more than 200
 or
 so maximum.

 I have the sort_buffer_size at 256Mb so I don't believe it's that either :(

 On Thu, Sep 9, 2010 at 3:16 PM, Michael Dykman mdyk...@gmail.com wrote:

  How many rows before the GROUP BY?  Group by is, in effect a sorting
  process..  perhaps that contains enough data to justify going to disk.
 
  What is the value of the variable sort_buffer_size?
  show variables like '%sort%';
 
   - md
 
  On Thu, Sep 9, 2010 at 3:04 PM, Phil freedc@gmail.com wrote:
   On average it would be between 10 and 40, certainly no more than 100.
  
  
   On Thu, Sep 9, 2010 at 2:57 PM, Michael Dykman mdyk...@gmail.com
  wrote:
  
   The filesort is probably necessary because of the number of rows in
   the result set to be ordered.  How many rows do you get out of this
   query?
  
- michael dykman
  
   On Thu, Sep 9, 2010 at 1:53 PM, Phil freedc@gmail.com wrote:
I wonder if anyone could help with a query which I've been unable to
prevent
from using a filesort. Might be something obvious I'm overlooking!
   
I have a table which tracks milestones in distributed computing
  projects
   
Create Table: CREATE TABLE `boinc_milestone` (
 `proj` char(6) NOT NULL,
 `id` int(11) NOT NULL,
 `stat_date` date NOT NULL DEFAULT '-00-00',
 `milestone_type` char(1) NOT NULL DEFAULT '0',
 `milestone` double NOT NULL DEFAULT '0',
 `cpid` varchar(32) DEFAULT NULL,
 `team` int(11) DEFAULT NULL,
 PRIMARY KEY (`proj`,`id`,`stat_date`,`milestone`),
 KEY `two` (`proj`,`stat_date`,`id`,`milestone`),
 KEY `cpid` (`cpid`,`proj`,`id`,`stat_date`,`milestone`),
 KEY `team` (`proj`,`team`,`id`,`stat_date`,`milestone`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
   
These are added to on a daily basis as users pass the various credit
milestones so for instance you can end up with rows for
1000,5000,1,5,100 etc on different dates as time goes
 on.
   
Now on one page for display I want to show the latest milestone for
  each
project for a particular cpid. The query I use is as follows:
   
select a.proj,a.id,max(stat_date),max(a.milestone) as
milestone,b.description
from boinc_milestone a join boinc_projects b on a.proj =
  b.proj
where cpid = '$cpid'
group by proj
order by stat_date desc
   
The order by causes the filesort and I can't find an easy way around
  it.
   
mysql explain  select a.proj,a.id,max(stat_date),max(a.milestone)
 as
milestone,b.description from boinc_milestone a join boinc_projects b
  on
a.proj = b.proj where cpid = 'XXX' group by proj order by
 stat_date\G
*** 1. row ***
  id: 1
 select_type: SIMPLE
   table: a
type: ref
possible_keys: PRIMARY,two,cpid,team
 key: cpid
 key_len: 35
 ref: const
rows: 1
   Extra: Using where; Using index; Using temporary; Using
  filesort
*** 2. row ***
  id: 1
 select_type: SIMPLE
   table: b
type: eq_ref
possible_keys: PRIMARY
 key: PRIMARY
 key_len: 10
 ref: stats.a.proj
rows: 1
   Extra: Using where
2 rows in set (0.00 sec)
   
I could just remove the order by altogether and perform the sort in
  php
afterwards I guess but any other ideas?
   
Thanks
   
Phil
   
--
Distributed Computing stats
http://stats.free-dc.org
   
  
  
  
   --
- michael dykman
- mdyk...@gmail.com
  
May the Source be with you.
  
  
  
   --
   Distributed Computing stats
   http://stats.free-dc.org
  
 
 
 
  --
   - michael dykman
   - mdyk...@gmail.com
 
   May the Source be with you.
 



 --
 Distributed Computing stats
 http://stats.free-dc.org



Re: Trying to remove a filesort.

2010-09-09 Thread Ananda Kumar
It's not advisable, as this size can be allocated for every session and cause the
system to run out of memory.
It should be set at the session level; in my.cnf it should be around 2 MB.

Please correct me if I am wrong.

regards
anandkl

On Fri, Sep 10, 2010 at 1:26 AM, Phil freedc@gmail.com wrote:

 It's in my.cnf. There is 12Gb in the database server and I watch it fairly
 carefully and have not gone into swap yet in the past few years.

 On Thu, Sep 9, 2010 at 3:43 PM, Ananda Kumar anan...@gmail.com wrote:

  have u set sort_buffer_size at session level or in my.cnf.
  Setting high value in my.cnf, will cause mysql to run out off MEMORY and
  paging will happen
 
  regards
  anandkl
 
  On Fri, Sep 10, 2010 at 1:10 AM, Phil freedc@gmail.com wrote:
 
  Even prior to the group by it's still not likely to ever be more than
 200
  or
  so maximum.
 
  I have the sort_buffer_size at 256Mb so I don't believe it's that either
  :(
 
  On Thu, Sep 9, 2010 at 3:16 PM, Michael Dykman mdyk...@gmail.com
 wrote:
 
   How many rows before the GROUP BY?  Group by is, in effect a sorting
   process..  perhaps that contains enough data to justify going to disk.
  
   What is the value of the variable sort_buffer_size?
   show variables like '%sort%';
  
- md
  
   On Thu, Sep 9, 2010 at 3:04 PM, Phil freedc@gmail.com wrote:
On average it would be between 10 and 40, certainly no more than
 100.
   
   
On Thu, Sep 9, 2010 at 2:57 PM, Michael Dykman mdyk...@gmail.com
   wrote:
   
The filesort is probably necessary because of the number of rows in
the result set to be ordered.  How many rows do you get out of this
query?
   
 - michael dykman
   
On Thu, Sep 9, 2010 at 1:53 PM, Phil freedc@gmail.com wrote:
 I wonder if anyone could help with a query which I've been unable
  to
 prevent
 from using a filesort. Might be something obvious I'm
 overlooking!

 I have a table which tracks milestones in distributed computing
   projects

 Create Table: CREATE TABLE `boinc_milestone` (
  `proj` char(6) NOT NULL,
  `id` int(11) NOT NULL,
   `stat_date` date NOT NULL DEFAULT '0000-00-00',
  `milestone_type` char(1) NOT NULL DEFAULT '0',
  `milestone` double NOT NULL DEFAULT '0',
  `cpid` varchar(32) DEFAULT NULL,
  `team` int(11) DEFAULT NULL,
  PRIMARY KEY (`proj`,`id`,`stat_date`,`milestone`),
  KEY `two` (`proj`,`stat_date`,`id`,`milestone`),
  KEY `cpid` (`cpid`,`proj`,`id`,`stat_date`,`milestone`),
  KEY `team` (`proj`,`team`,`id`,`stat_date`,`milestone`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1

 These are added to on a daily basis as users pass the various
  credit
 milestones so for instance you can end up with rows for
 1000,5000,1,5,100 etc on different dates as time goes
  on.

 Now on one page for display I want to show the latest milestone
 for
   each
 project for a particular cpid. The query I use is as follows:

 select a.proj,a.id,max(stat_date),max(a.milestone) as
 milestone,b.description
 from boinc_milestone a join boinc_projects b on a.proj =
   b.proj
 where cpid = '$cpid'
 group by proj
 order by stat_date desc

 The order by causes the filesort and I can't find an easy way
  around
   it.

 mysql explain  select a.proj,a.id
 ,max(stat_date),max(a.milestone)
  as
 milestone,b.description from boinc_milestone a join
 boinc_projects
  b
   on
 a.proj = b.proj where cpid = 'XXX' group by proj order by
  stat_date\G
 *** 1. row ***
   id: 1
  select_type: SIMPLE
table: a
 type: ref
 possible_keys: PRIMARY,two,cpid,team
  key: cpid
  key_len: 35
  ref: const
 rows: 1
Extra: Using where; Using index; Using temporary; Using
   filesort
 *** 2. row ***
   id: 1
  select_type: SIMPLE
table: b
 type: eq_ref
 possible_keys: PRIMARY
  key: PRIMARY
  key_len: 10
  ref: stats.a.proj
 rows: 1
Extra: Using where
 2 rows in set (0.00 sec)

 I could just remove the order by altogether and perform the sort
 in
   php
 afterwards I guess but any other ideas?

 Thanks

 Phil

 --
 Distributed Computing stats
 http://stats.free-dc.org

   
   
   
--
 - michael dykman
 - mdyk...@gmail.com
   
 May the Source be with you.
   
   
   
--
Distributed Computing stats
http://stats.free-dc.org
   
  
  
  
   --
- michael dykman
- mdyk...@gmail.com
  
May the Source be with you.
  
 
 
 
  --
  Distributed Computing stats
  http://stats.free-dc.org
 
 
 


 --
  Distributed Computing stats
 http

Re: Query help

2010-09-06 Thread Ananda Kumar
Tompkins,
Which field stores the result of the matches?

regards
anandkl

On Mon, Sep 6, 2010 at 4:45 PM, Tompkins Neil
neil.tompk...@googlemail.comwrote:

 Hi,

 I've the following fields within a table :

 fixtures_results_id
 home_teams_id
 away_teams_id
 home_goals
 away_goals
 home_users_id
 away_users_id

 From this I want to extract the following type of information if
 the home_users_id or away_users_id = 1 :

 total number of games games
 number of games won
 number of games drawn
 number of games lost
 number of goals scored
 number of goals conceded
 biggest win
 biggest loss
 most goals in a game

 I'd appreciate any help with regards the query and whether or not it is
 possible to do ?

 Cheers
 Neil



Re: Query help

2010-09-06 Thread Ananda Kumar
Also, could you please let us know the values in this table?
Just one row as an example would do.

regards
anandkl



On Mon, Sep 6, 2010 at 5:35 PM, Tompkins Neil
neil.tompk...@googlemail.comwrote:

 These two fields

  home_goals and away_goals

 Cheers
 Neil


 On Mon, Sep 6, 2010 at 12:58 PM, Ananda Kumar anan...@gmail.com wrote:

 Tompkins,
 Which field stores the result of matches.

 regards
 anandkl

   On Mon, Sep 6, 2010 at 4:45 PM, Tompkins Neil 
 neil.tompk...@googlemail.com wrote:

 Hi,

 I've the following fields within a table :

 fixtures_results_id
 home_teams_id
 away_teams_id
 home_goals
 away_goals
 home_users_id
 away_users_id

 From this I want to extract the following type of information if
 the home_users_id or away_users_id = 1 :

 total number of games games
 number of games won
 number of games drawn
 number of games lost
 number of goals scored
 number of goals conceded
 biggest win
 biggest loss
 most goals in a game

 I'd appreciate any help with regards the query and whether or not it is
 possible to do ?

 Cheers
 Neil
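
Given that home_goals and away_goals hold the result, a hedged, untested sketch using
conditional aggregation (the table name fixtures_results is an assumption; only the
column names above are known), for users_id = 1:

  SELECT
    COUNT(*) AS games_played,
    SUM((home_users_id = 1 AND home_goals > away_goals)
     OR (away_users_id = 1 AND away_goals > home_goals))        AS games_won,
    SUM(home_goals = away_goals)                                AS games_drawn,
    SUM((home_users_id = 1 AND home_goals < away_goals)
     OR (away_users_id = 1 AND away_goals < home_goals))        AS games_lost,
    SUM(IF(home_users_id = 1, home_goals, away_goals))          AS goals_scored,
    SUM(IF(home_users_id = 1, away_goals, home_goals))          AS goals_conceded,
    MAX(IF(home_users_id = 1, home_goals - away_goals,
                              away_goals - home_goals))         AS biggest_win_margin,
    MIN(IF(home_users_id = 1, home_goals - away_goals,
                              away_goals - home_goals))         AS biggest_loss_margin,
    MAX(home_goals + away_goals)                                AS most_goals_in_a_game
  FROM fixtures_results
  WHERE home_users_id = 1 OR away_users_id = 1;

Boolean expressions evaluate to 0 or 1 in MySQL, which is what makes the SUM(...)
counters work; biggest_loss_margin comes back as a negative number (or as the
narrowest win if the user never lost).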






Re: MySQL upgrades?

2010-09-03 Thread Ananda Kumar
If you are planning to migrate to new hardware, then install the new version of
MySQL, take a dump of the current data, import it into your new machine and test your
application. If it all looks fine, then you are done.

regards
anandkl

On Fri, Sep 3, 2010 at 3:53 PM, Machiel Richards machiel.richa...@gmail.com
 wrote:

 Good day all

I am looking for someone that can maybe help me with some info
 regarding the upgrading of MySQL.

We have a MySQL database currently running with a master/slave
 replication setup.

 The current version of the database is as follows:

mysql select version();
+--+
| version()|
+--+
| 5.0.72sp1-enterprise-gpl-log |
+--+
1 row in set (0.00 sec)

This is currently running on RedHat.


   This database is however set to be migrated to new hardware later
 during this month and discussions have been in place in order to upgrade
 the Database to the latest version possible.

As it would seem, the latest enterprise (pro/advanced) version
 available are version 5.1.46sp1 and version 5.1.50 which includes the
 interim updates as well.


 I am trying to find someone who has had experience and can give
 me some pointers in the following regard as this will be my first
 database upgrade:

 1. which one of these two version will be the best to upgrade to.
 2.  where can I get information on how to upgrade the database
 itself
 3.  has anyone done this before? if so, is there anything that we
 should look out for?


I would really appreciate the help here.

 Regards
 Machiel



Re: Logs not working

2010-09-02 Thread Ananda Kumar
Did you check the logs on the db server to see what the issue was?

regards
anandkl
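
A quick sketch for checking what the server is (or is not) logging; the general_log
variables assume MySQL 5.1, on 5.0 look at the log and log-error settings in my.cnf
instead:

  SHOW GLOBAL VARIABLES LIKE 'log_error';
  SHOW GLOBAL VARIABLES LIKE 'general_log%';      -- on/off flag and target file
  SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';

A full disk, a log file rotated away from under the server, or missing write
permission for the mysql user are common reasons for logging to stop silently.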

On Thu, Sep 2, 2010 at 6:25 AM, monloi perez mlp_fol...@yahoo.com wrote:

 All,

 I'm not sure if this is the right mailing list since the specific mailing
 lists
 doesn't seem to meet my concern.

 For some reason mysql client stops logging to mysqd.log. We had an issue on
 Aug
 20. But when I checked the log, the latest was on Aug 2.
 Any idea on how to resolve this or what caused it?

 Thanks,
 Mon






Re: collation problems

2010-08-31 Thread Ananda Kumar
Can you please list the table structure? Collation can also be set at the
column level.

regards
anandkl

On Tue, Aug 31, 2010 at 6:00 PM, mysql my...@ayni.com wrote:

 Hi listers
 mysql server here is

 mysql-server-5.1.48-2.fc13.x86_64

 this morning i created a message with a literal string in chinese in it.
 the messages in the application i used are stored in a mysql database, when
 you submit them, like in a sent folder.
 With this chinese literal in it, i, however, got
 ERROR 1271 (HY000): Illegal mix of collations for operation 'concat'

 when i sent the message.

 without the chinese literal, the message was stored in the mysql db
 correctly.

 i, then, changed the connection names to utf8 and collation to utf8_bin and
 then the session parameters looked like:

 mysql set names utf8;
 Query OK, 0 rows affected (0.00 sec)

 mysql set collation_connection = utf8_bin;
 Query OK, 0 rows affected (0.00 sec)

 mysql show session variables like character_set%;
 +--++
 | Variable_name| Value  |
 +--++
 | character_set_client | utf8   |
 | character_set_connection | utf8   |
 | character_set_database   | latin1 |
 | character_set_filesystem | binary |
 | character_set_results| utf8   |
 | character_set_server | latin1 |
 | character_set_system | utf8   |
 | character_sets_dir   | /usr/share/mysql/charsets/ |
 +--++
 8 rows in set (0.00 sec)

 mysql show session variables like collation%;
 +--+---+
 | Variable_name| Value |
 +--+---+
 | collation_connection | utf8_bin  |
 | collation_database   | latin1_swedish_ci |
 | collation_server | latin1_swedish_ci |
 +--+---+
 3 rows in set (0.00 sec)

 mysql

 i thought, if you set the collation_connection to utf8_bin, you may send
 fairly everything to mysql.


 but still, when i tried to concat the new string (including the chinese
 characers) i got:

 mysql update suomi_contacts2 set history = concat(now(), ' ', ''
 ,'concerne: utf-8-bin collation for chinese charset',' ', 'Hoi
 Suomixer,\r\n\r\nIf you\'re careful enough, nothing bad or good will ever
 happen to you.\r\n 葛斯克 愛德華 /  台北市八德路四段\r\n\r\n\r\n \r\nsuomi\r\n', ' ',
 '--- ', history) where counter = 1127;
 ERROR 1271 (HY000): Illegal mix of collations for operation 'concat'



 Question: What did i miss? what do i have to change?

 Thanks in advance

 suomi





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: collation problems

2010-08-31 Thread Ananda Kumar
Can you please give the output of
desc suomi_contacts2;

2010/8/31 mysql my...@ayni.com

 Hi Ananda

 table structure is:

 mysql show full columns from suomi_contacts2;

 +--+--+---+--+-+---+-+-+-+
 | Field| Type | Collation | Null | Key |
 Default   | Extra   | Privileges
  | Comment |

 +--+--+---+--+-+---+-+-+-+
 | name | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | firm | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | title| text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | phone| text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | phone_std| text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | fax  | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | mail | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | standard_mail| text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | comment  | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | status   | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | url  | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | businesscategory | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | address  | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | addon| text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | givenname| text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | history  | longtext | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | favorit  | text | latin1_swedish_ci | YES  | |
 NULL  | |
 select,insert,update,references | |
 | last_update  | timestamp| NULL  | NO   | |
 CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
 select,insert,update,references | |
 | task_link| int(11)  | NULL  | YES  | |
 NULL  | |
 select,insert,update,references | |
 | counter  | int(10) unsigned | NULL  | NO   | PRI |
 NULL  | auto_increment  |
 select,insert,update,references | |

 +--+--+---+--+-+---+-+-+-+
 20 rows in set (0.00 sec)

 mysql

 suomi

 On 2010-08-31 14:52, Ananda Kumar wrote:
  can u please list out the table structure...as collation can also be set
  at column level
  regards
  anandkl
 
  On Tue, Aug 31, 2010 at 6:00 PM, mysql my...@ayni.com
   mailto:my...@ayni.com wrote:
 
  Hi listers
  mysql server here is
 
  mysql-server-5.1.48-2.fc13.x86_64
 
  this morning i created a message with a literal string in chinese in
  it. the messages in the application i used are stored in a mysql
  database, when you submit them, like in a sent folder.
  With this chinese literal in it, i, however, got
  ERROR 1271 (HY000): Illegal mix of collations for operation 'concat'
 
  when i sent the message.
 
  without

Re: collation problems

2010-08-31 Thread Ananda Kumar
Did you try changing the collation of the history column to utf8
and then running the update?
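
For example (a sketch only; converting stored data changes the column definition, so
take a backup first):

  ALTER TABLE suomi_contacts2
    MODIFY history LONGTEXT CHARACTER SET utf8 COLLATE utf8_bin;
  -- or convert every character column in the table in one go:
  ALTER TABLE suomi_contacts2 CONVERT TO CHARACTER SET utf8 COLLATE utf8_bin;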



2010/8/31 mysql my...@ayni.com

 On 2010-08-31 15:17, Ananda Kumar wrote:
  desc suomi_contacts2;

 mysql desc suomi_contacts2;

 +--+--+--+-+---+-+
 | Field| Type | Null | Key | Default   |
 Extra   |

 +--+--+--+-+---+-+
 | name | text | YES  | | NULL  |
|
 | firm | text | YES  | | NULL  |
|
 | title| text | YES  | | NULL  |
|
 | phone| text | YES  | | NULL  |
|
 | phone_std| text | YES  | | NULL  |
|
 | fax  | text | YES  | | NULL  |
|
 | mail | text | YES  | | NULL  |
|
 | standard_mail| text | YES  | | NULL  |
|
 | comment  | text | YES  | | NULL  |
|
 | status   | text | YES  | | NULL  |
|
 | url  | text | YES  | | NULL  |
|
 | businesscategory | text | YES  | | NULL  |
|
 | address  | text | YES  | | NULL  |
|
 | addon| text | YES  | | NULL  |
|
 | givenname| text | YES  | | NULL  |
|
 | history  | longtext | YES  | | NULL  |
|
 | favorit  | text | YES  | | NULL  |
|
 | last_update  | timestamp| NO   | | CURRENT_TIMESTAMP |
 on update CURRENT_TIMESTAMP |
 | task_link| int(11)  | YES  | | NULL  |
|
 | counter  | int(10) unsigned | NO   | PRI | NULL  |
 auto_increment  |

 +--+--+--+-+---+-+
 20 rows in set (0.00 sec)

 mysql

 --
  MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: master-slave replication sync problems.

2010-08-26 Thread Ananda Kumar
If you point the slave at a position that does not match where it actually stopped,
some of those changes may already be present on the slave, and it will error out with
duplicate-key errors.

What you need is:
show slave status\G
stop slave;
show slave status\G    -- wait a few minutes, then run it again
show slave status\G    -- just to make sure the slave is in complete sync with the master
Here you will see the master log file and position.
Use those values the next time you point your slave at the master.

regards
anandkl
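
As a sketch, the fields of SHOW SLAVE STATUS that carry the coordinates of the last
statement the slave actually executed are Relay_Master_Log_File and
Exec_Master_Log_Pos; those are the values to feed back into CHANGE MASTER TO (the
file name and position below are just the ones from the example in this thread):

  STOP SLAVE;
  SHOW SLAVE STATUS\G
  CHANGE MASTER TO
    MASTER_LOG_FILE='mysql-bin.10',    -- value of Relay_Master_Log_File
    MASTER_LOG_POS=13301215;           -- value of Exec_Master_Log_Pos
  START SLAVE;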

On Thu, Aug 26, 2010 at 6:34 PM, Norman Khine nor...@khine.net wrote:

 hello,
 i have a working master-slave replication, the problem i find is that
 if i restart the MASTER there is a difference in the MASTER_LOG_FILE
 and MASTER_LOG_POS on the SLAVE.

 what is the correct way to keep the two slaves in sync even after i
 restart the server. the way i do it now is to:

 [MASTER]
 mysql show master status;


 +--+--+--+--+
 | mysql-bin.10 | 13405429 | upgrade,tracker,bugs |
 mysql,information_schema |

 +--+--+--+--+
 1 row in set (0.00 sec)

 [SLAVE]
 mysql stop slave;
 mysql change master to MASTER_HOST='master.domain.com',
 MASTER_USER='replicator', MASTER_PASSWORD='xxx',
 MASTER_LOG_FILE='mysql-bin.10', MASTER_LOG_POS=13301215;
 mysql start slave;

 is this correct or is there a better way to do this?

 thanks

 --
 ˙uʍop ǝpısdn p,uɹnʇ pןɹoʍ ǝɥʇ ǝǝs noʎ 'ʇuǝɯɐן sǝɯıʇ ǝɥʇ puɐ 'ʇuǝʇuoɔ
 ǝq s,ʇǝן ʇǝʎ
 % .join( [ {'*':'@','^':'.'}.get(c,None) or
 chr(97+(ord(c)-83)%26) for c in ,adym,*)uzq^zqf ] )

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com



