This just seems too slow

2011-01-02 Thread Jerry Schwartz
I'm trying to load data into a simple table, and it is taking many hours (and 
still not done). I know hardware, etc., can have a big effect, but NOTHING 
should have this big an effect.

=
us-gii show create table t_dmu_history\G
*** 1. row ***
   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=

Here's a snip of what the input file looks like:
=
SET autocommit=1;

#
# Dumping data for table 'T_DMU_History'
#

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299527);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299528);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299529);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299531);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299532);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299533);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299534);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299535);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298880);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298881);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298882);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298883);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298884);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298885);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298886);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298887);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298889);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298890);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298891);
=

There are about 87 records.

I realize that using one INSERT per row is going to hurt, but I don't control 
the format of the incoming data.

Besides, I'd have thought this would be pretty quick regardless of how clumsy 
the method was.

Is that autocommit a problem? This is a bulk load into an empty table, so 
I'm not worried about ACID.
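
One thing I suppose I could try is turning autocommit off and committing once
at the end, something like this (untested on my end):

=
SET autocommit=0;
-- ... all of the INSERT statements ...
COMMIT;
=

Would that make a real difference with InnoDB?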

Any suggestions?





Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com








where clause

2011-01-02 Thread Elim PDT

I have a table with a column pv (INT).
I want to fetch all the rows r such that (r.pv)+1 is the pv-value of another 
row.


What would the SQL look like? Thanks.
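
My guess is a self-join along these lines, with t standing in for the table
name, but I'm not sure it's right:

=
SELECT DISTINCT a.*
FROM t AS a
JOIN t AS b ON b.pv = a.pv + 1;
=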






Re: This just seems too slow

2011-01-02 Thread mos

Jerry,
  Use LOAD DATA INFILE when loading a lot of data. Whoever is giving
you the data should be able to dump it to a CSV file. Your imports will be
much faster.
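
Something along these lines, assuming the data lands in /tmp/t_dmu_history.csv
as comma-separated values (adjust the field and line delimiters to match the
actual file):

=
LOAD DATA INFILE '/tmp/t_dmu_history.csv'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(DM_History_DM_ID, DM_History_Customer_ID);
=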


Mike

At 07:51 PM 1/2/2011, you wrote:

[snip]






RE: This just seems too slow

2011-01-02 Thread Daevid Vincent
Another option would be to mangle your INSERT statements with some other
language like PHP, Python, Ruby, etc. so that they become multi-row inserts
instead. Something like:

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519),
       (13071, 299520),
       (13071, 299521),
       ...;

That will radically speed up the inserts.

Also, drop your INDEX / KEYs and add them back at the very end instead.
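
For the table above, that would be something along these lines (key names
taken from the SHOW CREATE TABLE output; untested):

=
ALTER TABLE t_dmu_history
  DROP KEY `DM_History_DM_ID`,
  DROP KEY `DM_History_Customer_ID`;

-- ... run the bulk load ...

ALTER TABLE t_dmu_history
  ADD KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  ADD KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`);
=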

-Original Message-
From: mos [mailto:mo...@fastmail.fm] 
Sent: Sunday, January 02, 2011 8:42 PM
To: mysql@lists.mysql.com
Subject: Re: This just seems too slow

Jerry,
   Use Load Data Infile when loading a lot of data. Whoever is giving 
you the data should be able to dump it to a CSV file. Your imports will be 
much faster.

Mike

At 07:51 PM 1/2/2011, you wrote:
[snip]



Issue while SymLinking a Database

2011-01-02 Thread Adarsh Sharma

Dear all,

I am working on a stable solution for resolving a space issue with the data
directory of our MyISAM tables. We have a 70 GB table in /hdd-1/mysql_data,
and only 10 GB of free space is left on that disk. The table is expected to
grow to about 150 GB.


I have some doubts regarding symlinking a database. The steps involved are:


1. Shut down the server.
2. Create a new directory and move your database to the new drive.
3. Symlink the database in the original directory and change permissions.

Moving 70 GB of data to a new location takes a lot of time.

That isn't really the solution I'm looking for.

Is there a solution that requires no server shutdown, leaves client query
operations unaffected, and simply puts new data onto another partition?


Or is partitioning the only rescue?

Please help me to find a stable and standardized solution.



Thanks & Regards

Adarsh Sharma




Re: Issue while SymLinking a Database

2011-01-02 Thread Suresh Kuna
Hey Adarsh,

If you can't afford downtime, the only way is to take a WRITE lock on the
table, move it to the new location, create the symlink, and flush the table
(rough sketch below).

Remember, these symlinks will cause problems if you run any maintenance
operations on the moved tables.

A better idea: since the tables are MyISAM, move one database directory at a
time by locking all of its tables, then create a symlink for the database
folder.
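
A rough sketch of the per-table sequence, with big_table and the paths as
placeholders (the mv/ln steps happen at the shell, outside mysql):

=
LOCK TABLES big_table WRITE;
FLUSH TABLES big_table;  -- sync and close the .MYD/.MYI file handles
-- shell: mv /hdd-1/mysql_data/mydb/big_table.MYD /hdd-2/mysql_data/
-- shell: mv /hdd-1/mysql_data/mydb/big_table.MYI /hdd-2/mysql_data/
-- shell: ln -s /hdd-2/mysql_data/big_table.MYD /hdd-1/mysql_data/mydb/
-- shell: ln -s /hdd-2/mysql_data/big_table.MYI /hdd-1/mysql_data/mydb/
FLUSH TABLES big_table;  -- reopen the table through the symlinks
UNLOCK TABLES;
=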

On Mon, Jan 3, 2011 at 10:56 AM, Adarsh Sharma adarsh.sha...@orkash.com wrote:

 [snip]




-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Loading data takes a long time after replication

2011-01-02 Thread Sairam Krishnamurthy

Hi all,

I set up replication some days back. Because of some internal issues we
canceled the replication and reset the master to be a normal database
server. But after this revert, the database is unusually slow.

While replication was on, it was slow for the reasons discussed below. But
I wonder why it is still slow now that the master has been reset. By reset
I mean commenting out the master setup lines in the my.cnf file, removing
the logs, and restarting the database.


Thanks in advance

Thanks,
Sairam Krishnamurthy
+1 612 859 8161


On 12/06/2010 04:47 AM, Ananda Kumar wrote:

Also, make sure your /tmp folder is on a separate and fast disk. We had
similar issues; we moved the /tmp folder from local disk to SAN storage and
it was quite fast afterwards.


regards
anandkl

On Mon, Dec 6, 2010 at 4:10 PM, Johan De Meersman vegiv...@tuxera.be wrote:


Are you saying that mass inserts go much slower now that you've set up
replication? In that case, I suspect you have your binlogs on the same disk
as your data.

Put the binary logs on separate disks, and you'll notice a dramatic
increase in performance.


On Mon, Dec 6, 2010 at 7:17 AM, Sairam Krishnamurthy kmsram...@gmail.com wrote:

 All,

 I have a situation. We have a very large database (gigabytes). When we load
 the data into the table, it will be huge again (20 million rows). So in
 order to get a good backup we set up replication (master-slave setup). But
 now, because of this setup, loading into the table takes hours, when it
 would otherwise be done in a few minutes.

 Is there a workaround for this? We really don't want to wait hours for
 the table to be loaded. Should we abandon replication for this? Any
 pointers?

 --
 Thanks,
 Sairam Krishnamurthy
 +1 612 859 8161




--
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon an ass




Re: Loading data takes a long time after replication

2011-01-02 Thread Ananda Kumar
What is the output of SHOW FULL PROCESSLIST?
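
It might also be worth checking whether binary logging is actually off after
the restart, e.g.:

=
SHOW VARIABLES LIKE 'log_bin';
=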

On Mon, Jan 3, 2011 at 1:10 PM, Sairam Krishnamurthy kmsram...@gmail.com wrote:

 [snip]





Re: Loading data takes a long time after replication

2011-01-02 Thread Sairam Krishnamurthy

Apparently nothing. Just the SHOW FULL PROCESSLIST query itself:

 70100 | user | localhost | database | Query | 0 | NULL | show full processlist


Thanks,
Sairam Krishnamurthy
+1 612 859 8161


On 01/03/2011 01:51 AM, Ananda Kumar wrote:

What is the output of SHOW FULL PROCESSLIST?

[snip]