Yup, I'm doing clean tests, shutting down and reloading mysql for each test.
The RAID setups are similar: the faster is RAID 1 with 10k hard disks, the
slower is RAID 10 with 15k.
Metrics show:
Old RAID
Sequential write 1G: 533 MB/s (using dd if=/dev/zero of=1G bs=1024
count=102400)
Sequential read 1G: 500 MB/s
New
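As a side note, bs=1024 with count=102400 writes 100 MiB, not 1 GiB, and the test goes through the page cache. A sketch of a write/read test that covers a full 1 GiB and bypasses the cache (oflag=direct is my addition, not part of the original command, and needs filesystem support for O_DIRECT):

```shell
# Sequentially write a true 1 GiB, bypassing the page cache.
dd if=/dev/zero of=1G bs=1M count=1024 oflag=direct
# Sequential read of the same file.
dd if=1G of=/dev/null bs=1M
rm -f 1G
```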
Hi,
When I do a netstat and grep for port 3306, I can see lots of TIME_WAIT. Can
you please advise what might be the issue for so many TIME_WAIT entries?
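A quick way to quantify this (netstat syntax from net-tools is assumed; on newer systems `ss -tan` prints the same columns):

```shell
# Tally TCP connection states where either endpoint is port 3306.
netstat -ant 2>/dev/null | awk '$4 ~ /:3306$/ || $5 ~ /:3306$/ {s[$6]++} END {for (k in s) print k, s[k]}'
```

A large TIME_WAIT count usually just means many short-lived connections are being opened and closed; persistent connections or pooling on the application side reduces it.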
Here are the logs and related files:
My.CNF on DB Server
##
##
[root@sql mysql]# cat /etc/my.cnf
[mysqld]
safe-show-dat
On Sun, Feb 13, 2011 at 11:40 PM, Andrés Tello wrote:
> I have a test process, which runs in the "old server" in 35 seconds, the new
> server runs the same process in 110.
>
> There is a change of version from mysql 4.1.22 to 5.1.22.
> We were stuck at 5.1.22 because higher versions give us anothe
rebird
and MySQL.
This release fixes a possible crash and some other issues.
For more information, see http://www.upscene.com/go/?go=news&id=20110216
For a full list of fixes in version 4.1.1 and 4.1.0, see:
http://www.upscene.com/go/?go=tracker&v=4.1.1&id=1
http://www.upscene.com/go/
Hi All
I am trying to set up replication between 2 MySQL servers; however,
when running the command below on the slave machine, I get the error
shown below.
CHANGE MASTER TO MASTER_HOST='', MASTER_USER='repladmin',
MASTER_PASSWORD='', MASTER_LOG_FILE='mysql-bin.000620',
MASTER_LOG_POS=7131
Dear all,
Today I am puzzled by a problem of inserting data into a new table in a
new format. I have a table named *user_news* as follows:
We have four rows with respect to each record_id.
For e.g. I have listed the main columns as
*record_id field_name field_value*
572SOI
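Since the column listing got cut off, here is only a generic sketch of how rows in an EAV-style table like this (one field_name/field_value pair per row, several rows per record_id) are usually pivoted back into one row per record; the field names 'title' and 'body' are made-up placeholders, not columns from the actual table:

```sql
SELECT record_id,
       MAX(CASE WHEN field_name = 'title' THEN field_value END) AS title,
       MAX(CASE WHEN field_name = 'body'  THEN field_value END) AS body
FROM user_news
GROUP BY record_id;
```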
Thank you for the information and script. I will try it out tonight when
traffic stops.
Thanks,
Carl
- Original Message -
From: "Reindl Harald"
To: "Carl"
Cc:
Sent: Wednesday, February 16, 2011 7:47 AM
Subject: Re: Replication issue
--
MySQL General Mailing List
For list archi
Am 16.02.2011 13:39, schrieb Carl:
> I was describing how long it takes to do a mysqldump, move the data,
> load the data in the slave and then restart the slave.
I would never do this with dumps because
* text-files -> *br*
* size
* overhead
What is really important is that you stop the slave
It won't hurt anything to change the max_allowed_packet size, so I will
increase it (it is set to 80MB; I will double it and see what happens in the
future.)
Does anyone know if there are issues replicating blobs (I read yesterday that
these sometimes cause problems)? I am just trying to see if
I was describing how long it takes to do a mysqldump, move the data, load
the data in the slave and then restart the slave. I have never used the
rsync process... I will try it out in the middle of the night when I
have time to recover from a screwup. Who says systems people need sleep!
I can not believe that this would take 24 hours
since rsync with compression is very efficient
and on the other hand - who cares, the master
is not down if you do this in the order i described
Am 16.02.2011 12:57, schrieb Carl:
> are you saying to restart the slave in question from a good copy of the
> master that I know to be good?
Reindl Harald replied:
yes!
there is a reason why the slave stops working and in my opinion
the only safe way to get a 100% clean slave is to clone it again
from the stopped master
Carl:
I was
On Wed, Feb 16, 2011 at 12:23 PM, Carl wrote:
> 110216 5:15:20 [ERROR] Error reading packet from server: log event entry
> exceeded max_allowed_packet; Increase
> max_allowed_packet on master ( server_errno=1236)
>
This seems to be the major player here. I would make sure to increase the
setti
On Wed, Feb 16, 2011 at 10:23 AM, Machiel Richards wrote:
> Due to differences within the 2 versions, we had to exclude the
> mysql database from the backup and restore.
>
Yep :-)
> When setting up the replication, should we still
> exclude the mysql database from the
Mostly correct - save for pointer sizes and such, but it's pretty hard to
reach those.
SQL vs NoSQL is not a matter of data size - plenty of fud is being spread
about NoSQL, for some reason - but a matter of access patterns.
Without knowing what you need and how you design, that question can't be
Am 16.02.2011 12:36, schrieb Carl:
> are you saying to restart the slave in question from a good copy of the
> master that I know to be good?
yes!
there is a reason why the slave stops working and in my opinion
the only safe way to get a 100% clean slave is to clone it again
from the stopped master
Am 16.02.2011 12:33, schrieb Carl:
> The max_allowed_packet setting is the same on both.
the question is how large the setting is
we have 200M on all machines
> I have tried restarting the slave... didn't work
after replication errors you should, every time:
* stop the slave
* "hot" rsync the data
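The steps above, sketched as shell; the host name and datadir path are assumptions to adjust for your setup, and both servers are stopped for a clean copy as described elsewhere in the thread:

```shell
# Re-clone the slave's datadir from the stopped master, then restart mysqld.
mysqladmin --user=root --password shutdown                 # stop slave mysqld
ssh master 'mysqladmin --user=root --password shutdown'    # stop master mysqld
rsync -az --delete master:/var/lib/mysql/ /var/lib/mysql/  # copy datadir
```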
I am not quite certain I understand your suggestion:
Forget workarounds to solve replication errors
and re-init your replication if you want to be sure
it is really consistent
When you say re-init the replication, are you saying to restart the slave in
question from a good copy of the master that I
The max_allowed_packet setting is the same on both.
I have tried restarting the slave... didn't work. I can bounce the master.
Thanks,
Carl
- Original Message -
From: "Elizabeth Mattijsen"
To: "Carl"
Cc:
Sent: Wednesday, February 16, 2011 6:23 AM
Subject: Re: Replication issue
First make sure that the "max_allowed_packet" setting is the same on both
masters.
Make sure that setting is active on the slave in question. Then start
replication or bounce the master (not sure which I did to fix this the last
time I ran into this).
Elizabeth Mattijsen
One more bit of information... this is from the error log:
110215 8:19:32 [ERROR] Error reading relay log event: slave SQL thread aborted
because of I/O error
110215 8:19:32 [ERROR] Slave SQL: Relay log read failure: Could not parse
relay log event entry. The possible reasons are: the master'
Got fatal error 1236 from master when reading data from binary log:
'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on
master
So do this in your my.cnf :-)
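A sketch of that my.cnf change; 200M is the value Reindl mentions using on his machines, and the setting should match on master and slave (restart mysqld, or also apply it at runtime with SET GLOBAL max_allowed_packet):

```ini
[mysqld]
max_allowed_packet = 200M
```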
Forget workarounds to solve replication errors
and re-init your replication if you want to be sure
it is really consis
Run the change master again to get the relay logs from master server again.
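Sketched with the log file and position quoted earlier in the thread; treat them as placeholders and take the real values from SHOW MASTER STATUS on the master:

```sql
STOP SLAVE;
RESET SLAVE;          -- discards the unreadable relay logs
CHANGE MASTER TO
    MASTER_LOG_FILE='mysql-bin.000620',
    MASTER_LOG_POS=7131;
START SLAVE;
SHOW SLAVE STATUS\G
```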
On Wed, Feb 16, 2011 at 4:50 PM, Carl wrote:
> I am running master - master replication between two locations using MySQL
> version 5.1.41 on Slackware Linux 13 (64bit).
>
> The problem from show slave status is:
>
>
I am running master - master replication between two locations using MySQL
version 5.1.41 on Slackware Linux 13 (64bit).
The problem from show slave status is:
Last_Error: Relay log read failure: Could not parse relay
log event entry. The possible reasons are: the master's bi
Hi All
as per my mail yesterday we are busy migrating and upgrading a
current MySQL database this weekend.
All tests on the new system have been done, etc...
However, in order to minimize downtime, it was decided to set up
the new "master" server (MySQL 5.1) to be a slave t
there are no hard limits as long as your hardware is fast enough
* memory, memory and again: memory
* disk-speed
* cpu
Am 16.02.2011 06:04, schrieb Adarsh Sharma:
> Dear all,
>
> I want to know the upper limit of mysql after which Mysql-5.* fails to
> handle large amount of data ( 100's of GB
> o