Hmmm...
no more ideas or suggestions anybody? :(
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
I use rsync to set up the slave...
On Mon, Apr 21, 2008 at 12:30 AM, Jan Kirchhoff [EMAIL PROTECTED] wrote:
Eric Bergen wrote:
Hi Jan,
You have two separate issues here. First the issue with the link
between the external slave and the master. Running mysql through
something like stunnel may help with the connection and data loss
issues.
I wonder how any corruption could happen on a TCP
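Eric's stunnel suggestion could look roughly like this on the slave side; the service name, port numbers, and hostname below are placeholders I made up, not configuration confirmed anywhere in this thread:

```
; hypothetical stunnel.conf fragment on the slave (client) side;
; master.example.com and both ports are assumptions
[mysql-replication]
client = yes
accept = 127.0.0.1:3307
connect = master.example.com:3306
```

The slave's my.cnf would then point master-host at 127.0.0.1 and master-port at 3307, with a matching stunnel running in server mode in front of mysqld on the master.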
I have a setup with a master and a bunch of slaves in my LAN as well as
one external slave that is running on a Xen-Server on the internet.
All servers run Debian Linux and its mysql version 5.0.32
Binlogs are around 2 GB per day. I have no trouble at all with my local
slaves, but the external one
David Schneider-Joseph wrote:
Hi all,
I am attempting to convert a very large table (~23 million rows) from
MyISAM to InnoDB. If I do it in chunks of one million at a time, the
first million are very fast (approx. 3 minutes or so), and then it
gets progressively worse, until by the time I
David Schneider-Joseph wrote:
On Jan 29, 2008, at 6:09 PM, Jan Kirchhoff wrote:
what hardware are you running on and how much memory do you have?
what version of mysql?
How did you set innodb_buffer_pool_size?
Hardware:
Dual AMD Opteron 246 2.0 GHz
4 GB DDR RAM (no swap being used)
mos wrote:
I posted this message twice in the past 3 days, and it never gets on
the mailing list. Why?
Here it is again:
I have a Text field that contains paragraph text and for security
reasons I need to have it encrypted. If I do this, how can I still
implement full text search on it?
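A FULLTEXT index cannot see into ciphertext, so one workaround (my suggestion, not anything confirmed in this thread) is to keep a separate index of hashes of the plaintext words and search that for exact word matches. A toy shell sketch of the idea with made-up data; a real scheme would need per-field keys and salts, and you lose substring and relevance search:

```shell
# toy "blind index": hash every word of the plaintext once, at insert time
text="confidential paragraph about mergers"
for w in $text; do
    printf '%s' "$w" | sha256sum | cut -d' ' -f1
done > /tmp/blind_index.txt

# to search, hash the query word the same way and look it up
needle=$(printf '%s' "mergers" | sha256sum | cut -d' ' -f1)
grep -q "$needle" /tmp/blind_index.txt && echo "word found"
# prints "word found"
```

Only whole-word equality survives this transformation, which is exactly why encrypted columns and full-text search fit together so badly.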
Ratheesh K J wrote:
Hello all,
What are the best possible values in my.cnf for an 8-processor (quad-core, 2-CPU),
8 GB RAM machine dedicated to the MySQL server only? No other application will run
on this machine.
the innodb_buffer_pool_size cannot accept values above 2000 MB due to 32 bit
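The ~2000 MB ceiling is the 32-bit address-space limit of the process; a 64-bit build can give most of the 8 GB to the buffer pool. A hypothetical starting point for such a dedicated box; every value here is a guess to be tuned, not a recommendation from this thread:

```
# hypothetical my.cnf sketch for a dedicated 64-bit 8 GB box; all values are assumptions
[mysqld]
innodb_buffer_pool_size        = 5G
innodb_log_file_size           = 256M
innodb_flush_log_at_trx_commit = 1
key_buffer_size                = 256M
```

On a 32-bit build these numbers simply will not fit, which matches the error described above.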
Ratheesh K J wrote:
Thanks,
It helped me a lot. I wanted to know
1. what are the various scenarios where my replication setup can
fail? (considering even issues like network failure and server
reboot etc). What is the normal procedure to correct the failure
when
Ratheesh K J wrote:
Hello all,
I issued a create table statement on the master for a table which was not
present on the master but present on the slave.
I did this purposely to see the error on slave.
I am a newbie to replication. Now when I see the slave status on the slave machine
it shows
Ratheesh K J wrote:
Hello all,
yesterday we separated our app server and db server. We moved our 70GB of data from our app server to a new DB server. We installed MySQL 4.1.11 on the DB server.
Now the following happened. On the DB server the ibdata1 and all the databases
are the old ones
Kishore Jalleda wrote:
Hi, you may be having issues with the byte order on the Opterons and the
P4s; this was asked earlier in the list, and here's what Jimmy from MySQL
had to say
Kishore,
Thanks for the suggestion, but all x86 CPUs have the same byte order... and
as I wrote it's
to change/try on my new servers?!?
any ideas anybody?
thanks
Jan
MyISAM
tables
http://www.innodb.com/order.php
- Original Message -
From: Jan Kirchhoff [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Tuesday, January 31, 2006 1:09 PM
Subject: Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?
Hi,
I am currently
As I already wrote, I am trying to get replication running from a mysql-4.1.13
(32bit) master to a 5.0.18 (64bit) slave. It only runs for a few minutes and
then a query hangs.
I think I now found out why:
I converted a multi-table update that hung into a select. The same query on the absolutely
I thought I found the reason for my problems with the change in
join-behaviour in mysql 5, but I was wrong :( there is more trouble :(
my replication hangs with simple queries like insert into table
(a,b,c) values (1,2,3) on a myisam-table.
It just hangs forever with no cpu-load on the slave.
Comma-separated JOINs strike again!!!
[...]
Here is where you will find this change documented in the manual:
http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-4-1.html
I read that page over and over again... probably too late at night.
thanks for that info. Thanks to Peter, too.
kind of
locking - at least not table-locks! But there
is no such engine in mysql. If a cluster can handle that (although it
has the transaction overhead) it would probably be
perfect for us since it even adds high availability in a very easy way...
Jan
Jan Kirchhoff wrote:
sheeri kritzer wrote:
I've been trying to get my new mysql-5.0.18-servers running as slaves of our
production systems to check if all our applications work fine with mysql 5 and
to do some tests and tuning on the new servers.
The old servers are all P4s, 3GB RAM running debian-linux, 2.4-kernel and
official mysql
Hi,
Did anybody ever benchmark heap-tables against a cluster?
I have a table with 900,000 rows (40 fields, CHARs, INTs and DOUBLEs,
Avg_row_length=294) that gets around 600 updates/sec (grouped in about 12
extended inserts a minute inserting/updating 3000 rows each).
This is currently a
Hello,
I am just doing my first testing on a mysql-cluster system. Currently, I have 1
management node running and 2 data nodes that also run a mysqld each.
The servers are Dual-Opterons with 6GB of RAM each.
I did a dump of a database of one of our production systems (about 1.5GB
sheeri kritzer wrote:
Why are you using a heap table?
We started out with a myisam-table years ago when the table was much
smaller and less frequently updated. We tried innodb about 2 or 3 years
ago and couldn't get a satisfying result. We then changed it to HEAP and
everything was fine.
sheeri kritzer wrote:
No problem:
Firstly, how are you measuring your updates on a single table? I took
a few binary logs, grepped for statements that changed the table,
counted the lines (using wc), and then divided by the # of seconds
the binary logs covered. The average for one table
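Sheeri's counting method can be scripted in a few lines. Shown here against a fabricated sample file; in real use the input would come from decoding the binlogs with mysqlbinlog, and the table name and time span below are made up for illustration:

```shell
# fabricated sample of decoded binlog statements (NOT real mysqlbinlog output)
cat > /tmp/binlog_sample.sql <<'EOF'
UPDATE ticks SET bid=1.1 WHERE id=1
INSERT INTO other_table VALUES (1)
UPDATE ticks SET bid=1.2 WHERE id=2
UPDATE ticks SET ask=1.3 WHERE id=3
EOF

updates=$(grep -c 'ticks' /tmp/binlog_sample.sql)   # statements touching the table
seconds=2                                           # period the logs covered (assumed)
echo "$((updates / seconds)) updates/sec on average"
# prints "1 updates/sec on average"
```

Integer division is good enough for a rough rate; for sub-1/sec tables you would multiply before dividing.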
Hi Brent,
Wow, it seems like you are going to extremes. To jump from myisam to
heap is a big step. Did you try using InnoDB? It would handle locking
issues much better since it doesn't lock the whole table. Heap tables can
be pretty dangerous since they're entirely in memory. If the machine crashes,
Hi,
I am currently using a replication setup on two servers with mysql
4.1.13-standard-log (master/slave each a P4 2.4ghz, 3GB RAM, Hardware
SCSI-RAID).
I have a table that has lots of updates and selects. We converted this table
(along with other tables) from a myisam to a heap-table 6 months
to upgrade to the latest release (4.1.9 now).
Jan Kirchhoff [EMAIL PROTECTED] wrote:
Gleb Paharenko wrote:
Hello.
I've looked through the bug database, and the only thing
that I've found was an already-closed bug:
http
to make sure the tables are OK...
Jan Kirchhoff [EMAIL PROTECTED] wrote:
Hi,
My problem still goes on... After having had the problem 2 more times
within 1 day, I decided to re-do the replication (copy the whole
database onto the slave with rsync and reset master and slave). That
only lasted for little more than 1 day and I ended up with the same error:
Could not
Misao wrote:
Our production databases here are really growing and getting to be rather
big. The question on our minds is: when is a database or table just too big?
We have a few 20-30 GB InnoDB tables (growing) without any problems
(mysql 4.1.5gamma).
The limits of mysql are somewhere in the
Hello,
I have a replication setup on two Linux boxes (debian woody, kernel 2.4.21-xfs,
mysql 4.1.7-standard official intel-compiler binary from mysql.com).
master:~# mysqladmin status
Uptime: 464848 Threads: 10 Questions: 296385136 Slow queries: 1752 Opens:
2629 Flush tables: 1 Open
Harrison, thanks for your mail,
I think mysql uses way too much memory (overhead) to store my data.
How much overhead do you think it is using? Each row is 61 bytes in
geldbrief, which is *exactly* the amount needed for the datatypes you
have.
[...]
Now if you take 61 * 2449755 (number of rows)
I was just wondering if anybody has been using very large HEAP-tables
and if there are ways to have mysql use the memory more efficiently:
(I have no experience with all heap-tables but using them as temporary
tables...)
I just started testing with 2 heap-tables on a development-system (p4
Philippe Poelvoorde wrote:
Maybe you should try to normalize your table;
'symbol' could have its own table, that would reduce data and index size.
And then try to reduce the size of your rows; bidsize and asksize
should be integers I think. Maybe 'float' would be enough.
What represents the 'quelle'
Philippe Poelvoorde wrote:
Hi,
I changed a few columns, bidsize and asksize are integers now, and I
changed ticknumber to smallint unsigned.
At first I used the ticknumbers from the feedserver; now I count up to
65,000 and then reset the counter back to 0. I need that additional
column to handle
That sounds like a typical mod_perl problem. The script is making new
connections and doesn't close the old ones.
You should add debug code to your script and add
* * * * * root mysql -e 'show processlist' > /tmp/mysql_processlist_debug_`date +\%s`.txt
to your /etc/crontab in order to log the
Jim wrote:
Hi. I'm wondering if anyone can help me tune this database so it runs
better on my hardware. I've made some attempts, but either they've
made it worse or not changed anything. Changing the database design
itself has shown the most improvement, but I'd still like to know how
to
Jocelyn Fournier wrote:
Hi,
A quick fix would be to set the wait_timeout variable in the my.cnf to a
much smaller value than 28800 (default value).
Try to add wait_timeout=60 in the my.cnf for example, the connections should
be automatically closed after 60 seconds if they are not used anymore.
put the following option in your my.cnf on the slave in order to ignore
errors. Just use the error-numbers you'd like to ignore:
slave-skip-errors=1053
Jan
Jim Nachlin wrote:
Is there any way within mysql to have the slaves not stop replicating
on an error? For some reason, my application is
http://dev.mysql.com/doc/mysql/en/Replication_Options.html
We started using replication over the Internet in 2001 using
SSH-Tunnels (SSH-Port-Forwarding) which works fine, too. We haven't had
any problems.
regards
Jan Kirchhoff
David Griffiths wrote:
We just put a new dual-Opteron server into our production environment.
We ordered a Megaraid SCSI card and five 10k drives, and a 3Ware
Escalade SATA card with six 7200 RPM drives (Maxtor) to see which ones
were best.
Our network guy did a bunch of benchmarking on the
[EMAIL PROTECTED] wrote:
Problem: Spam Abuse
IP of offender: 66.50.xxX.245
Date of offense: 2004-07-05
Time of offense: 16:15
Now if I query the database based on date and ip address, I get the
following:
Id | Date | Time | Record Type | Full Name | IP
Egor Egorov wrote:
Money is not really an issue but of course we don't want to waste it for
scsi-hardware if we can reach almost the same speed with hardware
sata-raids.
'Almost' is the key word. Some SCSI disks run at 15k RPM, which will give
you a HUGE MySQL performance boost
Hi,
We are currently using a 4.0.16-replication-setup (debian-linux, kernel
2.4.21, xfs) of two 2.4ghz Intel-Pentium4 systems with 3gig RAM each
and SCSI-Hardware-Raid, connected via gigabit-ethernet. We are reaching
the limit of those systems and are going to buy new hardware as well as
My replication crashed just once again...
my my.cnf on the slave contains:
master-host = 123.123.123.123
master-user = rep
master-password = hidden
replicate-do-db = db1
server-id = 6
replicate-ignore-table=db1.specials
I created a new DB on the master called "specials".
My Replication-Slave crashed a few days ago with an error in the error-log
saying something like "duplicate primary key processing query "INSERT INTO
testtable (time,name,number) VALUES (NOW(),'Hello',10)"
testtable's primary key was (time,name)
time: datetime
name: varchar
number: int
What
* It'd be handy to create a compressed tar file (.tar.gz). I'll
probably add that.
great ;) but I'll transfer it compressed with scp, so it's no big problem for
me. But a "-z" switch would probably be useful for lots of people.
* It'd be nice to specify which databases/tables
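Until the tool grows a -z switch, the same effect can be had by streaming a compressed tar through ssh. Sketched here between two local scratch directories so it runs standalone; the ssh form is commented out because the hostname would only be a placeholder:

```shell
mkdir -p /tmp/dump_src /tmp/dump_dst
echo "dump contents" > /tmp/dump_src/backup.sql

# over the network this would be (user@otherhost is a placeholder):
#   tar czf - -C /tmp/dump_src . | ssh user@otherhost 'tar xzf - -C /dest'
# locally, the same pipeline without the ssh hop:
tar czf - -C /tmp/dump_src . | tar xzf - -C /tmp/dump_dst
```

Compression happens on the fly on the sending side, so nothing uncompressed ever crosses the wire and no temporary archive file is needed.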
one of my tables has ~10 million rows (but just 4 columns:
int,double,double,date (date,int as primary key)) and when I started a
replication on that database it crashed within 48 hours without any messages
in the error-log. The mysql-server stays up, just the replication dies. I
don't have