application I guess :) Second, you
could create the table with generous MAX_ROWS and AVG_ROW_LENGTH values to tell
MySQL to use a bigger internal row pointer. I don't know the real performance impact
that will have, but at least you won't be limited to 4 GB of data anymore!
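For reference, the MAX_ROWS / AVG_ROW_LENGTH trick can be sketched like this (table and column names are hypothetical; the values only tell MySQL how large the row pointers need to be, they are not hard caps):

```sql
-- Hypothetical MyISAM table sized for more than 4 GB of data:
CREATE TABLE big_log (
  id      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload TEXT
) ENGINE=MyISAM MAX_ROWS=1000000000 AVG_ROW_LENGTH=200;

-- Or raise the limit on an existing table (this rebuilds the table):
ALTER TABLE big_log MAX_ROWS=1000000000 AVG_ROW_LENGTH=200;
```

You can check the resulting limit afterwards with SHOW TABLE STATUS (the Max_data_length column).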
Regards,
--
Mathieu Bruneau
aka ROunofF
===
GPG keys available @ http://rounoff.darktech.org
have no experience. The
documentation should provide the information needed however:
http://dev.mysql.com/doc/refman/5.0/en/instance-manager-command-options.html
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
the advantage of allowing you to store
more than two types of address per user and to keep all the addresses in
the same table, instead of having two tables that store more or less the
same kind of information :)
these settings until I could upgrade to a 64-bit host.
it. I don't know whether Falcon uses a file handle per table or not, though...
Mariella Petrini wrote:
I have re-run the job and I was able to repeat the
problem.
Find attached the MySQL server error log with all the
traces
Are you running on a 32-bit architecture? I have seen cases where 1.7G
is way too much for the total address space MySQL is allowed to use, especially
if you have the InnoDB buffer pool set bigger than its default value (8 MB). Also,
you should check that the user is really allowed that much (ulimit).
# The ... of user-level memory per process, so do not
# set it too high.
innodb_buffer_pool_size=2G
Justin wrote:
Ok.. Straight to the point.. Here is what I currently have.
MySQL Ver 14.12 Distrib 5.0.27
RHEL
super privilege and the replication thread.
Depending on your exact version, setting it is something along the lines of
SET GLOBAL read_only = 1;
You can also remove it in case you need to.
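As a sketch of the statements involved (exact behavior depends on the server version, as noted above):

```sql
-- Make the server read-only for ordinary users; accounts with the SUPER
-- privilege and the replication SQL thread can still write.
SET GLOBAL read_only = 1;

-- Remove it again when you need writes:
SET GLOBAL read_only = 0;
```

To make the setting survive a restart, also put read_only in the [mysqld] section of my.cnf.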
files and not
reading them? Or, how else can I interpret this?
-- Cos
The binlogs are most probably what's creating most of your constant writes. If
you have no slave attached, you're not reading them at all...
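If the binlogs really have no consumer, a my.cnf sketch along these lines either stops writing them or ages them out automatically (the 7-day value is just an example):

```ini
[mysqld]
# Option A: comment out log-bin to stop binary logging entirely
# log-bin = mysql-bin

# Option B: keep binary logging but purge logs older than N days
expire_logs_days = 7
```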
in '%'. It's a
feature, not a bug ;)
you don't even have the full
4 GB of RAM you could technically address.
A 64-bit OS would increase this limit to 64 GB++ (on 64-bit hardware).
Good luck!
was reporting Lost connection to MySQL
server during query.
We adjusted the persistent-connection timeout to 45 minutes and the
problem went away.
This doesn't have anything to do with the version, but that was with 4.1 :)
if the IO_THREAD
parses enough of the event to read the timestamp in it.
Hope that helps!
)
Best of luck!
And simply, for each of the other binlogs:
mysqlbinlog other_binlog | mysql
Or, a different way that someone recommended is to set up local
replication on itself, which I'll leave as an exercise :)
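The replay step above can be sketched as a small script. This is a hedged version: the target host and binlog file names are hypothetical, and a DRY_RUN switch makes it only print the commands instead of running them:

```shell
#!/bin/sh
# Sketch: replay each remaining binlog into a server through the mysql
# client. Host name and binlog names are hypothetical examples.
DRY_RUN=1
for LOG in mysql-bin.000002 mysql-bin.000003; do
  CMD="mysqlbinlog $LOG | mysql -h target-host"
  if [ "$DRY_RUN" = 1 ]; then
    echo "$CMD"     # dry run: show what would be executed
  else
    eval "$CMD"     # real run: pipe the decoded binlog into mysql
  fi
done
```

Set DRY_RUN=0 only once the printed commands look right for your setup.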
Hope that helps!
Ofer Inbar wrote:
Mathieu Bruneau [EMAIL PROTECTED] wrote:
Ofer Inbar wrote:
I can repeat the problem with this procedure on the test db:
- Import a full mysqldump file from the production db
- flush logs
- run a full mysqldump with --flush-logs --master-data=2
- do a bunch
?
As a workaround, I always play it safe and run FLUSH PRIVILEGES
after changing privileges!
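A minimal sketch of the workaround (account and host names are hypothetical). The explicit reload matters most when the grant tables were changed directly with UPDATE/INSERT rather than with GRANT:

```sql
-- Direct edits to the grant tables are not picked up until reloaded:
UPDATE mysql.user SET Host = '%' WHERE User = 'reporter';
FLUSH PRIVILEGES;
```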
and you wish to increase those settings, I
advise considering a 64-bit architecture as the next upgrade :) It should
really help when the DB is starting to get tight on a 32-bit one!
Good luck!
(growth)
commands on the
fly? Shouldn't they be executed just as the insert/update commands?
Regards
Bgs
I never experienced such issues... (unless someone killed the ALTER
TABLE). What error do you see in the replication exactly?
(Also, which version of MySQL are you using?)
such type of mysqlbackup script?
It doesn't put the structure in a separate file (yet), but you may want
to have a look at ZRM (Zmanda Recovery Manager) for mysql here
http://www.zmanda.com/backup-mysql.html
This is the most complete free solution that I know of!
(but it will give a bigger dump size)
Also, about using mysqldump 5.0 on a MySQL 4.1 server... hmmm, not
sure which side effects that may have! I usually use the version
that comes with the server...
Adrian Bruce
between
the DBs, but it allows tuning parameters for each, and allows more memory to
be used than the 3-4 GB per-process limit on a 32-bit architecture.)
mysqld_multi allows easy management of that kind of setup!
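A my.cnf sketch for that kind of setup (ports, sockets, and paths are examples only):

```ini
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port    = 3306
socket  = /var/run/mysqld/mysqld1.sock
datadir = /var/lib/mysql1

[mysqld2]
port    = 3307
socket  = /var/run/mysqld/mysqld2.sock
datadir = /var/lib/mysql2
```

With that in place, `mysqld_multi start 1,2` brings both instances up, each with its own datadir and tuning.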
format; newer ones are much longer and start with
an *, so you can tell which users use the new scheme and which the old.)
3) No, '%' means everything except localhost... That's a special case in
MySQL :)
Joey wrote:
Dominik Klein wrote:
Is there a way to allow the
SQL_THREAD to write while holding everything else ?
iptables -A INPUT -p tcp --dport 3306 -s MASTER_IP -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j REJECT
Well, technically that will refuse connections. Since the switch would be
You have many options, like the people below just suggested...
1 - Use mysqldump
2 - Use mysqlhotcopy
or
3 - Do the mysqlhotcopy/mysqldump yourself
Since I found that neither 1 nor 2 alone gives a perfect result in
many backup schemes, I started working on something that complements
1 and 2
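Option 1 can be sketched like this; the paths and options are examples (for MyISAM-heavy servers you'd likely swap --single-transaction for --lock-all-tables), and DRY_RUN=1 only prints the command:

```shell
#!/bin/sh
# Sketch: dump every database into one dated, compressed file.
# Paths and option choices are hypothetical examples.
DRY_RUN=1
OUT="/backup/all-databases-$(date +%Y%m%d).sql.gz"
CMD="mysqldump --all-databases --single-transaction --flush-logs | gzip > $OUT"
if [ "$DRY_RUN" = 1 ]; then
  echo "$CMD"     # dry run: show the command that would run
else
  eval "$CMD"     # real run: produce the compressed dump
fi
```

Restoring from such a dump is the text-file route mentioned elsewhere in the thread: `gunzip < dump.sql.gz | mysql`.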
that you probably want to use the MyISAM table format and not the
old ISAM one :)
But with that database size and that amount of RAM, you'll have
intensive I/O unless it's almost always the same data that is accessed...
(which doesn't make much sense if you keep that much!)
Just my 2 cents...
is that MySQL builds tables with a default maximum of 4 GB. It's
possible to go beyond that, but you need to plan for it if you think your table may
grow bigger than that (for MyISAM, that is)!
kalin mintchev wrote:
hi all...
what's the best way to periodically back up mysql data?
so that databases and tables can be still usable even after a mysql upgrade?
thanks...
The only absolutely portable way is a dump to a text file...
Good luck
however, use a typical index here, or even better a unique
index, to enforce the validation!
Hope it helps you in your development!
See the manual for all information about fulltext indexes:
http://dev.mysql.com/doc/refman/4.1/en/fulltext-search.html
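A sketch of the unique-index approach (table and column names are hypothetical):

```sql
CREATE TABLE users (
  id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  UNIQUE KEY uniq_email (email)
);
-- A duplicate value is now rejected by the server itself (ER_DUP_ENTRY),
-- so the validation cannot be bypassed by application code.
```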
? I'm not even sure a limit of that kind exists.
(Just putting my thoughts on the table)
Hello everybody,
I use a table for caching. The table is quite simple: there is an id
field, which is an MD5 of some attributes of an object, a timestamp field,
and a text field for storing the data.
| id char(32) | timestamp | data (text) |
We clean the table every 2 mins for data older than 10
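The table and cleanup described above might look roughly like this (the 10-minute threshold is assumed from the truncated sentence):

```sql
CREATE TABLE cache (
  id   CHAR(32)  NOT NULL PRIMARY KEY,  -- MD5 of the object's attributes
  ts   TIMESTAMP NOT NULL,
  data TEXT
);

-- Run every 2 minutes:
DELETE FROM cache WHERE ts < NOW() - INTERVAL 10 MINUTE;
```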