Hi
Recently I noticed the server takes a lot of time, on and off, when opening and
closing tables. I tried increasing table_cache beyond the total number of
tables (file_limit is properly set); the problem still continues, and it
continues with a lower value too.. I also tried a value in the middle.. same
Any
hi,
I wonder if it is safe to assume that the binlog can stay in the master's
memory while replication happens. If not, when the binlog gets
corrupted, will the slave's binlog also get corrupted?
Is there a way to make the slave's binlog survive even a disk failure on the master?
Thank you
--
MySQL
Hi All,
is it possible to auto-insert into another table once new data is
inserted into a table? I'm using Asterisk MySQL CDR. What I'd like to do
is: once Asterisk inserts new data into the cdr table, I will insert into
another table which already includes how much the call cost, because I don't
want to
You can probably use a trigger. Check the section of the manual that
explains triggers.
Baron
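A minimal sketch of such a trigger, assuming hypothetical column names (uniqueid, billsec) and a flat per-second rate for illustration — the real Asterisk CDR schema and your rating logic will differ:

```sql
-- Assumed tables/columns for illustration only; adapt to the real cdr schema.
DELIMITER //
CREATE TRIGGER cdr_after_insert
AFTER INSERT ON cdr
FOR EACH ROW
BEGIN
    -- billsec * 0.01 stands in for whatever your actual rating rule is
    INSERT INTO call_costs (uniqueid, cost)
    VALUES (NEW.uniqueid, NEW.billsec * 0.01);
END//
DELIMITER ;
```

The trigger fires inside Asterisk's own INSERT, so if the cost insert fails, the CDR insert fails too — something to keep in mind before putting it in production.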
On Fri, Feb 27, 2009 at 8:04 AM, Ron r...@silverbackasp.com wrote:
Hi All,
is it possible to auto-insert into another table once new data is inserted
into a table? I'm using Asterisk MySQL CDR, what
On Fri, Feb 27, 2009 at 7:04 AM, Cui Shijun rancp...@gmail.com wrote:
hi,
I wonder if it is safe to assume that the binlog can stay in the master's
memory while replication happens
It's not safe to assume. It varies from system to system depending on
operating system, filesystem, scheduler algorithm,
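On the durability side of the question: MySQL has a server option controlling how aggressively the binlog is flushed to disk. A my.cnf fragment as a sketch (sync_binlog=1 syncs after every transaction, at a performance cost; note it still does not protect against the master's disk dying — only the slave's own copy of the replicated events does that):

```ini
[mysqld]
log-bin     = mysql-bin
# 1 = fsync the binary log after every transaction (safest, slowest);
# 0 = let the OS decide when to flush (fastest, least safe)
sync_binlog = 1
```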
Hi everybody ,
I'm searching for a good way to monitor MySQL
availability (to be alerted when it goes down unplanned), and I
just wanted to ask around to learn which approaches you people find the most
efficient. Are you using third-party software, scripts,
On Fri, Feb 27, 2009 at 4:19 AM, dbrb2002-...@yahoo.com wrote:
Hi
Recently I noticed the server takes a lot of time, on and off, when opening and
closing tables. I tried increasing table_cache beyond the total number of
tables (file_limit is properly set); the problem still continues and
We monitor hundreds of production systems of every kind with Nagios.
I don't have time to search for better tools, but it is doing its job.
Cheers
Claudio
2009/2/27 Éric Fournier eric.fourn...@cspq.gouv.qc.ca
Hi everybody ,
I'm searching for a good way to
Hi,
I'm using Nagios or Zabbix as the monitoring system for my own and my clients'
infrastructures.
When a client doesn't want access to his monitoring system, I install
Nagios with a couple of fine-tuned configurations/plugins. This is mainly a
legacy choice, since I've been using it for years.
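Alongside Nagios/Zabbix, even a home-grown TCP probe of the MySQL port will tell you whether the server is still accepting connections. A minimal sketch in Python (host and port are assumptions; a real check would also authenticate and run a trivial query such as SELECT 1):

```python
import socket

def mysql_reachable(host: str, port: int = 3306, timeout: float = 3.0) -> bool:
    """Return True if something accepts a TCP connection on host:port.

    This only proves the port is open, not that MySQL is healthy; pair it
    with a real client check (e.g. mysqladmin ping) for production alerting.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False
```

Run it from cron and alert when it returns False a few times in a row, so a single dropped packet doesn't page you.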
On Fri, February 27, 2009 05:50, Baron Schwartz wrote:
On Fri, Feb 27, 2009 at 4:19 AM, dbrb2002-...@yahoo.com wrote:
Hi
Recently I noticed the server takes a lot of time, on and off, when opening
and closing tables. I tried increasing table_cache beyond the
total number of tables (file_limit
Thanks for the quick followup Baron..
vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  0    100 499380 139256 560400    0    0   190   693   11   11 20  2 70  8  0
In the last episode (Feb 27), dbrb2002-...@yahoo.com said:
Recently I noticed the server takes a lot of time, on and off, when opening
and closing tables. I tried increasing table_cache beyond the
total number of tables (file_limit is properly set); the problem still continues
and lowering
Looks like the system is doing a lot of disk WRITES. Your writes/sec are
much higher than your reads/sec, but the time requests wait in the queue is
low. Did you try top -i with the x option entered? That will produce a
highlighted line if a task is I/O bound.
On Fri, February 27, 2009 11:51,
What is this supposed to mean from the manual:
The use of mysql_num_rows()
http://dev.mysql.com/doc/refman/5.0/en/mysql-num-rows.html depends on
whether you use mysql_store_result()
http://dev.mysql.com/doc/refman/5.0/en/mysql-store-result.html or
mysql_use_result()
Is there another way to determine what the query found? The input is a
string, whatever it may be.
I'm looking for a return of a string, or null, or a number of rows, or
something that will permit me to channel the execution of the file.
The snippet below is good for (there is a match) and (there is no match)
but
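For the record, the manual sentence means: with mysql_store_result() the whole result set is on the client, so mysql_num_rows() is valid immediately; with mysql_use_result() it is only correct after every row has been fetched. If the goal is just a count, one server-side alternative (table and column names here are hypothetical) is FOUND_ROWS(), which reports how many rows the previous SELECT in the same connection found:

```sql
-- my_table / name are assumed names; FOUND_ROWS() is a real MySQL function
SELECT * FROM my_table WHERE name = 'whatever';
SELECT FOUND_ROWS();
```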
Hi,
I have one 15 GB table with 250 million records and just the primary key;
it is a very simple table, but when a report is run (a query) it just takes
hours,
and sometimes the application hangs.
I was trying to play a little with indexes and tuning (there are no great
indexes to be added, though)
but
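A first diagnostic step on a table that size is to EXPLAIN the report query and see whether it scans all 250 million rows. A sketch, with a purely hypothetical query:

```sql
-- report_table / customer_id are assumed names for illustration
EXPLAIN SELECT customer_id, COUNT(*)
FROM   report_table
GROUP  BY customer_id;
```

If the type column says ALL and rows is in the hundreds of millions, the query is doing a full table scan, and no amount of server tuning will make it fast.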
Great, Brent, that helps a lot!
It is very good to know your experience.
I will speak to the developers and try to see if there is an opportunity to
apply the 'Divide et Impera' principle!
I am sorry to say MySQL is a little out of control when dealing with
huge tables; it is the first time I had to
Have you tried disabling indexes while loading?
Here is what I mean...
CREATE TABLE tb1 (A INT NOT NULL AUTO_INCREMENT PRIMARY KEY, B VARCHAR(20),
C VARCHAR(10));
Load tb1 with data
Create a new table, tb2, with new structure (indexing B and C columns)
CREATE TABLE tb2 LIKE tb1;
ALTER TABLE tb2
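The message is cut off at the ALTER, but the usual shape of this load-then-index pattern for MyISAM looks something like the following (the index names are assumptions):

```sql
CREATE TABLE tb2 LIKE tb1;
ALTER TABLE tb2 ADD INDEX idx_b (B), ADD INDEX idx_c (C);
-- for MyISAM, defer index maintenance during the bulk copy:
ALTER TABLE tb2 DISABLE KEYS;
INSERT INTO tb2 SELECT * FROM tb1;
ALTER TABLE tb2 ENABLE KEYS;  -- rebuilds the non-unique indexes in one pass
```

Note that DISABLE KEYS only skips non-unique indexes; unique and primary keys are still maintained row by row during the copy.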
Thanks Dan.. that's a valuable point.. and this is actually happening with MyISAM
tables only..
But the question is: when I set table_cache higher than the total number of tables..
then it should stop closing the tables in the first place.. so that only un-opened
tables will be opened and kept in the cache.. it
MySQL can open a single table multiple times, depending on how many
clients need to use it. This means that a table_cache equal to the
total number of tables will only be enough if your MySQL server has only one
client.
For more details read:
http://dev.mysql.com/doc/refman/5.0/en/table-cache.html
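A quick way to check whether the cache is actually thrashing is to watch the Opened_tables status counter against the configured cache size; if Opened_tables keeps climbing under steady load, tables are still being closed and reopened:

```sql
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_cache';
```

(The variable is named table_cache in 5.0; later versions renamed it table_open_cache.)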
On
In the last episode (Feb 27), dbrb2002-...@yahoo.com said:
Thanks Dan.. that's a valuable point.. and this is actually happening with
MyISAM tables only..
But the question is: when I set table_cache higher than the total number of
tables.. then it should stop closing the tables in the first place.. so that