Re: large temp files created by mysql

2012-05-24 Thread Luis Daniel Lucio Quiroz
I may have found a solution.

step 1:
mysql> explain select * from users;
+----+-------------+-------+------+---------------+------+---------+------+----------+-------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows     | Extra |
+----+-------------+-------+------+---------------+------+---------+------+----------+-------+
|  1 | SIMPLE      | users | ALL  | NULL          | NULL | NULL    | NULL | 32883093 |       |
+----+-------------+-------+------+---------------+------+---------+------+----------+-------+
1 row in set (0.00 sec)

That gives you the `rows` estimate.

Step 2:
select * from users limit $r,1


What do you think? It is the only way I found that takes seconds, not
minutes. USERS is a 19 GB table for me.

LD
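The two-step recipe above can be sketched end to end. This is only an illustration: it uses SQLite's in-memory engine instead of MySQL, the table and its size are invented, and COUNT(*) stands in for the approximate `rows` figure that EXPLAIN provides on a large table.

```python
import random
import sqlite3

# Illustrative stand-in for the 19 GB MySQL table; COUNT(*) plays the role
# of the approximate `rows` estimate from EXPLAIN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("user%d" % i,) for i in range(1000)])

# Step 1: get the (estimated) row count.
(total,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()

# Step 2: pick a random offset $r and fetch exactly one row.
r = random.randrange(total)
row = conn.execute("SELECT * FROM users LIMIT 1 OFFSET ?", (r,)).fetchone()
```

Note that OFFSET still scans past the first $r rows, which is why this is seconds rather than instant on a big table.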

2011/10/30 Jan Steinman j...@bytesmiths.com:
 Actually, having tried that, you still need the ORDER BY RAND() in there. 
 Otherwise, I keep getting the same record over and over. But it surely cuts 
 way down on the number of rows that need to be sorted.

 So if your table size is fairly stable, and you pick a good number for the 
 WHERE constant, you can make this quite speedy.

 Still, it seems there should be a better way...

 On 30 Oct 11, at 18:51, Jan Steinman wrote:

 From: mos mo...@fastmail.fm


 At 10:34 AM 10/24/2011, you wrote:
 select id from table order by rand() limit 1;
 creates, for example, a temporary table the full size of the original

 Because it has to sort the entire table, then it returns the one row. This 
 of course is extremely inefficient. :)

 That is absolutely incredible and counter-intuitive, and (as you say) 
 extremely inefficient!

 This is used everywhere. Perhaps it is one of the biggest anti-patterns in 
 SQL. I just checked two different SQL cookbook sites, and they both 
 recommend ORDER BY RAND().

 I just googled around a bit, and found that putting RAND() in the WHERE 
 clause is very efficient:

 SELECT id FROM table WHERE RAND() < 0.01 LIMIT 1

 The comparison constant can be optimized for the number of rows you have. 
 The above returns the first record of 1% of the table. If you have a million 
 rows, you might want to bump that to something like 100 parts per million or 
 so.
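Why the constant must be tuned to table size can be checked with a quick back-of-the-envelope calculation (the `p_empty` helper is just illustrative): `WHERE RAND() < c LIMIT 1` returns nothing if every row fails the test, which happens with probability (1 - c)^N.

```python
# Chance that `WHERE RAND() < c LIMIT 1` matches no row at all:
# each of the N rows independently fails the test with probability (1 - c).
def p_empty(n_rows: int, c: float) -> float:
    return (1.0 - c) ** n_rows

print(p_empty(1_000_000, 0.000_1))  # c = 100 ppm is still safe at 1M rows
print(p_empty(100, 0.01))           # ~0.37: 1% is risky for a 100-row table
```

Note also the caveat: this returns the first matching row in scan order, so it is fast but not uniformly random.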

 But really, folks, this is something so ubiquitous and so recommended, why 
 can't the query optimizer look out for ORDER BY RAND() and simply skip the 
 table sort and just grab some record? (Hopefully using something better than 
 Knuth's LCRNG...)

 
 Learning to think wholistically requires an overriding, or reversal, of much 
 of the cultural heritage of the last few hundred years. -- David Holmgren
  Jan Steinman, EcoReality Co-op 


 
 Within a few human generations, the low-energy patterns observable in natural 
 landscapes will again form the basis of human system design after the richest 
 deposits of fossil fuels and minerals are exhausted. -- David Holmgren
  Jan Steinman, EcoReality Co-op 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    
 http://lists.mysql.com/mysql?unsub=luis.daniel.lu...@gmail.com





Re: Need help for performance tuning with Mysql

2012-05-24 Thread Yu Watanabe
Alex

Thank you for the advice.

Probably, we will put index (key) on both columns.

Thanks,
Yu

Alex Schaft wrote:
On 2012/05/24 07:37, Alex Schaft wrote:
 You are selecting a record based on the value of data_id and
 thold_enabled, but don't have an index on either? Add an index for both.
 If data_id is unique, then you would only need an index on that.

 Alex


On second thought, an index on thold_enabled won't mean much I think, so
either leave it off or create an index on data_id plus thold_enabled.
Someone more knowledgeable may correct me.

Alex
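The index suggestion can be sanity-checked quickly. This sketch uses SQLite's planner rather than MySQL's, with an invented index name and schema, but the principle carries over: with a composite index on (data_id, thold_enabled), the equality lookup becomes an index search instead of a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE thold_data (
    id INTEGER PRIMARY KEY,
    data_id INTEGER,
    thold_enabled TEXT,
    thold_hi TEXT)""")

# Composite index, most selective column (data_id) first.
conn.execute("CREATE INDEX idx_data_enabled "
             "ON thold_data (data_id, thold_enabled)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM thold_data "
    "WHERE data_id = 91633 AND thold_enabled = 'on'").fetchall()
# The plan should mention idx_data_enabled instead of a full table scan.
print(plan)
```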









Re: Need help for performance tuning with Mysql

2012-05-24 Thread Yu Watanabe
Johnny

Thank you for the reply.

Second, make sure your Innodb buffer pool is allocating as much ram as
possible. I'd even go as far as adding another 8gb of ram to the
server. The buffer pool setting is going to give you the best
performance increase.

  The problem is mainly with the MyISAM engine.

Also, what kind of hard disks do you have the data files on? Raid? No raid?

  The server has no RAID.

Thanks,
Yu

Johnny Withers wrote:
I don't see any attachments.

First, I would upgrade to 5.5 as 5.0 is very old. The upgrade process
is painless.

Second, make sure your Innodb buffer pool is allocating as much ram as
possible. I'd even go as far as adding another 8gb of ram to the
server. The buffer pool setting is going to give you the best
performance increase.

Also, what kind of hard disks do you have the data files on? Raid? No raid?

Sent from my iPad

On May 22, 2012, at 9:08 PM, Yu Watanabe yu.watan...@jp.fujitsu.com wrote:

 Hello all.

 I would like to ask for advice with performance tuning with MySQL.

 Following are some data for my server.

 CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
 Memory : 8GB
 OS : RHEL 4.4 x86_64
 MySQL  : MySQL 5.0.50sp1-enterprise

 Attached file
 # my.cnf.txt  : my.cnf information
 # mysqlext_20120522131034.log : variable and status information from 
 mysqladmin

 I have 2 database working with high load.

 I wanted to speed up my select and update queries not by
 optimizing the query itself but tuning the my.cnf.

 I have referred to following site,
 http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

 and read High Performance MySQL, 2nd edition,
 and increased the following values,

 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size

 but it made little difference.

 According to the ps and sar result

 *1 PS result
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028

 *2 SAR
 Average CPU user 25%
sys  5%
io   3%

 I assume MySQL could handle more load, but currently it does not.

 I am considering off-loading one high-load database to a
 separate process, so that MySQL runs as multiple processes.

 It would be a great help if people in this forum could give
 us advice on the tuning.

 Best Regards,
 Yu Watanabe










Architectural Help

2012-05-24 Thread Anupam Karmarkar
Hi All,


I need architectural help for our requirement, 


We have nearly 50 data centres throughout different cities. Applications in 
these data centres currently connect to a central database server, but there 
are connectivity and network-fluctuation issues between the data centres. So we 
came up with a solution: each data centre should have a local database server 
that keeps syncing with the other servers, so that the application doesn't 
fail. User data can be updated on any server and should be reflected on every 
server. The application consists of write/read/delete operations.


The central server currently takes about 1 million writes per day.


Only 1 in 1,000 writes needs to be distributed across the servers; the rest 
stay on the central server.


How can we achieve this? The solution needs near-real-time data while 
accepting network lag.


Proposed solution:

Collect all changes from the other 49 servers into 1 central server (how can 
we collect the data?)


The 49 servers keep updating local data from the central server (this can be 
done using replication)



--Anupam


Re: Architectural Help

2012-05-24 Thread Ananda Kumar
Is the central database server just ONE server, to which all your 50 data
center apps connect?

On Thu, May 24, 2012 at 2:47 PM, Anupam Karmarkar
sb_akarmar...@yahoo.com wrote:




RE: Architectural Help

2012-05-24 Thread Nigel Wood
A few questions:
Which is more of a problem: network outages, network capacity, or query latency?
When you say near real-time, do you need a transactionally consistent view on 
all 49 servers, or can some lag be tolerated?
Can any one of the 49 local servers potentially update/delete the same rows or 
data?
Is there any natural segmentation point within the data?
Do the data centres have diverse networks, so that connections to some data 
centres may remain when others fail?
In the event that a local data centre is totally isolated from the others, what 
data should it be allowed to update?
Do your applications produce/examine large data sets, querying by secondary 
keys or using full-text search?
Are you in a position to modify the applications?



Re: Architectural Help

2012-05-24 Thread Anupam Karmarkar
Dear Nigel,

Thanks for the reply.


See my comments below


--Anupam



 From: Nigel Wood nw...@plus.net
To: Anupam Karmarkar sb_akarmar...@yahoo.com; mysql@lists.mysql.com 
mysql@lists.mysql.com 
Sent: Thursday, 24 May 2012 3:35 PM
Subject: RE: Architectural Help
 
A few questions:
Which is more of a problem: network outages, network capacity, or query latency?
Network outages are the problem, between data centres.
When you say near real-time, do you need a transactionally consistent view on 
all 49 servers, or can some lag be tolerated?
Some lag can be tolerated, considering network outages.
Can any one of the 49 local servers potentially update/delete the same rows or 
data?
Yes, the central server can update the same row as a local server, but not at 
the same time. A few items are updated in the central location, and most are 
updated locally. One application writes centrally and the others write locally; 
the local and central applications share some data.
Is there any natural segmentation point within the data?
No.
Do the data centres have diverse networks, so that connections to some data 
centres may remain when others fail?
Yes, we have a diverse network, so connectivity will be available in the other 
data centres.
In the event that a local data centre is totally isolated from the others, what 
data should it be allowed to update?
Local applications should always write to the local database; 
central-application updates will not be available locally.
Do your applications produce/examine large data sets, querying by secondary 
keys or using full-text search?
We don't have text search and we don't query large data sets.
Are you in a position to modify the applications?
No; minor changes are OK.

RE: large temp files created by mysql

2012-05-24 Thread Rick James
Maybe.

1. The `rows` estimate is approximate, so you could over-shoot or under-shoot 
the end of the table.

2. OFFSET (LIMIT $r,1) still has to scan $r rows.

3. SELECT * with an OFFSET will read entire rows.  SELECT `foo`, where `foo` 
is indexed but is not the InnoDB PRIMARY KEY, will scan only INDEX(foo).  This 
is likely to be much faster.  But you are unlikely to do that unless `foo` is 
UNIQUE.

A slight improvement (addressing both issues) is to decide which end to 
approach from.  But scanning from the end needs an ORDER BY, preferably on the 
PRIMARY KEY.

etc.
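The "decide which end to approach from" idea can be sketched as follows. This is a hedged illustration with SQLite and an invented table of ids 1..1000 (on MySQL the row count would come from EXPLAIN's estimate): scanning from whichever end is nearer halves the worst-case offset.

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO users (id) VALUES (?)",
                 [(i,) for i in range(1, 1001)])

(total,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()
r = random.randrange(total)

# Approach from whichever end is nearer; scanning from the end needs an
# ORDER BY, here on the PRIMARY KEY.
if r < total // 2:
    sql, offset = "SELECT id FROM users ORDER BY id ASC LIMIT 1 OFFSET ?", r
else:
    sql, offset = ("SELECT id FROM users ORDER BY id DESC LIMIT 1 OFFSET ?",
                   total - 1 - r)

(picked,) = conn.execute(sql, (offset,)).fetchone()
assert picked == r + 1  # same row either way, at most total/2 rows scanned
```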





RE: large temp files created by mysql

2012-05-24 Thread Luis Daniel Lucio Quiroz
For my purposes, approximate is fine; I guess for others it isn't.
On May 24, 2012, 9:59 a.m., Rick James rja...@yahoo-inc.com wrote:




RE: Need help for performance tuning with Mysql

2012-05-24 Thread Rick James
Thanks.  I got tired of answering the same questions about buffer_pool and 
key_buffer over and over on forums.mysql.com; now I just point people at that 
page.

INT(1) and INT(50) are identical -- and take 4 bytes.  See TINYINT, BIGINT, 
etc.  Also, UNSIGNED is probably wanted in more places than you have it.

555KB is not very big.  But a table scan (as indicated by the EXPLAIN) costs 
something.

select  *                  -- Is this what you are really fetching?  If not, we can discuss a covering index.
from  thold_data
where  thold_enabled='on'  -- 2-valued flag?  Not likely to be useful in an index, esp. not by itself
  AND  data_id = 91633;    -- Probably the best bet.

Recommend:
INDEX(data_id)  -- or UNIQUE, if it is unique
INDEX(data_id, thold_enabled)  -- or the opposite order; this probably would 
not be noticeably better.
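The covering-index remark above can be illustrated. This sketch uses SQLite's planner rather than MySQL's, with an invented index name and a cut-down schema: when the query touches only indexed columns, the planner answers from the index alone and never reads the table rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thold_data ("
             "id INTEGER PRIMARY KEY, data_id INTEGER, "
             "thold_enabled TEXT, thold_hi TEXT)")
conn.execute("CREATE INDEX idx_cover ON thold_data (data_id, thold_enabled)")

# Selecting only the indexed columns lets the planner answer from the
# index alone -- SQLite reports a COVERING INDEX in its plan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT data_id, thold_enabled FROM thold_data "
    "WHERE data_id = 91633 AND thold_enabled = 'on'").fetchall()
print(plan)
```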

`notify_default` enum('on','off') default NULL
Did you really mean to have 3 values (on, off, NULL)?



 -Original Message-
 From: Yu Watanabe [mailto:yu.watan...@jp.fujitsu.com]
 Sent: Wednesday, May 23, 2012 7:10 PM
 To: mysql@lists.mysql.com
 Subject: Re: Need help for performance tuning with Mysql
 
 Rick
 
 Thank you for the reply.
 
 1. There are only a few things worth tuning -- see
 http://mysql.rjweb.org/doc.php/memory (they don't include the ones you
 tried)
 
 
   The page is really cool. It's very simple and easy to understand.
 
 2. Instead INDEXes and schema design must be studied.  Please provide:
 SHOW CREATE TABLE
 
   | thold_data | CREATE TABLE `thold_data` (
 `id` int(11) NOT NULL auto_increment,
 `rra_id` int(11) NOT NULL default '0',
 `data_id` int(11) NOT NULL default '0',
 `thold_hi` varchar(100) default NULL,
 `thold_low` varchar(100) default NULL,
 `thold_fail_trigger` int(10) unsigned default NULL,
 `thold_fail_count` int(11) NOT NULL default '0',
 `thold_alert` int(1) NOT NULL default '0',
 `thold_enabled` enum('on','off') NOT NULL default 'on',
 `bl_enabled` enum('on','off') NOT NULL default 'off',
 `bl_ref_time` int(50) unsigned default NULL,
 `bl_ref_time_range` int(10) unsigned default NULL,
 `bl_pct_down` int(10) unsigned default NULL,
 `bl_pct_up` int(10) unsigned default NULL,
 `bl_fail_trigger` int(10) unsigned default NULL,
 `bl_fail_count` int(11) unsigned default NULL,
 `bl_alert` int(2) NOT NULL default '0',
 `lastread` varchar(100) default NULL,
 `oldvalue` varchar(100) NOT NULL default '',
 `repeat_alert` int(10) unsigned default NULL,
 `notify_default` enum('on','off') default NULL,
 `notify_extra` varchar(255) default NULL,
 `host_id` int(10) default NULL,
 `syslog_priority` int(2) default '3',
 `cdef` int(11) NOT NULL default '0',
 PRIMARY KEY  (`id`),
 KEY `rra_id` (`rra_id`)
   ) ENGINE=MyISAM AUTO_INCREMENT=69641 DEFAULT CHARSET=latin1 |
 
 SHOW TABLE SIZE
 
   You must mean SHOW TABLE STATUS.
 
 mysql> show table status where name = 'thold_data'\G
 *************************** 1. row ***************************
            Name: thold_data
          Engine: MyISAM
         Version: 10
      Row_format: Dynamic
            Rows: 6161
  Avg_row_length: 90
     Data_length: 555128
 Max_data_length: 281474976710655
    Index_length: 140288
       Data_free: 0
  Auto_increment: 70258
     Create_time: 2012-05-24 10:41:47
     Update_time: 2012-05-24 10:47:19
      Check_time: 2012-05-24 10:41:47
       Collation: latin1_swedish_ci
        Checksum: NULL
  Create_options:
         Comment:
 1 row in set (0.00 sec)
 
 EXPLAIN SELECT
 
   I have seen the following select query in the slow query log.
   I also saw update queries.
 
 mysql> explain select * from thold_data where thold_enabled='on' AND data_id = 91633;
 +----+-------------+------------+------+---------------+------+---------+------+------+-------+
 | id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows | Extra |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------+
 |  1 | SIMPLE      | thold_data | ALL  | NULL

RE: Architectural Help

2012-05-24 Thread Rick James
I deal with dozens of systems, all doing cross-country or cross-ocean 
replication.  The only viable approach (that we have deployed in production) is
* Dual-master, single writer -- That is, all 49 clients write to one machine
* The two masters are geographically separate.
* Failover involves switching to the 'other' master.  It is _not_ fully 
automated.
* Slaves, as needed, scattered around the world -- This provides read scaling.

1M writes per day -- that is an average of 12/sec.  Yawn.
Replication delay -- you probably will almost never see any.
Network outages -- one of many things that can cause trouble.  The sooner you 
write the data _and_ copy it to a _remote_ site, the sooner you are immune to 
floods, tornadoes, cyclones, power outages, motherboard death, etc.
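The "Yawn" is easy to verify with a little arithmetic (the 10x peak factor is an assumed illustration, not from the thread):

```python
# Sanity check on the load figure: 1 million writes/day is modest.
writes_per_day = 1_000_000
avg_per_sec = writes_per_day / 86_400   # 86,400 seconds in a day
peak = avg_per_sec * 10                 # assume a generous 10x peak factor
print(round(avg_per_sec, 1), round(peak))  # ~11.6 avg, ~116 peak
```

Even the assumed peak is trivial for a single write master.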

