Re: Does binlog stay in memory when replication happens?

2009-02-28 Thread Cui Shijun
hi,
  Thank you for your reply :-)

> What does replication have to do with it?
I wonder whether mysql will copy its binlog from memory (if it exists
there) and then send it to the slave. I haven't referred to the source
code yet...

> Slave's binlog?  The slave's binlog is not tied to the master's
> binlog.  I am not sure you understand clearly how replication works.
Yes. I am confused by the binlog and the RELAY log. Considering only the
data that is replicated, will the content (at least the SQL) of the two
logs be the same?
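
One quick way to compare the two yourself (a minimal sketch, assuming the
default 'mysql-bin' log base name; all statements exist in 5.0):

-- On the master: what went into the binary log
SHOW MASTER STATUS;
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 10;

-- On the slave: which relay log file/position the SQL thread is reading
-- (see Relay_Log_File and Relay_Log_Pos in the output)
SHOW SLAVE STATUS\G

The relay log is written in the same format as the binary log, so it can
also be dumped with the mysqlbinlog utility; for statement-based
replication the statements should match the master's binlog events, though
file names, positions, and rotation points will differ.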

> Again I'm not sure what you are really asking.  You might be looking
> for the Google synchronous replication patches.  Do a web search for
> those and read up on what they offer.
It helps :-)





Re: MyISAM large tables and indexes managing problems

2009-02-28 Thread Claudio Nanni
Hi Rolando,
I am going to give it a try, but the thing is that creating an index on a
MyISAM table causes a re-copy of the table (using a temporary table), so it
is the same thing, and it seems to take a lot of time.
I stopped it after 10 hours or so.
I think that is just the way mysql manages index creation.

Anyway thank you and I will update you!

Claudio


2009/2/27 Rolando Edwards redwa...@logicworks.net

 Have you tried disabling indexes while loading?
 Here is what I mean...

 CREATE TABLE tb1 (A INT NOT NULL AUTO_INCREMENT PRIMARY KEY, B VARCHAR(20),
 C VARCHAR(10));

 Load tb1 with data

 Create a new table, tb2, with new structure (indexing B and C columns)
 CREATE TABLE tb2 LIKE tb1;
 ALTER TABLE tb2 ADD INDEX NDX1 (B);
 ALTER TABLE tb2 ADD INDEX NDX2 (C);

 Load tb2 with non-unique indexes turned off

 ALTER TABLE tb2 DISABLE KEYS;
 INSERT INTO tb2 SELECT * FROM tb1;

 Only the Primary Key got loaded in tb2
 Now build the other two indexes

 ALTER TABLE tb2 ENABLE KEYS;

 This should build the indexes linearly, loading key entries into the .MYI
 file of the MyISAM table.
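
(A possibly relevant aside, not from Rolando's message: the ENABLE KEYS
rebuild uses MyISAM's fast repair-by-sorting path only when the sort
buffer and temporary file space allow, so these variables are worth
checking first; the SET value is just an example.)

SHOW VARIABLES LIKE 'myisam_sort_buffer_size';
SHOW VARIABLES LIKE 'myisam_max_sort_file_size';
SET SESSION myisam_sort_buffer_size = 268435456;  -- 256MB, example value only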

 Give it a try !!!


 Rolando A. Edwards
 MySQL DBA (CMDBA)

 155 Avenue of the Americas, Fifth Floor
 New York, NY 10013
 212-625-5307 (Work)
 201-660-3221 (Cell)
 AIM : RolandoLogicWorx
 Skype : RolandoLogicWorx
 redwa...@logicworks.net


 -Original Message-
 From: Claudio Nanni [mailto:claudio.na...@gmail.com]
 Sent: Friday, February 27, 2009 4:43 PM
 To: mysql@lists.mysql.com
 Subject: MyISAM large tables and indexes managing problems

 Hi,
 I have one 15GB table with 250 million records and just the primary key;
 it is a very simple table, but when a report is run (a query) it just
 takes hours, and sometimes the application hangs.
 I was trying to play a little with indexes and tuning (there are no great
 indexes to be added, though), but every time I try to alter the table to
 add indexes it just hogs the disk space and takes hours trying to build
 the indexes in several passes (.TMD files). It is a real pain, since I
 cannot even kill the mysql process; I had to kill the server, which caused
 table corruption, and then had to stop/start and repair the table.
 Does anybody experience problems in managing a simple MyISAM table with
 250 million records and a primary key?
 I also tried to duplicate the table, add the indexes and insert into it
 (also using INNODB for the new table) but it is really taking ages every
 time. And I had to move 'tmpdir' to the data partition because it was
 filling / to 100%.

 MySQL is 5.0.x on 64bit RHEL 5 with 16GB RAM and NAS storage.

 Any hint on how to manage big tables?

 Thanks

 Claudio Nanni



Re: MyISAM large tables and indexes managing problems

2009-02-28 Thread Claudio Nanni
Yes, I killed the query several times but no way: the server kept hogging
disk space, and not even a shutdown worked!
Thanks!
Claudio



2009/2/27 Brent Baisley brentt...@gmail.com

 MySQL can handle large tables no problem; it's large queries that it
 has issues with. Couldn't you just kill the query instead of killing
 MySQL? Use SHOW PROCESSLIST to get the query id, then KILL it. You may
 already know that.
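
For example (a two-statement sketch; the id below is a placeholder read
off the PROCESSLIST output):

SHOW FULL PROCESSLIST;  -- note the Id column of the runaway query
KILL 12345;             -- 12345 stands in for that Id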

 Brent

 On Fri, Feb 27, 2009 at 5:25 PM, Claudio Nanni claudio.na...@gmail.com
 wrote:
  Great Brent, that helps a lot!
  It is very good to know your experience.
  I will speak to the developers and see if there is an opportunity to
  apply the 'Divide et Impera' principle!
  I am sorry to say MySQL is a little out of control when dealing with
  huge tables; it is the first time I have had to kill the MySQL daemon
  a couple of times.
 
  Thanks again Brent
 
  Claudio
 
 
 
  Brent Baisley wrote:
 
  I've used a similar setup and hit up to 400 million records, with 5-7
  million records being added and deleted per day. Processing the table
  like you mention gave me the exact same result: the query was just too
  big for MySQL to handle. If you can break your query down into multiple
  smaller queries, it will run much quicker. I went from 7-8 hours down
  to 10 minutes. I broke my query into increments based on date/time and
  merged the results.
  I also switched to using MERGE tables so I could create a much narrower
  set of tables to query on (i.e. current_month); see the sketch after
  this message. Of course, this means splitting your table into separate
  tables based on a certain criterion. Basically, divide and conquer.
 
  Hope that helps.
 
  Brent Baisley
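
A minimal sketch of that MERGE-table layout (table and column names are
invented for illustration; the messages above give no schema):

CREATE TABLE log_2009_01 (
  id INT NOT NULL,
  created DATETIME NOT NULL,
  msg VARCHAR(255),
  INDEX (created)
) ENGINE=MyISAM;

CREATE TABLE log_2009_02 LIKE log_2009_01;

-- current_month presents the per-month tables as one; INSERT_METHOD=LAST
-- sends new rows to the last table in the UNION list.
CREATE TABLE current_month (
  id INT NOT NULL,
  created DATETIME NOT NULL,
  msg VARCHAR(255),
  INDEX (created)
) ENGINE=MERGE UNION=(log_2009_01, log_2009_02) INSERT_METHOD=LAST;

-- Each month, repoint the narrow table at the current underlying table:
ALTER TABLE current_month UNION=(log_2009_02);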
 



Re: MySQL Closing/Opening tables

2009-02-28 Thread Baron Schwartz
Hi,

On Fri, Feb 27, 2009 at 2:51 PM,  dbrb2002-...@yahoo.com wrote:
 Thanks for the quick followup Baron..

  vmstat
 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
  3  0    100 499380 139256 5604000    0    0   190   693   11   11 20  2 70  8  0

The first line of output shows averages since boot; you need to let it
run for several iterations to see what's happening NOW.  But on
average we can see that you're spending 8% of CPU time waiting for
I/O, which may be significant.  If you have, for example, a 4-core
system, that could mean one core is spending 32% of its time waiting.
And that's on average, which means peaks are higher.  If you run mpstat
-P ALL 5 you will be able to see the iowait for each CPU or core.
But based on the iostat output you pasted, I can pretty much predict
you're going to see high I/O wait.  Looking at iostat, I can see your
await (average wait time) is pretty long.  I like to see await in the
low-single-digit ms range.  And you've got reasonably high utilization
percent too.  All this while not doing many writes per second, and
with a short disk queue, at a non-peak time.  Look at iostat during
the times of high stalls and I bet you'll see the problem clearly.

I think the answer is probably that you have slow disks.  Get more and
faster disks.  If you need high performance, upgrade to a RAID array
with a battery-backed write cache on the controller, set to writeback
policy.  Get 10k or 15k RPM disks.  You'll see *much* higher
performance.  A typical Percona client has an LSI MegaRAID card (the
Dell PERC is basically the same thing) with 4 or 6 15k RPM 2.5"
server-grade SAS drives in a RAID 10 array.  You don't need to go this
high-end -- maybe you can get perfectly fine performance with three
7200RPM or 10kRPM disks in RAID 5, I don't know, that's up to you.  It
would be cheaper and if it's good enough, that's great.

We always benchmark drives to make sure they are installed correctly.
Here's a set of benchmarks with iozone from a recent job that has this
setup.  You may need to view this in fixed-width font:

./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P -T -f -x -S 4096 -a C 1 +D

Command line used: ./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P
-T -f -x -S 4096 -a C 1 +D
Time Resolution = 0.01 seconds.
Processor cache size set to 4096 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                             random   random     bkwd   record   stride
      KB  reclen   write  rewrite     read   reread     read    write     read  rewrite     read
  131072      16     175     3184    73943    72501    73101      932    74864     3232    72611
  262144      16     173     3214    74557    73412    73468     1120    74673     3280    73020
  524288      16     181     3266    75108    72978    72991      912    74291     3268    72524
 1048576      16     186     3267    74741    73103    72578      769    74096     3271    73487
 2097152      16     184     3267    74730    73474    72316      645    38541     3035    73862

Look ma, only 175 writes per second!  Slow as a dog!  So I checked the
RAID configuration and found out that the hosting provider had
mistakenly set the controller to WriteThrough policy... after fixing
that, look at the difference:

Command line used: ./iozone -c -n 128M -g 32G -r 16k -k -l -o
-O -P -T -f -x -S 4096 -a C 1 +D
Time Resolution = 0.01 seconds.
Processor cache size set to 4096 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                             random   random     bkwd   record   stride
      KB  reclen   write  rewrite     read   reread     read    write     read  rewrite     read
  131072      16    3260     7215    76574    75236    74495     1396    75960     7299    74788
  262144      16    3122     7342    76255    75272    73661     1259    75304     7422    74755
  524288      16    3118     7346    76156    75214    73629     1114    75390     7400    74256
 1048576      16    3112     7454    75981    74478    73206     1029    75029     7571    73901
 2097152      16    3110     7468    76100    74780    73218      926    75292     7573    74316

That's more like it.  Over 3000 synchronous writes per second (TO
DURABLE STORAGE, not just the OS cache), while maintaining await in
the 1 to 3 ms range (as shown by iostat, not shown here).  This is
what I consider to be a real disk subsystem for a database server
:-)  You might consider benchmarking your disks to see what level of
performance they can achieve.  It is always worth doing IMO.

Dan Nelson's advice is also right on target.  And Eric Bergen's too,
hi Eric :)  I disagree with Mr. Musatto -- just because you're doing

Re: Concurrent Inserts with merged table

2009-02-28 Thread buford
> At 07:38 PM 2/25/2009, you wrote:
>
>> not sure, though, feel free to report test results

Results not good so far. I created a big load file,
then did a LOAD DATA CONCURRENT INFILE 'my_big_file'...

While that was running, I ran a SELECT COUNT(*) query in another session.
The latter was blocked until the former finished, every time.

Looks like the concurrent-select behavior described in the documentation
doesn't work as advertised even for this simple case.
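
One thing worth ruling out (an aside, not something this thread confirmed):
MyISAM only performs concurrent inserts when the data file has no holes
left by deleted rows, and the feature is governed by the concurrent_insert
server variable. A quick check, with a placeholder table name:

SHOW VARIABLES LIKE 'concurrent_insert';  -- 1 (enabled) by default in 5.0
-- Holes from DELETEs suppress concurrent inserts until defragmented:
OPTIMIZE TABLE my_table;                  -- 'my_table' is a placeholder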

> What happens if you do a Select * from table limit 10?
>
> This bug was reported back in 2006 and should have been fixed by now. What
> version of MySQL are you using?



Thanks for that link. My test was on version 5.0.27. Is that prior to the
fix?








Re: whoa!

2009-02-28 Thread Andy Shellam

Hi PJ,

Having been a C programmer for a while this makes perfect sense to me.  
You call mysql_(real)_query to execute the query and get the results 
from the server.  Because the MySQL library doesn't know when you've 
finished with the result-set, you have to tell it to write the results 
to memory (mysql_store_result.)  This can then be read using 
mysql_num_rows and other data retrieval functions.


This sentence in the manual makes this clear:

After you have called mysql_store_result() and gotten back a result 
that isn't a null pointer, you can call mysql_num_rows() to find out how 
many rows are in the result set.


When you're finished with the result-set you can then use 
mysql_free_result to release the memory allocated by the library.
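
As a minimal sketch of that flow in C (the host, credentials, and table
name are placeholders; error handling follows the return-code rules
described below):

#include <stdio.h>
#include <string.h>
#include <mysql.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return 1;

    /* host/user/password/database are placeholders */
    if (mysql_real_connect(conn, "localhost", "user", "pass",
                           "test", 0, NULL, 0) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    /* mysql_real_query returns non-zero on failure */
    const char *sql = "SELECT id FROM t";
    if (mysql_real_query(conn, sql, strlen(sql)) != 0) {
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    /* pull the whole result-set into client memory ... */
    MYSQL_RES *res = mysql_store_result(conn);
    if (res == NULL) {
        fprintf(stderr, "store_result failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    /* ... so that mysql_num_rows() knows the full row count */
    printf("%lu rows\n", (unsigned long) mysql_num_rows(res));

    mysql_free_result(res);  /* release the stored result-set */
    mysql_close(conn);
    return 0;
}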


What difficulties are you having with error checking?  Most MySQL 
functions return NULL if the operation fails - mysql_real_connect, 
mysql_store_result etc.  Others return non-zero, like mysql_real_query.  
All the function documentation in the manual gives the return codes and 
how to tell if a call failed.


Personally I've found the MySQL manual the best of any documentation for 
systems we use, although I agree an example of a simple client/query 
would have been useful.


Regards,
Andy

PJ wrote:

What is this supposed to mean from the manual:
"The use of mysql_num_rows()
(http://dev.mysql.com/doc/refman/5.0/en/mysql-num-rows.html) depends on
whether you use mysql_store_result()
(http://dev.mysql.com/doc/refman/5.0/en/mysql-store-result.html) or
mysql_use_result()
(http://dev.mysql.com/doc/refman/5.0/en/mysql-use-result.html) to return
the result set. mysql_num_rows() is intended for use with statements that
return a result set, such as SELECT
(http://dev.mysql.com/doc/refman/5.0/en/select.html)."
Does this mean you have to use mysql_store_result() before using
mysql_num_rows()? It kind of doesn't make sense to have to do that. And
there are no clear-cut examples or explanations...
I do not wish to piss-and-moan, but I do find that there is a lot to be
desired in the manual; things that are very contradictory and/or unclear,
and certainly lacking in examples.

And please, somebody guide me to some tutorial or something where I can
learn to set up proper error checking so you guys don't have to listen
to my problems. :'(
  





Is binlog using WAL?

2009-02-28 Thread Cui Shijun
hi,
  It seems that mysql will call the write interfaces (e.g. bulk_update_row,
ha_update_row, ...) of the HANDLER before writing the binlog.
  Does this mean that the point at which the binlog is written actually
varies from engine to engine?

  Thank you.
