[Maria-developers] adding a rotate event at the beginning of the relay log would be a great help

2014-11-25 Thread nanyi607rao
Hi Kristian,

Currently, a relay log is purged immediately after it has been completely replayed
(relay_log_purge=1), and if the IO thread crashes, we have to delete all relay logs
and fetch the master binlog again starting from exec_master_log_pos/file
(relay_log_purge=ON). I think that wastes a great deal of disk IO, especially on
cloud disks. The critical problem is that there is no way to know the original
master binlog filename of an event in the relay log.

Actually, we can know a relay log event's original binlog offset on the master, but
we cannot know the original binlog filename. So why not force a rotate event (or
some other new event type) containing the original master binlog filename at the
beginning of each relay log? As far as I know, events from two different master
binlogs are never written into the same relay log, so all events in one relay log
share a single original binlog filename.
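
To make the idea concrete, here is a rough Python sketch of packing such a rotate
event carrying the master binlog filename. This is not the server's actual code:
it ignores checksums and the format-description event, and simply follows the
documented v4 event-header layout; the filename used is just an example.

import struct
import time

ROTATE_EVENT = 4  # binlog type code for a rotate event

def make_rotate_event(master_filename, master_pos, server_id):
    # Rotate body: 8-byte position in the master binlog, then the filename.
    name = master_filename.encode("utf-8")
    body = struct.pack("<Q", master_pos) + name
    event_size = 19 + len(body)            # 19-byte common event header + body
    header = struct.pack("<IBIIIH",
                         int(time.time()),  # timestamp
                         ROTATE_EVENT,      # event type
                         server_id,         # server_id of the master
                         event_size,        # total event size
                         0,                 # log_pos (unused in this sketch)
                         0)                 # flags
    return header + body

# A fresh relay log could then start with the binlog magic followed by this
# event naming the master binlog file ("mysql-bin.000042" is only an example):
relay_log_prefix = b"\xfebin" + make_rotate_event("mysql-bin.000042", 4, 1)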

If we want to know a relay log event's original binlog filename/position, we can
get the original filename from the beginning of the relay log and the original
binlog offset from the event's "end_log_pos". Conversely, if we know an event's
binlog filename/end position on the master, we can also quickly find it in the
slave's relay log.
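
A rough sketch of that lookup (again assuming the plain v4 event layout without
checksums, and that the relay log's first rotate event names the master binlog
file as proposed above; the helper names are made up for illustration):

import struct

MAGIC = b"\xfebin"      # 4-byte binlog/relay-log file magic
ROTATE_EVENT = 4

def read_events(path):
    """Yield (type_code, end_log_pos, file_offset, body) for each event."""
    with open(path, "rb") as f:
        assert f.read(4) == MAGIC
        offset = 4
        while True:
            header = f.read(19)
            if len(header) < 19:
                return
            _, etype, _, size, end_pos, _ = struct.unpack("<IBIIIH", header)
            body = f.read(size - 19)
            yield etype, end_pos, offset, body
            offset += size

def find_event(relay_log, master_file, master_end_pos):
    """Return the relay-log offset of the event with matching master coordinates."""
    master_of_log = None
    for etype, end_pos, offset, body in read_events(relay_log):
        if etype == ROTATE_EVENT and master_of_log is None:
            master_of_log = body[8:].decode("utf-8")   # skip the 8-byte position
            if master_of_log != master_file:
                return None    # this relay log belongs to a different master file
        elif end_pos == master_end_pos:
            return offset      # byte offset of the event inside the relay log
    return None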

That brings several benefits. Firstly, there is no need to delete all relay logs
and fetch the master binlogs again when the IO thread crashes, because we can get
the exact last read_master_log_filename/position from the last relay log.
Secondly, there is no need to use GTID in a 1-to-n replication failover scenario
(GTID requires log_slave_updates=ON in MySQL 5.6, which increases the disk IO
load): if the master crashes, the other slaves can get the missing events from the
newest slave's relay log, as long as the relay logs have not been purged, and then
the newest slave is promoted to master.
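
For the failover part, a small sketch (host names and credentials are
placeholders, and it assumes the mysql-connector-python client library) of picking
the most up-to-date slave by comparing the coordinates each one has read from the
master:

import mysql.connector

def read_coords(host):
    """Return (Master_Log_File, Read_Master_Log_Pos) as seen by one slave."""
    conn = mysql.connector.connect(host=host, user="repl_admin",
                                   password="secret")
    cur = conn.cursor()
    cur.execute("SHOW SLAVE STATUS")
    row = cur.fetchone()
    cols = [d[0] for d in cur.description]
    conn.close()
    status = dict(zip(cols, row))
    return status["Master_Log_File"], int(status["Read_Master_Log_Pos"])

slaves = ["slave1", "slave2", "slave3"]    # placeholder host names
newest = max(slaves, key=read_coords)      # furthest-read (file, position) wins
print("promotion candidate:", newest)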

Thanks.

2014-11-26



nanyi607rao


Re: [Maria-developers] doubt about read uncommitted

2014-11-25 Thread Roberto Spadim
Hi Jocelyn!
2014-11-25 5:27 GMT-02:00 jocelyn fournier :

> Hi Jean,
>
> Read uncommitted is definitely treated differently on TokuDB (no snapshot
> read). You have a description of the kind of lock taken by TokuDB, depending
> on the isolation level, here:
> https://github.com/Tokutek/tokudb-engine/wiki/Transactions-and-Concurrency


Wow, a very nice wiki about isolation levels. It would be nice to have this in
the MariaDB KB, maybe covering InnoDB and other engines too, just to better
explain how it works. Very nice wiki!


>   Jocelyn
>

thanks Jocelyn!


Re: [Maria-developers] doubt about read uncommitted

2014-11-25 Thread Roberto Spadim
Hi Daniel!
2014-11-25 5:54 GMT-02:00 Daniel Black :

>
>
> - Original Message -
> > hi Daniel
> >
> > the problems:
> >
> > 1) I have an application/database design problem. I don't have access to
> > the source code; the application runs on one server and the database on
> > another, and I only have access to the database.
> > 2) At the database, the problem is MyISAM locking contention when running
> > slow updates and selects: an update takes a write lock and the selects
> > wait on it. No problems with inserts (yet), and no deletes on this table
> > (yet).
>
> have you used explain {query} on all of these to verify if there is a
> missing index problem?
>

Yes, that's considered and very relevant to the optimizer.
I check profiling too, and I will get a 10.1 version to check ANALYZE as well
(the real-execution "explain").
I find slow queries mainly with the slow log, and then I consider creating an
index (without creating too many indexes); sometimes I check whether histograms
could help. I don't remember other options for tuning the optimizer, but that's
what I normally do when I find a slow query or users report contention.
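
As an illustration of that workflow, a small sketch that runs EXPLAIN on each
logged slow query so the plans can be sent to the developer. It assumes the
mysql-connector-python library; the connection details and query list are
placeholders, not the real application's queries.

import mysql.connector

slow_queries = [
    # placeholders; in practice these would come from the slow query log
    "SELECT COUNT(*) FROM orders WHERE status = 'open'",
    "UPDATE orders SET status = 'done' WHERE created < NOW() - INTERVAL 30 DAY",
]

conn = mysql.connector.connect(host="db-host", user="app", password="secret",
                               database="appdb")
cur = conn.cursor()
for q in slow_queries:
    cur.execute("EXPLAIN " + q)
    cols = [d[0] for d in cur.description]
    print(q)
    for row in cur.fetchall():
        print("  ", dict(zip(cols, row)))
conn.close()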




>
> > I tried to partition this table, but it doesn't reduce the write lock
> > time; the WHERE part of the update normally touches rows in all
> > partitions.
>
> Don't consider partitioning yet. It probably won't help.
>
Yeah :/ it didn't work. I haven't tested sharding with Spider yet.



>
> > 3) Users report long waits for data (this happens with long updates; I
> > confirm it with the slow query log and the processlist).
> >
> >
> > What I'm thinking of as possible solutions, though I don't know whether
> > they are really feasible or how complex they are:
> >
> > 0) Let the developer solve the problem by rewriting the code.
> > The 'best' option, but I need to contact him, and I can't reach him:
> > trying to call, sending email, and nothing...
>
> When you have indexes that need adding, feed this back. When you see bad
> queries, feed those back with details and a substitute query that does the
> same thing.
>
Nice, I'm logging them to send to the developer.
Sorry, I don't have much contact with him to solve this easily; many thanks for
helping me here.


>
>
> > 1) Convert MyISAM to InnoDB and check what happens.
> > I'm creating another server to reproduce the load; this takes some time
> > (>100GB of MyISAM tables) and has been running since the first email.
> > I will execute some logged queries to reproduce the load and check what
> > problems I get.
> > In the query log there are queries without a "where" part (possibly slow
> > queries for InnoDB) and queries using count/min/max without a "where" too.
>
> Don't remove the where part. It changes the query significantly and you
> don't have a comparison point. If your app doesn't do the queries without
> the wheres then why are you?
>
Yeah, I think the developer uses the table as a statistics table. MyISAM
sometimes does a good job with this design, but InnoDB sometimes executes a full
table scan when there is no WHERE part.
I was thinking about a way to rework this without changing the engine: something
like, if I find a query without a WHERE, rewrite it to read from a statistics
table, and create some triggers to keep that statistics table updated, which
would 'solve' part of the problem. I don't know if this would work; again, just
an idea (a rough sketch is below).
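
A rough sketch of that idea, keeping a one-row statistics table up to date with
triggers so the WHERE-less COUNT/MAX queries can read it instead of scanning the
big table. Table and column names are made up for illustration; this is not the
application's schema.

import mysql.connector

ddl = [
    """CREATE TABLE IF NOT EXISTS big_table_stats (
           row_count BIGINT NOT NULL,
           max_created DATETIME NULL
       ) ENGINE=InnoDB""",
    """INSERT INTO big_table_stats (row_count, max_created)
       SELECT COUNT(*), MAX(created) FROM big_table""",
    """CREATE TRIGGER big_table_ai AFTER INSERT ON big_table
       FOR EACH ROW
       UPDATE big_table_stats
          SET row_count = row_count + 1,
              max_created = GREATEST(COALESCE(max_created, NEW.created),
                                     NEW.created)""",
    # Note: if the row holding the current MAX is deleted, max_created would
    # need a rescan to stay exact; that case is ignored in this sketch.
    """CREATE TRIGGER big_table_ad AFTER DELETE ON big_table
       FOR EACH ROW
       UPDATE big_table_stats SET row_count = row_count - 1""",
]

conn = mysql.connector.connect(host="db-host", user="app", password="secret",
                               database="appdb")
cur = conn.cursor()
for stmt in ddl:
    cur.execute(stmt)
conn.commit()
conn.close()

The WHERE-less COUNT/MAX queries could then be pointed at big_table_stats
instead of the big table itself.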


>
> If you get two servers replicating off each other.

Yeah, I forgot the option of creating replicas... I will check this; maybe just
a good load balance solves the main problem.


> Have one server with MyISAM tables and another with InnoDB, and at some point
> switch the application to the other server, making sure replication has
> caught up before writes are enabled.
>
> Make sure the InnoDB server is tuned for InnoDB, with a decent buffer pool
> size etc.
>
Yeah, what I can do today without problems is take all queries from one
database's log and replay the same load on the other. OK, this doesn't test the
application; I have doubts about BLOBs inside InnoDB tables, and the application
test will probably give some feeling about this.
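
For the switch-over step above, a small sketch of waiting until the InnoDB
replica has caught up before pointing writes at it. It assumes the
mysql-connector-python library, and the host name and credentials are
placeholders.

import time
import mysql.connector

conn = mysql.connector.connect(host="innodb-replica", user="repl_admin",
                               password="secret")
cur = conn.cursor()
while True:
    cur.execute("SHOW SLAVE STATUS")
    row = cur.fetchone()
    cols = [d[0] for d in cur.description]
    lag = dict(zip(cols, row))["Seconds_Behind_Master"]
    # NULL (None) here means the SQL thread is not running; keep waiting.
    if lag == 0:
        print("caught up; safe to redirect writes")
        break
    print("still behind:", lag, "seconds")
    time.sleep(5)
conn.close()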


>
> Use mariadb-10 on both. log_slow_verbosity=query_plan,explain (on both)
> will help a lot.

Hum, nice. I only record the query and the time spent; I will record the plan
and explain output now.


> mysqldumpslow to assist in summarising all similar queries.
>
> Compare the results of the slow query log on a query by query basis.
>
Nice, there are some Percona tools that help here; I'm using them together with
the raw log.


>
> See if the problem is the query, the indexes or indeed the engine.
>
> Work out what's better overall by fixing indexes and suggesting query fixes
> to the developer, and if something is really bad and shouldn't be, give the
> devs here the details.
>
nice


>
> > 3) try other non standard mariadb engine?
>
> You could try tokudb the same way.
>
Right, considering this too.
What about replication with TokuDB? I haven't tested it yet; is there anything
different I should pay attention to?


>
> > 4) Use faster storage to reduce write/read time, thereby reducing lock time.
> > I think that's not a good solution, but it's the last one I have, and it
> > comes at a high cost.
>
> Lock time isn