Agreed, it's certainly not a MySQL-only issue. If you have a tightly linked DB you can use key-based distribution, which is what MySQL Cluster does (and pretty much every other RDBMS cluster). Ultimately, I think RDBMSs are just not the best at scaling, and perhaps a solution like Berkeley DB is more appropriate. (Only taking into consideration free solutions - obviously there is Oracle RAC.)
-Nick
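The key-based distribution idea can be sketched roughly like this - a hypothetical illustration only, not how MySQL Cluster actually partitions internally; the node names and the hash-modulo scheme here are assumptions for the sake of the example:

```python
import hashlib

# Hypothetical database nodes; the names are made up for illustration.
NODES = ["db-node-0", "db-node-1", "db-node-2", "db-node-3"]

def node_for_key(key: str) -> str:
    """Route a record to a node by hashing its primary key.

    MySQL Cluster partitions rows by a hash of the partition key in a
    similar spirit; this sketch just takes a hash modulo the node count.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# The same key always maps to the same node, so a lookup by primary key
# touches exactly one machine.
assert node_for_key("user:42") == node_for_key("user:42")
```

The catch Neven points out below applies here: this only spreads load well when records with different keys are loosely linked - a query that joins across keys has to touch many nodes.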
On Sun, Sep 27, 2009 at 6:12 PM, Neven MacEwan <[email protected]> wrote:
>
> Nick
>
> These 'hacks' of MySQL are all what I would consider application
> clusters. Basically they exploit the fact that each individual's data
> is loosely linked and not real-time to distribute the load; the tighter
> and more real-time the data, the harder the DB has to work.
>
> Neven
>> I think Google does a very good job - their solution is cheap and very
>> effective. (See Hadoop and the various subprojects: HDFS/HBase/Pig)
>>
>> You will find many of the web 2.0 startups (Facebook, Digg, etc.) were
>> originally using MySQL, and because of the expense of moving away from
>> MySQL they have developed various techniques (sharding, each user
>> having their own "virtual database", global objects, etc.) which allow
>> them to scale very well. Obviously they are hacks, but they work
>> nicely.
>>
>> Scaling the web front end, though, is trivial - that was solved many
>> years ago. The DB is always where the problems are.
>> -Nick
>>
>> On Sun, Sep 27, 2009 at 4:39 PM, Aditya <[email protected]> wrote:
>>
>>> Hi, there is no one way of scaling, I'm afraid - it depends on the
>>> application, the technology stack and many other factors.
>>>
>>> And no one has ever managed to be perfect at it, not even Google and
>>> the other giants.
>>>
>>> I find some of the articles on http://highscalability.com/ useful.
>>>
>>> Cheers,
>>> Adi
>>> www.appliedeye.com
>>>
>>> On Sun, Sep 27, 2009 at 7:45 AM, Nick Jenkin <[email protected]> wrote:
>>>
>>>> There are several options:
>>>> You can use a good load balancer which remembers connections and
>>>> redirects each connection to the same machine. This mostly solves the
>>>> session problem.
>>>> If you are doing it on the cheap, store the sessions in a DB.
>>>> While memcached is certainly an option (and probably the best) - be
>>>> sure to have significantly more memory available than you require,
>>>> because if sessions start dropping out of your memcache due to lack
>>>> of memory, you might have some confused customers.
>>>>
>>>> We use memcache extensively; it is great for caching data which
>>>> doesn't change much (e.g. product data). It is probably a waste of
>>>> time caching data which changes often OR doesn't get used often -
>>>> you might as well just read it out of the DB, and you can use DB
>>>> slaves for that. With memcache it is mainly about maximising your
>>>> hit/miss ratio.
>>>>
>>>> Hire a consultant who has experience in this area before committing -
>>>> it is very expensive to get it wrong.
>>>> -Nick
>>>>
>>>> On Tue, Sep 1, 2009 at 2:34 PM, Mark S. <[email protected]> wrote:
>>>>
>>>>> Hi everyone,
>>>>> I am a bit curious about the way a large-scale web application is
>>>>> architecturally set up. Basically, it is a load-balanced web-server
>>>>> farm and/or a load-balanced database farm, which could be spread
>>>>> across different data centres and can be referred to as a
>>>>> distributed application. But I want to know: how does one keep track
>>>>> of resources, e.g. session data, in such a setup? Is it a better
>>>>> idea to store all such data in a database?
>>>>> And, in case you have a distributed setup of memcache, is it a good
>>>>> idea to keep all the data, e.g. sessions and frequently used
>>>>> queries, in the cache and use it as the primary resource for data
>>>>> retrieval, letting the database work in the back end with updating
>>>>> queries?
>>>>> Where do I go to research more into these types of "enterprise"
>>>>> level architectures?
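The caching pattern Nick describes above (read-through to the DB on a miss, with the hit/miss ratio as the figure of merit) can be sketched as follows. To keep the sketch self-contained, a plain dict with expiry times stands in for a real memcache client - a real client would expose similar get/set calls - and the function names and TTL are assumptions:

```python
import time

# Stand-in for a memcache client: a dict of key -> (value, expiry time).
_cache = {}

def cache_get(key):
    entry = _cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:
        del _cache[key]  # expired - same net effect as memcache eviction
        return None
    return value

def cache_set(key, value, ttl=300):
    _cache[key] = (value, time.time() + ttl)

def fetch_product(product_id, load_from_db):
    """Cache-aside read: try the cache first; on a miss, load from the DB
    and populate the cache so the next read is a hit."""
    key = "product:%s" % product_id
    value = cache_get(key)
    if value is None:          # miss - fall through to the database
        value = load_from_db(product_id)
        cache_set(key, value)
    return value

# Usage: the second read is served from the cache, not the "DB".
db_calls = []
def fake_db(pid):
    db_calls.append(pid)
    return {"id": pid, "name": "Widget"}

fetch_product(1, fake_db)
fetch_product(1, fake_db)
print(len(db_calls))  # 1 - the DB was only queried on the first read
```

This also illustrates the warning about sessions: if a session key is evicted under memory pressure, the miss path here silently rebuilds product data from the DB, but a session has no DB copy to rebuild from unless you also persist it - hence the advice to store sessions in a DB (or give memcache ample headroom).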
--~--~---------~--~----~------------~-------~--~----~
NZ PHP Users Group: http://groups.google.com/group/nzphpug
To post, send email to [email protected]
To unsubscribe, send email to [email protected]
-~----------~----~----~----~------~----~------~--~---
