We're using MySQL 5 (I don't know the exact release off the top of my
head).  I don't think a master/slave DB configuration is something we
can manage to set up at this point.

Querysets are fetched from the database in chunks, right?  I imagine
the select itself is quite quick, but do the tables remain locked
between chunks?  That is, if the query returns N records in total and
Django fetches N/10 of them, then performs a bunch of calculations and
saves before going back to the database for the next N/10, will the
table be locked during all of those calculations/saves?
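
To make the question concrete, here is roughly the shape of the loop I
mean (MyModel, needs_update, result and expensive_calculation below are
just placeholders, not the real code):

    # Rough sketch only; the names stand in for the real model and
    # the real computation.
    from myapp.models import MyModel

    def process_records():
        # .iterator() streams rows instead of caching the whole
        # queryset, so the database is re-read in chunks as the loop
        # consumes them.  The question is whether the table stays
        # locked while the Python-side work below runs between fetches.
        for record in MyModel.objects.filter(needs_update=True).iterator():
            value = expensive_calculation(record)  # lengthy, non-DB work
            record.result = value
            record.save()  # quick write back to the same table

    def expensive_calculation(record):
        # Stand-in for the real number crunching.
        return record.value * 2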



On Nov 24, 4:47 am, Tom Evans <tevans...@googlemail.com> wrote:
> On Wed, Nov 23, 2011 at 8:09 PM, Nikolas Stevenson-Molnar
>
> <nik.mol...@consbio.org> wrote:
> > What database are you using? You should be able to find information in
> > the documentation about the locking behavior for that database. Compare that
> > with the operations you're running and determine whether they would result
> > in an exclusive lock.
>
> > From your pseudocode, it looks like you're performing a possibly-lengthy
> > select, followed by lengthy calculations (not involving database
> > operations), and then a (presumably) quick save. AFAIK, databases never
> > lock when doing selects, so depending on the nature of your
> > calculations, you should be fine.
>
> MySQL will always lock tables for writes when reading from them.
> Therefore, any long running query on a mysql table will result in
> updates to that table being locked out.
>
> The easiest way around this is with hardware. Use a master-slave DB
> setup, and perform your long reads on the slave(s), and all your
> writes on the master.
>
> Cheers
>
> Tom
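
Just to sketch what Tom's suggestion would look like on the Django
side (purely illustrative; the 'replica' alias and router class are my
own placeholders, and it assumes a second replica entry in DATABASES):

    # A minimal read/write-splitting router along the lines Tom
    # describes.  'replica' and 'default' are assumed connection
    # aliases defined in settings.DATABASES.
    class ReadReplicaRouter(object):
        def db_for_read(self, model, **hints):
            # Send long-running SELECTs to the replica.
            return 'replica'

        def db_for_write(self, model, **hints):
            # Keep all writes on the primary connection.
            return 'default'

        def allow_relation(self, obj1, obj2, **hints):
            # Both aliases hold the same data, so relations are fine.
            return True

    # settings.py would then point at it:
    # DATABASE_ROUTERS = ['myproject.routers.ReadReplicaRouter']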
