Hi Anssi,
Creating the queryset first and reusing it later is a nice idea and I'll
add it to my tests, but unfortunately it works only when you can share
the queryset between db requests, which is impossible in the case of
multiple simultaneous requests to the server. For example, if you show
some profile d
On Aug 6, 12:09 am, Jacob Kaplan-Moss wrote:
> If you're benchmarking this against a small dataset and an in-memory
> database like SQLite I'd fully expect to see the defer/only benchmark
> to be slower. That's because every time a QS is chained it needs to be
> copied, which is a relatively expe
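Jacob's point about chaining being a copy can be illustrated with a toy
stand-in (this is not Django's actual QuerySet implementation, just the
copy-on-chain pattern it follows; class and method names here are mine):

```python
import copy

class MiniQuerySet:
    """Toy stand-in for a Django-style QuerySet: every chaining call
    returns a *copy*, so the original object is never mutated."""
    def __init__(self, fields=None):
        self.fields = fields or []

    def _clone(self):
        # Django's real _clone() does more work; copy.copy is
        # enough to show the cost structure in this sketch.
        return copy.copy(self)

    def only(self, *fields):
        clone = self._clone()
        clone.fields = list(fields)
        return clone

qs = MiniQuerySet()
limited = qs.only("power_level")

# Chaining produced a new object; the original queryset is untouched.
print(limited is qs)    # False
print(qs.fields)        # []
print(limited.fields)   # ['power_level']
```

So every .only()/.defer()/.filter() in a loop pays a clone, which is why
building the queryset once and reusing it can matter in a tight benchmark.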
Thanks Jacob, I will continue testing and report back if anything new
comes out on this issue.
On Aug 6, 3:50 am, Jacob Kaplan-Moss wrote:
> On Thu, Aug 5, 2010 at 6:14 PM, OverKrik wrote:
> > Hi Jeremy, I will release all my code after finishing the test suite -
> > I think, in about 2 weeks.
>
> I
On Thu, Aug 5, 2010 at 6:14 PM, OverKrik wrote:
> Hi Jeremy, I will release all my code after finishing the test suite -
> I think, in about 2 weeks.
I'm looking forward to seeing it. I agree that the results are
counter-intuitive; seems there's *something* going on here that
shouldn't be happeni
Hi Jeremy, I will release all my code after finishing the test suite -
I think, in about 2 weeks.
On Aug 6, 2:59 am, Jeremy Dunck wrote:
> On Thu, Aug 5, 2010 at 4:32 PM, OverKrik wrote:
> > I am performing every test 10 times, excluding one fastest and one
> > slowest result, restarting db ever
On Thu, Aug 5, 2010 at 4:32 PM, OverKrik wrote:
> I am performing every test 10 times, excluding the single fastest and
> single slowest result, restarting the db every time and performing
> 10,000 requests to warm the db before measuring execution time.
> Just in case, I've tried running tests in only-full-only-ful
1.
    users = User.objects.only("power_level")[:50]
    for user in users.iterator():
        d = user.power_level
2.
    users = User.objects.all()[:50]
    for user in users.iterator():
        d = user.power_level
1. ~24 sec
2. ~28 sec
This one looks correct.
But I am a bit confused,
On Thu, Aug 5, 2010 at 5:32 PM, OverKrik wrote:
> I am performing every test 10 times, excluding the single fastest and
> single slowest result, restarting the db every time and performing
> 10,000 requests to warm the db before measuring execution time.
> Just in case, I've tried running tests in only-full-only-ful
I am performing every test 10 times, excluding the single fastest and
single slowest result, restarting the db every time and performing
10,000 requests to warm the db before measuring execution time.
Just in case, I've tried running the tests in only-full-only-full and
defer-full-defer-full patterns and got the same results.
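The run protocol described above (10 runs, drop the single fastest and
single slowest, average the rest) can be written as a small helper; the
function name and the sample numbers below are mine, not from the
benchmark code:

```python
def trimmed_mean(timings):
    """Average a list of run times after dropping the single fastest
    and the single slowest result, as in the protocol above."""
    if len(timings) < 3:
        raise ValueError("need at least 3 runs to trim both extremes")
    kept = sorted(timings)[1:-1]  # drop the min and the max
    return sum(kept) / len(kept)

# Hypothetical per-run wall-clock times in seconds:
runs = [28.1, 24.3, 24.9, 25.0, 24.7, 25.2, 24.8, 24.6, 25.1, 23.9]
print(trimmed_mean(runs))
```

Trimming both extremes like this mostly guards against one cold-cache
or one interrupted run skewing the average.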
Hi Jacob, thanks for the reply and sorry for not including enough
additional info in the original post. I was thinking that this issue
could be related only to the Python part of the bench, as everything
looked OK with the queries. Just in case, I've tested the query
generated by the only/defer queryset using a raw SQL bench and
compared i
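One way to sanity-check that part without Django at all is to compute
the SELECT column list that only()/defer() should produce and confirm
the two specs are equivalent. The field names below come from this
thread; the helper itself is a hand-rolled sketch, not Django's query
compiler:

```python
FIELDS = ["id", "name", "email", "age", "info", "power_level"]

def select_columns(fields, only=None, defer=None):
    """Return the columns a query should fetch. The primary key is
    always loaded; only() keeps the named fields, defer() drops them."""
    if only is not None:
        return [f for f in fields if f == "id" or f in only]
    if defer is not None:
        return [f for f in fields if f == "id" or f not in defer]
    return list(fields)

# only("power_level") and defer("name","email","age","info") should
# fetch the same columns, so the raw SQL cost should match too.
a = select_columns(FIELDS, only={"power_level"})
b = select_columns(FIELDS, defer={"name", "email", "age", "info"})
print(a)       # ['id', 'power_level']
print(a == b)  # True
```

If the SQL side really is identical, any remaining gap has to come from
the Python side of the ORM.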
On Thu, 2010-08-05 at 16:09 -0500, Jacob Kaplan-Moss wrote:
> - What database engine are you using?
> - Where's the database being stored (same server? other server?
> in-memory?)
> - How much data is in the database?
> - How big is that "info" field on an average model?
- Were OS/database level
On Thu, Aug 5, 2010 at 3:44 PM, OverKrik wrote:
> Hi, I am testing performance of three querysets
Good! We need as many benchmarks as we can get our hands on.
> I was expecting first two querysets to be faster, but for some reason
> it takes about ~105sec to finish (3) and ~130sec for (1) and (2
Hi, I am testing performance of three querysets
1.
    for pk in xrange(1, 5):
        user = User.objects.only("power_level").get(pk=pk)
        d = user.power_level
2.
    for pk in xrange(1, 5):
        user = User.objects.defer("name", "email", "age", "info").get(pk=pk)
        d = user.power_level
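One plausible source of extra Python-side cost in the only()/defer()
variants is that deferred fields are loaded lazily through a descriptor
on first access. The class below is a minimal stand-in I wrote to show
the mechanism, not Django's actual DeferredAttribute:

```python
class LazyField:
    """Descriptor that fetches a field's value on first access and
    caches it on the instance, roughly how deferred model fields behave."""
    def __init__(self, name, loader):
        self.name = name
        self.loader = loader  # callable standing in for a DB round trip

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.name not in obj.__dict__:
            # First access: hit the "database" and cache the value.
            obj.__dict__[self.name] = self.loader()
        return obj.__dict__[self.name]

loads = []

class User:
    # 'info' was deferred, so reading it triggers an extra load.
    info = LazyField("info", lambda: loads.append(1) or "big blob")

u = User()
u.info              # first access triggers the loader
u.info              # cached on the instance: no second load
print(len(loads))   # 1
```

Per-instance descriptor machinery like this is pure Python overhead,
which could explain a deferred queryset losing to a plain one on small
rows even when the SQL is cheaper.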