Here is a use-case that can benefit from an ORM.
Let's say we have 2 functions that manipulate the same field in 2 different 
ways.

Here is how it would be done using the DAL:

def a_child_was_born_in(countryName, cityName):
    city = db.Country(Name=countryName).City(Name=cityName).select().first()
    city.update_record(Population=city.Population + 1)

def a_person_has_died_in(countryName, cityName):
    city = db.Country(Name=countryName).City(Name=cityName).select().first()
    city.update_record(Population=city.Population - 1)

Now, let's say that both functions are being used by different contexts 
within the same transaction (hypothetically, say, from some different 
functions, way deep in the call stack).

# In context 1:
a_child_was_born_in('France', 'Paris')
...
# In context 2:
a_person_has_died_in('France', 'Paris')

This would issue 4 round-trips to the database - 2 selects and 2 updates.

Now, let's say we want to optimize that, so we write a "lazy" version of 
those functions.
How would we go about doing that?
Well, we "could" replace the ".update_record" with an ".update".

def a_child_was_born_in(countryName, cityName):
    city = db.Country(Name=countryName).City(Name=cityName).select().first()
    city.update(Population=city.Population + 1)

def a_person_has_died_in(countryName, cityName):
    city = db.Country(Name=countryName).City(Name=cityName).select().first()
    city.update(Population=city.Population - 1)

Would that work?
Well, let's see, assuming the initial population value of Paris is 2 
million.
When a child is born, the value would get incremented locally.
But the Row object in the 'city' variable is not kept in memory once the 
function returns.
So we need to commit the transaction after each call.
But wait a minute, that would get us back to 4 operations... Might as well 
leave the update_record the way it was.
What do we do?
Well, we could make the laziness "optional", and call the first one eagerly 
and the second lazily.
Yes, we would need to keep track of the order in which we call them, but if 
we do it right, we could get it down to 3 operations (2 selects and one 
update).
Would that work?
Well, no, because then we would lose the second update once the second 
function returns...
Can we still do something?
Well, yes, we can activate caching on the City select, so 
its values would survive across transactions - given that we give 
the cache a long enough time-out.
This may not help us with the updates, but it could knock off the second 
query (the select operation in the second function).
So the best we get is 3 operations - 1 select and 2 updates.
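
As a rough sketch of what enabling that cache could look like with web2py's 
DAL - assuming a simplified single-table query and the `cache` object that 
web2py provides in its controller environment - the select can take a cache 
argument:

def a_person_has_died_in(countryName, cityName):
    # Sketch only: cache the select in RAM for up to an hour, so a repeated
    # call with the same query within that window is served from the cache
    # instead of hitting the database.
    city = db(db.City.Name == cityName).select(
        cache=(cache.ram, 3600), cacheable=True).first()
    city.update(Population=city.Population - 1)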

Now, here is the same code, using an ORM:

def a_child_was_born_in(countryName, cityName):
    city = Country(Name=countryName).City(Name=cityName)
    city.Population += 1


def a_person_has_died_in(countryName, cityName):
    city = Country(Name=countryName).City(Name=cityName)
    city.Population -= 1

The syntactic difference is small, but the semantic implication is profound.

The automatic cache-mechanism in the ORM will detect that we 
are querying the same record, and so would not query the database in the 
second function - just return the same object already in memory.
So now we're down to 3 actual operations - 1 select and 2 updates.

But it doesn't stop there...
In the DAL case, we cached the "values" inside the city row, but the 
'city' variable in the first function is still a separate Row object from 
the 'city' object in the second function, so we couldn't do lazy updates.
But an ORM can have an "Identity Map", which makes sure that the 
same object is returned.
It would be bound to two different name-spaces, but it would be the same 
object.
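
A minimal sketch of the idea (hypothetical code, not any particular ORM's 
API): an identity map keyed by (table, primary key) that hands back the same 
in-memory object for repeated lookups:

class IdentityMap:
    # Hypothetical sketch: keep one shared in-memory object per
    # (table, primary key).
    def __init__(self):
        self._objects = {}

    def get_or_load(self, table, key, loader):
        # Return the already-loaded object if present; otherwise run the
        # loader (the one SELECT) and remember the result.
        if (table, key) not in self._objects:
            self._objects[(table, key)] = loader()
        return self._objects[(table, key)]

With such a map, the second lookup of ('City', 'Paris') never reaches the 
database; both functions end up mutating the very same object.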
Now we could implement a "truly" lazy update. The increment that is done 
in the first function would be reflected in the second one, because the 
same object would be returned.

So now we're down to 2 operations - one select, and one update - the update 
would automatically be issued for us at transaction-commit time, as it 
would be labeled "pending" by the time it gets there, using the 
"Unit-of-Work" pattern.

But it doesn't have to even stop there...
The "Unit-of-Work" mechanism has this "dirty" label, which signifies that 
the current value within a record-object has different value from the one 
in the database. Now, it may be implemented poorly, and just get flagged as 
"dirty" on any update to it, or it could store the original value, and have 
the dirty-check deferred to the last minute - in which the current value 
would be compared to the original-stored one, and only be deemed "dirty" if 
there's a mis-match.

In this case, we incremented once, and decremented once, so the value goes 
back to what it was, and so the dirty-check would fail and yield a "clean" 
flag - so the entire update operation would not even occur at 
transaction-end time.
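
Extending the sketch above with that deferred check (still hypothetical 
code): commit compares each object's current values against a snapshot taken 
at load time, and skips the UPDATE when nothing really changed:

class UnitOfWork:
    # Hypothetical sketch with a deferred dirty-check: snapshot the values
    # at load time and compare at commit, skipping UPDATEs for clean rows.
    def __init__(self):
        self._snapshots = {}   # tracked object -> copy of its loaded values

    def track(self, obj):
        self._snapshots[obj] = dict(obj.values)

    def commit(self):
        for obj, original in self._snapshots.items():
            # +1 followed by -1 leaves the values equal to the snapshot,
            # so this row is "clean" and no UPDATE is sent at all.
            if obj.values != original:
                obj.flush_update()   # hypothetical hook issuing the UPDATE
        self._snapshots.clear()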

So we are down to 1 - a single operation - the first select.

These are the kinds of benefits an ORM may have.




