> collection = self.get_collection(state, passive=passive)
> if collection is PASSIVE_NORESULT:
> self.fire_remove_event(state, value, initiator)
>
> so some more complete way of not exiting the event loop too soon would
> need to be implemented.
>
> Jason, any comments on this ?
>
> > adding and removing the entry in a single
> > operation. I can imagine that there would be many variations on business
> > rules for moving an item that would be difficult to encapsulate in a
> > common
> > operation within SA.
>
> > --
> > Mike Conley
>
I can imagine that there would be many variations on business
> rules for moving an item that would be difficult to encapsulate in a common
> operation within SA.
>
> --
> Mike Conley
>
> On Mon, Apr 6, 2009 at 2:10 AM, jean-philippe dutreve
> wrote:
>
>
>
> > C
the entry is contained in 2 accounts temporarily.
It can lead to false computations (when summing balances, for instance).
On Apr 5, 22:03, jason kirtland wrote:
> jean-philippe dutreve wrote:
> > Hi all,
>
> > I wonder if SA can handle this use case:
>
> > An Account can contain Entries ordered by a 'position' attribute.
Hi all,
I wonder if SA can handle this use case:
An Account can contain Entries ordered by a 'position' attribute.
mapper(Account, table_accounts, properties=dict(
    entries=relation(Entry, lazy=True,
        collection_class=ordering_list('position'),
        order_by=[table_entries.c.position],
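For readers unfamiliar with ordering_list, the bookkeeping it performs can be sketched in plain Python. This is a simplified, hypothetical stand-in, not SA's actual implementation: a list subclass that rewrites each item's position attribute on every mutation.

```python
class PositioningList(list):
    """Simplified stand-in for SQLAlchemy's ordering_list: keeps each
    item's 'position' attribute equal to its index in the list."""

    def _renumber(self, start=0):
        # Rewrite positions from 'start' to the end of the list.
        for i in range(start, len(self)):
            self[i].position = i

    def append(self, item):
        super().append(item)
        item.position = len(self) - 1

    def insert(self, index, item):
        super().insert(index, item)
        self._renumber(index)

    def remove(self, item):
        super().remove(item)
        self._renumber()

class Entry:
    """Hypothetical mapped class with a 'position' column."""
    def __init__(self, name):
        self.name = name
        self.position = None

entries = PositioningList()
entries.append(Entry('a'))
entries.insert(0, Entry('b'))   # forces a renumber of both items
```

Note that every insert renumbers the tail of the list, which is where the performance cost discussed below comes from.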
given
> > a list subclass, it mistakenly ignores the subclass method
> > implementations. The below will break, if and when that's fixed to
> > match the pure Python implementation in the standard lib.
>
> > Calling list.extend(account_entries, new_entries) is proba
hook. That may perform better, with
> the trade-off that the position attribute can't be trusted to be in sync
> with the list order.
>
> jean-philippe dutreve wrote:
> > Below is the profiling of code that added 1200 items into an
> > ordering_list relation. I had to b
Below is the profiling of code that added 1200 items into an
ordering_list relation. I had to bypass the ordering_list stuff for
bulk additions in order to get better performance (down to 2
seconds).
Hope this post helps to improve this part (using 0.5.0rc1, Python 2.5,
Linux i686, 1.5 GB RAM).
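The per-item renumbering sketched earlier is O(n) per append, hence roughly O(n²) for 1200 additions. The bulk bypass mentioned above can be sketched as: assign positions in one pass, then extend via the plain list.extend. The Item class and bulk_append helper are hypothetical; with a live SA session the ORM would still need to see the additions.

```python
class Item:
    """Hypothetical stand-in for a mapped Entry with a 'position' column."""
    def __init__(self):
        self.position = None

def bulk_append(entries, new_items, start=None):
    """Assign positions in one pass, then extend with the plain
    list.extend, skipping per-item renumbering (O(n) instead of O(n^2)).
    Sketch only: bypassing the collection's own methods also bypasses
    any ORM instrumentation attached to them."""
    if start is None:
        start = len(entries)
    for offset, item in enumerate(new_items):
        item.position = start + offset
    list.extend(entries, new_items)

entries = []
bulk_append(entries, [Item() for _ in range(1200)])
```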
SA
fine. Thank you for your help.
jean-philippe
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send e
ns.code AS code,
winancial_integ.acc_transactions.description AS description,
acc_entries_1.entry_id AS entry_id, acc_entries_1.account_id AS
account_id, acc_entries_1.transaction_id AS transaction_id
On May 13, 5:05 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On May 13, 2008, at 11:03 AM, jean-philippe dutre
saction_id, ...
The API in 0.5 is very good.
On May 13, 4:41 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On May 13, 2008, at 9:08 AM, jean-philippe dutreve wrote:
>
>
>
> > I'd like to delete all Transactions contained in an account hierarchy
> > without loading any t
I'd like to delete all Transactions contained in an account hierarchy
without loading any transaction into memory, just DB work with the SQL
DELETE statement constructed by SA.
The query that defines the transactions is:
Session.query(Transaction).join(['entries','account','root'],
aliased=True).fi
Thank you for your support. You have done an awesome work overall.
After debugging, I've noticed that the issue is related to eager
loaded relations. If you try the example script with the _descendants
relation having lazy=None or True, then the extension method is not
called anymore.
Is there a way to fire the extension method even without eager
loading?
> i cant
Thank you for the suggestion, but the extension method isn't fired,
even without raw SQL:
mapper(Account, table_accounts, extension=AccountLoader(),
    properties=dict(
        children=relation(Account, lazy=None,
            primaryjoin=table_accounts.c.parent_id==table_accounts.c.account_id,
Hi all,
I'm trying to load a whole tree of Account objects (mapped instances)
in a single SELECT, with unlimited depth.
I'm using the PostgreSQL connectby function from the tablefunc module.
It returns a row for each node in a depth-first visit.
sql = """
SELECT acc_accounts.* FROM connectby('
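connectby is PostgreSQL-specific (from the tablefunc extension). For readers without it, a recursive CTE enumerates the same tree in one SELECT; the sketch below runs on SQLite, and the acc_accounts layout and sample rows are assumed for the example:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE acc_accounts (
    account_id INTEGER PRIMARY KEY,
    parent_id  INTEGER,
    name       TEXT
);
INSERT INTO acc_accounts VALUES
    (1, NULL, 'root'), (2, 1, 'assets'), (3, 2, 'cash'), (4, 1, 'income');
""")
# One SELECT walks the whole tree, carrying the depth along as 'level'.
rows = conn.execute("""
    WITH RECURSIVE tree(account_id, parent_id, name, level) AS (
        SELECT account_id, parent_id, name, 0
        FROM acc_accounts WHERE parent_id IS NULL
        UNION ALL
        SELECT a.account_id, a.parent_id, a.name, t.level + 1
        FROM acc_accounts AS a JOIN tree AS t ON a.parent_id = t.account_id
    )
    SELECT name, level FROM tree
""").fetchall()
```

Unlike connectby, the visit order of a plain recursive CTE is not guaranteed to be depth-first, so the rows may need re-ordering client-side when rebuilding the tree.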
On Mar 7, 02:29, Michael Bayer <[EMAIL PROTECTED]> wrote:
> logging module itself throws UnicodeDecodeError ?
Yes, in logging.format: ... = "%s" % msg
with msg being the exception message encoded in UTF-8, while the
default encoding is ASCII.
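Under Python 2 this fails because logging interpolates the UTF-8 byte string using the ASCII default encoding. One defensive workaround, a sketch rather than SA's own fix, is to decode exception messages to text before they reach logging:

```python
def safe_message(msg, encoding='utf-8'):
    """Return a text version of msg, decoding byte strings first so that
    logging's "%s" interpolation never triggers a UnicodeDecodeError.
    (Hypothetical helper, not part of SQLAlchemy.)"""
    if isinstance(msg, bytes):
        return msg.decode(encoding, errors='replace')
    return msg

decoded = safe_message('héllo'.encode('utf-8'))
```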
> are you sending exception messages using logging.debug() or s
Hi all,
I use SQLAlchemy 0.4.2p3, PostgreSQL 8.2.4 (UTF8-configured) and
psycopg2.
I have no issue with Unicode DATA in and out of the database.
My problem is that when an IntegrityError is thrown, the exception
message is a string encoded in UTF-8,
and the logging module throws a UnicodeDecodeError.
> its actually not eager loading the second list of "accounts"
If there is no eager loading on the second list, I don't understand
why a 'SELECT entries ...' is executed when I just
ask for account.name and not account.entries.
> untested, i.e. join_depth on a mapper that's not self-referential
I've uploaded the script eagerload_all.py, which reproduces the issue.
Hope it helps.
On Sep 11, 16:43, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Sep 11, 2007, at 10:28 AM, Jean-Philippe Dutreve wrote:
>
>
>
> > The name is on account, not on entry.
> > Transact
action being loaded here since that's linked to an
> Entry object, which you said you didn't want to load.
>
> On Sep 11, 2007, at 5:28 AM, Jean-Philippe Dutreve wrote:
>
>
>
> > Here's my issue: 3 tables
>
> > CREATE TABLE accounts (
> > account_
Here's my issue: 3 tables
CREATE TABLE accounts (
    account_id serial PRIMARY KEY,
    name varchar(16) NOT NULL UNIQUE
);
CREATE TABLE transactions (
    transaction_id serial PRIMARY KEY
);
CREATE TABLE entries (
    entry_id serial PRIMARY KEY,
    account_id integer NOT NULL REFERENCES accounts
be rolled back.
A risky but efficient feature.
On Sep 9, 19:41, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Sep 9, 2007, at 12:21 PM, Jean-Philippe Dutreve wrote:
>
> > I prefer to put constraints in the database rather than in the application/
> > framework because several ap
applications can access
the same database, and applications can disappear quicker than the DB.
Fortunately, NOT NULL FKs fill my need.
On Sep 9, 16:52, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Sep 9, 2007, at 5:09 AM, Jean-Philippe Dutreve wrote:
>
>
>
> > Another solution could
Another solution could be to invert the order:
- first delete the parent (so the RESTRICT rule is immediately fired);
- second, set the FKs to NULL.
On Sep 8, 19:52, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Sep 8, 2007, at 12:54 PM, Jean-Philippe Dutreve wrote:
>
>
>
> &g
My need is related to PostgreSQL's ON DELETE RESTRICT/NO ACTION: I
want an SQL exception as soon as a parent having any existing child is
deleted. I don't want cascading deletes of the children, just the
parent, and only if it has no child.
I've noticed that SA (0.4) first SETs all FKs NULL in the child tables
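The desired behaviour can be seen with any engine that enforces ON DELETE RESTRICT. A sketch using a hypothetical parents/children schema; SQLite stands in for PostgreSQL here and needs its foreign-key pragma enabled:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when on
conn.executescript("""
CREATE TABLE parents (parent_id INTEGER PRIMARY KEY);
CREATE TABLE children (
    child_id  INTEGER PRIMARY KEY,
    parent_id INTEGER NOT NULL
        REFERENCES parents ON DELETE RESTRICT
);
INSERT INTO parents VALUES (1), (2);
INSERT INTO children VALUES (10, 1);
""")
# Deleting a childless parent succeeds...
conn.execute("DELETE FROM parents WHERE parent_id = 2")
# ...but deleting a parent that still has a child raises immediately,
# with no cascade touching the children.
try:
    conn.execute("DELETE FROM parents WHERE parent_id = 1")
    raised = False
except sqlite3.IntegrityError:
    raised = True
```

If SA nulls the child FKs before issuing the parent DELETE, the RESTRICT rule never gets a chance to fire, which is exactly the complaint above.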
Thanks Jason for your clear explanation.
Is there any way to follow your suggestion and call the pure Python
version without copying/pasting it into my module?
On Sep 7, 16:28, jason kirtland <[EMAIL PROTECTED]> wrote:
> Jean-Philippe Dutreve wrote:
> > I was using SA 0.3.9 to inser
I was using SA 0.3.9 to insert an item into an ordered list with the
bisect method insort (py 2.5):
mapper(Entry, table_entries)
mapper(Account, table_accounts, properties=dict(
    entries=relation(Entry, lazy=True,
        backref=backref('account', lazy=False),
        collection_class=ordering_list('position'),
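The insort approach relies only on item comparison. A minimal sketch of that part, with the Entry class here being a hypothetical stand-in for the mapped class, shows how bisect.insort keeps a list ordered by position:

```python
import bisect

class Entry:
    """Hypothetical stand-in for the mapped class; insort only needs
    items to be comparable with <."""
    def __init__(self, position):
        self.position = position
    def __lt__(self, other):
        return self.position < other.position

entries = []
for pos in (30, 10, 20):
    bisect.insort(entries, Entry(pos))  # binary search, then insert
```

Note this keeps the list sorted by an existing position value, which is the opposite of what ordering_list does (derive position from list order), so combining the two is where the instrumentation issue above bites.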
It seems that the bug fixed by changeset 2795 (column_prefix with
synonym) is still present in the 0.4 branch.