Re: [sqlalchemy] custom __init__ methods not being invoked

2013-02-13 Thread Ryan McKillen
Thanks for the details. Makes sense.

Still not consistent with what I'm experiencing in my app, although it is
consistent with what I see when I put a simple example/test together. I'll
keep digging...

— RM

On Feb 12, 2013, at 4:51 PM, Michael Bayer  wrote:

> it's called in all SQL loading scenarios, including that of relationships.
> 
> A relationship load might not actually result in the object being loaded from 
> the DB in these scenarios:
> 
> 1. the relationship is a simple many-to-one, and the object could be located 
> by primary key from the identity map without emitting a SQL load.
> 
> 2. the relationship emitted the SQL, but as it loaded the rows, the objects 
> matching those rows were already in the identity map, so they weren't 
> reconstructed.
> 
> In both scenarios above, the objects were still guaranteed to have gotten 
> into the identity map in one of only three possible ways: 
> 
> 1. they were loaded at some point earlier, in which case your reconstructor 
> was called
> 
> 2. they moved from "pending" to "persistent", meaning you added them with 
> add() and then they got inserted, so you'd want to make sure that whatever 
> your regular __init__ does is appropriate here
> 
> 3. the objects were detached, and were add()ed back into the session, but 
> this still implies that #1 or #2 were true for a previous Session.
> 
> On Feb 12, 2013, at 5:29 PM, Ryan McKillen  wrote:
> 
>> It doesn't appear that the method decorated by @orm.reconstructor is called 
>> on objects retrieved/loaded as relationships.
>> 
>> Not my desired behavior, but I guess it is consistent with the docs:
>> "When instances are loaded during a Query operation as in 
>> query(MyMappedClass).one(), init_on_load is called."
>> 
>> So if I need it to be executed in a relationship-loading situation, what's 
>> the best way to go about it? Thanks.
>> 
>> — RM
>> 
>> 
>> On Mon, Jan 7, 2013 at 3:36 AM, Ryan McKillen  
>> wrote:
>>> Worked like a charm. Thanks.
>>> 
>>> — RM
>>> 
>>> 
>>> On Mon, Jan 7, 2013 at 6:26 PM, Michael van Tellingen 
>>>  wrote:
 See 
 http://docs.sqlalchemy.org/en/latest/orm/mapper_config.html#constructors-and-object-initialization
 
 
 
 On Mon, Jan 7, 2013 at 4:47 AM, RM  wrote:
 > I have a class which inherits from Base. My class has a metaclass which
 > inherits from DeclarativeMeta. Among other things, the metaclass adds an
 > __init__ method to the class dictionary. When I instantiate an instance 
 > of
 > my class directly, my __init__ method is invoked, but if I use the ORM to
 > retrieve an instance, my __init__ method is not invoked.
 >
 > A metaclass serves better than a mixin for what I am trying to 
 > accomplish.
 > However, I did experiment with a mixin and saw the same behavior as
 > described above.
 >
 > Any ideas? Many thanks.
 >
 > --
 > You received this message because you are subscribed to the Google Groups
 > "sqlalchemy" group.
 > To view this discussion on the web visit
 > https://groups.google.com/d/msg/sqlalchemy/-/oDj_bHNvP7EJ.
 > To post to this group, send email to sqlalchemy@googlegroups.com.
 > To unsubscribe from this group, send email to
 > sqlalchemy+unsubscr...@googlegroups.com.
 > For more options, visit this group at
 > http://groups.google.com/group/sqlalchemy?hl=en.
 
>> 
>> 
> 


Re: [sqlalchemy] storing a large file into a LargeBinary question

2013-02-13 Thread Andre Charbonneau
Thanks for the feedback Michael.  Lots of good information in there.
I will read up on buffer() and memoryview() and also on custom
SQLAlchemy types.

Thanks again,
  Andre
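A hedged sketch of the custom-type idea mentioned in the reply below: a `TypeDecorator` that wraps bind values in `memoryview` so the DBAPI can use the existing buffer without an extra internal copy. `ZeroCopyBinary` is an illustrative name, and whether this actually saves a copy depends on the driver.

```python
from sqlalchemy import LargeBinary
from sqlalchemy.types import TypeDecorator

class ZeroCopyBinary(TypeDecorator):
    """LargeBinary variant that hands the DBAPI a memoryview on bind."""
    impl = LargeBinary
    cache_ok = True

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        # memoryview exposes the existing buffer instead of copying it
        return memoryview(value)

# the conversion applied at statement-execution time:
mv = ZeroCopyBinary().process_bind_param(b"uploaded bytes", None)
print(type(mv).__name__)   # memoryview
```

Note this avoids a copy, not the initial read into memory, which matches the caveat in the reply.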

On 13-02-12 04:33 PM, Michael Bayer wrote:
> On Feb 12, 2013, at 3:22 PM, Andre Charbonneau 
>  wrote:
>
>> Greetings everyone,
>> I have a piece of code in a web app where I need to store a large binary
>> file (uploaded file stored on disk by Apache server), into an object's
>> LargeBinary attribute.
>>
>> That's pretty easy to do with a syntax like:
>>
>>myobject.uploaded_file = xyz.file.read()
>>
>>
>> The problem is that I don't want to load the entire file into memory
>> when I set the LargeBinary attribute.
>>
>>
>>
>> If my understanding is correct, the above call will first cause the
>> entire content of the uploaded file to be loaded into memory and then
>> that is assigned to the myobject.uploaded_file LargeBinary attribute. 
>> Correct?
>>
>> (Then when sqlalchemy eventually issues the INSERT statement to store
>> the data in the database... But then I don't really know how the data
>> transfer is done...)
>>
>>
>> I have tried to find another way of passing the data to the LargeBinary
>> object that would not have to load the entire file into memory at once,
>> but stream it in chunks during the INSERT statement, but I was not able
>> to find anything. :-(
> In the old days these streaming binary interfaces were common, but as memory 
> has become plentiful, you don't see them used anymore. Even with systems like 
> Oracle, you see client libraries that rely on setting the allowed memory size 
> to be bigger than the largest value you need to store.
>
> psycopg2 does work with "buffer()" and "memoryview()" objects as the input to 
> a "bytea" column, and you could send these in as arguments where SQLAlchemy 
> should pass them through (or if not, it's easy to make a custom type that 
> passes them through). These objects don't appear to work around having to 
> load the data into memory, though; they just make memory usage more efficient 
> by removing the need for it to be copied internally. I'm not familiar enough 
> with them to know whether they support some way to "stream" from a file 
> handle.
>
> There's also a facility I've not previously heard of in Postgresql and 
> psycopg2 called the "large object" system, which appears to be an entirely 
> separate table "pg_largeobject" that stores them.  Dropping into psycopg2, 
> you can store and retrieve these objects using the object interface:
>
> http://initd.org/psycopg/docs/connection.html#connection.lobject
>
> as far as how to get that data into your table, it seems like you'd need to 
> link to the OID of your large object, rather than using bytea: 
> http://www.postgresql.org/docs/current/static/lo-funcs.html .  So you'd need 
> to forego the usage of bytea.   Again SQLAlchemy types could be created which 
> transparently perform these tasks against the OID.
>
> I'd ask on the psycopg2 list which feature they recommend, and I'm betting 
> they will say that memory is very cheap and plentiful these days and you 
> should just assume the data will fit into memory.
>
>


-- 
André Charbonneau
Research Computing Support Analyst
Shared Services Canada | National Research Council Canada
Services partagés Canada | Conseil national de recherches Canada
100 Sussex Drive | 100, promenade Sussex
Ottawa, Ontario  K1A 0R6
Canada
andre.charbonn...@ssc-spc.gc.ca
Telephone | Téléphone:  613-993-3129







Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-13 Thread shaung
On Tuesday, February 12, 2013 9:13:48 PM UTC+9, betelgeuse wrote:

> I had a similar problem. 
> I had an MS SQL database that another application created, and I needed to 
> select data from it. There were lots of tables, so I tried reflection, but 
> it was slow, so I decided to use the SA declarative method. But declaring all 
> the tables again in Python was too much work. I used sqlautocode to 
> generate declarative table classes and use them in my models with some 
> minor modifications. If the db structure does not change too often, this 
> will speed things up. 
>
>
I've been doing it that way with Django.
I tried sqlautocode but got an "ImportError: cannot import name 
_deferred_relation" error.
(I'm using SA 0.8.)
Maybe something is broken, but I don't have much time to look into it :(





Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-13 Thread 耿 爽
On Tue, Feb 12, 2013 at 9:37 PM, Simon King  wrote:

> > Caching the metadata should be fairly easy if you are happy with that
> > approach. I think MetaData instances are picklable:
> >
> >
> http://stackoverflow.com/questions/11785457/sqlalchemy-autoloaded-orm-persistence
>

Just pickled all the metadata and it works nicely.
Thanks.

Br,
Shaung
