ah, ok...are you saying you changed your own search_path ?
Yes, I've changed my search_path so that my 'runtime' user (a different
user than 'jdu') doesn't need the schema prefix in queries (schema 'jdu'
was prepended in the search_path). I removed this change, went back to
the default search_path, and it worked!
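For anyone hitting the same thing, resetting a role's search_path back to the PostgreSQL default looks roughly like this ('runtime_user' is a made-up role name):

```sql
-- restore the stock resolution order: the role's own schema, then public
ALTER ROLE runtime_user SET search_path TO "$user", public;
-- in an open session you can check the current value with:
SHOW search_path;
```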
[in response to a batch-insert-is-slow complaint on the Elixir list]
On 7/19/07, AndCycle [EMAIL PROTECTED] wrote:
I don't think the DB definition is the major problem;
it could be SQLAlchemy's problem,
because currently it hasn't implemented real transaction commands in
most DB implementations,
all
SQLAlchemy almost certainly implements transactions. The point is that INSERT
is bad for bulk loading data. (I presume you are bulk loading because you want
to use transactions for batch processing.)
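As a rough sketch of the difference batching makes (stdlib sqlite3 here just for illustration; the table name is made up), many rows per statement and per transaction beat one INSERT-plus-commit per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (n INTEGER)")

rows = [(i,) for i in range(10_000)]

# one executemany() inside one transaction: far fewer round trips
# and commits than inserting and committing row by row
conn.executemany("INSERT INTO items VALUES (?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 10000
```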
Loading data correctly and quickly depends strongly on the DB.
E.g. for PostgreSQL you
Isn't that what Elixir already does?
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to
Isn't that what Elixir already does?
Not really. Frankly, I don't know Elixir well, just some impressions.
Elixir is sort of syntactic sugar over SA, with very little
decision-making inside. It leaves all the decisions, the routine
ones too, to the programmer. At least that's how I understood it.
This
I will comment that DBAPI has no begin() method. When you use DBAPI
with autocommit=False, you're in a transaction, always. SQLAlchemy
defines a transaction abstraction on top of this that pretends to
have a begin(). It's when there's *not* a SQLAlchemy transaction going
on that you'll see a COMMIT
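A small sketch of that point with stdlib sqlite3 (the file path is a throwaway temp file): with autocommit off, the first statement implicitly opens a transaction, and a second connection cannot see the work until commit():

```python
import os
import sqlite3
import tempfile

# a throwaway on-disk database so two connections can share it
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

# no explicit begin(): this INSERT implicitly starts a transaction
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # uncommitted, invisible

writer.commit()
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # visible now
print(before, after)
```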
This capability already exists.
For example, if you want specific SQL types, those are implemented.
If you want specifically a CHAR column, use types.CHAR. Or for VARCHAR,
use types.VARCHAR. Other implemented SQL types are TIMESTAMP, CLOB
and BLOB.
But that's not all. For types that are totally
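For reference, a minimal sketch of asking for exact SQL types (written against a current SQLAlchemy API; the 0.3-era setup differed, and the table and column names here are invented):

```python
from sqlalchemy import Column, MetaData, Table, types
from sqlalchemy.schema import CreateTable

metadata = MetaData()
docs = Table(
    "docs", metadata,
    Column("code", types.CHAR(4)),        # exact CHAR, not the generic String
    Column("title", types.VARCHAR(200)),  # exact VARCHAR
)

# render the DDL to confirm the requested types come through verbatim
ddl = str(CreateTable(docs))
print(ddl)
```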
I just put up a little bit of new docs to this effect.
Can anyone explain why this http://paste.turbogears.org/paste/1510
fails at the last assert but this
http://paste.turbogears.org/paste/1511 works?
Sorry for the noob question. Why do I only end up with 2 inserted
rows when doing this?
from sqlalchemy import *
db = create_engine('mysql://login:[EMAIL PROTECTED]/database')
metadata = BoundMetaData(db)
temptable = Table('temptable', metadata,
... Column('col1', Integer),
...
Apologies for not responding for a while; I was stuck in the project.
OK, so this is what's happening: the mapped objects are created during the
first request to the application. So create_engine is getting called
only one time, passing in the creator as a lambda: db_obj where db_obj is the
ref to
Andreas Kostyrka wrote:
Loading data correctly and quickly depends strongly on the DB.
E.g. for PostgreSQL you can achieve a magnitude of speedup by using COPY
FROM STDIN;
But these kinds of hacks are out of scope for SQLAlchemy.
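For the record, the Postgres fast path being referred to looks like this at the SQL level (the table and columns are hypothetical; with a driver such as psycopg2 the same statement is driven through its copy methods):

```sql
-- bulk load rows streamed from the client in one statement,
-- skipping per-row INSERT overhead entirely
COPY temptable (col1, col2) FROM STDIN;
```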
On 7/19/07, Michael Bayer [EMAIL PROTECTED] wrote:
On Jul 19, 2007, at 2:08 PM, Arun Kumar PG wrote:
Apologies for not responding for a while; I was stuck in the project.
OK, so this is what's happening: the mapped objects are created during
the first request to the application. So create_engine is
getting called one time only, passing
On Jul 19, 2007, at 2:00 PM, one.person wrote:
Sorry for the noob question. Why do I only end up with 2 inserted
rows when doing this?
from sqlalchemy import *
db = create_engine('mysql://login:[EMAIL PROTECTED]/database')
metadata = BoundMetaData(db)
temptable = Table('temptable',
On 7/18/07, Olli Wang [EMAIL PROTECTED] wrote:
I just found that your SAContext has a misspelling of elixir; you spell it as
exilir,
Fixed in 0.3.3. I tend to pronounce that word the other way so that's
how I spelled it.
http://sluggo.scrapping.cc/python/sacontext/
Also, I have a little question
On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:
On Jul 19, 2007, at 2:08 PM, Arun Kumar PG wrote:
Apologies for not responding for a while; I was stuck in the project.
OK, so this is what's happening: the mapped objects are created during
the first request to the application. So
On Jul 19, 2007, at 4:39 PM, Arun Kumar PG wrote:
Will this be a problem even if I attach a new session per incoming
request, i.e. per thread handling a request? So basically it's because of
having the same copy of mapped objects? How can I solve the above
problem in the existing way without
Or, you can create your mapped objects per request, yes, or perhaps
per thread.
How much can this cost in terms of performance?
On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:
On Jul 19, 2007, at 4:39 PM, Arun Kumar PG wrote:
Will this be a problem even if I attach a new session
Or maybe just keep on using the QueuePool approach, as it will always make
sure to return the same connection to the current thread?
On 7/20/07, Arun Kumar PG [EMAIL PROTECTED] wrote:
Or, you can create your mapped objects per request, yes, or perhaps
per thread.
how much can this cost in
On Jul 19, 2007, at 5:06 PM, Arun Kumar PG wrote:
or may be just keep on using the QueuePool approach as it will
always make sure to return the same connection to the current thread ?
Like I said, I don't see how that helps any. A single Session that's
in flush() holds onto a single
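The per-thread pattern this discussion points toward is scoped_session, which keeps the shared mappers but gives each thread its own Session; a minimal sketch against a current SQLAlchemy API (in-memory SQLite as a stand-in database):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
# a registry that hands out one Session per thread
Session = scoped_session(sessionmaker(bind=engine))

s1 = Session()
s2 = Session()
same_in_thread = s1 is s2  # same thread -> the registry returns the same Session

Session.remove()  # discard this thread's session when the request is done
print(same_in_thread)
```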
That is what I am trying to figure out. It works perfectly when I do this.
On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:
On Jul 19, 2007, at 5:06 PM, Arun Kumar PG wrote:
or may be just keep on using the QueuePool approach as it will
always make sure to return the same connection to
Perhaps the nature of the conflict is different, then. Are you able
to observe what stack traces, or at least approximately what operations,
are taking place when the conflict occurs? Does the conflict occur
frequently and easily with just a little bit of concurrency, or is it
something that only
And how is your session connected to the database? Are you using
create_session(bind_to=something)? Or are you binding your
MetaData to the engine? Are you using BoundMetaData?
On Jul 19, 2007, at 11:16 PM, Arun Kumar PG wrote:
The stack trace points to pool.py (I will get the exact
BoundMetaData is what I am using.
On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:
and how is your session connected to the database ? are you using
create_session(bind_to=something) ? or are you binding your MetaData to
the engine ? are you using BoundMetaData ?
On Jul 19, 2007, at 11:16
passing in the creator as a lambda: db_obj where db_obj is the ref
to a method returning a MySQLdb connection.
...can we see the exact code for this, please?
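The creator hook being discussed can be sketched like this (SQLite stands in for MySQLdb here, and get_conn is a made-up name for the poster's connection factory):

```python
import sqlite3

from sqlalchemy import create_engine, text

def get_conn():
    # stand-in for a function returning a raw DBAPI connection
    return sqlite3.connect(":memory:")

# with creator=, the URL only picks the dialect; every new pooled
# connection comes from get_conn() instead
engine = create_engine("sqlite://", creator=get_conn)

with engine.connect() as conn:
    val = conn.execute(text("SELECT 1")).scalar()
print(val)  # 1
```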