I figured it out...thanks
On Saturday, January 13, 2018 at 9:35:29 AM UTC-6, Russ Wilson wrote:
In _compose_select_body in compiler.py, [] brackets are added around the
various parts of the select. I need to alter that so it uses quotes
instead. Is there a property I can set to change that behavior, or do I
need to override it?
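For the archive: in SQLAlchemy the bracket quoting comes from the dialect's IdentifierPreparer (mssql's preparer sets initial_quote='[' and final_quote=']') rather than from _compose_select_body itself, so a custom dialect can swap the quote characters by supplying its own preparer class. A minimal sketch, with illustrative names:

```python
from sqlalchemy.engine import default
from sqlalchemy.sql.compiler import IdentifierPreparer

# Illustrative preparer that quotes identifiers with double quotes
# instead of the [brackets] a dialect such as mssql uses.
class QuotedPreparer(IdentifierPreparer):
    def __init__(self, dialect):
        super(QuotedPreparer, self).__init__(
            dialect, initial_quote='"', final_quote='"'
        )

# A dialect opts in by pointing its `preparer` class attribute at it,
# e.g.  class MyDialect(...): preparer = QuotedPreparer
dialect = default.DefaultDialect()
prep = QuotedPreparer(dialect)
print(prep.quote_identifier("select"))  # "select"  (with double quotes)
```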
Thanks for the help!
Thanks for the insights
On Tue, Jan 9, 2018 at 10:23 PM Mike Bayer <mike...@zzzcomputing.com> wrote:
> On Tue, Jan 9, 2018 at 8:45 PM, Russ Wilson <rpwil...@gmail.com> wrote:
> > So I loaded and tested the mssql dialect and it gave the same results.
> https://github.com/zzzeek/sqlalchemy/blob/master/README.dialects.rst
> which also includes some links to an example dialect.
>
> On Jan 9, 2018 12:35 PM, "Russ Wilson" <rpwi...@gmail.com >
> wrote:
Is there a good doc that covers, at a minimum, what needs to be extended to
create a dialect?
On Mon, Jan 8, 2018 at 3:15 PM Mike Bayer <mike...@zzzcomputing.com> wrote:
> On Sun, Jan 7, 2018 at 9:07 PM, Russ Wilson <rpwil...@gmail.com> wrote:
> > I noticed if you use the cursor
results_one = cursor.fetchmany(100)
for row in results_one:
    print(type(row))
On Sunday, January 7, 2018 at 12:01:29 PM UTC-6, Mike Bayer wrote:
>
>
>
> On Jan 7, 2018 11:29 AM, "Russ Wilson" <rpwi...@gmail.com >
> wrote:
>
> When I attempt to crea
ly doesn't work), but you can
> use the first two as examples for the basics. They base off of the
> PyODBCConnector in connectors/pyodbc.py.
>
>
> On Sun, Jan 7, 2018 at 12:40 AM, Russ Wilson <rpwi...@gmail.com
> > wrote:
I was attempting to create a new dialect but hit an issue: pyodbc is
returning a list of pyodbc.Row. Is there a method I should be implementing
to convert the list to a list of tuples?
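One straightforward conversion is tuple() over each row, since pyodbc.Row objects are plain sequences. Sketched here with a stand-in class so the example runs without pyodbc or a live connection:

```python
# Stand-in for pyodbc.Row, which is likewise iterable over its values.
class FakeRow:
    def __init__(self, *values):
        self._values = values

    def __iter__(self):
        return iter(self._values)

raw = [FakeRow(1, "alice"), FakeRow(2, "bob")]  # what fetchmany() might return
rows = [tuple(r) for r in raw]                  # list of Row -> list of tuples
print(rows)  # [(1, 'alice'), (2, 'bob')]
```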
Thanks
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
Excellent, thank you. is_modified() works very well in this case, with
caveats noted. Also, a nice intro to the History API... hadn't seen that
before!
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to sqlalchemy+unsubscr...@googlegroups.com.
Is there any way to tell what the outcome of a Session.merge() operation is?
The case of specific interest is when the instance to be merged *does*
exist prior to the merge() call. Is there a built in way to see if any
attributes end up updated, or does this need to be checked manually?
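A minimal, self-contained sketch of the manual check (model and values illustrative), using Session.is_modified() together with the attribute History API:

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(id=1, name="old"))
session.commit()

# merge() an instance whose row already exists in the database
merged = session.merge(User(id=1, name="new"))

print(session.is_modified(merged))  # True: the merge changed an attribute
hist = inspect(merged).attrs.name.history
print(list(hist.added), list(hist.deleted))  # ['new'] ['old']
```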
Thanks for pointing me in the right direction! This page had the info I
needed:
http://docs.sqlalchemy.org/en/latest/orm/loading_columns.html
Russ
On Thursday, May 21, 2015 at 4:01:16 PM UTC-4, Michael Bayer wrote:
On 5/21/15 3:56 PM, Russ wrote:
nope. I'd need a complete, self-contained
I have a query I am running where sqlalchemy is throwing this exception:
Exception: can't locate strategy for class
'sqlalchemy.orm.properties.ColumnProperty' (('lazy', 'joined'),)
What causes this is the addition of this joinedload_all option to a query
(q):
q =
nope. I'd need a complete, self-contained and succinct example I can run,
thanks
Ok, thanks. This is a beefy one so that will be extremely tricky to
extract. I had hoped that the combo of lazy+joined would have been a clear
indicator since they are opposite loading strategies.
Digging
, and now 1.0.0b5 and this code no longer
works.
Specifically, the code throws a KeyError on the delattr line. Here's the
clipped traceback:
  File "/home/russ/code/bitbucket/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 227, in __delete__
    self.impl.delete(instance_state(instance
on
localhost. The implementation for these APIs then uses separate
credentials to ensure read-only access in their implementation, whereas the
vast majority of APIs have full access.
Still odd? :)
Thanks for the help, guys!
Russ
not with
the event system!
Calling the object to get the actual session prevents any unexpected
rollback handlers from firing shots from the Session graveyard.
Thanks for the help!
Russ
PS: had to dig through some old docs to check my sanity on the
scoped_session-as-a-Session confusion. It was clearly
Thanks for the idea, Jonathan! I was actually discussing such a fallback
watchdog on #postgresql earlier today. Now that I've been having event
troubles and it is highlighting atomicity issues with the db/filesystem
split, I'm definitely going to implement such a safety net.
I think the
be getting called? Am I
misunderstanding something here? Is there a proper way to get rid of all
event handlers on a session? I know that listeners can be removed
individually, but I thought scoped_session.remove() would make this
unnecessary.
Thanks,
Russ
aspects.
Russ
What is the proper way to declare a postgresql partial index when using the
@declared_attr decorator?
This form gives me "Cannot compile Column object until its 'name' is
assigned":
    track_type = Column(SmallInteger, nullable=False)

    @declared_attr
    def __table_args__(cls):
I should have also indicated that the addition of sqlalchemy.sql.text fixes
the small mixin example. The little script below works, but I don't know
if it is a sketchy hack, or a safe long term solution:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative
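A self-contained sketch of the text()-based pattern being described (illustrative names, not the original script): the column is named as a string inside the Index, and the partial-index condition goes through text(), so nothing touches the mixin's unbound Column object.

```python
from sqlalchemy import Column, Index, Integer, SmallInteger, text
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import declarative_base, declared_attr
from sqlalchemy.schema import CreateIndex

Base = declarative_base()

class TrackMixin:
    track_type = Column(SmallInteger, nullable=False)

    @declared_attr
    def __table_args__(cls):
        # String column name + text() condition: both resolve when the
        # subclass's Table is built, avoiding the "Cannot compile
        # Column" error from referencing the mixin's Column directly.
        return (
            Index(
                "ix_%s_track_type" % cls.__tablename__,
                "track_type",
                postgresql_where=text("track_type > 0"),  # illustrative
            ),
        )

class Thing(TrackMixin, Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

idx = list(Thing.__table__.indexes)[0]
print(CreateIndex(idx).compile(dialect=postgresql.dialect()))
```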
What is the proper way to declare a postgresql partial index when using
the @declared_attr decorator?
these two concepts aren’t really connected
Sorry -- I described that poorly, then. However, I only see the problem
(in v0.9.8) when I am using @declared_attr as in the case of a
to
putting together my profiling talk [1] from a while ago (optimizing
SQLAlchemy inserts was a perfect vehicle for the talk). I'll have to
update that thing now with the fancy new bulk operations... they look quite
convenient for decent gain with little pain. Nice!
Russ
[1]: https
, the example is quite simple (a single user table)... but you
can easily extend on it for your one-to-many tables.
I also didn't 100% scrub the SQLAlchemy code (I threw this together in a
hurry), so no yelling at me for bad code. :)
Russ
wow that is a great talk, I laughed my ass off and you really got in
there, nice job !
Thanks! As long as you weren't laughing because I did the sqlalchemy all
wrong! :)
Russ
...
but when all I want to know is "is flush() going to do anything?", it seems
a waste to generate that collection.
Thanks,
Russ
change at any time. :)
Russ
was made!
Russ
I have just updated SQLAlchemy from 0.7.8 to 0.8.0b2 (the current pip
default) and the DropEverything recipe
(http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything)
has stopped working. The problem is on the DropTable line with this
error:
sqlalchemy.exc.InternalError: (InternalError)
differently in each location... or is it? I looked into
compiler.statement et al to figure out the compilation context, but could
not.
Russ
PS: For what it's worth, for this specific case in PostgreSQL, this type of
functionality is better suited to appropriate use of the timestamp with
time zone type.
I currently define a custom column type for ensuring that dates stored to
the DB are offset-aware UTC, and convert to the appropriate timezone on
retrieval, using a class something like this:
import pytz
import sqlalchemy as sa

class UTCEnforcedDateTime(sa.types.TypeDecorator):
    impl = sa.types.DateTime
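A completed sketch of such a type; the enforcement logic here is my reading of the intent, and it uses the stdlib timezone.utc in place of pytz purely to keep the example self-contained:

```python
from datetime import datetime, timedelta, timezone

import sqlalchemy as sa

class UTCEnforcedDateTime(sa.types.TypeDecorator):
    """Reject naive datetimes on the way in; hand back aware UTC values."""
    impl = sa.types.DateTime
    cache_ok = True  # 1.4+; a harmless plain attribute on older versions

    def process_bind_param(self, value, dialect):
        if value is not None:
            if value.tzinfo is None:
                raise ValueError("naive datetime not allowed")
            value = value.astimezone(timezone.utc)
        return value

    def process_result_value(self, value, dialect):
        # Many drivers return naive datetimes; stamp them as UTC.
        if value is not None and value.tzinfo is None:
            value = value.replace(tzinfo=timezone.utc)
        return value

est = timezone(timedelta(hours=-5))
t = UTCEnforcedDateTime()
print(t.process_bind_param(datetime(2018, 1, 1, 7, tzinfo=est), None))
# 2018-01-01 12:00:00+00:00
```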
in the process. :)
Russ
[1] http://docs.sqlalchemy.org/en/rel_0_7/core/compiler.html#synopsis
However... element.table.name doesn't seem like it can be used directly.
It occasionally comes up with strings like %(79508240 log_event)s, which
clearly is getting substituted in the guts somewhere when I don't do this.
To clarify, I've just determined that this is specifically
On Thursday, July 12, 2012 5:42:12 PM UTC-4, Michael Bayer wrote:
let the compiler do it:
elementName = compiler.process(element, **kw)
That causes an infinite loop since it tries to compile DTColumn itself.
I've tried stuff like super(DTColumn, element).compile(), but that doesn't
help
' with
'compiler.visit_column(element, **kw)', since it may help future people.
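To spell out that fix for future readers: inside a @compiles handler for a column-like construct, compiler.process(element) re-enters the same handler, whereas compiler.visit_column(element, **kw) renders the column directly. A sketch (the date_trunc wrapper is illustrative, not the original DTColumn logic):

```python
import sqlalchemy as sa
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause

class DTColumn(ColumnClause):
    """Illustrative custom column construct."""
    inherit_cache = True  # 1.4+: silences the caching warning

@compiles(DTColumn)
def _compile_dtcolumn(element, compiler, **kw):
    # compiler.process(element) would recurse back into this function;
    # visit_column() emits the plain column rendering instead.
    return "date_trunc('day', %s)" % compiler.visit_column(element, **kw)

print(sa.select(DTColumn("ts")))  # renders date_trunc('day', ts)
```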
Russ
I'm trying to use the new CTE support in SQLAlchemy in a way that will
allow me to reference the recursion level as a field in the query
result. This is easy in a straight SQL CTE by aliasing a constant in
the non-recursive part, and then referencing the alias in the
recursive part. The limited
select(literal(0).alias(x)) should do it, see the documentation at ...
Thanks... literal() gave me a location on which to attach a label I
can reference. I'm closer, but still can't get this to work.
Here's my closest so far (with SQLAlchemy 0.7.8):
import sqlalchemy as sa
#set up the
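Since the quoted attempt is truncated above, here is a compilable sketch of the constant-plus-label approach, written against the modern select() signature (table and names illustrative):

```python
import sqlalchemy as sa

metadata = sa.MetaData()
nodes = sa.Table(
    "nodes",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("parent_id", sa.Integer, sa.ForeignKey("nodes.id")),
)

# Non-recursive part: label a constant 0 as the recursion level.
tree = (
    sa.select(nodes.c.id, sa.literal(0).label("level"))
    .where(nodes.c.parent_id.is_(None))
    .cte(name="tree", recursive=True)
)

# Recursive part: reference the label and add one per level.
tree = tree.union_all(
    sa.select(nodes.c.id, tree.c.level + 1).where(
        nodes.c.parent_id == tree.c.id
    )
)

stmt = sa.select(tree.c.id, tree.c.level)
print(stmt)  # WITH RECURSIVE tree(...) AS (... UNION ALL ...) SELECT ...
```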
this
and failing. I'm also still very interested in whether there is some other
efficient way to re-sync the ORM with transactions performed outside of it.
I have updated the pastebin sample code to include the above snippet as
well:
http://pastebin.com/eCDSm0YW
Russ
Thanks very much for the response... lots to chew on here.
Well, pretty much being savvy about expiration is the primary approach. The
rows you affect via an execute(), if they've been loaded in the session,
would need to be expired from memory.
I understand this somewhat and had done
I often mix up the SQL expression language with the use of an ORM session,
and it is great that SQLAlchemy more than supports this.
But... what are the recommended ways to keep the session in sync with what
you do with the SQL expression stuff?
For example, with the ORM you can't really do a
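The expire-after-execute pattern from the reply above, as a minimal runnable sketch (model and values illustrative):

```python
from sqlalchemy import Column, Integer, String, create_engine, update
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(id=1, name="before"))
session.commit()

obj = session.get(User, 1)  # loaded into the identity map

# Core UPDATE on the session's connection, bypassing ORM bookkeeping:
session.execute(
    update(User.__table__).where(User.id == 1).values(name="after")
)

session.expire(obj)  # or session.expire_all()
print(obj.name)  # "after": the expired attribute reloads from the DB
```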
All the declarative examples have DeclarativeBase as the first/left base
class. Does it need to be? I've swapped it in several code locations and
experimented and it seems to be fine, but there's a lot going on with
declarative and I'm vaguely paranoid about messing it up subtly by altering
Great - thanks for the response. This was causing me more brain ache than
I care to admit. My paranoia was rooted in the fact that the docs did seem
to go out of their way to put the Base first (without specifically saying
so) which is awkward as you say.
Much appreciated.
Is the DropEverything recipe still the best way to drop everything via
SQLAlchemy?
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything
The recipe is ancient (almost a year old! :) ) and I just want to check if
there is a better way now.
How I got here (for searchability)...
When
know
when the DB was actually being hit, but now I'm not sure I can.
Thanks,
Russ
be wary of isolation issues with this. Usually having the identity map is
awesome.
My main issue was/is that I saw SQL being emitted, was expecting 'read
committed' behaviour, and didn't get it. Now I completely know why...
thanks again.
Russ
I was getting some strange transaction isolation behavior with
SQLAlchemy (0.7.2), psycopg2 (2.4.2), and PostgreSQL 8.4. In order to
investigate I wrote up a usage sequence that does this:
1. starts a transaction with a session (s1)
2. starts another transaction with session (s2)
3. updates a
because an attribute can be mapped to multiple columns,
i.e. http://www.sqlalchemy.org/docs/orm/mapper_config.html#mapping-a-class
Ahh... thanks.
as I did this example and thought back upon how often people want to poke
around the mapper configuration, I started trying to think of how
attrgetter and attrsetter. Stepping
into the attribute assignment (InstrumentedAttribute.__set__) was
highly confusing until reading up on those bits!! instance_state()
and instance_dict() instantly returning was somewhat mysterious for a
while!
Thanks,
Russ
the leap to 0.7.1 ... all code now working
fine after that transition with only minor hiccups.
That was an excellent introduction to the new event system as well...
thanks again!
Russ
Code is reproduced below as well, in case the pastebin ever fails:
from pytz import UTC
import sqlalchemy
I have a typical case where I want to ensure that datetime values sent
to the database are UTC, and values read from the database come back
as offset-aware UTC times. I see several threads on the issue (eg:
http://goo.gl/FmdIJ is most relevant), but none address my question
that I can see. UTC
When I realized that process_bind_param only happens on commit, I
decided to switch my strategy to simply confirming that all incoming and
outgoing datetime values are offset-aware UTC using this simpler code:
http://pastebin.com/gLfCUkX3
Sorry - I messed up that code segment on edit for
On Wednesday, January 12, 2011 2:16:00 PM UTC-5, Michael Bayer wrote:
Suppose a concurrent thread or process deleted your row in a new
transaction and committed it, or didn't even commit yet hence locked the
row, in between the time you said commit() and later attempted to access the
to InstanceState.expire_attributes) and nothing is leaping out at
me. Can I force an un-expire after the commit without legitimately
reflecting the persistent state?
Russ
!
Russ
it was
associated with at one point doesn't exist anymore?
2. Why are 'deleted' sessions still working prior to garbage
collection?
The code example below illustrates both issues. You can just comment
out the gc.collect() to toggle them.
Any clarification here would be much appreciated.
Russ
import