Re: [sqlalchemy] Error with SQL Server and utf-8 encoding

2015-09-02 Thread Massi
It works! Thanks a lot!

On Tuesday, September 1, 2015 at 5:42:08 PM UTC+2, Michael Bayer wrote:
>
>
>
> On 9/1/15 11:28 AM, Massi wrote:
>
> Hi everyone,
>
> I'm trying to manage read and write operations of utf-8 unicode strings 
> with SQL Server (sqlalchemy 0.9.10), but I'm having some problems. I 
> correctly write the strings to the database, but when I read them back and 
> try to convert to unicode I get the following error:
>
> Traceback (most recent call last):
>   File "C:\Users\Impara 01\Desktop\t.py", line 18, in 
> print unicode(row[0], "utf-8")
> UnicodeDecodeError: 'utf8' codec can't decode byte 0xe0 in position 0: 
> invalid continuation byte
> Process terminated with an exit code of 1
>
> Here is a sample code showing the problem:
>
> # -*- coding: utf-8 -*-
> import sqlalchemy
> from sqlalchemy import select, create_engine, MetaData, Table, Column
> import datetime
>
> engine = create_engine(
>     'mssql+pyodbc://MYHOST\SQLEXPRESS/user?trusted_connection=True&charset=utf8')
> metadata = MetaData(engine) 
> t = Table('test', metadata,
>   Column('unicode', sqlalchemy.dialects.mssql.VARCHAR())
> )
> t.create()
> s = "àèìòù"
> s = unicode(s, "utf-8")
> t.insert().values(unicode=s).execute()
> res = select([t.c.unicode]).execute().fetchall()
> for i, row in enumerate(res):
>     print unicode(row[0], "utf-8")
>
> Can anyone point out what I'm doing wrong?
>
>
> pyodbc and SQL Server are very particular about unicode.  In this case it 
> seems like you are passing a Python unicode literal into pyodbc and 
> assuming pyodbc knows how to handle that, but then on the receiving side 
> you're assuming that you're getting a bytestring back rather than a 
> Python unicode object.
>
> To do a unicode round trip, use the SQLAlchemy Unicode type, and deal only 
> with Python unicode literals in your script:
>
> t = Table('test', metadata, Column('x', sqlalchemy.Unicode()))
>
> s = u'àèìòù'
>
> t.insert().values(x=s) ...
>
> for row in res:
>print row[0]
>
> SQLAlchemy will make sure that the value is passed to pyodbc in the 
> expected format; in this case it is likely encoding to utf-8 on the way in 
> and decoding from utf-8 on the way out.
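>
> Here is a complete round-trip sketch along those lines (the table and 
> column names are placeholders; adjust the DSN for your own server):
>
> # -*- coding: utf-8 -*-
> import sqlalchemy
> from sqlalchemy import create_engine, MetaData, Table, Column, select
>
> # placeholder DSN; substitute your own server / authentication settings
> engine = create_engine('mssql+pyodbc://MYHOST\SQLEXPRESS/user?trusted_connection=True')
> metadata = MetaData(engine)
>
> t = Table('test_unicode', metadata,
>           Column('value', sqlalchemy.Unicode(100)))   # renders as NVARCHAR(100) on MSSQL
> t.create()
>
> s = u'àèìòù'                       # deal only in Python unicode literals
> t.insert().values(value=s).execute()
>
> for row in select([t.c.value]).execute():
>     print row[0]                   # already a unicode object; no decode needed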
>
>
>
>
>
> Thanks in advance!
>
>
>



[sqlalchemy] Support for Impala?

2015-09-02 Thread James Flint
Is this actively being worked on? 



Re: [sqlalchemy] Support for Impala?

2015-09-02 Thread Sam Flint
Thanks

On Wed, Sep 2, 2015 at 9:52 AM, Mike Bayer wrote:

>
> There appears to be a rudimentary dialect delivered with their own DBAPI
> here:
>
> https://github.com/cloudera/impyla/blob/master/impala/sqlalchemy.py
>
> So that would be where to go.
>
>
>
> On 9/2/15 11:40 AM, James Flint wrote:
>
> Is this actively being worked on?



-- 
Sam Flint



Re: Global variables from inside render_item

2015-09-02 Thread Mike Bayer



On 9/2/15 3:13 AM, Fayaz Yusuf Khan wrote:

Hi,
I'm seeing this weird issue where several global variables in env.py 
are set to None when accessed from render_item.


Code: env.py
def render_item(type_, obj, autogen_context):
    print globals()


I'm not deeply familiar with the vagaries of what globals() does other 
than I tend not to go near it except when doing eval / exec, so I don't 
have any immediate insight on this one, sorry (as always, I'd pdb it here).
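
(For instance, purely as an illustration, something like this inside env.py 
would drop into the debugger at that point:)

def render_item(type_, obj, autogen_context):
    import pdb; pdb.set_trace()   # inspect the live frame interactively
    print globals()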






Output:
{'mysql': None, 'with_statement': None, 'PasswordType': None, 
'include_symbol': None, 'PriceType': None, 'JSONEncodedDict': None, 
'compare_type': None, 'PhoneNumberType': None, 
'run_migrations_online': None, '__package__': None, 'render_item': 
None, 'target_metadata': None, 'Base': None, 'config': None, 
'__doc__': None, '__builtins__': {'bytearray': <type 'bytearray'>, 
'IndexError': <type 'exceptions.IndexError'>, 'all': <built-in function all>, 
'help': Type help() for interactive help, or help(object) for help 
about object., 'var...


Recently updated to:
alembic==0.8.2
SQLAlchemy==1.0.8

Any ideas why this might be happening?

Thanks.




Re: [sqlalchemy] Support for Impala?

2015-09-02 Thread Mike Bayer


There appears to be a rudimentary dialect delivered with their own DBAPI 
here:


https://github.com/cloudera/impyla/blob/master/impala/sqlalchemy.py

So that would be where to go.
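
If you want to experiment with it, something along these lines should let 
SQLAlchemy locate the dialect (an untested sketch: it assumes the linked 
module exposes an ImpalaDialect class and that Impala is listening on the 
default HiveServer2 port):

from sqlalchemy import create_engine
from sqlalchemy.dialects import registry

# point the "impala://" URL scheme at impyla's dialect module
registry.register("impala", "impala.sqlalchemy", "ImpalaDialect")

engine = create_engine("impala://my-impala-host:21050/default")
print engine.execute("SELECT 1").scalar()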



On 9/2/15 11:40 AM, James Flint wrote:

Is this actively being worked on?




[sqlalchemy] contains_eager bug

2015-09-02 Thread Dave Vitek

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager when 
there are multiple joinpaths leading to the same row, and only some of 
those joinpaths are using contains_eager all the way down the joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave



Re: [sqlalchemy] Sane way to monitor external DB changes from application?

2015-09-02 Thread Dave Vitek
At least one database (postgres) has a pub/sub messaging facility 
(NOTIFY/LISTEN) that you can use to do this.  See the postgres docs.  We 
use this extensively.


On the listen end, you basically want to get down to the psycopg layer, 
because sqlalchemy's layers aren't going to be helpful.


1.  Get it using engine.raw_connection()
2.  Detach it from the connection pool (c.detach())
3.  Change the isolation level to autocommit:
    c.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    (note that if the connection is later returned to the pool, this change 
    may survive the return, causing subtle havoc later)

4.  Call c.poll() to wait for events (see psycopg2 docs)
5.  Use c.notifies to get messages
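
Roughly, those five steps end up looking like this (a sketch only, assuming 
"engine" is your existing Engine; the channel name and timeout are made up 
and error handling is omitted):

import select
import psycopg2.extensions

raw = engine.raw_connection()        # 1. get a pooled DBAPI connection
raw.detach()                         # 2. detach it so the pool forgets it
conn = raw.connection                # the underlying psycopg2 connection

conn.set_isolation_level(            # 3. autocommit so LISTEN takes effect
    psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("LISTEN deployments")    # channel name is illustrative

while True:
    if select.select([conn], [], [], 5.0) == ([], [], []):
        continue                     # timed out; poll again
    conn.poll()                      # 4. read whatever arrived on the socket
    while conn.notifies:             # 5. drain the notification queue
        notify = conn.notifies.pop(0)
        print "got NOTIFY:", notify.channel, notify.payload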

On the notify end, you don't need to do these special things to the 
connection; you can issue NOTIFY using raw SQL text on ordinary SQLAlchemy 
connection objects.
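
For example (the channel and payload here are made up):

from sqlalchemy import text

# NOTIFY from any ordinary engine/connection; pg_notify() handles the quoting
engine.execute(text("SELECT pg_notify(:channel, :payload)"),
               channel='deployments', payload='deployment finished')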


I would assume other backends will be completely different.  You can use 
select/epoll/whatever to do async IO if needed.  You may find postgres' 
advisory locks useful if any synchronization needs arise.


References:
http://www.postgresql.org/docs/9.4/static/sql-listen.html
http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications

- Dave

On 9/2/2015 8:48 PM, Ken Lareau wrote:
I'm going to try to see if I can give enough detail here to allow 
folks to make sense of it, but I will be simplifying things a bit to prevent 
a 1,000+ line email, too...


So I have an in-house application that handles software deployments.  
It uses a database backend to keep track of current deployments and 
maintain history (actually, the database is used more generally for 
our Site Operations site management with specific tables created just 
for the application).  I am working on a major refactoring of the 
deployment code itself where the actual installation of the software 
is handled by a daemon that runs constantly in the background looking 
for new deployments to perform.  The key tables look like this:



class Deployment(Base):
__tablename__ = 'deployments'

id = Column(u'DeploymentID', INTEGER(), primary_key=True)
package_id = Column(
INTEGER(),
ForeignKey('packages.package_id', ondelete='cascade'),
nullable=False
)

package = relationship(
"Package",
uselist=False,
back_populates='deployments'
)

user = Column(String(length=32), nullable=False)
status = Column(
Enum('pending', 'queued', 'inprogress', 'complete', 'failed',
 'canceled', 'stopped'),
server_default='pending',
nullable=False,
)
declared = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)
created_at = synonym('declared')
app_deployments = relationship(
    'AppDeployment', order_by="AppDeployment.created_at, AppDeployment.id"
)
host_deployments = relationship(
    'HostDeployment', order_by="HostDeployment.created_at, HostDeployment.id"
)


class AppDeployment(Base):
__tablename__ = 'app_deployments'

id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
deployment_id = Column(
u'DeploymentID',
INTEGER(),
ForeignKey('deployments.DeploymentID', ondelete='cascade'),
nullable=False
)
app_id = Column(
u'AppID',
SMALLINT(display_width=6),
ForeignKey('app_definitions.AppID', ondelete='cascade'),
nullable=False
)

application = relationship("AppDefinition", uselist=False)
target = synonym('application')
deployment = relationship("Deployment", uselist=False)

user = Column(String(length=32), nullable=False)
status = Column(
Enum(
'complete',
'incomplete',
'inprogress',
'invalidated',
'validated',
),
nullable=False
)
environment_id = Column(
u'environment_id',
INTEGER(),
ForeignKey('environments.environmentID', ondelete='cascade'),
nullable=False
)
realized = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)
created_at = synonym('realized')

environment_obj = relationship('Environment')


class HostDeployment(Base):
__tablename__ = 'host_deployments'

id = Column(u'HostDeploymentID', INTEGER(), primary_key=True)
deployment_id = Column(
u'DeploymentID',
INTEGER(),
ForeignKey('deployments.DeploymentID', ondelete='cascade'),
nullable=False
)

deployment = relationship("Deployment", uselist=False)

host_id = Column(
u'HostID',
INTEGER(),
ForeignKey('hosts.HostID', ondelete='cascade'),
nullable=False
)
host = relationship("Host", uselist=False)

user = Column(String(length=32), nullable=False)
status = Column(Enum('inprogress', 'failed', 'ok'), nullable=False)

Re: [sqlalchemy] Sane way to monitor external DB changes from application?

2015-09-02 Thread Ken Lareau
Dave,

Thanks for the response.  Unfortunately the switch to PostgreSQL, while not
a long way off, probably will not occur until at least the beginning of
next year, and this rewrite is needed now.  If there are no other options
available, then I'll just need to brute force it. :)

- Ken


On Wed, Sep 2, 2015 at 6:32 PM, Dave Vitek wrote:

> At least one database (postgres) has a pub/sub messaging facility
> (NOTIFY/LISTEN) that you can use to do this.  See the postgres docs.  We
> use this extensively.
>
> On the listen end, you basically want to get down to the psycopg layer,
> because sqlalchemy's layers aren't going to be helpful.
>
> 1.  Get it using engine.raw_connection()
> 2.  Detach it from the connection pool (c.detach())
> 3.  Change the isolation level to autocommit
> c.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
> (note that if you return the connection to the connection pool, this
> change may survive after the connection is returned to the pool, causing
> subtle havoc later)
> 4. Call c.poll() to wait for events (see psycopg2 docs)
> 5. Use c.notifies to get messages
>
> On the notify end, you don't need to do these special things to the
> connection and can issue messages using raw sql text on sqlalchemy
> connection objects.
>
> I would assume other backends will be completely different.  You can use
> select/epoll/whatever to do async IO if needed.  You may find postgres'
> advisory locks useful if any synchronization needs arise.
>
> References:
> http://www.postgresql.org/docs/9.4/static/sql-listen.html
> http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications
>
> - Dave
>
>
> On 9/2/2015 8:48 PM, Ken Lareau wrote:
>
> I'm going to try to see if I can give enough detail here to allow folks to
> make sense of it, but I will be simplifying things a bit to prevent a 1,000+ line
> email, too...
>
> So I have an in-house application that handles software deployments.  It
> uses a database backend to keep track of current deployments and maintain
> history (actually, the database is used more generally for our Site
> Operations site management with specific tables created just for the
> application).  I am working on a major refactoring of the deployment code
> itself where the actual installation of the software is handled by a
> daemon that runs constantly in the background looking for new deployments
> to perform.  The key tables look like this:
>
>
> class Deployment(Base):
> __tablename__ = 'deployments'
>
> id = Column(u'DeploymentID', INTEGER(), primary_key=True)
> package_id = Column(
> INTEGER(),
> ForeignKey('packages.package_id', ondelete='cascade'),
> nullable=False
> )
>
> package = relationship(
> "Package",
> uselist=False,
> back_populates='deployments'
> )
>
> user = Column(String(length=32), nullable=False)
> status = Column(
> Enum('pending', 'queued', 'inprogress', 'complete', 'failed',
>  'canceled', 'stopped'),
> server_default='pending',
> nullable=False,
> )
> declared = Column(
> TIMESTAMP(),
> nullable=False,
> server_default=func.current_timestamp()
> )
> created_at = synonym('declared')
> app_deployments = relationship(
>     'AppDeployment', order_by="AppDeployment.created_at, AppDeployment.id"
> )
> host_deployments = relationship(
>     'HostDeployment', order_by="HostDeployment.created_at, HostDeployment.id"
> )
>
>
> class AppDeployment(Base):
> __tablename__ = 'app_deployments'
>
> id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
> deployment_id = Column(
> u'DeploymentID',
> INTEGER(),
> ForeignKey('deployments.DeploymentID', ondelete='cascade'),
> nullable=False
> )
> app_id = Column(
> u'AppID',
> SMALLINT(display_width=6),
> ForeignKey('app_definitions.AppID', ondelete='cascade'),
> nullable=False
> )
>
> application = relationship("AppDefinition", uselist=False)
> target = synonym('application')
> deployment = relationship("Deployment", uselist=False)
>
> user = Column(String(length=32), nullable=False)
> status = Column(
> Enum(
> 'complete',
> 'incomplete',
> 'inprogress',
> 'invalidated',
> 'validated',
> ),
> nullable=False
> )
> environment_id = Column(
> u'environment_id',
> INTEGER(),
> ForeignKey('environments.environmentID', ondelete='cascade'),
> nullable=False
> )
> realized = Column(
> TIMESTAMP(),
> nullable=False,
> server_default=func.current_timestamp()
> )
> created_at = synonym('realized')
>
> environment_obj = relationship('Environment')
>
>
> class HostDeployment(Base):
> __tablename__ = 'host_deployments'
>
>

[sqlalchemy] Sane way to monitor external DB changes from application?

2015-09-02 Thread Ken Lareau
I'm going to try to see if I can give enough detail here to allow folks to
make sense of it, but I will be simplifying things a bit to prevent a 1,000+
line email, too...

So I have an in-house application that handles software deployments.  It
uses a database backend to keep track of current deployments and maintain
history (actually, the database is used more generally for our Site
Operations site management with specific tables created just for the
application).  I am working on a major refactoring of the deployment code
itself where the actual installation of the software is handled by a
daemon that runs constantly in the background looking for new deployments
to perform.  The key tables look like this:


class Deployment(Base):
__tablename__ = 'deployments'

id = Column(u'DeploymentID', INTEGER(), primary_key=True)
package_id = Column(
INTEGER(),
ForeignKey('packages.package_id', ondelete='cascade'),
nullable=False
)

package = relationship(
"Package",
uselist=False,
back_populates='deployments'
)

user = Column(String(length=32), nullable=False)
status = Column(
Enum('pending', 'queued', 'inprogress', 'complete', 'failed',
 'canceled', 'stopped'),
server_default='pending',
nullable=False,
)
declared = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)
created_at = synonym('declared')
app_deployments = relationship(
    'AppDeployment', order_by="AppDeployment.created_at, AppDeployment.id"
)
host_deployments = relationship(
    'HostDeployment', order_by="HostDeployment.created_at, HostDeployment.id"
)


class AppDeployment(Base):
__tablename__ = 'app_deployments'

id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
deployment_id = Column(
u'DeploymentID',
INTEGER(),
ForeignKey('deployments.DeploymentID', ondelete='cascade'),
nullable=False
)
app_id = Column(
u'AppID',
SMALLINT(display_width=6),
ForeignKey('app_definitions.AppID', ondelete='cascade'),
nullable=False
)

application = relationship("AppDefinition", uselist=False)
target = synonym('application')
deployment = relationship("Deployment", uselist=False)

user = Column(String(length=32), nullable=False)
status = Column(
Enum(
'complete',
'incomplete',
'inprogress',
'invalidated',
'validated',
),
nullable=False
)
environment_id = Column(
u'environment_id',
INTEGER(),
ForeignKey('environments.environmentID', ondelete='cascade'),
nullable=False
)
realized = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)
created_at = synonym('realized')

environment_obj = relationship('Environment')


class HostDeployment(Base):
__tablename__ = 'host_deployments'

id = Column(u'HostDeploymentID', INTEGER(), primary_key=True)
deployment_id = Column(
u'DeploymentID',
INTEGER(),
ForeignKey('deployments.DeploymentID', ondelete='cascade'),
nullable=False
)

deployment = relationship("Deployment", uselist=False)

host_id = Column(
u'HostID',
INTEGER(),
ForeignKey('hosts.HostID', ondelete='cascade'),
nullable=False
)
host = relationship("Host", uselist=False)

user = Column(String(length=32), nullable=False)
status = Column(Enum('inprogress', 'failed', 'ok'), nullable=False)
realized = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)
created_at = synonym('realized')


(Forgive a bit of the 'messiness'; these classes have grown a bit organically
as we've been trying to improve the program flow and design.)

The basic process the application itself performs is to gather the
information from the user on which tiers (AppDeployment) and hosts
(HostDeployment) to deploy the software onto, then create (or update)
entries for all of them, setting their status to 'inprogress'.  Once this
is done, the base Deployment entry has its status set to 'queued', at which
point the installer daemon will take over and work its way through all the
entries connected to that Deployment entry and update the entries as each
individual deployment finishes.

Because the daemon is doing the state changes itself, I need the main
application to 'watch' the database for changes and inform the user when
they occur (the application does have a 'detached' state which simply exits
after it sets up the deployment, but that's orthogonal to the current
issue).  I could do this the "brute force" way, which is essentially:

1) Query the database for all the relevant entries
2) Check against the previous state to see which have 

Re: [sqlalchemy] contains_eager bug

2015-09-02 Thread Dave Vitek

Answering my own question.

Here's a patch that seems to fix it, but I am uncertain about whether it 
breaks other things.  Use at your own risk, at least until someone more 
familiar with this code evaluates it.


Index: lib/sqlalchemy/orm/strategies.py
===
--- lib/sqlalchemy/orm/strategies.py(revision 114304)
+++ lib/sqlalchemy/orm/strategies.py(working copy)
@@ -1474,20 +1474,24 @@
 def _create_scalar_loader(self, context, key, _instance):
 def load_scalar_from_joined_new_row(state, dict_, row):
 # set a scalar object instance directly on the parent
 # object, bypassing InstrumentedAttribute event handlers.
 dict_[key] = _instance(row, None)

 def load_scalar_from_joined_existing_row(state, dict_, row):
 # call _instance on the row, even though the object has
 # been created, so that we further descend into properties
 existing = _instance(row, None)
+if key in dict_:
+assert dict_[key] is existing
+else:
+dict_[key] = existing
 if existing is not None \
 and key in dict_ \
 and existing is not dict_[key]:
 util.warn(
 "Multiple rows returned with "
 "uselist=False for eagerly-loaded attribute '%s' "
 % self)

 def load_scalar_from_joined_exec(state, dict_, row):
 _instance(row, None)

- Dave

On 9/2/2015 7:29 PM, Dave Vitek wrote:

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager 
when there are multiple joinpaths leading to the same row, and only 
some of those joinpaths are using contains_eager all the way down the 
joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave





Re: Global variables from inside render_item

2015-09-02 Thread Fayaz Yusuf Khan
Oh, I can reproduce the error without globals too.
Here:
from sqlalchemy.dialects import mysql

def render_item(type_, obj, autogen_context):
    print "This is a global:", mysql
    from sqlalchemy.dialects import mysql as mysql2
    print "This is not a global:", mysql2

Output:
This is a global: None
This is not a global: <module 'sqlalchemy.dialects.mysql' from '...'>


On Wednesday, September 2, 2015 at 8:53:21 PM UTC+5:30, Michael Bayer wrote:
>
>
>
> On 9/2/15 3:13 AM, Fayaz Yusuf Khan wrote:
>
> Hi,
> I'm seeing this weird issue where several global variables in env.py are 
> set to None when accessed from render_item.
>
> Code: env.py
> def render_item(type_, obj, autogen_context):
>     print globals()
>
>
> I'm not deeply familiar with the vagaries of what globals() does other 
> than I tend not to go near it except when doing eval / exec, so I don't 
> have any immediate insight on this one, sorry (as always, I'd pdb it here).
>
>
>
>
> Output:
> {'mysql': None, 'with_statement': None, 'PasswordType': None, 
> 'include_symbol': None, 'PriceType': None, 'JSONEncodedDict': None, 
> 'compare_type': None, 'PhoneNumberType': None, 'run_migrations_online': 
> None, '__package__': None, 'render_item': None, 'target_metadata': None, 
> 'Base': None, 'config': None, '__doc__': None, '__builtins__': 
> {'bytearray': <type 'bytearray'>, 'IndexError': <type 
> 'exceptions.IndexError'>, 'all': <built-in function all>, 'help': Type 
> help() for interactive help, or help(object) for help about object., 
> 'var...
>
> Recently updated to:
> alembic==0.8.2
> SQLAlchemy==1.0.8
>
> Any ideas why this might be happening?
>
> Thanks.
>
>
>



Re: [sqlalchemy] contains_eager bug

2015-09-02 Thread Mike Bayer



On 9/2/15 9:57 PM, Dave Vitek wrote:

Answering my own question.

Here's a patch that seems to fix it, but I am uncertain about whether 
it breaks other things.  Use at your own risk, at least until someone 
more familiar with this code evaluates it.



this is a variant of a known issue, that of an object X being loaded in 
different contexts in the same row, with different loader strategies 
applied.  The first context that hits it wins.


I'm adding your test case to that issue at 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3431/object-eager-loaded-on-scalar-relationship. 



the patch which is there seems to fix this test as well - it is 
extremely similar to your patch, so nice job!







Index: lib/sqlalchemy/orm/strategies.py
===
--- lib/sqlalchemy/orm/strategies.py(revision 114304)
+++ lib/sqlalchemy/orm/strategies.py(working copy)
@@ -1474,20 +1474,24 @@
 def _create_scalar_loader(self, context, key, _instance):
 def load_scalar_from_joined_new_row(state, dict_, row):
 # set a scalar object instance directly on the parent
 # object, bypassing InstrumentedAttribute event handlers.
 dict_[key] = _instance(row, None)

 def load_scalar_from_joined_existing_row(state, dict_, row):
 # call _instance on the row, even though the object has
 # been created, so that we further descend into properties
 existing = _instance(row, None)
+if key in dict_:
+assert dict_[key] is existing
+else:
+dict_[key] = existing
 if existing is not None \
 and key in dict_ \
 and existing is not dict_[key]:
 util.warn(
 "Multiple rows returned with "
 "uselist=False for eagerly-loaded attribute '%s' "
 % self)

 def load_scalar_from_joined_exec(state, dict_, row):
 _instance(row, None)

- Dave

On 9/2/2015 7:29 PM, Dave Vitek wrote:

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager 
when there are multiple joinpaths leading to the same row, and only 
some of those joinpaths are using contains_eager all the way down 
the joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave







Re: [sqlalchemy] contains_eager bug

2015-09-02 Thread Dave Vitek

On 9/2/2015 10:08 PM, Mike Bayer wrote:



On 9/2/15 9:57 PM, Dave Vitek wrote:

Answering my own question.

Here's a patch that seems to fix it, but I am uncertain about whether 
it breaks other things.  Use at your own risk, at least until someone 
more familiar with this code evaluates it.



this is a variant of a known issue, that of an object X being loaded 
in different contexts in the same row, with different loader 
strategies applied.  The first context that hits it wins.


I'm adding your test case to that issue at 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3431/object-eager-loaded-on-scalar-relationship. 



the patch which is there seems to fix this test as well - it is 
extremely similar to your patch, so nice job!




I have a question about your patch.  If "existing" is None and is not 
present in dict_, it does not get placed into dict_.  I note that 
load_scalar_from_joined_new_row puts it into dict_ regardless of whether 
it is None.  Is this intentional/good?








Index: lib/sqlalchemy/orm/strategies.py
===
--- lib/sqlalchemy/orm/strategies.py(revision 114304)
+++ lib/sqlalchemy/orm/strategies.py(working copy)
@@ -1474,20 +1474,24 @@
 def _create_scalar_loader(self, context, key, _instance):
 def load_scalar_from_joined_new_row(state, dict_, row):
 # set a scalar object instance directly on the parent
 # object, bypassing InstrumentedAttribute event handlers.
 dict_[key] = _instance(row, None)

 def load_scalar_from_joined_existing_row(state, dict_, row):
 # call _instance on the row, even though the object has
 # been created, so that we further descend into properties
 existing = _instance(row, None)
+if key in dict_:
+assert dict_[key] is existing
+else:
+dict_[key] = existing
 if existing is not None \
 and key in dict_ \
 and existing is not dict_[key]:
 util.warn(
 "Multiple rows returned with "
 "uselist=False for eagerly-loaded attribute '%s' "
 % self)

 def load_scalar_from_joined_exec(state, dict_, row):
 _instance(row, None)

- Dave

On 9/2/2015 7:29 PM, Dave Vitek wrote:

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager 
when there are multiple joinpaths leading to the same row, and only 
some of those joinpaths are using contains_eager all the way down 
the joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave









[sqlalchemy] Filter by first in relationship

2015-09-02 Thread Ryan Eberhardt
Hello,

I have User and Session classes like this:

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    sessions = relationship('Session')

    @property
    def last_login_time(self):
        return sorted(self.sessions, reverse=True, key=lambda x: x.time)[0]
    ...

class Session(Base):
    __tablename__ = 'sessions'

    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))
    time = Column(DateTime)
    ...

The last_login_time property sorts the user's sessions by date and returns 
the most recent one. Now I want to query users using this property. I know 
I can use hybrid properties to do this, but I have no idea how to write the 
SQL expression with SQLAlchemy (i.e. I don't know how to get a user's most 
recent session using SQLAlchemy expressions). Any ideas on how to do this?
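
For what it's worth, one way to express the SQL side is a correlated scalar 
subquery over sessions inside a hybrid property (a rough sketch only: it 
compares the newest session time rather than returning the Session object, 
and it assumes the models above):

from sqlalchemy import Column, Integer, DateTime, ForeignKey, select, func
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()

class Session(Base):
    __tablename__ = 'sessions'

    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))
    time = Column(DateTime)

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    sessions = relationship('Session')

    @hybrid_property
    def last_login_time(self):
        # Python side: newest session time on an already-loaded instance
        return max(s.time for s in self.sessions)

    @last_login_time.expression
    def last_login_time(cls):
        # SQL side: correlated scalar subquery over the sessions table
        return (select([func.max(Session.time)])
                .where(Session.user_id == cls.id)
                .label('last_login_time'))

# e.g. users who have logged in since some cutoff:
#   session.query(User).filter(User.last_login_time > cutoff)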

Thank you!
