Is there a way to use events with the expression API? I see how they're
used with the ORM and Core, but not with expressions.
Thanks,
Brian
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiv
Hey all,
In postgresql you can do this:
```
SELECT t.* FROM unnest(ARRAY[1,2,3,2,3,5]) item_id
LEFT JOIN items t ON t.id = item_id
```
Is there support for this sort of join on an array of IDs in SQLAlchemy?
Best wishes,
Brian
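One way to express this in SQLAlchemy is a table-valued `unnest()`. The sketch below is a hedged example, not the answer given in the thread: it assumes SQLAlchemy 1.4+ (for `FunctionElement.table_valued()`) and invents a minimal `items` table for illustration.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, func, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import array

metadata = MetaData()
# hypothetical target table standing in for the poster's "items"
items = Table(
    "items", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# unnest(ARRAY[...]) as a table-valued FROM element (SQLAlchemy 1.4+)
item_id = func.unnest(array([1, 2, 3, 2, 3, 5])).table_valued("item_id")

# LEFT JOIN items onto the unnested array
stmt = select(items).select_from(
    item_id.outerjoin(items, items.c.id == item_id.c.item_id)
)
print(stmt.compile(dialect=postgresql.dialect()))
```

The compiled statement renders `FROM unnest(ARRAY[...]) ... LEFT OUTER JOIN items`, matching the raw SQL above.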
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational
That works!
Thank you.
Brian
On Friday, March 20, 2020 at 12:11:21 PM UTC-4, Mike Bayer wrote:
>
> OK one more addition to the recipe, please do this:
>
> DB_SCHEMA = "my_foo_schema"
>
> with connectable.connect() as connection:
> connection.execut
,
)
with context.begin_transaction():
context.run_migrations()
It works when my schema is 'public', i.e. when it matches the metadata. My
solution will be to set my schema to 'public' and use this simple env.py.
thanks,
brian
On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>
> I just re
On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many sch
and the db.
thanks,
brian
On Thursday, March 19, 2020 at 7:41:54 PM UTC-4, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>>
>> so let me get this straight:
>>
>> 1. you have many schemas
>>
>>
>
context.configure(
    connection=connection,
    include_schemas=True,
    target_metadata=metadata,
    version_table_schema=DB_SCHEMA,
)
with context.begin_transaction():
    context.run_migrations()
>
>
> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>
> Here's my env.py. Thanks for the help.
> Brian
>
>
Here's my env.py. Thanks for the help.
Brian
On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>
> Are there known issues with using autogenerate with multi-tenant
> (schema_translate_map)?
.
It's detecting the alembic version table in the tenant schema. If I run
autogenerate a second time with the new revision file, it fails saying
"Target database is not up to date".
Thanks,
Brian
That works. I check whether the version table exists in the schema and set
version_table_schema if it does. Then, after run_migrations, if the version
table didn't already exist in the schema, I move it there.
Thanks!
Brian
On Tuesday, February 18, 2020 at 1:30:54 PM UTC-5, Mike Bayer wrote
after this?
Thanks,
Brian
; you will need to fill in the "schema" parameter explicitly when you call
> upon op.create_table()
>
>
>
> On Mon, Feb 17, 2020, at 12:47 PM, Brian Hill wrote:
>
> I'm having trouble using enums in conjunction with schema_translate_map
> for postgres migration
I'm having trouble using enums in conjunction with schema_translate_map for
postgres migrations.
My model, single table, single enum.
import enum
from sqlalchemy import MetaData, Enum, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
metadata = MetaData()
Base =
On Monday, December 30, 2019 at 9:07:45 AM UTC-6, Mike Bayer wrote:
>
>
> On Sun, Dec 29, 2019, at 11:54 PM, Brian Paterni wrote:
>
> On Sunday, December 29, 2019 at 1:17:24 AM UTC-6, Mike Bayer wrote:
>
>
> I can't run the test app however 15 seems like your connection
On Sunday, December 29, 2019 at 1:17:24 AM UTC-6, Mike Bayer wrote:
>
>
> I can't run the test app however 15 seems like your connection pool is set
> up at its default size of 5 connections + 10 overflow, all connections are
> being checked out, and none are being returned.
>
Hm, is the issue
.
On Saturday, December 28, 2019 at 10:12:13 PM UTC-6, Brian Paterni wrote:
>
> Hi,
>
> I seemingly have a problem with flask/socketio/eventlet/sqlalchemy + MSSQL
> when >= 15 parallel requests have been made. I've built a test app:
>
> https://github.com/bpaterni/flask-app-
Hi,
I seemingly have a problem with flask/socketio/eventlet/sqlalchemy + MSSQL
when >= 15 parallel requests have been made. I've built a test app:
https://github.com/bpaterni/flask-app-simple-blocking-eventlet
that can be used to reproduce the problem. It will hang if >= 15 parallel
request
You and sqlalchemy never cease to impress. Thank you very much!
On Wed, Jun 26, 2019, 22:14 Mike Bayer wrote:
>
>
> On Wed, Jun 26, 2019, at 1:51 PM, Brian Maissy wrote:
>
> Background: I have a bunch of materialized views (postgres) that are
> dependent on each other. When I
Background: I have a bunch of materialized views (postgres) that are dependent
on each other. When I want to change one of them, I drop cascade it (postgres
does not provide a way to modify the query of an existing materialized view)
and recreate it and all of its dependents (which are dropped
Hi Tomek,
You actually want mysqlclient, which is the maintained fork of mysqldb:
https://pypi.org/project/mysqlclient/
Brian
On Mar 14, 2019, at 7:41 AM, Tomek Rożen <tomek.ro...@gmail.com> wrote:
Hi,
'mysqldb' is the default driver, however it does not support python3. Any
y more generally for this
service.
Brian
On Mar 8, 2019, at 9:56 AM, Walt <waltas...@gmail.com> wrote:
On Friday, March 8, 2019 at 11:32:01 AM UTC-6, Jonathan Vanasco wrote:
Do you control the HTTP API or is this someone else's system?
It's someone else's. I'm living in a world wher
created. Is that correct? Does the `query.execution_options`, or
something in session, accept that keyword?
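The thread is truncated here, so the answer given is not visible. One common PostgreSQL-side approach (an assumption on my part, not necessarily what was recommended in the thread) is to scope a `statement_timeout` to the engine via psycopg2's libpq `options` pass-through, in milliseconds; the DSN below is hypothetical:

```python
from sqlalchemy import create_engine

# Every connection from this engine runs with a 5-second statement_timeout.
# psycopg2 forwards "options" to libpq; the value is in milliseconds.
engine = create_engine(
    "postgresql+psycopg2://user:pass@localhost/dbname",  # hypothetical DSN
    connect_args={"options": "-c statement_timeout=5000"},
)
```

For per-query rather than per-engine scoping, `SET LOCAL statement_timeout` inside the transaction is the usual alternative.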
On Monday, November 12, 2018 at 3:15:23 PM UTC-5, Mike Bayer wrote:
>
> On Mon, Nov 12, 2018 at 2:08 PM Brian Cherinka > wrote:
> >
> > What's the best way to set a
What's the best way to set a timeout for specific queries? I have a custom
query tool that uses SQLAlchemy to build and submit queries. This tool can
be used in a local python session with a database. I'm also using it to
allow queries in a Flask web-app. In general, I do not want to apply a
the existing dialect and monkey patch
max_identifier_length or create a new dialect?
Thanks,
Brian
.
The first part works fine, the second part, not so much.
What's really the best way to do this?
Code is attached below.
Brian
from sqlalchemy.ext.compiler import compiles
from sqlalchemy import SmallInteger, Float
from sqlalchemy import types
from sqlalchemy.dialects import mysql, oracle, postgresql
I'm running SQLAlchemy 1.2.12. When trying to autoload a DB2 table, it
gives me a "no such table" error for a table referenced in a foreign key,
even though that table exists.
import sqlalchemy
cnxstr = 'ibm_db_sa://xyzzy'
db2 = sqlalchemy.create_engine(cnxstr)
meta =
that into place, at least for the testing suite.
Brian
On Tuesday, August 28, 2018 at 2:51:47 PM UTC-4, Mike Bayer wrote:
>
> On Tue, Aug 28, 2018 at 11:32 AM, 'Brian DeRocher' via sqlalchemy
> > wrote:
> > Hey all,
> >
> > I'm writing some automated tests for some legacy python code using a
> > psycopg2
this as it's not logged.
self.do_rollback(connection.connection)
Is this line really needed? What would it be rolling back? Can it be
avoided? When I disable this line of code, the transaction continues and
SQLAlchemy can see the updates from psycopg2.
I've attached a demo file.
Thanks,
Brian
functions, public' fixes everything in SQLAlchemy.
It's been a while since I've looked at my code that sets up the
DatabaseConnection. I must have had an original reason to exclude public
but it doesn't seem relevant anymore. Thanks for your help.
On Wednesday, August 1, 2018 at 2:18:55 PM UTC-4,
set search_path='functions' in order to use them, so it's not ideal.
Ideally, I'd like *sqlalchemy.func* to understand functions that live in
either the "functions" or "public" schema. Any ideas on how to fix this?
Cheers, Brian
, or the SQL FLOAT type? It doesn't
seem like there's a huge functional difference.
Brian
h unit
of work, rather than create a fresh session each time.
Thanks,
Brian.
? Is there any way someone can create the tests, so I know I got this
working correctly? Also, since SQLite doesn't support sequences, how do I
get automated testing to use PG?
A pull request is coming soon.
Thanks,
Brian
--
Brian DeRocher
Noblis | noblis.org | ☎ 703.610.1589 | ✉ brian.deroc
Your code is buggy: you assigned the engine object, and then your
parameters became a tuple that is immediately discarded.
> On Aug 19, 2017, at 10:35 PM, Boris Epstein wrote:
>
> Hello all,
>
> My call to create_engine which used to work at one point now return a
>
g(128))
description = Column(Text)
is_default_schema = Column(Boolean)
but I'm not sure if primaryjoin is the proper argument for relationship and, if
it is, what the expression should be. Or is this something that's best handled
a different way?
Thanks,
Brian
sh Session next time.
However if the framework already does a commit/rollback, why not just
allow the registry to retain the same session object?
Thanks,
Brian.
ething? I'm not really sure. What might be a way to handle this without
just using raw_connection or writing a new dialect?
Thanks,
Brian
"postgresql://scott:tiger@localhost/test", echo='debug')
> Base.metadata.drop_all(e)
> Base.metadata.create_all(e)
>
> s = Session(e)
>
> s.add(A(data=[{"foo": "bar"}, {"bat": "hoho"}]))
>
> s.commit()
>
> a1 =
So i'm trying to insert an array of jsonb values into my database but I
can't seem to format it right, here's my code:
updated_old_passwords = []
updated_old_passwords.append({"index": 1, "password": hashed_password})
user.old_passwords = updated_old_passwords
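A sketch of how such a column can be declared so a list of dicts maps to a Postgres `jsonb[]` column. The `User` model and column names here are hypothetical stand-ins for the poster's schema, and SQLAlchemy 1.4+ is assumed:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY, JSONB
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()

class User(Base):
    __tablename__ = "user_account"  # hypothetical table
    id = Column(Integer, primary_key=True)
    # a Postgres JSONB[] column: a plain list of dicts on the Python side
    old_passwords = Column(ARRAY(JSONB))

# show the DDL this produces on the PostgreSQL dialect
print(CreateTable(User.__table__).compile(dialect=postgresql.dialect()))
```

With this type in place, assigning `user.old_passwords = [{"index": 1, "password": ...}]` works without manual formatting.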
Got it, thanks!
On Friday, December 23, 2016 at 12:11:12 PM UTC-8, Brian Clark wrote:
>
> Is there an update equivalent of this insert statement?
>
> inserts = [{"name": "jon", "age": 20}, {"name": "ashley", &qu
Seems doable in raw SQL (using postgresql btw)
http://stackoverflow.com/questions/18797608/update-multiple-rows-in-same-query-using-postgresql
On Friday, December 23, 2016 at 12:11:12 PM UTC-8, Brian Clark wrote:
>
> Is there an update equivalent of this insert statement?
>
> inser
Is there an update equivalent of this insert statement?
inserts = [{"name": "jon", "age": 20}, {"name": "ashley", "age": 22}]
session.execute(
People.__table__.insert().values(
inserts
)
)
I have this right now but it's still slower than I'd like because it's
using
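One common "update equivalent" of the executemany insert above is an UPDATE driven by `bindparam()`, executed with a list of parameter dictionaries. The sketch below is self-contained against a toy Core table (SQLite here only so it runs anywhere); note the bind names are prefixed to avoid colliding with the column names used in `values()`:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        bindparam, create_engine)

metadata = MetaData()
people = Table(
    "people", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("age", Integer),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(people.insert(), [
        {"name": "jon", "age": 20},
        {"name": "ashley", "age": 22},
    ])
    # executemany UPDATE: one statement, many parameter sets
    stmt = (
        people.update()
        .where(people.c.name == bindparam("b_name"))
        .values(age=bindparam("b_age"))
    )
    conn.execute(stmt, [
        {"b_name": "jon", "b_age": 21},
        {"b_name": "ashley", "b_age": 23},
    ])

with engine.connect() as conn:
    ages = {name: age for _, name, age in conn.execute(people.select())}
print(ages)
```

This issues a single prepared UPDATE with two parameter sets, which is typically much faster than one UPDATE per row.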
']))
On Friday, December 23, 2016 at 12:25:40 AM UTC-8, Brian Clark wrote:
>
> So I'm having an issue with a very slow insert, I'm inserting 223 items
> and it takes 20+ seconds to execute. Any advice on what I'm doing wrong and
> why it would be so slow? Using Postgresql 9.4.8
>
>
So I'm having an issue with a very slow insert, I'm inserting 223 items and
it takes 20+ seconds to execute. Any advice on what I'm doing wrong and why
it would be so slow? Using Postgresql 9.4.8
The line of code
LOG_OUTPUT('==PRE BULK==', True)
>
>
>
> I've yet to see an unambiguous statement of what "the raw SQL" is. If
> it is this:
>
> select c.pk,c.mangaid,c.manga_target_pk, n.z,
> (select (array_agg(unwave.restw))[0:5] as restwave from (select
> (unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as
> w) as
s.postgresql import *
from sqlalchemy.types import Float, Integer, String
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy import create_engine
# import sqlite3
# conn = sqlite3.connect('/Users/Brian/Work/python/manga/test_sqla.db')
# c = conn.cursor()
# c.execute(
Ok. Yeah, I have been trying many different ways of getting results. The
raw SQL that I'm trying to recreate in SQLA is this (for the restwave
column only), which works in postgresql. The limit was only there to
filter the results. You can ignore that limit.
manga=# select
Ok. Here is my test file. I tried to set it up as much as I could, but I
don't normally set up my db and sessions this way, so you may have to hack
a bit here and there to finish some setup. My original setup has classes
from two different schema. I don't know if that makes any difference.
> __tablename__ = 'a'
> id = Column(Integer, primary_key=True)
> bs = relationship("B")
>
> class B(Base):
> __tablename__ = 'b'
> id = Column(Integer, primary_key=True)
> a_id = Column(ForeignKey('a.id'))
>
>
> s = Session()
>
> q =
So I managed to get something to return using this definition of the
@expression, however, I'm not quite there yet.
@hybrid_property
def restwave(self):
if self.target:
redshift = self.target.NSA_objects[0].z
wave = np.array(self.wavelength.wavelength)
return select([func.sum(SavingsAccount.balance)]).\
where(SavingsAccount.user_id==cls.id).\
label('total_balance')
On Friday, July 29, 2016 at 5:29:45 PM UTC-4, Brian Cherinka wrote:
>
>
> Oh interesting. I didn't know that about the @exp
Oh interesting. I didn't know that about the @expression. I'll play
around with the as_scalar() as well, and see if I can get something to
work.
class Wavelength(Base):
__tablename__ = 'wavelength'
__table_args__ = {'autoload': True, 'schema': 'mangadatadb',
'extend_existing':
Traceback (most recent call last)
in ()
> 1 datadb.Cube.restwave
/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/ext/hybrid.pyc
in __get__(self, instance, owner)
738 def __get__(self, instance, owner):
739 if instance is None:
-->
Thanks Mike. That fixed it!
On Monday, May 23, 2016 at 10:18:15 AM UTC-4, Mike Bayer wrote:
>
>
>
> On 05/23/2016 10:12 AM, Brian Cherinka wrote:
> >
> > Hi,
> >
> > It seems like the ARRAY option zero_indexes=True is broken for
> > 2-dimensi
- correct value
session.query(dapdb.EmLine.value[16][18]).first()
(4.962736845652115)
Cheers, Brian
Thanks Mike. That ARRAY_D class did the trick. Thanks for pointing it
out.
On Sunday, May 22, 2016 at 11:52:11 PM UTC-4, Mike Bayer wrote:
>
>
>
> On 05/22/2016 07:12 PM, Brian Cherinka wrote:
> >
> > What's the proper way to return in an ORM query the value
What's the proper way to return in an ORM query the value of a Postgres
array attribute at a given specific index within the array?
I have a db table with a column called value, which is a 2d array, defined
as REAL[][].
My ModelClass is defined as
class EmLine(Base):
__tablename__ =
id_property class yourself, it should
> be compatible with 1.0. The commits are illustrated in
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3653 but you can probably
> just use the hybrid.py straight from the git repository with 1.0.
>
>
>
>
> On 05/10/2016 02:01 PM
I'm trying to build a query system where given a filter parameter name, I
can figure out which DeclarativeBase class it is attached to. I need to do
this for a mix of standard InstrumentedAttributes and Hybrid
Properties/Expressions. I have several Declarative Base classes with
hybrid
g that one of
you out there can fill me in on the missing concept.
- Brian
()). Execute could lazily set the query up and fetch
would execute the SOAP request, download all rows (unless the SOAP service
supports pagination) and cache them locally.
Brian
> On Apr 3, 2016, at 1:27 PM, Nathan Nelson <nrnel...@gmail.com> wrote:
>
> hood.
:
>
> Hello.
>
> I think it would be (much) easier to simply rebuild the query from scratch
> before each run. IMHO the time to build the query is not that big a factor
> to
> justify the added source code complexity.
>
> HTH,
>
> Ladislav Lenart
>
>
Also, I've noticed that when you update the bindparams
q = q.params(x='1234')
and then try to print the whereclause, the parameters are not updated. Yet
in the statement, they are updated.
print q.query.whereclause.compile(dialect=postgresql.dialect(), compile_kwargs={'literal_binds': True})
>
>
>
> well you need a list of names so from a mapped class you can get:
>
> for name in inspect(MyClass).column_attrs.keys():
> if name in :
> q = q.filter_by(name = bindparam(name))
>
> though I'd think if you're dynamically building the query you'd have the
> values already,
object? And it seems like this
bindparam is a nice way to allow for flexible attribute changes without
resetting the query.
Cheers, Brian
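A minimal self-contained sketch of the pattern under discussion: build a Query once against a named `bindparam()`, then supply (or swap) the value later with `Query.params()`. The `Cube` model here is a hypothetical stand-in for the poster's classes, and SQLAlchemy 1.4+ is assumed:

```python
from sqlalchemy import Column, Integer, String, bindparam, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Cube(Base):
    __tablename__ = "cube"  # hypothetical model
    id = Column(Integer, primary_key=True)
    version = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all([Cube(version="v1_5_1"), Cube(version="v2_0_0")])
    s.commit()

    # build once with a named placeholder ...
    q = s.query(Cube).filter(Cube.version == bindparam("ver"))
    # ... then bind (or re-bind) the value at execution time
    v1 = q.params(ver="v1_5_1").all()
    v2 = q.params(ver="v2_0_0").all()
```

`params()` returns a new Query, so the original can be reused with different values without rebuilding the filter clauses.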
On Wednesday, March 2, 2016 at 5:31:09 PM UTC-5, Simon King wrote:
>
> Out of interest, how are you building your query, and why do you need to
> be able to ch
March 2, 2016 at 5:31:09 PM UTC-5, Simon King wrote:
>
> Out of interest, how are you building your query, and why do you need to
> be able to change the values afterwards?
>
> Simon
>
> > On 2 Mar 2016, at 21:59, Brian Cherinka <havo...@gmail.com >
> wro
to
modify any parameter they set after the fact, but I have a crazy amount of
parameters to explicitly do this for.
Cheers, Brian
On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote:
>
>
>
>
>
Hi,
After a query has been constructed with some filter conditions applied, but
before the query has been run, what's the best way to replace the attribute
in the filter clause?
Let's say I have a query like this
q = session.query(Cube).join(Version).filter(Version.version == 'v1_5_1')
and
sampledb schema is not really an option,
unfortunately. I'm at a loss here.
If it should be possible, is there a procedure somewhere documented on how
to get that working?
Thanks for any help.
Cheers, Brian
that a bit since I have a large number of tables to declare. Thanks for
your explanations and help though. I appreciate it.
On Sunday, February 14, 2016 at 5:01:19 PM UTC-5, Brian Cherinka wrote:
>
> What is the proper way to get pluralized shortened names for many-to-many
> tables when usin
What is the proper way to get pluralized shortened names for many-to-many
tables when using automap? I currently have it set to generate pluralized
lowercase names for collections instead of the default "_collection". This
is what I want for one-to-many or many-to-one relationships, but not
Hi,
I'm trying to use automap a schema, and let it generate all classes and
relationships between my tables. It seems to work well for all
relationships except for one-to-one. I know to set a one-to-one
relationship, you must apply the uselist=False keyword in relationship().
What's the
>
>
> you'd need to implement a generate_relationship function as described at
> http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/automap.html#custom-relationship-arguments
>
> which applies the rules you want in order to establish those relationships
> that you'd like to be one-to-one.
.
Am I missing any fundamental concepts related to table inheritance?
Thanks everyone,
Brian Leach
definition. This is very strange.
Brian
On Friday, January 15, 2016 at 6:31:23 PM UTC-5, Simon King wrote:
>
> You shouldn’t need to define the columns. Here’s another test script:
>
> ###
> import math
>
> import sqlalchemy as sa
> import
Ahh. Thanks. Here is the class side then. Still None.
In [14]: print datadb.Sample.nsa_logmstar
None
Brian
On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote:
>
> "Sample()" is an instance. "Sample" is the class. Try:
>
> print data
urn sa.func.log(cls.nsa_mstar)
>
>
> if __name__ == '__main__':
> sm = saorm.sessionmaker()
> session = sm()
> print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9)
>
>
> And here's the output:
>
>
> SELECT sample.pk AS sample_pk
> FROM
er the class definition
> produce?
>
> Simon
>
> > On 15 Jan 2016, at 19:10, Brian Cherinka <havo...@gmail.com
> > wrote:
> >
> > Actually, the class definition is entirely what I posted in the original
> message. I didn't cut anything out of th
tion matters here? Thanks for
all your help.
Brian
On Friday, January 15, 2016 at 5:02:03 PM UTC-5, Brian Cherinka wrote:
>
> Here is the print immediately after my original class definition:
>
> print 'sample nsa log mstar', Sample.nsa_logmstar
>
> and the result
>
> samp
the columns
col1 = list of column 1
col2 = list of column 2
or_(*[and_(table.col_1 == p, table.col_2 == col2[i]) for i, p in enumerate(col1)])
Thanks for the help.
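The pairwise OR-of-ANDs described above can be sketched end to end against a toy Core table (table and column names hypothetical); the essential point is splatting the per-pair `and_()` clauses into `or_()`:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        and_, create_engine, or_, select)

metadata = MetaData()
t = Table(
    "t", metadata,
    Column("id", Integer, primary_key=True),
    Column("col_1", String),
    Column("col_2", String),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert(), [
        {"col_1": "a", "col_2": "b"},
        {"col_1": "a", "col_2": "x"},   # col_1 matches, but the pair does not
        {"col_1": "c", "col_2": "d"},
    ])

col1 = ["a", "c"]
col2 = ["b", "d"]
# OR together one AND per (col1[i], col2[i]) pair
cond = or_(*[
    and_(t.c.col_1 == p, t.c.col_2 == col2[i])
    for i, p in enumerate(col1)
])
with engine.connect() as conn:
    rows = conn.execute(select(t).where(cond)).fetchall()
```

Only the exact (a, b) and (c, d) pairs come back; the row matching column 1 alone is excluded, which is the constraint the poster wanted.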
On Tuesday, October 27, 2015 at 2:05:31 PM UTC+11, Brian Cherinka wrote:
>
>
> What's the best way to query, with the ORM, based
the
objects that have column 1 values of a, or c, but I need to constrain those
to a+b, and c+d
Does this make sense?
Cheers, Brian
Hi Michael,
Thanks for your response. It helped a lot. I ended up going with the
quick and dirty query.from_obj[0] method you described. That was faster to
implement and served my purposes exactly.
Cheers, Brian
>
>
t supports this, but it does seem like the
SQLite dialect could support this via the equivalent of this:
@reflection.cache
def get_schema_names(self, connection, **kw):
    dl = connection.execute("PRAGMA database_list")
    return [r[1] for r in dl]
Is this reasonable? Could it be included in the SQLite dialect?
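The PRAGMA itself can be exercised with nothing but the stdlib driver, which shows why the sketch above would work (`database_list` rows are `(seq, name, file)` tuples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# attach a second in-memory database under the schema name "aux"
conn.execute("ATTACH DATABASE ':memory:' AS aux")
# PRAGMA database_list returns one (seq, name, file) row per schema
schemas = [row[1] for row in conn.execute("PRAGMA database_list")]
print(schemas)  # → ['main', 'aux']
```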
, or is there a better alternative?
I ask because my database may have decimals larger than double precision, and
integers larger than 64 bits, and I'm confused as to what the call:
type(row[0]).
...would return in this case.
Brian
(TableB.option3 X )
but I don't want to join to TableB if I don't have to. I have many
different tables where this kind of situation applies, and it seems
inefficient to join to all other tables just in case I may need to filter
on something.
Any thoughts, help or suggestions?
Thanks, Brian
the new object in the way I am, and I suspect
a bug. Surely if SQLAlchemy has just issued a SELECT .. FOR UPDATE then the
object should be updated with the values of that SELECT?
Regards,
Brian.
-
from __future__ import absolute_import, division, print_function,
unicode_literals
from
class into a function so postgresql will understand it? I'm using
SQLAlchemy 1.0.0 and PostgreSQL 9.3. Any help would be appreciated. Thanks.
Cheers, Brian
=me.engine)
me.Session = scoped_session(sessionmaker(bind=me.engine, autocommit=True,
expire_on_commit=expire_on_commit))
Cheers, Brian
On Wednesday, May 20, 2015 at 12:51:36 PM UTC-4, Michael Bayer wrote:
On 5/20/15 12:09 PM, Brian Cherinka
,
Brian
On Apr 18, 2015, at 2:47 PM, Mike Bayer
mike...@zzzcomputing.com wrote:
On 4/17/15 6:58 PM, Van Klaveren, Brian N. wrote:
Hi,
I'm investigating the use and dependency on SQLAlchemy for a long-term
astronomy project. Given Version 1.0 just came out, I've
, but these
sorts of projects typically outlive the software they are built on and are
often underfunded as far as software maintenance goes, so we try to plan
accordingly.
(Of course, some people just give up and throw everything in VMs behind
firewalls)
Brian
I'm having some difficulty using SQLAlchemy's jsonb operators to produce my
desired SQL.
Intended SQL:
SELECT *
FROM foo
WHERE foo.data->'key1' ? 'a'
...where `foo.data` is formatted like this:
{
'key1': ['a', 'b', 'c'],
'key2': ['d', 'e', 'f']
}
So, I'm
://brian@10.0.1.10:5432/test
e = create_engine(database_url, echo=True)
Base.metadata.create_all(e)
sess = Session(e)
# Insert data
user1 = Foo(id=1, data={'key1': ['a', 'b', 'c'], 'key2': ['d', 'e', 'f']})
user2 = Foo(id=2, data={'key1': ['g', 'h', 'i'], 'key2': ['j', 'k', 'l']})
user3 = Foo(id=3
`type_coerce()` did the trick. Thanks, Mike!
On Wednesday, March 18, 2015 at 12:55:57 PM UTC-4, Michael Bayer wrote:
try using the type_coerce() function instead of cast, it should give you
the
has_key() but won't change the SQL. (type_coerce(Foo.data['key'],
JSONB).has_key())
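Put together, that suggestion compiles to the jsonb `?` operator the poster wanted. A self-contained sketch (the `Foo` model mirrors the thread's description but is reconstructed here, and SQLAlchemy 1.4+ is assumed):

```python
from sqlalchemy import Column, Integer, select, type_coerce
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    data = Column(JSONB)

# data->'key1' ? 'a' : index into the jsonb, re-type the result as JSONB,
# then use has_key(), which renders the ? operator
stmt = select(Foo).where(
    type_coerce(Foo.data["key1"], JSONB).has_key("a")
)
print(stmt.compile(dialect=postgresql.dialect()))
```

`type_coerce()` (unlike `cast()`) changes only the Python-side type of the indexed expression, so the emitted SQL stays `data -> 'key1' ? 'a'` with no extra CAST.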
just a
the other database.
On 21 January 2015 at 12:31, Brian Glogower bglogo...@ifwe.co wrote:
Simon, thanks for your response. Let me wrap my head around this and try
it out.
Brian
On 21 January 2015 at 04:59, Simon King si...@simonking.org.uk wrote:
You don't need to convert it to a Table object
Simon, thanks for your response. Let me wrap my head around this and try it
out.
Brian
On 21 January 2015 at 04:59, Simon King si...@simonking.org.uk wrote:
You don't need to convert it to a Table object, but you probably do
need to add 'schema': 'whatever' to the __table_args__ dictionary
': 'InnoDB'}
id = Column(u'HostID', INTEGER(), primary_key=True)
hostname = Column(String(length=30))
Can you please give an example how to use schema with a query.join(), for
my scenario (two sessions, one for each DB connection)?
Thanks,
Brian
On 20 January 2015 at 16:12, Michael Bayer