Works great, thanks!
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable
Example. See http://stackoverflow.com/help/mcve for a full description.
I have a custom type implementing enums (no idea if there's something
better now, but it's used in many places so
replacing it is not an option atm). Currently I'm using render_item to
simply import the type and the enum and pass the
enum to the type and it works fine.
However, in the alembic
Here's an MCVE-style example showing the problem I have:
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import *
Base = declarative_base()

class Type(Base):
    __tablename__ = 'types'
    id = Column(Integer, primary_key=True)
    def
I have this relationship which adds a `ContributionType.proposed_abstracts`
backref that contains only abstracts not flagged as deleted.
contrib_type = relationship(
    'ContributionType', lazy=True, foreign_keys=contrib_type_id,
    backref=backref('proposed_abstracts',
`from_self().exists()` seems to produce an unnecessarily complex query
(still containing all the original columns)
Would you be interested in a PR adding `Query.row_exists()` or even
`Query.row_exists(disable_eagerloads=True)` which would also disable
eagerloads by default?
> I would normally do session.query(Foo).count()
COUNT is somewhat expensive compared to just checking whether rows exist,
especially if lots of rows match (2.2M rows in the example):
In [2]: %timeit -n1 -r1 EventPerson.query.count()
1 loop, best of 1: 135 ms per loop
In [3]: %timeit -n1 -r1
Is there any shorter/prettier way for this?
session.query(session.query(Foo).exists()).scalar()
It's not hard to add a custom method to the base query class that returns
self.session.query(self.exists()).scalar()
but it feels like something that should be part of SQLAlchemy.
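The custom-method idea above can be sketched out; here is a minimal, self-contained sketch (the `BaseQuery` / `has_rows` names are made up for illustration):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Query, declarative_base, sessionmaker

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)

class BaseQuery(Query):
    def has_rows(self):
        # SELECT EXISTS (SELECT 1 ...): the database can stop at the
        # first matching row instead of counting all of them
        return self.session.query(self.exists()).scalar()

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine, query_cls=BaseQuery)
session = Session()

print(bool(session.query(Foo).has_rows()))  # False: empty table
session.add(Foo())
session.commit()
print(bool(session.query(Foo).has_rows()))  # True
```

The `query_cls` argument to `sessionmaker` is what makes every `session.query(...)` return the subclass.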
Also, is
Unless there'll be a release fixing this soon-ish: is there any workaround
that doesn't require patching sqlalchemy to avoid the issue?
Otherwise I'd probably go for a hack like this:
@contextmanager
def dirty_hack():
    orig = sqlalchemy.orm.properties._orm_full_deannotate
Yes, works fine with this change.
On Thursday, June 9, 2016 at 4:37:31 PM UTC+2, Mike Bayer wrote:
>
>
> if you can confirm the query is correct with this patch:
>
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To unsubscribe from this group
I've already tried not specifying a name - in that case it's `anon_2` in
the error.
Here's an MCVE:
https://gist.github.com/ThiefMaster/593e5a78f08d6323eb1b88270256baa7
I'm trying to add a `deep_children_count` column property to one of my
models.
As a regular property it works perfectly fine but I'd like to make it a
column property so I don't have to spam extra queries if I need the counts
for multiple objects.
So I tried this:
cat_alias =
I think there's a misunderstanding - I don't want to manually populate the
relationship, I want to avoid spamming queries if I get e.g. 10 categories
and need the parent chains for all of them.
Here's a pseudo-ish example of what I'd like to do (without queries in the
loop):
categories =
I have a Category model that has (among other things) a `id` and
`parent_id` since my categories are organized in a tree.
@property
def chain_query(self):
    """Get a query object for the category chain.

    The query retrieves the root category first and then all the
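For the record, one way such a chain query could avoid per-object queries is a recursive CTE; a sketch assuming SQLAlchemy 1.4+ (table and column names follow the thread, the data is made up):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, select
from sqlalchemy.orm import Session, aliased, declarative_base

Base = declarative_base()

class Category(Base):
    __tablename__ = 'categories'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('categories.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Category(id=1),
                 Category(id=2, parent_id=1),
                 Category(id=3, parent_id=2)])
session.commit()

# anchor member: the category whose parent chain we want
chain = (select(Category.id, Category.parent_id)
         .where(Category.id == 3)
         .cte(recursive=True))
parent = aliased(Category)
# recursive step: each round pulls in the parent of the previous rows
chain = chain.union_all(
    select(parent.id, parent.parent_id)
    .where(parent.id == chain.c.parent_id))

chain_ids = {row.id for row in session.execute(select(chain.c.id))}
print(sorted(chain_ids))  # [1, 2, 3]
```

The recursion stops on its own once a row with `parent_id` NULL (the root) is reached.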
Thought something like that... I did actually find that it's the backref
causing it by stepping through tons of SA code ;)
So I guess setting it to None explicitly on creation is the correct way to
avoid the SELECT?
Check this testcase:
https://gist.github.com/ThiefMaster/913446490d0e4c31776d
When assigning an object to the relationship attribute a SELECT is sent, but
this does not happen when explicitly setting the attribute to None before
assigning
the object to it.
If the SELECT being issued is not a
On Mon, Nov 23, 2015 at 6:55 PM Mike Bayer <mike...@zzzcomputing.com> wrote:
>
>
> On 11/23/2015 12:43 PM, Adrian wrote:
> > Hello,
> >
> > I have the following problem - I recently upgraded to the 1.0+ branch
> > from 0.9 and now the PostgreSQL table inherita
I attached a script that reproduces the problem. It actually only happens
if the metadata contains a schema; then the table name in the INHERITS()
clause gets quoted, which causes the problem.
On Monday, November 23, 2015 at 7:13:27 PM UTC+1, Adrian wrote:
>
> Actually, I just found the p
That's true now that you mention it; I actually implemented it myself
before using a simple @compiles with CreateTable.
On Mon, Nov 23, 2015 at 9:33 PM Mike Bayer <mike...@zzzcomputing.com> wrote:
>
>
> On 11/23/2015 03:15 PM, Adrian wrote:
> > I attached a script that r
it to work? metadata.sorted_tables returns the
proper table order (master table first, dependencies later), but the
tables are not created in that order by metadata.create_all().
Thanks,
Adrian
That works and solves it, thanks!
On Mon, Nov 23, 2015 at 9:37 PM Mike Bayer <mike...@zzzcomputing.com> wrote:
>
>
> On 11/23/2015 03:15 PM, Adrian wrote:
> > I attached a script that reproduces the problem. It actually only
> > happens if the metadata contains a
For some methods/properties on a model it might be useful to memoize its
result.
There are some common decorators such as cached_property from werkzeug
which simply add the value to the object's __dict__ after retrieving it the
first time (thus not calling the property's getter again).
Or you
I tried this code:
@listens_for(AttachmentFolder.all_attachments, 'append')
def _attachment_added(target, value, *unused):
    target.modified_dt = now_utc()
However AttachmentFolder.all_attachments is a backref so it doesn't exist
at import time (I usually
register listeners right after the
I hadn't seen that part of the documentation - doing it that way works fine
now!
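For anyone finding this later, the approach boils down to registering the listener once mappers are configured, so the backref-created attribute exists by then; a sketch with made-up model names:

```python
from sqlalchemy import Column, ForeignKey, Integer, event
from sqlalchemy.orm import (backref, configure_mappers, declarative_base,
                            relationship)

Base = declarative_base()

class Folder(Base):
    __tablename__ = 'folders'
    id = Column(Integer, primary_key=True)
    modified = Column(Integer, default=0)

class Attachment(Base):
    __tablename__ = 'attachments'
    id = Column(Integer, primary_key=True)
    folder_id = Column(Integer, ForeignKey('folders.id'))
    # the backref creates Folder.attachments only at mapper configuration
    folder = relationship(Folder, backref=backref('attachments'))

# force configuration so the backref attribute exists before we listen
configure_mappers()

@event.listens_for(Folder.attachments, 'append')
def _attachment_added(target, value, initiator):
    target.modified = 1

f = Folder()
f.attachments.append(Attachment())
print(f.modified)  # 1
```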
I ended up using a signal to update `revisions` automatically when setting
`current_revision`:
I'm trying to store old versions of (some of) the data in one of my tables.
To do so, I'm thinking about models like this (not including anything not
relevant to the case):
class EventNote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    latest_revision = db.relationship(
Currently I have this code which does the job, but it feels extremely dirty:
https://gist.github.com/ThiefMaster/f7a7f7651245ec97a256
My `has_extension` function executes SQL to check if the given postgres
extension is installed or not.
Something like DDL execute_if would be perfect, but from
but that looks pretty hack-ish.
Cheers
Adrian
I have a User model with an association proxy referencing the Email model,
so I can access the user's email via user.email.
Since I'm soft-deleting users and require emails for non-deleted users to
be unique, I have a unique constraint on my email table with a `WHERE not
is_user_deleted`.
In
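Such a partial unique constraint can be declared on the table with a filtered unique index; a sketch (names hypothetical) that just compiles the DDL to show the WHERE clause:

```python
from sqlalchemy import (Boolean, Column, Index, Integer, MetaData, String,
                        Table)
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
emails = Table(
    'emails', metadata,
    Column('id', Integer, primary_key=True),
    Column('email', String, nullable=False),
    Column('is_user_deleted', Boolean, nullable=False))

# unique only among emails of non-deleted users
idx = Index('ix_emails_email_unique', emails.c.email, unique=True,
            postgresql_where=~emails.c.is_user_deleted)

ddl = str(CreateIndex(idx).compile(dialect=postgresql.dialect()))
print(ddl)  # CREATE UNIQUE INDEX ... WHERE NOT is_user_deleted
```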
Nevermind. I had to use `set` instead of `append` in the attribute event.
Ugh, somehow my reply sent by email never arrived here... here's my code:
https://gist.github.com/ThiefMaster/40cd1f91e2a792150496
In my User model I have an association proxy so I can access all email
addresses of the user via User.all_emails.
For a simple exact search I simply use
.filter(User.all_emails.contains('f...@example.com')).
Is it also possible to use e.g. a LIKE match (besides manually joining the
Emails table and
That's the first thing I've tried. Unfortunately it doesn't work...
--- 1 User.find_all(User.all_emails.any(UserEmail.email.like('%adrian%')))
/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/ext/associationproxy.pyc
in any(self, criterion, **kwargs)
367
368
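The manual-join fallback mentioned earlier does work; a self-contained sketch (model names are made up):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    emails = relationship('UserEmail', backref='user')

class UserEmail(Base):
    __tablename__ = 'user_emails'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))
    email = Column(String, nullable=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(emails=[UserEmail(email='adrian@example.com')]))
session.add(User(emails=[UserEmail(email='bob@example.com')]))
session.commit()

# join the underlying relationship and filter on the Email column directly
matches = (session.query(User)
           .join(User.emails)
           .filter(UserEmail.email.like('%adrian%'))
           .all())
print(len(matches))  # 1
```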
I just tried updating from 0.9.9 to 1.0.2 and noticed that this code is now
broken ("'tuple' object has no attribute 'schema'"):
def _before_create(target, connection, **kw):
    schemas = {table.schema for table in kw['tables']}
    for schema in schemas:
Is what I'm trying to do possible, assuming I cannot add any code to the
User model?
In the future there might be plugins in my application which could contain
favorites, but while plugins can add their own models, they are never
allowed to directly modify a class in the application core.
In case it's unclear what exactly I'm trying to do, here's the version with
the relationship defined right in the User model that works fine.
I'd like to do this exact same thing, but somehow define the relationship
outside the User model. Preferably by using the normal declarative syntax
to
Sure, no problem with that.
I'll add a small self-contained example for it tomorrow.
- Adrian
On 25.03.2015 14:21 Michael Bayer wrote:
I'm trying to avoid having to write a full example for you from scratch so if
you could provide everything in one example, both models and where you want
to spread things around so
much (which would be the case if I defined the relationship in the User
model).
-- Adrian
I have been using the automap extension with postgres, with an inheritance
structure using the joined inheritance pattern. I could not figure out a
way to have this reflected from the DB so I define the classes for this
part of my schema explicitly, and when automap initializes, these classes
Adrian
#sqlalchemy.orm.events.MapperEvents.instrument_class
On Feb 6, 2014, at 4:32 AM, Adrian Robert adrian.b.rob...@gmail.com wrote:
One other point, I was trying out the dogpile cache example and ran into
(after I stuck a .encode('utf-8') into the key mangler since I'm using
Python-3 and pylibmc):
_pickle.PicklingError: Can't pickle
Thanks, that works beautifully.
I had noticed the name_for_scalar_relationship parameter but I guess I
wasn't confident enough that I understood what was going on to try it. :-[
Hi All,
I have a few partitioned tables in my PostgreSQL database but I do not know
yet how to make the ORM relationship() use partition constraint exclusion
(http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html#DDL-PARTITIONING-CONSTRAINT-EXCLUSION)
on the instance level.
Never mind,
the problem was that I specified the clause in a secondaryjoin and not in
the primaryjoin of the relationship().
On Thu, Dec 5, 2013 at 10:44 AM, Adrian adrian.schre...@gmail.com wrote:
Hi All,
I have a few partitioned tables in my PostgreSQL database but I do not
know yet how
or if this kind of mapping is simply not supported.
On Thu, Dec 5, 2013 at 3:31 PM, Michael Bayer mike...@zzzcomputing.com wrote:
On Dec 5, 2013, at 6:57 AM, Adrian Schreyer adrian.schre...@gmail.com
wrote:
Actually that was a bit too early but I tracked the problem down to the
many-to-many
=[j.c.partitioned_id, j.c.second_other_id])
or you can just ignore those extra attributes on some of your Partitioned
objects.
On Dec 5, 2013, at 11:03 AM, Adrian Schreyer adrian.schre...@gmail.com
wrote:
Given the three mappings *First*, *Second* and *Partitioned*, I want to
declare
in the example I gave.
On Dec 5, 2013, at 1:55 PM, Adrian Schreyer adrian.schre...@gmail.com
wrote:
The partitioned relationship actually referred to the tertiary table in
both the primary and secondary join - the problem for me was that in the
primaryjoin
primaryjoin=and_(First.first_id
Thank you very much.
I actually did try to use the actual Column, but I couldn't figure out how
to resolve my interdependencies since my column_property is actually a
subselect, and apparently I didn't test it on my test case.
I have a column_property on a polymorphic base class. When I
joinedload/subqueryload a derived class the colum_property makes the query
fail.
class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    type = Column(String(40), nullable=False)
    __mapper_args__
': class
'credoscript.models.aromaticring.AromaticRing' class
'sqlalchemy.util.langhelpers.symbol'
Any ideas what went wrong?
---
AttributeError Traceback (most recent call last)
/home/adrian/ipython
I created a gist with example code https://gist.github.com/1223926
query.sql shows you the basic SQL query of what I am trying to do - fetching
the Residue as an entity and the 12 summed values from the subquery.
orm-code.py is the orm code for the upper part of the query (the part I am
I have seen that it is possible to get an entity from a subquery with the
aliased(entity,statement) construct. Is there also a way to get more than
one entity from a subquery, for example 2?
Cheers
Adrian
class have some overridden __eq__(), __cmp__(), __hash__()
on it ? I think there might be an issue here but I need a lot more
specifics.
On Jul 1, 2011, at 6:34 AM, Adrian wrote:
I just tested it and session.execute(query.statement) returns the
proper resultset. The 'similarity
I found the problem now - the __hash__() function I had did not return
an integer, it returned a tuple of the composite primary key. I
changed it now and it works, thanks for your help!
On Jul 4, 8:50 am, Adrian adr...@schreyer.me wrote:
Yes, the __eq__() and __hash__() functions are overridden
, False, 0.50561797752809)]
On Jun 30, 3:27 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Jun 30, 2011, at 9:23 AM, Adrian wrote:
SQLAlchemy 0.7.1 / psycopg2 2.2.1 / PostgreSQL 9.0
---
I have a weird problem with orm queries that contain custom functions,
in this case from postgres contrib modules. When I do a query like
this
session.query(Entity,
start the paster (TurboGears) server, but after
~5-10 minutes the server dies and there are hundreds of these errors
in the log -- help!! I need to get this server back up ASAP!
Thanks,
Adrian
...but I asked this on their mailing
list and their answer was that what I was doing was correct --
obviously it's not if I'm getting these AssertionErrors...
Thanks for the quick reply, as usual!
On May 20, 9:59 am, Michael Bayer mike...@zzzcomputing.com wrote:
On May 20, 2011, at 9:45 AM, Adrian
Ok, I'll definitely do some quality debugging...
Just to be clear -- I **don't** have to worry about closing my
sessions in each controller?
On May 20, 6:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On May 20, 2011, at 4:12 PM, Adrian wrote:
So if it is the latter, that the Session
from my code to get raw speed?
Thanks,
Adrian
Awesome, I'll work through these suggestions -- thanks for the speedy
reply!
On May 2, 11:29 am, Michael Bayer mike...@zzzcomputing.com wrote:
On May 2, 2011, at 11:14 AM, Adrian wrote:
I'm facing some interesting speed issues with my database that only
seem to crop up within sqlalchemy
to see if anyone has had success with an
implementation of database views in sqlalchemy, and possibly examples
of those cases.
Thanks,
Adrian
Thanks for the quick reply, this is exactly what I was looking for!
Thanks again,
Adrian
On Nov 8, 2:29 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Nov 8, 2010, at 1:16 PM, Adrian wrote:
Hi all,
This is a topic that has been discussed before, but I haven't been
able
= 'spPlate-3586-55181.fits'
autoflush and autocommit are both set to False.
It seems like a straightforward query so I'm confused as to what could
be getting hung up.
Thanks for any insight,
Adrian
On Jul 16, 10:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:
You absolutely need to turn on echoing
arrays that would add to the overhead of a fetch.
There definitely are columns of PG arrays ~4000 elements each, so back
to my first email it seems like the culprit here could be the ARRAY's
Thanks for your help,
Adrian
On Jul 19, 10:10 am, Michael Bayer mike...@zzzcomputing.com wrote:
On Jul 19
seconds
Thanks,
Adrian
On Jul 19, 2010, at 12:24 PM, David Gardner wrote:
Try running that query directly against the database to see how long it
takes. Also try running EXPLAIN on that query to make sure it is using your
indexes properly.
Since you are only using a single filter make sure
), session.commit().
I thought that might be another clue, thanks!
On Jul 19, 1:53 pm, Adrian Price-Whelan adrian@gmail.com wrote:
Here is some more detailed information trying the query multiple ways:
Piping the command into psql and writing to a tmp file takes 12 seconds (tmp
file is 241MB
First off, thanks for your quick replies!
I will look into this, but I can tell you that the arrays are strictly
numbers and the array columns are of type numeric[].
Thanks again,
Adrian
On Jul 19, 2010, at 3:47 PM, Michael Bayer wrote:
On Jul 19, 2010, at 1:53 PM, Adrian Price-Whelan wrote
the filesystem it took less than a
second, but the query on the database took around 25 seconds to complete. Has
anyone else had issues with array types slowing down queries or does this sound
more like another issue?
Thanks!
Adrian
Hi,
I'm using the following table (shortened, removed unnecessary columns)
to store a menu tree.
class MenuNode(Base):
    __tablename__ = 'menu'
    id = Column(Integer, primary_key=True, nullable=False, unique=True)
    parent_id = Column(Integer, ForeignKey('menu.id', onupdate='CASCADE',
Does specifying cascade='all, delete-orphan' on the parent
relationship accomplish what you're after?
delete-orphan doesn't work in self-relational relationships (there are
always some nodes without a parent).
However, moving passive_deletes=True into the backref() fixed it.
--
Adrian
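Putting the thread's conclusion together, the working layout might look like this; a sketch assuming SQLAlchemy 1.4+ (SQLite needs foreign keys switched on explicitly for the ON DELETE CASCADE to fire):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import Session, backref, declarative_base, relationship

Base = declarative_base()

class MenuNode(Base):
    __tablename__ = 'menu'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('menu.id', ondelete='CASCADE'))
    # many-to-one side; the backref is the collection that must not be
    # loaded on delete, hence passive_deletes=True lives in the backref()
    parent = relationship('MenuNode', remote_side=[id],
                          backref=backref('children', passive_deletes=True))

engine = create_engine('sqlite://')

@event.listens_for(engine, 'connect')
def _fk_pragma(dbapi_connection, connection_record):
    # SQLite only honours ON DELETE CASCADE with foreign keys enabled
    dbapi_connection.execute('PRAGMA foreign_keys=ON')

Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([MenuNode(id=1), MenuNode(id=2, parent_id=1)])
session.commit()

root = session.get(MenuNode, 1)
session.delete(root)   # children never loaded: the DELETE goes straight
session.commit()       # to the database, which cascades to the child row
print(session.query(MenuNode).count())  # 0
```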
Heyho!
Has anybody worked with a Sybase Anywhere (ASA 9 -- yes, very old ...)
database? I may need to build a simple CRUD (actually only R and U ;-)
frontend to some legacy application. (I'll probably give TurboGears a try
for this.)
I do have a JDBC driver, and I *think* ODBC should work
))) that flattens it to a simple list. What would be the best way to
implement this?
Cheers,
Adrian
The read-only version was all I needed, thanks.
On Jan 17, 3:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Jan 17, 2010, at 9:20 AM, Adrian wrote:
Hi,
is there an easy way to apply a function to the items returned by
association_proxy? Currently, I have a setup like this: A-B-C
Heyho!
[multi-column primary key where one column is autoincrement int]
On Wednesday 16 December 2009 05.29:54 Daniel Falk wrote:
The true problem here
is with sqlite, which tries to make a smart choice about whether to
autoincrement or not. And it gets it wrong. SQLAlchemy is correct to
Heyho!
On Wednesday 16 December 2009 16:36:10 Michael Bayer wrote:
You need to either use the default keyword and specify a
function or SQL expression that will generate new identifiers, or just set
up the PK attributes on your new objects before adding them to the
session.
... or just
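The "set up the PK attributes before adding" advice can be sketched like this (the max()+1 below is demo-grade only; a real application would use a Sequence or a lock to avoid races):

```python
from sqlalchemy import Column, Integer, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Entry(Base):
    __tablename__ = 'entries'
    # composite PK: no database-side autoincrement to lean on
    id = Column(Integer, primary_key=True)
    version = Column(Integer, primary_key=True, default=0)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

# assign the whole PK up front, before add(); max()+1 is demo-grade only
next_id = (session.execute(select(func.max(Entry.id))).scalar() or 0) + 1
session.add(Entry(id=next_id, version=0))
session.commit()
print(next_id)  # 1
```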
Heyho!
My small blog-style web site should support article versioning, so:
class Entry(DeclarativeBase):
    id = Column(Integer, autoincrement=True, primary_key=True)
    version = Column(Integer, primary_key=True, default=0)
    # ... and more stuff (content, author, ...)
it seems
Heyho!
On Friday 06 November 2009 02.46:11 Jon Nelson wrote:
... was performing an individual
INSERT for every single row.
I don't know sqlalchemy well enough, but for big bulk imports on the SQL
side, shouldn't COPY be used? Which is, as far as I know, pg-specific /
non-standard SQL.
cheers
On Tuesday 06 October 2009 14.45:33 Yannick Gingras wrote:
[...]
Is there another way to do it? Something that would be portable and
to both MySQL and Postgres would be great.
Since both pg and mysql have a native enum type, it's only a matter of
writing the appropriate code in the SQL
Heyho!
Is there a tutorial on vertical partitioning?
I have a table Entry and a table EntryFlags (1:1 relation from
EntryFlags to Entry). The idea is that while there is a large number of
Entry rows only a small number has flags set (and thus needs an entry in
EntryFlags; note that they
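A sketch of that 1:1 layout (names made up): the flags table's primary key doubles as the foreign key, and uselist=False makes the attribute scalar:

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Entry(Base):
    __tablename__ = 'entries'
    id = Column(Integer, primary_key=True)
    # scalar 1:1 attribute; None for the (common) flag-less entries
    flags = relationship('EntryFlags', uselist=False, backref='entry')

class EntryFlags(Base):
    __tablename__ = 'entry_flags'
    # the PK doubles as the FK, so at most one flags row per entry
    entry_id = Column(Integer, ForeignKey('entries.id'), primary_key=True)
    sticky = Column(Boolean, default=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Entry(id=1))                                  # no flags row
session.add(Entry(id=2, flags=EntryFlags(sticky=True)))   # flagged
session.commit()

print(session.get(Entry, 1).flags)          # None
print(session.get(Entry, 2).flags.sticky)   # True
```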
On Wednesday 30 September 2009 21.58:55 Kevin Horn wrote:
I have a table Entry and a table EntryFlags (1:1 relation from
EntryFlags to Entry). The idea is that while there is a large number
of Entry rows only a small number has flags set (and thus needs an
entry in EntryFlags; note that
On Saturday 22 August 2009 01.08:05 David Bolen wrote:
Adrian von Bidder avbid...@fortytwo.ch writes:
Ideas comments?
For what it's worth, I'd think that the best sort of audit would be
something done in the database itself, since it would audit any
changes whether done through any
Heyho!
Instead of creating changeby / changed fields on all my tables, I'm
planning to write some model classes where changes would be recorded in a
separate audit trail table (the obvious benefit beyond not requiring the
additional fields is that I can preserve the history as far back as I
On Wednesday 17 June 2009 19.08:10 klaus wrote:
... whether a query will yield any result ...
The best approximation
seems to be
query.first() is not None
which can select a lot of columns. I often see
query.count() > 0
which can become quite expensive on a DBMS like PostgreSQL.
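The three idioms side by side, as a sketch:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Thing() for _ in range(5)])
session.commit()

q = session.query(Thing)
print(q.count() > 0)                        # counts every matching row
print(q.first() is not None)                # LIMIT 1, fetches one full row
print(session.query(q.exists()).scalar())   # EXISTS, stops at the first hit
```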
On Saturday 06 June 2009 17.39:20 naktinis wrote:
I think this was not the case, since I didn't expect the merged result
to be ordered.
To be more precise, the query looks like:
q1 = Thing.query().filter(...).order_by(Thing.a.desc()).limit(1)
q2 =
On Saturday 06 June 2009 14.18:33 naktinis wrote:
I want to use union on two queries, which have different order:
q1 = Thing.query().order_by(Thing.a)
q2 = Thing.query().order_by(Thing.b)
q = q1.union(q2).all()
SQL doesn't work as you think it does here.
A UNION does not concatenate the
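In other words, the ORDER BY has to be applied to the compound statement as a whole; a sketch:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)
    a = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Thing(a=v) for v in (4, 1, 9, 7)])
session.commit()

q1 = session.query(Thing).filter(Thing.a < 5)
q2 = session.query(Thing).filter(Thing.a > 6)
# per-branch ORDER BY would be meaningless; order the whole compound
merged = q1.union(q2).order_by(Thing.a).all()
print([t.a for t in merged])  # [1, 4, 7, 9]
```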
[web logs - db]
On Tuesday 26 May 2009 00.27:03 Michael Bayer wrote:
the best thing to do would be to experiment with some various schemas
and see what works best
Also, it's extremely important to keep in mind that SQL databases can only
work well with big tables if you create the right
On Friday 22 May 2009 23.00:05 Werner F. Bruhin wrote:
What do you want to do with the autoincrement column? Often these are
used for primary keys, which in turn get used as foreign keys.
I want to use the id as filename; the table will cache some info that comes
from the file. Using it as a
On Friday 22 May 2009 01.59:13 Michael Bayer wrote:
otherwise if you have any advice on how to get 0.4/0.3
delisted from such a prominent place on Google, that would be
appreciated.
Since removing them entirely is an option for you, perhaps just completely
remove them from search engines
On Friday 22 May 2009 08.43:09 Alexandre Conrad wrote:
Don't you want that non-null column to be a foreign key ?
Would that make a difference?
cheers
-- vbi
2009/5/21 Adrian von Bidder avbid...@fortytwo.ch:
Hi,
Is it possible to fetch the values of an autoincrement field without
On Friday 22 May 2009 12.01:05 Iwan wrote:
Naïvely, I thought you'd create an X, flush it, and then catch any
IntegrityError's thrown. [...]
I know that PostgreSQL can't continue in a transaction after an error, you
have to roll back the transaction. I don't know what the SQL standard says
On Friday 22 May 2009 13.58:34 Alexandre Conrad wrote:
Hello Adrian,
2009/5/22 Adrian von Bidder avbid...@fortytwo.ch:
On Friday 22 May 2009 08.43:09 Alexandre Conrad wrote:
Don't you want that non-null column to be a foreign key ?
Would that make a difference?
That's what a foreign
Hi,
Is it possible to fetch the values of an autoincrement field without
flushing the object to the DB?
(In postgres, I obviously can manually fetch nextval of the automatically
generated sequence, but I lose the portability that way ...)
Why?
Because I need the id to generate data that will
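One portable alternative here is to flush (not commit), so the INSERT runs and the id comes back while the transaction is still open; a sketch (model name made up):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)   # autoincrementing PK
    filename = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

item = Item()
session.add(item)
session.flush()          # INSERT runs, id is assigned, txn still open
item.filename = 'data-%d.bin' % item.id
session.commit()
print(item.id, item.filename)  # 1 data-1.bin
```

If the transaction rolls back, the generated id is simply discarded along with the row.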
I have exactly the same problem with 0.5.3. On one machine the mapping
works fine with 0.5.2; on another, with 0.5.3, I get the error you
mentioned.
On Apr 2, 3:36 pm, Andreas Jung li...@zopyx.com wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
I am getting the following error after
I guess the solution to my problem is simple, although I did not
manage to find it.
The problem is as follows: I calculate the bray-curtis distance
between an input and the rows in my table and give the value an alias
('brayCurtis'). What I want is to order the resultSet by brayCurtis
and return
Michael Bayer [EMAIL PROTECTED] wrote:
On Apr 23, 2008, at 6:55 AM, Adrian wrote:
I guess the solution to my problem is simple, although I did not
manage to find it.
The problem is as follows: I calculate the bray-curtis distance
between an input and the rows in my table and give
I am trying to change the default column type mapping in sqlalchemy.
Analogous to the description in the MySQLdb User's Guide
(http://mysql-python.sourceforge.net/MySQLdb.html) I tried the following.
from MySQLdb.constants import FIELD_TYPE

my_conv = {FIELD_TYPE.DECIMAL: float}
ENGINE =
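For a dialect-independent take on the same idea, a TypeDecorator can convert result values at the SQLAlchemy level instead of at the driver level; a sketch (class names made up):

```python
from sqlalchemy import Column, Integer, Numeric, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.types import TypeDecorator

class DecimalAsFloat(TypeDecorator):
    """NUMERIC in the database, plain float in Python."""
    impl = Numeric
    cache_ok = True

    def process_result_value(self, value, dialect):
        # runs for every fetched value of this column type
        return None if value is None else float(value)

Base = declarative_base()

class Price(Base):
    __tablename__ = 'prices'
    id = Column(Integer, primary_key=True)
    amount = Column(DecimalAsFloat(10, 2))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Price(amount=3.5))
session.commit()

loaded = session.query(Price).one()
print(type(loaded.amount).__name__)  # float
```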
I am a bit confused by the behavior of the methods all() and one() when
the Query returns an empty result set. In the case of all() it returns an
empty list, whereas one() throws an exception
(sqlalchemy.exceptions.InvalidRequestError). I am sure there was a reason
to implement it as it is
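The contract, summarized as a runnable sketch (in newer releases the exception is the more specific NoResultFound, a subclass of InvalidRequestError; `one_or_none()` also exists there):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.exc import NoResultFound

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

q = session.query(Widget)        # empty table
print(q.all())                   # [] -- no error for an empty set
print(q.first())                 # None
try:
    q.one()                      # the exactly-one contract is violated
    raised = None
except NoResultFound as exc:
    raised = type(exc).__name__
print(raised)  # NoResultFound
```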