Hi Mark,
I don't know much about SQLite performance, but 390k inserts in 4 minutes is
about 1625 inserts/second, which I think is pretty impressive :D
One of the things that most affects insert and update performance is the
indexes set up on fields. For each indexed field inserted, the db must write
the data
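The point about indexes can be illustrated with a minimal sketch in modern SQLAlchemy (1.4+); the table and index names are made up for the example. Defining the secondary index only after the bulk load means the database builds it once instead of maintaining it row by row:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
items = sa.Table("items", meta,
                 sa.Column("id", sa.Integer, primary_key=True),
                 sa.Column("name", sa.String))
meta.create_all(engine)  # table only; no secondary index defined yet

# bulk load via a single executemany
with engine.begin() as conn:
    conn.execute(items.insert(), [{"name": "n%d" % i} for i in range(10000)])

# build the index once, after loading, instead of per inserted row
idx = sa.Index("ix_items_name", items.c.name)
idx.create(engine)

with engine.connect() as conn:
    n = conn.execute(sa.select(sa.func.count()).select_from(items)).scalar()
print(n)  # 10000
```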
Hi Michael,
Thank you very much for your response!
I'll try to code some workaround, and post it here if it works.
Thank you again!
2011/8/3 Michael Bayer mike...@zzzcomputing.com
On Aug 3, 2011, at 12:32 PM, Pau Tallada wrote:
Hi!
I have a model with a many-to-many relation, similar
On Aug 3, 2011, at 8:38 PM, Mark Erbaugh wrote:
I'm using SA (with SQLite) with a schema like:
A - B - C - D
where - means that the tables have a one to many relationship
I'm populating a sample data set where there are 25 rows in A, 25 rows in B
for each row in A, 25 rows in C for
Dear all,
I have the following query that is made of 3 queries on only ONE
TABLE:
1. Step simple normal query
somequery = session.query(tab.columns['name'], tab.columns['id']).filter(tab.columns['value'] == 6)
2. Step: finding the max serial number for each series
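One common shape for "max serial number per series" is a grouped subquery joined back to the table. A hedged sketch; the column names (serie, serial, value) are guesses based on the question:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
tab = sa.Table("tab", meta,
               sa.Column("id", sa.Integer, primary_key=True),
               sa.Column("serie", sa.String),
               sa.Column("serial", sa.Integer),
               sa.Column("value", sa.Integer))
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(tab.insert(), [
        {"serie": "A", "serial": 1, "value": 6},
        {"serie": "A", "serial": 2, "value": 6},
        {"serie": "B", "serial": 5, "value": 6},
    ])

# subquery: max serial per serie
maxq = (sa.select(tab.c.serie, sa.func.max(tab.c.serial).label("max_serial"))
        .group_by(tab.c.serie)
        .subquery())

# join back to pick the row holding that max
q = (sa.select(tab.c.serie, tab.c.serial)
     .join(maxq, sa.and_(tab.c.serie == maxq.c.serie,
                         tab.c.serial == maxq.c.max_serial)))

with engine.connect() as conn:
    rows = sorted(tuple(r) for r in conn.execute(q))
print(rows)  # [('A', 2), ('B', 5)]
```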
I would like to use an array comparison in a query, but with each array
element being the result of a function. I do this by making the array with:
terms = [func.dmetaphone(t) for t in terms.split()]
When I use this array in a comparison I get an error, "can't adapt type
'Function'", because it is
PG ARRAY comparisons and such aren't automatically supported right now. Such
an expression needs to be coerced into a clause element of some kind. Perhaps
a ClauseList:
from sqlalchemy.sql.expression import ClauseList
terms = ClauseList(*terms)
which would produce a comma separated
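Michael's ClauseList suggestion can be seen by compiling the expression (no database needed); dmetaphone() here is the PostgreSQL fuzzystrmatch function mentioned in the thread:

```python
from sqlalchemy import func
from sqlalchemy.sql.expression import ClauseList

terms = "john smith"
# wrap the list of function calls so it renders as one comma-separated clause
clauses = ClauseList(*[func.dmetaphone(t) for t in terms.split()])
print(str(clauses))
# dmetaphone(:dmetaphone_1), dmetaphone(:dmetaphone_2)
```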
FWIW, I tried the map_to() method but still received the PK error.
The following method, however, worked fine:
ss = SqlSoup(db.engine)
meta = ss._metadata
tbl_vrmf = sa.Table('vRMF', meta, autoload=True)
vrmf_pks = [tbl_vrmf.c.dateId, tbl_vrmf.c.ident, tbl_vrmf.c.mnum]
vrmf = ss.map(tbl_vrmf, primary_key=vrmf_pks)
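Independent of SqlSoup, the underlying technique is handing the mapper an explicit primary_key when a reflected table or view doesn't declare one. A runnable sketch with hypothetical names, using the modern imperative-mapping API (SQLAlchemy 1.4+):

```python
import sqlalchemy as sa
from sqlalchemy.orm import registry, Session

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
# a "view-like" table with no PRIMARY KEY constraint of its own
vrmf = sa.Table("vrmf", meta,
                sa.Column("dateId", sa.Integer),
                sa.Column("ident", sa.String),
                sa.Column("mnum", sa.Integer))
meta.create_all(engine)

class VRMF:
    pass

mapper_registry = registry()
# tell the ORM which columns to treat as the primary key
mapper_registry.map_imperatively(
    VRMF, vrmf,
    primary_key=[vrmf.c.dateId, vrmf.c.ident, vrmf.c.mnum])

with engine.begin() as conn:
    conn.execute(vrmf.insert(), [{"dateId": 1, "ident": "x", "mnum": 2}])

session = Session(engine)
obj = session.query(VRMF).one()
print(obj.ident)  # x
```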
On Aug 4, 2011, at 9:22 AM, Michael Bayer wrote:
The range of speedups here would be between 30% and 80%, with direct usage of
connection/session .execute() with Table metadata giving you the 80%.
Thanks. I'll look into your suggestions
I'm not sure what transaction is in
On Aug 4, 2011, at 1:20 PM, Mark Erbaugh wrote:
Originally, I thought transaction was from the standard Python library, but
upon research, it looks like it's from the transaction package that is part
of Zope. It's included in the Pyramid installation.
Pyramid installs the zope
I suppose then the simplest solution is to make a function in the database
that will execute a function on each element of an array and use:
.having(func.array_agg(metaphones.columns.mphone).op('@')(func.metaphone_array(terms)))
This seems to work fine.
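Jason's HAVING clause can be sketched by compiling it rather than executing it (it needs a PostgreSQL server plus the user-defined metaphone_array() function he describes). The thread shows .op('@'); current PostgreSQL spells the array-contains operator @>, which this sketch assumes:

```python
from sqlalchemy import column, func
from sqlalchemy.dialects import postgresql

mphone = column("mphone")
# array_agg(...) @> metaphone_array(...): "aggregated array contains
# the array produced by the custom DB function"
expr = func.array_agg(mphone).op("@>")(func.metaphone_array("john smith"))
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)
# array_agg(mphone) @> metaphone_array(%(metaphone_array_1)s)
```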
Thanks,
Jason
On Aug 4, 2011, at 9:22 AM, Michael Bayer wrote:
On Aug 3, 2011, at 8:38 PM, Mark Erbaugh wrote:
I'm using SA (with SQLite) with a schema like:
A - B - C - D
where - means that the tables have a one to many relationship
I'm populating a sample data set where there are 25 rows in
On Aug 4, 2011, at 2:48 PM, Mark Erbaugh wrote:
Thanks again for the help. I decided to time the various approaches. My
original approach took 4:23 (minutes: seconds). Note: all my times included
data generation and insertion into a SQLite on-disk database.
This took 3:36
17%...
Hey,
Tried adding cascade to Rating's backref call like so:
subrating = relationship(SubRating, backref=backref('rating',
    cascade='all, delete-orphan',
    uselist=False))
This unfortunately doesn't work - when I delete a Rating, the
according Subratings are NOT removed.
What am I doing wrong?
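A runnable sketch of the cascade behavior being asked for: with cascade="all, delete-orphan" configured on the Rating-to-SubRating side (here via the backref, as in the snippet), deleting a Rating deletes its SubRating. Model and column names are filled in as assumptions:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, backref, Session

Base = declarative_base()

class Rating(Base):
    __tablename__ = "rating"
    id = sa.Column(sa.Integer, primary_key=True)

class SubRating(Base):
    __tablename__ = "subrating"
    id = sa.Column(sa.Integer, primary_key=True)
    rating_id = sa.Column(sa.Integer, sa.ForeignKey("rating.id"))
    # backref() kwargs configure the reverse (Rating.subrating) side,
    # which is where delete-orphan must live for this direction
    rating = relationship(
        Rating,
        backref=backref("subrating", uselist=False,
                        cascade="all, delete-orphan"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
r = Rating()
r.subrating = SubRating()
session.add(r)
session.commit()

session.delete(r)   # cascades to the SubRating
session.commit()
remaining = session.query(SubRating).count()
print(remaining)  # 0
```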
Table A has a one to many relationship with Table B. There may be zero or more
rows in B for each row in A.
I would like to have a query that retrieves all the rows in table A joined with
the first related row in table B (if one exists). In this case, each row in
table B has a DATE field and
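One common way to express "each A joined to its earliest B row, if any" is a grouped min(date) subquery plus an outer join. A hedged sketch; the table and column names follow the description in the message:

```python
import datetime
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
a = sa.Table("a", meta, sa.Column("id", sa.Integer, primary_key=True))
b = sa.Table("b", meta,
             sa.Column("id", sa.Integer, primary_key=True),
             sa.Column("a_id", sa.Integer, sa.ForeignKey("a.id")),
             sa.Column("date", sa.Date))
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(a.insert(), [{"id": 1}, {"id": 2}])
    conn.execute(b.insert(), [
        {"a_id": 1, "date": datetime.date(2011, 8, 1)},
        {"a_id": 1, "date": datetime.date(2011, 8, 4)},
    ])

# earliest B date per a_id
firstb = (sa.select(b.c.a_id, sa.func.min(b.c.date).label("first_date"))
          .group_by(b.c.a_id)
          .subquery())

# outer join so A rows with zero B rows still appear
q = (sa.select(a.c.id, firstb.c.first_date)
     .select_from(a.outerjoin(firstb, a.c.id == firstb.c.a_id)))

with engine.connect() as conn:
    rows = sorted(tuple(r) for r in conn.execute(q))
print(rows)  # [(1, datetime.date(2011, 8, 1)), (2, None)]
```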
awkwardly and inefficiently from a SQL perspective. contains_eager() with an
explicit query() would produce a better result
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
import datetime
class A(Base):
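Michael's attached example is truncated above. As a hedged reconstruction of the pattern he's describing: contains_eager() takes the relationship attribute (not a query), and you write the JOIN yourself so the ORM knows the joined columns populate it. Model names are illustrative:

```python
import sqlalchemy as sa
from sqlalchemy.orm import (declarative_base, relationship, Session,
                            contains_eager)

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = sa.Column(sa.Integer, primary_key=True)
    bs = relationship("B")

class B(Base):
    __tablename__ = "b"
    id = sa.Column(sa.Integer, primary_key=True)
    a_id = sa.Column(sa.Integer, sa.ForeignKey("a.id"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(A(id=1, bs=[B(id=10)]))
session.commit()

# contains_eager receives the attribute A.bs, not a query
q = (session.query(A)
     .join(A.bs)
     .options(contains_eager(A.bs)))
a1 = q.one()
print(len(a1.bs))  # 1
```

Passing a query object into contains_eager() is what raises "ArgumentError: mapper option expects string key or list of attributes".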
Thanks,
Could you explain how to do contains_eager with an explicit query()? I tried
putting a query inside a call to contains_eager, but got an error:
ArgumentError: mapper option expects string key or list of attributes
Mark
On Aug 4, 2011, at 6:39 PM, Michael Bayer wrote:
awkwardly and
Hi there,
I have a data driven database schema that I am trying to implement in
sqlalchemy. Here's what the tables look like:

user
    user_id | |

user_properties
    property_id | property_name | property_description

user_properties_data
    user_id | property_id | property_value
What I would
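The three tables above form an entity-attribute-value layout. A minimal declarative sketch of that schema, with class names invented for the example and column names taken from the message:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    user_id = sa.Column(sa.Integer, primary_key=True)
    properties = relationship("UserPropertyData")

class UserProperty(Base):
    __tablename__ = "user_properties"
    property_id = sa.Column(sa.Integer, primary_key=True)
    property_name = sa.Column(sa.String)
    property_description = sa.Column(sa.String)

class UserPropertyData(Base):
    __tablename__ = "user_properties_data"
    # composite primary key: one value per (user, property) pair
    user_id = sa.Column(sa.Integer, sa.ForeignKey("user.user_id"),
                        primary_key=True)
    property_id = sa.Column(sa.Integer,
                            sa.ForeignKey("user_properties.property_id"),
                            primary_key=True)
    property_value = sa.Column(sa.String)
    property = relationship(UserProperty)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([
    UserProperty(property_id=1, property_name="color"),
    User(user_id=1,
         properties=[UserPropertyData(property_id=1,
                                      property_value="red")]),
])
session.commit()

u = session.query(User).one()
name = u.properties[0].property.property_name
print(name)  # color
```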
I've been working on this tiny project now and then, and here's the proof of concept.
http://paste.pound-python.org/show/10578/
I think I'm somewhat misusing the _set_parent() here though.
On Sunday, July 24, 2011 06:52:45 PM Michael Bayer wrote:
On Jul 24, 2011, at 8:39 AM, Fayaz Yusuf Khan wrote:
The