Hi,
I am trying to find an efficient way to query objects stored in the session.
Do I have to loop through the session to find, for example, all objects whose
name is John, or is there a more efficient way to do this?
filter() would be perfect, but it is not applicable to the session.
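The session indeed has no filter(), but its contents can be scanned in plain Python via the identity map and the pending set. A minimal sketch, assuming a hypothetical `User` model with a `name` attribute; the helper name `in_session_matching` is made up:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([User(name="John"), User(name="Steve"), User(name="John")])

def in_session_matching(session, cls, **criteria):
    # Loop over loaded (identity map) and pending (session.new) objects;
    # no SQL is emitted, so this only sees what the session already holds.
    candidates = list(session.identity_map.values()) + list(session.new)
    return [obj for obj in candidates
            if isinstance(obj, cls)
            and all(getattr(obj, k) == v for k, v in criteria.items())]

johns = in_session_matching(session, User, name="John")
```

For anything already flushed to the database, an ordinary `session.query(User).filter(User.name == "John")` is the more usual route.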
Hello,
I have got a query of the following type:
rows = session.query(*[func.count().over().label('count')]+map(lambda column:
MyClass.__dict__[column], columns)).filter(...).limit(n).offset(m).all()
It returns the number of results together with the values of the selected
columns. The problem is that the same query with distinct() added:
rows = session.query(*[func.count().over().label('count')]+map(lambda column:
MyClass.__dict__[column], columns)).filter(...).distinct().limit(n).offset(m).all()
does not.
Thanks
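For what it's worth, SQL evaluates window functions before DISTINCT is applied, so a count().over() in the same SELECT still counts the duplicated rows. One workaround is to de-duplicate in a subquery first and count over that. A Core sketch with made-up table and column names:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select, func)

engine = create_engine("sqlite://")
metadata = MetaData()
tab = Table("mytable", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(50)),
            Column("value", Integer))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(tab.insert(), [
        {"name": "a", "value": 1}, {"name": "a", "value": 1},
        {"name": "b", "value": 2}, {"name": "c", "value": 3}])

# De-duplicate first, then count over the distinct rows:
distinct_rows = select(tab.c.name, tab.c.value).distinct().subquery()
stmt = (select(func.count().over().label("total"),
               distinct_rows.c.name, distinct_rows.c.value)
        .limit(2).offset(0))
with engine.connect() as conn:
    page = conn.execute(stmt).all()
```

Each returned row carries the total of distinct hits (3 here) even though LIMIT trims the page to 2 rows, because LIMIT is applied after the window function.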
On Wednesday, May 16, 2012 at 12:07:26 PM UTC+2, Eduardo wrote:
Hello,
I have got a query of the following type:
rows = session.query(*[func.count().over().label('count')]+map(lambda column:
MyClass.__dict__[column
as the input argument for the count() function.
Is there any way to solve this?
Thanks
SORRY,
I want to create a query that will return as its first element the number of
hits:
SELECT COUNT(*) FROM (SELECT (DISTINCT(col1,col2,col3)) FROM mytable WHERE
col1='something' LIMIT 1 OFFSET 2)
and the second part should be the values of the hits:
SELECT (DISTINCT(col1,col2,col3)) FROM
DISTINCT ON (...) is a PostgreSQL-specific extension (see
http://www.postgresql.org/docs/9.0/static/sql-select.html#SQL-DISTINCT )
On May 16, 2012, at 1:50 PM, Eduardo wrote:
Hi,
Is there any function in sqlalchemy that filters out duplicates?
For example, the following rows satisfy a query:
1. (john, 23, lpgh)
2. (steve, 35, dbr)
3. (john, 76, qwe)
4. (mark, 35, epz)
I would like my query results to contain only one row with john (either 1
or 3; which one is not
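On PostgreSQL, DISTINCT ON (name) does exactly this; a portable alternative is GROUP BY on the deduplicated column with an aggregate choosing the surviving row. A sketch with assumed table and column names:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select, func)

engine = create_engine("sqlite://")
metadata = MetaData()
people = Table("people", metadata,
               Column("id", Integer, primary_key=True),
               Column("name", String(50)),
               Column("age", Integer),
               Column("code", String(10)))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(people.insert(), [
        {"name": "john", "age": 23, "code": "lpgh"},
        {"name": "steve", "age": 35, "code": "dbr"},
        {"name": "john", "age": 76, "code": "qwe"},
        {"name": "mark", "age": 35, "code": "epz"}])

# One row per name; min(id) decides which duplicate survives.
stmt = (select(people.c.name, func.min(people.c.id).label("id"))
        .group_by(people.c.name))
with engine.connect() as conn:
    rows = conn.execute(stmt).all()
```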
Hi,
In order to avoid bottlenecks I am forced to limit the number of returned
results using LIMIT and OFFSET.
Since I am not returning all results of a query, I need to include the
number of hits in the result:
somequery.count()
somequery.limit(n).offset(m).all()
The problem is that response
Hi,
Is it possible to set this option to off by using sqlalchemy?
Thanks
Ed
--
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To view this discussion on the web visit
https://groups.google.com/d/msg/sqlalchemy/-/JGkEvUVvoncJ.
To post to this group,
Hi,
I create a certain number of objects in a loop; I check whether each of the
objects is already stored in the session, and if it is not, I add it to
the session:
session = Session()
for item in items:
    item1 = create_item()
    if item1 not in session:
        session.add(item1)
session.commit()
The
Hi,
The keys of the following dictionary are column names, while the values are
lists of options for the column in question:
colvsopt = {'column1': ['option1', 'option2'], 'column2':
['option3', 'option4'], ...}
I need to write a query:
SELECT * FROM SOMETABLE WHERE (column1=option1 OR column1=option2)
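One way to build that WHERE clause from the dictionary is in_() per column (equivalent to the OR of the equalities), combined with and_() across columns. A sketch; the table and sample data are made up:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select, and_)

engine = create_engine("sqlite://")
metadata = MetaData()
sometable = Table("sometable", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("column1", String(20)),
                  Column("column2", String(20)))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(sometable.insert(), [
        {"column1": "option1", "column2": "option3"},
        {"column1": "option2", "column2": "option9"},
        {"column1": "optionX", "column2": "option4"}])

colvsopt = {"column1": ["option1", "option2"],
            "column2": ["option3", "option4"]}

# col IN (opts) is the same as (col=opt1 OR col=opt2); AND across columns.
conditions = [sometable.c[col].in_(opts) for col, opts in colvsopt.items()]
stmt = select(sometable).where(and_(*conditions))
with engine.connect() as conn:
    rows = conn.execute(stmt).all()
```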
Dear all,
I make a query to find the highest version of a document, and then I
want to set a flag in the column actual to 1 (TRUE) for all entries
satisfying the query. I don't want to loop through the hits but to set this
flag in one stroke with update(). However, I get this error:
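The error text is cut off above, but a common failure with query-based UPDATEs is the session-synchronization step. A sketch of the loop-free update with a made-up `Doc` model; `synchronize_session=False` (or `'fetch'`) is the usual fix when the criteria cannot be evaluated in Python:

```python
from sqlalchemy import create_engine, Column, Integer, Boolean, func
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Doc(Base):
    __tablename__ = "docs"
    id = Column(Integer, primary_key=True)
    version = Column(Integer)
    actual = Column(Boolean, default=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Doc(version=1), Doc(version=3), Doc(version=3), Doc(version=2)])
session.commit()

# Flag every row at the highest version in a single UPDATE statement.
max_ver = session.query(func.max(Doc.version)).scalar_subquery()
updated = (session.query(Doc)
           .filter(Doc.version == max_ver)
           .update({"actual": True}, synchronize_session=False))
session.commit()
```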
Hi,
I made a simple query to a postgres DB:
___
engine = create_engine(path)
metadata = MetaData()
Session = sessionmaker(bind=engine)
session = Session()
session.execute('SELECT value FROM table')
.
.
session.close()
_
Hello,
I have a query whose results (columns id, name) I need to order
alphabetically by name.
Then I need to take a slice of the results and keep them ordered:
results = somequery.order_by(datab.columns['name'])
, 2011, at 1:00 PM, Eduardo wrote:
Dear all,
I am trying to limit the group_by function only to the rows that satisfy
a certain query.
If I use group_by(table.c.something), this pertains to the whole table.
I tried to use elements of the tuple rendered as a result of a query
as arguments (as it has
This is what I want to do:
1) I want to find all names that satisfy this query:
column_names = session.query(tab.c.name).filter(tab.c.value==354)
2) Among those rows I want to find those with the highest id (all
rows with the same name are first grouped, and only the one with the
highest id is retained; the
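Steps 1 and 2 can be combined in one statement: filter first, then GROUP BY name with max(id). A Core sketch against an assumed `tab` table:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select, func)

engine = create_engine("sqlite://")
metadata = MetaData()
tab = Table("tab", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(50)),
            Column("value", Integer))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(tab.insert(), [
        {"name": "a", "value": 354}, {"name": "a", "value": 354},
        {"name": "b", "value": 354}, {"name": "a", "value": 1}])

# 1) restrict to value == 354, then 2) keep the highest id per name
stmt = (select(tab.c.name, func.max(tab.c.id).label("max_id"))
        .where(tab.c.value == 354)
        .group_by(tab.c.name))
with engine.connect() as conn:
    rows = conn.execute(stmt).all()
```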
Dear All,
I have a list of elements for which I need to establish whether they are in
a database. I can make a separate query for each element, but I am
afraid that is not the best approach. What is the best practice in
this case?
Thanks
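Instead of one query per element, a single query with in_() can fetch all matches at once; the set difference then gives the missing elements. A sketch with assumed table and column names:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select)

engine = create_engine("sqlite://")
metadata = MetaData()
tab = Table("tab", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(50)))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(tab.insert(), [{"name": "a"}, {"name": "b"}])

elements = ["a", "b", "c"]
with engine.connect() as conn:
    # One round trip for the whole list instead of len(elements) queries.
    present = {row.name for row in
               conn.execute(select(tab.c.name).where(tab.c.name.in_(elements)))}
missing = [e for e in elements if e not in present]
```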
Dear all,
I am trying to limit the group_by function only to the rows that satisfy
a certain query.
If I use group_by(table.c.something), this pertains to the whole table.
I tried to use elements of the tuple rendered as a result of a query
as arguments (as has been suggested on this forum), but it did not
Dear all,
How can I determine the number of objects (the memory capacity) that a
session can take? How can I determine the size of an object?
Thanks
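As far as I know the session has no fixed object capacity; it is bounded by process memory. For rough per-object numbers, sys.getsizeof gives the shallow size only (the object itself, not what it references), so nested data has to be summed by hand:

```python
import sys

class Thing:
    def __init__(self, payload):
        self.payload = payload

t = Thing("x" * 1000)
shallow = sys.getsizeof(t)                          # the object shell only
with_payload = shallow + sys.getsizeof(t.payload)   # one referenced level, by hand
```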
GROUP BY fde.ck1.LKUT.RAT-ES.vertic.6hpa.low.
6hdfjks.rihfjkdp1.fhsdjk00-1900.nhgtec,.
On Aug 11, 7:34 am, Andrew Taumoefolau zen...@gmail.com wrote:
Hi Eduardo,
group_by accepts strings, so this is certainly possible. You might do it like
so:
# build our column names query
# execute
Dear all,
I am trying to find a way to limit the group_by arguments of one query
only to the values of some other query.
Is this doable? If yes, how?
This is an example of how the query looks:
query1.group_by(session.query(tab.columns['name']).filter(datab.columns['value']==354).all())
Thanks
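group_by takes column expressions, not a list of result rows, so passing .all() into it cannot work; the usual way to restrict the grouping to another query's values is a WHERE ... IN (subquery) before grouping. A sketch with assumed names:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select, func)

engine = create_engine("sqlite://")
metadata = MetaData()
datab = Table("datab", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)),
              Column("value", Integer))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(datab.insert(), [
        {"name": "a", "value": 354}, {"name": "a", "value": 7},
        {"name": "b", "value": 9}])

# Names that ever had value == 354, kept as a subquery (no .all()):
names_354 = (select(datab.c.name)
             .where(datab.c.value == 354)
             .scalar_subquery())
# Group only the rows whose name is in that set:
stmt = (select(datab.c.name, func.count().label("n"))
        .where(datab.c.name.in_(names_354))
        .group_by(datab.c.name))
with engine.connect() as conn:
    rows = conn.execute(stmt).all()
```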
Dear all,
I have the following query that is made of 3 queries on only ONE
TABLE:
1. Step: simple normal query
somequery = session.query(tab.columns['name'], tab.columns['id']).filter(tab.columns['value']==6)
2. Step: finding the max serial number for each series
Dear all,
I have the following problem:
Session = sessionmaker(bind=engine)
session = Session()
for item in items:
    item1 = session.query(item)
    if len(item1) == 0:
        session.add(item1)
    session.commit()
The session is valid for a loop, and I write items into a database during
each
Dear all,
I have 2 queries:
1)
query1=session.query(func.max(datab.columns['counter']).label('t1')).group_by(datab.columns['site'])
2)
query2=session.query(tab.columns['name'],tab.columns['id'],tab.columns['counter']).filter(datab.columns['visits']==15)
Now I would like to get all pairs
I have the following statement:
SELECT name, id, division, value
FROM (
    SELECT name, id, division, value, max(value) OVER
        (PARTITION BY division) AS max_val
    FROM tab1
)
WHERE value = max_val
I try to turn this SQL statement into a Query object.
I tried
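One possible translation, sketched in Core against an assumed tab1: put the window column in a subquery, then filter on it from outside, which mirrors the nesting of the SQL above (the outer WHERE cannot reference the inner alias directly):

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select, func)

engine = create_engine("sqlite://")
metadata = MetaData()
tab1 = Table("tab1", metadata,
             Column("id", Integer, primary_key=True),
             Column("name", String(50)),
             Column("division", String(10)),
             Column("value", Integer))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(tab1.insert(), [
        {"name": "n1", "division": "A", "value": 1},
        {"name": "n2", "division": "A", "value": 5},
        {"name": "n3", "division": "B", "value": 3}])

# max(value) OVER (PARTITION BY division), labelled inside a subquery:
max_val = (func.max(tab1.c.value)
           .over(partition_by=tab1.c.division)
           .label("max_val"))
inner = select(tab1.c.name, tab1.c.id, tab1.c.division,
               tab1.c.value, max_val).subquery()
stmt = (select(inner.c.name, inner.c.id, inner.c.division, inner.c.value)
        .where(inner.c.value == inner.c.max_val))
with engine.connect() as conn:
    rows = conn.execute(stmt).all()
```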
@googlegroups.com]
On Behalf Of Eduardo
Sent: 18 July 2011 15:54
To: sqlalchemy
Subject: [sqlalchemy] Re: information about filed create_engine
Yes, I use the wsgi server of the python library bottle and I don't have
any problem, but when I want to use the same script via the apache web
server I get
.
Is there any other error, or some universal sqlalchemy error, that can
indicate to me where the problem is?
Thanks
On Jul 14, 3:43 pm, King Simon-NFHD78
simon.k...@motorolasolutions.com wrote:
Eduardo wrote:
On Jul 14, 10:49 am, King Simon-NFHD78
simon.k...@motorolasolutions.com wrote:
Eduardo
: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com]
On Behalf Of Eduardo
Sent: 18 July 2011 14:12
To: sqlalchemy
Subject: [sqlalchemy] Re: information about filed create_engine
I don't get any log. The access strings from the local and wsgi
applications are identical, so
Then what is the purpose of the intersection method? It looks to me
like a (bad) alternative to chained filtering!! Can you think of any case
where intersection is a better choice than filters?
On Jun 23, 4:08 am, Mike Conley mconl...@gmail.com wrote:
On Tue, Jun 21, 2011 at 6:05 AM, Eduardo ruche
When I use the same script with a standalone application it works, but
when I try to run it as a wsgi application it fails (the wsgi logs do
not contain any information regarding the failure!)
On Jul 14, 10:16 am, King Simon-NFHD78
simon.k...@motorolasolutions.com wrote:
Eduardo wrote
On Jul
My application only queries the database; there are no inputs and
therefore no transactions involved.
On Jul 14, 10:49 am, King Simon-NFHD78
simon.k...@motorolasolutions.com wrote:
Eduardo wrote
When I use the same script with a standalone application it works but
when I try to run
What is the best practice: to chain filters or to collect queries in a
list and then apply intersect_all()?
Thanks
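For criteria on a single table the two usually return the same rows, and chained filters keep the SQL as one plain SELECT instead of an INTERSECT of several; intersect earns its keep when the member queries are genuinely different shapes. A sketch comparing both on made-up data:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select)

engine = create_engine("sqlite://")
metadata = MetaData()
tab = Table("tab", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(50)),
            Column("value", Integer))
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(tab.insert(), [
        {"name": "john", "value": 15},
        {"name": "john", "value": 5},
        {"name": "steve", "value": 20}])

q1 = select(tab).where(tab.c.value > 10)
q2 = select(tab).where(tab.c.name == "john")
intersected = q1.intersect(q2)               # SELECT ... INTERSECT SELECT ...
chained = select(tab).where(tab.c.value > 10,
                            tab.c.name == "john")  # one WHERE with AND
with engine.connect() as conn:
    a = sorted(tuple(r) for r in conn.execute(intersected))
    b = sorted(tuple(r) for r in conn.execute(chained))
```

Note one subtle difference: INTERSECT has set semantics and removes duplicate rows, which a chained WHERE does not.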
Dear Sirs,
My problem is as follows: I have multiple queries that I save in a
list. When I have all my queries in the list, I apply the
intersect_all method:
q = qlist[0].intersect_all(*qlist[1:])
I get my result simply by:
res = q.all()
However, I would like to group the results according to some
Hi,
I want to create the following table:
Table('Error', metadata,
    Column('Type', String),
    Column('reference', String),
    Column('context', String),
    Column('Timestamp', DateTime, primary_key=True))
with the oracle DB. I receive this:
sqlalchemy.exc.Error:
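The error text is cut off, but one Oracle-specific pitfall in exactly this table is that VARCHAR columns need an explicit length: a bare String cannot be rendered in Oracle DDL. A sketch of the same table with lengths added (the lengths themselves are guesses):

```python
from sqlalchemy import MetaData, Table, Column, String, DateTime
from sqlalchemy.schema import CreateTable

metadata = MetaData()
error_table = Table("Error", metadata,
                    Column("Type", String(100)),      # length required by Oracle
                    Column("reference", String(255)),
                    Column("context", String(255)),
                    Column("Timestamp", DateTime, primary_key=True))

ddl = str(CreateTable(error_table))   # generic DDL, just to inspect
```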
Dear all,
My application creates parallel processes that access and query a
database. My problem is that I cannot use an engine or connection object
in any of these processes because they cannot be pickled (a requirement
for the input arguments of the functions that trigger the processes). Is
there any way
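Engines and connections indeed can't be pickled, but the database URL (a plain string) can; the usual pattern is to pass the URL and create the engine inside each worker process. A sketch of the worker side (the multiprocessing wiring is only indicated in a comment):

```python
from sqlalchemy import create_engine, text

def worker(url):
    # Build the engine inside the process: nothing unpicklable crosses
    # the process boundary, only the URL string does.
    engine = create_engine(url)
    with engine.connect() as conn:
        result = conn.execute(text("SELECT 1")).scalar()
    engine.dispose()
    return result

# e.g. Process(target=worker, args=("postgresql://user:pw@host/db",))
# per directory; demonstrated here with SQLite in the current process:
value = worker("sqlite://")
```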
Hi,
I saw that there is a way to serialize a query object; is there a way
to serialize an engine object?
Thanks
Dear all,
I am writing an application to scan a directory system and store
metadata in a DB.
For each directory I create a separate process in which the scanning and
the metadata feed are performed.
Now I have the following problems:
1) I am forced to start a session in each process and bind it to
the engine
exception. How could I fix it?
Thanks in advance,
Eduardo.
the proxy_set
var to the composite_field somehow?
Regards,
Eduardo Robles Elvira.
On Fri, Aug 20, 2010 at 10:34 AM, Daniel Kluev dan.kl...@gmail.com wrote:
On Fri, Aug 20, 2010 at 7:39 AM, Eduardo Robles Elvira edu...@gmail.com
wrote:
Now, these functions reimplemented in the inherited class models might
be called by the jobs view. It would be very convenient if directly
/9dfa457a7903f684/dd102bcb86dd905b
Any ideas, suggestions?
Regards,
Eduardo RE
Hi,
There are some sphinx system messages on:
http://www.sqlalchemy.org/docs/sphinxtest/intro.html
Reference Documentation
*
System Message: WARNING/2 (/home/classic/dev/sphinx/doc/build/intro.rst)
undefined label: datamapping – if you don't give a link caption