I suggest that you use an indexer like tsearch2 or Lucene / Xapian / ...
Indexes cannot be used with the LIKE operator when the pattern starts with a wildcard (e.g. LIKE '%foo%').
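As an illustrative aside (not from the thread), SQLite's EXPLAIN QUERY PLAN shows the effect directly: a leading-wildcard LIKE forces a full table scan, while an exact match can use the B-tree index. Table and column names here are made up:

```python
# Sketch: why a leading-wildcard LIKE cannot use a B-tree index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("CREATE INDEX ix_post_title ON post (title)")

def plan(sql):
    # Join the "detail" column of each EXPLAIN QUERY PLAN row.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Leading wildcard: the whole table must be scanned.
print(plan("SELECT * FROM post WHERE title LIKE '%midgets%'"))

# Exact match: the index can be used.
print(plan("SELECT * FROM post WHERE title = 'midgets'"))
```

A dedicated full-text indexer sidesteps this by indexing the words themselves rather than the leading characters of the whole string.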
José de Paula Eufrásio Júnior wrote:
What's the best way of doing that? To minimize DB usage and stuff...
--
Julien Cigar
Belgian Biodiversity Platform
On 11/29/06, José de Paula Eufrásio Júnior [EMAIL PROTECTED] wrote:
What's the best way of doing that? To minimize DB usage and stuff...
That all depends on the back end that you'll be using and how portable
you want your code to be. IMHO, for searching a database with large
amounts of
if that's true, that's a bug. can you make me a small test case?
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe
On 11/29/06, Lee McFadden [EMAIL PROTECTED] wrote:
That all depends on the back end that you'll be using and how portable
you want your code to be. IMHO, for searching a database with large
amounts of text, MySQL's full text indexing and searching features are
unparalleled[1]. With a few
i doubt this was any faster in previous releases since the basic
methodology of cascade hasn't changed; when you attach object B to object
A, it cascades the save-update operation across the entire graph
represented by B. While there was one little fix a while back so that
it wouldn't do cascade if
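The cascade behavior described above can be sketched as a plain graph traversal (this is a toy illustration, not SQLAlchemy's actual code): saving one object walks its whole related-object graph, and a visited set ensures each object is processed only once even when the graph has cycles.

```python
# Toy sketch of save-update cascade over an object graph.
class Node:
    def __init__(self, name):
        self.name = name
        self.related = []

def cascade_save(obj, session, seen=None):
    seen = seen if seen is not None else set()
    if id(obj) in seen:          # don't traverse the same object twice
        return
    seen.add(id(obj))
    session.add(obj.name)        # stand-in for the real save-update step
    for other in obj.related:
        cascade_save(other, session, seen)

a, b, c = Node("A"), Node("B"), Node("C")
a.related = [b]
b.related = [c, a]               # cycle back to A

session = set()
cascade_save(a, session)
print(sorted(session))           # ['A', 'B', 'C']
```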
On 11/29/06, José de Paula Eufrásio Júnior [EMAIL PROTECTED] wrote:
Responding to myself: mixing InnoDB and MyISAM seems impossible. Looks
like if a key refers to another table, both tables have to use the
same engine... As I use a lot of many-to-many, I ended up with all my
tables MyISAM :P
On 11/29/06, Lee McFadden [EMAIL PROTECTED] wrote:
And how do I create arbitrary queries like that:
select post_title, post_body from post where match (post_title,
post_body) against ('nasty midgets');
on SA?
match_query = post_table.select(text("MATCH (post_title, post_body) AGAINST (:q)"))
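For comparison, here is a hedged sketch (not from the thread) of the same query issued through any DB-API driver as literal SQL with a bound parameter; the table and column names follow the example above, and the placeholder style assumes a MySQL driver such as MySQLdb:

```python
# Literal MATCH ... AGAINST query for a MySQL DB-API cursor (sketch).
query = (
    "SELECT post_title, post_body FROM post "
    "WHERE MATCH (post_title, post_body) AGAINST (%s)"
)
params = ("nasty midgets",)
# With a live MySQL connection this would be: cursor.execute(query, params)
print(query)
```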
I want to use order by and limit in a sub select, but it doesn't seem
to work:
import pkg_resources
pkg_resources.require('sqlalchemy')
pkg_resources.require('pysqlite')
from sqlalchemy import *
metadata = BoundMetaData('sqlite:///tmp/test.db')
metadata.engine.echo = True
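The raw SQL the poster is after does work at the database level; as an illustration (using stdlib sqlite3 rather than SQLAlchemy, with made-up table and column names), ORDER BY and LIMIT inside a sub-select restrict which rows the outer query sees:

```python
# ORDER BY and LIMIT in a sub-select, shown in raw SQL via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO post (score) VALUES (?)", [(5,), (1,), (9,), (3,)])

# Pick the top two scores in the sub-select, then aggregate over
# just those rows in the outer query.
total = conn.execute(
    "SELECT sum(score) FROM "
    "(SELECT score FROM post ORDER BY score DESC LIMIT 2)"
).fetchone()[0]
print(total)  # 14  (9 + 5)
```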
My proposal for a talk on SqlSoup was accepted. It looks like someone
else's talk on SA itself was accepted too. Woot! :)
--
Jonathan Ellis
http://spyced.blogspot.com
On 11/29/06, Jonathan Ellis [EMAIL PROTECTED] wrote:
My proposal for a talk on SqlSoup was accepted. It looks like someone
else's talk on SA itself was accepted too. Woot! :)
I'm not seeing a list of accepted talks on us.pycon.org; any links?
Right now you can only see the status of proposals you submitted
yourself, but the final schedule is probably only a couple days away
from being announced.
On 11/29/06, Karl Guertin [EMAIL PROTECTED] wrote:
On 11/29/06, Jonathan Ellis [EMAIL PROTECTED] wrote:
My proposal for a talk on
in changeset 2120 i made a change to the general contract of the in_()
function:
- sending a selectable to an IN no longer creates a union out of
multiple
selects; only one selectable to an IN is allowed now (make a union
yourself
if a union is needed; explicit is better than implicit, don't guess,
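The SQL behind this contract change can be sketched with stdlib sqlite3 (table names are made up): exactly one sub-select goes inside the IN, and combining several selects is now the caller's job via an explicit UNION.

```python
# One selectable per IN; multiple selects are unioned explicitly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE admins (user_id INTEGER);
    CREATE TABLE editors (user_id INTEGER);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob'), (3, 'cho');
    INSERT INTO admins VALUES (1);
    INSERT INTO editors VALUES (3);
""")

# A single selectable inside IN:
one = conn.execute(
    "SELECT name FROM users WHERE id IN (SELECT user_id FROM admins)"
).fetchall()

# Several selects? Build the union yourself:
both = conn.execute(
    "SELECT name FROM users WHERE id IN "
    "(SELECT user_id FROM admins UNION SELECT user_id FROM editors)"
).fetchall()
print(one, both)  # [('ann',)] [('ann',), ('cho',)]
```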
Michael Bayer wrote:
i doubt this was any faster in previous releases since the basic
methodology of cascade hasn't changed
Probably wasn't, I've just been testing with larger data sets lately.
so i've added your test with an extra assertion that the session in fact
contains 611 instances to
well things like this, i.e. cascade not going over the same field of
objects over and over again, are big and obvious. smaller things, it's
mostly the attributes package that adds the overhead in...i put that
package through a huge overhaul some versions ago to simplify it, and i
ran it