I'll try the selectable. That's a good idea.
This won't work as a view -- it'll run too slow. It could be a view of a
function, but then it's not really prototyping. I'm trying to figure out
an interim solution on the SqlAlchemy side. Right now I'm using some
custom objects that
I have a rather complex query that I've written in SQL
• The result-set is 6 columns (1 = timestamp, 2-6 = fkeys on tables/objects)
• The query involves 15 tables
• The query is about 100 lines of raw SQL
• The query is not fun, and apt to change
• There are 2 placeholders/bind variables to be
FWIW, another option is to pull in all the addresses and use a class method
to filter.
import re

class User(object):
    def all_emails_like(self, expression):
        return [e for e in self.all_emails if re.search(expression, e)]
I've found that depending on your app/db and the size of
SqlAlchemy needs 2 things for a database to work:
1. A python database driver
2. A SqlAlchemy dialect (that tells SqlAlchemy how to write sql for the
driver)
So...
1. H2 doesn't seem to have any Python drivers or other support. I just
came across an old forum post that talked about possibly
Just be warned about this method -- make sure you're using the most recent
SqlAlchemy version in the .9x or 1.x branches. Earlier versions would not
apply LIMIT or OFFSET into the bind.
--
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To unsubscribe
I've run into similar issues before -- but on different databases.
I think it's bad to frame the problem as transmitting 31k/s -- which
assumes a particular issue; it's much better to think of it as processing
31k/s, which leaves a lot more room for interpretation.
Looking on
As a heavy user, an occasional contributor, and the person who recently
mined PyPi for all the historical SqlAlchemy data to generate the new
release history matrix...
I don't think you have anything to really worry about for long term use.
The majority of updates over the past 7 years that
PyPi doesn't always let you see old releases easily anymore.
In order to get to older versions off PyPi, you need to use one of 2 tricks:
1. If you know the version number:
visit https://pypi.python.org/pypi/SQLAlchemy/{VERSION_NUMBER}
e.g.
I've had issues like this in the past, and in the worst situations your
services can end up giving your system a ddos.
what i usually do to avoid this, is implement a delay on the connection
pool itself, and have each service cycle through a set of delays with a
different starting point in
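The staggered-delay idea above can be sketched in plain Python. The delay
values and the `itertools.cycle` helper are my own illustration, not from
the original post:

```python
import itertools

# A shared ladder of reconnect delays (seconds).  Each service starts
# at a different point in the cycle so their retries don't synchronize
# into a thundering herd against the database.
DELAYS = [1, 2, 5, 10, 30]

def delay_cycle(start_offset):
    """Yield reconnect delays forever, beginning at `start_offset`."""
    rotated = DELAYS[start_offset:] + DELAYS[:start_offset]
    return itertools.cycle(rotated)

# e.g. service #2 waits 5s first, then 10, 30, 1, 2, 5, ...
delays = delay_cycle(2)
```

Each service would pull `next(delays)` before its reconnect attempt.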
I have an include file that generates a handful of timestamp clauses:
import sqlalchemy

def sql_now():
    return sqlalchemy.sql.text("CURRENT_TIMESTAMP AT TIME ZONE 'UTC'")

def sql_now_minus_10_minutes():
    return sqlalchemy.sql.text(
        "CURRENT_TIMESTAMP AT TIME ZONE 'UTC' - INTERVAL '10 MINUTES'")
One of them needs
oh, a HUGE thanks!
--
I wish I saw this earlier. This thread basically describes a situation
I've had, and explains a few of the weird hacks I've had to use.
If you need a workaround, I've been getting around this by using a union on
2 subqueries, querying the union columns, and using plaintext sql to order
(the
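A sketch of the union-of-two-subqueries workaround described above, against
a hypothetical model (the `Item` class and `item_kind` label are my own
illustration; assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    kind = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add_all([Item(kind='a'), Item(kind='b')])
s.commit()

# union of 2 subqueries, then order with plaintext sql.  note the
# label `item_kind` comes from the anonymous subquery the union is
# wrapped in, which is why plaintext ordering is needed at all.
q1 = s.query(Item.id, Item.kind).filter(Item.kind == 'a')
q2 = s.query(Item.id, Item.kind).filter(Item.kind == 'b')
rows = q1.union(q2).order_by(text('item_kind DESC')).all()
```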
On Wednesday, April 1, 2015 at 2:56:31 AM UTC-4, jo wrote:
session.query(Rischio.c.codice).select_from(Rischio).filter(Rischio.c.peso_gruppo == '1')  # true
session.query(Rischio.c.codice).select_from(Rischio).filter(Rischio.c.peso_gruppo == '0')  # false
I don't think that
Thanks!
The workaround is best for my purposes. I need to keep the relationships
for the app, but they were benchmarking at triple the migration script's
timing. Using the engine directly to get around this, everything
migrated overnight.
_delete = dbSession.execute(
If you're just worried about warnings in Flake8, you can turn them off
line-by-line:
filter(
    Rischio.c.peso_gruppo == True  # noqa
)\
--
I'm running a migration script, and the deletes via the ORM were creating a
performance issue. (Trying to load 4 relationships 9 million times adds up.)
I couldn't find any docs for this (just some references on update cascades)
so i sidestepped the issue by just running the Engine's delete on
On Monday, March 30, 2015 at 9:06:48 PM UTC-4, Bao Niu wrote:
To make sure that I understand it right, did you mean hybrid attribute
methods when you mentioned property methods here? Thanks
No, just simple property methods. A super-simple first Minimum Viable
Product/iteration might be:
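The original example was cut off here; a guess at what a "super-simple
property method" MVP could look like (class and attribute names are my own):

```python
class User(object):
    """Plain-python property -- no hybrid_property machinery needed."""

    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    @property
    def display_name(self):
        # computed on access, works on any loaded ORM instance too
        return '%s %s' % (self.first_name, self.last_name)
```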
If you're not ready for 1.x you should also make sure that you don't
upgrade with easy_install or pip = 1.4; both of which will pull the 1.x
branch.
--
Just a quick warning to others:
I noticed this warning when running a maintenance script that used the
Windowed Range Query example:
/sqlalchemy/sql/elements.py:3736: SAWarning: Textual SQL expression
'rownum % 100=1' should be explicitly declared as text('rownum % 100=1')
(this warning
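The fix the warning asks for is a one-liner -- wrap the raw string in
`text()` so SqlAlchemy knows the textual fragment is intentional:

```python
from sqlalchemy import text

# explicitly declaring the fragment silences the SAWarning; the
# criterion can then be passed to .filter() as before
criterion = text('rownum % 100=1')
```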
The easiest way is to use a mixin:
http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/mixins.html
--
SqlAlchemy doesn't natively support traditional database views. They're
often handled by reflecting an existing database view as a database table
(http://docs.sqlalchemy.org/en/latest/core/reflection.html#reflecting-views).
There are also some API functions that deal with getting view
Yeah, there's no reason to touch eval -- and a lot of reasons not to.
Security issues aside, when you make a mistake the error will be
completely unintelligible.
You can create joins dynamically very easily by just iteratively building
up on it, and using getattr() if needed.
If you're doing
any reason why you're not building a query like this?
query = db.session.query(label('sid', distinct(a[1].c.patient_sid)))
if n >= 2:
    query = query.\
        join(a[2], a[2].c.patient_sid == a[1].c.patient_sid)
if n >= 3:
    query = query.\
On Sunday, March 22, 2015 at 7:01:35 PM UTC-4, Bao Niu wrote:
Also because sql datetime datatype doesn't persist timezone information.
Therefore, I tried to store all the time information as strings.
If your database doesn't support timezones, I think it would be easiest to
convert
Wow. This is noticeably faster.
--
On Monday, March 16, 2015 at 10:22:27 AM UTC-4, Simon King wrote:
It sounds like flask-user assumes that you are using
Flask-SQLAlchemy's special declarative_base class. I think that as
long as you use Flask-SQLAlchemy's session (which is a scoped_session)
and declarative_base
That's a mixin class that can be added onto any object inheriting from
declarative_base. You only need one of the `columns_as_dict` options. I
showed 3 variations: one that gives all columns, one that lets you specify
which columns to include, and another that lets you specify which columns
to exclude.
Use a mixin on the classes to create a dict of the column values. Then
create a new object with that dict.
You can customize the mixin to include or exclude columns as you wish.
I use this approach for revision tracking.
class Duplicable(object):
    @property
    def columns_as_dict(self):
        # the all-columns variant
        return dict((c.name, getattr(self, c.name)) for c in self.__table__.columns)
What version of SqlAlchemy are you using, and what error are you seeing?
I use the association_proxy for M2M, M2O, O2O all the time.
--
On Sunday, February 22, 2015 at 10:09:18 PM UTC-5, Ed Rahn wrote:
Let mw know if you need anything else!
What version of SqlAlchemy are you using? That may affect someone's answer.
--
The simplest way using your code would be:
for _candidate_data in candidates:
    _existing = session.query(CompassCandidate)\
        .filter(CompassCandidate.id == _candidate_data.id)\
        .first()
    if not _existing:
        # add the new item
        session.add(_candidate_data)
    else:
        # translate the attributes from the new item
This looks to be your problem:
id = Column(String(64), primary_key=True)
`id` is a primary key, which means it must be unique. But you're not
setting it to a unique value. Most people will use a Serial column for
this.
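A runnable sketch of the serial-style primary key being suggested (the
`Candidate` model is hypothetical; assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Candidate(Base):
    __tablename__ = 'candidate'
    # integer pk backed by a sequence / auto_increment / rowid,
    # so every row gets a unique value without you assigning one
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add_all([Candidate(name='a'), Candidate(name='b')])
s.commit()
```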
--
A serial would usually be an integer that is tied to a sequence (postgres,
oracle) or auto_increment (mysql).
I see in your example that you have a unicode string -- `ORaE9+aCdP0`
If that is coming from another db, then you've already migrated that
candidate.
you should do a search for
On Saturday, February 14, 2015 at 9:58:26 PM UTC-5, Ed Rahn wrote:
This seems like a fairly common use case, do people just not care about
it or how do they handle it?
This isn't common, and looks like an anti-pattern.
Consider these lines from your emails - they're not really compatible
Just to add to Michael's reply -- keep in mind that how you handle the
session (commit/rollback/close) will impact the state of any objects the
session loaded if you're using the ORM and that can have implications in
your app.
The default behavior of sessions is to have `expire_on_commit=True`
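The message is cut off here, but the flag it names is real; opting out
looks like this (sketch, assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')

# default is expire_on_commit=True: every loaded object's attributes
# are expired at commit() and re-SELECTed on next access -- which
# errors if the session is already closed.  Opting out keeps the
# loaded state as-is after commit.
Session = sessionmaker(bind=engine, expire_on_commit=False)
```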
If there is a graphical sql designer you prefer -- just use that to create
your sql/database.
Once you create the SQL, you can use `sqlacodegen` to convert it to
sqlalchemy models.
--
Simon,
why not use the association_proxy?
You just described this:
from sqlalchemy.ext.associationproxy import association_proxy
class Foo(Base):
    _bar = relationship(Bar, uselist=False, lazy='joined')
    qux = association_proxy('_bar', 'qux')
--
Yeah, I use this use-case a lot.
I never actually search with it -- just usually map the scalar for display
needs.
But it appears to recognize the join and automaps if needed:
s.query(Foo).filter(Foo.qux == 'abc').first()
generates
SELECT foo.id AS foo_id, foo.id_bar AS foo_id_bar,
That's what the association proxy does:
http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/associationproxy.html
On Tuesday, February 10, 2015 at 8:47:29 PM UTC-5, Dun Peal wrote:
I have a declarative base called Foo, instances of which get most of their
attributes from a Foo SQL
I just wanted to suggest another approach that may work, at least as a
fallback.
When I upload files to Amazon S3 I track the activity in a logging table
(via sqlalchemy) that looks a bit like this:
id | filename | timestamp_upload_start | upload_status (bool) | timestamp_deleted
Before
I use the same stack quite often.
For something like that, I usually just have the testing script create the
database:
os.system("psql -Utestuser testdatabase -f path/to/script.sql")
you can store the db credentials in your shell's env variables.
you could do everything through sqlalchemy,
This is a longshot, but does applying .label('another_name') on one of your
columns work?
--
On Saturday, January 31, 2015 at 7:41:15 PM UTC-5, Fabio Caritas
Barrionuevo da Luz wrote:
Hello, how do I log all SQL executed during database introspection?
I would like to understand exactly how SQLAlchemy introspects the
database.
The easiest way is to turn on ALL sql
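The reply is cut off, but presumably it means the engine's `echo` flag,
which logs every statement -- including the catalog queries reflection
issues. A sketch:

```python
from sqlalchemy import create_engine, inspect

# echo=True logs every statement the engine runs to stdout/logging,
# including the introspection queries against the db catalogs
engine = create_engine('sqlite://', echo=True)
insp = inspect(engine)
tables = insp.get_table_names()   # each catalog query gets echoed
```

`echo='debug'` additionally logs result rows.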
This should generate your second query:
q = s.query(Foo)\
    .join(Boo, Foo.id == Boo.id)\
    .join(Bar, Boo.id == Bar.id)\
    .first()
--
You could use sqlacodegen (https://pypi.python.org/pypi/sqlacodegen) to
generate your model:
• generate both models
• generate one model, then regex the other
if you make a lookup table/function to map one column name to another, you
could also loop over the columns. something sort of like
Unless I understand you wrong, this should be all you need:
def as_dictionary(self):
    song = {
        'id': self.id,
        'info': self.info.as_dictionary(),
    }
    return song
FWIW, I just use a base class / mixin that has this method:
class UtilityObject(object): def
On Friday, January 23, 2015 at 12:39:02 PM UTC-5, SF Markus Elfring wrote:
Is the reason good enough to avoid the repeated specification
of corresponding meta-data?
Is it safer to maintain and manage column attributes for some
tables only at a single place?
How are you currently
On Friday, January 23, 2015 at 1:10:37 PM UTC-5, SF Markus Elfring wrote:
How often do you need to fill these data structures in
a concurrent way?
Does parallel table creation become more interesting then?
Often and Not at All.
--
On Friday, January 23, 2015 at 11:30:33 AM UTC-5, SF Markus Elfring wrote:
Are there any chances that this database software
implementation will become robust and safe against
the discussed race condition?
I would not count on this happening in the near future as it doesn't seem
to be
On Friday, January 23, 2015 at 12:48:46 PM UTC-5, SF Markus Elfring wrote:
Should the Python class be sufficient for the definition
of a table structure?
If you're using the declarative syntax, yes. It's common to have a
`models.py` file that simply defines the classes in one place;
Using IF NOT EXISTS would not solve this problem in a high concurrency
scenario.
There would still be a race condition within the Postgres internal
functions. This is because of how Postgres checks for existing tables and
creates new ones with its internal bookkeeping. It's explained in
+1 to mike's suggestion of batches to weed out potential dupes. that will
save a lot.
you'd also have to do some pre-processing within your 1000, to make sure
one member of that set won't violate a constraint created by another.
if your DB supports savepoints, you can also add in a
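The pre-processing step mentioned above -- weeding out members of a batch
that would collide with each other or with already-inserted rows -- can be
sketched in plain Python (the helper name is my own):

```python
def dedupe_batch(batch, seen):
    """Drop members of this batch of keys that would violate a unique
    constraint, either against rows already inserted (`seen`) or
    against another member of the same batch."""
    out = []
    for key in batch:
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out
```

Run once per batch of 1000, carrying `seen` across batches.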
On Thursday, January 22, 2015 at 2:40:19 PM UTC-5, SF Markus Elfring wrote:
I am surprised that this database software show such (unexpected)
behaviour.
Aside from what Michael wrote...
The reason why you're seeing an IntegrityError like that, is because
Postgres is raising an integrity
On Tuesday, January 20, 2015 at 5:34:27 PM UTC-5, Brian Glogower wrote:
Thanks for the idea. Do you have an example?
I don't have a personal example handy, but from the docs...
http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#using-literal-sql
On Tuesday, January 20, 2015 at 7:12:18 PM UTC-5, Michael Bayer wrote:
why is text() needed here?these could be the Table objects set up with
“schema=‘schema name’” to start with, then you’d just do the join with
query.join().
It's not, I just didn't think of it. But I also saw
I haven't seen anyone bring this up before. If you get stuck and no better
answer shows up here... I'd try just having a single session with tables
from both DBs in it, and using raw SQL to populate the ORM objects -- using
the MySQL native cross-database query format. It's not elegant, but I
Take a look at sqlacodegen
https://pypi.python.org/pypi/sqlacodegen/
It can build the Python classes (sqlalchemy declarative) from your existing
database.
--
On Tuesday, January 13, 2015 at 8:12:49 PM UTC-5, Michael Bayer wrote:
If you really want to disable bound parameters there’s a flag
“literal_binds” that’s on Compiler, but it isn’t going to work for things
like dates unless you implement renderers for them.
IIRC, `literal_binds`
It's hard to tell what could cause the issue:
How many horses are there?
What is going on in that other function?
Are there foreign key checks involved with the commit?
If you want to break a huge batch into smaller commits, there is the
`Windowed Range Query` strategy
--
Would this work for you:
- download the source from github
- in your environment, `python setup.py develop`
the reason why i suggest using the github source, is that you can insert
whatever breakpoints/traces you need in a new branch, and quickly see what
is modified with a git command or
yeah, holding onto the all() result is pretty key :)
A hacky trick I use on web projects is to have a sqlalchemy_persistance
array on the `request` object. i just append everything I load that might
be needed again onto that.
--
I just want to point out something in Simon's first response that might
have been missed:
Simon batched the query, then used a `commit()` at the end of each batch.
That `commit` is pretty important, because it limits his code to only
keeping SqlAlchemy objects for the batch -- not every item
It's really confusing. I ran into this a few months ago and wanted to stab
things.
I can't answer your question, but I can push you in the right direction:
The trick is in looking at the actual objects that are returned/expected by
each item. Some operations will return objects that require
Based on your specs, I think it would be simplest to store the rows in the
DB then just run reports on them. You'd have a solution in minutes. You
should be doing a transaction per Excel file, not per row. You could
probably create a function or view that generates the record you want.
--
On Thursday, December 18, 2014 12:42:03 PM UTC-5, dewey wrote:
Per file?? Even if it's a really large file?? Is the only limit on
transaction size available client memory, or are there constraints on the
server side as well?
Usually, yes. If you fail partway through the file, you'll
Take a look at the unit tests for the ORM itself, that might set you
straight:
https://bitbucket.org/zzzeek/sqlalchemy/src/7cd4362924dd0133a604d4a0c52f1566acbd31ff/test/orm/test_transaction.py?at=master
--
Just a quick warning on this pattern (which I use as well)...
When dealing with Edits/Updates and Database Migrations, you might run into
issues where any mix of SqlAlchemy commands simply will not make the mapper
happy.
A workaround is to ignore the session and do these operations using the
Have you looked at using the `inspect` tool?
It could solve all your issues within an event
http://docs.sqlalchemy.org/en/rel_0_9/core/inspection.html
1. You can check to see if an object is `.transient` or not. if it is,
then it's not bound to a session and you don't have to validate.
2.
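The first check described above can be sketched like this (the `Widget`
model is hypothetical; assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)

w = Widget()
state = inspect(w)
# transient: never attached to a session, so validation can be skipped
is_transient = state.transient
```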
delete everything I said and listen to Mike!
--
Ok this works great! Thanks!
The approach is not optimized for speed at all, but it's acceptable for now.
A given query might take 112ms with this, versus 14ms with my optimized
select -- however this lets me use the ORM's eagerloading, and I do
extensive caching, so the speed issues are offset.
This is a simplified example of my issue. I wrote a detailed example, but
it might be too confusing.
There are three classes:

Entity (only has one of user_id or username)
    id
    user_id
    username
    profile = relationship( EntityProfile based on user_id )
On Wednesday, December 3, 2014 4:23:31 PM UTC-5, Ams Fwd wrote:
I would recommend just storing them on disk and let the OS VMM deal with
caching for speed. If you are not constrained for space I would
recommend not zlib-ing it either.
I'll second storing them to disk. Large object
If you need to handle large objects, you should look into DBAPI drivers
that can stream results. The only drivers i know that can handle a stream
are psycopg2 (postgres) and oursql (mysql).
There have been a handful of recipes/threads of people using streams for
blobs in the archives.
I
You need to share your code, in a form that can repeat the error, for
anyone to help.
--
SqlAlchemy returns column results as a `KeyedTuple`. It's similar to
`collections.namedtuple` in the standard library
Looking at your code, my guess is that you had something like this:
ids = session.query(Model.id).filter(...).all()
and are then passing those ids into a query like this:
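The message is cut off, but my reading of the gotcha is that
`query(Model.id).all()` hands back 1-element row tuples, not scalars, so
they need unpacking before reuse:

```python
# what session.query(Model.id).filter(...).all() returns:
rows = [(1,), (2,), (3,)]          # KeyedTuples, one column each

# unpack to plain scalars before reusing the values
ids = [row[0] for row in rows]

# `ids` is now safe to feed to .filter(Model.id.in_(ids));
# passing `rows` directly would bind tuples, not integers
```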
These might be backend specific and not supported in other databases... but
there are some popular postgres tricks to deal with this:
* in the constraint, coalesce NULL into an empty string (or other value).
* use multiple indexes
both are actually discussed in the comments to this question:
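A sketch of the first trick -- coalescing NULL into `''` inside a unique
functional index -- expressed through sqlalchemy DDL (the `Account` model
is my own illustration; assumes SQLAlchemy 1.4+ and a backend with
expression-index support):

```python
from sqlalchemy import Column, Integer, String, Index, func, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = 'account'
    id = Column(Integer, primary_key=True)
    email = Column(String)   # nullable on purpose
    # NULL != NULL, so a plain unique constraint allows unlimited NULL
    # rows; coalescing into '' makes those rows collide instead
    __table_args__ = (
        Index('uq_account_email', func.coalesce(email, ''), unique=True),
    )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
```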
Read up on the `uselist=False` option for relationships
-- http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html
It would look something like this:
class Parent(Base):
children = relationship(Child, primaryjoin=Parent.id==Child.parent_id)
eldest_child = relationship(Child,
Notice the error:
'Invalid parameter type. param-index=2 param-type=list
And then the values you're putting in on index 2:
(u'url', None, [2L]
You're submitting a list, instead of single value.
Your problem is this line:
ins_id = result.inserted_primary_key
according to the docs
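The docs note that `inserted_primary_key` is a row of values, one per
primary-key column, not a scalar -- hence the error. A runnable sketch
(table name hypothetical):

```python
from sqlalchemy import Table, Column, Integer, String, MetaData, create_engine

metadata = MetaData()
notes = Table('notes', metadata,
    Column('id', Integer, primary_key=True),
    Column('body', String))

engine = create_engine('sqlite://')
metadata.create_all(engine)

with engine.begin() as conn:
    result = conn.execute(notes.insert().values(body='hi'))
    # inserted_primary_key holds ALL pk values; index for the scalar
    ins_id = result.inserted_primary_key[0]
```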
This is a well known mysql-ism
I remember reading that a short-term fix to this... is to change
VARCHAR(256) to VARCHAR(255)
It's fixed in some versions of the db
(https://answers.launchpad.net/maria/+question/241612)
It looks like 5.6.3 is the earliest:
In addition to what Mike said...
My guess is that you probably have an issue with rq. I had run into issues
with celery (similar) where I spawned too many background processes. Any
given web request required 2 database connections -- one for the web
request, and a second one in the celery
After spending the morning, I realized this is actually a larger problem
with SqlAlchemy mapping this existing database.
Depending on how I structure the deletes, I get either an AssertionError or
a sqlalchemy.exc.CircularDependencyError.
I should have mentioned the model:
class
For the next person:
I was able to get around this by doing the following:
* do not eagerload any of the relations, probably best to keep them out of
the identity map altogether
* use the sql expressions api to handle any migrations/work
dbSession.execute(
model.core.Foo.__table__\
The columns are stored in a `c` attribute of the table.
# available on both Class and Instance
[i for i in p.__table__.c]
[i for i in Person.__table__.c]
# also available using sqlalchemy.orm.class_mapper
# p = Person() instance
dict((col.name, getattr(p, col.name)) for col in class_mapper(Person).columns)
forgot to mention:
1) i think there is a `.columns` property too, but `.c` is a shortcut.
2) there are a few more detailed explanations in the archives here and on
stackoverflow
3) this is only the columns. relationships, fkeys, etc are stored
differently
--
I may try and work this out with my test case later -- I tried just about
every permutation, but should have logged them.
Thankfully the other half of SqlAlchemy could handle this. The ORM
relationships are only used on a Web Display (and some db migration work).
Everything is added as
This error doesn't make sense to me:
foo = dbSession.query(Foo).options(joinedload('bar')).first()
dbSession.delete(foo.bar)
dbSession.flush()
dbSession.delete(foo)
dbSession.flush()
I'm getting the AssertionError on the `.delete(foo)`, but I've already
deleted/flushed foo.bar
--
I've been going batty on this all morning.
I have a permissions check routine that repeatedly queries for a certain
Foo2Bar table
class Foo2Bar(Base):
    __tablename__ = 'foo_2_bar'
    id_foo = Column(Integer, ForeignKey('foo.id'), primary_key=True)
    id_bar = Column(Integer,
Thanks!
I didn't realize that objects are cleaned up with scope like normal python
objects. I thought they were in the session for the lifetime of the
session. This was driving me crazy.
This is just a web request, so I'm now appending the result into
`request.persistanceArray`. instantly
I agree with what Simon said, and want to point out that he is assuming
that Foo.bar is a relationship of Foo to Bar based on Foo.bar_id ==
Bar.id.
But you should understand that these are two completely different queries
AND you haven't described your model at all.
No worries. I wanted to avoid those two patterns. I had played around
with them, and wasn't too excited by the SQL and the work I'd probably need
to optimize the query plan. Was hoping someone may have figured out a
simpler way to achieve the same effect.
It made more sense to just do a
I didn't realize this was an Insert.
This doesn't work in 0.9 or in master...
Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    aaa = Column(Integer)
    bbb = Column(Integer)

q =
I've been staring at this a few hours, and can't figure out how to adapt a
join/query into a relationship. The archives and docs don't seem to help
much.
It seems reminiscent to an earlier question by Simon King (
https://groups.google.com/d/topic/sqlalchemy/OtI4Z8v7gRs/discussion ) and
some
what DOESNT sqlalchemy do?
--
Usually for this sort of stuff, I serialize the object's data into a JSON
dict ( object columns to JSON dict, object relations to a dict, list of
dicts, or reference to another object). ( Custom dump/load is needed to
handle Timestamp, Floats, etc). You might be able to iterate over the data
On Friday, October 24, 2014 10:39:43 AM UTC-4, Peter Waller wrote:
I was also hoping to just use yaml to avoid writing custom dumping code,
since it seems in general like a useful capability. So I may yet try and
find the underlying bug and fix it.
It might not be a bug, and the effect of
I believe you have to specify a dialect in order to get the binds presented
in the right format.
This is the utility function i use for debugging. you could easily adapt
it to return a sqlite statement.
https://gist.github.com/jvanasco/69daa58aeb0e921cdbbe
that being said -- I don't think
does anyone know if it's possible to implement some form of eagerloading or
class attribute where only the presence (or first record, or count) of
relations are emitted?
I have a few queries where i'm loading 100+ rows, but I only need to know
whether or not any entries for the relationship
To get the first form, you'd create separate queries and turn them into a
subquery.
(I'm not sure if this would work on a relationship, but this would be the
form for columns)
sq_t_queued = db.session.query(func.count(Task.tasks_queued)).subquery('t_queued')
sq_t_running =
I've been staring at this for a while, and can't figure out a way to make
the mapper happy:
i have 3 Classes (tables):
* List (list)
* ListItem (list_item)
* ItemType1 (item_type_1)
* ItemType2 (item_type_2)
* ItemType3 (item_type_3)
until now i've been using a joinedload
subqueryload('list_item'),
joinedload('list_item.item_type_1'),
joinedload('list_item.item_type_2'),
joinedload('list_item.item_type_3'),
ah! so sqlalchemy is smart enough to magically map the joinedloads onto
the subqueryload!
I never would have guessed that!
--