Roger Binns has incorporated support for CouchDB into apsw by using
SQLite virtual tables. Although apsw cannot be used directly by
SQLAlchemy (as it is not DBAPI-compliant), you can pass an apsw
connection to pysqlite's connect() and then use that connection as a
SQLAlchemy engine. I did some
I would like to use the backup API from SA (as part of our testing
infrastructure for setting up :memory: databases from a known,
persisted state). Also, UDFs (both scalar and aggregate) and virtual
tables are very nicely supported by APSW.
pjjH
http://www.sqlalchemy.org/trac/ticket/1866
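For reference, the stdlib sqlite3 module (Python 3.7+) exposes the same SQLite backup facility via Connection.backup(), so the idea of seeding a :memory: database from a known state can be sketched without apsw (the table and data here are invented for illustration):

```python
import sqlite3

# Stands in for a persisted seed database with known contents.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE seed (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO seed (name) VALUES ('known'), ('state')")
src.commit()

# Clone the seed database into a fresh in-memory database for a test run.
test_db = sqlite3.connect(":memory:")
src.backup(test_db)

rows = test_db.execute("SELECT name FROM seed ORDER BY id").fetchall()
print(rows)
```

Each test gets an independent copy without touching the seed database.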
Say you have a denormalized table with columns phone1, phone2, phone3
and you would like to map the class so that the .phones property is an
iterable.
e.g. if I have data like
user_id, phone1, phone2, phone3
1, 1234, 5678, 9012
2, 3456, 7890, 1234
I would like to say something like
for p in
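One plain approach (a sketch: the class and column names follow the post, but the read-only property is my own invention, not an answer from the thread):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    user_id = Column(Integer, primary_key=True)
    phone1 = Column(String)
    phone2 = Column(String)
    phone3 = Column(String)

    @property
    def phones(self):
        # Expose whichever of the denormalized phone slots are populated.
        return [p for p in (self.phone1, self.phone2, self.phone3)
                if p is not None]

u = User(user_id=1, phone1="1234", phone2="5678", phone3="9012")
for p in u.phones:
    print(p)
```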
Is there any support yet in SQLAlchemy for Common Table Expressions
e.g.
WITH foo(blurp, bletch) AS (SELECT bar*2, bletch from banana where
bletch > 3)
SELECT f.* FROM foo as f
where blurp > 1
I have not been following SA development for some months (due to a job
move) so I am not yet up to speed
I see that a ticket has just been opened up on this topic.
http://www.sqlalchemy.org/trac/ticket/1859
I will follow up there.
thanks,
pjjH
On Jul 29, 8:33 am, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
Is there any support yet in SQLAlchemy for Common Table Expressions
e.g
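For what it's worth, later SQLAlchemy releases grew first-class CTE support via a .cte() method on select constructs; a sketch of the query above in 1.4-style select() syntax (column comparisons are my guess at the elided operators):

```python
from sqlalchemy import Column, Integer, MetaData, Table, select

md = MetaData()
banana = Table("banana", md, Column("bar", Integer), Column("bletch", Integer))

# WITH foo(blurp, bletch) AS (SELECT bar*2, bletch FROM banana WHERE ...)
foo = (
    select((banana.c.bar * 2).label("blurp"), banana.c.bletch)
    .where(banana.c.bletch > 3)
    .cte("foo")
)

# SELECT ... FROM foo WHERE blurp ...
stmt = select(foo).where(foo.c.blurp > 1)
print(stmt)
```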
I had some (working) code with a typeadaptor that used the
process_result_value style. I changed this to use the result_processor
style and returned a callable and some tests started to fail. It
seems that with the process_result_value form, value is a datetime
object. When using
at the same time rather than doing them
individually or having a single common bind for the session ... a lot
of our applications access data across multiple data-servers and with
multiple-logins
pjjH
On Sep 24, 12:12 am, Michael Bayer mike...@zzzcomputing.com wrote:
On Sep 23, 2009, at 6:36 PM, phrrn
I have a hidden WriterSession which I am using behind the scenes to
manage a number of API entries that write data in bulk e.g. upsert
(MappedClass, iterator_that_returns_dicts). I want the session to only
look at its own binds and to ignore any that are in place on the
metadata collection. I
__getattribute__? I assume he meant __setattr__?
pjjH
On Aug 25, 9:26 am, Martijn Faassen faas...@startifact.com wrote:
Hey,
phrrn...@googlemail.com wrote:
I implemented a very crude flush/commit-time version of this today
that disables all modifications. Michael suggested the use
, phrrn...@googlemail.com wrote:
One of the very nice things about using SQLAlchemy is that since so
much of the grunt-work is taken care of for you, it gives you the
opportunity to come up with (potentially) hare-brained schemes like
the one I just thought of. We would like to do reading
I implemented a very crude flush/commit-time version of this today
that disables all modifications. Michael suggested the use of
connection_callable to provide fine-grained control of which engine to
use for modifications. I haven't gone through all the details yet but
it seems to work for my
One of the very nice things about using SQLAlchemy is that since so
much of the grunt-work is taken care of for you, it gives you the
opportunity to come up with (potentially) hare-brained schemes like
the one I just thought of. We would like to do reading of data with
one set of logins and do
We are rolling out some SQLAlchemy infrastructure at work and came
across an interesting issue. We are using the mssql dialect on top of
pyodbc to talk to both SQL Server and Sybase dataservers. We use bound
metadata and scoped session_maker with autocommit=True. First off, the
problem we
Thanks for the information Mike. I do have a listener in place already
but decided to poke in the configuration directly when creating the
engine:
'connect_args': {'autocommit': True}
I found some very interesting results by experimenting with vanilla
pyodbc and
of sybasect until we run into the limitation
of the mssql dialect.
pjjH
On Aug 6, 8:47 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Aug 6, 2009, at 8:39 PM, phrrn...@googlemail.com wrote:
Thanks for the information Mike. I do have a listener in place already
but decided to poke
The FreeTDS driver needs to be passed a value encoded in the encoding
specified in the 'client charset' part of the odbc.ini configuration
file for the data-source in question. FreeTDS will use iconv to
re-encode the data in UCS-2 (which is either what is used by SQL Server
or is equivalent to
The spatialite extensions for SQLite include a *read-only* virtual
table adaptor (VirtualText) for CSV files.
http://www.gaia-gis.it/spatialite-2.2/spatialite-2.2_manual.html#t7
If the data-volumes are reasonably small, and the csv clients are
read-only and the frequency of updates to the
I have a MapperExtension which defines one method, instrument_class.
This method in turn pokes in a number of convenience methods such as
lookup, search and a custom __str__ and __repr__. We do this dynamic
instrumentation to avoid having to inherit from a common base class.
I think you should be able to say metadata.bind = engine
pjjH
On Jul 10, 8:30 am, jo jose.soa...@sferacarta.com wrote:
Hi all,
I'm trying to migrate from 0.3.10 to 0.4 and there's the following error:
... engine = database.metadata.engine
AttributeError: 'MetaData' object has no
I found the problem: it is with the instrumentation_finders stuff you
helped us with months ago (see below). The problem can be reproduced
with this:
class DummyMapperExtension(MapperExtension):
    pass

mapper(ModelPresentValue, model_present_value,
       extension=DummyMapperExtension())
If the
you distill it into a small test case...it seems suspect that
UPDATE would break. more likely is, __init__ doesn't work correctly or
perhaps history events fail.
phrrn...@googlemail.com wrote:
I found the problem: it is with the instrumentation_finders stuff you
helped us with months ago
...@googlemail.com
phrrn...@googlemail.com wrote:
Sure. I'll work on the distillation after lunch. It is not that the
UPDATE is broken, per se, it is more that update should not be called.
The code below works OK on an empty database but fails on subsequent
runs when pre-existing data is found
Cool. Thanks for taking a look at it.
pjjH
On Jul 10, 4:02 pm, Michael Bayer mike...@zzzcomputing.com wrote:
phrrn...@googlemail.com wrote:
assembling the test was a good exercise as I can see that the problem
exists with 0.5.3 but not against 0.5.4p2 or a checked out copy of
svn
, is there any way I can modify the SybaseInteger
or Sybase SQL Dialect class to cast all columns which are integers?
Thanks again!
On Jun 30, 11:02 am, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
as a note, I made some attempts with the sybase dialect and I can say
with quoting (and also may
not). So this issue would reside within the DBAPI in use. Just
as a note, I made some attempts with the sybase dialect and I can say
that FreeTDS with sybase is not usable at all since it doesn't render
bind parameters correctly. phrrn
as a note, I made some attempts with the sybase dialect and I can say
that FreeTDS with sybase is not usable at all since it doesn't render
bind parameters correctly.
This is correct if one is using the FreeTDS implementation of the
CT-Lib API but is not the case when using pyodbc:
:
phrrn...@googlemail.com wrote:
I am learning fixtures right now to see if I can use it to
successfully solve this problem.
What is fixtures?
Chris
--
Simplistix - Content Management, Zope Python Consulting
-http://www.simplistix.co.uk
The MetaData object has a sorted_tables property that returns a
list of Table objects. Also, MetaData.tables is a dictionary keyed by
table name (fully qualified including schema, if any). Table.columns
has name and type information. Assuming a MetaData instance md, here
is how you would list
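A minimal sketch of that listing (the table and columns here are invented for illustration):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table

md = MetaData()
Table("users", md,
      Column("id", Integer, primary_key=True),
      Column("name", String(40)))

# Walk the tables in dependency order and print each column's name and type.
for table in md.sorted_tables:
    for col in table.columns:
        print(f"{table.name}.{col.name}: {col.type}")
```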
I am learning fixtures right now to see if I can use it to
successfully solve this problem.
pjjH
On Jun 24, 6:11 pm, Chris Withers ch...@simplistix.co.uk wrote:
Hi All,
I'm wondering what the common idiom is for unit testing w.r.t. data and
transactions...
So, coming from my ZODB
You should put this into your .freetds.conf file to ensure that
FreeTDS will tell iconv to do the right thing (my understanding is
that all unicode data is encoded to UCS-2 by FreeTDS)
tds version = 8.0
client charset = UTF-8
SQL Alchemy create_engine has an encoding kwarg:
I was about to follow up to Mike's suggestion of a CAST with "That
will never work because ..." but I decided to try it out and ... it
works!!!:
for i in e.execute(sql.text("select result = dateadd(day, CONVERT(INTEGER, :days), getdate())"), dict(days=7)):
    print i
(datetime.datetime(2009, 6,
I have something like this to serialize a result-set to a delimited
file-format. It is not very pretty and probably not at all pythonic but I
find it handy.
pjjH
from csv import writer
from io import StringIO

def as_delimited(q, *args):
    # Write the named columns of each row in the query as one '|'-delimited line.
    csvdata = StringIO()
    w = writer(csvdata, delimiter='|')
    for i in q.values(*args):
        w.writerow(i)
    return csvdata.getvalue()
cache[pk_as_tuple] = o
On Apr 15, 6:26 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
Beautiful. I am impressed at the elegance of this sample and the
others also.
pjjH
On Apr 15, 3:26 pm, Michael Bayer mike...@zzzcomputing.com wrote:
for SQLA integration you'd probably
I did just that this afternoon to link across heterogeneous
dataservers. Modulo the inevitable awkwardness from having two
different names for the same thing (i.e. Asset and EjvAsset), this was
very easy:
class_mapper(Asset).add_properties({
'EjvAsset' : relation(EjvAsset,
My employers have a custom Python type (derived from datetime) for
dealing with dates and datetimes. Let's call it 'BananaDate'. They
would like BananaDate be used by SQL Alchemy. The standard way of
doing this appears to be with a TypeDecorator:
class BananaDate(types.TypeDecorator):
from
but aliased to
the corresponding SA types
from banana.dbo import BananaDate as Date
from banana.dbo import BananaTimestamp as DateTime
On Apr 24, 5:00 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
My employers have a custom Python type (derived from datetime) for
dealing
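A sketch of the TypeDecorator wiring: here the custom Python type and the decorator get separate names (BananaDate and BananaDateTime) to keep them distinct, and from_datetime is an assumed helper, not anything from the post:

```python
import datetime

from sqlalchemy import types

class BananaDate(datetime.datetime):
    """Stands in for the employer's datetime-derived type."""
    @classmethod
    def from_datetime(cls, dt):
        return cls(dt.year, dt.month, dt.day,
                   dt.hour, dt.minute, dt.second, dt.microsecond)

class BananaDateTime(types.TypeDecorator):
    impl = types.DateTime
    cache_ok = True  # silences SQLAlchemy 1.4+ caching warnings; ignored by older versions

    def process_result_value(self, value, dialect):
        # Convert the DBAPI's plain datetime into the custom type on the way out.
        return None if value is None else BananaDate.from_datetime(value)

print(BananaDateTime().process_result_value(datetime.datetime(2009, 6, 1), None))
```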
I tried autocode but I found it limiting due to the use of SA's
MetaData collection as the model. An alternative approach has worked
quite well: persist the results of introspection queries to a set of
physical tables and wrap those (metadata) tables with SA. The
code-generation is now a standard
Hi,
I would like to export a 'lookup' interface to programmers that does
object caching (a bounded size cache with a LRU replacement policy)
and prefetching of small tables for read-only reference data. This is
a rewrite in Python over SQL Alchemy of similar functionality in Perl
(originally
are in the distribution in examples/query_caching. I use a variant of the
with_cache_key version for my own purposes.
phrrn...@googlemail.com wrote:
Hi,
I would like to export a 'lookup' interface to programmers that does
object caching (a bounded size cache with a LRU replacement policy
I am trudging through the unit tests for the Sybase backend and found
an interesting one in test_nonreflected_fk_raises in
engine/reflection.py. A couple of the drivers look like they skip over the
column if it is not contained within table.c but go ahead and create
the primary_key in any case!
I uploaded a patch to trac
On Mar 13, 12:51 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
opened ticket 1341 for this.
http://www.sqlalchemy.org/trac/ticket/1341
q = s.query(_PrimaryKey).filter(_PrimaryKey.TABLE_NAME ==
'banana')
NameError: name '_PrimaryKey' is not defined
pjjH
On Apr 3, 11:34 am, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
I copied the list of import statements from the module file
(deshaw.dbo.chimera) to the driver file
3, 12:57 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
I did RTFM at http://www.python.org/doc/2.5.2/ref/import.html and now
have the symbols explicitly enumerated in the __all__ attribute in the
module. However, I am still getting this error when I attempt to do
an ORM query
...@zzzcomputing.com wrote:
what does compile_mappers() say? can you call this mapper, and at the
same time all mappers within the entire application have been called?
phrrn...@googlemail.com wrote:
I copied the list of import statements from the module file
(deshaw.dbo.chimera) to the driver
as
sqlalchemy.orm.properties.Comparator
On Apr 3, 1:51 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
I don't see any output from it at all. I am calling it after importing
everything but before making any queries. The mapper registry looks
unsurprising (albeit based on a guess of what it does)
pjjH
live without the docstring fanciness for now but it has been
educational tracking down the various problems.
pjjH
On Apr 3, 3:04 pm, Michael Bayer mike...@zzzcomputing.com wrote:
phrrn...@googlemail.com wrote:
# class AttachNotesAsDocstring(interfaces.InstrumentationManager):
# def
again for the debugging help.
pjjH
On Apr 3, 3:53 pm, Michael Bayer mike...@zzzcomputing.com wrote:
any chance of producing a *small* test case for this
phrrn...@googlemail.com wrote:
Yes, the erroneous behavior goes away when this code is commented out.
No, an empty instrumentation manager
This code works when executed within an if __name__ == '__main__'
block in the .py that contains the model:
s = MySession(bind=e)
q = s.query(DatabaseTable).filter(DatabaseTable.TABLE_CAT=='credit')
for i in q:
    print i
However, if I take it out and put it in a separate file, I get an
I am encountering a problem with getting the unit tests to run on
Sybase because of cascades of errors originating from failure to drop
a table that is referenced by a FK constraint in another table. When
attempting to drop the people table, I need the SybaseSQLSchemaDropper
to emit SQL like
dependency so
that there's no need for ALTER. In the case that two tables have a mutual
foreign key dependency, one of the ForeignKey objects has the
use_alter=True flag set so that just the one FK gets dropped first via
ALTER.
phrrn...@googlemail.com wrote:
I am encountering a problem
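A sketch of the use_alter pattern (table and constraint names are mine): the flagged ForeignKey is excluded from the dependency sort, so create/drop ordering can be resolved and that one constraint is handled via ALTER.

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

md = MetaData()
parent = Table(
    "parent", md,
    Column("id", Integer, primary_key=True),
    # This side of the mutual dependency is broken out into an ALTER.
    Column("favorite_child_id", Integer,
           ForeignKey("child.id", use_alter=True, name="fk_favorite_child")),
)
child = Table(
    "child", md,
    Column("id", Integer, primary_key=True),
    Column("parent_id", Integer, ForeignKey("parent.id")),
)

# With use_alter the sort succeeds instead of raising on the cycle.
for t in md.sorted_tables:
    print(t.name)
```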
Then I must have a bug in the FK introspection. Which unit tests would
you suggest getting running first? Is there one that specifically
tests foreign key stuff?
pjjH
On Mar 13, 3:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
phrrn...@googlemail.com wrote:
Hi Mike,
the situation I
, force) for x in schema.split
('.')])
return result
On Mar 10, 5:30 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
As it happens, this works on the Sybase dialect without fixing the
quoting at all! Apparently SQL such as this is happily accepted by
Sybase:
SELECT
Hi,
I have done a fair bit on a Sybase ASE dialect for Alchemy and it is
now in a primitive but usable condition for simple applications. My
employers are fine with contributing the code back to the project and
I intended to coordinate with Mike Bayer about this shortly. In the
meantime, we would
:
you can install the dialect using a setuptools entry point. SQLAlchemy
looks for dialect modules using the sqlalchemy.databases entry point
name, so in this case you might name it sqlalchemy.databases.sybase-ase.
phrrn...@googlemail.com wrote:
Hi,
I have done a fair bit on a Sybase ASE
kirtland j...@discorporate.us wrote:
It'd look like this:
http://code.google.com/p/ibm-db/source/browse/trunk/IBM_DB/ibm_db_sa/...
Your dialect will be available to SA after you 'python setup.py install'
or 'python setup.py develop' in your -ase distribution.
phrrn...@googlemail.com wrote
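Concretely, the registration is just an entry_points mapping handed to setuptools.setup(); the module, package, and class names below are assumptions for illustration:

```python
# Sketch of the setuptools metadata that registers a third-party dialect
# under the entry point group SQLAlchemy scans for dialect modules.
entry_points = {
    "sqlalchemy.databases": [
        # hypothetical module path and dialect class
        "sybase_ase = sybase_ase.dialect:SybaseASEDialect",
    ],
}
# Passed as setuptools.setup(..., entry_points=entry_points) in setup.py,
# then activated by 'python setup.py install' or 'python setup.py develop'.
print(entry_points)
```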
Thanks to all the help from the group, I am now facing an issue much
earlier than anticipated: how to manage Alchemy in -- apologies for
using a dreadfully overused phrase -- an 'Enterprise Setting'. This
really boils down to matters of scale: tens of thousands of tables in
hundreds of databases
else:
-(schema, tname, colname) = m.group(1, 2, 3)
+m1 = m[2:]
+m1.reverse()
+schema = '.'.join(m1)
+
On Mar 5, 7:21 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
OK. If it might be as easy as that, I will have a go
This is beautiful and exactly what I want. Thanks very much.
pjjH
On Mar 7, 12:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Mar 6, 2009, at 10:37 PM, phrrn...@googlemail.com wrote:
I have some metadata on table and some of the columns and would like
to surface
I have some metadata on table and some of the columns and would like
to surface these as docstrings on the mapped class and columns.
If table foo has columns i, j, k with comments 'apple', 'banana',
'pear', respectively, and the table is mapped via class Foo then I
would like the programmer to
)
pjjH
On Feb 27, 4:29 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
Yes, it is based off the mssql code but I made some modifications to
it to take care of situations like, for example, where there is an
identity column but it is not the primary key (and hence not a
'sequence
OK. If it might be as easy as that, I will have a go and see how well
it works.
pjjH
On Mar 5, 4:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:
phrrn...@googlemail.com wrote:
Sybase (and SQL Server) support cross-database JOINs (Sybase even
supports cross-database foreign-key
It is common (for us) to have tables with an identity column and that
column *not* be the primary key. I am dealing with this by querying
the system catalogs and caching the result within the table meta-data
object itself (probably a bad idea but doing it for expediency). The
system catalog query
Some of the unit-tests have parameters which don't refer to column
names so this is the latest formulation:
if identity_column in [t.c[key].name for key in
self.compiled_parameters[0] if hasattr(t.c, key)]:
On Mar 4, 5:34 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
It is common
Because of the problems with a Sybase SA backend on both python-sybase
and pyodbc, I changed the poolclass to AssertionPool and found a bunch
of unexpected (to me) situations where AssertionPool failed. Based on
a casual examination of the code, it seems that one connection should
be sufficient
On systems where I do not have administrative access, I emulate an
auxiliary table of numbers with a subquery that has a bunch of UNION
ALL statements in it. I would like to do same with SQL Alchemy but
have not been able to figure out yet how to do it. My current effort
is almost there. I need
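One way to sketch this (construct names are mine) with 1.4-style expressions: build the derived table from a UNION ALL of single-row literal SELECTs.

```python
from sqlalchemy import literal, select, union_all

# Emulate a small auxiliary numbers table as a named derived table.
numbers = union_all(
    *[select(literal(n).label("n")) for n in range(1, 6)]
).subquery("numbers")

stmt = select(numbers.c.n)
print(stmt)
```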
Thanks. I wrapped it as ' (original_sql) as foo' as Sybase needs a
name for the derived table. You have helped to get primary key and
index introspection working on Sybase!
One interesting and useful bit of information is that one can't use
placeholders in the WHERE clause for anything other than
%(expr)s',
column_name: 'col_name%(expr)s',
sql_functions.char_length: lambda x: "LEN(%s)" % x,
}
)
On Mar 3, 5:46 pm, Rick Morrison rickmorri...@gmail.com wrote:
On Tue, Mar 3, 2009 at 5:31 PM, phrrn...@googlemail.com
, phrrn...@googlemail.com wrote:
Thanks Michael. I have a sybase.py passing *some* unit tests with both
pyodbc and the Sybase driver, both running on Solaris 10 x86 against
ASE 15. This is a hack that seems to work for the Sybase DBAPI module.
I do have access to lots and lots of different
I think that the SQL Alchemy SchemaItem objects can be reasonably
code-generated from the schema. I don't feel the same way about the mapped
classes: the naming of classes and the naming of the various relations
is something that benefits from some thought and design, more than
seems practical
] +
self._last_inserted_ids[1:]
if self.IDENTITY_INSERT:
    self.cursor.execute("SET IDENTITY_INSERT %s OFF" %
        self.dialect.identifier_preparer.format_table(
            self.compiled.statement.table))
On Feb 27, 2:02 pm, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
OK. I will do
and schema 'prod' in database banana).
Then what does one do about remote objects mapped in via CIS (e.g.
REMOTE.production.dbo.very_important_table) (actually this is a
problem with SQL Server also)
pjjH
On Feb 27, 4:05 pm, Michael Bayer mike...@zzzcomputing.com wrote:
phrrn...@googlemail.com
I am doing some work on a SA engine for Sybase Adaptive Server
Enterprise (ASE) on top of both pyodbc and the Sybase DB-API driver.
The existing sybase engine for SA only works with Sybase Anywhere
(ASA).
There is a problem with named parameters with the Sybase driver in
that the placeholders
['database'] = opts['database']
return ([opts['host'], opts['username'], opts['password']],
dictArgs)
On Feb 26, 4:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Feb 26, 2009, at 3:55 PM, phrrn...@googlemail.com wrote:
I am doing some work on a SA engine for Sybase Adaptive