As it happens, this works on the Sybase dialect without fixing the
quoting at all!  Apparently SQL such as this is happily accepted by
Sybase:

SELECT [fdcommon.dbo].organization.org_id,
       [fdcommon.dbo].organization.abbrev
FROM [fdcommon.dbo].organization
JOIN [fdcommon.dbo].org_type
  ON [fdcommon.dbo].org_type.org_type_id =
     [fdcommon.dbo].organization.org_type

I resorted to some brute-force list operations rather than regular
expressions to parse out the component names (see diff below). I will
fix the quoting shortly (within the next day or so) and submit a
single diff.

thanks,

pjjH


Index: schema.py
===================================================================
--- schema.py   (revision 5816)
+++ schema.py   (working copy)
@@ -876,17 +876,22 @@
                 raise exc.ArgumentError(
                     "Parent column '%s' does not descend from a "
                     "table-attached Column" % str(self.parent))
-            m = re.match(r"^(.+?)(?:\.(.+?))?(?:\.(.+?))?$", self._colspec,
-                         re.UNICODE)
-            if m is None:
+            m = self._colspec.split('.')
+            if len(m) < 2:
                 raise exc.ArgumentError(
                     "Invalid foreign key column specification: %s" %
                     self._colspec)
-            if m.group(3) is None:
-                (tname, colname) = m.group(1, 2)
+
+            # work backwards: column, then table, then whatever is
+            # left (re-joined on '.') is the schema
+            m.reverse()
+            (colname, tname) = m[0:2]
+
+            if len(m) == 2:
                 schema = None
             else:
-                (schema, tname, colname) = m.group(1, 2, 3)
+                m1 = m[2:]
+                m1.reverse()
+                schema = '.'.join(m1)
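
To illustrate the approach outside of schema.py, here is a small
standalone sketch of the same reversed-split parsing, with made-up
sample colspecs (the function name is just for illustration, not part
of the patch):

def parse_colspec(colspec):
    # split on the dots and work backwards: the last token is the
    # column, the next-to-last is the table, and whatever remains
    # (re-joined on '.') is the possibly multi-part schema
    parts = colspec.split('.')
    if len(parts) < 2:
        raise ValueError(
            "Invalid foreign key column specification: %s" % colspec)
    parts.reverse()
    (colname, tname) = parts[0:2]
    if len(parts) == 2:
        schema = None
    else:
        rest = parts[2:]
        rest.reverse()
        schema = '.'.join(rest)
    return (schema, tname, colname)

assert parse_colspec('organization.org_id') == \
    (None, 'organization', 'org_id')
assert parse_colspec('dbo.organization.org_id') == \
    ('dbo', 'organization', 'org_id')
assert parse_colspec('fdcommon.dbo.organization.org_id') == \
    ('fdcommon.dbo', 'organization', 'org_id')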
+


On Mar 5, 7:21 pm, "phrrn...@googlemail.com" <phrrn...@googlemail.com>
wrote:
> OK. If it might be as easy as that, I will have a go and see how well
> it works.
>
> pjjH
>
> On Mar 5, 4:31 pm, "Michael Bayer" <mike...@zzzcomputing.com> wrote:
>
> > phrrn...@googlemail.com wrote:
>
> > > Sybase (and SQL Server) support cross-database JOINs (Sybase even
> > > supports cross-database foreign-key constraints). There are four
> > > components to an object identifier:
>
> > > 1 = Object name
> > > 2 = Schema name
> > > 3 = Database name
> > > 4 = Server name
>
> > > The dataserver, database, and schema are assumed for one-part
> > > identifiers (e.g. 'foo'); the dataserver and database are assumed for
> > > two-part identifiers (e.g. 'dbo.foo'); and, finally, the dataserver is
> > > assumed for three-part identifiers (e.g. 'production.dbo.foo').
>
> > > e.g. SELECT foo.* FROM BANANA.production.dbo.foo
> > >        SELECT foo.* FROM production.dbo.foo
> > >        SELECT foo.* FROM production..foo -- same as the previous query
> > > if the caller's default schema is dbo
> > >        SELECT foo.* FROM dbo.foo
> > >        SELECT foo.* FROM foo
> > >        SELECT foo.* FROM ..foo
> > >
> > > I am not so interested in supporting four-part identifiers in SA but I
> > > would like to figure out how to support three-part identifiers as very
> > > many of our databases have cross-database references.
>
> > > One natural (to me!) way of doing this is to add a 'database' property
> > > to the Table and ForeignKeyConstraint schema items and have the Sybase/
> > > SQL Server dialects always emit fully-qualified three-part identifiers
> > > for table names.
>
> > we have an element on Table called "schema".  I had in mind that schema
> > should accept dotted names, so SQLA generally doesn't need to get
> > involved.  The only change needed is to the IdentifierPreparer, such that
> > when quote_schema is called, it separates the name along the dot first so
> > that it can quote each token separately.
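
For reference, here is a minimal standalone sketch of the quote_schema
idea Michael describes -- splitting the dotted schema name and quoting
each token separately. The class and the bracket quoting are stand-ins
for illustration, not SQLAlchemy's actual IdentifierPreparer:

class BracketPreparer(object):
    # stand-in for the dialect's IdentifierPreparer; Sybase and
    # SQL Server quote identifiers with square brackets
    initial_quote = '['
    final_quote = ']'

    def quote_identifier(self, ident):
        return '%s%s%s' % (self.initial_quote, ident, self.final_quote)

    def quote_schema(self, schema):
        # separate the name along the dots first, so each token is
        # quoted on its own: 'fdcommon.dbo' -> '[fdcommon].[dbo]'
        return '.'.join(self.quote_identifier(t)
                        for t in schema.split('.'))

print(BracketPreparer().quote_schema('fdcommon.dbo'))
# [fdcommon].[dbo]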