I have just run afoul of one of those "features that make no sense except 
when combined with the test suite." In BaseDatabaseFeatures it looks 
like this:

    # Does the backend allow very long model names without error?
    supports_long_model_names = True

What this flag actually does is (when False) eliminate generation of one 
specific test table when setting up the test suite. The test creation logic 
only considers the setting of the flag on the 'default' database, so if, as 
in the case I was attempting, 'default' is PostgreSQL and 'other' is 
ms-sql, the construction of the test configuration fails. I wanted to test 
that configuration specifically, since I think it is one of the wiser ways 
to run Django with SQL Server involved. But I digress...

My point is that an example exists in the core code, and that, indeed, it 
is not very maintainer-friendly. In addition to not addressing the 
question of "how long is 'long'?", it requires a full-text search of the 
source code to figure out the relationship. If, instead, the failing setup 
module were coded with actual vendor names and a comment like "# vendor_x 
table names are limited to 132 characters", it would be much easier to 
maintain the tests correctly, IMHO.
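
For concreteness, here is roughly what I mean -- a sketch only, using a 
made-up vendor name ('vendor_x') and a made-up function name; 
connection.vendor is the real attribute I would key on:

    from django.db import connections

    def should_create_long_name_table(alias):
        # Sketch, not actual Django code: name the vendor and the
        # reason directly, instead of an abstract feature flag.
        connection = connections[alias]
        if connection.vendor == 'vendor_x':
            # vendor_x table names are limited to 132 characters, so
            # the deliberately long-named test table cannot exist there.
            return False
        return True

The next maintainer can then see at a glance which backend is being 
worked around, and why.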

I maintain a test suite which has four target database engines, and the 
test code has plenty of examples of skipping (or modifying) a test based on 
code like "if db.engine in ['mssql', 'postgresql']:". I find that such an 
arrangement works very well.
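
In Django terms, the same pattern might look like this (the test name, 
the vendor list, and the reason string are only placeholders):

    import unittest
    from django.db import connection

    class SQLCommandsTests(unittest.TestCase):

        @unittest.skipIf(connection.vendor in ('mssql', 'oracle'),
                         "this backend emits extra statements here")
        def test_drop_table_sql(self):
            # Placeholder body; the real test would inspect the
            # generated SQL, as in the examples quoted below.
            pass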


On Tuesday, May 28, 2013 12:26:44 AM UTC-6, Anssi Kääriäinen wrote:
>
> On 27 May, 20:15, Shai Berger <s...@platonix.com> wrote: 
> > Hi Carl, 
> > 
> > On Monday 27 May 2013 19:37:55 Carl Meyer wrote: 
> >
> > > Hi Shai, 
> > 
> > > On 05/27/2013 09:26 AM, Shai Berger wrote: 
> > > > I'm working on fixing some failing tests under Oracle, and I ran into
> >
> > > >       commands_sql.tests.SQLCommandsTestCase.test_sql_all() 
> > 
> > > > [...] 
> > 
> > > > For now, I will only special-case Oracle -- that should solve a
> > > > standing, release-blocker bug, and not change the semantics of the
> > > > test otherwise; but I'd like to achieve something better, more
> > > > general and 3rd-party-backend-friendly, for the future.
> > 
> > > It seems to me that ideally a test for backend-specific behavior should
> > > become a test for that backend (and thus skipped on other backends).
> > > This also solves the third-party-backend problem; said backend should 
> > > have its own tests as needed, and Django's tests for backend-specific 
> > > behavior should be skipped under any unrecognized backend. 
> > 
> > I agree in general, but this can lead to DRY violations if we're not careful.
> > A better example of the problem is a test next to the one I needed to fix:
> > 
> >     def test_sql_delete(self): 
> >         app = models.get_app('commands_sql') 
> >         output = sql_delete(app, no_style(),
> >                             connections[DEFAULT_DB_ALIAS])
> >         # Oracle produces DROP SEQUENCE and DROP TABLE for this command. 
> >         if connections[DEFAULT_DB_ALIAS].vendor == 'oracle': 
> >             sql = output[1].lower() 
> >         else: 
> >             sql = output[0].lower() 
> >         six.assertRegex(self, sql, r'^drop table .commands_sql_book.*') 
> > 
> > Does any backend besides Oracle produce extra SQL for this command? Does
> > the different command index for Oracle justify separating this test into
> > two separate methods? And if you separate, would you mark the non-Oracle
> > case with skipIf(Oracle) or with skipUnless(sqlite or postgres or mysql)?
> > 
> > I think a better solution for this is to keep the original method, and
> > mark it with skipUnless(is_core_db) -- we'd need to define is_core_db
> > for that, of course; this could also serve as an easy-to-grep marker for
> > "general functionality test, with backend variations" -- which I think
> > would be quite useful for 3rd-party backend writers.
> > 
> > Shai. 
>
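
(For what it's worth, I imagine is_core_db could be as simple as the 
sketch below; the vendor strings are the ones Django's shipped backends 
report:

    from django.db import connection

    def is_core_db():
        # "Core" meaning one of the backends that ship with Django.
        return connection.vendor in ('sqlite', 'postgresql',
                                     'mysql', 'oracle')

and then @skipUnless(is_core_db(), "general test, backend variations") 
on the method.)
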
> Traditionally a backend feature has been used for this. So, add some 
> feature like "produces_extra_sql_for_drop_table", and skip based on 
> that. This unfortunately leads to features that make no sense except 
> when combined with the test suite. 
>
> One idea is to somehow include a subclass of problematic test classes 
> in each backend. The backend can alter or skip those tests that are 
> backend specific. There are a couple of test apps that should work in 
> this way (introspection, backends, inspectdb, maybe more). 
>
> I don't know how to make this actually work so that Django's test 
> runner finds the correct test class from the backend's test module 
> instead of the generic backends test class. Optimally a backend could 
> subclass any test class to specialize or skip tests. 
>
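
(Something like the sketch below is what I would expect a backend to 
ship; the import path is a guess at how Django's own backends tests 
would be reached from a runtests.py environment:

    # In the third-party backend's own test module (sketch only):
    import unittest
    from backends.tests import BackendTestCase  # path is a guess

    class MSSQLBackendTests(BackendTestCase):

        @unittest.skip("SQL Server has no equivalent behavior")
        def test_some_generic_case(self):
            pass

The hard part, as you say, is getting the test runner to pick up this 
class instead of the generic one.)
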
> For 3rd party backend support in general: My opinion is that we should 
> include them all in core. The rules would be: 
>   1. Core committers do not need to make sure all backends pass tests 
> before committing features. 
>   2. There are separate committers for the backends, they make sure 
> that their backend will pass tests eventually. 
>   3. When a release is nearing we will try to make sure each backend 
> passes tests. If the maintainer of the backend doesn't have time to 
> fix their backend, then we will try to find a new maintainer. If we 
> can't find a new maintainer, then we simply don't care that the 
> backend isn't working. That is, if nobody cares to maintain the 
> backend, then it just isn't important enough to maintain. 
>   4. The maintainers have complete control over their backend. They 
> don't need design decisions from core if a change concerns only their 
> backend. Of course, if they make really stupid decisions, then we will 
> find a new maintainer. 
>
I _like_ this proposal. In particular, it removes the necessity for core 
developers to run the test suite on ms-sql. My last test run required over 
six hours to get through the suite when running on the SQL Server machine 
itself. When run on Linux with my test proxy server it took even longer. 
[The good news is that there were fewer errors when running the proxy than 
when running on Windows!] With server throughput like that, having a 
second test team is really the only practical solution. I get basically 
one test run per day -- crank it up when I am ready to go home, and see the 
results in the morning.

> If Shai's recent contributions are an indication of how such a system 
> might work, then the system is going to work very well... 
>
>  - Anssi 
>
 
