On Fri, 2006-06-30 at 02:59 +0000, [EMAIL PROTECTED] wrote:
> Malcolm,
> 
> Thank you. Your questions have actually answered a few of mine. For
> those that are left, I'll try to clarify...
> 
> > So wouldn't the analogous situation in your case be to also reserve
> > django_test_a and django_test_b names as databases we will create in the
> > test framework and then use those?
> 
> The runtests.py that I checked in today does something similar --
> increments a counter for each test database it creates. But that
> doesn't solve the problem, which I can see I'm not explaining very
> well. An example might help. In the implementation I'm working on, a
> DATABASES property is added to settings, which is a dict, like this:
> 
> DATABASES = {
>     'a': { 'DATABASE_ENGINE': 'postgres',
>            'DATABASE_NAME': 'whatever',
>            # ...
>            },
>     'b': { 'DATABASE_ENGINE': 'postgres',
>            'DATABASE_NAME': 'something',
>            # ...
>            },
>     'tmp': { 'DATABASE_ENGINE': 'sqlite3',
>              'DATABASE_NAME': ':memory:'
>              }
>     }
> 
> What I need for testing are predictable keys in that dict, since those
> are the names of the connections. The name is what a model specifies to
> indicate that it uses a non-default connection, like:
> 
> class Zoo(models.Model):
>     # ...
>     class Meta:
>         db_connection = 'a'
> 
> (Likewise for transactions: transaction.commit(['a','b'])... ) So if my
> test models need connection 'a', I can either have the tests fail if
> there's no 'a' in DATABASES, or (if settings are reset between tests) I
> can set the settings I need inside of the tests themselves. The spirit
> of earlier answers seems to be "don't touch settings," so I'll go with
> the error option unless it just won't work at all, or someone has a
> better idea.

I understand what you are wanting to do. I'm wondering why you can't
create the dictionary inside runtests.py and then poke it inside
settings.DATABASES? Just like we do with settings.DATABASE_NAME (see line
160 of the current trunk's runtests.py). The settings file passed in by
the user then just specifies what database driver(s) to use and they
don't need to set things up according to what the tests want: you can do
that yourself inside runtests.py. That way your keys are completely
predictable and the burden on the user for setup is kept small.
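A minimal sketch of what I mean, with `SimpleNamespace` standing in for the real django.conf.settings and `build_test_databases`/`engine_params` being names I've made up for illustration:

```python
# Stand-in for django.conf.settings; the real runtests.py would import
# the actual settings object instead.
from types import SimpleNamespace

settings = SimpleNamespace()

def build_test_databases(engine_params):
    """Build a DATABASES dict with fixed, predictable connection names.

    engine_params comes from the user's settings file (driver, user,
    password, ...); runtests.py controls only the connection names and
    the reserved django_test_* database names.
    """
    databases = {}
    for name in ('a', 'b'):
        params = dict(engine_params)
        params['DATABASE_NAME'] = 'django_test_%s' % name
        databases[name] = params
    # The in-memory sqlite connection needs no user configuration at all.
    databases['tmp'] = {'DATABASE_ENGINE': 'sqlite3',
                        'DATABASE_NAME': ':memory:'}
    return databases

# Poke the generated dict into settings, mirroring how runtests.py
# already overwrites settings.DATABASE_NAME.
settings.DATABASES = build_test_databases(
    {'DATABASE_ENGINE': 'postgres', 'DATABASE_USER': 'tester'})
```

The user only supplies the driver parameters; the keys 'a', 'b' and 'tmp' are always there for the test models to rely on.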

Sure, writing to the settings class is officially a "no-no", but breaking
the rules a little in the internal test framework supporting machinery
is not that horrible (and you won't be breaking the rules so much as
following the pattern). It's tightly tied to the core anyway, so it's
not as though we'd fail to update things if conf.settings changes
internally. We already write to settings.DEBUG and
settings.DATABASE_NAME inside the test framework.

I guess one change you might want to make/require in the passed-in
settings files is a list of available database drivers, so that if
you want to generate a DATABASES dictionary containing multiple
connections to different databases, you know what you can choose from.
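Something like this, say, where `AVAILABLE_TEST_ENGINES` and `assign_connections` are hypothetical names, not anything that exists in the tree:

```python
def assign_connections(names, engines):
    """Pair each fixed test connection name with one of the database
    engines the user's settings file says are available, cycling
    through the engine list if there are more names than engines."""
    databases = {}
    for i, name in enumerate(names):
        engine = engines[i % len(engines)]
        databases[name] = {
            'DATABASE_ENGINE': engine,
            # sqlite3 can run in memory; other engines get a reserved
            # django_test_* database name, as runtests.py does today.
            'DATABASE_NAME': (':memory:' if engine == 'sqlite3'
                              else 'django_test_%s' % name),
        }
    return databases

# Hypothetical setting the user's settings file would supply:
AVAILABLE_TEST_ENGINES = ['postgres', 'sqlite3']
DATABASES = assign_connections(['a', 'b', 'tmp'], AVAILABLE_TEST_ENGINES)
```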


> 
> > How much variation is there going to be in your test frameworks? From
> > your earlier description of how the configuration works, it sounded like
> > the settings were pretty fixed: a mapping of strings to database params.
> > So can't you just use the same settings throughout, or are there various
> > setups that induce different behaviour? Our current tests do not play
> > around with configuration at all, so I'm wondering if you need to do
> > this or if we can just assume a fixed settings setup.
> 
> There shouldn't be any need to change settings, beyond what I've
> described above. If the right thing to do is to just punt on the tests
> if they can't find the settings they need, then there's no need to
> change any settings at all.
> 
> > The documentation of this should go in the docs, not the tests.
> 
> This is where I really misunderstood things, and I think that's why
> some of my earlier questions were sort of obtuse. I thought that all of
> the docs on the site were built from the doctests. I hadn't looked at
> the *.txt files in docs to see that they aren't meant to be executable.
> So that really answers the biggest question, which was "how can I write
> tests for these settings values and have them get into the docs" -- the
> answer is, I don't, the docs and tests are separate. On the one hand
> that makes things a lot simpler, but on the other hand it makes me
> concerned about docs being wrong or out of sync with the tests -- how
> do you generally handle that?

The way I tend to work (can't speak for anybody else) when I'm checking
in patches is always to be asking myself whether there is a test and/or
a doc change required. This does require a good familiarity with the
docs, because some things (e.g. relations between models) are documented
in various places in more than one document. And sometimes something
gets missed, but a bug report always appears and it gets fixed quickly.

Certainly there are advantages to fully executable documentation, but
there are also disadvantages in that you have to fully set up the
supporting models, etc, for every line of example code you want in the
docs. This restricts the type of documentation examples you can use a
little bit and means maintaining more infrastructure to separate out the
executable bits from the reStructuredText, etc. Might be nice one day,
but not a showstopper at the moment.
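For what it's worth, the standard library machinery for running doctests out of plain text is small; the real cost is the setup scaffolding every example would have to carry. A toy sketch (the fragment and names are invented, not from our docs):

```python
import doctest

# A documentation fragment with its own setup line -- exactly the kind
# of scaffolding every executable example would need to drag around.
DOC_FRAGMENT = """
Filtering a list of zoo names:

>>> zoos = ['bronx', 'berlin', 'sydney']   # setup the example needs
>>> [z for z in zoos if z.startswith('b')]
['bronx', 'berlin']
"""

# Parse and execute the fragment the way a docs test runner might.
parser = doctest.DocTestParser()
test = parser.get_doctest(DOC_FRAGMENT, {}, 'doc_fragment', None, 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test)
```

Easy enough for toy examples like that one; less fun once every snippet needs models, a database and fixture data set up first.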

At the moment, with the size of Django's documentation, I don't think
we're suffering from the docs drifting from reality too much. It's easy
to fix the small errors that creep in and everybody seems to be
sufficiently paranoid about getting the docs right.

> > I'm sure you are not just throwing these out to see how they sound, so
> > I'm probably missing something. Is changing the settings in the midst of
> > a test run necessary?
> 
> Mainly I'm trying to figure out how you all want the tests structured,
> since the django test suite is not very much like a test suite I would
> put together. :)

Yeah, I found that too when I started poking around. But it's not a bad
little setup. We can test most model- and template-level items easily
enough and it strikes a reasonable balance between persnickety and
maintainable. Plus we get the nice examples out of it.

Malcolm



--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-developers
-~----------~----~----~----~------~----~------~--~---