On Mon, 26 Jan 2015 22:05:44 +0100, Antoine Pitrou <[email protected]> wrote:
> On Mon, 26 Jan 2015 12:22:20 -0800
> Ethan Furman <[email protected]> wrote:
> > On 01/26/2015 12:09 PM, Antoine Pitrou wrote:
> > > On Mon, 26 Jan 2015 12:06:26 -0800
> > > Ethan Furman <[email protected]> wrote:
> > >> It destroys the chaining value and pretty much makes the improvement
> > >> not an improvement. If there's a possibility that
> > >> the same key could be in more than one of the dictionaries then you
> > >> still have to do the
> > >>
> > >> dict.update(another_dict)
> > >
> > > So what? Is the situation where chaining is desirable common enough?
> >
> > Common enough to not break it, yes.
>
> Really? What are the use cases?
My use case is a configuration method that takes keyword parameters.
In tests I want to specify a bunch of default values for the
configuration, but I want individual test methods to be able
to override those values. So I have a bunch of code that does
the equivalent of:
    from test.support import default_config
    [...]
    def _prep(self, config_overrides):
        config = default_config.copy()
        config.update(config_overrides)
        my_config_object.load(**config)
        ....
With the current proposal I could instead do:
    def _prep(self, config_overrides):
        my_config_object.load(**default_config, **config_overrides)
I suspect I have run into situations like this elsewhere as well, but
this is the one from one of my current projects.
That said, I must admit to being a bit ambivalent about this, since
we otherwise are careful to disallow duplicate argument names.
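For concreteness, this is the existing rule I mean, using a throwaway
function f as a stand-in (the results are noted in the comments):

    >>> def f(**kw):
    ...     return kw
    ...
    >>> f(a=1, a=2)         # SyntaxError: keyword argument repeated
    >>> f(a=1, **{'a': 2})  # TypeError: got multiple values for keyword argument 'a'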
So, instead we could write:
    my_config_object.load(**{**default_config, **config_overrides})
since dict literals *do* allow duplicate keys.
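As a toy example of the behavior I'm relying on: the first expression
already works today, and the second assumes the proposed unpacking form
follows the same later-key-wins rule as a plain dict display:

    >>> {'a': 1, 'a': 2}    # duplicate keys are allowed; the later one wins
    {'a': 2}
    >>> d1 = {'a': 1, 'b': 2}
    >>> d2 = {'a': 3}
    >>> {**d1, **d2} == {'a': 3, 'b': 2}    # so the overrides would take precedence
    True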
--David