RE: Questioning the effects of multiple assignment
Original message
From: dn via Python-list
Date: 7/7/20 16:04 (GMT+10:00)
To: 'Python'
Subject: Questioning the effects of multiple assignment

TLDR; if you are a Python 'Master' then feel free to skim the first part (which you should know hands-down), until the excerpts from 'the manual', and from there I'll be interested in learning from you...

Yesterday I asked a junior prog to expand an __init__() to accept either a series of (>1) scalars (as it does now), or to take similar values but presented as a tuple. He was a bit concerned that I didn't want to introduce a (separate) keyword-argument, until 'we' remembered starred-parameters. He then set about experimenting. Here's a dichotomy that surfaced as part of our 'play':-
(my problem is: I can't (reasonably) answer his question...)

If you read this code:
NB The print-ing does not follow the input-sequence, because that's the point to be made...

>>> def f( a, *b, c=0 ):

    Shouldn't that def be ...
    >>> def f(a, c=0, *b):
    ???

...     print( a, type( a ) )
...     print( c, type( c ) )
...     print( b )
...
>>> f( 1, 'two', 3, 'four' )

[I had to force "c" to become a keyword argument, but other than that, we'll be using these three parameters and four argument-values, again]

Question 1: did you correctly predict the output?

1 <class 'int'>
0 <class 'int'>
('two', 3, 'four')

Ahah, "c" went to default because there was no way to identify when the "*b" 'stopped' and "c" started - so all the values 'went' to become "b" (were all "consumed by"...).

Why did I also print "b" differently?
Building tension!
Please read on, gentle reader...

Let's make two small changes:
- amend the last line of the function to be similar:
...     print( b, type( b ) )
- make proper use of the function's API:
>>> f( 1, 'two', 3, c='four' )

Question 2: can you predict the output of "a"? Well, duh!
(same as before)

1 <class 'int'>

Question 3: has your predicted output of "c" changed? Yes? Good!
(Web.Refs below explain, should you wish...)

four <class 'str'>

Question 4: can you correctly predict the content of "b" and its type?

('two', 3) <class 'tuple'>

That makes sense, doesn't it? The arguments were presented to the function as a tuple, and those not assigned to a scalar value ("a" and "c") were kept as a tuple when assigned to "b".
Jolly good show, chaps!
(which made my young(er) colleague very happy, because now he could see that by checking the length of the parameter, such would reveal if the arguments were being passed as scalars or as a tuple.)

Aside: however, it made me think how handy it would be if the newly-drafted PEP 622 -- Structural Pattern Matching were available today (proposed for v3.10, https://www.python.org/dev/peps/pep-0622/) because (YAGNI-aside) we could then more-easily empower the API to accept other/more collections!

Why am I writing then?

Because during the same conversations I was 'demonstrating'/training/playing with some code that is (apparently) very similar - and yet, it's not. Oops!

Sticking with the same toy-data, let's code:

>>> a, *b, c = 1, 'two', 3, 'four'
>>> a, type( a )
>>> c, type( c )
>>> b, type( b )

Question 5: what do you expect "a" and "c" to deliver in this context?

(1, <class 'int'>)
('four', <class 'str'>)

Happy so far?

Question 6: (for maximum effect, re-read snippets from above, then) what do you expect from "b"?

(['two', 3], <class 'list'>)

List? A list? What's this "list" stuff???
When "b" was a parameter (above) it was assigned a tuple!

Are you as shocked as I?
Have you learned something?
(will it ever be useful?)
Has the world stopped turning?

Can you explain why these two (apparently) logical assignment processes have been designed to realise different result-objects?

NB The list cf tuple difference is 'legal' - at least in the sense that it is documented/expected behavior:-

Python Reference Manual: 7.2. Assignment statements
Assignment statements are used to (re)bind names to values and to modify attributes or items of mutable objects:
...
An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.
...
A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty).
https://docs.python.org/3/reference/simple_stmts.html#assignment-statements

Python Reference Manual: 6.3.4. Calls
A call calls a callable object (e.g., a function) with a possibly empty series of arguments:
...
If there are more positional arguments than there are formal parameter slots, a TypeError exception is raised, unless a formal parameter using the syntax *identifier is present; in this case, that formal parameter receives a tuple containing the excess positional arguments (or an empty tuple if there were no excess positional arguments).
https://docs.python.org/dev/reference/expressions.html#calls

--
Regards,
=dn
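For anyone who wants to reproduce the dichotomy in one sitting, here is a minimal runnable summary of the two documented behaviours quoted above; it adds nothing beyond what the excerpts state, and the names are only illustrative:

# Starred *parameter* in a call: the excess positional arguments arrive as a tuple.
def f(a, *b, c=0):
    return a, b, c

a, b, c = f(1, 'two', 3, c='four')
assert b == ('two', 3) and isinstance(b, tuple)

# Starred *target* in an assignment: the leftover items are collected into a list.
a, *b, c = 1, 'two', 3, 'four'
assert b == ['two', 3] and isinstance(b, list)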
Re: Questioning the effects of multiple assignment
> Can you explain why these two (apparently) logical assignment processes
> have been designed to realise different result-objects?

The reason is because of the conventions chosen in PEP 3132, which implemented the feature in the first place. It was considered to return a tuple for the consistency w/ *args that you initially expected, but the author offered the following reasoning:

> Make the starred target a tuple instead of a list. This would be consistent with a function's *args, but make further processing of the result harder.

So, it was essentially a practicality > purity situation, where it was considered to be more useful to be able to easily transform the result rather than being consistent with *args. If it resulted in a tuple, it would be immutable; this IMO makes sense for *args, but not necessarily for * unpacking in assignments. The syntax is highly similar, but they are used for rather different purposes. That being said, I can certainly understand how the behavior is surprising at first.

There's a bit more detail in the mailing list discussion that started the PEP, see https://mail.python.org/pipermail/python-3000/2007-May/007300.html.
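To make the "further processing" point concrete, here is a small sketch of the kind of in-place manipulation a list target allows, and that the tuple produced by *args does not; the function name and data are invented for illustration:

def middle_scores(first, *rest):
    # *rest arrives as a tuple: immutable, so in-place edits need a conversion first.
    editable = list(rest)
    editable.sort()
    return editable

# A starred assignment target is already a list, so it can be processed directly.
first, *middle, last = 5, 3, 9, 1, 7
middle.sort()                        # fine: middle is the list [3, 9, 1] -> [1, 3, 9]
print(middle_scores(5, 3, 9, 1))     # [1, 3, 9]
print(middle)                        # [1, 3, 9]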
Re: Bulletproof json.dump?
On 2020-07-06, Adam Funk wrote:
> On 2020-07-06, Chris Angelico wrote:
>> On Mon, Jul 6, 2020 at 10:11 PM Jon Ribbens via Python-list wrote:
>>> While I agree entirely with your point, there is however perhaps room
>>> for a bit more helpfulness from the json module. There is no sensible
>>> reason I can think of that it refuses to serialize sets, for example.
>>
>> Sets don't exist in JSON. I think that's a sensible reason.
>
> I don't agree. Tuples & lists don't exist separately in JSON, but
> both are serializable (to the same thing). Non-string keys aren't
> allowed in JSON, but it silently converts numbers to strings instead
> of barfing. Typically, I've been using sets to deduplicate values as
> I go along, & having to walk through the whole object changing them to
> lists before serialization strikes me as the kind of pointless labor
> that I expect when I'm using Java. ;-)

Here's another "I'd expect to have to deal with this sort of thing in Java" example I just ran into:

>>> r = requests.head(url, allow_redirects=True)
>>> print(json.dumps(r.headers, indent=2))
...
TypeError: Object of type CaseInsensitiveDict is not JSON serializable
>>> print(json.dumps(dict(r.headers), indent=2))
{
  "Content-Type": "text/html; charset=utf-8",
  "Server": "openresty",
  ...
}

--
I'm after rebellion --- I'll settle for lies.
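For the specific complaint about sets, the json module already provides an opt-in escape hatch: the default= callable is invoked for any object the encoder cannot handle. A minimal sketch of that approach; the helper name and sample data are mine, not from the thread:

import json

def fallback(obj):
    # Illustrative only: turn sets into sorted lists; everything else still fails loudly.
    if isinstance(obj, (set, frozenset)):
        return sorted(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

data = {"ids": {3, 1, 2}, "name": "example"}
print(json.dumps(data, default=fallback))   # {"ids": [1, 2, 3], "name": "example"}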
Access last element after iteration
Hi all

After iterating over a sequence, the final element is still accessible. In this case, the variable 'i' still references the integer 4.

Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> for i in range(5):
...     print(i)
...
0
1
2
3
4
>>> print(i)
4
>>>

Is this guaranteed in Python, or should it not be relied on?

If the latter, and you wanted to do something additional using the last element, I assume that this would be the way to do it -

>>> for i in range(5):
...     print(i)
...     j = i
...
0
1
2
3
4
>>> print(j)
4
>>>

Alternatively, this also works, but is this one guaranteed?

>>> for i in range(5):
...     print(i)
... else:
...     print()
...     print(i)
...
0
1
2
3
4

4
>>>

Frank Millman
Re: Access last element after iteration
On 2020-07-07, Frank Millman wrote:
> After iterating over a sequence, the final element is still accessible.
> In this case, the variable 'i' still references the integer 4.
...
> Is this guaranteed in Python, or should it not be relied on?

It is guaranteed, *except* if the sequence is empty and therefore the loop never executes at all, the variable will not have been assigned to and therefore may not exist.
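A quick illustration of that caveat, assuming a fresh interpreter session so the loop variable is genuinely unbound beforehand:

for i in range(0):      # empty sequence: the body never runs
    pass

print(i)                # NameError: name 'i' is not defined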
Re: Access last element after iteration
On Tue, Jul 7, 2020 at 10:28 PM Frank Millman wrote:
>
> Hi all
>
> After iterating over a sequence, the final element is still accessible.
> In this case, the variable 'i' still references the integer 4.
>

Yes, it's guaranteed. It isn't often useful; but the variant where there's a "break" in the loop most certainly is. If you hit the break, the iteration variable will still have whatever it had at the end.

This is a great use of 'else' (arguably the primary use of it). You do something like:

for thing in iterable:
    if want(thing):
        break
else:
    thing = None

If the iterable is empty, you go to the else. If you don't find the thing you want, you go to the else. But if you find it and break, thing has the thing you wanted.

ChrisA
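A runnable rendering of that search pattern; the function, predicate, and data are invented purely for illustration:

def first_even(numbers):
    for thing in numbers:
        if thing % 2 == 0:
            break               # found it: 'thing' keeps this value after the loop
    else:
        thing = None            # empty input, or nothing matched
    return thing

print(first_even([3, 7, 8, 5]))   # 8
print(first_even([3, 7, 5]))      # None
print(first_even([]))             # None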
Re: Bulletproof json.dump?
On Mon, Jul 6, 2020 at 6:37 AM Adam Funk wrote:
> Is there a "bulletproof" version of json.dump somewhere that will
> convert bytes to str, any other iterables to list, etc., so you can
> just get your data into a file & keep working?

Is the data only being read by python programs? If so, consider using pickle: https://docs.python.org/3/library/pickle.html
Unlike json dumping, the goal of pickle is to represent objects as exactly as possible and *not* to be interoperable with other languages.

If you're using json to pass data between python and some other language, you don't want to silently convert bytes to strings. If you have a bytestring of utf-8 data, you want to utf-8 decode it before passing it to json.dumps. Likewise, if you have latin-1 data, you want to latin-1 decode it. There is no universal and correct bytes-to-string conversion.

On Mon, Jul 6, 2020 at 9:45 AM Chris Angelico wrote:
> Maybe what we need is to fork out the default JSON encoder into two,
> or have a "strict=True" or "strict=False" flag. In non-strict mode,
> round-tripping is not guaranteed, and various types will be folded to
> each other - mainly, many built-in and stdlib types will be
> represented in strings. In strict mode, compliance with the RFC is
> ensured (so ValueError will be raised on inf/nan), and everything
> should round-trip safely.

Wouldn't it be reasonable to represent this as an encoder which is provided by `json`? i.e.

    from json import dumps, UnsafeJSONEncoder
    ...
    json.dumps(foo, cls=UnsafeJSONEncoder)

Emphasizing the "Unsafe" part of this and introducing people to the idea of setting an encoder also seems nice.

On Mon, Jul 6, 2020 at 9:12 AM Chris Angelico wrote:
> On Mon, Jul 6, 2020 at 11:06 PM Jon Ribbens via Python-list wrote:
> > The 'json' module already fails to provide round-trip functionality:
> >
> > >>> for data in ({True: 1}, {1: 2}, (1, 2)):
> > ...     if json.loads(json.dumps(data)) != data:
> > ...         print('oops', data, json.loads(json.dumps(data)))
> > ...
> > oops {True: 1} {'true': 1}
> > oops {1: 2} {'1': 2}
> > oops (1, 2) [1, 2]
>
> There's a fundamental limitation of JSON in that it requires string
> keys, so this is an obvious transformation. I suppose you could call
> that one a bug too, but it's very useful and not too dangerous. (And
> then there's the tuple-to-list transformation, which I think probably
> shouldn't happen, although I don't think that's likely to cause issues
> either.)

Ideally, all of these bits of support for non-JSON types should be opt-in, not opt-out. But it's not worth making a breaking change to the stdlib over this.

Especially for new programmers, the notion that deserialize(serialize(x)) != x just seems like a recipe for subtle bugs. You're never guaranteed that the deserialized object will match the original, but shouldn't one of the goals of a de/serialization library be to get it as close as is reasonable?

I've seen people do things which boil down to

    json.loads(x)["some_id"] == UUID(...)

plenty of times. It's obviously wrong and the fix is easy, but isn't making the default json encoder less strict just encouraging this type of bug? Comparing JSON data against non-JSON types is part of the same category of errors: conflating JSON with dictionaries. It's very easy for people to make this mistake, especially since JSON syntax is a subset of python dict syntax, so I don't think `json.dumps` should be encouraging it.
On Tue, Jul 7, 2020 at 6:52 AM Adam Funk wrote:
> Here's another "I'd expect to have to deal with this sort of thing in
> Java" example I just ran into:
>
> >>> r = requests.head(url, allow_redirects=True)
> >>> print(json.dumps(r.headers, indent=2))
> ...
> TypeError: Object of type CaseInsensitiveDict is not JSON serializable
> >>> print(json.dumps(dict(r.headers), indent=2))
> {
>   "Content-Type": "text/html; charset=utf-8",
>   "Server": "openresty",
>   ...
> }

Why should the JSON encoder know about an arbitrary dict-like type? It might implement Mapping, but there's no way for json.dumps to know that in the general case (because not everything which implements Mapping actually inherits from the Mapping ABC). Converting it to a type which json.dumps understands is a reasonable constraint.

Also, wouldn't it be fair, if your object is "case insensitive", to serialize it as { "CONTENT-TYPE": ... } or { "content-type": ... } or ... ?

`r.headers["content-type"]` presumably gets a hit. `json.loads(json.dumps(dict(r.headers)))["content-type"]` will get a KeyError.

This seems very much out of scope for the json package because it's not clear what it's supposed to do with this type. Libraries should ask users to specify what they mean and not make potentially harmful assumptions.

Best,
-Stephen
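For readers who want the opt-in behaviour discussed above today, the stdlib already supports it through JSONEncoder subclasses passed via cls=. The class below is only a sketch of the idea; it is not the UnsafeJSONEncoder proposed in the thread (which does not exist in the json module), and the conversions it chooses are illustrative:

import json
from collections.abc import Mapping

class LenientJSONEncoder(json.JSONEncoder):
    """Opt-in conversions: sets become sorted lists, Mapping-like objects become dicts."""
    def default(self, o):
        if isinstance(o, (set, frozenset)):
            return sorted(o)
        if isinstance(o, Mapping):
            return dict(o)
        return super().default(o)    # anything else raises the usual TypeError

data = {"tags": {"b", "a"}}
print(json.dumps(data, cls=LenientJSONEncoder))   # {"tags": ["a", "b"]}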
Re: Bulletproof json.dump?
Try jsonlight.dumps, it'll just work.
Re: Access last element after iteration
On 8/07/20 12:45 AM, Chris Angelico wrote:
> Yes, it's guaranteed. It isn't often useful; but the variant where
> there's a "break" in the loop most certainly is. If you hit the break,
> the iteration variable will still have whatever it had at the end.
>
> This is a great use of 'else' (arguably the primary use of it). You do
> something like:
>
> for thing in iterable:
>     if want(thing):
>         break
> else:
>     thing = None
>
> If the iterable is empty, you go to the else. If you don't find the
> thing you want, you go to the else. But if you find it and break, thing
> has the thing you wanted.

It wasn't clear if the OP was interested in the value of the pertinent index or that of the indexed element from the iterable/sequence. However, the techniques apply - adding enumerate() if the source is a collection (for example).

Am impressed to see a constructive use of the else: clause! It is a most pythonic construction used in that mode.

OTOH/IMHO, one of the harder philosophical lessons to learn is that (in Python at least) an exception is not necessarily an error - as in 'catastrophe'! Accordingly, there are many examples where 'success' in such a search-pattern might be terminated with raise. The 'calling routine'/outer-block would then be able to utilise a try...except...else...finally structure - arguably both 'richer' and better-understood by 'the average pythonista' than for...else/while...else (per previous discussions 'here').

Your thoughts?

Apologies to OP, if am 'hi-jacking' original post.

--
Regards
=dn
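A sketch of the raise-on-success pattern being described, so it can be compared with the for...else version above; the exception class, predicate, and data are invented for illustration:

class Found(Exception):
    """Signals that the search succeeded; not an error."""
    def __init__(self, value):
        self.value = value

def search(numbers):
    for thing in numbers:
        if thing % 2 == 0:
            raise Found(thing)

try:
    search([3, 7, 8, 5])
except Found as hit:
    print("found:", hit.value)   # found: 8
else:
    print("nothing matched")
finally:
    print("search finished")     # runs either way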
Re: Questioning the effects of multiple assignment
On 7/07/20 7:21 PM, Mike Dewhirst wrote:
> Original message

For comparison, here's the original form:-

>>> def f( a, *b, c=0 ):
...     print( a, type( a ) )
...     print( c, type( c ) )
...     print( b )
...
>>> f( 1, 'two', 3, 'four' )
1 <class 'int'>
0 <class 'int'>
('two', 3, 'four')

> Shouldn't that def be ...
> >>> def f(a, c=0, *b):
> ???

It might appear that way, but in v3.7*, they are not:-

>>> def f(a, c=0, *b):
...     print( a, type( a ) )
...     print( c, type( c ) )
...     print( b, type( b ) )
...
>>> f( 1, 'two', 3, 'four' )
1 <class 'int'>
two <class 'str'>
(3, 'four') <class 'tuple'>

and even worse when we attempt to specify "c" as a keyword argument:

>>> f( 1, 'two', 3, c='four' )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: f() got multiple values for argument 'c'

>>> f( 1, c='four', 'two', 3 )
  File "<stdin>", line 1
SyntaxError: positional argument follows keyword argument

>>> f( 1, 'four', 'two', 3 )
1 <class 'int'>
four <class 'str'>
('two', 3) <class 'tuple'>

Please remember also, that the idea of interposing a keyword-argument was by way of illustration. The original spec was to expand the API to accept either a series of scalars or equivalent values as a tuple, without (also) changing the API to require the new/tuple option be implemented as a keyword-argument.

However, to answer the question: the method of assigning the arguments' values to parameters is to start from the left, but upon reaching a *identifier, to re-start by allocating from the right. This will leave zero or more values 'in the middle', to be tuple-ified as the *identifier. Otherwise, if the allocation were l-to-r and "greedy", there would never be any values assigned to 'later' parameters!

For your reading pleasure:-

Python Reference Manual: 6.3.4. Calls
A call calls a callable object (e.g., a function) with a possibly empty series of arguments:
...
If there are more positional arguments than there are formal parameter slots, a TypeError exception is raised, unless a formal parameter using the syntax *identifier is present; in this case, that formal parameter receives a tuple containing the excess positional arguments (or an empty tuple if there were no excess positional arguments).
https://docs.python.org/dev/reference/expressions.html#calls

* I was remiss in not stating that this project is (?still) running with Python v3.7. Apologies!
(it was established some time ago, and evidently the client has not seen fit to even consider upgrading as part of any sprint, to-date. Note to self...)

So, please be aware of:
https://docs.python.org/3/whatsnew/3.8.html#positional-only-parameters
https://www.python.org/dev/peps/pep-0570/

If you are running a more recent release, perhaps you might like to re-run the snippets, experiment, and document any contrary findings?

--
Regards
=dn
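Since the message points at PEP 570, here is a brief sketch of how the same signature could look on Python 3.8+, combining a positional-only parameter, an excess-positional tuple, and a keyword-only parameter; this is an illustration, not code from the thread:

# Requires Python 3.8+ for the '/' positional-only marker (PEP 570).
def f(a, /, *b, c=0):
    # 'a' is positional-only, 'b' collects the excess positionals as a tuple,
    # and 'c' can only ever be supplied as a keyword argument.
    print(a, type(a))
    print(c, type(c))
    print(b, type(b))

f(1, 'two', 3, c='four')
# 1 <class 'int'>
# four <class 'str'>
# ('two', 3) <class 'tuple'>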
Re: Questioning the effects of multiple assignment
On 7/07/20 7:44 PM, Kyle Stanley wrote:
> The reason is because of the conventions chosen in PEP 3132, which
> implemented the feature in the first place. ...
> So, it was essentially a practicality > purity situation, where it was
> considered to be more useful to be able to easily transform the result
> rather than being consistent with *args.
>
> There's a bit more detail in the mailing list discussion that started
> the PEP, see
> https://mail.python.org/pipermail/python-3000/2007-May/007300.html.

Thank you - I had failed to find that discussion, but it and the explanation above make perfect sense.

You can color me hobgoblin for expecting ?'consistency'! - and whilst I'm liberally (mis-)quoting: I'm not going to argue with the better minds of the giants upon whose shoulders I stand...

Continuing on, but instead of considering the handling of arguments/parameters to be 'authoritative' (which we were, from the perspective of 'consistency'); perhaps consider the assignment decision as "authoritative" and consider if the calling-behavior should be made consistent:-

One of the features of Python's sub-routines which I enjoy is summarised in two ways:

1 the many books on 'programming style' (for other languages) which talk about a function's signature needing to separate 'input-parameters' from 'output-parameters' to enhance readability.
- in Python we have parameters (let's say: only for input), neatly separated from 'output' values which don't get a mention until the return statement
(see also "typing" - although the "return" might be considerably separated from the "->").
Python: 1, those others: nil!

2 a perennial question is: "are Python's function-arguments passed by-reference, passed by-value, or what?"
- in Python we pass by assignment and 'the rules' vary according to mutability.
(in themselves a frequent 'gotcha' for newcomers, but once understood, make perfect sense and realise powerful opportunities)
Python: 2, those others: nil - still! (IMHO)

A matter of style which I like to follow [is it TDD's influence? - or does it actually come from reading about DBC (Design by Contract*)?] is the injunction that one *not* vary the value of a parameter inside a method/function.
(useful in 'open-box testing' to check both the API and that input+process agrees with the expected and actual output, but irrelevant for 'closed-box testing')
This also has the effect of side-stepping any unintended issues caused by changing the values of mutable parameters!
(although sometimes it's an equally-good idea to do so!)

Accordingly, binding argument-values to mutable parameters (instead of an immutable tuple) might cause problems/"side-effects", were those parameters' values changed within the function!

Making sense to you?

--
Regards
=dn
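A minimal sketch of the mutable-parameter side effect being described, using invented function names and data; the point is only that mutating a parameter in place reaches back out to the caller, while working on a copy (or an immutable value) does not:

def normalise(values):
    # Mutating the parameter also mutates the caller's object: a side effect.
    values.sort()
    return values

def normalised(values):
    # Working on a copy leaves the caller's object untouched.
    return sorted(values)

scores = [3, 1, 2]
normalise(scores)
print(scores)        # [1, 2, 3] - the caller's list was changed in place

scores = [3, 1, 2]
normalised(scores)
print(scores)        # [3, 1, 2] - unchanged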
Installing Basemap in Jupyter Notebook
Hi

I have trouble installing basemap in my local Jupyter Notebook. I used the code below, but it did not work.

!conda install -c conda-forge basemap==1.3.0 matplotlib==2.2.2 -y

How can I install basemap in my Jupyter Notebook?

Thanks

Best Regards
Mich
Re: Questioning the effects of multiple assignment
> A matter of style, which I like to follow [is it TDD's influence? - or
> does it actually come-from reading about DBC (Design by Contract*)?] is
> the injunction that one *not* vary the value of a parameter inside a
> method/function.
> (useful in 'open-box testing' to check both the API and that
> input+process agrees with the expected and actual output, but irrelevant
> for 'closed-box testing')
> This also has the effect of side-stepping any unintended issues caused
> by changing the values of mutable parameters!
> (although sometimes it's an equally-good idea to do-so!)
>
> Accordingly, binding argument-values to mutable parameters (instead of
> an immutable tuple) might cause problems/"side-effects", were those
> parameters' values changed within the function!
> Making sense to you?

I think I can see where you're going with this, and it makes me wonder if it might be a reasonable idea to have an explicit syntax to be able to change the behavior to store those values in a tuple instead of a list. The programming style of making use of immutability as much as possible to avoid side effects is quite common, and becoming increasingly so from what I've seen of recent programming trends.

If something along those lines is something you'd be interested in and have some real-world examples of where it could specifically be useful (I currently don't), it might be worth pursuing further on python-ideas. Due to the backwards compatibility issues, I don't think we can realistically make the default change from a list to a tuple, but that doesn't mean having a means to explicitly specify that you want the immutability is unreasonable.

You'd also likely have to argue against why being able to do it during assignment is advantageous compared to simply doing it immediately after the initial unpacking assignment. E.g.:

>>> a, *b, c = 1, 'two', 3, 'four'
>>> b = tuple(b)

(I'd like to particularly emphasize the importance of having some compelling real-world examples in the proposal if you decide to pursue this, as otherwise it would likely be dismissed as YAGNI.)