Re: Error in or
On Thu, Jun 11, 2015 at 10:39 AM, subhabrata.bane...@gmail.com wrote: On Thursday, June 11, 2015 at 9:20:59 PM UTC+5:30, Ian wrote: On Thu, Jun 11, 2015 at 9:40 AM: if I write this it is working fine, but if I write

if ("AND" in inp1) or ("OR" in inp1) or ("NOT" in inp1) or ( in inp1) or ( in inp1) or ("MAYBE" in inp1) or ("(" in inp1) or ("*" in inp1) or (''' ''' in inp1):

the portion of (''' ''' in inp1) is not working.

Not working how? I copy-pasted the line and it appears to work fine.

Dear Sir, Thank you for your kind reply. Nice to know your reply, but I am trying to send you my experiment, please see my results:

def input1(n):
    inp1 = raw_input("PRINT YOUR QUERY:")
    if ("AND" in inp1) or ("OR" in inp1) or ("NOT" in inp1) or ( in inp1) or ( in inp1) or ("MAYBE" in inp1) or ("(" in inp1) or ("*" in inp1) or (''' ''' in inp1):
        print "FINE"

>>> input1(1)
PRINT YOUR QUERY:Java
>>> input1(1)
PRINT YOUR QUERY:Obama in London
>>> input1(1)
PRINT YOUR QUERY:Obama AND Bush
FINE
>>> input1(1)
PRINT YOUR QUERY:Obama OR Bush
FINE

you may get better my problem.

The substring that you're looking for has spaces around the symbol. The example inputs that you gave don't have spaces around the symbols, so they don't contain the substring. The triple quotes are also unnecessary, though harmless -- it's not a multiline string, and there are no ' symbols to escape in the string. Try replacing the substring with just this: ''.
--
https://mail.python.org/mailman/listinfo/python-list
Re: How to find number of whole weeks between dates?
On Wed, Jun 10, 2015 at 2:11 PM, Sebastian M Cheung via Python-list python-list@python.org wrote: On Wednesday, June 10, 2015 at 6:06:09 PM UTC+1, Sebastian M Cheung wrote: Say in 2014 April to May whole weeks would be 7th, 14th 28th April and May would be 5th, 12th and 19th. So expecting 7 whole weeks in total What I mean is given two dates I want to find WHOLE weeks, so if given the 2014 calendar and function has two inputs (4th and 5th month) then 7th, 14th, 21st and 28th from April with 28th April week carrying into May, and then 5th, 12th and 19th May to give total of 7 whole weeks, because 26th May is not a whole week and will not be counted. So the two dates being passed are actually months? The calendar module already suggested should be useful for this. -- https://mail.python.org/mailman/listinfo/python-list
Re: How to find number of whole weeks between dates?
On Wed, Jun 10, 2015 at 11:05 AM, Sebastian M Cheung via Python-list python-list@python.org wrote: Say in 2014 April to May whole weeks would be 7th, 14th 28th April and May would be 5th, 12th and 19th. So expecting 7 whole weeks in total

>>> from datetime import date
>>> d1 = date(2014, 4, 7)
>>> d2 = date(2014, 5, 19)
>>> d2 - d1
datetime.timedelta(42)
>>> (d2 - d1).days
42
>>> (d2 - d1).days // 7
6
--
https://mail.python.org/mailman/listinfo/python-list
Re: Testing random
On Wed, Jun 10, 2015 at 11:03 AM, Thomas 'PointedEars' Lahn pointede...@web.de wrote: Jussi Piitulainen wrote: Thomas 'PointedEars' Lahn writes: Jussi Piitulainen wrote: Thomas 'PointedEars' Lahn writes: 8 3 6 3 1 2 6 8 2 1 6. There are more than four hundred thousand ways to get those numbers in some order. (11! / 2! / 2! / 2! / 3! / 2! = 415800) Fallacy. Order is irrelevant here. You need to consider every sequence that leads to the observed counts. No, you need _not_, because – I repeat – the probability of getting a sequence of length n from a set of 9 numbers, where the probability of picking a number is evenly distributed, is (1∕9)ⁿ [(1/9)^n, or 1/9 to the nth, for those who do not see it because of lack of Unicode support at their system]. *Always.* *No matter* which numbers are in it. *No matter* in which order they are. AISB, order is *irrelevant* here. *Completely.*

Order is relevant because, for instance, there are n differently ordered sequences that contain n-1 1s and one 2, while there is only one sequence that contains n 1s. While each of those individual sequences are indeed equiprobable, the overall probability of getting a sequence that contains n-1 1s and one 2 is n times the probability of getting a sequence that contains n 1s. The context of this whole thread is about the probability of getting a sequence where every number occurs at least once. The order that they occur in doesn't matter, but the number of possible permutations does, because every one of those permutations is a distinct sequence contributing an equal amount to the total overall probability. The probability of 123456789 and 111111111 are equal. The probability of a sequence containing all nine numbers and a sequence containing only 1s are *not* equal.
--
https://mail.python.org/mailman/listinfo/python-list
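The aggregation argument above is easy to check numerically. A rough sketch (the function name is mine), computing by inclusion-exclusion the probability that n uniform draws from 9 numbers hit every number at least once, compared against the (1/9)^n probability of any one fixed sequence:

```python
from math import comb

def prob_all_seen(n, k=9):
    """Probability that n uniform draws from k symbols contain every
    symbol at least once (inclusion-exclusion over the missed symbols)."""
    return sum((-1) ** j * comb(k, j) * ((k - j) / k) ** n
               for j in range(k + 1))

# Any one fixed sequence of 11 draws has probability (1/9)**11, roughly
# 3.2e-11 -- but the event "all 9 numbers appear somewhere in 11 draws"
# aggregates a huge number of such equiprobable sequences:
single = (1 / 9) ** 11
aggregate = prob_all_seen(11)
print(single, aggregate)
```

The probability of each individual sequence is indeed always (1/9)^n, which is the point one side keeps repeating; the permutation count only matters when summing over all sequences in the event, which is the point the other side is making.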
Re: How to find number of whole weeks between dates?
On Wed, Jun 10, 2015 at 8:01 PM, Sebastian M Cheung via Python-list python-list@python.org wrote: yes just whole weeks given any two months, I did look into the calendar module but couldn't find specifically what I need.

>>> cal.monthdays2calendar(2014, 4) + cal.monthdays2calendar(2014, 5)
[[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)],
 [(7, 0), (8, 1), (9, 2), (10, 3), (11, 4), (12, 5), (13, 6)],
 [(14, 0), (15, 1), (16, 2), (17, 3), (18, 4), (19, 5), (20, 6)],
 [(21, 0), (22, 1), (23, 2), (24, 3), (25, 4), (26, 5), (27, 6)],
 [(28, 0), (29, 1), (30, 2), (0, 3), (0, 4), (0, 5), (0, 6)],
 [(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6)],
 [(5, 0), (6, 1), (7, 2), (8, 3), (9, 4), (10, 5), (11, 6)],
 [(12, 0), (13, 1), (14, 2), (15, 3), (16, 4), (17, 5), (18, 6)],
 [(19, 0), (20, 1), (21, 2), (22, 3), (23, 4), (24, 5), (25, 6)],
 [(26, 0), (27, 1), (28, 2), (29, 3), (30, 4), (31, 5), (0, 6)]]

You just need to:

1) Trim the first and last weeks off since they contain invalid dates.
2) Merge the overlapping last week of April and first week of May.
3) Count the resulting number of weeks in the list.

Alternatively, the dateutil.rrule module could probably be used to do this fairly easily, but it's a third-party module and not part of the standard library. https://labix.org/python-dateutil
--
https://mail.python.org/mailman/listinfo/python-list
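The three steps can be sketched directly on top of monthdays2calendar; this is a rough illustration (the function name is mine, and it is only lightly generalized beyond the April–May example):

```python
import calendar

def whole_weeks(year, first_month, last_month):
    """Count whole Mon-Sun weeks across first_month..last_month using the
    trim/merge/count recipe: drop partial leading weeks, count full weeks,
    and count a trailing partial week only when the next month (the merge
    in step 2) completes it."""
    cal = calendar.Calendar()  # default firstweekday=0, i.e. Monday
    count = 0
    for month in range(first_month, last_month + 1):
        for week in cal.monthdays2calendar(year, month):
            days = [day for day, weekday in week]
            if 0 not in days:
                count += 1    # a week lying wholly inside this month
            elif days[0] != 0 and month < last_month:
                count += 1    # spills into, and is completed by, next month
    return count

print(whole_weeks(2014, 4, 5))   # 7, matching the April-May example
```

A week that begins with a placeholder zero is the leading partial week (or the already-counted continuation of the previous month's last week), so skipping it both trims and merges.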
Re: How to find number of whole weeks between dates?
On Wed, Jun 10, 2015 at 9:19 PM, Michael Torrie torr...@gmail.com wrote: On 06/10/2015 02:11 PM, Sebastian M Cheung via Python-list wrote: On Wednesday, June 10, 2015 at 6:06:09 PM UTC+1, Sebastian M Cheung wrote: Say in 2014 April to May whole weeks would be 7th, 14th 28th April and May would be 5th, 12th and 19th. So expecting 7 whole weeks in total What I mean is given two dates I want to find WHOLE weeks, so if given the 2014 calendar and function has two inputs (4th and 5th month) then 7th, 14th, 21st and 28th from April with 28th April week carrying into May, and then 5th, 12th and 19th May to give total of 7 whole weeks, because 26th May is not a whole week and will not be counted. Hope that's clear. I think Joel had the right idea. First calculate the rough number of weeks by taking the number of days between the dates and dividing by seven. Then check to see what the start date's day of week is, and adjust the rough week count down by one if it's not the first day of the week. I'm not sure if you have to check the end date's day of week or not. I kind of think checking the first one only is sufficient, but I could be wrong. You'll have to code it up and test it, which I assume you've been doing up to this point, even though you haven't shared any code.

I don't think the logic is quite right. Consider:

>>> cal = calendar.TextCalendar()
>>> print(cal.formatmonth(2014, 6))
     June 2014
Mo Tu We Th Fr Sa Su
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30

>>> date(2014, 7, 1) - date(2014, 6, 1)
datetime.timedelta(30)
>>> _.days // 7 - 1
3
--
https://mail.python.org/mailman/listinfo/python-list
Re: Testing random
On Sun, Jun 7, 2015 at 10:44 AM, Chris Angelico ros...@gmail.com wrote: On Mon, Jun 8, 2015 at 2:36 AM, Thomas 'PointedEars' Lahn pointede...@web.de wrote: The greater the multiplier, the lower the chance that any element will have no hits. Wrong. [ex falso quodlibet] Huh. Do you want to explain how, mathematically, I am wrong, or do you want to join the RUE in my ignore list? My best speculation is that he's either objecting to the generality of your statement (it's false if the probability of some element occurring is zero or eventually degrades to zero), or misreading the word multiplier to the conclusion that the value of each element is being multiplied rather than the number of trials. Or trolling; I suppose that's always an option too. -- https://mail.python.org/mailman/listinfo/python-list
Re: Get html DOM tree by only basic builtin moudles
On Fri, Jun 5, 2015 at 12:10 PM, Wesley nisp...@gmail.com wrote: Hi Laura, Sure, I got special requirement that just parse html file into DOM tree, by only general basic modules, and based on my DOM tree structure, draft an bitmap. So, could you give me an direction how to get the DOM tree? Currently, I just think out to use something like stack, I mean, maybe read the file line by line, adding to a stack data structure(list for example), and, then, got the parent/child relation .etc I don't know if what I said is easy to achieve, I am just trying. Any better suggestions will be great appreciated.

If you want to recreate the same DOM structure that would be created by a browser, the standardized algorithm to do so is very complicated, but you can find it at http://www.w3.org/TR/2011/WD-html5-20110113/parsing.html. If you're not necessarily seeking perfect fidelity, I would encourage you to try to find some way to incorporate beautifulsoup into your project. It likely won't produce the same structure that a real browser would, but it should do well enough to scrape from even badly malformed html. I recommend against using an XML parser, because HTML isn't XML, and such a parser may choke even on perfectly valid HTML such as this:

<!DOCTYPE html>
<html>
<head><title>Document</title></head>
<body>
First line
<br>
Second line
</body>
</html>
--
https://mail.python.org/mailman/listinfo/python-list
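For the "builtin modules only" constraint, the stdlib html.parser module can drive a simple stack-based tree builder much like the one described above. A minimal sketch (the node structure is my own invention, and it makes no attempt at the HTML5 spec's error-recovery rules):

```python
from html.parser import HTMLParser

class TreeBuilder(HTMLParser):
    """Build a crude DOM-like tree of {"tag", "attrs", "children"} dicts."""
    # Tags that never get a closing tag, so they must not be pushed.
    VOID = {"br", "img", "hr", "meta", "link", "input"}

    def __init__(self):
        super().__init__()
        self.root = {"tag": "#document", "attrs": [], "children": []}
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "attrs": attrs, "children": []}
        self.stack[-1]["children"].append(node)
        if tag not in self.VOID:
            self.stack.append(node)

    def handle_endtag(self, tag):
        # Naive recovery: pop back to the matching open tag, if any.
        for i in range(len(self.stack) - 1, 0, -1):
            if self.stack[i]["tag"] == tag:
                del self.stack[i:]
                break

    def handle_data(self, data):
        if data.strip():
            self.stack[-1]["children"].append(data.strip())

builder = TreeBuilder()
builder.feed("<html><body>First line<br>Second line</body></html>")
tree = builder.root
```

This handles the valid-but-not-XML `<br>` example above without choking, which is exactly where an XML parser would fail.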
Re: How to inverse a particle emitter
On Thu, Jun 4, 2015 at 5:47 PM, stephenpprane...@gmail.com wrote: On Thursday, June 4, 2015 at 4:15:29 PM UTC-7, stephenp...@gmail.com wrote: hey, i really need help, im a straight up beginner in scripting and i need to figure out how to make an inverted particle emitter using python in maya unfortunitly i have to make this using python not mel I'm sorry if I come across as dismissive, but as I stated before this is not the place to be looking for expertise on Maya scripting, regardless of what language you might happen to be using. A little bit of googling turns up several useful-looking tutorials: http://www.maya-python.com/ http://zurbrigg.com/maya-python/category/beginning-python-for-maya https://www.youtube.com/watch?v=eXFGeZZbMzQ as well as a web forum which looks to be quite active: https://groups.google.com/forum/#!forum/python_inside_maya -- https://mail.python.org/mailman/listinfo/python-list
Re: How to inverse a particle emitter
On Thu, Jun 4, 2015 at 5:15 PM, stephenpprane...@gmail.com wrote: hey, i really need help, im a straight up beginner in scripting and i need to figure out how to make an inverted particle emitter using python in maya No idea. This sounds more like a Maya question than a Python question. Maybe there is a Maya forum that would be a better place to direct this question. -- https://mail.python.org/mailman/listinfo/python-list
Re: Can Python function return multiple data?
On Wed, Jun 3, 2015 at 3:56 PM, Mark Lawrence breamore...@yahoo.co.uk wrote: Now does Python pass by value or by reference? Happily sits back and waits for 10**6 emails to arrive as this is discussed for the 10**6th time. Troll. -- https://mail.python.org/mailman/listinfo/python-list
Re: How to access the low digits of a list
On Wed, Jun 3, 2015 at 3:08 PM, Rustom Mody rustompm...@gmail.com wrote: On Tuesday, June 2, 2015 at 7:50:58 PM UTC+5:30, Ian wrote: On Tue, Jun 2, 2015 at 6:35 AM, Rustom Mody wrote: For that matter even this works But I am not sure whats happening or that I like it

>>> [x[-2:] for x in lines]
['12', '42', '49', '56', '25', '36', '49', '64', '81', '00']

x[-2:] selects all items in the sequence with index i such that len(x) - 2 <= i < len(x). For a sequence of length 2 or less, that's the entire sequence. Thanks -- learn something So it means that indices can give IndexError; slices cannot? Seems fair enough put that way, but is visually counterintuitive

Yes. The rule I paraphrased above is stated at https://docs.python.org/3/library/stdtypes.html#common-sequence-operations -- scroll down to note 4. I don't know if there's anything that clearly states that sequence slicing can't raise IndexError, but it is at least implied by the above, and it is certainly true of all builtin sequence types.
--
https://mail.python.org/mailman/listinfo/python-list
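The asymmetry is quick to demonstrate: out-of-range slice bounds are clamped to the sequence, while an out-of-range plain index raises:

```python
s = [10, 20, 30]

# Out-of-range slice bounds are clamped rather than raising:
assert s[1:100] == [20, 30]
assert s[100:200] == []
assert s[-100:2] == [10, 20]

# A plain index past the end raises IndexError:
raised = False
try:
    s[100]
except IndexError:
    raised = True
assert raised
```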
Re: Everything is an object in python - object class and type class
On Wed, Jun 3, 2015 at 2:57 AM, Marko Rauhamaa ma...@pacujo.net wrote: Steven D'Aprano steve+comp.lang.pyt...@pearwood.info: On Wednesday 03 June 2015 08:33, Marko Rauhamaa wrote: In Python, classes are little more than constructor functions. [...] Classes give you an inheritance hierarchy. That's encapsulated in the constructor.

Not entirely true.

>>> class A:
...     def say_hi(self):
...         print("Hello!")
...
>>> class B:
...     def say_hi(self):
...         print("G'day!")
...
>>> obj = A()
>>> obj.say_hi()
Hello!
>>> obj.__class__ = B
>>> obj.say_hi()
G'day!
--
https://mail.python.org/mailman/listinfo/python-list
Re: Everything is an object in python - object class and type class
On Tue, Jun 2, 2015 at 12:10 PM, Ned Batchelder n...@nedbatchelder.com wrote: On Tuesday, June 2, 2015 at 1:59:37 PM UTC-4, BartC wrote: Javascript primitives include Number and String. What does Python allow to be done with its Number (int, etc) and String types that can't be done with their Javascript counterparts, that makes /them/ objects? They have methods (not many, but a few):

>>> i, f = 1000001, 2.5
>>> i.bit_length()
20
>>> i.to_bytes(6, "big")
b'\x00\x00\x00\x0fBA'
>>> f.as_integer_ratio()
(5, 2)
>>> f.hex()
'0x1.4p+1'

To add a wrinkle to this discussion, Javascript numbers also have methods:

js> (24).toExponential(3)
"2.400e+1"

I believe this is accomplished by implicitly boxing the number in the Number class when a method or property is accessed. This can be seen with:

js> (24).toSource()
"(new Number(24))"

Note that 24 and new Number(24) are not equivalent.

js> 24 === 24
true
js> 24 === new Number(24)
false
js> typeof(24)
"number"
js> typeof(new Number(24))
"object"

But this is a bit of an implementation detail. So what distinguishes Javascript primitives from objects? Steven listed identity as a third property of objects upthread; that seems applicable here.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Everything is an object in python - object class and type class
On Tue, Jun 2, 2015 at 3:47 PM, Mark Lawrence breamore...@yahoo.co.uk wrote: The classic response to Super Considered Harmful for those who may be interested is https://rhettinger.wordpress.com/2011/05/26/super-considered-super/ and recently https://www.youtube.com/watch?v=EiOglTERPEo I feel slightly cheated. In the video he promises to show how to do dependency injection using super, but in both of the examples given, every instance of super() could simply be replaced with self and the result would be the same. That's just taking advantage of the MRO, not super. -- https://mail.python.org/mailman/listinfo/python-list
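For the record, a case where super() cannot be replaced by a direct call, and where the dependency injection the talk promises actually shows up, is a cooperative diamond (a standard textbook illustration, not taken from the video):

```python
class Base:
    def tokens(self):
        return ["Base"]

class Left(Base):
    def tokens(self):
        # Under D below, the MRO routes this super() call to Right, a
        # class Left knows nothing about -- that's the injection.
        # Replacing it with Base.tokens(self) would skip Right entirely.
        return ["Left"] + super().tokens()

class Right(Base):
    def tokens(self):
        return ["Right"] + super().tokens()

class D(Left, Right):
    pass

print(D().tokens())     # ['Left', 'Right', 'Base']
print(Left().tokens())  # ['Left', 'Base']
```

In a linear single-inheritance example the substitution works, which is why the examples in the talk look interchangeable; the difference only appears once two branches share a base.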
Re: should self be changed?
On Tue, Jun 2, 2015 at 11:19 AM, Marko Rauhamaa ma...@pacujo.net wrote: Steven D'Aprano st...@pearwood.info: On Fri, 29 May 2015 12:00 pm, Steven D'Aprano wrote: [...] in a language where classes are themselves values, there is no reason why a class must be instantiated, particularly if you're only using a single instance of the class. Anyone ever come across a named design pattern that involves using classes directly without instantiating them? I'm basically looking for a less inelegant term for instanceless class -- not so much a singleton as a zeroton. C# has these, and calls them static classes. I guess Python has them, too, and calls them modules. Indeed. I find it amusing that C# has special syntax to work around what is ostensibly a design feature -- that all code must be contained in a class (or struct). But then, the same practice also exists in Java, where there is no specific syntax for it. -- https://mail.python.org/mailman/listinfo/python-list
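Incidentally, the nearest Python spelling of a C#-style static class is a class that is deliberately never instantiated, holding only class-level state and classmethods (a sketch of the pattern, not an endorsement over a plain module):

```python
class Counter:
    """Used purely as a namespace; no instance is ever created."""
    count = 0

    @classmethod
    def bump(cls):
        cls.count += 1
        return cls.count

Counter.bump()
Counter.bump()
print(Counter.count)   # 2
```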
Re: How to access the low digits of a list
On Tue, Jun 2, 2015 at 6:35 AM, Rustom Mody rustompm...@gmail.com wrote: For that matter even this works But I am not sure whats happening or that I like it

>>> [x[-2:] for x in lines]
['12', '42', '49', '56', '25', '36', '49', '64', '81', '00']

x[-2:] selects all items in the sequence with index i such that len(x) - 2 <= i < len(x). For a sequence of length 2 or less, that's the entire sequence.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Everything is an object in python - object class and type class
On Mon, Jun 1, 2015 at 4:59 PM, BartC b...@freeuk.com wrote: I'm developing a new language along the lines of Python, perhaps a brief description of how things are done there might help. Or just give a different perspective. Objects in this language are tagged: there's a code attached to each indicating what kind of data is stored. These codes are integers, or enumerations, for example:

Int    = 1
String = 2
List   = 3
Class  = 4

And the following are examples of object instances with their tags:

a is (Int, 42)
b is (String, "ABCXYZ")
c is (List, [10,20,30,40])
d is (Class, 2 or "String")

The last one might be tricky to grasp: it's really just a number, but one that represents a class or type (or tag). If printed, it could display "String" rather than just 2. (And when used to do something with the class, the 2 might be an index into a set of tables.) d is not /the/ class itself, but just a reference to it (this is pseudo-code not Python):

print (b)                 => ABCXYZ
print (typeof(b))         => String 2
print (d)                 => String 2
print (typeof(d))         => Class 4
print (typeof(typeof(d))) => Class 4

In my own language, the connection between class and type is hazy (I've only just added classes so it needs to be hazy until I understand them more). 'Type' includes everything, including all the built-in types such as the tags above, but also user-defined classes. In fact classes are the mechanism used to define new types. But with both designated by an integer within the same band (eg. built-in types 1 to 20, user-defined classes 21 and up), it is easier not to have a strong distinction ... at the moment. I haven't a root class yet that is the base of all the others. I don't think it's necessary internally to make things work. But it might be a useful concept in the language. Calling it 'object' however might give rise to confusion as 'object' is informally used to refer to instances. (I've never used OO but have picked up some of the jargon!) (This is almost certainly not how Python does things. Although the Python language doesn't go into details as to its implementation provided the behaviour is correct.)

It sounds quite similar to me. In CPython at least, every object has a direct pointer to its type rather than an array index (which is essentially just an offset from a pointer).
--
https://mail.python.org/mailman/listinfo/python-list
Re: lotto number generator
On Mon, Jun 1, 2015 at 11:13 AM, Ian Kelly ian.g.ke...@gmail.com wrote: On Mon, Jun 1, 2015 at 10:23 AM, gm notmym...@mail.not wrote: Hi. I am new to python so am still in learning phase. I was thinking to make one program that will print out all possible combinations of 10 pairs. I think this is a good way for something bigger :-). This is how this looks like:

1.)  1 2
2.)  1 2
3.)  1 2
4.)  1 2
5.)  1 2
6.)  1 2
7.)  1 2
8.)  1 2
9.)  1 2
10.) 1 2

So, i want to print out (in rows) each possible configuration but for all 10 pairs. example:

1 1 1 2 2 2 2 1 1 1
2 2 2 1 1 1 1 2 2 2
1 1 2 2 1 1 2 2 1 1

etc. What would be the best way to make something like this ? Maybe some tutorial ?

The Python tutorial is a good place to start: https://docs.python.org/3.4/tutorial/index.html For your particular problem you'll probably want to use the itertools.product function: https://docs.python.org/3.4/library/itertools.html#itertools.product I wouldn't recommend printing out every possible combination to the screen, though. That's over a million rows! Start with fewer pairs (4 is probably good), or write them to a file instead.

Er, actually it's just 1024, so that isn't so bad. You probably still don't want all that in your terminal, though.
--
https://mail.python.org/mailman/listinfo/python-list
Re: lotto number generator
On Mon, Jun 1, 2015 at 10:23 AM, gm notmym...@mail.not wrote: Hi. I am new to python so am still in learning phase. I was thinking to make one program that will print out all possible combinations of 10 pairs. I think this is a good way for something bigger :-). This is how this looks like:

1.)  1 2
2.)  1 2
3.)  1 2
4.)  1 2
5.)  1 2
6.)  1 2
7.)  1 2
8.)  1 2
9.)  1 2
10.) 1 2

So, i want to print out (in rows) each possible configuration but for all 10 pairs. example:

1 1 1 2 2 2 2 1 1 1
2 2 2 1 1 1 1 2 2 2
1 1 2 2 1 1 2 2 1 1

etc. What would be the best way to make something like this ? Maybe some tutorial ?

The Python tutorial is a good place to start: https://docs.python.org/3.4/tutorial/index.html For your particular problem you'll probably want to use the itertools.product function: https://docs.python.org/3.4/library/itertools.html#itertools.product I wouldn't recommend printing out every possible combination to the screen, though. That's over a million rows! Start with fewer pairs (4 is probably good), or write them to a file instead.
--
https://mail.python.org/mailman/listinfo/python-list
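For the archive, the itertools.product call in question would look something like this (printing only a few of the rows rather than all of them):

```python
from itertools import product

# Each of the 10 positions independently takes the value 1 or 2,
# giving 2**10 == 1024 configurations in total.
rows = list(product([1, 2], repeat=10))
print(len(rows))   # 1024
print(rows[0])     # (1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
print(rows[-1])    # (2, 2, 2, 2, 2, 2, 2, 2, 2, 2)
```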
Re: Using a particular python binary with venv
On Mon, Jun 1, 2015 at 3:33 PM, greenbay.gra...@gmail.com wrote: According to this https://docs.python.org/3.4/library/venv.html#module-venv 'Each virtual environment has its own Python binary (allowing creation of environments with various Python versions)' So how would I create a virtual environment using the venv module that has a Python 2.7 binary? I wouldn't recommend trying to create a venv virtual environment using Python 2.7, because venv was only added to the standard library in Python 3.3. Use the third-party virtualenv instead. -- https://mail.python.org/mailman/listinfo/python-list
Re: What use for reversed()?
On Sun, May 31, 2015 at 1:58 PM, Denis McMahon denismfmcma...@gmail.com wrote: reversed returns an iterator, not a list, so it returns the reversed list of elements one at a time. You can use list() or create a list from reversed and then join the result:

$ python
Python 2.7.3 (default, Dec 18 2014, 19:10:20)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> "".join(list(reversed("fred")))
'derf'
>>> "".join([x for x in reversed("fred")])
'derf'

So reversed can do it, but needs a little help

The str.join method will happily accept an iterator, so the intermediate list construction in those examples is unnecessary.
--
https://mail.python.org/mailman/listinfo/python-list
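For the archive, the shorter spellings:

```python
word = "fred"

# str.join consumes any iterable, so no intermediate list is needed:
assert "".join(reversed(word)) == "derf"

# A reversing slice gives the same result even more directly:
assert word[::-1] == "derf"
```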
Re: Fwd: Lossless bulletproof conversion to unicode (backslashing) (fwd)
On Fri, May 29, 2015 at 2:05 AM, anatoly techtonik techto...@gmail.com wrote: Added Mailman to my suxx tracker: https://github.com/techtonik/suxx-tracker#mailman What a useless tool. Instead of tiredly complaining that things suck, why not take some initiative to make them better? I'm curious about your complaint about virtualenv. How do you envision that logging in to the env would be any different from activating it? -- https://mail.python.org/mailman/listinfo/python-list
Re: Fwd: Lossless bulletproof conversion to unicode (backslashing) (fwd)
On Fri, May 29, 2015 at 4:44 AM, Jon Ribbens jon+use...@unequivocal.co.uk wrote: On 2015-05-29, Ian Kelly ian.g.ke...@gmail.com wrote: On Fri, May 29, 2015 at 2:05 AM, anatoly techtonik techto...@gmail.com wrote: Added Mailman to my suxx tracker: https://github.com/techtonik/suxx-tracker#mailman What a useless tool. Instead of tiredly complaining that things suck, why not take some initiative to make them better? I'm curious about your complaint about virtualenv. How do you envision that logging in to the env would be any different from activating it? Please Do Not Feed The Troll. It's not a troll if the discussion is potentially useful and not just disruptive. -- https://mail.python.org/mailman/listinfo/python-list
Re: should self be changed?
On Thu, May 28, 2015 at 9:01 AM, Marko Rauhamaa ma...@pacujo.net wrote: Anssi Saari a...@sci.fi: Do you have an example of state pattern using nested classes and python? With a quick look I didn't happen to find one in any language. Here's a sampling from my mail server:

I think I would be more inclined to use enums. This has the advantages of not creating a new set of state classes for every connection instance and that each state is a singleton instance, allowing things like if self.state is SMTPConnectionState.IDLE. It could look something like this:

class SMTPConnectionState(Enum):

    class IDLE:
        @classmethod
        def handle_command(cls, conn, cmd):
            # ...

    class SPF_HELO:
        @classmethod
        def terminate(cls, conn):
            # ...
--
https://mail.python.org/mailman/listinfo/python-list
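For a version that doesn't depend on how Enum treats nested classes (which has varied across Python releases), the states can be plain members with the behavior dispatched per state. A rough sketch with invented names:

```python
from enum import Enum

class State(Enum):
    IDLE = "idle"
    GREETED = "greeted"

class SMTPConnection:
    def __init__(self):
        self.state = State.IDLE

    def handle_command(self, cmd):
        # Enum members are singletons, so the identity test suggested
        # above works as advertised:
        if self.state is State.IDLE and cmd.startswith("HELO"):
            self.state = State.GREETED
            return "250 Hello"
        return "503 Bad sequence of commands"

conn = SMTPConnection()
print(conn.handle_command("HELO example.org"))   # 250 Hello
```

All connections share the same small set of state objects, which was the stated advantage over per-connection state classes.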
Re: different types of inheritence...
On Tue, May 26, 2015 at 2:52 PM, Michael Torrie torr...@gmail.com wrote: On 05/26/2015 08:57 AM, zipher wrote: Comprende? I'm not trying to be cryptic here. This is a bit of OOP theory to be discussed. No, sorry. Maybe an actual example (with use case) would spur discussion. Better yet, ignore the troll. -- https://mail.python.org/mailman/listinfo/python-list
Re: Documentaion of dunder methods
On Mon, May 25, 2015 at 8:17 PM, Steven D'Aprano st...@pearwood.info wrote: PEP 8 states that developers should never invent their own dunder methods: __double_leading_and_trailing_underscore__ : magic objects or attributes that live in user-controlled namespaces. E.g. __init__ , __import__ or __file__ . Never invent such names; only use them as documented. https://www.python.org/dev/peps/pep-0008/#naming-conventions In other words, dunder methods are reserved for use by the core developers for the use of the Python interpreter. Apart from PEP 8, is this documented anywhere in the official documentation? If so, I have been unable to find it. https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers -- https://mail.python.org/mailman/listinfo/python-list
Re: a more precise distance algorithm
On Mon, May 25, 2015 at 1:21 PM, ravas ra...@outlook.com wrote: I read an interesting comment: The coolest thing I've ever discovered about Pythagorean's Theorem is an alternate way to calculate it. If you write a program that uses the distance formula c = sqrt(a^2 + b^2) you will suffer from the loss of half of your available precision because the square root operation is last. A more accurate calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should swap them and of course handle the special case of a = 0. Is this valid? Does it apply to python? Any other thoughts? :D My imagining:

def distance(A, B):
    """A, B are objects with x and y attributes
    :return: the distance between A and B
    """
    dx = B.x - A.x
    dy = B.y - A.y
    a = min(dx, dy)
    b = max(dx, dy)
    if a == 0:
        return b
    elif b == 0:
        return a

This branch is incorrect because a could be negative. You don't need this anyway; the a == 0 branch is only there because of the division by a in the else branch.

    else:
        return a * sqrt(1 + (b / a)**2)

Same issue; if a is negative then the result will have the wrong sign.
--
https://mail.python.org/mailman/listinfo/python-list
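A corrected version needs abs() before ordering the components, and the division has to be by the larger magnitude (a sketch; note that math.hypot already does this kind of scaling internally, so in practice it is the tool to reach for):

```python
from math import sqrt, hypot

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def distance(A, B):
    """Distance between two objects with x and y attributes, scaling by
    the larger component so the squared term cannot overflow."""
    small, large = sorted((abs(B.x - A.x), abs(B.y - A.y)))
    if large == 0:
        return 0.0
    return large * sqrt(1.0 + (small / large) ** 2)

print(distance(Point(0, 0), Point(3, 4)))   # 5.0
print(hypot(3, 4))                          # 5.0
```

Taking abs() first fixes both sign problems Ian points out, and ordering by magnitude keeps the ratio in the sqrt at most 1, which is where the precision claim in the quoted comment comes from.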
Re: Human Rights and Justice in Islam
On Sat, May 23, 2015 at 5:34 PM, hamilton hamil...@nothere.com wrote: So if you are a Member of Islamic State you have rights, other wise you are an infidel and subject to death from the whim of Allah or whom ever thinks they are Allah. Does that sound right ?? Please don't reply to spammers. I guarantee you that they aren't going to read it, so you're just creating more useless traffic for everybody else who reads this list. -- https://mail.python.org/mailman/listinfo/python-list
Re: Ah Python, you have spoiled me for all other languages
On Sat, May 23, 2015 at 8:57 PM, Michael Torrie torr...@gmail.com wrote: On 05/23/2015 05:40 AM, Chris Angelico wrote: On Sat, May 23, 2015 at 9:34 PM, Tim Chase python.l...@tim.thechases.com wrote: A self-signed certificate may be of minimal worth the *first* time you visit a site, but if you return to the site, that initial certificate's signature can be used to confirm that you're talking to the same site you talked to previously. This is particularly valuable on a laptop where you make initial contact over a (I hesitate to say more secure) less hostile connection through your home ISP. Then, when you're out at the library, coffee-shop, or some hacker convention on their wifi, it's possible to determine whether you're securely connecting to the *same* site, or whether an attempt is being made to MitM because the cert changed. You can get the exact same benefit (knowing when the cert changes) with an externally-signed cert too. How many people actually bother to check? Except that you won't be notified automatically. A MitM attack nowadays most often uses a valid certificate signed by a recognized (though untrustworthy) CA. Thus with a self-signed cert that you've previously accepted, you'll immediately know of the MitM attack.

I fail to see how this is the case. If a new certificate is suddenly provided, why should the status of the *previous* certificate have anything to do with whether the browser automatically notifies you? A change from a self-signed certificate to a CA certificate likely just means that the site has upgraded its certificate, not that a MitM attack is underway.
--
https://mail.python.org/mailman/listinfo/python-list
Re: mix-in classes
On Sat, May 23, 2015 at 7:53 PM, Dr. John Q. Hacker zonderv...@gmail.com wrote: The post on different types of inheritence... brought up a thought. Let's say, I'm adding flexibility to a module by letting users change class behaviors by adding different mix-in classes. What should happen when there's a name collision on method names between mix-ins? Since they're mix-ins, it's not presumed that there is any parent class to decide. The proper thing would seem to call each method in the order that they are written within the parent class definition. I suppose one can create a method in the parent class, that runs the mixin methods in the same order as in the inheritance list, but would there be a better way for Python to enforce such a practice so as not to create class anarchy? (A problem for another topic.) Usually with mixins, one just wants to call a method of a specific mixin; a name collision is likely a symptom of poor design, and it would be unusual to want to call *all* mixin methods with the same name. If you really want to do that for a particular method though, is there some reason why super() won't suffice? -- https://mail.python.org/mailman/listinfo/python-list
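To make the collision concrete: under normal MRO rules Python silently picks the first mixin listed, and a specific mixin's method can always be reached by naming it explicitly (a toy sketch, names invented for illustration):

```python
class JSONMixin:
    def serialize(self):
        return "json"

class XMLMixin:
    def serialize(self):
        return "xml"

class Document(JSONMixin, XMLMixin):
    def serialize_all(self):
        # Calling every colliding method means naming each mixin in the
        # desired order; plain attribute lookup picks only the first.
        return [cls.serialize(self) for cls in (JSONMixin, XMLMixin)]

doc = Document()
print(doc.serialize())       # 'json' -- first listed mixin wins
print(doc.serialize_all())   # ['json', 'xml']
```

A cooperative super() chain works too, but only if every mixin is written to call super(), which is exactly the design discipline the post is asking about.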
Re: Ah Python, you have spoiled me for all other languages
On Fri, May 22, 2015 at 1:34 PM, MRAB pyt...@mrabarnett.plus.com wrote: On 2015-05-22 20:14, Laura Creighton wrote: The first time you discover that in javascript typeof(null) is 'object' and not 'null' you will scream. I wonder how many home versions of typeof to replace the system one exist out in the wild? I don't think that typeof(null) should be 'null'. If the type of an integer instance is the integer type and the type of a string instance is the string type, then the type of null should be the null type, not a null instance. I suppose that you could consider that what JavaScript is doing is equivalent to saying in Python that: None = object() like you sometimes do when you want a unique sentinel because None itself would be an acceptable value.

If only it were that logical. null in Javascript is a primitive type. Here's what typeof returns on some other primitive types:

js> typeof(4)
"number"
js> typeof(4.5)
"number"
js> typeof('hello')
"string"
js> typeof(true)
"boolean"
js> typeof(undefined)
"undefined"

An object in Javascript is basically just a collection of properties. For example:

js> typeof([1, 2, 3])
"object"
js> typeof({a: 1, b: 2, c: 3})
"object"

Here's what happens when you try to access a property on null:

js> null.foo
typein:18:0 TypeError: null has no properties

We can conclude that null is not an object. Even the MDN reference page for the typeof operator refers to this as bogus. There was a proposal to fix this in ECMAScript 5.1, but it was rejected because it caused too much breakage.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Ah Python, you have spoiled me for all other languages
On Fri, May 22, 2015 at 2:55 PM, Tim Chase python.l...@tim.thechases.com wrote: I've wondered this on multiple occasions, as I've wanted to just make an attribute bag and have to do something like

    class AttrBag(object):
        pass

    ab = AttrBag()
    ab.x = 42
    ab.y = "some other value"

because just doing ab = object() raises the AttributeError Marko highlights. :-/ This is what types.SimpleNamespace is for. -- https://mail.python.org/mailman/listinfo/python-list
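For reference, the class mentioned in the reply (added in Python 3.3) behaves like the hand-rolled AttrBag:

```python
from types import SimpleNamespace

# Attributes can be supplied at construction time...
ab = SimpleNamespace(x=42, y="some other value")

# ...and, unlike a bare object(), added freely afterwards.
ab.z = ab.x + 1
print(ab.x, ab.z)  # 42 43
```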
Re: Ah Python, you have spoiled me for all other languages
On Fri, May 22, 2015 at 10:30 PM, Michael Torrie torr...@gmail.com wrote: On 05/22/2015 10:10 PM, Ian Kelly wrote: Sure it is. Without some prior reason to trust the certificate, the certificate is meaningless. How is the browser to distinguish between a legitimate self-signed cert and a self-signed cert presented by an attacker conducting a man-in-the-middle attack? How does a CA actually help this problem? It just puts trust in some third party. But as we know, CA authorities are not all trustworthy and they certainly don't guarantee that the site is what it says it is. Nobody is forcing you to trust them. Go ahead and remove the CA certificates that you consider untrustworthy if you want. Remove all of them if you like, although good luck with verifying all those site certificates yourself. The CA helps because some assurance is better than none. There is still some value in TLS with a self-signed certificate in that at least the connection is encrypted and can't be eavesdropped by an attacker who can only read the channel, but there is no assurance that the party you're communicating with actually owns the public key that you've been presented. The same can be said of CA-signed certificates. The only way to know if the site is who they say they are is to know what the cert's fingerprint ought to be and see if it still is. I used to use a firefox plugin for this purpose, but certs for some major sites like even www.google.com change with such frequency that the utility of the plugin went away. So instead of trusting a CA, you have to trust the maintainers of the plugin. How is that any different? -- https://mail.python.org/mailman/listinfo/python-list
Re: Ah Python, you have spoiled me for all other languages
On Fri, May 22, 2015 at 10:20 PM, Ben Finney ben+pyt...@benfinney.id.au wrote: Ian Kelly ian.g.ke...@gmail.com writes: On Fri, May 22, 2015 at 9:31 PM, Michael Torrie torr...@gmail.com wrote: On 05/22/2015 07:54 PM, Terry Reedy wrote: On 5/22/2015 5:40 PM, Tim Daneliuk wrote: Lo these many years ago, I argued that Python is a whole lot more than a programming language: https://www.tundraware.com/TechnicalNotes/Python-Is-Middleware/ Perhaps something at tundraware needs updating. ''' This Connection is Untrusted You have asked Firefox to connect securely to www.tundraware.com, but we can't confirm that your connection is secure. […] Without some prior reason to trust the certificate, the certificate is meaningless. How is the browser to distinguish between a legitimate self-signed cert and a self-signed cert presented by an attacker conducting a man-in-the-middle attack? Any unencrypted HTTP (“http://…”) connection has the same problem. Yet the same browsers don't present a big scary warning for those? The flaw in the browser is that it doesn't complain when an unencrypted HTTP connection is established, but only complains when an *encrypted* connection is made to a site with a self-signed certificate. There is still some value in TLS with a self-signed certificate in that at least the connection is encrypted and can't be eavesdropped by an attacker who can only read the channel, but there is no assurance that the party you're communicating with actually owns the public key that you've been presented. Right. By that logic, let's advocate for browsers to present a big intrusive warning for every HTTP connection that has no SSL layer or certificate. I will agree that a self-signed certificate presents the problem of how to verify the certificate automatically. Where I disagree is that this is somehow less secure than a completely *unencrypted* HTTP connection. No, the opposite is true. I don't disagree with you. 
There *should* be scary warnings for plain HTTP connections (although there is a counter-argument that many sites don't need any encryption and HTTPS would just be wasteful in those cases). The fact that browsers don't yet provide those warnings doesn't change anything that I wrote above. -- https://mail.python.org/mailman/listinfo/python-list
Re: Ah Python, you have spoiled me for all other languages
On Fri, May 22, 2015 at 9:31 PM, Michael Torrie torr...@gmail.com wrote: On 05/22/2015 07:54 PM, Terry Reedy wrote: On 5/22/2015 5:40 PM, Tim Daneliuk wrote: Lo these many years ago, I argued that Python is a whole lot more than a programming language: https://www.tundraware.com/TechnicalNotes/Python-Is-Middleware/ Perhaps something at tundraware needs updating. ''' This Connection is Untrusted You have asked Firefox to connect securely to www.tundraware.com, but we can't confirm that your connection is secure. Normally, when you try to connect securely, sites will present trusted identification to prove that you are going to the right place. However, this site's identity can't be verified. ''' Sigh. I blame this as much on the browser. There's no inherent reason why a connection to a site secured with a self-signed certificate is insecure. In fact it's definitely not. Sure it is. Without some prior reason to trust the certificate, the certificate is meaningless. How is the browser to distinguish between a legitimate self-signed cert and a self-signed cert presented by an attacker conducting a man-in-the-middle attack? There is still some value in TLS with a self-signed certificate in that at least the connection is encrypted and can't be eavesdropped by an attacker who can only read the channel, but there is no assurance that the party you're communicating with actually owns the public key that you've been presented. -- https://mail.python.org/mailman/listinfo/python-list
Re: case-sensitive configparser without magical interpolation?
On Fri, May 22, 2015 at 10:59 AM, georgeryo...@gmail.com wrote: [python 2.7] I need to use a configparser that is case-sensitive for option names, but does not do magical interpolation of percent sign. I.e.: [Mapping0] backupHost = eng%26 dbNode = v_br_node0001 should be read (and later written) as is, including capitalization and the percent sign. I find that RawConfigParser keeps the %, but downcases the option name. And SafeConfigParser can be hacked with optionxform to be case-sensitive, but does interpolation. How can I get case-sensitive but no interpolation? RawConfigParser also has the optionxform method; have you tried overriding that? If that doesn't work, then how strict is the 2.7 requirement? In 3.2 or later, the ConfigParser takes an interpolation keyword argument that can be used to disable interpolation: https://docs.python.org/3.4/library/configparser.html#configparser-objects -- https://mail.python.org/mailman/listinfo/python-list
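A sketch of the suggested combination (shown with the Python 3 configparser; on 2.7 the same optionxform override applies to RawConfigParser, feeding the file in with readfp() instead of read_string):

```python
from configparser import RawConfigParser

class CaseSensitiveRawParser(RawConfigParser):
    # RawConfigParser never interpolates, and making optionxform a
    # no-op (instead of the default str.lower) preserves option case.
    def optionxform(self, optionstr):
        return optionstr

cfg = CaseSensitiveRawParser()
cfg.read_string("""
[Mapping0]
backupHost = eng%26
dbNode = v_br_node0001
""")

print(cfg.get("Mapping0", "backupHost"))  # eng%26 -- '%' kept literally
print(cfg.options("Mapping0"))            # option names keep their case
```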
Re: subprocess.Popen zombie
On May 21, 2015 12:41 AM, Thomas Rachel nutznetz-0c1b6768-bfa9-48d5-a470-7603bd3aa...@spamschutz.glglgl.de wrote: Am 20.05.2015 um 18:44 schrieb Robin Becker: not really, it's just normal to keep event routines short; the routine which beeps is after detection of the cat's entrance into the house and various recognition schemes have pronounced intruder :) You could add a timed cleanup routine which .wait()s after a certain time (250 ms or so). Or even better, which .poll()s and re-schedules itself if the process still runs. Or just: Thread(target=p.wait).start() And then you can forget all about it. -- https://mail.python.org/mailman/listinfo/python-list
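The one-liner from the reply, fleshed out; the sleep child here is just a stand-in for the beeping process, and this assumes a POSIX system where the sleep command exists:

```python
import subprocess
import threading

p = subprocess.Popen(["sleep", "0.1"])  # any short-lived child will do

# The thread blocks in wait() until the child exits, reaping the
# zombie without the main program ever having to think about it.
reaper = threading.Thread(target=p.wait)
reaper.start()

reaper.join()        # only for demonstration; normally fire-and-forget
print(p.returncode)  # 0 once the child has been reaped
```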
Re: How to do integers to binary lists and back
On Thu, May 21, 2015 at 4:31 PM, Ben Finney ben+pyt...@benfinney.id.au wrote:

    >>> "{foo:d}".format(foo=foo)
    '4567'
    >>> "{foo:b}".format(foo=foo)
    '1000111010111'

Which, since there's nothing else in the format string, can be simplified to:

    >>> format(foo, "b")
    '1000111010111'

-- https://mail.python.org/mailman/listinfo/python-list
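To round-trip per the thread's subject (integer to a list of bits and back), one sketch building on format():

```python
foo = 4567
bits = [int(ch) for ch in format(foo, "b")]
print(bits)  # [1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1]

# int() with base 2 reverses the conversion.
back = int("".join(str(b) for b in bits), 2)
print(back)  # 4567
```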
Re: Help regarding python run time
On Wed, May 20, 2015 at 7:02 AM, AKHIL RANA akh...@iitk.ac.in wrote: Hi, I am a student at IIT Kanpur working on an OpenCV-based Python project. I am trying to develop a program that takes less time to execute. To that end I timed a small hello world program in Python; I ran it many times, and every run gives a different run time. Can we somehow make the program run in a constant time, so that we can then work on reducing that time? Not practically. The exact run time depends on a lot of factors that are mostly out of your control: what other programs are running, what they're doing and what resources they're using at that moment; what hardware interrupts occur while your program is running; whether data to be read from disk is cached or not; whether data to be loaded from RAM is cached or not; and if the disk is mechanical, what cylinder and sector the read head happens to be at when it gets a read request. Instead of trying to measure an exact time over one run, you will get better results by running the program several times and then taking the minimum measurement as representative of the program's runtime under ideal conditions. -- https://mail.python.org/mailman/listinfo/python-list
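The advice in that reply (repeat the measurement, report the minimum) is exactly what the timeit module is built for; the statement being timed here is an arbitrary example:

```python
import timeit

# Run the statement 1000 times per trial, for 5 trials; the minimum
# trial is the one least perturbed by unrelated system activity.
timings = timeit.repeat("sum(range(1000))", repeat=5, number=1000)
print("best of 5: %.6f seconds" % min(timings))
```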
Re: Help regarding python run time
On Wed, May 20, 2015 at 9:48 AM, Irmen de Jong irmen.nos...@xs4all.nl wrote: Or measure the actual CPU clock cycles taken instead of the wall clock run time. Then you should get a fairly constant number, if the program does the same work every time you run it.

    phobos:~ irmen$ time python test.py

    real    0m3.368s
    user    0m0.214s   <-- cpu time spent in user mode actually doing work
    sys     0m0.053s

And yet:

    $ time python3 primes.py

    real    0m1.101s
    user    0m1.099s
    sys     0m0.000s

    $ time python3 primes.py

    real    0m1.135s
    user    0m1.128s
    sys     0m0.004s

    $ time python3 primes.py

    real    0m1.162s
    user    0m1.147s
    sys     0m0.013s

http://unix.stackexchange.com/questions/162115/why-does-the-user-and-sys-time-vary-on-multiple-executions -- https://mail.python.org/mailman/listinfo/python-list
Re: Slices time complexity
On Wed, May 20, 2015 at 1:51 PM, Mario Figueiredo mar...@gmail.com wrote: On Wed, 20 May 2015 03:07:03 +1000, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: Yes, a slice can be expensive, if you have (say) a ten billion element list, and take a slice list[1:]. Since nothing seems to surprise you and you seem so adamant on calling out anyone being surprised by it, maybe I will surprise you if you actually run the code I posted in the OP and witness for yourself that even a 50 element list will take 3 seconds to execute on an Intel i5. I suspect you've made a mistake in your timing. I measure it at 20.8 microseconds on a Xeon W3690.

    >>> def minimum(values):
    ...     if len(values) == 1:
    ...         return values[0]
    ...     else:
    ...         m = minimum(values[1:])
    ...         return m if m < values[0] else values[0]
    ...
    >>> from timeit import Timer
    >>> t = Timer("minimum(values)", setup="from __main__ import minimum; values = list(range(50))")
    >>> min(t.repeat(number=10))
    2.077940827002749e-04

-- https://mail.python.org/mailman/listinfo/python-list
Re: fork/exec close file descriptors
On Tue, May 19, 2015 at 7:10 PM, Gregory Ewing greg.ew...@canterbury.ac.nz wrote: On Tue, May 19, 2015 at 8:54 AM, Chris Angelico ros...@gmail.com mailto:ros...@gmail.com wrote: On Linux (and possibly some other Unixes), /proc/self/fd may be of use. On MacOSX, /dev/fd seems to be the equivalent of this. Not a perfect equivalent. On Linux, ls -lF /proc/self/fd shows the contents as symlinks, which is handy since you can just read the links to see what they're pointing to. On OSX, ls -lF /dev/fd shows three ttys and two directories. Though I also note that on my Ubuntu Trusty system, /dev/fd is itself a symlink to /proc/self/fd. -- https://mail.python.org/mailman/listinfo/python-list
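Reading the fd directory from Python is straightforward; this sketch assumes a Linux system where /proc/self/fd exists (on OS X the rough analogue is /dev/fd, per the message above):

```python
import os

fd_dir = "/proc/self/fd"  # Linux-specific path
# Each entry is a symlink named after an open file descriptor.
fds = sorted(int(name) for name in os.listdir(fd_dir))
print(fds)  # includes at least 0, 1, 2 (stdin, stdout, stderr)
```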
Re: No ttk in 2.7
On Wed, May 20, 2015 at 12:54 PM, Cecil Westerhof ce...@decebal.nl wrote: Op Wednesday 20 May 2015 19:03 CEST schreef Zachary Ware:

    try:
        import tkinter as tk
        from tkinter import ttk
    except ImportError:
        import Tkinter as tk
        import ttk

When there goes something wrong with:

    from tkinter import ttk

you will not understand what is happening. ;-) If something goes wrong with the first import and raises an ImportError, then it will execute the except clause, which will definitely raise an ImportError. In this case the second error will simply be chained onto the first, so the details of the first error won't be lost. -- https://mail.python.org/mailman/listinfo/python-list
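The implicit exception chaining described in that reply can be seen with any failing fallback import; the module names below are made up so that both imports fail:

```python
try:
    import tkinter_module_that_does_not_exist  # hypothetical first import
except ImportError:
    try:
        import Tkinter_module_that_does_not_exist  # hypothetical fallback
    except ImportError as exc:
        # The original failure is kept on __context__, so a traceback
        # shows both errors ("During handling of the above exception,
        # another exception occurred").
        chained = exc.__context__
        print(isinstance(chained, ImportError))  # True
```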
Re: Slices time complexity
On May 19, 2015 4:16 AM, Serhiy Storchaka storch...@gmail.com wrote: On 19.05.15 12:45, Steven D'Aprano wrote: On Tuesday 19 May 2015 05:23, Mario Figueiredo wrote: From the above link it seems slices work in linear time on all cases. I wouldn't trust that is always the case, e.g. deleting a contiguous slice from the end of a list could be O(1). It always has linear complexity. You need to decref removed elements. Only in CPython. The operation might be O(1) in Pypy or Jython. -- https://mail.python.org/mailman/listinfo/python-list
Re: Why does the first loop go wrong with Python3
On Tue, May 19, 2015 at 8:44 AM, Cecil Westerhof ce...@decebal.nl wrote: I looked at the documentation. Is it necessary to do a: p.wait() afterwards? It's good practice to clean up zombie processes by waiting on them, but they will also get cleaned up when your script exits. -- https://mail.python.org/mailman/listinfo/python-list
Re: Slices time complexity
On Mon, May 18, 2015 at 1:23 PM, Mario Figueiredo mar...@gmail.com wrote: I'd like to understand what I'm being told about slices in https://wiki.python.org/moin/TimeComplexity Particularly, what's a 'del slice' and a 'set slice' and whether this information pertains to both CPython 2.7 and 3.4. Del Slice is the operation where a slice of a list is deleted, and Set Slice is the operation where a slice is replaced. E.g.:

    >>> x = list(range(100))
    >>> del x[2:98]
    >>> x
    [0, 1, 98, 99]
    >>> x[1:3] = [7, 6, 5, 4, 3]
    >>> x
    [0, 7, 6, 5, 4, 3, 99]

Other languages implement slices. I'm currently being faced with a Go snippet that mirrors the exact code above and it does run in linear time. Is there any reason why Python 3.4's implementation of slices cannot be a near constant operation? The semantics are different. IIRC, a slice in Go is just a view of some underlying array; if you change the array (or some other slice of it), the change will be reflected in the slice. A slice of a list in Python, OTOH, constructs a completely independent list. It may be possible that lists in CPython could be made to share their internal arrays with other lists on a copy-on-write basis, which could allow slicing to be O(1) as long as neither list is modified while the array is being shared. I expect this would be a substantial piece of work, and I don't know if it's something that anybody has looked into. -- https://mail.python.org/mailman/listinfo/python-list
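The semantic difference described in that reply can be demonstrated directly. A memoryview is the closest thing in Python to a Go-style O(1) view, though it only works on buffer objects such as bytearray, not on lists:

```python
# A list slice is an independent copy: mutating it leaves the original alone.
x = list(range(10))
y = x[2:8]
y[0] = 99
print(x[2])    # 2 -- original unchanged

# A memoryview slice shares the underlying buffer, like a Go slice.
buf = bytearray(range(10))
view = memoryview(buf)[2:8]
view[0] = 99
print(buf[2])  # 99 -- change visible through the original buffer
```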
Re: Fastest way to remove the first x characters from a very long string
On Sat, May 16, 2015 at 10:22 AM, bruceg113...@gmail.com wrote:

    # Chris's Approach #
    lines = ss.split("\n")
    new_text = "\n".join(line[8:] for line in lines)

Looks like the approach you have may be fast enough already, but I'd wager the generator expression could be replaced with:

    map(operator.itemgetter(slice(8, None)), lines)

for a modest speed-up. On the downside, this is less readable. Substitute itertools.imap for map if using Python 2.x. -- https://mail.python.org/mailman/listinfo/python-list
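A quick check that the itemgetter version produces the same result as the slicing version (the sample text here is made up, with an 8-character prefix on each line):

```python
import operator

ss = "12345678alpha\n12345678beta\n12345678gamma"
lines = ss.split("\n")

via_slice = "\n".join(line[8:] for line in lines)
via_getter = "\n".join(map(operator.itemgetter(slice(8, None)), lines))

print(via_slice)                # alpha, beta, gamma -- one per line
print(via_slice == via_getter)  # True
```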
Re: Building CPython
On Fri, May 15, 2015 at 6:43 AM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: How much time would it save? Probably very little. After all, unless the method call itself did bugger-all work, the time to create the method object is probably insignificant. But it's a possible optimization. An interesting alternative (if it's not already being done) might be to maintain a limited free-list of method objects, removing the need to allocate memory for one before filling it in with data. -- https://mail.python.org/mailman/listinfo/python-list
Re: Building CPython
On Fri, May 15, 2015 at 9:00 AM, Ian Kelly ian.g.ke...@gmail.com wrote: On Fri, May 15, 2015 at 6:43 AM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: How much time would it save? Probably very little. After all, unless the method call itself did bugger-all work, the time to create the method object is probably insignificant. But it's a possible optimization. An interesting alternative (if it's not already being done) might be to maintain a limited free-list of method objects, removing the need to allocate memory for one before filling it in with data. Looks like it is already being done: https://hg.python.org/cpython/file/e7c7431f91b2/Objects/methodobject.c#l7 -- https://mail.python.org/mailman/listinfo/python-list
Re: a python pitfall
On Thu, May 14, 2015 at 12:06 PM, Billy Earney billy.ear...@gmail.com wrote: Hello friends: I saw the following example at http://nafiulis.me/potential-pythonic-pitfalls.html#using-mutable-default-arguments and did not believe the output produced, so I had to try it for myself. See also https://docs.python.org/3/faq/programming.html#why-are-default-values-shared-between-objects -- https://mail.python.org/mailman/listinfo/python-list
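The pitfall from the linked FAQ, in brief: the default list is created once, at function definition time, and shared across calls. The usual fix is a None default:

```python
def append_bad(item, items=[]):
    # The single default list persists between calls.
    items.append(item)
    return items

print(append_bad(1))  # [1]
print(append_bad(2))  # [1, 2]  -- surprise!

def append_good(item, items=None):
    if items is None:
        items = []    # a fresh list on every call
    items.append(item)
    return items

print(append_good(1))  # [1]
print(append_good(2))  # [2]
```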
Re: Survey -- Move To Trash function in Python?
On May 14, 2015 7:55 PM, Chris Angelico ros...@gmail.com wrote: (Though when it comes to the bikeshedding phase, I'm sure there'll be some who say if it can't be trashed, just hard delete it, and others who say if it can't be trashed, raise an exception. And neither is truly wrong.) The answer is raise an exception. Moving to trash and deleting are different operations, and one shouldn't be substituted for the other any more than a failed attempt to create a hard link should create a soft link instead. If the user wants, they can catch the exception and delete the file instead. Recovering from an accidental deletion would be more difficult. -- https://mail.python.org/mailman/listinfo/python-list
Re: Basic misunderstanding on object creation
On Wed, May 13, 2015 at 8:42 AM, andrew cooke and...@acooke.org wrote: On Wednesday, 13 May 2015 11:36:12 UTC-3, Thomas Rachel wrote: Am 13.05.2015 um 15:25 schrieb andrew cooke:

    >>> class Foo:
    ...     def __new__(cls, *args, **kargs):
    ...         print('new', args, kargs)
    ...         super().__new__(cls, *args, **kargs)
    ...
    >>> Foo(1)
    new (1,) {}
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 4, in __new__
    TypeError: object() takes no parameters

object's __new__() doesn't take any parameters. So call it without arguments:

    class Foo:
        def __new__(cls, *args, **kargs):
            print('new', args, kargs)
            super().__new__(cls)

(at least if we know that we inherit from object. Might be that this one doesn't work very well with multiple inheritance...) Thomas But then nothing will be passed to __init__ on the subclass. __init__ is not called by __new__. In the object construction, __new__ is called, and *then* __init__ is called on the result. -- https://mail.python.org/mailman/listinfo/python-list
Re: Basic misunderstanding on object creation
On Wed, May 13, 2015 at 8:45 AM, andrew cooke and...@acooke.org wrote:

    >>> class Foo:
    ...     def __new__(cls, *args, **kargs):
    ...         print('new', args, kargs)
    ...         super().__new__(cls)
    ...
    >>> class Bar(Foo):
    ...     def __init__(self, a):
    ...         print('init', a)
    ...
    >>> Bar(1)
    new (1,) {}

no init is printed. You're not returning anything from Foo.__new__, so the result of the constructor is None. None.__init__ does nothing. -- https://mail.python.org/mailman/listinfo/python-list
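The fix implied by that reply is to return the instance from __new__, after which construction proceeds normally:

```python
class Foo:
    def __new__(cls, *args, **kargs):
        print('new', args, kargs)
        # Returning the instance is essential: the constructor's result
        # is whatever __new__ returns, and __init__ only runs on that
        # result if it is an instance of the class.
        return super().__new__(cls)

class Bar(Foo):
    def __init__(self, a):
        print('init', a)

b = Bar(1)  # prints: new (1,) {}  then: init 1
print(type(b).__name__)  # Bar
```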
Re: Instead of deciding between Python or Lisp for a programming intro course...What about an intro course that uses *BOTH*? Good idea?
On Wed, May 13, 2015 at 12:07 PM, zipher dreamingforw...@gmail.com wrote: On Wednesday, May 13, 2015 at 10:27:23 AM UTC-5, Ian wrote: I don't know why I'm replying to this... Because you're trying to get an answer to a question that even Academia hasn't answered or understood. On Wed, May 13, 2015 at 8:44 AM, zipher dreamingforw...@gmail.com wrote: On Tuesday, May 12, 2015 at 10:35:29 PM UTC-5, Rustom Mody wrote: How history U-turns!! Lisp actually got every major/fundamental thing wrong - variables scopes were dynamic by mistake - lambdas were non-first class because the locution 'first-class' was still 8 years in the future I think you're confused. LISP doesn't have variables. Yes, it does. No, Common LISP does, but as the website says Common LISP is a multi-paradigm language. It's trying to be everything to everybody, just like Python tried to do in the other direction, making everything an object. Python was trying to be too pure, while LISP was trying to be too universal: be everything to everyone -- you might say batteries included. True LISP doesn't need a 'let' statement, for example. To understand true LISP you have to understand the modus operandi of the lambda the ultimate crowd. Very few do from academic computer science. MIT understands it. You think you understand it, but you don't. By true LISP are you referring to the original specification by John McCarthy? Here's an example lambda S-expression from McCarthy's original paper: (LABEL, SUBST, (LAMBDA, (X, Y, Z), (COND ((ATOM, Z), (COND, (EQ, Y, Z), X), ((QUOTE, T), Z))), ((QUOTE, T), (CONS, (SUBST, X, Y, (CAR Z)), (SUBST, X, Y, (CDR, Z))) Ugh, what a mess. But ignoring that, tell us how many variables you see there. I'll give you a hint: I count more than two. It's only abstractions, like math. Its purpose is to output a final result, that is all. It's not at all to make commercial applications. It's rather like Asimov's computer in the Last Question. It's a whole different model of computation.
Instead of a Turing Tape or VonNeumann stream, you have hierarchies of expressions all evaluating... ...well I would say all at the same time, but since I have to constrain my description to a common set of reality that is shared with you, then I'd say on the stack frame. It's why they had specialized machines to run true LISP. Sure. Lisp machines never, ever ran computer graphics applications. Or medical image processing. Nope, never, because that would be commercial and dirty and a hideous perversion of the idol of computer science created by the prophet McCarthy. Text editors? Heavens to Betsy, now you're just trying to shock me, aren't you! with an entirely different model computation than other programming languages which use variables all the time. To the extent that it DOES have variables, it's to accommodate those coming over from iterative programming. What is iterative programming? If you mean writing programs that work iteratively, then this describes both functional and procedural styles alike. Yes, and LISP is neither. Although LISP is a functional style, that is only by appearance. It's completely different from Haskell, which I would describe as a true functional language. The difference is how the program is lexed in the mind or on the machine. But that's too difficult to explain on this thread. And Fermat had a truly marvelous proof, which you would think wonderful, if only he had enough room in that infamous margin. The opposite of iterative programming would then be a style where the program can't ever repeat anything. That would be a very limited language and would *not* be equivalent to a Turing machine. The opposite of iterative programming is recursive programming. It's not limited, except that it has an entirely different relationship to data, one orthogonal to iterative computation. Iteration is a type of recursion. 
Specifically, it's the type of recursion that doesn't require keeping a stack of values from each higher-up repetition in the evaluation. Also known as linear recursion or tail recursion. Often people use the word iteration to mean a syntactic construct that repeats itself without explicit reference to a function and recursion to mean a syntactic construct where a function explicitly repeats itself, but from a theoretical standpoint this is all just syntax. The two constructs are fundamentally the same. And the idea of lambdas were already encoded by the use of special expressions, set-off by parenthesis. So they practically *defined* the concept of lambdas. LISP is also the reason why we're cursed with the terrible name lambda for anonymous functions rather than something more mnemonic (like function). No, you haven't understood, padawan. *plonk* -- https://mail.python.org/mailman/listinfo/python-list
Re: Instead of deciding between Python or Lisp for a programming intro course...What about an intro course that uses *BOTH*? Good idea?
I don't know why I'm replying to this... On Wed, May 13, 2015 at 8:44 AM, zipher dreamingforw...@gmail.com wrote: On Tuesday, May 12, 2015 at 10:35:29 PM UTC-5, Rustom Mody wrote: How history U-turns!! Lisp actually got every major/fundamental thing wrong - variables scopes were dynamic by mistake - lambdas were non-first class because the locution 'first-class' was still 8 years in the future I think you're confused. LISP doesn't have variables. Yes, it does. It's a lambda calculus No, it isn't. Lambda calculus is a formal system of mathematics. LISP is a programming language. It may draw inspiration and borrow notation from lambda calculus, but these are different things with different uses and purposes. with an entirely different model of computation than other programming languages which use variables all the time. To the extent that it DOES have variables, it's to accommodate those coming over from iterative programming. What is iterative programming? If you mean writing programs that work iteratively, then this describes both functional and procedural styles alike. The opposite of iterative programming would then be a style where the program can't ever repeat anything. That would be a very limited language and would *not* be equivalent to a Turing machine. And the idea of lambdas were already encoded by the use of special expressions, set-off by parenthesis. So they practically *defined* the concept of lambdas. LISP is also the reason why we're cursed with the terrible name lambda for anonymous functions rather than something more mnemonic (like function). -- https://mail.python.org/mailman/listinfo/python-list
Re: Python file structure
On Tue, May 12, 2015 at 1:29 PM, Chris Angelico ros...@gmail.com wrote: On Wed, May 13, 2015 at 5:13 AM, zljubisic...@gmail.com wrote: If I find an error in command line parameters section I cannot call function usage() because it is not defined yet. I have few options here: 1. Put definition of usage function before command line parameters parsing section I'd do this, unless there's a good reason not to. A simple usage function probably doesn't have many dependencies, so it can logically go high in the code. As a general rule, I like to organize code such that things are defined higher up than they're used; it's not strictly necessary (if they're used inside functions, the requirement is only that they be defined before the function's called), but it helps with clarity. That generally means that def usage(): wants to go up above any place where usage() occurs, but below the definitions of any functions that usage() itself calls, and below the first assignments to any global names it uses. It's not always possible, but when it is, it tends to produce an easy-to-navigate source file. +1. Also, I like to put command-line parsing inside the main function and make that its *only* responsibility. The main function then calls the real entry point of my script, which will be something more specifically named. This also has the advantage that if some other module needs to invoke my script, all it has to do is call the entry point function which will be named something more suitable than main. -- https://mail.python.org/mailman/listinfo/python-list
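A sketch of the layout described in that reply (all names here are invented for illustration): usage() defined near the top, main() limited to argument handling, and a descriptively named function doing the real work so other modules can call it directly:

```python
import sys

def usage():
    # Defined before anything that calls it.
    print("usage: script.py INPUT OUTPUT", file=sys.stderr)

def convert(input_path, output_path):
    # Hypothetical "real" entry point; other modules can call this
    # directly instead of faking a command line.
    print("converting", input_path, "->", output_path)

def main(argv):
    # main()'s only responsibility: parse arguments, then delegate.
    if len(argv) != 3:
        usage()
        return 2
    convert(argv[1], argv[2])
    return 0

if __name__ == "__main__":
    status = main(["script.py", "in.txt", "out.txt"])  # demo invocation
```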
Re: anomaly
On Tue, May 12, 2015 at 9:34 AM, zipher dreamingforw...@gmail.com wrote: * when it comes to built-in functions (e.g. sum, map, pow) and types (e.g. int, str, list) there are significant and important use-cases for allowing shadowing; Name one significant and important use case for shadowing built-in types. Functions, I don't have a problem with, but types are more fundamental than functions.

    try:
        str = unicode  # Use the Python 3 name.
    except NameError:
        pass

-- https://mail.python.org/mailman/listinfo/python-list
Re: Instead of deciding between Python or Lisp for a programming intro course...What about an intro course that uses *BOTH*? Good idea?
On Tue, May 12, 2015 at 9:11 PM, zipher dreamingforw...@gmail.com wrote: I know. That's because most people have fallen off the path (http://c2.com/cgi/wiki?OneTruePath). You wrote that, didn't you? I recognize that combination of delusional narcissism and curious obsession with Turing machines. You haven't done it because either others have done it for you (NumPy) or you simply aren't perfecting anything that needs to scale; i.e. you don't really need to minimize memory or CPU consumption because you're working with toy problems relative to the power of most hardware these days. There is such a thing as over-optimization. Given unlimited programmer time, sure, everything might be made to run using the minimum possible time and space. Nobody has unlimited time, though. The job of the programmer is to get the program to run fast enough for the needs of the application. Getting it to run faster than it needs to is generally a waste of the programmer's time that could be spent on more valuable tasks. Of course, I say this as somebody who works on a highly scaled user-facing application that will never be fast enough. :-) -- https://mail.python.org/mailman/listinfo/python-list
Re: code blocks
On Sun, May 10, 2015 at 7:39 PM, zipher dreamingforw...@gmail.com wrote: Similarly, you'd want: encode(codestr) to instantiate all objects in the codestr. You can't do this with eval, because it doesn't allow assignment (eval("n=2") raises a SyntaxError). Is exec what you're looking for?

    >>> exec('n = 2')
    >>> print(n)
    2

-- https://mail.python.org/mailman/listinfo/python-list
Re: code blocks
On Sun, May 10, 2015 at 9:31 PM, Ian Kelly ian.g.ke...@gmail.com wrote: On Sun, May 10, 2015 at 7:39 PM, zipher dreamingforw...@gmail.com wrote: Similarly, you'd want: encode(codestr) to instantiate all objects in the codestr. You can't do this with eval, because it doesn't allow assignment. Is exec what you're looking for?

    >>> exec('n = 2')
    >>> print(n)
    2

I just found that eval can be used to evaluate a code object compiled in the 'exec' mode:

    >>> eval(compile('n = 42', '', 'exec'))
    >>> n
    42

Interesting. But I suppose that the mode is really just a compilation option and there's not really much distinction as far as eval is concerned once they've been compiled. -- https://mail.python.org/mailman/listinfo/python-list
Re: Instead of deciding between Python or Lisp for a programming intro course...What about an intro course that uses *BOTH*? Good idea?
On Sun, May 10, 2015 at 3:16 PM, Marko Rauhamaa ma...@pacujo.net wrote: Scheme is my favorite language. I think, however, it is a pretty advanced language and requires a pretty solid basis in programming and computer science. Python, in contrast, is a great introductory programming language. Sure, you *can* get quite advanced with it, too, but you can get quite a bit of fun stuff done with just the basics. MIT famously used Scheme in their introductory course for more than two decades. Although they switched to Python a few years ago, I don't think they did so because there was anything wrong with Scheme. Wikipedia informs me that Yale and Grinnell are still using Scheme for their introductory courses. Of course, you could introduce Scheme with similar simplifications. However, such simplifications (say, iterative constructs) are nonidiomatic in Scheme. The students should not get into bad habits that they need to be weaned off of later. You don't need iterative constructs to teach an introductory course. The full text of SICP (the wizard book) is available on the web for anyone to read at https://mitpress.mit.edu/sicp/. I don't think it ever even *mentions* iterative constructs. Where it distinguishes recursive algorithms from iterative ones, recursive syntax is used in both cases. I'm thinking half way into the semester, instead of moving into intermediate Scheme, perhaps that is a good time to switch to Python? No, stick with one language for at least the first course. Needing to learn the syntax and semantics of *two* programming languages, especially two such different ones, is just going to distract students from the fundamental concepts that the introductory class is intended to teach. -- https://mail.python.org/mailman/listinfo/python-list
Re: anomaly
On Sun, May 10, 2015 at 10:34 AM, Mark Rosenblitt-Janssen dreamingforw...@gmail.com wrote: Here's something that might be wrong in Python (tried on v2.7): class int(str): pass This defines a new class named int that is a subclass of str. It has no relation to the builtin class int. int(3) '3' This creates an instance of the above int class, which is basically equivalent to calling str(3). Were you expecting a different result? -- https://mail.python.org/mailman/listinfo/python-list
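The shadowing at work can be demonstrated directly (Python 3 shown, where the untouched builtin is reachable through the builtins module):

```python
import builtins

class int(str):
    # a new class that merely *shadows* the name "int" in this module;
    # it subclasses str, so calling it is equivalent to calling str()
    pass

assert int(3) == "3"                      # our int is really str
assert builtins.int(3) == 3               # the real builtin is untouched
assert not issubclass(int, builtins.int)  # no relation between the two
```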
Re: Calling a function is faster than not calling it?
On Sun, May 10, 2015 at 10:14 AM, Peter Otten __pete...@web.de wrote: When there was an actual speed-up I also had a look at PyEval_GetGlobals/Locals() which in turn call PyEval_GetFrame() and PyEvalPyFrame_FastToLocalsWithError() whatever these do. (The first function reminded me of sys._getframe() hence the mention of stack inspection) Based on the names, I surmise that the first one gets the top stack frame object, and that the second one extracts the fast local variables from the frame object and builds a dict of them for use by eval. -- https://mail.python.org/mailman/listinfo/python-list
Re: functions, optional parameters
On Fri, May 8, 2015 at 9:50 AM, Michael Welle mwe012...@gmx.net wrote: Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes: If your language uses late binding, it is very inconvenient to get early binding when you want it. But if your language uses early binding, it is very simple to get late binding when you want it: just put the code you want to run inside the body of the function: And you have to do it all the time again and again. I can't provide hard numbers, but I think usually I want late binding. You could perhaps write a decorator to evaluate your defaults at call time. This one relies on inspect.signature, so it requires Python 3.3 or newer:

    import inspect
    from functools import wraps

    def late_defaults(**defaults):
        def decorator(f):
            sig = inspect.signature(f)
            @wraps(f)
            def wrapped(*args, **kwargs):
                bound_args = sig.bind_partial(*args, **kwargs)
                for name, get_value in defaults.items():
                    if name not in bound_args.arguments:
                        bound_args.arguments[name] = get_value()
                return f(*bound_args.args, **bound_args.kwargs)
            return wrapped
        return decorator

    @late_defaults(b=lambda: x+1, c=lambda: y*2)
    def f(a, b, c=None):
        print(a, b, c)

    x = 14
    y = 37
    f(10)
    x = 30
    y = 19
    f(10)
    f(10, 11)
    f(10, 11, c=12)

Output:

    10 15 74
    10 31 38
    10 11 38
    10 11 12

For documentation purposes I suggest using default values of None in the function spec to indicate that the arguments are optional, and elaborating on the actual defaults in the docstring. Alternatively you could put the lambdas in the actual function spec and then just tell the decorator which ones to apply if not supplied, but that would result in less readable pydoc. -- https://mail.python.org/mailman/listinfo/python-list
Re: Moving to Python 3.x
On Sat, May 9, 2015 at 12:30 PM, Antranig Vartanian antra...@pingvinashen.am wrote: Hey, I learned the basics of python using the book Think Python (http://www.greenteapress.com/thinkpython/) which was good (IMHO), and it teaches in Python 2.7. Now I'm trying to write my first python+gtk program. anyways, my question will be, is it so necessary to move to python3.x ASAP? or Python2.7 will live for a while (2-3 years)?. Python 2.7 will continue to be maintained through 2020. If you don't have any specific reason to use Python 2.7 (such as a library dependency), then you should try to use 3.x for new projects. You'll avoid the pain of needing to migrate later, and you'll be able to start taking advantage of newer features right away. and what do you advise a newbie programmer to do after learning the basics? Find an existing open source project that you'd like to contribute to. It doesn't have to be anything major, but it will help you learn about the Python ecosystem, and the opportunities to collaborate will help you build your skills. It also looks good on a resume, if your plans include being a professional Python programmer. -- https://mail.python.org/mailman/listinfo/python-list
Re: How to properly apply OOP in the bouncing ball code
On May 8, 2015 9:46 AM, Tommy C tommyc168...@gmail.com wrote: I'm trying to apply OOP in this bouncing ball code in order to have multiple balls bouncing around the screen. The objective of this code is to create a method called settings, which controls all the settings for the screen and the bouncing behaviour of multiple balls, under the class Ball. However, I keep on getting errors related to attributes (e.g., speed). I'm new to OOP in Python so your help will be much appreciated. Thanks in advance.

As the error says, the attribute does not exist.

    class Ball:
        def __init__(self, X, Y):
            self.velocity = [1, 1]

Here where you set it, you call it velocity.

            speed = self.velocity

Here you create a local variable called speed, which you never use.

    if balls.ball_boundary.left < 0 or balls.ball_boundary.right > self.width:
        balls.speed[0] = -balls.speed[0]

And here you look up an attribute of Ball called speed, which doesn't match the name you used in __init__. This is a muddled design overall. Your Ball class represents the individual balls bouncing around the screen. It should not also contain details about window size and the logic for the event loop. Use a separate class for that. Similarly, if the purpose of your settings method is to manage settings, why does it also contain all the bouncing logic? These should probably be separate methods. -- https://mail.python.org/mailman/listinfo/python-list
Re: functions, optional parameters
On May 8, 2015 9:26 AM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: Do you think that Python will re-compile the body of the function every time you call it? Setting the default is part of the process of compiling the function. To be a bit pedantic, that's not accurate. The default is evaluated when the function object is created, i.e. when the def statement is executed at runtime, not when the underlying code object is compiled. -- https://mail.python.org/mailman/listinfo/python-list
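The def-time evaluation is easy to demonstrate (a minimal sketch using time.time() as the default expression):

```python
import time

def f(t=time.time()):   # the default expression runs once, when "def" executes
    return t

first = f()
time.sleep(0.01)
# The default is NOT re-evaluated on each call, so every call
# returns the same timestamp captured at definition time:
assert f() == first
```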
Re: PEP idea: On Windows, subprocess should implicitly support .bat and .cmd scripts by using FindExecutable from win32 API
On Thu, May 7, 2015 at 8:03 AM, Chris Angelico ros...@gmail.com wrote: On Thu, May 7, 2015 at 11:44 PM, Marko Rauhamaa ma...@pacujo.net wrote: Whole programming cultures, idioms and right ways differ between platforms. What's the right way to write a service (daemon)? That's probably completely different between Windows and Linux. Linux itself is undergoing a biggish transformation there: an exemplary daemon of last year will likely be deprecated within a few years. And that's where a library function can be really awesome. What's the right way to daemonize? import daemonize; daemonize.daemonize() seems good to me. Maybe there's platform-specific code in the *implementation* of that, but in your application, no. That's the job of a layer underneath you. https://www.python.org/dev/peps/pep-3143/ -- https://mail.python.org/mailman/listinfo/python-list
Re: Throw the cat among the pigeons
On Tue, May 5, 2015 at 7:27 PM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: Only the minimum is statistically useful. I disagree. The minimum tells you how fast the code *can* run, under optimal circumstances. The mean tells you how fast it *realistically* runs, under typical load. Both can be useful to measure. -- https://mail.python.org/mailman/listinfo/python-list
Re: Writing list of dictionaries to CSV
On Wed, May 6, 2015 at 12:22 PM, Tim Chase python.l...@tim.thechases.com wrote: On 2015-05-06 19:08, MRAB wrote: You could tell it to quote any value that's not a number:

    w = csv.DictWriter(f, pol_keys, quoting=csv.QUOTE_NONNUMERIC)

It looks like all of the values you have are strings, so they'll all be quoted. I would hope that Excel will then treat it as a string; it would be stupid if it didn't! :-) Sadly, Excel *is* that stupid based on the tests I tried just now. :-( Regardless of whether "Mar 2015" is quoted or unquoted in the source CSV file, Excel tries to outwit you and mangles the presentation. Quoting a value in csv doesn't mean it's a string; it just means that it's a single field. You *can* force Excel to treat a value as a string by prefixing it with an apostrophe, though. -- https://mail.python.org/mailman/listinfo/python-list
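A quick illustration of QUOTE_NONNUMERIC behavior (the field values are made up for the example):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC)
# Non-numeric fields get quoted; numeric fields are written bare:
writer.writerow(["Mar 2015", 42])
assert buf.getvalue().strip() == '"Mar 2015",42'
```

Note the quotes mark a single field, not a "string type"; a consumer like Excel is free to re-interpret the content.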
Re: Throw the cat among the pigeons
On Wed, May 6, 2015 at 1:08 AM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: On Wednesday 06 May 2015 15:58, Ian Kelly wrote: On Tue, May 5, 2015 at 7:27 PM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: Only the minimum is statistically useful. I disagree. The minimum tells you how fast the code *can* run, under optimal circumstances. The mean tells you how fast it *realistically* runs, under typical load. Both can be useful to measure. Er, not even close. Running code using timeit is in no way the same as running code for real under realistic circumstances. The fact that you are running the function or code snippet in isolation, in its own scope, via exec, rather than as part of some larger script or application, should be a hint. timeit itself has overhead, so you cannot measure the time taken by the operation alone, you can only measure the time taken by the operation within the timeit environment. We have no reason to think that the distribution of noise under timeit will be even vaguely similar to the noise when running in production. You also can't be sure that the base time taken by the operation in your development environment will be comparable to the time taken in production; different system architectures may produce different results, and what is faster on your workstation may be slower on a server. Also, different algorithms may react to load differently. For example, an algorithm that goes to different parts of memory frequently may start thrashing sooner than an algorithm with better spatial locality if the system is paging a lot. I'll grant that just computing the means on a workstation that is not under a controlled load is not the best way to measure this -- but a difference in mean that is not simply proportional to the difference in min is still potentially useful information. The purpose of timeit is to compare individual algorithms, in as close as possible to an equal footing with as little noise as possible. 
If you want to profile code used in a realistic application, use a profiler, not timeit. And even that doesn't tell you how fast the code would be alone, because the profiler adds overhead. Besides, typical load is a myth -- there is no such thing. A high-end Windows web server getting ten thousand hits a minute, a virtual machine starved for RAM, a Mac laptop, a Linux server idling away with a load of 0.1 all day... any of those machines could run your code. How can you *possibly* say what is typical? The very idea is absurd. Agreed. -- https://mail.python.org/mailman/listinfo/python-list
Re: Throw the cat among the pigeons
On Wed, May 6, 2015 at 9:12 AM, Paul Rubin no.email@nospam.invalid wrote: Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes: Multiplying upwards seems to be more expensive than multiplying downwards... I can only guess that it has something to do with the way multiplication is implemented, or perhaps the memory management involved, or something. Who the hell knows? It seems pretty natural if multiplication uses the obvious quadratic-time pencil and paper algorithm. The cost of multiplying m*n is basically w(m)*w(n) where w(x) is the width of x in machine words. So for factorial where m is the counter and n is the running product, w(m) is always 1 while w(n) is basically log2(n!).

    from math import log

    def xfac(seq):
        cost = logfac = 0.0
        for i in seq:
            logfac += log(i, 2)
            cost += logfac
        return cost

    def upward(n):
        return xfac(xrange(1, n+1))

    def downward(n):
        return xfac(xrange(n, 1, -1))

    print upward(4), downward(4)

I get: 10499542692.6 11652843833.5 A lower number for upward than downward. The difference isn't as large as your timings, but I think it still gives some explanation of the effect. That was my initial thought as well, but the problem is that this actually predicts the *opposite* of what is being reported: upward should be less expensive, not more. -- https://mail.python.org/mailman/listinfo/python-list
Re: Step further with filebasedMessages
On May 5, 2015 5:46 AM, Cecil Westerhof ce...@decebal.nl wrote: Op Tuesday 5 May 2015 12:41 CEST schreef Steven D'Aprano:

    # Untested.
    def get_message_slice(message_filename, start=0, end=None, step=1):
        real_file = expanduser(message_filename)
        messages = []
        # FIXME: I assume this is expensive. Can we avoid it?
        nr_of_messages = get_nr_of_messages(real_file)

If I want to give the possibility to use negative values also, I need the value.

You could make this call only if one of the boundaries is actually negative. Then callers that provide positive values don't need to pay the cost of that case. Alternatively, consider that it's common for slices of iterators to disallow negative indices altogether, and question whether you really need that.

        the_slice = slice(start, end, step)
        # Calculate the indexes in the given slice, e.g.
        # start=1, stop=7, step=2 gives [1,3,5].
        indices = range(*(the_slice.indices(nr_of_messages)))
        with open(real_file, 'r') as f:
            for i, message in enumerate(f):
                if i in indices:
                    messages.append(message)
        return messages

I approve of using slice.indices instead of calculating the indices manually, but otherwise, the islice approach feels cleaner to me. This reads like a reimplementation of that. -- https://mail.python.org/mailman/listinfo/python-list
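For comparison, the islice approach mentioned above can be sketched like this (the function name and the demo file layout are illustrative assumptions, not the original code):

```python
import os
import tempfile
from itertools import islice

def get_message_slice(path, start=0, end=None, step=1):
    # islice consumes the file lazily, never materializing all lines;
    # the trade-off is that negative indices are not supported
    with open(path) as f:
        return list(islice(f, start, end, step))

# demo with a throwaway one-message-per-line file
path = os.path.join(tempfile.mkdtemp(), "messages.txt")
with open(path, "w") as f:
    f.write("".join("msg %d\n" % i for i in range(10)))

assert get_message_slice(path, 1, 7, 2) == ["msg 1\n", "msg 3\n", "msg 5\n"]
```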
Re: asyncio: What is the difference between tasks, futures, and coroutines?
On Tue, May 5, 2015 at 9:22 AM, Paul Moore p.f.mo...@gmail.com wrote: I'm working my way through the asyncio documentation. I have got to the Tasks and coroutines section, but I'm frankly confused as to the difference between the various things described in that section: coroutines, tasks, and futures. I think I can understand a coroutine. Correct me if I'm wrong, but it's roughly something that you can run which can suspend itself. But I don't understand what a Future is. The document just says it's almost the same as a concurrent.futures.Future, which is described as something that encapsulates the asynchronous execution of a callable. Which doesn't help a lot. In concurrent.futures, you don't create Futures, you get them back from submit(), but in the asyncio docs it looks like you can create them by hand (example Future with run_until_complete). And there's nothing that says what a Future is, just what it's like... :-( Fundamentally, a future is a placeholder for something that isn't available yet. You can use it to set a callback to be called when that thing is available, and once it's available you can get that thing from it. A Task is a subclass of Future, but the documentation doesn't say what it *is*, but rather that it schedules the execution of a coroutine. But that doesn't make sense to me - objects don't do things, they *are* things. I thought the event loop did the scheduling? In asyncio, a Task is a Future that serves as a placeholder for the result of a coroutine. The event loop manages callbacks, and that's all it does. An event that it's been told to listen for occurs, and the event loop calls the callback associated with that event. The Task manages a coroutine's interaction with the event loop; when the coroutine yields a future, the Task instructs the event loop to listen for the completion of that future, setting a callback that will resume the coroutine. 
Reading between the lines, it seems that the event loop schedules Tasks (which makes sense) and that Tasks somehow wrap up coroutines - but I don't see *why* you need to wrap a task in a coroutine rather than just scheduling coroutines. And I don't see where Futures fit in - why not just wrap a coroutine in a Future, if it needs to be wrapped up at all? The coroutines themselves are not that interesting of an interface; all you can do with them is resume them. The asynchronous execution done by asyncio is all based on futures. Because a coroutine can easily be wrapped in a Task, this allows for coroutines to be used anywhere a future is expected. I don't know if I've done a good job explaining, but I hope this helps. -- https://mail.python.org/mailman/listinfo/python-list
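The coroutine-wrapped-in-a-Task relationship described above can be sketched in a few lines (shown with modern async/await syntax and asyncio.run, which postdate this 2015 thread; the original discussion used yield-from coroutines, but the Task/Future relationship is the same):

```python
import asyncio

async def compute():
    await asyncio.sleep(0)   # yield control to the event loop once
    return 42

async def main():
    # ensure_future wraps the coroutine in a Task...
    task = asyncio.ensure_future(compute())
    # ...and a Task is-a Future: a placeholder for the coroutine's result
    assert isinstance(task, asyncio.Future)
    # awaiting the Task suspends until the coroutine completes
    return await task

result = asyncio.run(main())
assert result == 42
```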
Re: Throw the cat among the pigeons
On Tue, May 5, 2015 at 12:45 PM, Dave Angel da...@davea.name wrote: When the simple is True, the function takes noticeably and consistently longer. For example, it might take 116 instead of 109 seconds. For the same counts, your code took 111. I can't replicate this. What version of Python is it, and what value of x are you testing with? I've looked at dis.dis(factorial_iterative), and can see no explicit reason for the difference. My first thought is that maybe it's a result of the branch. Have you tried swapping the branches, or reimplementing as separate functions and comparing? -- https://mail.python.org/mailman/listinfo/python-list
Re: Throw the cat among the pigeons
On Tue, May 5, 2015 at 3:23 PM, Ian Kelly ian.g.ke...@gmail.com wrote: On Tue, May 5, 2015 at 3:00 PM, Dave Angel da...@davea.name wrote:

    def loop(func, funcname, arg):
        start = time.time()
        for i in range(repeats):
            func(arg, True)
        print("{0}({1}) took {2:7.4}".format(funcname, arg, time.time()-start))
        start = time.time()
        for i in range(repeats):
            func(arg)
        print("{0}({1}) took {2:7.4}".format(funcname, arg, time.time()-start))

Note that you're explicitly passing True in one case but leaving the default in the other. I don't know whether that might be responsible for the difference you're seeing. I don't think that's the cause, but I do think that it has something to do with the way the timing is being run. When I run your loop function, I do observe the difference. If I reverse the order so that the False case is tested first, I observe the opposite. That is, the slower case is consistently the one that is timed *first* in the loop function, regardless of which case that is. -- https://mail.python.org/mailman/listinfo/python-list
Re: Throw the cat among the pigeons
On Tue, May 5, 2015 at 3:00 PM, Dave Angel da...@davea.name wrote:

    def loop(func, funcname, arg):
        start = time.time()
        for i in range(repeats):
            func(arg, True)
        print("{0}({1}) took {2:7.4}".format(funcname, arg, time.time()-start))
        start = time.time()
        for i in range(repeats):
            func(arg)
        print("{0}({1}) took {2:7.4}".format(funcname, arg, time.time()-start))

Note that you're explicitly passing True in one case but leaving the default in the other. I don't know whether that might be responsible for the difference you're seeing. Also, it's best to use the timeit module for timing code, e.g.:

    >>> from timeit import Timer
    >>> t1 = Timer("factorial_iterative(10, False)",
    ...            "from __main__ import factorial_iterative")
    >>> t1.repeat(10, number=1)
    [3.8517245299881324, 3.7571076710009947, 3.7780062559759244, 3.848508063936606, 3.7627131739864126, 3.8278848479967564, 3.776115525048226, 3.83024005102925, 3.8322679550619796, 3.8195601429324597]
    >>> min(_), sum(_) / len(_)
    (3.7571076710009947, 3.8084128216956743)
    >>> t2 = Timer("factorial_iterative(10, True)",
    ...            "from __main__ import factorial_iterative")
    >>> t2.repeat(10, number=1)
    [3.8363616950809956, 3.753201302024536, 3.7838632150087506, 3.7670978900277987, 3.805312803015113, 3.7682680500438437, 3.856655619922094, 3.796431727008894, 3.8224815409630537, 3.765664782957174]
    >>> min(_), sum(_) / len(_)
    (3.753201302024536, 3.7955338626052253)

As you can see, in my testing the True case was actually marginally (probably not significantly) faster in both the min and the average. -- https://mail.python.org/mailman/listinfo/python-list
Re: when does newlines get set in universal newlines mode?
On Mon, May 4, 2015 at 9:17 AM, Peter Otten __pete...@web.de wrote: OK, you convinced me. Then I tried:

    >>> with open("tmp.txt", "wb") as f:
    ...     f.write("0\r\n3\r5\n7")
    ...
    >>> assert len(open("tmp.txt", "rb").read()) == 8
    >>> f = open("tmp.txt", "rU")
    >>> f.readline()
    '0\n'
    >>> f.newlines
    >>> f.tell()
    3
    >>> f.newlines
    '\r\n'

Hm, so tell() moves the file pointer? Is that sane? If I call readline() followed by tell(), I expect the result to be the position of the start of the next line. Maybe this is considered safe because tell() on a pipe raises an exception? -- https://mail.python.org/mailman/listinfo/python-list
Re: Bitten by my C/Java experience
On Mon, May 4, 2015 at 9:20 AM, Cecil Westerhof ce...@decebal.nl wrote: Potential dangerous bug introduced by programming in Python as if it was C/Java. :-( I used: ++tries that has to be: tries += 1 Are there other things I have to be careful on? That does not work as in C/Java, but is correct syntax.

Some other gotchas that aren't necessarily related to C/Java but can be surprising nonetheless:

* () is a zero-element tuple, and (a, b) is a two-element tuple, but (a) is not a one-element tuple. Tuples are created by commas, not parentheses, so use (a,) instead.

* Default function arguments are created at definition time, not at call time. So if you do something like:

      def foo(a, b=[]):
          b.append(a)
          print(b)

  The b list will be the same list on each call and will retain all changes from previous calls.

* super() doesn't do what you might expect in multiple inheritance situations, particularly if you're coming from Java where you never have to deal with multiple inheritance. It binds to the next class in the method resolution order, *not* necessarily the immediate superclass. This also means that the particular class bound to can vary depending on the specific class of the object.

* [[None] * 8] * 8 doesn't create a 2-dimensional array of None. It creates one list containing None 8 times, and then it creates a second list containing the first list 8 times, *not* a list of 8 distinct lists.

* If some_tuple is a tuple containing a list, then some_tuple[0] += ['foo'] will concatenate the list *but* will also raise a TypeError when it tries to reassign the list back to the tuple.

-- https://mail.python.org/mailman/listinfo/python-list
Re: Bitten by my C/Java experience
On Mon, May 4, 2015 at 11:59 AM, Mark Lawrence breamore...@yahoo.co.uk wrote: On 04/05/2015 16:20, Cecil Westerhof wrote: Potential dangerous bug introduced by programming in Python as if it was C/Java. :-( I used: ++tries that has to be: tries += 1 Are there other things I have to be careful on? That does not work as in C/Java, but is correct syntax. Not dangerous at all, your test code picks it up. I'd also guess, but don't actually know, that one of the various linter tools could be configured to find this problem. pylint reports it as an error. -- https://mail.python.org/mailman/listinfo/python-list
Re: Why from en to two times with sending email
On Mon, May 4, 2015 at 12:59 PM, Cecil Westerhof ce...@decebal.nl wrote: I want to change an old Bash script to Python. When I look at: https://docs.python.org/2/library/email-examples.html Then from and to have to be used two times? Why is that? Once to construct the message headers, and once to instruct the SMTP server where to send the message. These are not required to agree; for instance, bcc recipients need to be supplied to the server but aren't included in the headers. -- https://mail.python.org/mailman/listinfo/python-list
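The header/envelope split can be sketched like this (the addresses are hypothetical; the sendmail call is shown only in a comment since it needs a live SMTP server):

```python
from email.mime.text import MIMEText

msg = MIMEText("Meeting at noon.")
msg["From"] = "alice@example.com"   # headers: what recipients *see*
msg["To"] = "bob@example.com"

# The envelope is passed separately to the server, e.g.
#   smtplib.SMTP(host).sendmail(envelope_from, envelope_to, msg.as_string())
# Bcc recipients belong in the envelope only, never in the headers:
envelope_from = "alice@example.com"
envelope_to = ["bob@example.com", "carol@example.com"]  # carol is bcc'd

# carol receives the message but never appears in its text:
assert "carol" not in msg.as_string()
```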
Re: Python is not bad ;-)
On Sat, May 2, 2015 at 4:35 AM, Dave Angel da...@davea.name wrote: I can't see how that is worth doing. The recursive version is already a distortion of the definition of factorial that I learned. And to force it to be recursive and also contort it so it does the operations in the same order as the iterative version, just to gain performance? If you want performance on factorial, write it iteratively, in as straightforward a way as possible. Or just call the library function. Or if you really want to write it functionally:

    from functools import reduce
    from operator import mul

    def product(iterable):
        return reduce(mul, iterable, 1)

    def factorial(n):
        return product(range(1, n+1))

For Python 2, delete the first import and replace range with xrange. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python is not bad ;-)
On Sat, May 2, 2015 at 5:42 AM, Marko Rauhamaa ma...@pacujo.net wrote: Christian Gollwitzer aurio...@gmx.de: That's why I still think it is a microoptimization, which helps only in some specific cases. It isn't done for performance. It's done to avoid a stack overflow exception. If your tree is balanced, then the number of items you would need to have to get a stack overflow exception would be approximately 2 ** 1000, which you can't possibly hope to fit into memory. If your tree is unbalanced and you're getting a stack overflow exception, then maybe you should think about balancing it. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python is not bad ;-)
On Sat, May 2, 2015 at 9:53 AM, Joonas Liik liik.joo...@gmail.com wrote: Top-posting is heavily frowned at on this list, so please don't do it. Balancing of trees is kind of irrelevant when tree means search space no? I think it's relatively rare that DFS is truly the best algorithm for such a search. For one thing, search space often means graph, not tree. And for any other type of search, you'll want/need to implement it iteratively rather than recursively anyway. And you definitely don't need to keep the entire tree in memory at the same time. You could harness every single storage device on the planet and you would still not have nearly enough capacity to fill a balanced search tree to a depth of 1000. Also should not-running-out-of-call-stack really be the main reason to balance trees? That sounds like an optimisation to me .. It is. My point was that if your unbalanced search tree is getting to a depth of 1000, then it's probably long past time for you to start thinking about optimizing it *anyway*. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python is not bad ;-)
On Sat, May 2, 2015 at 9:55 AM, Chris Angelico ros...@gmail.com wrote: On Sun, May 3, 2015 at 1:45 AM, Ian Kelly ian.g.ke...@gmail.com wrote: On Sat, May 2, 2015 at 5:42 AM, Marko Rauhamaa ma...@pacujo.net wrote: Christian Gollwitzer aurio...@gmx.de: That's why I still think it is a microoptimization, which helps only in some specific cases. It isn't done for performance. It's done to avoid a stack overflow exception. If your tree is balanced, then the number of items you would need to have to get a stack overflow exception would be approximately 2 ** 1000, which you can't possibly hope to fit into memory. If your tree is unbalanced and you're getting a stack overflow exception, then maybe you should think about balancing it. That's assuming it's a search tree, where you *can* just think about balancing it. What if it's a parse tree? Let's say you're walking the AST of a Python module, looking for all functions that contain 'yield' or 'yield from' (ie generator functions). To do that, you need to walk the entire depth of the tree, no matter how far that goes. I'm not sure how complex a piece of code would have to be to hit 1000, but it wouldn't be hard to have each level of tree cost you two or three stack entries, so that could come down to just a few hundred. Or you just iterate over the ast.walk generator, which uses a deque rather than recursion. -- https://mail.python.org/mailman/listinfo/python-list
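The generator-function search mentioned above can be sketched with ast.walk, which traverses the tree with a deque rather than recursion and so never risks a stack overflow on deep trees (the source snippet is illustrative):

```python
import ast

source = """
def gen():
    yield 1

def plain():
    return 2
"""
tree = ast.parse(source)

# Find every function definition whose body contains a yield or
# yield-from expression anywhere beneath it:
generators = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
    and any(isinstance(child, (ast.Yield, ast.YieldFrom))
            for child in ast.walk(node))
]
assert generators == ["gen"]
```

(For nested functions a real tool would need to attribute each yield to its innermost enclosing function, but the walk itself stays iterative either way.)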
Re: l = range(int(1E9))
On Sat, May 2, 2015 at 1:51 PM, BartC b...@freeuk.com wrote: On 02/05/2015 20:15, Mark Lawrence wrote: On 02/05/2015 19:34, BartC wrote: OK, so it's the programmer's fault if as fundamental a concept as a for-loop ranging over integers is implemented inefficiently. He has to transform it into high-level terms, or has to reconstruct it somehow using a while-loop and an incrementing loop index. I give up. So do I, I think, if no-one is willing to admit that the original way of implementing range() was a glaring mistake. range() was and is a *convenience* function. In the real world, the vast majority of for loops are over arrays or other containers, not integers, and those that aren't are usually very small. In non-toy code, using a for loop to count to a billion is highly unusual. So yeah, for a programmer porting code to Python who needed to loop over an array, the correct approach would be to actually loop over the *array* in place of the indices of the array. I don't know why you make this out to be such a big deal; it's a simple conversion. Would it have been better if range() had been implemented as xrange() from the beginning? Sure, that would have been great. Except for one small detail: the iterator protocol didn't exist back then. That wasn't introduced until PEP 234 in Python 2.1, which means that the xrange() function wasn't even *possible* before then. I don't think anybody would claim that Python was perfect when it was first introduced (nor is it perfect now). Like all other software, it has improved over time as a result of iterative refinement. -- https://mail.python.org/mailman/listinfo/python-list
Re: l = range(int(1E9))
On Sat, May 2, 2015 at 3:28 PM, Tony the Tiger tony@tiger.invalid wrote: On Fri, 01 May 2015 14:42:04 +1000, Steven D'Aprano wrote: use l as a variable name, as it looks too much like 1 If you use a better font, they are very different. Besides, a variable name cannot start with a digit (nor can it be a single digit), so it's a given that it's an 'l'. Of course it can be a single digit. You're assuming that the person reading the code already somehow knows that this is a variable name and not an integer literal. -- https://mail.python.org/mailman/listinfo/python-list
Re: Inner workings of this Python feature: Can a Python data structure reference itself?
On Sat, May 2, 2015 at 7:57 PM, Tim Chase python.l...@tim.thechases.com wrote: So it sounds like you have to request such a mark-and-sweep from the gc module. You *can* request it. But as long as it hasn't been explicitly disabled (by calling gc.disable()), the mark-and-sweep garbage collection will also run automatically -- just not necessarily immediately. -- https://mail.python.org/mailman/listinfo/python-list
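A minimal demonstration with an explicitly requested collection (CPython shown; gc.collect() returns the number of unreachable objects found during that pass):

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # reference cycle: refcounts can never reach zero
del a, b                      # ...so refcounting alone won't free the Nodes

unreachable = gc.collect()    # an explicit cycle-detection pass
assert unreachable >= 2       # both Nodes (plus their __dict__s) were found
```

Without the explicit call, the same pass would still happen automatically once enough allocations accumulate, as long as gc.disable() hasn't been called.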
Re: Inner workings of this Python feature: Can a Python data structure reference itself?
On Sat, May 2, 2015 at 2:17 PM, Tim Chase python.l...@tim.thechases.com wrote: If you know that you're creating such cyclical structures, it's best to manually unlink them before freeing them: lst = [] lst.append(lst) # create the cycle lst[:] = [] # break the cycle # or lst.remove(lst) # though this takes more care del lst In general, this shouldn't be necessary. I believe that reference cycles are guaranteed to be cleaned up in all major implementations of Python, except that in CPython prior to version 3.4 reference cycles containing objects with finalizers would not be collected. So the better advice would be don't use finalizers in reference cycles if you need compatibility with Python 3.3 or earlier. -- https://mail.python.org/mailman/listinfo/python-list
Re: l = range(int(1E9))
On Sat, May 2, 2015 at 5:51 PM, Terry Reedy tjre...@udel.edu wrote: On 5/2/2015 5:31 PM, Ian Kelly wrote: Would it have been better if range() had been implemented as xrange() from the beginning? Sure, that would have been great. Except for one small detail: the iterator protocol didn't exist back then. For loops originally used the getitem iterator protocol. xrange objects have a __getitem__ method, but not __iter__ or __next__. As Mark pointed out, they were introduced in 1993. I'm aware of getitem iterators; just didn't realize that xrange used it or was that old. -- https://mail.python.org/mailman/listinfo/python-list
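The getitem iteration protocol still works today and is easy to demonstrate (the Squares class is a made-up example):

```python
class Squares:
    """Old-style sequence iteration: no __iter__ or __next__ needed."""
    def __getitem__(self, index):
        # A for-loop falls back to calling __getitem__ with 0, 1, 2, ...
        # until IndexError is raised, which signals the end.
        if index >= 5:
            raise IndexError(index)
        return index * index

assert list(Squares()) == [0, 1, 4, 9, 16]
```

This is exactly how for-loops consumed xrange objects before the iterator protocol of PEP 234 existed.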
Re: Is my implementation of happy number OK
On Fri, May 1, 2015 at 2:27 AM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: Rather than 10**7, how about trying (10**500 + 2). Is it happy? Using the Python code from Wikipedia: https://en.wikipedia.org/wiki/Happy_number

    SQUARE = dict([(c, int(c)**2) for c in "0123456789"])

    def is_happy(n):
        while (n > 1) and (n != 4):
            n = sum(SQUARE[d] for d in str(n))
        return n == 1

I can calculate whether n=10**500 + 2 is happy in less than a millisecond on my computer. Not really the most exciting example, since the following number in the sequence would be 5. But a random sequence of 500 non-zero digits doesn't seem to take substantially longer. -- https://mail.python.org/mailman/listinfo/python-list
Re: Is my implementation of happy number OK
On Thu, Apr 30, 2015 at 9:59 AM, Cecil Westerhof ce...@decebal.nl wrote: I implemented happy_number function:

    _happy_set = { '1' }
    _unhappy_set = set()

    def happy_number(n):
        """Check if a number is a happy number
        https://en.wikipedia.org/wiki/Happy_number
        """
        def create_current(n):
            current_array = sorted([value for value in str(n) if value != '0'])

A generator expression in place of the list comprehension would avoid creating an intermediate list, which would save memory on extremely large numbers. Not sure without testing how it would compare on small numbers. For reference, since size here refers to number of digits, 1E8 would be considered tiny. Also for large numbers, there might be a smarter data structure to use than just sorting the digits, which is O(n log n). Simplest would probably just be a 9-element tuple containing the count of each non-zero digit, which would be only O(n) to build.

            return (current_array, ''.join(current_array))

You don't seem to be actually using current_array for anything, so why not just return the string?

        global _happy_set
        global _unhappy_set

        current_run = set()
        current_array, \
        current_string = create_current(n)

As a stylistic rule, avoid line continuations using \. Prefer using unclosed parentheses for line continuations, e.g. the above could be written as:

        (current_array,
         current_string) = create_current(n)

But really, the above is short enough it could just be written on a single line. Also, I feel like the prefix current_ is used so much here that it loses its meaning. All these variable names would be better without it.

        if current_string in _happy_set: return True
        if current_string in _unhappy_set: return False

Instead of two sets, consider using a single _happy_dict with values True for happy and False for unhappy.
Then the above becomes:

        if current_string in _happy_dict:
            return _happy_dict[current_string]

        while True:
            current_run.add(current_string)
            current_array, \
            current_string = create_current(sum([int(value) ** 2
                                                 for value in current_string]))

Since there are only 9 values that get squared, you could precompute the squares to avoid squaring them over and over again.

            if current_string in current_run | _unhappy_set:

In case the sets get large it might be better to avoid creating the union and just do:

            if current_string in current_run or current_string in _unhappy_set:

                _unhappy_set |= current_run
                return False
            if current_string in _happy_set:
                _happy_set |= current_run
                return True

-- https://mail.python.org/mailman/listinfo/python-list
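The 9-element digit-count idea suggested above can be sketched like this (the `digit_counts` name is invented for illustration): instead of sorting the digits, count each non-zero digit, which produces a canonical key in O(n) rather than O(n log n):

```python
def digit_counts(n):
    """Canonical key for a number: counts of digits 1-9, ignoring zeros.

    Two numbers whose non-zero digits are permutations of each other
    share the same key, just like the sorted-string approach, but the
    key is built in a single O(n) pass over the digits.
    """
    counts = [0] * 9
    for ch in str(n):
        d = int(ch)
        if d:  # zeros contribute nothing to the digit-square sum
            counts[d - 1] += 1
    return tuple(counts)

print(digit_counts(13) == digit_counts(3100))  # True: same non-zero digits
```

A tuple is used so the key is hashable and can go straight into the happy/unhappy cache.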
Re: mixing set and list operations
On Thu, Apr 30, 2015 at 10:07 AM, Tim jtim.arn...@gmail.com wrote: I noticed this today, using Python 2.7 or 3.4, and wondered if it is implementation-dependent: you can use 'extend' to add set elements to a list and use 'update' to add list elements to a set. It's not implementation-dependent. Both methods are documented as accepting arbitrary iterables. The same is also true for the other foo_update set methods (and is generally true of built-ins). It is *not* true for the operator versions of the set methods, however (|, -, &, ^). It's also true for dict.update, except that in this case if an iterable is passed instead of a mapping, then each element of the iterable must be a 2-element iterable. -- https://mail.python.org/mailman/listinfo/python-list
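A short sketch of the asymmetry described above: the named methods take any iterable, while the operator forms insist that both operands be sets.

```python
s = {1, 2}
s.update([2, 3])        # set.update accepts a list (any iterable)
print(sorted(s))        # [1, 2, 3]

lst = [0]
lst.extend({4, 5})      # list.extend accepts a set
print(len(lst))         # 3 (set iteration order is arbitrary, so check length)

d = {}
d.update([('a', 1), ('b', 2)])  # dict.update: iterable of 2-element iterables
print(d)                # {'a': 1, 'b': 2}

try:
    s | [2, 3]          # operator form: both operands must be sets
except TypeError:
    print('TypeError: | requires a set on both sides')
```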
Re: Wrote a memoize function: I do not mind feedback
On Wed, Apr 29, 2015 at 12:06 AM, Cecil Westerhof ce...@decebal.nl wrote: Op Monday 27 Apr 2015 22:35 CEST schreef Albert-Jan Roskam:

    def some_func(arg, _memoize={}):
        try:
            return _memoize[arg]
        except KeyError:
            result = some_expensive_operation(arg)
            _memoize[arg] = result
            return result

That is in a way the same as what I do, except I do not use an exception. I understand it is not as expensive as it was anymore, but I do not like to use an exception (when not necessary).

It's useful to keep in mind which case it is that you're trying to optimize. The expensive case for exceptions is when one actually gets raised. A try that doesn't raise an exception is pretty cheap, probably cheaper than looking up the key in the dict twice as the code you linked does. Compare:

    >>> from timeit import timeit
    >>> timeit("if x in y: y[x]", setup="x = 1; y = {1: 2}")
    0.2224265851985905
    >>> timeit("""
    ... try:
    ...     y[x]
    ... except KeyError:
    ...     pass
    ... """, setup="x = 1; y = {1: 2}")
    0.15237198271520924

On the other hand, if the KeyError does get raised, then it will be more expensive, but that would already be the expensive case for the memoized function, and if that computation isn't significantly more expensive than the cost of raising an exception, then it probably isn't worth memoizing in the first place. Regardless of which way you write it, this sort of micro-optimization hardly ever matters, and in the situations where it does matter, you'll gain much more performance by rewriting in C than by obsessively tuning your Python code. -- https://mail.python.org/mailman/listinfo/python-list
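Packaged as a decorator, the try/except pattern above looks like this (a minimal sketch; for real code, `functools.lru_cache` in the standard library already provides this):

```python
import functools

def memoize(func):
    """Cache func's results; the common (cached) path is one dict lookup."""
    cache = {}

    @functools.wraps(func)
    def wrapper(arg):
        try:
            return cache[arg]                # cheap when the key is present
        except KeyError:
            result = cache[arg] = func(arg)  # expensive path: compute once
            return result

    return wrapper

@memoize
def square(n):
    return n * n

print(square(12), square(12))  # 144 144 (the second call hits the cache)
```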
Re: Lucky numbers in Python
On Wed, Apr 29, 2015 at 12:24 PM, Cecil Westerhof ce...@decebal.nl wrote: I was wondering if there is a way to do this:

    for del_index in range((sieve_len // skip_count) * skip_count - 1,
                           skip_count - 2, -skip_count):
        del sieve[del_index]

in a more efficient way.

You can delete using slices:

    del sieve[(sieve_len // skip_count) * skip_count - 1 : skip_count - 2 : -skip_count]

Now you no longer need to do the iteration in reverse, which makes the slicing simpler:

    del sieve[skip_count - 1 : (sieve_len // skip_count) * skip_count : skip_count]

And although it's not clear to me what this is supposed to be doing, you probably no longer need the middle term if the intention is to continue deleting all the way to the end of the list (if it is, then I think you have a bug in the existing implementation, since the last item in the list can never be deleted):

    del sieve[skip_count - 1 :: skip_count]

-- https://mail.python.org/mailman/listinfo/python-list
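A quick illustration of that extended-slice deletion on toy data (values and skip_count invented for the example): one statement removes every third element, no reverse loop needed.

```python
sieve = list(range(1, 13))   # [1, 2, ..., 12]
skip_count = 3

# Delete every skip_count-th element, i.e. indexes 2, 5, 8, 11.
del sieve[skip_count - 1 :: skip_count]

print(sieve)  # [1, 2, 4, 5, 7, 8, 10, 11] -- 3, 6, 9, 12 are gone
```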
Re: Useful module to be written by a newbie
On Wed, Apr 29, 2015 at 1:03 PM, Peter Otten __pete...@web.de wrote: I was judging from the look of your MovingAverage. I don't like the interface; it really should take an iterable so that you can write:

    >>> list(moving_average([1, 2, 3], 2))
    [1.5, 2.5]

The problem with this is that many use cases for moving averages need to access the current average before future items become available. For example, a download speed indicator: it needs to average the transfer rate over the last few seconds. There will be more transfer rate data in the future, but it's not available yet, so you can't add it to the iterable unless the iterable is actually some sort of read/write buffer that can be appended to once iteration has already started. That seems overly complex, though; for one, you need to be careful not to exhaust the buffer, since the iteration protocol requires that once next() raises StopIteration, it will always raise StopIteration. But also, why add the data to object A so that it can be consumed by object B if you could just add it to object B directly? So I think the iterable interface that you're describing really needs to be a coroutine of some sort:

    >>> from collections import deque
    >>> def moving_average(length):
    ...     values = deque([(yield)], maxlen=length)
    ...     while True:
    ...         values.append((yield sum(values) / len(values)))
    ...
    >>> mavg = moving_average(5)
    >>> next(mavg)
    >>> mavg.send(1)
    1.0
    >>> mavg.send(2)
    1.5
    >>> mavg.send(3)
    2.0
    >>> mavg.send(4)
    2.5
    >>> mavg.send(5)
    3.0
    >>> mavg.send(6)
    4.0
    >>> mavg.send(7)
    5.0

This works, but I don't really like it. For one, our moving_average iterable isn't really an iterable any more; we need to call send instead of next, which means we can't just stick it inside a for loop. And if we can't iterate over it, then what's the point of using a generator? If we make it a class, then we can give it a more flexible API.
    class MovingAverage(object):
        def __init__(self, length):
            self.values = deque(maxlen=length)

        def append(self, value):
            self.values.append(value)

        def average(self):
            return sum(self.values) / len(self.values)

Which is pretty much back to where we started. -- https://mail.python.org/mailman/listinfo/python-list
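For comparison, here is the same sequence of values fed through the class version — a self-contained sketch of the usage, mirroring the coroutine session above:

```python
from collections import deque

class MovingAverage(object):
    """Rolling mean over the last `length` values appended."""
    def __init__(self, length):
        self.values = deque(maxlen=length)  # old values fall off automatically

    def append(self, value):
        self.values.append(value)

    def average(self):
        return sum(self.values) / len(self.values)

mavg = MovingAverage(5)
for v in [1, 2, 3, 4, 5, 6, 7]:
    mavg.append(v)
print(mavg.average())  # 5.0 -- mean of the last five values (3, 4, 5, 6, 7)
```

Unlike the coroutine, this object can be appended to and queried independently, which fits the download-speed-indicator use case described above.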
Re: Lucky numbers in Python
On Wed, Apr 29, 2015 at 3:45 PM, Cecil Westerhof ce...@decebal.nl wrote: Op Wednesday 29 Apr 2015 21:57 CEST schreef Ian Kelly: And although it's not clear to me what this is supposed to be doing, you probably no longer need the middle term if the intention is to continue deleting all the way to the end of the list (if it is then I think you have a bug in the existing implementation, since the last item in the list can never be deleted). What do you mean by this? Executing: lucky_numbers(5) gives: [1, 3] So the last element (5) is deleted. Off by one error on my part. This is why negative skip values on ranges and slices are not recommended: they're confusing. :-) In that case you can definitely omit the middle term of the slice, which will be both more concise and clearer in intent, though probably not significantly faster. -- https://mail.python.org/mailman/listinfo/python-list
Re: Lucky numbers in Python
On Wed, Apr 29, 2015 at 6:01 PM, Cecil Westerhof ce...@decebal.nl wrote: Op Thursday 30 Apr 2015 00:38 CEST schreef Ian Kelly: In that case you can definitely omit the middle term of the slice, which will be both more concise and clearer in intent, though probably not significantly faster. It is certainly not faster. It is even significantly slower. With the middle term lucky_numbers(int(1e6)) takes 0.13 seconds. Without it takes 14.3 seconds. A hundred times as long. That would be rather surprising, since it's the same operation being performed, so I did my own timing and came up with 0.25 seconds (best of 3) with the middle term and 0.22 seconds without. I suspect that you tested it as del sieve[skip_count - 1 : skip_count] (which would delete only one item) rather than del sieve[skip_count - 1 :: skip_count]. -- https://mail.python.org/mailman/listinfo/python-list
Re: Lucky numbers in Python
On Wed, Apr 29, 2015 at 6:11 PM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: On Thu, 30 Apr 2015 05:57 am, Ian Kelly wrote: On Wed, Apr 29, 2015 at 12:24 PM, Cecil Westerhof ce...@decebal.nl wrote: I was wondering if there is a way to do this: for del_index in range((sieve_len // skip_count) * skip_count - 1, skip_count - 2, -skip_count): del sieve[del_index] in a more efficient way. You can delete using slices. del sieve[(sieve_len // skip_count) * skip_count - 1 : skip_count - 2 : -skip_count] Now you no longer need to do the iteration in reverse, which makes the slicing simpler: del sieve[skip_count - 1 : (sieve_len // skip_count) * skip_count : skip_count] True, but *probably* at the expense of speed. When you delete items from a list, the remaining items have to be moved, which takes time, especially for very large lists. Most of the time, rather than deleting items, it is faster to set them to a placeholder (for example None) and then copy the ones which aren't None in a separate loop: You're correct, but I think this would be difficult to apply to the OP's algorithm since the list indexing depends on the items from previous iterations having been removed. -- https://mail.python.org/mailman/listinfo/python-list
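The placeholder technique Steven describes can be sketched like this (a standalone toy example; as noted above, it doesn't transfer directly to the lucky-number sieve, whose indexing assumes earlier items were really removed): mark items with None, then compact in one pass instead of shifting the tail on every deletion.

```python
sieve = list(range(1, 13))   # [1, 2, ..., 12]
skip_count = 3

# Mark every skip_count-th element instead of deleting it in place.
for i in range(skip_count - 1, len(sieve), skip_count):
    sieve[i] = None

# Compact in a single O(n) pass; no repeated element shifting.
sieve = [x for x in sieve if x is not None]

print(sieve)  # [1, 2, 4, 5, 7, 8, 10, 11] -- same result as the slice delete
```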