Re: How this C function was called through ctypes this way?

2016-02-04 Thread eryk sun
On Thu, Feb 4, 2016 at 3:33 AM,   wrote:
>
> class DoubleArrayType:
> def from_param(self, param):
>
> [snip]
>
> DoubleArray = DoubleArrayType()
> _avg = _mod.avg
> _avg.argtypes = (DoubleArray, ctypes.c_int)
>
> [snip]
>
> What confuses me is:
> (1) at the line: _avg.argtypes = (DoubleArray, ctypes.c_int)
> "DoubleArray" is an instance of the class "DoubleArrayType".
> Can it appear where a type is expected?

ctypes generally (notwithstanding out parameters defined via
paramflags) requires only that an object set in argtypes has a
from_param callable to check and convert the corresponding argument.
Usually classes are set in argtypes, so usually from_param is a
classmethod. In this case the author chose to set an instance in
argtypes and use an instance method. While unusual, this works all the
same.
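
To illustrate, here's a minimal sketch of an *instance* with a from_param
method used in argtypes (my example, not from the original post; it assumes
a POSIX system, where ctypes.CDLL(None) exposes the C library):

```python
import ctypes

class Negator:
    # ctypes calls from_param on each argtypes entry to check and
    # convert the corresponding argument; an instance works as well
    # as a class with a classmethod.
    def from_param(self, obj):
        return ctypes.c_int(-obj)

libc = ctypes.CDLL(None)          # POSIX: the process's C library
libc.abs.argtypes = (Negator(),)  # an *instance* in argtypes
libc.abs.restype = ctypes.c_int
print(libc.abs(7))                # from_param negates to -7; abs() -> 7
```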

> (2) How is the method "from_param" invoked? I can't see any
> mechanism to reach it from the "_avg(values, len(values))" call.

A ctypes function pointer is a callable implemented in C. Its tp_call
slot function is PyCFuncPtr_call in Modules/_ctypes/_ctypes.c, which
in turn calls _ctypes_callproc in Modules/_ctypes/callproc.c. Here's the
snippet of code from _ctypes_callproc that's responsible for calling
the from_param converter:

converter = PyTuple_GET_ITEM(argtypes, i);
v = PyObject_CallFunctionObjArgs(converter, arg, NULL);

Note that "argtypes" in the above is a misnomer; it's actually the
tuple of from_param converters.

Each from_param return value is passed to ConvParam, which handles
ctypes objects, integers, strings, and None. If the object isn't one of
those but has an _as_parameter_ attribute, ConvParam calls itself
recursively using the _as_parameter_ value.
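
A minimal sketch of that _as_parameter_ hook (again my own illustration,
assuming POSIX):

```python
import ctypes

class MyInt:
    # ConvParam falls back to this attribute for objects that are not
    # ctypes objects, integers, strings, or None.
    def __init__(self, value):
        self._as_parameter_ = ctypes.c_int(value)

libc = ctypes.CDLL(None)        # POSIX: the process's C library
libc.abs.restype = ctypes.c_int
print(libc.abs(MyInt(-3)))      # ConvParam follows _as_parameter_ -> 3
```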

> _mod = ctypes.cdll.LoadLibrary(_path)

Calling ctypes.CDLL directly is preferable since it allows passing
parameters such as "mode" and "use_errno".

IMO, the ctypes.cdll and ctypes.windll loaders should be avoided in
general, especially on Windows, since their attribute-based access
(e.g. windll.user32) caches libraries, which in turn cache
function-pointer attributes. You don't want function pointer instances
being shared across unrelated packages. They may not use compatible
prototypes and errcheck functions. Each package, module, or script
should create private instances of CDLL, PyDLL, and WinDLL for a given
shared library.
-- 
https://mail.python.org/mailman/listinfo/python-list


Non working Parallel videoplayer

2016-02-04 Thread mdelamo90
I have coded a program with Python and VLC that plays some videos, but whenever 
I try to play 3 videos at once, Windows closes the program. I'm guessing that 
the reason is that one process can't play 3 videos at once (but I don't really 
know).

My trimmed program plays one video (through a global variable, so it's easy to 
change), and once you press the number 2 on the keyboard it puts the data for a 
video on the queue to be played by another process.

The new process starts and makes some prints; then it should start the new 
video and another one, so we should have two videos playing, one on the left 
part of the screen and the other on the right.

I've done this in a linear (single-process) version, but I can't get the new 
process to start the videos. I come from a C/C++ background, so I don't know if 
it's something I didn't fully grasp about multiprocessing in Python or 
something else that I'm doing wrong.

Here is the linear working version (commented lines 206-220) and the parallel 
non-working version (lines 221-225). Comment/uncomment to toggle between them.

It's huge for a mini-example (383 lines), but I didn't know how to make it 
shorter (it comes from a 3000+ line project):



# import external libraries
import wx # 2.8
import vlc
import pdb
from time import sleep
# import standard libraries
import multiprocessing  # needed for Consumer below; missing in the original
import user
import urllib

VIDEO1 = "Insert_your_first_video.mp4"
VIDEO2 = "Insert_your_second_video.mp4"


class MyOpener(urllib.FancyURLopener):
version = "App/1.7"  # doesn't work
version = "Mozilla/4.0 (MSIE 6.0; Windows NT 5.0)2011-03-10 15:38:34"  # works


class Consumer(multiprocessing.Process):
def __init__(self, task_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue

def run(self):
app = wx.App()
proc_name = self.name

print "waiting for queue"

while True:  # While queue not empty
print self.task_queue.qsize()
next_task = self.task_queue.get()
global dual, midsection
midsection = next_task.d
dual = True
player2 = Player("Dual PyVLC Player")
player2.Centre()
# show the player window centred and run the application
print "parametro a " + str(next_task.a)
print "parametro b " + str(next_task.b)
print "parametro c " + str(next_task.c)
print "parametro d " + str(next_task.d)
  # tasks.put(Task(media, dual, time, midsection))

player2.playFile(VIDEO1,next_task.c,True)
player2.playFile(VIDEO2,next_task.c,False)
#player.vidplayer1.set_media(next_task.a)
#player.vidplayer2.play()
player2.Maximize(True)
player2.OnFullscreen(None)
player2.SetTransparent(255)
player2.SetFocus()
player2.Show(True)
#sleep(1)
#player2.vidplayer1.set_title(next_task.a)
'''player1.SetTransparent(0)
player1.timer1.Start(fadetime)
player1.set_amount(0)'''

def playFile(self,moviefile,time,principal):

# Creation
self.Media = self.Instance.media_new_path(moviefile)
self.SetTitle("Monkey")
#self.SetIcon(wx.Icon('Monkey.ico', wx.BITMAP_TYPE_ICO))

if dual:
if principal:
# If we don't set any handle it starts in another window; maybe useful for dual screens?
self.vidplayer1.set_media(self.Media)
#self.vlchandle = self.vidplayer1.get_xwindow()
self.vidplayer1.set_hwnd(self.videopanel.GetHandle())
self.OnPlay(None,True)
self.vidplayer1.set_time(time)
else:
self.vidplayer2.set_media(self.Media)
#self.vlchandle = self.vidplayer2.get_xwindow()
self.vidplayer2.set_hwnd(self.videopanel2.GetHandle())
self.OnPlay(None,False)
self.vidplayer2.set_time(time)
else:
self.vidplayer1.set_media(self.Media)
#self.vlchandle = self.vidplayer1.get_xwindow()
self.vidplayer1.set_hwnd(self.videopanel.GetHandle())
self.OnPlay(None,True)
self.vidplayer1.set_time(time)

class Task(object):
def __init__(self, a, b, c, d):
self.a = a  # can't be media, it's not pickable
self.b = b  # boolean
self.c = c  # Number (time)
self.d = d  # boolean

def __str__(self):
return '%s * %s * %s * %s' % (self.a, self.b, self.c, self.d)

class Player(wx.Frame):
"""The main window has to deal with events.
"""
def __init__(self, title, OS="windows"):

self.OS = OS

wx.Frame.__init__(self, None, -1, title,
  pos=wx.DefaultPosition, size=(950, 500))
self.SetBackgroundColour(wx.BLACK)

# Panels
# The first panel holds the video/videos and it's all black


.py file won't open in windows 7

2016-02-04 Thread Yossifoff Yossif
Hello,
I try to open a .py file (attached), but what I get is a Windows DOS window 
opening and closing within a couple of seconds. I ran a repair of the program; 
nothing happened.
I cannot see any error messages and don't know where to look for them.
I would appreciate your advice.




joseph

Yossif Yossifoff

Operations CS Core

Hot Mobile

Office +972539036945,

Mobile +972532422649

yoss...@hotmobile.co.il



[issue26039] More flexibility in zipfile interface

2016-02-04 Thread Thomas Kluyver

Thomas Kluyver added the comment:

Is there anything more I should be doing with either of these patches? I think 
I've incorporated all review comments I've seen. Thanks!

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Non working Parallel videoplayer

2016-02-04 Thread mdelamo90
I think it has to do with some functions that I think are defined in the 
"forked" processes but maybe aren't. If I create a class in the new process, it 
should have all the functions defined in the original, right?


[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Florin Papa

Florin Papa added the comment:

I ran perf to use calibration and there is no difference in stability
compared to the unpatched version.

With patch:

python perf.py -b json_dump_v2 -v --csv=out1.csv --affinity=2 ../cpython/python 
../cpython/python
INFO:root:Automatically selected timer: perf_counter
[1/1] json_dump_v2...
Calibrating
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 1 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 2 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 4 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 8 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 16 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 32 --timer perf_counter`
Calibrating => num_runs=10, num_loops=32 (0.50 sec < 0.87 sec)
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 10 -l 32 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 10 -l 32 --timer perf_counter`

Report on Linux centos 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 
2015 x86_64 x86_64
Total CPU cores: 18

### json_dump_v2 ###
Min: 0.877497 -> 0.886482: 1.01x slower   <--
Avg: 0.878150 -> 0.888351: 1.01x slower
Not significant
Stddev: 0.00054 -> 0.00106: 1.9481x larger


Without patch:

python perf.py -b json_dump_v2 -v --csv=out1.csv --affinity=2 ../cpython/python 
../cpython/python
INFO:root:Automatically selected timer: perf_counter
[1/1] json_dump_v2...
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 50 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 50 --timer perf_counter`

Report on Linux centos 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 
2015 x86_64 x86_64
Total CPU cores: 18

### json_dump_v2 ###
Min: 2.755514 -> 2.764131: 1.00x slower <-- (almost) same as above
Avg: 2.766546 -> 2.775587: 1.00x slower
Not significant
Stddev: 0.00538 -> 0.00382: 1.4069x smaller

--




Multiprocess videoplayer

2016-02-04 Thread mdelamo90
I have coded a program with Python and VLC that plays some videos, but whenever 
I try to play 3 videos at once, Windows closes the program. I'm guessing that 
the reason is that one process can't play 3 videos at once (but I don't really 
know).

My trimmed program plays one video (through a global variable, so it's easy to 
change), and once you press the number 2 on the keyboard it puts the data for a 
video on the queue to be played by another process.

The new process starts and makes some prints; then it should start the new 
video and another one, so we should have two videos playing, one on the left 
part of the screen and the other on the right.

I've done this in a linear (single-process) version, but I can't get the new 
process to start the videos. I come from a C/C++ background, so I don't know if 
it's something I didn't fully grasp about multiprocessing in Python or 
something else that I'm doing wrong.

It's huge for a mini-example (300 lines), but I didn't know how to make it 
shorter:


# import external libraries
import wx # 2.8
import vlc
import pdb
# import standard libraries
import user
import urllib
import multiprocessing



VIDEO1 = "Insert_your_first_video.mp4"
VIDEO2 = "Insert_your_second_video.mp4"

class MyOpener(urllib.FancyURLopener):
version = "App/1.7"  # doesn't work
version = "Mozilla/4.0 (MSIE 6.0; Windows NT 5.0)2011-03-10 15:38:34"  # works


class Consumer(multiprocessing.Process):
def __init__(self, task_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue

def run(self):
app = wx.App()
proc_name = self.name

print "waiting for queue"

while True:  # While queue not empty
print self.task_queue.qsize()
next_task = self.task_queue.get()
global dual, midsection
dual = True
midsection = next_task.d
player2 = Player("Dual PyVLC Player")
player2.Centre()
# show the player window centred and run the application
print "parametro a " + str(next_task.a)
print "parametro b " + str(next_task.b)
print "parametro c " + str(next_task.c)
print "parametro d " + str(next_task.d)
  # tasks.put(Task(media, dual, time, midsection))

player2.playFile(next_task.a,next_task.c,True)
player2.playFile(VIDEO2,next_task.c,True)
#player.vidplayer1.set_media(next_task.a)
#player.vidplayer2.play()
player2.Maximize(True)
player2.OnFullscreen(None)
player2.SetTransparent(255)
player2.SetFocus()
player2.Show()
#player2.vidplayer1.set_title(next_task.a)
'''player1.SetTransparent(0)
player1.timer1.Start(fadetime)
player1.set_amount(0)'''

def playFile(self,moviefile,time,principal):

# Creation
self.Media = self.Instance.media_new_path(moviefile)
self.SetTitle("Monkey")
#self.SetIcon(wx.Icon('Monkey.ico', wx.BITMAP_TYPE_ICO))

if dual:
if principal:
# If we don't set any handle it starts in another window; maybe useful for dual screens?
self.vidplayer1.set_media(self.Media)
#self.vlchandle = self.vidplayer1.get_xwindow()
self.vidplayer1.set_hwnd(self.videopanel.GetHandle())
self.OnPlay(None,True)
self.vidplayer1.set_time(time)
else:
self.vidplayer2.set_media(self.Media)
#self.vlchandle = self.vidplayer2.get_xwindow()
self.vidplayer2.set_hwnd(self.videopanel2.GetHandle())
self.OnPlay(None,False)
self.vidplayer2.set_time(time)
else:
self.vidplayer1.set_media(self.Media)
#self.vlchandle = self.vidplayer1.get_xwindow()
self.vidplayer1.set_hwnd(self.videopanel.GetHandle())
self.OnPlay(None,True)
self.vidplayer1.set_time(time)

class Task(object):
def __init__(self, a, b, c, d):
self.a = a  # can't be media, it's not pickable
self.b = b  # boolean
self.c = c  # Number (time)
self.d = d  # boolean

def __str__(self):
return '%s * %s * %s * %s' % (self.a, self.b, self.c, self.d)

class Player(wx.Frame):
"""The main window has to deal with events.
"""
def __init__(self, title, OS="windows"):

self.OS = OS

wx.Frame.__init__(self, None, -1, title,
  pos=wx.DefaultPosition, size=(950, 500))
self.SetBackgroundColour(wx.BLACK)

# Panels
# The first panel holds the video/videos and it's all black

self.videopanel = wx.Panel(self, -1)
self.videopanel.SetBackgroundStyle(wx.BG_STYLE_COLOUR)
self.videopanel.SetBackgroundColour(wx.BLACK)

if dual:
videopanel2 = wx.Panel(self, -1)
  

[issue22107] tempfile module misinterprets access denied error on Windows

2016-02-04 Thread Thomas Kluyver

Thomas Kluyver added the comment:

This issue was closed, but I believe the original bug reported was not fixed: 
trying to create a temporary file in a directory where you don't have write 
permissions hangs for a long time before failing with a misleading 
FileExistsError, rather than failing immediately with PermissionError.

I've just run into this on Python 3.5.1 while trying to use tempfile to check 
if a directory is writable - which I'm doing precisely because os.access() 
isn't useful on Windows!

I find it hard to believe that there is no way to distinguish a failure because 
the name is already used for a subdirectory from a failure because we don't 
have permission to create a file.
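
For reference, a minimal sketch of such a writability probe (my illustration;
the helper name is made up, and the broad OSError catch is there because on an
affected system the failure can surface as FileExistsError rather than
PermissionError):

```python
import tempfile

def dir_is_writable(path):
    """Probe writability by creating (and discarding) a temp file.

    Used because os.access() is unreliable on Windows.  Any OSError
    (PermissionError, FileExistsError, missing directory, ...) is
    treated as "not writable".
    """
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False

print(dir_is_writable(tempfile.gettempdir()))  # -> True
```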

--
nosy: +takluyver




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

fastint2.patch adds small regression for string multiplication:

$ ./python -m timeit -s "x = 'x'" -- "x*2; x*2; x*2; x*2; x*2; x*2; x*2; x*2; 
x*2; x*2; "
Unpatched:  1.46 usec per loop
Patched:1.54 usec per loop

Here is an alternative patch. It just uses the existing specialized functions 
for integers: long_add, long_sub and long_mul. It doesn't add a regression for 
the above example with string multiplication, and it looks faster than 
fastint2.patch for integer multiplication.

$ ./python -m timeit -s "x = 12345" -- "x*2; x*2; x*2; x*2; x*2; x*2; x*2; x*2; 
x*2; x*2; "
Unpatched:  0.887 usec per loop
fastint2.patch: 0.841 usec per loop
fastint_alt.patch:  0.804 usec per loop

--
Added file: http://bugs.python.org/file41801/fastint_alt.patch




[issue26252] Add an example to importlib docs on setting up an importer

2016-02-04 Thread Maciej Szulik

Changes by Maciej Szulik :


--
nosy: +maciej.szulik




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Victor, this is a very interesting write-up, thank you.

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

> STINNER Victor added the comment:
> I modified Stefan's telco.py to remove all I/O from the hot code: the 
> benchmark is now really CPU-bound. I also modified telco.py to run the 
> benchmark 5 times. One run takes around 2.6 seconds.
> 

Nice. telco.py is an ad-hoc script from the original decimal.py sandbox,
I missed that it called "infil.read(8)". :)

> And *NOW* using my isolated CPU physical cores #2 and #3 (Linux CPUs 2, 3, 6 
> and 7), still on the heavily loaded system:
> ---
> $ taskset -c 2,3,6,7 python3 telco_haypo.py full 
> 
> Elapsed time: 2.57948748662
> Elapsed time: 2.582796103536
> Elapsed time: 2.5811954810001225
> Elapsed time: 2.578203360887
> Elapsed time: 2.57237063649

Great.  I'll try that out in the weekend.

--




Re: problem in installing python

2016-02-04 Thread Salony Permanand
hello sir,
While working with Python I need urllib2 for my Python version 2.7.11.
Kindly tell me where I can download it.
Thank you.

On Wed, Feb 3, 2016 at 4:27 PM, Salony Permanand  wrote:

> Thankyou for consideration..I have solved my problem by changing name of
> temp files by "temp1"
>
> -- Forwarded message --
> From: eryk sun 
> Date: Wed, Feb 3, 2016 at 3:55 PM
> Subject: Re: problem in installing python
> To: python-list@python.org
> Cc: Salony Permanand 
>
>
> On Wed, Feb 3, 2016 at 12:57 AM, Salony Permanand
>  wrote:
> >
> > I downloaded different version of python but no one is installed on my pc
> > because of same installation error each time having error code 2203.
>
> 2203 may be a Windows Installer error [1]. If so, the error message
> has the following template:
>
> Database: [2]. Cannot open database file. System error [3].
>
> Please provide the "System error", since it may help to determine why
> the installer can't open the file. Possibly an anti-virus program is
> interfering, in which case temporarily disabling it may solve the
> problem.
>
> [1]: https://msdn.microsoft.com/en-us/library/aa372835
>
>


multiprocessing videoplayer

2016-02-04 Thread mdelamo90
I have coded a program with Python and VLC that plays some videos, but whenever 
I try to play 3 videos at once, Windows closes the program. I'm guessing that 
the reason is that one process can't play 3 videos at once (but I don't really 
know).

My trimmed program plays one video (through a global variable, so it's easy to 
change), and once you press the number 2 on the keyboard it puts the data for a 
video on the queue to be played by another process.

The new process starts and makes some prints; then it should start the new 
video and another one, so we should have two videos playing, one on the left 
part of the screen and the other on the right.

I've done this in a linear (single-process) version, but I can't get the new 
process to start the videos. I come from a C/C++ background, so I don't know if 
it's something I didn't fully grasp about multiprocessing in Python or 
something else that I'm doing wrong.

It's huge for a mini-example (300 lines), but I didn't know how to make it 
shorter:


# import external libraries
import wx # 2.8
import vlc
import pdb
# import standard libraries
import user
import urllib
import multiprocessing



VIDEO1 = "Insert_your_first_video.mp4"
VIDEO2 = "Insert_your_second_video.mp4"

class MyOpener(urllib.FancyURLopener):
version = "App/1.7"  # doesn't work
version = "Mozilla/4.0 (MSIE 6.0; Windows NT 5.0)2011-03-10 15:38:34"  # works


class Consumer(multiprocessing.Process):
def __init__(self, task_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue

def run(self):
app = wx.App()
proc_name = self.name

print "waiting for queue"

while True:  # While queue not empty
print self.task_queue.qsize()
next_task = self.task_queue.get()
global dual, midsection
dual = True
midsection = next_task.d
player2 = Player("Dual PyVLC Player")
player2.Centre()
# show the player window centred and run the application
print "parametro a " + str(next_task.a)
print "parametro b " + str(next_task.b)
print "parametro c " + str(next_task.c)
print "parametro d " + str(next_task.d)
  # tasks.put(Task(media, dual, time, midsection))

player2.playFile(next_task.a,next_task.c,True)
player2.playFile(VIDEO2,next_task.c,False)
#player.vidplayer1.set_media(next_task.a)
#player.vidplayer2.play()
player2.Maximize(True)
player2.OnFullscreen(None)
player2.SetTransparent(255)
player2.SetFocus()
player2.Show()
#player2.vidplayer1.set_title(next_task.a)
'''player1.SetTransparent(0)
player1.timer1.Start(fadetime)
player1.set_amount(0)'''

def playFile(self,moviefile,time,principal):

# Creation
self.Media = self.Instance.media_new_path(moviefile)
self.SetTitle("Monkey")
#self.SetIcon(wx.Icon('Monkey.ico', wx.BITMAP_TYPE_ICO))

if dual:
if principal:
# If we don't set any handle it starts in another window; maybe useful for dual screens?
self.vidplayer1.set_media(self.Media)
#self.vlchandle = self.vidplayer1.get_xwindow()
self.vidplayer1.set_hwnd(self.videopanel.GetHandle())
self.OnPlay(None,True)
self.vidplayer1.set_time(time)
else:
self.vidplayer2.set_media(self.Media)
#self.vlchandle = self.vidplayer2.get_xwindow()
self.vidplayer2.set_hwnd(self.videopanel2.GetHandle())
self.OnPlay(None,False)
self.vidplayer2.set_time(time)
else:
self.vidplayer1.set_media(self.Media)
#self.vlchandle = self.vidplayer1.get_xwindow()
self.vidplayer1.set_hwnd(self.videopanel.GetHandle())
self.OnPlay(None,True)
self.vidplayer1.set_time(time)

class Task(object):
def __init__(self, a, b, c, d):
self.a = a  # can't be media, it's not pickable
self.b = b  # boolean
self.c = c  # Number (time)
self.d = d  # boolean

def __str__(self):
return '%s * %s * %s * %s' % (self.a, self.b, self.c, self.d)

class Player(wx.Frame):
"""The main window has to deal with events.
"""
def __init__(self, title, OS="windows"):

self.OS = OS

wx.Frame.__init__(self, None, -1, title,
  pos=wx.DefaultPosition, size=(950, 500))
self.SetBackgroundColour(wx.BLACK)

# Panels
# The first panel holds the video/videos and it's all black

self.videopanel = wx.Panel(self, -1)
self.videopanel.SetBackgroundStyle(wx.BG_STYLE_COLOUR)
self.videopanel.SetBackgroundColour(wx.BLACK)

if dual:
videopanel2 = wx.Panel(self, -1)
  

[issue26269] zipfile should call lstat instead of stat if available

2016-02-04 Thread Anish Shah

Changes by Anish Shah :


--
nosy: +anish.shah




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

Stefan: "In my experience it is very hard to get stable benchmark results with 
Python.  Even long running benchmarks on an empty machine vary: (...)"

tl;dr: We *can* tune the Linux kernel to avoid most of the system noise when 
running benchmarks.


I modified Stefan's telco.py to remove all I/O from the hot code: the benchmark 
is now really CPU-bound. I also modified telco.py to run the benchmark 5 times. 
One run takes around 2.6 seconds.

I also added the following lines to check the CPU affinity and the number of 
context switches:

os.system("grep -E -i 'cpu|ctx' /proc/%s/status" % os.getpid())

Well, see attached telco_haypo.py for the full script.

I used my system_load.py script to get a system load >= 5.0. Without taskset, 
the benchmark result changes completely: at least 5 seconds. Well, it's not 
really surprising; it's known that benchmarks depend on the system load.


*BUT* I have a great kernel called Linux which has cool features called "CPU 
isolation" and "no HZ" (tickless kernel). On my Fedora 23, the kernel is 
compiled with CONFIG_NO_HZ=y and CONFIG_NO_HZ_FULL=y.

haypo@smithers$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0   0    0      0    0:0:0:0       oui    5900,  1600,
1   0    0      1    1:1:1:0       oui    5900,  1600,
2   0    0      2    2:2:2:0       oui    5900,  1600,
3   0    0      3    3:3:3:0       oui    5900,  1600,
4   0    0      0    0:0:0:0       oui    5900,  1600,
5   0    0      1    1:1:1:0       oui    5900,  1600,
6   0    0      2    2:2:2:0       oui    5900,  1600,
7   0    0      3    3:3:3:0       oui    5900,  1600,

My CPU is on a single socket, has 4 physical cores, but Linux gets 8 cores 
because of hyper threading.


I modified the Linux command line during the boot in GRUB to add: 
isolcpus=2,3,6,7 nohz_full=2,3,6,7. Then I forced the CPU frequency to 
performance to avoid hiccups:

# for id in 2 3 6 7; do echo performance > cpu$id/cpufreq/scaling_governor; done

Check the config with:

$ cat /sys/devices/system/cpu/isolated
2-3,6-7
$ cat /sys/devices/system/cpu/nohz_full
2-3,6-7
$ cat /sys/devices/system/cpu/cpu[2367]/cpufreq/scaling_governor
performance
performance
performance
performance


Ok, now with this kernel config but still without taskset on an idle system:
---
Elapsed time: 2.66008842437
Elapsed time: 2.592753862844
Elapsed time: 2.613568236813
Elapsed time: 2.581926057324
Elapsed time: 2.599129409322

Cpus_allowed:   33
Cpus_allowed_list:  0-1,4-5
voluntary_ctxt_switches:1
nonvoluntary_ctxt_switches: 21
---

With system load >= 5.0:
---
Elapsed time: 5.348448917415
Elapsed time: 5.33679747233
Elapsed time: 5.18741368792
Elapsed time: 5.2412202058
Elapsed time: 5.1020124644

Cpus_allowed_list:  0-1,4-5
voluntary_ctxt_switches:1
nonvoluntary_ctxt_switches: 1597
---

And *NOW* using my isolated CPU physical cores #2 and #3 (Linux CPUs 2, 3, 6 
and 7), still on the heavily loaded system:
---
$ taskset -c 2,3,6,7 python3 telco_haypo.py full 

Elapsed time: 2.57948748662
Elapsed time: 2.582796103536
Elapsed time: 2.5811954810001225
Elapsed time: 2.578203360887
Elapsed time: 2.57237063649

Cpus_allowed:   cc
Cpus_allowed_list:  2-3,6-7
voluntary_ctxt_switches:2
nonvoluntary_ctxt_switches: 16
---

Numbers look *more* stable than the numbers of the first test without taskset 
on an idle system! You can see that the number of context switches is very low 
(total: 18).

Example of a second run:
---
haypo@smithers$ taskset -c 2,3,6,7 python3 telco_haypo.py full 

Elapsed time: 2.538398498999868
Elapsed time: 2.54471196891
Elapsed time: 2.532367733904
Elapsed time: 2.53625264783
Elapsed time: 2.52574818205

Cpus_allowed:   cc
Cpus_allowed_list:  2-3,6-7
voluntary_ctxt_switches:2
nonvoluntary_ctxt_switches: 15
---

Third run:
---
haypo@smithers$ taskset -c 2,3,6,7 python3 telco_haypo.py full 

Elapsed time: 2.581917293605
Elapsed time: 2.578302425365
Elapsed time: 2.57849358701
Elapsed time: 2.577419851588
Elapsed time: 2.577214899445

Cpus_allowed:   cc
Cpus_allowed_list:  2-3,6-7
voluntary_ctxt_switches:2
nonvoluntary_ctxt_switches: 15
---

Well, it's not perfect, but it looks much more stable than timings without a 
specific kernel config or CPU pinning.

Statistics on the 15 timings of the 3 runs with tuning on a heavily loaded 
system:

>>> times
[2.57948748662, 2.582796103536, 2.5811954810001225, 2.578203360887, 
2.57237063649, 2.538398498999868, 2.54471196891, 2.532367733904, 
2.53625264783, 2.52574818205, 

[issue26110] Speedup method calls 1.2x

2016-02-04 Thread Maciej Szulik

Changes by Maciej Szulik :


--
nosy: +maciej.szulik




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

I prefer fastint_alt.patch design, it's simpler. I added a comment on the 
review.

My numbers, best of 5 timeit runs:

$ ./python -m timeit -s "x = 12345" -- "x*2; x*2; x*2; x*2; x*2; x*2; x*2; x*2; 
x*2; x*2; "

* original: 299 ns
* fastint2.patch: 282 ns (-17 ns, -6%)
* fastint_alt.patch: 267 ns (-32 ns, -11%)
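
The same micro-benchmark can also be reproduced from the timeit module
directly (a sketch of mine, not from the original message; absolute numbers
are machine-dependent):

```python
import timeit

# Equivalent of the CLI runs quoted above: best-of-5 timing of ten
# multiplications of a small int.
stmt = "x*2; " * 10
best = min(timeit.repeat(stmt, setup="x = 12345", repeat=5, number=100000))
print("%.0f ns per loop" % (best / 100000 * 1e9))
```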

--




setup failed

2016-02-04 Thread Prince Thomas
Hi,

I am a computer science engineer. I downloaded Python 3.5.1 amd64 and plain 
Python 3.5.1. The problem is that when I install it, setup fails, showing 
0x80070570 - "The file or directory is corrupted and unreadable". I installed 
the newest Visual C++ redistributable and it's still the same. My OS is Win 
8.1. Please help me out of this.






Sent from Windows Mail


How a module is being marked as imported?

2016-02-04 Thread Jean-Charles Lefebvre
Hi all,

The short version: How CPython marks a module as being fully imported, if it 
does, so that the same import statement ran from another C thread at the same 
time does not collide? Or, reversely, does not think the module is not already 
fully imported?

The full version: I'm running CPython 3.5.1, embedded into a C++ application on 
Windows. The application is heavily multi-threaded, so several C threads call 
some Python code at the same time (different Python modules), sharing the 
interpreter's resources by acquiring/releasing the GIL frequently DURING the 
calls, at language boundaries.

Sometimes (but always only once per application instance), a call to 
os.path.expandvars raises an AttributeError with the message: module 
'string' has no attribute 'ascii_letters'. It is raised by the 
ntpath.expandvars function (line 372). When I noticed the late import statement 
for the 'string' module on the line above, I thought that MAYBE it could be 
because the interpreter is run in a heavily multi-threaded environment and 
the GIL acquiring/releasing occurred at a bad time. Which makes me wonder: how 
does the import mechanism interact with the GIL, if it does?
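For reference, since Python 3.3 the import system takes a per-module lock (in
addition to the GIL), so a second thread executing the same import statement
blocks until the module body has finished running. A minimal sketch of that
guarantee (the thread count and choice of the `string` module here are
arbitrary illustration, not taken from the original report):

```python
import threading

results = []

def worker():
    # The import statement blocks on a per-module lock until the
    # module body has finished executing, so a half-initialized
    # module is never observed through "import" itself.
    import string
    results.append(hasattr(string, "ascii_letters"))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(results))  # True
```

If a partially initialized module is still observed, it usually means something
reached into sys.modules directly instead of going through the import machinery.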


[issue26286] dis module: coroutine opcode documentation clarity

2016-02-04 Thread Jim Jewett

New submission from Jim Jewett:

https://docs.python.org/3/library/dis.html includes a section describing the 
various opcodes.

Current documentation: """
Coroutine opcodes

GET_AWAITABLE
Implements TOS = get_awaitable(TOS), where get_awaitable(o) returns o if o is a 
coroutine object or a generator object with the CO_ITERABLE_COROUTINE flag, or 
resolves o.__await__.

GET_AITER
Implements TOS = get_awaitable(TOS.__aiter__()). See GET_AWAITABLE for details 
about get_awaitable

GET_ANEXT
Implements PUSH(get_awaitable(TOS.__anext__())). See GET_AWAITABLE for details 
about get_awaitable

BEFORE_ASYNC_WITH
Resolves __aenter__ and __aexit__ from the object on top of the stack. Pushes 
__aexit__ and result of __aenter__() to the stack.

SETUP_ASYNC_WITH
Creates a new frame object.
"""

(1)  There is a PUSH macro in ceval.c, but no PUSH bytecode.  I spent a few 
minutes trying to figure out what a PUSH command was, and how the GET_ANEXT 
differed from 
TOS = get_awaitable(TOS.__anext__())
which would match the bytecodes right above it.

After looking at ceval.c, I think GET_ANEXT is the only such bytecode to leave 
the original TOS in place, but I'm not certain about that.  Please be explicit. 
 (Unless they are the same, in which case, please use the same wording.)
 
(2)  The coroutine bytecode instructions should have a "New in 3.5" marker, as 
the GET_YIELD_FROM_ITER does.  It might make sense to just place the mark under 
Coroutine opcodes section header and say it applies to all of them, instead of 
marking each individual opcode.  

(3)  The GET_AITER and GET_ANEXT descriptions do not show the final period.  
Opcodes such as INPLACE_LSHIFT also end with a code quote, but still include a 
(not-marked-as-code) final period.

(4)  Why does SETUP_ASYNC_WITH talk about frames?  Is there actually a python 
frame involved, or is this another bytecode "block", similar to that used for 
except and finally?

--
assignee: yselivanov
components: Documentation
messages: 259595
nosy: Jim.Jewett, yselivanov
priority: normal
severity: normal
stage: needs patch
status: open
title: dis module: coroutine opcode documentation clarity
versions: Python 3.5




[issue23551] IDLE to provide menu link to PIP gui.

2016-02-04 Thread Upendra Kumar

Upendra Kumar added the comment:

I am trying to make a Tk-based GUI for the pip package manager. In reference to 
msg256736, I am confused about the discovery method mentioned. Is there any way 
already implemented to detect the Python versions installed system-wide?

Moreover, how should non-standard installations of Python be handled? I think 
in that case it would be very difficult to detect the Python versions 
installed on the user's system. In addition, the different tools used for 
installing Python generally end up installing into different folders or 
paths.

Therefore, what functionality should I try to implement first? Can anyone 
please advise?

--
nosy: +upendra-k14




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

On 04/02/2016 21:42, Alecsandru Patrascu wrote:
> 
> To compress all of the above, the main reason for this speedup is the
> reduction of the code path length and having the useful functions
> close together, so that the CPU will be able to prefetch them in
> advance and use them instead of throwing them away because they are
> not used.

I'm expecting this patch to have an impact on executable or library
size, but not really on runtime performance, as the CPU instruction
cache only fetches whichever pieces of code are actually called.  In
other words, unused sections of code should remain cold wrt. the CPU
caches.  Apart from more or less random aliasing effects (and perhaps
TLB effects, but those should be very minor) I'm surprised that it has
positive performance effects.  But since you work at Intel, perhaps you
know things that I don't ;-)

Also any name starting with Py_ or _Py_ is an API that may be called by
third-party code, so it shouldn't be removed at all...

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

People should stop getting hung up about benchmarks numbers and instead should 
first think about what they are trying to *achieve*. FP performance in pure 
Python does not seem like an important goal in itself. Also, some benchmarks 
may show variations which are randomly correlated with a patch (e.g. because of 
different code placement by the compiler interfering with instruction cache 
wayness). It is important not to block a patch because some random benchmark on 
some random machine shows an unexpected slowdown.

That said, both of Serhiy's patches are probably ok IMO.

--




[issue23551] IDLE to provide menu link to PIP gui.

2016-02-04 Thread Terry J. Reedy

Terry J. Reedy added the comment:

I think an initial version of a pip gui need only install to the Python version 
running it.

The py launcher must discover some version of 'all' Python installs to choose, 
for instance, the latest 3.x version.  I do not know the details, nor which 
system py.exe runs on.  I was suggesting looking into the details after a first 
version.

A few days ago Steve Dower reported on pydev list how PSF installs on Windows 
register themselves in the registry (the keys used).  He also proposed a 
standard convention for other distributions to register, if they wish to be 
discovered by other apps, in a way that does not interfere with the entries for 
PSF installations.

Upendra, are you an intended GSOC student or simply a volunteer?  I am asking 
because, in the absence of submissions in nearly a year, I proposed on 
core-mentorship that this might be a good GSOC project.  I will not reserve 
this for GSOC if someone else is actually going to submit something.  But I 
also do not want to withdraw the idea unless someone is.

--




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> Also any name starting with Py_ or _Py_ is an API that may be called by 
> third-party code, so it shouldn't be removed at all...

Right. You cannot remove the following functions; they are part of the
public C API (Include/pymem.h).

/usr/bin/ld: Removing unused section '.text.PyMem_RawMalloc' in file
'Objects/obmalloc.o'
/usr/bin/ld: Removing unused section '.text.PyMem_RawCalloc' in file
'Objects/obmalloc.o'
/usr/bin/ld: Removing unused section '.text.PyMem_RawRealloc' in file
'Objects/obmalloc.o'
/usr/bin/ld: Removing unused section '.text.PyMem_RawFree' in file
'Objects/obmalloc.o'

--




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

I’m not sure if resize() should change the len(). Dustin, why do you think it 
should? Not all ctypes objects even implement len(). The len() of 10 seems 
embedded in the class of the return value:

>>> b
<ctypes.c_char_Array_10 object at 0x...>

Also, how would this affect create_unicode_buffer(), if the buffer is resized 
to a non-multiple of sizeof(c_wchar)?

Gedai: I’m not that familiar with the ctypes internals, but it looks like 
__len__() is implemented on the Array base class:

>>> type(b).__len__
<slot wrapper '__len__' of 'Array' objects>

Maybe look for the implementation of this method: 
.
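For the record, a small sketch of the behaviour under discussion, as observable
on current CPython: resize() grows the memory block, but len() keeps reporting
the length baked into the Array subclass.

```python
import ctypes

buf = ctypes.create_string_buffer(10)
print(type(buf).__name__)  # c_char_Array_10 -- the length lives in the type
print(len(buf))            # 10
print(ctypes.sizeof(buf))  # 10

ctypes.resize(buf, 20)
print(len(buf))            # still 10: __len__ comes from the Array class
print(ctypes.sizeof(buf))  # 20: the underlying memory block did grow
```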

--
nosy: +martin.panter




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

Alecsandru Patrascu added the comment:

Sure, I attached them as files because they have a lot of lines for posting 
here (~90 in total).

The linker offers the possibility to show what piece of data/functions was 
removed, but I intentionally omitted it in order not to clutter the build 
trace. If you think it will be useful for the user to see it, I can add them to 
the patch also.

--
Added file: http://bugs.python.org/file41808/gc-removed-cpython2.txt




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

Changes by Alecsandru Patrascu :


Added file: http://bugs.python.org/file41809/gc-removed-cpython3.txt




Re: problem in installing python

2016-02-04 Thread Chris Angelico
On Thu, Feb 4, 2016 at 11:22 PM, Salony Permanand
 wrote:
> During working on python I need urllib2 for my python version 2.7.11.
> Kindly provide me address from where to download it..
> Thanking you.

It should have come with Python. Try it - you should be able to just
use it as-is.

ChrisA


[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

> I'm pretty sure that optimizing lists (and tuples?) is a great idea.

I think it's a good idea indeed.

> It would also be nice to optimize [-1] lookup

How is that different from the above? :)

--
nosy: +pitrou




[issue20160] broken ctypes calling convention on MSVC / 64-bit Windows (large structs)

2016-02-04 Thread Mark Lawrence

Changes by Mark Lawrence :


--
nosy:  -BreamoreBoy




[issue26291] Floating-point arithmetic

2016-02-04 Thread good.bad

New submission from good.bad:

print(1 - 0.8)
0.19999999999999996
print(1 - 0.2)
0.8

why not 0.2?
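This is the usual binary floating-point representation issue, not a bug:
neither 0.8 nor 0.2 can be stored exactly as a binary fraction. A short sketch
that makes the actually stored values visible:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value actually stored.
print(Decimal(0.8))  # 0.8000000000000000444089209850062616169452667236328125
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125

# 1 - 0.8 starts from a value slightly above 0.8, so the result
# lands just below 0.2; with 0.2 the rounding happens to cancel out.
print(1 - 0.8)  # 0.19999999999999996
print(1 - 0.2)  # 0.8
```

See the "Floating Point Arithmetic: Issues and Limitations" appendix of the
Python tutorial for the full story.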

--
messages: 259622
nosy: goodbad
priority: normal
severity: normal
status: open
title: Floating-point arithmetic
versions: Python 3.5




Re: Finding in which class an object's method comes from

2016-02-04 Thread dieter
"ast"  writes:
> Suppose we have:
>
> ClassC inherit from ClassB
> ClassB inherit from ClassA
> ClassA inherit from object
>
> Let's build an object:
>
> obj = ClassC()
>
> Let's invoke an obj method
>
> obj.funct()
>
> funct is first looked in ClassC, then if not found
> on ClassB, then ClassA then object

In Python 2, I am using the following function to find out such
information.

from inspect import getmro

def definedBy(name, class_):
  '''return *class_* base class defining *name*.

  *class_* may (now) also be an object. In this case, its class is used.
  '''
  if not hasattr(class_, '__bases__'): class_ = class_.__class__
  for cl in getmro(class_):
if hasattr(cl,'__dict__'):
  if cl.__dict__.has_key(name): return cl
elif hasattr(cl, name): return cl
  return None
  
(Unlike other approaches reported in this thread)
it not only works for methods but also for other attributes.
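For Python 3 readers: dict.has_key() is gone, so the same idea can be sketched
like this (defined_by is my renaming; every class has a __dict__ in Python 3,
which simplifies the loop):

```python
from inspect import getmro

def defined_by(name, class_):
    """Return the base class of *class_* that defines *name*, or None.

    *class_* may also be an instance; its class is then used.
    """
    if not hasattr(class_, '__bases__'):
        class_ = class_.__class__
    for cl in getmro(class_):
        if name in vars(cl):  # replaces cl.__dict__.has_key(name)
            return cl
    return None

print(defined_by('upper', str))    # <class 'str'>
print(defined_by('upper', 'abc'))  # <class 'str'> (instance accepted too)
```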

I am using this for (interactive) debugging purposes:
usually, I work with Zope/Plone, which is a huge software stack,
and there it is handy to be able to quickly find out where something
is defined in the code.



[issue1927] raw_input behavior incorrect if readline not enabled

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

Okay, I see. To clarify, it is Python that sets up Gnu Readline for stdout: 
. The 
problem is whichever way we go, we will have to change some part of the 
behaviour to make it internally consistent. I think my patch is the minimal 
change required.

--





[issue22847] Improve method cache efficiency

2016-02-04 Thread Benjamin Peterson

Benjamin Peterson added the comment:

I suppose we've backported scarier things.

--
resolution:  -> fixed
status: open -> closed




[issue1927] raw_input behavior incorrect if readline not enabled

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

This is rather an objection.

If GNU Readline is configured for stdout, why does bash output to stderr? We 
should investigate what exactly bash and other popular programs do with 
readline, and implement this in Python.

Changing the documentation usually is a less drastic change than changing 
behavior.

--




Re: How this C function was called through ctypes this way?

2016-02-04 Thread jfong
eryk sun at 2016/2/4 UTC+8 7:35:17PM wrote:
> > _mod = ctypes.cdll.LoadLibrary(_path)
> 
> Calling ctypes.CDLL directly is preferable since it allows passing
> parameters such as "mode" and "use_errno".
> 
> IMO, the ctypes.cdll and ctypes.windll loaders should be avoided in
> general, especially on Windows, since their attribute-based access
> (e.g. windll.user32) caches libraries, which in turn cache
> function-pointer attributes. You don't want function pointer instances
> being shared across unrelated packages. They may not use compatible
> prototypes and errcheck functions. Each package, module, or script
> should create private instances of CDLL, PyDLL, and WinDLL for a given
> shared library.

Thank you for your detail and deep explanation.

I suppose the reason so many examples use LoadLibrary() and attribute-based 
access is that it's the approach the ctypes tutorial in the Python 
documentation takes. Although both methods are mentioned in the ctypes 
reference section, no pros and cons are explained.

--Jach


metaclass

2016-02-04 Thread ast

Hi

I am looking the relationship between some classes
from the enum module


>>> from enum import EnumMeta, Enum
>>> class Color(Enum):
...     pass
...
>>> type(EnumMeta)
<class 'type'>
>>> EnumMeta.__bases__
(<class 'type'>,)

so EnumMeta is a metaclass: it is an instance of type
and inherits from type too.

>>> type(Enum)
<class 'enum.EnumMeta'>
>>> Enum.__bases__
(<class 'object'>,)

so Enum is an instance of EnumMeta,
and Enum inherits from object.

>>> type(Color)
<class 'enum.EnumMeta'>
>>> Color.__bases__
(<class 'enum.Enum'>,)

so Color is an instance of EnumMeta and
inherits from Enum.

It is not obvious to me that Color is an instance
of EnumMeta. Is it a Python rule that if a class C
inherits from a class which is an instance of
a metaclass, then class C is an instance of the
same metaclass too?

Or was it feasible to guess that?
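Yes: when no explicit metaclass is given, the class statement uses the (most
derived) metaclass of the bases, so the metaclass propagates to subclasses. A
minimal sketch (the Meta/Base/Child names are made up for illustration):

```python
from enum import Enum, EnumMeta

class Meta(type):
    pass

class Base(metaclass=Meta):
    pass

class Child(Base):  # no metaclass given: taken from Base's metaclass
    pass

print(type(Child) is Meta)  # True

class Color(Enum):
    RED = 1

# Same rule: Color inherits EnumMeta as its metaclass from Enum.
print(type(Color) is EnumMeta)  # True
```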






Re: _siftup and _siftdown implementation

2016-02-04 Thread Steven D'Aprano
On Fri, 5 Feb 2016 07:50 am, srinivas devaki wrote:

> _siftdown function breaks out of the loop when the current pos has a valid
> parent.
> 
> but _siftup function is not implemented in that fashion, if a valid
> subheap is given to the _siftup, it will bring down the root of sub heap
> and then again bring it up to its original place.
> 
> I was wondering why it is so, is it just to make the code look simple???

Hi Srinivas,

I'm sure that your question is obvious to you, but it's not obvious to us.
Where are _siftup and _siftdown defined? Are they in your code? Somebody
else's code? A library? Which library? What do they do? Where are they
from?




-- 
Steven



[issue17446] doctest test finder doesnt find line numbers of properties

2016-02-04 Thread Michael Cuthbert

Michael Cuthbert added the comment:

The test looks great to me.  Does anyone on nosy know the proper way to request 
a patch review?

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Hi Yury,

> I'm not sure how to respond to that. Every performance aspect *is*
> important.

Performance is not a religion (not any more than security or any other
matter).  It is not helpful to brandish results on benchmarks which have
no relevance to real-world applications.

It helps to define what we should achieve and why we want to achieve it.
 Once you start asking "why", the prospect of speeding up FP
computations in the eval loop starts becoming dubious.

> numpy isn't shipped with CPython, not everyone uses it.

That's not the point. *People doing FP-heavy computations* should use
Numpy or any of the packages that can make FP-heavy computations faster
(Numba, Cython, Pythran, etc.).

You should use the right tool for the job.  There is no need to
micro-optimize a hammer for driving screws when you could use a
screwdriver instead.  Lists or tuples of Python float objects are an
awful representation for what should be vectorized native data.  They
eat more memory in addition to being massively slower (they will also be
slower to serialize from/to disk, etc.).

"Not using" Numpy when you would benefit from it is silly.
Numpy is not only massively faster on array-wide tasks, it also makes it
easier to write high-level, readable, reusable code instead of writing
loops and iterating by hand.  Because it has been designed explicitly
for such use cases (which the Python core was not, despite the existence
of the colorsys module ;-)).  It also gives you access to a large
ecosystem of third-party modules implementing various domain-specific
operations, actively maintained by experts in the field.

Really, the mindset of "people shouldn't need to use Numpy, they can do
FP computations in the interpreter loop" is counter-productive.  I
understand that it's seductive to think that Python core should stand on
its own, but it's also a dangerous fallacy.

You *should* advocate people use Numpy for FP computations.  It's an
excellent library, and it's currently a major selling point for Python.
Anyone doing FP-heavy computations with Python should learn to use
Numpy, even if they only use it from time to time.  Downplaying its
importance, and pretending core Python is sufficient, is not helpful.

> It also harms Python 3 adoption a little bit, since many benchmarks
> are still slower. Some of them are FP related.

The Python 3 migration is happening already. There is no need to worry
about it... Even the diehard 3.x haters have stopped talking of
releasing a 2.8 ;-)

> In any case, I think that if we can optimize something - we should.

That's not true. Some optimizations add maintenance overhead for no real
benefit. Some may even hinder performance as they add conditional
branches in a critical path (increasing the load on the CPU's branch
predictors and making them potentially less efficient).

Some optimizations are obviously good, like the method call optimization
which caters to real-world use cases (and, by the way, kudos for that...
you are doing much better than all previous attempts ;-)). But some are
solutions waiting for a problem to solve.

--




[issue1927] raw_input behavior incorrect if readline not enabled

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

Serhiy, was your comment an objection to changing away from stderr, or was that 
just an observation that Python’s design is inconsistent with the rest of the 
world?

--




[issue12923] test_urllib fails in refleak mode

2016-02-04 Thread Roundup Robot

Roundup Robot added the comment:

New changeset eb69070e5382 by Martin Panter in branch '3.5':
Issue #12923: Reset FancyURLopener's redirect counter even on exception
https://hg.python.org/cpython/rev/eb69070e5382

New changeset a8aa7944c5a8 by Martin Panter in branch '2.7':
Issue #12923: Reset FancyURLopener's redirect counter even on exception
https://hg.python.org/cpython/rev/a8aa7944c5a8

New changeset d3be5c4507b4 by Martin Panter in branch 'default':
Issue #12923: Merge FancyURLopener fix from 3.5
https://hg.python.org/cpython/rev/d3be5c4507b4

--
nosy: +python-dev




[issue26287] Core dump in f-string with lambda and format specification

2016-02-04 Thread Petr Viktorin

New submission from Petr Viktorin:

Evaluating the expression f"{(lambda: 0):x}" crashes Python.

$ ./python
Python 3.6.0a0 (default, Feb  5 2016, 02:14:48) 
[GCC 5.3.1 20151207 (Red Hat 5.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f"{(lambda: 0):x}"  
Fatal Python error: Python/ceval.c:3576 object at 0x7f6b42f21338 has negative 
ref count -2604246222170760230
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: non-empty format string passed to object.__format__
Aborted (core dumped)

--
messages: 259609
nosy: encukou, eric.smith
priority: normal
severity: normal
status: open
title: Core dump in f-string with lambda and format specification
versions: Python 3.6




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

tl;dr   I'm attaching a new patch - fastint4 -- the fastest of them all. It 
incorporates Serhiy's suggestion to export long/float functions and use them.  
I think it's reasonably complete -- please review it, and let's get it 
committed.

== Benchmarks ==

spectral_norm (fastint_alt)-> 1.07x faster
spectral_norm (fastintfloat)   -> 1.08x faster
spectral_norm (fastint3.patch) -> 1.29x faster
spectral_norm (fastint4.patch) -> 1.16x faster

spectral_norm (fastint**.patch)-> 1.31x faster
nbody (fastint**.patch)-> 1.16x faster

Where:
- fastint3 - is my previous patch that nobody likes (it inlined a lot of logic 
from longobject/floatobject)

- fastint4 - is the patch I'm attaching and ideally want to commit

- fastint** - is a modification of fastint4.  This is very interesting -- I 
started to profile different approaches and found two bottlenecks that really 
made Serhiy's and my other patches slower than fastint3.  What I found is that 
PyLong_AsDouble can be significantly optimized, and PyLong_FloorDiv is super 
inefficient.

PyLong_AsDouble can be sped up several times if we add a fastpath for 1-digit 
longs:

// longobject.c: PyLong_AsDouble
if (PyLong_CheckExact(v) && Py_ABS(Py_SIZE(v)) <= 1) {
    /* fast path; a single digit will always fit in a double */
    return (double)MEDIUM_VALUE((PyLongObject *)v);
}


PyLong_FloorDiv (fastint4 adds it) can be specialized for single digits, which 
gives it a tremendous boost.

With those two optimizations, fastint4 becomes as fast as fastint3.  I'll 
create separate issues for PyLong_AsDouble and FloorDiv.

== Micro-benchmarks ==

Floats + ints:  -m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + 
(x+0.1)/(x-0.1)*2 + (x+10)*(x-30)"

2.7:  0.42 (usec)
3.5:  0.619
fastint_alt   0.619
fastintfloat: 0.52
fastint3: 0.289
fastint4: 0.51
fastint**:0.314

===

Ints:  -m timeit -s "x=2" "x + 10 + x * 20 - x // 3 + x* 10 + 20 -x"

2.7:  0.151 (usec)
3.5:  0.19
fastint_alt:  0.136
fastintfloat: 0.135
fastint3: 0.135
fastint4: 0.122
fastint**:0.122


P.S. I have another variant of fastint4 that uses fast_* functions in ceval 
loop, instead of a big macro.  Its performance is slightly worse than with the 
macro.

--
Added file: http://bugs.python.org/file41811/fastint4.patch




[issue12923] test_urllib fails in refleak mode

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

One extra change I made to test_redirect_limit_independent() was to stop 
relying on _urlopener being created before we call urlopen(). As a consequence, 
in the Python 3 tests I made a wrapper around FancyURLopener to suppress the 
deprecation warning.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

> People should stop getting hung up about benchmarks numbers and instead 
> should first think about what they are trying to *achieve*. FP performance in 
> pure Python does not seem like an important goal in itself.

I'm not sure how to respond to that.  Every performance aspect *is* important.  
numpy isn't shipped with CPython, not everyone uses it.  In one of my programs 
I used colorsys extensively -- did I need to rewrite it using numpy?  Probably 
I could, but that was a simple shell script without any dependencies.

It also harms Python 3 adoption a little bit, since many benchmarks are still 
slower.  Some of them are FP related.

In any case, I think that if we can optimize something - we should.


> Also, some benchmarks may show variations which are randomly correlated with 
> a patch (e.g. before of different code placement by the compiler interfering 
> with instruction cache wayness). 

30-50% speed improvement is not a variation.  It's just that a lot less code 
gets executed if we inline some operations.


> It is important not to block a patch because some random benchmark on some 
> random machine shows an unexpected slowdown.

Nothing is blocked atm, we're just discussing various approaches.


> That said, both of Serhiy's patches are probably ok IMO.

Current Serhiy's patches are incomplete.  In any case, I've been doing some 
research and will post another message shortly.

--




Re: _siftup and _siftdown implementation

2016-02-04 Thread srinivas devaki
On Feb 5, 2016 5:45 AM, "Steven D'Aprano"  wrote:
>
> On Fri, 5 Feb 2016 07:50 am, srinivas devaki wrote:
>
> > _siftdown function breaks out of the loop when the current pos has a
valid
> > parent.
> >
> > but _siftup function is not implemented in that fashion, if a valid
> > subheap is given to the _siftup, it will bring down the root of sub heap
> > and then again bring it up to its original place.

As I come to think of it again, it is not a subheap; it is actually a heap cut
at some level (hope you get the idea from the usage of _siftup). So even though
the children of `pos` are valid, _siftup brings the new element (i.e. the
element initially at `pos`) down to the leaf level and then brings it up again
using _siftdown. Why do the redundant work when it could simply break out?
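For concreteness, here is a hypothetical early-exit variant of heapq's _siftup
for a min-heap, breaking out as soon as the moved item is no larger than both
children. (heapq's actual code instead sinks to a leaf and calls _siftdown back
up; the comments in the heapq source argue that this does fewer comparisons on
average, because the replaced element usually belongs near the leaves.)

```python
import heapq
import random

def siftup_early_exit(heap, pos):
    """Sift heap[pos] down, stopping once both children are >= it."""
    endpos = len(heap)
    newitem = heap[pos]
    childpos = 2 * pos + 1
    while childpos < endpos:
        rightpos = childpos + 1
        if rightpos < endpos and heap[rightpos] < heap[childpos]:
            childpos = rightpos  # pick the smaller child
        if not heap[childpos] < newitem:
            break  # the early exit under discussion
        heap[pos] = heap[childpos]
        pos = childpos
        childpos = 2 * pos + 1
    heap[pos] = newitem

# Build a heap with the variant, mirroring heapq.heapify's loop.
data = [random.randrange(1000) for _ in range(100)]
heap = data[:]
for i in reversed(range(len(heap) // 2)):
    siftup_early_exit(heap, i)

print(heap[0] == min(data))  # True
```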

> >
> > I was wondering why it is so, is it just to make the code look simple???
>
> Hi Srinivas,
>
> I'm sure that your question is obvious to you, but it's not obvious to us.
> Where are _siftup and _siftdown defined? Are they in your code? Somebody
> else's code? A library? Which library? What do they do? Where are they
> from?

_siftup and _siftdown are functions from python standard heapq module.

PS: I do competitive programming; I use these modules every couple of days,
far more than other modules, so I didn't give it much thought when posting to
the mailing list. Sorry for that.

Regards
Srinivas Devaki
Junior (3rd yr) student at Indian School of Mines,(IIT Dhanbad)
Computer Science and Engineering Department
ph: +91 9491 383 249
telegram_id: @eightnoteight


[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Zach Byrne

Zach Byrne added the comment:

Ok, I've started on the instrumenting; thanks for that head start -- it would 
have taken me a while to figure out where to call the stats dump function from. 
Fun fact: BINARY_SUBSCR is called 717 times just starting Python.

--




[issue26287] Core dump in f-string with lambda and format specification

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

I had to recompile with “--with-pydebug” to get the crash. I know f-strings 
don’t support the lambda syntax very well, but I can also make it crash without 
using lambda:

>>> f"{ {1: 2}:x}"
Fatal Python error: Python/ceval.c:3576 object at 0x7fa32ab030c8 has negative 
ref count -1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: non-empty format string passed to object.__format__
Aborted (core dumped)

--
components: +Interpreter Core
nosy: +martin.panter
type:  -> crash




Re: _siftup and _siftdown implementation

2016-02-04 Thread Sven R. Kunze

On 05.02.2016 01:12, Steven D'Aprano wrote:

On Fri, 5 Feb 2016 07:50 am, srinivas devaki wrote:


The _siftdown function breaks out of the loop when the current pos has a valid
parent.

But the _siftup function is not implemented in that fashion: if a valid
subheap is given to _siftup, it will bring the root of the subheap down
and then bring it back up again to its original place.

I was wondering why this is so. Is it just to keep the code simple?

Hi Srinivas,

I'm sure that your question is obvious to you, but it's not obvious to us.
Where are _siftup and _siftdown defined? Are they in your code? Somebody
else's code? A library? Which library? What do they do? Where are they
from?



The question originated here: 
https://github.com/srkunze/xheap/pull/1#discussion_r51770210



(btw, Steven, your email client somehow breaks my threading view in 
thunderbird. This reply appeared unconnected to Srinivas' post.)



[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

I can’t really comment on the patch, but I’m a bit worried that this is not the 
purpose of the b_length field.

--
stage:  -> patch review




[issue26288] Optimize PyLong_AsDouble for single-digit longs

2016-02-04 Thread Yury Selivanov

New submission from Yury Selivanov:

The attached patch drastically speeds up PyLong_AsDouble for single digit longs:


-m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + 
(x+10)*(x-30)"

with patch: 0.414
without: 0.612

spectral_norm: 1.05x faster. The results are even better when paired with the 
patch from issue #21955.
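The timing above can be reproduced with a plain timeit run (absolute numbers are machine-dependent):

```python
import timeit

expr = "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + (x+10)*(x-30)"
elapsed = timeit.timeit(expr, setup="x = 2", number=100_000)
print(f"{elapsed:.3f} seconds for 100,000 iterations")
```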

--
components: Interpreter Core
files: as_double.patch
keywords: patch
messages: 259615
nosy: haypo, pitrou, serhiy.storchaka, yselivanov
priority: normal
severity: normal
status: open
title: Optimize PyLong_AsDouble for single-digit longs
versions: Python 3.6
Added file: http://bugs.python.org/file41812/as_double.patch




[issue26289] Optimize floor division for ints

2016-02-04 Thread Yury Selivanov

New submission from Yury Selivanov:

The attached patch optimizes floor division for ints.

### spectral_norm ###
Min: 0.319087 -> 0.289172: 1.10x faster
Avg: 0.322564 -> 0.294319: 1.10x faster
Significant (t=21.71)
Stddev: 0.00249 -> 0.01277: 5.1180x larger


-m timeit -s "x=22331" "x//2;x//3;x//4;x//5;x//6;x//7;x//8;x/99;x//100;"

with patch: 0.298
without: 0.515
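As a refresher on what's being timed: floor division of ints always yields an int and satisfies the divmod identity, unlike true division (so the `x/99` term in the one-liner above is actually a float operation). A quick self-contained check:

```python
x = 22331
for n in (2, 3, 4, 5, 6, 7, 8, 99, 100):
    q, r = divmod(x, n)
    # floor quotient and remainder always reconstruct the dividend
    assert q == x // n and r == x % n and x == q * n + r

assert type(x // 7) is int      # floor division of two ints is an int
assert type(x / 7) is float     # true division of two ints is a float
```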

--
components: Interpreter Core
files: floor_div.patch
keywords: patch
messages: 259617
nosy: haypo, pitrou, serhiy.storchaka, yselivanov
priority: normal
severity: normal
stage: patch review
status: open
title: Optimize floor division for ints
type: performance
versions: Python 3.6
Added file: http://bugs.python.org/file41813/floor_div.patch




[issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering

2016-02-04 Thread Don Hatch

New submission from Don Hatch:

Iterating over input using either 'for line in fileinput.input():'
or 'for line in sys.stdin:' has the following unexpected behavior:
no matter how many lines of input the process reads, the loop body is not
entered until either (1) at least 8193 chars have been read and at least one of
them was a newline, or (2) EOF is read (i.e. the read() system call returns
zero bytes).

The behavior I expect instead is what
"for line in iter(sys.stdin.readline, ''):" does: that is, the loop body is
entered for the first time as soon as a newline or EOF is read.
Furthermore strace reveals that this well-behaved alternative code does
sensible input buffering, in the sense that the underlying system call being
made is read(0,buf,8192), thereby allowing it to get as many characters as are
available on input, up to 8192 of them, to be buffered and used in subsequent
loop iterations.  This is familiar and sensible behavior, and is what I think
of as "input buffering".

I anticipate there will be responses to this bug report of the form "this is
documented behavior; the fileinput and sys.stdin iterators do input buffering".
To that, I say: no, these iterators' unfriendly behavior is *not* input
buffering in any useful sense; my impression is that someone may have
implemented what they thought the words "input buffering" meant, but if so,
they really botched it.

This bug is most noticeable and harmful when using a filter written in python
to filter the output of an ongoing process that may have long pauses between
lines of output; e.g. running "tail -f" on a log file.  In this case, the
python filter spends a lot of time in a state where it is paused without
reason, having read many input lines that it has not yet processed.

If there is any suspicion that the delayed output is due to the previous
program in the pipeline buffering its output instead, strace can be used on the
python filter process to confirm that its input lines are in fact being read in
a timely manner.  This is certainly true if the previous process in the
pipeline is "tail -f", at least on my ubuntu linux system.

To demonstrate the bug, run each of the following from the bash command line.
This was observed using bash 4.3.11(1), python 2.7.6, and python 3.4.3,
on ubuntu 14.04 linux.

--
{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import fileinput,sys\nfor 
line in fileinput.input(): sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import sys\nfor line in 
sys.stdin: sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import sys\nfor line in 
iter(sys.stdin.readline, ""): sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import fileinput,sys\nfor 
line in fileinput.input(): sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import sys\nfor line in 
sys.stdin: sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import sys\nfor line in 
iter(sys.stdin.readline, ""): sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to 
prompt
--

Notice the 'for line in sys.stdin:' behavior is apparently fixed in python 3.4.
So the matrix of behavior observed above can be summarized as follows:

                                           2.7   3.4
for line in fileinput.input():             BAD   BAD
for line in sys.stdin:                     BAD   GOOD
for line in iter(sys.stdin.readline, ""):  GOOD  GOOD

Note that adding '-u' to the python args makes no difference in behavior, in
any of the above 6 command lines.

Finally, if I insert "strace -T" before "python" in each of the 6 command lines
above, it confirms that the python process is reading the 3 lines of input
immediately in all cases, in a single read(..., ..., 4096 or 8192) which seems
reasonable.
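For reference, the well-behaved `iter(readline, '')` pattern works with any file-like object; here io.StringIO stands in for sys.stdin purely for illustration:

```python
import io

stream = io.StringIO("a\nb\nc\n")          # stand-in for sys.stdin
lines = [line for line in iter(stream.readline, "")]
assert lines == ["a\n", "b\n", "c\n"]      # one item per line, newline kept
```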

--
components: Library (Lib)
messages: 259619
nosy: Don Hatch
priority: normal
severity: normal
status: open
title: fileinput and 'for line in sys.stdin' do strange mockery of input 
buffering
type: behavior
versions: Python 2.7, Python 3.4


[issue17446] doctest test finder doesnt find line numbers of properties

2016-02-04 Thread Emanuel Barry

Emanuel Barry added the comment:

Left a comment on Rietveld. I don't have time right now to check the test, but 
I suspect you tested it before submitting the patch, so it should probably be 
fine.

--
nosy: +ebarry
stage:  -> patch review




[issue17446] doctest test finder doesnt find line numbers of properties

2016-02-04 Thread Timo Furrer

Timo Furrer added the comment:

Yes, I've tested it.

--




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Tamás Bence Gedai

Tamás Bence Gedai added the comment:

I've added a patch that solves the problem with the built-in len(). Even if it 
turns out that this functionality is not needed, it was quite a challenge to 
track down the issue, and I've learned a lot. :)

Here are some functions that I looked through; they might be useful for someone 
who'd like to look into this issue.

https://github.com/python/cpython/blob/master/Python/bltinmodule.c#L1443
static PyObject *
builtin_len(PyModuleDef *module, PyObject *obj)
/*[clinic end generated code: output=8e5837b6f81d915b input=bc55598da9e9c9b5]*/
{
Py_ssize_t res;

res = PyObject_Size(obj);
if (res < 0 && PyErr_Occurred())
return NULL;
return PyLong_FromSsize_t(res);
}

https://github.com/python/cpython/blob/master/Objects/abstract.c#L42
Py_ssize_t
PyObject_Size(PyObject *o)
{
/*...*/
m = o->ob_type->tp_as_sequence;
if (m && m->sq_length)
return m->sq_length(o);
/*...*/
}

https://github.com/python/cpython/blob/master/Modules/_ctypes/_ctypes.c#L4449
static PySequenceMethods Array_as_sequence = {
Array_length,   /* sq_length; */
/*...*/
};

https://github.com/python/cpython/blob/master/Modules/_ctypes/_ctypes.c#L4442
static Py_ssize_t
Array_length(PyObject *myself)
{
CDataObject *self = (CDataObject *)myself;
return self->b_length;
}
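The mismatch is easy to see from Python itself (a minimal sketch of the reported behavior; the sizes are illustrative):

```python
import ctypes

buf = ctypes.create_string_buffer(10)
assert len(buf) == 10 and ctypes.sizeof(buf) == 10

ctypes.resize(buf, 20)              # grows the underlying memory block (b_size)
assert ctypes.sizeof(buf) == 20     # sizeof() reflects the new size...
assert len(buf) == 10               # ...but len() still reports the old b_length
```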

--
keywords: +patch
Added file: http://bugs.python.org/file41810/resize.patch




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Antoine, FWIW I agree on most of your points :)  And yes, numpy, scipy, numba, 
etc rock.

Please take a look at my fastint4.patch.  All tests pass, no performance 
regressions, no crazy inlining of floating point exceptions etc.  And yet we 
have a nice improvement for both ints and floats.

--




[issue26291] Floating-point arithmetic

2016-02-04 Thread Emanuel Barry

Emanuel Barry added the comment:

This is due to how floating point numbers are handled under the hood. See 
http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm 
and https://docs.python.org/3/tutorial/floatingpoint.html for some useful reading 
about why Python behaves like this with floating point numbers. Both of these 
links state that this isn't a bug in Python, and rightly so.
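A minimal demonstration of the effect described in those links:

```python
# 0.1, 0.2 and 0.3 have no exact binary floating-point representation,
# so the computed sum is not exactly equal to the literal 0.3
assert 0.1 + 0.2 != 0.3
print(repr(0.1 + 0.2))            # 0.30000000000000004

# compare floats with a tolerance instead of exact equality
from math import isclose
assert isclose(0.1 + 0.2, 0.3)
```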

--
nosy: +ebarry
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue20160] broken ctypes calling convention on MSVC / 64-bit Windows (large structs)

2016-02-04 Thread Christoph Sarnowski

Changes by Christoph Sarnowski :


--
nosy: +Christoph Sarnowski




Re: problem in installing python

2016-02-04 Thread Joel Goldstick
On Thu, Feb 4, 2016 at 7:22 AM, Salony Permanand  wrote:

> hello sir,
> During working on python I need urllib2 for my python version 2.7.11.
> Kindly provide me address from where to download it..
> Thanking you.
>
Hello Salony,

Since this is a new question, it's best to start a new thread with a new
title.  You may also want to drop the 'hello sir' greeting.  Yes, mostly
males here, but some very active women also contribute here.  A simple
hello will suffice.

As to urllib2, it is included in Python 2, so just do this in your program:

import urllib2

You may find that the third-party library 'requests' is easier to use.


-- 
Joel Goldstick
http://joelgoldstick.com/stats/birthdays


Re: eval( 'import math' )

2016-02-04 Thread Ian Kelly
On Thu, Feb 4, 2016 at 6:33 AM, 阎兆珣  wrote:
>Excuse me for the same problem in Python 3.4.2-32bit
>
>I just discovered that the eval() function does not necessarily take the
>string input and transfer it to a command to execute.
>
>So is there a problem with my assumption?

eval evaluates an expression, not a statement. For that, you would use exec.

If you're just trying to import though, then you don't need it at all.
Use the importlib.import_module function instead to import a module
determined at runtime. This is more secure than eval or exec, which
can cause any arbitrary Python code to be executed.
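A minimal sketch of that approach, using math as a stand-in for a module name determined at runtime:

```python
import importlib

name = "math"                        # e.g. read from config or user input
mod = importlib.import_module(name)  # no eval/exec involved
assert mod.sqrt(9) == 3.0
```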


[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> Why not combine my patch and Serhiy's?  First we check if left & right are 
> both longs.  Then we check if they are unicode (for +).  And then we have a 
> fastpath for floats.

See my comment on Serhiy's patch. Maybe we can start by check that the type of 
both operands are the same, and then use PyLong_CheckExact and 
PyUnicode_CheckExact.

Using such design, we may add a _PyFloat_Add(). But the next question is then 
the overhead on the "slow" path, which requires a benchmark too! For example, 
use a subtype of int.

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Florin Papa

Florin Papa added the comment:

I was also talking about the variance/deviation of the mean value
displayed by perf.py, sorry if I was unclear. The perf.py output in my
previous message showed little difference between the patched and
non-patched version. I will also try increasing the number of
runs to see if there is any change.

The CPU isolation feature is a great finding, thank you.

--




Re: Tkinter problem: TclError> couldn't connect to display ":0

2016-02-04 Thread gemjack . pb
On Sunday, 29 December 2013 20:20:00 UTC, Michael Matveev  wrote:
> Hi,
> I use live Debian on VM and trying to compile this code.
> 
> 
> import Tkinter
>  
> root = Tkinter.Tk()
>  
> root.title("Fenster 1")
> root.geometry("100x100")
>  
> root.mainloop()
> 
> 
> The shell gives out that kind of message:
> 
> File "test.py", line 5, in <module>
> root = Tkinter.Tk()
> File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1712, in __init__
> self.tk = _tkinter.create(screenName, baseName, className, interactive, 
> wantobjects, useTk, sync, use)
> _tkinter.TclError: couldn't connect to display ":0"
> 
> 
> 
> thanks for helping out.
> 
> greets.
> Mike

This fixed my problem with tkinter:   sudo cp ~/.Xauthority ~root/


[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

tl;dr I'm disappointed. According to the statistics module, running the 
bm_regex_v8.py benchmark more times with more iterations makes the benchmark 
more unstable... I expected the opposite...


Patch version 2:

* patch also performance/bm_pickle.py
* change min_time from 100 ms to 500 ms with --fast
* compute the number of runs using a maximum time; the maximum time changes with 
--fast and --rigorous

+if options.rigorous:
+min_time = 1.0
+max_time = 100.0  # 100 runs
+elif options.fast:
+min_time = 0.5
+max_time = 25.0   # 50 runs
+else:
+min_time = 0.5
+max_time = 50.0   # 100 runs


To measure the stability of perf.py, I pinned perf.py to CPU cores which are 
isolated from the rest of the system using the Linux "isolcpus" kernel 
parameter. I also forced the CPU frequency governor to "performance" and 
enabled "no HZ full" on these cores.

I ran perf.py 5 times on regex_v8.


Calibration (original => patched):

* --fast: 1 iteration x 5 runs => 16 iterations x 50 runs
* (no option): 1 iteration x 50 runs => 16 iterations x 100 runs


Approximated duration of the benchmark (original => patched):

* --fast: 7 sec => 7 min 34 sec
* (no option): 30 sec => 14 min 35 sec

(I made a mistake, so I was unable to get the exact duration.)

Hum, maybe timings are not well chosen because the benchmark is really slow 
(minutes vs seconds) :-/


Standard deviation, --fast:

* (python2) 0.00071 (1.2%, mean=0.05961) => 0.01059 (1.1%, mean=0.96723)
* (python3) 0.00068 (1.5%, mean=0.04494) => 0.05925 (8.0%, mean=0.74248)
* (faster) 0.02986 (2.2%, mean=1.32750) => 0.09083 (6.9%, mean=1.31000)

Standard deviation, (no option):

* (python2) 0.00072 (1.2%, mean=0.05957) => 0.00874 (0.9%, mean=0.97028)
* (python3) 0.00053 (1.2%, mean=0.04477) => 0.00966 (1.3%, mean=0.72680)
* (faster) 0.02739 (2.1%, mean=1.33000) => 0.02608 (2.0%, mean=1.33600)

Variance, --fast:

* (python2) 0.0 (0.001%, mean=0.05961) => 0.9 (0.009%, mean=0.96723)
* (python3) 0.0 (0.001%, mean=0.04494) => 0.00281 (0.378%, mean=0.74248)
* (faster) 0.00067 (0.050%, mean=1.32750) => 0.00660 (0.504%, mean=1.31000)

Variance, (no option):

* (python2) 0.0 (0.001%, mean=0.05957) => 0.6 (0.006%, mean=0.97028)
* (python3) 0.0 (0.001%, mean=0.04477) => 0.7 (0.010%, mean=0.72680)
* (faster) 0.00060 (0.045%, mean=1.33000) => 0.00054 (0.041%, mean=1.33600)

Legend:

* (python2) are timings of python2 ran by perf.py (of the "Min" line)
* (python3) are timings of python3 ran by perf.py (of the "Min" line)
* (faster) are the "1.34x" numbers of "faster" or "slower" of the "Min" line
* percentages are: value * 100 / mean

It's not easy to compare these values since the number of iterations is very 
different (1 => 16) and so timings are very different (ex: 0.059 sec => 0.950 
sec). I guess that it's ok to compare percentages.


I used the stability.py script, attached to this issue, to compute deviation 
and variance from the "Min" line of the 5 runs. The script takes the output of 
perf.py as input.

I'm not sure that 5 runs are enough to compute statistics.
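For reference, the same numbers can be checked with the stdlib statistics module; feeding it the five unpatched (no option) "Min" python2 timings from the raw data reproduces the reported 0.00072 (1.2%, mean=0.05957):

```python
import statistics

# the five "Min" python2 timings of the original perf.py, no option
mins = [0.060479, 0.059002, 0.058991, 0.060231, 0.059165]
mean = statistics.mean(mins)
sd = statistics.stdev(mins)       # sample standard deviation
print(f"{sd:.5f} ({sd / mean * 100:.1f}%, mean={mean:.5f})")
# -> 0.00072 (1.2%, mean=0.05957)
```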

--

Raw data.

Original perf.py.

$ grep ^Min original.fast 
Min: 0.059236 -> 0.045948: 1.29x faster
Min: 0.059005 -> 0.044654: 1.32x faster
Min: 0.059601 -> 0.044547: 1.34x faster
Min: 0.060605 -> 0.044600: 1.36x faster

$ grep ^Min original
Min: 0.060479 -> 0.044762: 1.35x faster
Min: 0.059002 -> 0.045689: 1.29x faster
Min: 0.058991 -> 0.044587: 1.32x faster
Min: 0.060231 -> 0.044364: 1.36x faster
Min: 0.059165 -> 0.044464: 1.33x faster

Patched perf.py.

$ grep ^Min patched.fast 
Min: 0.950717 -> 0.711018: 1.34x faster
Min: 0.968413 -> 0.730810: 1.33x faster
Min: 0.976092 -> 0.847388: 1.15x faster
Min: 0.964355 -> 0.711083: 1.36x faster
Min: 0.976573 -> 0.712081: 1.37x faster

$ grep ^Min patched
Min: 0.968810 -> 0.729109: 1.33x faster
Min: 0.973615 -> 0.731308: 1.33x faster
Min: 0.974215 -> 0.734259: 1.33x faster
Min: 0.978781 -> 0.709915: 1.38x faster
Min: 0.955977 -> 0.729387: 1.31x faster

$ grep ^Calibration patched.fast 
Calibration: num_runs=50, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 36.4 sec)
Calibration: num_runs=50, num_loops=16 (0.75 sec per run > min_time 0.50 sec, 
estimated total: 37.3 sec)
Calibration: num_runs=50, num_loops=16 (0.75 sec per run > min_time 0.50 sec, 
estimated total: 37.4 sec)
Calibration: num_runs=50, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 36.6 sec)
Calibration: num_runs=50, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 36.7 sec)

$ grep ^Calibration patched
Calibration: num_runs=100, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 73.0 sec)
Calibration: num_runs=100, num_loops=16 (0.75 sec per run > min_time 0.50 sec, 
estimated total: 75.3 sec)
Calibration: num_runs=100, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 73.2 sec)
Calibration: 

[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

Changes by STINNER Victor :


Added file: http://bugs.python.org/file41804/perf_calibration-2.patch




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

> I agree with Marc-Andre, people doing FP-heavy math in Python use Numpy 
> (possibly with Numba, Cython or any other additional library). 
> Micro-optimizing floating-point operations in the eval loop makes little 
> sense IMO.

I disagree.

30% faster floats (sic!) is a serious improvement that shouldn't just be 
discarded.  Many applications have floating point calculations one way or 
another but don't use numpy because it would be overkill.

Python 2 is much faster than Python 3 on any kind of numeric calculation.  
This point is brought up frequently in every python2 vs python3 debate.  I 
think that's unacceptable.


> * the ceval loop may no longer fit into the CPU cache on
>   systems with small cache sizes, since the compiler will likely
>   inline all the fast_*() functions (I guess it would be possible
>   to simply eliminate all fast paths using a compile-time
>   flag)

That's speculation.  It may still fit.  Or it never really fit; it's 
already huge.  I tested the patch on an 8-year-old desktop CPU: no performance 
degradation on our benchmarks.

### raytrace ###
Avg: 1.858527 -> 1.652754: 1.12x faster

### nbody ###
Avg: 0.310281 -> 0.285179: 1.09x faster

### float ###
Avg: 0.392169 -> 0.358989: 1.09x faster

### chaos ###
Avg: 0.355519 -> 0.326400: 1.09x faster

### spectral_norm ###
Avg: 0.377147 -> 0.303928: 1.24x faster

### telco ###
Avg: 0.012845 -> 0.013006: 1.01x slower

The last benchmark (telco) is especially interesting.  It uses decimals for 
its calculations, which means it exercises overloaded numeric operators.  
Still no significant performance degradation.

> * maintenance will get more difficult

The fast path for floats is easy to understand for every core dev who works 
with ceval; there is no rocket science there (if you want rocket science that 
is hard to maintain, look at generators/yield from).  If you don't like 
inlining floating point calculations, we can export float_add, float_sub, 
float_div, and float_mul and use them in ceval.

Why not combine my patch and Serhiy's?  First we check if left & right are both 
longs.  Then we check if they are unicode (for +).  And then we have a fastpath 
for floats.

--




Re: .py file won't open in windows 7

2016-02-04 Thread Tim Golden
On 04/02/2016 13:09, Tim Golden wrote:
> On 04/02/2016 12:52, Yossifoff Yossif wrote:
>> Hallow,
>> I try to open a .py file (attached), but what I get is a windows DOS window 
>> opening and closing in a couple of seconds. Ran repair of the program, 
>> nothing happened.
>> I cannot see error messages and don't know where to look for ones.
>> Would appreciate your piece of advice.
> 
> Attachments won't make it through to the list, Yossif. But your code is
> basically something like this:
> 
> """
> import sys
> 
> def main():
>   # calculate stuff
>   print(stuff)
> 
> if __name__ == '__main__':
>   sys.exit(main())
> """
> 
> In that case, the program starts (in a console window), runs, prints the
> result, and then closes. You've got a few simple ways of seeing the output:


... or run within a development environment. Python itself ships with IDLE:

  https://docs.python.org/3/library/idle.html

but there are plenty of others:

  https://wiki.python.org/moin/IntegratedDevelopmentEnvironments

TJG


RE: eval( 'import math' )

2016-02-04 Thread Lutz Horn
Hi,

>I just discovered that  function does not necessarily take the
>string input and transfer it to a command to execute.

Can you please show us the code you try to execute and tells what result you 
expect?

Lutz

  


[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

On 04/02/2016 15:18, Yury Selivanov wrote:
>
> But it is faster. That's visible on many benchmarks. Even simple
> timeit oneliners can show that. Probably it's because such
> benchmarks usually combine floats and ints, i.e. "2 * smth" instead of
> "2.0 * smth".

So it's not about FP-related calculations anymore. It's about ints
having become slower ;-)

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

>> But it is faster. That's visible on many benchmarks. Even simple
>> timeit oneliners can show that. Probably it's because such
>> benchmarks usually combine floats and ints, i.e. "2 * smth" instead of
>> "2.0 * smth".
>
> So it's not about FP-related calculations anymore. It's about ints
> having become slower ;-)

I should have written 2 * smth_float vs 2.0 * smth_float.
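The distinction can be measured directly with timeit (absolute numbers are machine- and version-dependent, so none are given here):

```python
import timeit

# int * float forces an int->float coercion on every multiplication,
# while float * float stays entirely in float arithmetic
t_int_op   = timeit.timeit("2 * f",   setup="f = 1.5", number=1_000_000)
t_float_op = timeit.timeit("2.0 * f", setup="f = 1.5", number=1_000_000)
print(f"2 * f: {t_int_op:.3f}s   2.0 * f: {t_float_op:.3f}s")
```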

--
nosy: +Yury.Selivanov




Re: eval( 'import math' )

2016-02-04 Thread Peter Otten
阎兆珣 wrote:

>Excuse me for the same problem in Python 3.4.2-32bit
> 
>I just discovered that  function does not necessarily take the
>string input and transfer it to a command to execute.
> 
>So is there a problem with my assumption?

Python discriminates between statements and expressions. The eval function 
will only accept an expression. OK:

>>> eval("1 + 1")
2
>>> eval("print(42)") # Python 3 only; in Python 2 print is a statement
42
>>> x = y = 1
>>> eval("x > y")
False

Not acceptable:

>>> eval("1+1; 2+2")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1
    1+1; 2+2
       ^
SyntaxError: invalid syntax

>>> eval("import os")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1
    import os
    ^
SyntaxError: invalid syntax

>>> eval("if x > y: print(42)")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1
    if x > y: print(42)
    ^
SyntaxError: invalid syntax

To import a module dynamically either switch to exec()

>>> exec("import os")
>>> os


or use the import_module() function:

>>> import importlib
>>> eval("importlib.import_module('os')")


Of course you can use that function directly

>>> importlib.import_module("os")


and that's what you should do if your goal is to import a module rather than 
to run arbitrary Python code.



[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Attaching another approach -- fastint5.patch.

Similar to what fastint4.patch does, but doesn't export any new APIs.  Instead, 
similarly to abstract.c, it uses type slots directly.

--
Added file: http://bugs.python.org/file41815/fastint5.patch




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Looks like we want to specialize it for lists, tuples, and dicts, as expected.  
Not so sure about [-1], but I suggest benchmarking it anyway ;)

--




[issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering

2016-02-04 Thread Don Hatch

Don Hatch added the comment:

Possibly related to http://bugs.python.org/issue1633941 .
Note that the matrix of GOOD and BAD versions and input methods is
exactly the same for this bug as for that one.  To verify: run
each of the 6 python commands I mentioned on its own, being sure to type
at least one line of input ending in newline before hitting ctrl-D -- if it 
exits after one ctrl-D it's GOOD; having to type a second ctrl-D is BAD.

--




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Looks like the statistics vary too much from test to test. Could you please 
collect the statistics for a run of all the tests?

--




Jython 2.7.1 beta3 released!

2016-02-04 Thread fwierzbi...@gmail.com
On behalf of the Jython development team, I'm pleased to announce that
Jython 2.7.1 beta3 is released!

Thanks to Amobee for sponsoring my work on Jython, and thanks to the
many contributors to Jython!

Details are here:
http://fwierzbicki.blogspot.com/2016/02/jython-271-beta3-released.html

-Frank
-- 
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


ANN: Bokeh 0.11.1 released

2016-02-04 Thread Bryan Van de Ven
Hi all,

I am pleased to announce that a new point release of Bokeh, version 0.11.1, is
now available. Installation instructions can be found in the usual location:

http://bokeh.pydata.org/en/latest/docs/installation.html

This release focused on providing bug fixes, small features, and documentation 
improvements. Highlights include:

* documentation:
  - instructions for running Bokeh server behind an SSL terminated proxy
  - Quickstart update and cleanup
* bugfixes:
  - notebook comms handles work properly
  - MultiSelect works
  - Oval legend renders correctly
  - Plot title orientation setting works
  - Annulus glyph works on IE/Edge
* features:
  - preview of new streaming API in OHLC demo
  - undo/redo tool add, reset tool now resets plot size
  - "bokeh static" and "bokeh sampledata" commands
  - can now create Bokeh apps directly from Jupyter Notebooks
  - headers and content type now configurable on AjaxDataSource
  - new network config options for "bokeh serve"

For full details, refer to the CHANGELOG in the GitHub repository, and the full
release notes (http://bokeh.pydata.org/en/latest/docs/releases/0.11.1.html)

Issues, enhancement requests, and pull requests can be made on the Bokeh
Github page: https://github.com/bokeh/bokeh

Full documentation is available at http://bokeh.pydata.org/en/0.11.1

Questions can be directed to the Bokeh mailing list: bo...@continuum.io

Thanks, 

Bryan 


[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Zach Byrne

Zach Byrne added the comment:

I'll put together something comprehensive in a bit, but here's a quick preview:

$ ./python
Python 3.6.0a0 (default, Feb  4 2016, 20:08:03) 
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
Total BINARY_SUBSCR calls: 726
List BINARY_SUBSCR calls: 36
Tuple BINARY_SUBSCR calls: 103
Dict BINARY_SUBSCR calls: 227
Unicode BINARY_SUBSCR calls: 288
Bytes BINARY_SUBSCR calls: 68
[-1] BINARY_SUBSCR calls: 0

$ python bm_elementtree.py -n 100 --timer perf_counter
...[snip]...
Total BINARY_SUBSCR calls: 1078533
List BINARY_SUBSCR calls: 513
Tuple BINARY_SUBSCR calls: 1322
Dict BINARY_SUBSCR calls: 1063075
Unicode BINARY_SUBSCR calls: 13150
Bytes BINARY_SUBSCR calls: 248
[-1] BINARY_SUBSCR calls: 0

Lib/test$ ../../python -m unittest discover
...[snip]...^C <== I got bored waiting
KeyboardInterrupt
Total BINARY_SUBSCR calls:  4732885
List BINARY_SUBSCR calls:   1418730
Tuple BINARY_SUBSCR calls:  1300717
Dict BINARY_SUBSCR calls:   1151766
Unicode BINARY_SUBSCR calls: 409924
Bytes BINARY_SUBSCR calls:   363029
[-1] BINARY_SUBSCR calls: 26623

So dict seems to be the winner here.
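The counts above come from the attached C-level patch (26280_stats.diff), which tallies BINARY_SUBSCR dispatches at runtime. A rough static approximation is possible from pure Python with the dis module; this is only a sketch, not the patch, and it assumes a CPython version where subscription compiles to a BINARY_SUBSCR opcode:

```python
import dis
import types

def count_subscr(code):
    # Count subscript instructions in a code object and in any nested
    # code objects (functions, comprehensions) found in co_consts.
    n = sum(1 for ins in dis.get_instructions(code)
            if 'SUBSCR' in ins.opname)
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            n += count_subscr(const)
    return n

# d['k'] and [10, 20][1] each compile to one subscript instruction.
src = "d = {'k': 1}\nx = d['k'] + [10, 20][1]\n"
print(count_subscr(compile(src, '<s>', 'exec')))
```

Unlike the patch, this counts opcode sites statically rather than dynamic dispatches per type, so it can't tell dict subscripts from list subscripts; it's only useful for eyeballing how subscript-heavy a piece of code is.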

--
keywords: +patch
Added file: http://bugs.python.org/file41814/26280_stats.diff




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Eryk Sun

Eryk Sun added the comment:

You can't reassign the array object's __class__, and you can't modify the array 
type itself, so I think modifying the internal b_length field of the object 
would give a confusing result. 

Even if you ignore this confusion, it's still not as simple as using the size 
in bytes as the length. That's what b_size is for, after all. The array length 
is the new size divided by the element size, which you can get from 
PyType_stgdict(dict->proto)->size. Also, you'd have to ensure it's only 
updating b_length for arrays, i.e. ArrayObject_Check(obj), since it makes no 
sense to modify the length of a simple type, struct, union, or [function] 
pointer.

However, I don't think this result is worth the confusion. ctypes buffers can 
grow, but arrays have a fixed size by design. There are already ways to access 
a resized buffer. For example, you can use the from_buffer method to create an 
instance of a new array type that has the desired length, or you can 
dereference a pointer for the new array type. So I'm inclined to close this 
issue as "not a bug".

Note: 
Be careful when resizing buffers that are shared across objects. Say you resize 
array "a" and share it as array "b" using from_buffer or a pointer dereference. 
Then later you resize "a" again. The underlying realloc might change the base 
address of the buffer, while "b" still uses the old address. For example:

>>> a = (ctypes.c_int * 5)(*range(5))
>>> ctypes.resize(a, 4 * 10)
>>> b = ctypes.POINTER(ctypes.c_int * 10)(a)[0]
>>> ctypes.addressof(a)
20136704
>>> ctypes.addressof(b)
20136704
>>> b._b_needsfree_ # b doesn't own the buffer
0
>>> b[:] # the reallocation is not zeroed
[0, 1, 2, 3, 4, 0, 0, 32736, 48, 0]

Here's the problem to keep in mind:

>>> ctypes.resize(a, 4 * 1000)
>>> ctypes.addressof(a)
22077952
>>> ctypes.addressof(b)
20136704
>>> b[:] # garbage; maybe a segfault
[1771815800, 32736, 1633841761, 540763495, 1652121965,
 543236197, 1718052211, 1953264993, 10, 0]
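The from_buffer workaround mentioned above can be sketched as follows; this re-views the resized memory through a new, longer array type instead of mutating the original array's length (a sketch illustrating the message, not code from any patch):

```python
import ctypes

# Resize a 5-element array's buffer to hold 10 ints.
a = (ctypes.c_int * 5)(*range(5))
ctypes.resize(a, ctypes.sizeof(ctypes.c_int) * 10)

# View the same (resized) memory as a 10-element array; no copy is made
# and no internal ctypes field is touched.
b = (ctypes.c_int * 10).from_buffer(a)

print(ctypes.addressof(b) == ctypes.addressof(a))  # True: same buffer
print(b[:5])  # [0, 1, 2, 3, 4]; the tail past index 4 is uninitialized
```

As the warning above explains, resizing "a" again after sharing may realloc the buffer and leave "b" pointing at the old address, so do all resizing before creating the shared view.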

--
nosy: +eryksun
type:  -> enhancement




[issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

For fileinput see issue15068.

--
nosy: +serhiy.storchaka




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Zach Byrne

Zach Byrne added the comment:

One thing I forgot to do was count slices.

--




[issue1633941] for line in sys.stdin: doesn't notice EOF the first time

2016-02-04 Thread Don Hatch

Don Hatch added the comment:

I've reported the unfriendly input withholding that several people have
observed and mentioned here as a separate bug: 
http://bugs.python.org/issue26290 . The symptom is different, but I suspect it 
has exactly the same underlying cause (incorrect use of stdio) and the same fix 
that Ralph Corderoy has described clearly here.

--
nosy: +Don Hatch




[issue22847] Improve method cache efficiency

2016-02-04 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 6357d851029d by Antoine Pitrou in branch '2.7':
Issue #22847: Improve method cache efficiency.
https://hg.python.org/cpython/rev/6357d851029d

--




[issue26269] zipfile should call lstat instead of stat if available

2016-02-04 Thread SilentGhost

Changes by SilentGhost :


--
components: +Extension Modules
nosy: +alanmcintyre, serhiy.storchaka, twouters
versions:  -Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5




Re: Finding in which class an object's method comes from

2016-02-04 Thread Chris Angelico
On Thu, Feb 4, 2016 at 7:54 PM, ast  wrote:
> It is strange but I don't have the same result as you:
> (Python 3.4)
>
> >>> class A:
> ... def a(self):pass
>
> >>> class B(A):
> ... def b(self):pass
>
> >>> class C(B):
> ... def c(self):pass
>
> >>> obj = C()
>
> >>> obj.a
> >

Curious. It appears to have changed between 3.4 and 3.5; my original
copy/paste was from 3.6.
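For what it's worth, a way to answer the thread's original question that doesn't depend on how the repr changed between 3.4 and 3.6 is to walk the MRO explicitly; a small sketch:

```python
class A:
    def a(self): pass

class B(A):
    def b(self): pass

class C(B):
    def c(self): pass

def defining_class(cls, name):
    # Walk the MRO and return the first class whose own __dict__
    # actually defines the attribute.
    for klass in cls.__mro__:
        if name in vars(klass):
            return klass
    raise AttributeError(name)

obj = C()
print(defining_class(type(obj), 'a').__name__)  # A
print(obj.a.__qualname__)  # A.a -- __qualname__ also records the definer
```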

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: import cannot be used inside eval

2016-02-04 Thread Gary Herron

On 02/03/2016 09:03 PM, 阎兆珣 wrote:

a = input("tell me which py to execute:  ")
print(a)
print('import '+a)
print(type('import'+a))
eval('print(a)')

eval() is meant to evaluate Python expressions. import is a statement, 
not an expression. Also, it's a bad idea to use eval like this, and it's 
a *really* bad idea to use eval with user-supplied input. The user could 
inject *any* malicious code.


Instead, use the importlib module to programmatically import a module.
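For example, a minimal sketch of the importlib approach ("json" stands in here for the module name the original post read from input()):

```python
import importlib

name = "json"  # in the original post this came from input()
mod = importlib.import_module(name)

# The imported module is now bound to an ordinary variable.
print(mod.dumps({"a": 1}))  # {"a": 1}
```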

Gary Herron


--
Dr. Gary Herron
Department of Computer Science
DigiPen Institute of Technology
(425) 895-4418




Re: __bases__ attribute on classes not displayed by dir() command

2016-02-04 Thread ast


"eryk sun"  a écrit dans le message de 
news:mailman.49.1454576255.30993.python-l...@python.org...

On Thu, Feb 4, 2016 at 2:03 AM, ast  wrote:

but if I am using dir to display all Carre's attributes and methods,
__bases__ is not on the list. Why ?


The __bases__ property is defined by the metaclass, "type". dir() of a
class doesn't show attributes from the metaclass [1].



Oh metaclass !

I am learning them at the moment, and have headaches 
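A quick check of the explanation quoted above ("Carre" is the class name from the earlier thread):

```python
class Carre:
    pass

# __bases__ is a descriptor defined on the metaclass, type, so dir() of
# the class itself doesn't list it, but the attribute is still reachable.
print('__bases__' in dir(Carre))        # False
print('__bases__' in dir(type(Carre)))  # True
print(Carre.__bases__)                  # (<class 'object'>,)
```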




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

In my experience it is very hard to get stable benchmark results with Python.  
Even long running benchmarks on an empty machine vary:

wget http://www.bytereef.org/software/mpdecimal/benchmarks/telco.py
wget http://speleotrove.com/decimal/expon180-1e6b.zip
unzip expon180-1e6b.zip

taskset -c 0 ./python telco.py full


$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.16255
$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 6.982884
$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.0953491
$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']

--
nosy: +skrah




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

I've cut off the highest result in the previous message:

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.304339
$ taskset -c 0 ./python telco.py full

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> taskset -c 0 ./python telco.py full

Did you see that I just merged Florin's patch to add an --affinity parameter to 
perf.py? :-)

You may isolate some CPU cores using the kernel command-line parameter isolcpus=xxx. 
I don't think that core #0 is the best choice, since the kernel may prefer it 
for its own work (e.g. handling interrupts).

It would be nice to collect "tricks" for getting the most reliable benchmark 
results. Maybe on the perf.py README page? Or a wiki page? Whatever? :-)

--




Re: Install Error

2016-02-04 Thread Oscar Benjamin
On 3 February 2016 at 21:55, Barrie Taylor  wrote:
>
> I am attempting to install and run Python3.5.1 on my Windows machine.
>
> After installation on launching I am presented the attached error message.
> It reads:
> 'The program can't start because api-ms-win-crt-runtime-l1-1-0.dll is missing 
> from your computer. Try reinstalling the program to fix this problem.'
> Needless to say, I have tried uninstalling, reinstalling, and repairing the 
> program a number of times to no avail.

I don't know why it says to try re-installing. The problem is that
Python 3.5 requires an update to your system runtimes from Microsoft.
You can find more information about that here:

https://support.microsoft.com/en-us/kb/2999226

--
Oscar


[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

I agree with Marc-Andre, people doing FP-heavy math in Python use Numpy 
(possibly with Numba, Cython or any other additional library). Micro-optimizing 
floating-point operations in the eval loop makes little sense IMO.

The point of optimizing integers is that they are used for many purposes, not 
only "math" (e.g. indexing).

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

For an older project (Fusil the fuzzer), I wrote a short function that sleeps 
until the system load is lower than a threshold. I had to write such a function 
to reduce the noise when the system is heavily loaded. I wrote it so that I 
could run a very long task (it takes at least 1 hour, but may run for multiple 
days!) on my desktop and continue to use the desktop for various other tasks.

On Linux, we can use the "cpu xxx xxx xxx ..." line of /proc/stat to get the 
system load.

My code to read the system load:
https://bitbucket.org/haypo/fusil/src/32ddc281219cd90c1ad12a3ee4ce212bea1c3e0f/fusil/linux/cpu_load.py?at=default=file-view-default#cpu_load.py-51

My code to wait until the system load is lower than a threshold:
https://bitbucket.org/haypo/fusil/src/32ddc281219cd90c1ad12a3ee4ce212bea1c3e0f/fusil/system_calm.py?at=default=file-view-default#system_calm.py-5
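The approach behind those two functions can be sketched in a few lines. This is only a Linux-only sketch reading the aggregate "cpu" line of /proc/stat, not the Fusil code itself:

```python
import time

def cpu_load(interval=1.0):
    # Sample the aggregate "cpu" line of /proc/stat twice and return the
    # fraction of non-idle time in between (0.0 .. 1.0).
    def snapshot():
        with open('/proc/stat') as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait ticks
        return idle, sum(fields)
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    return 1.0 - (idle2 - idle1) / (total2 - total1)

def wait_for_calm_system(threshold=0.10, interval=1.0):
    # Block until the system load drops below the threshold, then return.
    while cpu_load(interval) >= threshold:
        pass
```

Calling wait_for_calm_system() before each benchmark run is the kind of noise-reduction trick the message describes.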

--

I also write a script to do the opposite :-) A script to stress the system to 
have a system load higher or equal than a minimum load:

https://bitbucket.org/haypo/misc/src/3fd3993413b128b37e945690865ea2c5ef48c446/bin/system_load.py?at=default=file-view-default

This script helped me reproduce sporadic failures like timeouts which only 
occur when the system is highly loaded.

--




Re: please help

2016-02-04 Thread Oscar Benjamin
On 3 February 2016 at 23:03, Syavosh Malek  wrote:
> hi, I installed Python 3.5.1 and found a runtime error
> see the attached file and help me please

I'm afraid your attachment didn't arrive as this is a text-only
mailing list. Can you include more information about the error?

If it's that you're missing a dll called something like
Api-ms-win-crt-runtime-l1-1-0.dll then you need to update your system
runtimes from Microsoft. You can read about that here:
https://support.microsoft.com/en-us/kb/2999226

--
Oscar


How this C function was called through ctypes this way?

2016-02-04 Thread jfong
Here is an example from "Python Cookbook, Third Edition" (by David Beazley and 
Brian K. Jones), Chapter 15.1, "Accessing C Code Using ctypes":

---
import ctypes
...
# Try to locate the .so file in the same directory as this file
...
_mod = ctypes.cdll.LoadLibrary(_path)
...
...
# void avg(double *, int n)
# Define a special type for the 'double *' argument
class DoubleArrayType:
    def from_param(self, param):
        typename = type(param).__name__
        if hasattr(self, 'from_' + typename):
            return getattr(self, 'from_' + typename)(param)
        elif isinstance(param, ctypes.Array):
            return param
        else:
            raise TypeError("Can't convert %s" % typename)

    ...
    # Cast from array.array objects
    def from_array(self, param):
        ...
    ...

    # Cast from lists
    def from_list(self, param):
        val = ((ctypes.c_double)*len(param))(*param)
        return val

    # Cast from a numpy array
    def from_ndarray(self, param):
        ...
    ...

DoubleArray = DoubleArrayType()
_avg = _mod.avg
_avg.argtypes = (DoubleArray, ctypes.c_int)
_avg.restype = ctypes.c_double

def avg(values):
    return _avg(values, len(values))

avg([1,2,3])
--

The following is quoted from the book, which explains how it works:
"The DoubleArrayType class shows how to handle this situation. In this class, a 
single method from_param() is defined. The role of this method is to take a 
single parameter and narrow it down to a compatible ctypes object (a pointer to 
a ctypes.c_double, in the example). Within from_param(), you are free to do 
anything that you wish. In the solution, the typename of the parameter is 
extracted and used to dispatch to a more specialized method. For example, if a 
list is passed, the typename is list and a method from_list() is invoked."
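The dispatch the book describes can be seen working without the compiled library at all, by calling from_param() directly. This sketch keeps only the list branch of the recipe:

```python
import ctypes

class DoubleArrayType:
    # Same dispatch pattern as the recipe: pick a from_<typename> method
    # based on the argument's type name.
    def from_param(self, param):
        typename = type(param).__name__
        if hasattr(self, 'from_' + typename):
            return getattr(self, 'from_' + typename)(param)
        elif isinstance(param, ctypes.Array):
            return param
        else:
            raise TypeError("Can't convert %s" % typename)

    # Cast from lists: build a ctypes double array of the right length.
    def from_list(self, param):
        return (ctypes.c_double * len(param))(*param)

conv = DoubleArrayType()
arr = conv.from_param([1, 2, 3])
print(type(arr).__name__)  # c_double_Array_3
print(list(arr))           # [1.0, 2.0, 3.0]
```

This direct call is exactly what ctypes itself does for every object listed in argtypes, which is why an instance works there: all ctypes requires is a from_param callable.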

What confuses me is:
(1) At the line: _avg.argtypes = (DoubleArray, ctypes.c_int)
"DoubleArray" is an instance of the class "DoubleArrayType". Can it appear 
where a type is expected? 
(2) How is the method "from_param" invoked? I can't see any mechanism that 
reaches it from the "_avg(values, len(values))" call.


Best Regards,
Jach Fong


