[issue16031] relative import headaches

2012-09-24 Thread James Hutchison

New submission from James Hutchison:

This might even be a bug I've stumbled upon, but I'm listing it as an enhancement for now.

I really feel that relative imports in Python should just work. Regardless of 
the __name__, I should be able to import below me. Likewise, it should work 
even if I've already done an import into the symbol table. It adds additional 
work for us as developers to have to do pythonpath or code gymnastics to get 
something rather trivial working. Additionally, the import errors from 
circular imports add another challenge to work around. In C/C++ you can force 
a file to be included once and only once; why can't Python work the same way?
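
(For reference: Python does already load each module only once per process, 
which is the rough equivalent of an include guard; a minimal sketch, not part 
of the original report, is below. The circular-import failures described later 
happen because the second module executes while the first is still only 
partially initialized.)

import sys

# The first import runs the module body and stores the module object in
# sys.modules; any later import just returns that cached entry.
import collections
print('collections' in sys.modules)   # True
import collections                    # no re-execution; fetched from the cache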

Take the following example set-up:
startPoint.py
subModule1/
    __init__.py
    a.py
    b.py
    tests/
        __init__.py
        test_a.py

a's code:
print("in a");
from subModule1 import b

b's code:
print("in b");
from subModule1 import a

test_a.py's code:
print("myname:", __name__);
from .. import a

startPoint.py is empty, and the __init__.py files are also empty.

If I run a PyDev unit test on test_a.py this is what I get:

Finding files... done.
Importing test modules ... myname: subModule1.tests.test_a
in a
in b
myname: test_a
Traceback (most recent call last):
  File "C:\eclipse\plugins\org.python.pydev_2.6.0.2012062818\pysrc\pydev_runfiles.py", line 432, in __get_module_from_str
    mod = __import__(modname)
  File "C:\myfolder/relativeImportTest/subModule1/tests\test_a.py", line 6, in <module>
    from .. import a
ValueError: Attempted relative import in non-package

Clearly in this case, the exception given doesn't make any sense. I have 
__init__.py files, the error says the relative import line is failing, and the 
error says it's because I'm in a non-package. Except I'm in a package. It 
seems to go through the test_a.py file twice, even though I never explicitly 
import it. The first time through, I'm clearly in a package. The second time 
through, my name is NOT __main__, and yet I'm apparently no longer in a 
package, which is where it fails.

Now if I change:
"from subModule1 import b" to "import subModule1.b"
and
"from subModule1 import a" to "import subModule1.a"

then everything works. But that means I have to reference everything by its 
full name in my submodules. In this example, there's clearly a circular 
reference between a and b that wouldn't work anyway.

So let's change some things.

Now:
a.py:
import subModule1.b

b.py:
from subModule1 import a

Now the circular reference between a and b is gone. I really don't like having 
to do this as a means of working around a circular reference, because it forces 
me to vary the import style from one file to another.

If we try the test code again, however, it hits the same problem. If I swap 
which file does the relative import, then it works.

So let's make one last change:

test_a.py:
import subModule1.b # added
from .. import a

This will work, seemingly magically. It only runs the code in test_a.py once. 
Recall that the code in a.py is "import subModule1.b".

So basically this brings up several issues:
1. "import a.b" isn't the same as "from a import b" in more ways than how you 
reference it in the code (see the sketch below)
2. submodules are re-imported as non-modules, without ever being explicitly 
imported, if you import their parent module relatively. If this is documented, 
I don't know where.
3. import order can matter drastically as to whether code runs or not, for 
seemingly magical reasons.
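
A sketch of point 1 (an illustration added for reference, not from the 
original report): during a circular import, "import a.b" only has to bind the 
top-level name, while "from a import b" needs the attribute b to exist on a at 
that exact moment.

# Inside a module that is part of an import cycle:
import subModule1.b        # binds only "subModule1"; the b attribute is
                           # looked up later, at use time, after the cycle ends
from subModule1 import b   # needs subModule1.b to exist *right now*; fails
                           # while b is still mid-import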

And back when I was a beginner Python user, the naming convention of the full 
path really threw a monkey wrench into my code when I would try to move a 
select number of files from one project to another, or would try relative 
imports. If relative imports cause such headaches with circular references, 
then I should generally stick to the full module path when referencing things. 
But if the full module path isn't portable, then I should use relative imports.

Likewise, if I run as a PyDev unit test, my module name is NOT __main__, so 
special path checks for __main__ won't work.

I think the bottom line is that the import system gave me headaches as a 
beginner user, and as an advanced user it still does every now and then, so it 
really should be changed to something more intuitive or forgiving. I really 
shouldn't have to sit and think about how to reference a function in a file 
just one directory level below mine.

If there is already some magic bullet for this, then it should probably be 
more visible.
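
For reference, one hedged workaround (not from the original report): run the 
test as a module of its package, so __package__ is set and "from .. import a" 
can resolve. A minimal sketch, run from the directory containing startPoint.py:

# Equivalent to: python -m subModule1.tests.test_a
import runpy
runpy.run_module("subModule1.tests.test_a", run_name="__main__", alter_sys=True)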

--
components: Interpreter Core
messages: 171208
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: relative import headaches
type: enhancement
versions: Python 3.2




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

New submission from James Hutchison:

Windows 7 64-bit, Python 3.2.3

This is a very odd issue and I haven't figured out what caused it. I have a 
python script that runs continuously. When it receives a request to do a task, 
it creates a new thread (not a new process), does the task, then sends out an 
e-mail to indicate it was done. It doesn't do this a lot (maybe 10 - 30 times a 
day). This is all the script does.

Well today, it randomly broke. It was working fine, then suddenly I was getting 
the following error when it would create an SMTP object:

socket.error: [Errno 10013] An attempt was made to access a socket in a way 
forbidden by its access permissions

What I found was that even after trying a different script that did nothing but 
create a simple SMTP connection to localhost, running with admin privileges, 
and even after rebooting the machine, I was still getting this error. I checked 
my firewall and didn't see any changes (I share a group policy, so I do not 
have full control). I also tried the script on a different machine and found no 
issue. The issue also persisted between IDLE and simply running python.exe.

When I tried to debug the issue, I discovered the following piece of code made 
the issue go away:

s = socket.socket()
s.bind(('',50007))
s.listen(1);
s.close();

And it hasn't come back since. I've tried to reproduce the circumstances that 
my script was running by creating SMTP instances in a loop but haven't been 
able to recreate the error. Checking my log, there isn't anything abnormal that 
occurred just before this issue (like attempting to do the task several times 
at once or something).

--
components: Library (Lib)
messages: 169055
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: socket error [Errno 10013] when creating SMTP object
type: behavior
versions: Python 3.2




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

James Hutchison added the comment:

This is the traceback I was getting when it was just a script that simply made 
an SMTP connection then closed it. It fails before it attempts to connect to 
the server.

Traceback (most recent call last):
  File "C:\tmp\manysmtptest.py", line 8, in <module>
    main();
  File "C:\tmp\manysmtptest.py", line 4, in main
    a = SMTP(myserver);
  File "C:\Python32\lib\smtplib.py", line 259, in __init__
    (code, msg) = self.connect(host, port)
  File "C:\Python32\lib\smtplib.py", line 319, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "C:\Python32\lib\smtplib.py", line 294, in _get_socket
    return socket.create_connection((host, port), timeout)
  File "C:\Python32\lib\socket.py", line 404, in create_connection
    raise err
  File "C:\Python32\lib\socket.py", line 395, in create_connection
    sock.connect(sa)
socket.error: [Errno 10013] An attempt was made to access a socket in a way 
forbidden by its access permissions

What I don't get is why rebooting didn't fix the problem. You'd think that, 
whether it was a Python or Windows issue, things would resolve themselves 
after a reboot. All the other programs I was using seemed to work fine.

--




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

James Hutchison added the comment:

That makes no sense. Why does:

s = socket.socket()
s.bind(('',50007))
s.listen(1);
s.close();

fix the issue then?

Re-opening; this issue should be understood, because having such an operation 
randomly fail is unacceptable for a production system. How does Python choose 
a port to open a connection on? If Windows reports the wrong error, then 
shouldn't Python try a different port number anyway (considering I have no 
control over what port Python chooses to open a connection on)?

--
status: closed -> open




[issue15780] IDLE (windows) with PYTHONPATH and multiple python versions

2012-08-24 Thread James Hutchison

New submission from James Hutchison:

One issue I've encountered is someone else's software setting PYTHONPATH to 
its own install directory of Python. We have some old software that installs 
and uses Python 2.3 scripts, and unfortunately this prevents the IDLE shortcuts 
for newer versions of Python from working correctly on the system. This could 
be remedied by adding -E to the shortcut, but Windows doesn't allow me to 
directly edit the IDLE shortcut, and it hides how IDLE is launched via Python 
in the first place, making it a challenge to create my own. I'd remove 
PYTHONPATH from the system environment variables, except I don't know why it's 
set in the first place.

Suggestion: an IDLE safe mode which runs with -E, OR include a batch file for 
IDLE so that I can easily append my own command-line arguments and create my 
own shortcuts.
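
A sketch of that suggestion (an assumption, not an existing file; the install 
path is hypothetical): a tiny launcher script that a shortcut can point at, 
which starts IDLE with -E so PYTHONPATH is ignored.

# idle32_safe.py -- hypothetical launcher for IDLE that ignores PYTHONPATH
import subprocess
import sys

PYTHON32 = r"C:\python32\python.exe"   # assumed install location
subprocess.call([PYTHON32, "-E", "-m", "idlelib.idle"] + sys.argv[1:])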

--
components: IDLE
messages: 169082
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: IDLE (windows) with PYTHONPATH and multiple python versions
type: enhancement
versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

James Hutchison added the comment:

It's from the example.

http://docs.python.org/library/socket.html#example

--




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

James Hutchison added the comment:

The firewall is disabled for my machine.

So the options are:
1. Port was in use: possible, except that normally gives a different error
2. Port was firewalled: firewall was disabled
3. Port misuse: not likely, because this wouldn't be random
4. Port was in a restricted port range for some reason: admin privileges did 
not resolve it

#1 seems the most plausible so far

--




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

James Hutchison added the comment:

I can connect to all of the IPs for my server without issue.

Found this (quoting the bind documentation):

"Another possible reason for the WSAEACCES error is that when the bind function 
is called (on Windows NT 4.0 with SP4 and later), another application, service, 
or kernel mode driver is bound to the same address with exclusive access. Such 
exclusive access is a new feature of Windows NT 4.0 with SP4 and later, and is 
implemented by using the SO_EXCLUSIVEADDRUSE option."

So that explains why you may see either WSAEADDRINUSE or WSAEACCES.

So at this point, I think the question is: how does Python choose an outgoing 
port?
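
For what it's worth (an explanatory aside, not from the thread): CPython does 
not pick the outgoing port itself. create_connection() calls connect() on an 
unbound socket, and the OS assigns the ephemeral source port at that moment; 
getaddrinfo() only resolves the destination. A small sketch to observe the 
chosen port ("myserver" is a placeholder):

import socket

s = socket.create_connection(("myserver", 25), timeout=10)
print(s.getsockname())   # e.g. ('10.0.0.5', 52431) -- OS-assigned source port
s.close()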

--




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

Changes by James Hutchison jamesghutchi...@gmail.com:


--
status: closed -> open




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

James Hutchison added the comment:

Looks to me like Python grabs an outgoing port number via non-random means, 
and if it happens to map to a port taken by a service that demands exclusive 
access, then it returns the WSAEACCES error instead of WSAEADDRINUSE. Because 
this is a fairly new feature in Windows (NT 4.0 w/ SP4), Python does not 
properly account for it by simply trying another port like it would if it got 
WSAEADDRINUSE.

Arguably this would affect all aspects of Python that use sockets.

The function in question seems to be:

getaddrinfo(host, port, 0, SOCK_STREAM)

which, if I'm understanding the code correctly, returns the source address 
used to make the connection in this instance. Unfortunately, that function is 
in a compiled binary, so I don't know how it works.

If Windows is responsible for giving out the bad port number, I would argue 
that Python still needs to account for the problem, since this is obviously a 
major issue if it happens to someone. The fact that you mentioned 
WSAEADDRINUSE isn't the error seems to imply Python is doing the selection.

If Python is using a Windows library to get the outgoing port, it is also 
possible that that library is outdated and a newer one is needed.

--




[issue15779] socket error [Errno 10013] when creating SMTP object

2012-08-24 Thread James Hutchison

Changes by James Hutchison jamesghutchi...@gmail.com:


--
components: +Windows




[issue15702] Multiprocessing Pool deadlocks on join after empty map operation

2012-08-16 Thread James Hutchison

New submission from James Hutchison:

The following code deadlocks on Windows 7 64-bit, Python 3.2.3:

If you have a pool issue a map operation over an empty iterable and then try 
to join later, it will deadlock. If there is no map operation, or "blah" in 
the code below isn't empty, it does not deadlock.

from multiprocessing import Pool

def main():
    p = Pool();
    blah = [];
    print("Mapping");
    p.map(dummy, blah);
    p.close();
    p.join(); # deadlocks here
    print("Done");

def dummy(x):
    pass;

if __name__ == "__main__":
    main();
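
A possible workaround sketch (an assumption, not a fix from this report): skip 
the map when the iterable is empty, so close()/join() behave normally.

from multiprocessing import Pool

def dummy(x):
    pass

def main():
    p = Pool()
    blah = []
    if blah:                 # skip the empty map that triggers the hang
        p.map(dummy, blah)
    p.close()
    p.join()
    print("Done")

if __name__ == "__main__":
    main()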

--
components: Library (Lib)
messages: 168408
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: Multiprocessing Pool deadlocks on join after empty map operation
type: behavior
versions: Python 3.2




[issue10376] ZipFile unzip is unbuffered

2012-05-01 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

See attached, which opens a zipfile that contains one file and reads it a 
bunch of times using unbuffered and buffered idioms. This was tested on 
Windows using Python 3.2.

You're in charge of coming up with a file to test it on. Sorry.

Example output:

Enter filename: test.zip
Timing unbuffered read, 5 bytes at a time. 10 loops
took 6.67131335449
Timing buffered read, 5 bytes at a time (4000 byte buffer). 10 loops
took 0.7350001335144043
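
A sketch of the buffered idiom being timed (assumed, since the attachment 
itself is not part of this digest): do one large read into memory, then make 
the many small reads against that in-memory buffer.

import io
import zipfile

zf = zipfile.ZipFile("test.zip", mode="r", allowZip64=True)
name = zf.namelist()[0]
buffered = io.BytesIO(zf.read(name))   # one big read into an in-memory buffer
while True:
    chunk = buffered.read(5)           # small reads now hit memory, not the zip
    if not chunk:
        break
zf.close()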

--
Added file: http://bugs.python.org/file25432/zipfiletest.py




[issue14478] Decimal hashing very slow, could be cached

2012-04-10 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

In the patch, this:

+except AttributeError:
+pass

should be:

+except:

with everything inside the except statement.

Checking for the AttributeError is very slightly slower. Not by a lot, but I 
think if we're going for speed we might as well be as fast as possible. I 
can't imagine any other exception coming from that try statement.

Using "except: pass", as opposed to sticking everything inside the except 
statement, is also very slightly slower.

Simple test case, 10 million loops:
except: 7.140999794006348
except AttributeError: 7.8440001010894775

Exception code:
in except: 7.48367575073
after except/pass: 7.75

--




[issue4892] Sending Connection-objects over multiprocessing connections fails

2012-04-07 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

@pitrou

You can just delete my original post. I'll repost an edited version here for 
reference.

Original post, with paths removed:
This is an issue for me (Python 3.2). I have a custom pool that sends 
arguments for a function call over a pipe. I cannot send another pipe as an 
argument.

Tim's workaround also does not work for me (Win XP, 32-bit and 64-bit).

From what I can tell, you can only send a connection as a direct argument to a 
function call. This limits what I can do, because I cannot introduce new pipes 
to a worker process after it is instantiated.

Using this code:

def main():
    from multiprocessing import Pipe, reduction
    i, o = Pipe()
    print(i);
    reduced = reduction.reduce_connection(i)
    print(reduced);
    newi = reduced[0](*reduced[1])
    print(newi);
    newi.send("hi")
    o.recv()

if __name__ == "__main__":
    main();

This is my output:

<read-write PipeConnection, handle 1760>
(<function rebuild_connection at 0x00FD4C00>,
(('.\\pipe\\pyc-3156-1-q5wwnr', 1756, False), True, True))
<read-write Connection, handle 1720>
    newi.send("hi")
IOError: [Errno 10038] An operation was attempted on something that is not a 
socket

As you can see, the handle changes.

--




[issue4892] Sending Connection-objects over multiprocessing connections fails

2012-04-07 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

Shouldn't reduce_pipe_connection just be an alias for reduce_connection on 
Unix, so that using reduce_pipe_connection would work for both Windows and 
Unix? My understanding after looking at the code is that 
reduce_pipe_connection isn't defined for non-win32, although I haven't tested 
whether that's true.

Of course, ideally a pipe connection would just pickle and unpickle properly 
out of the box, which I think was the original intent.

Here's a complete, working example with Python 3.2, tested on Win 7 64-bit:

import sys
from multiprocessing import Process, Pipe, reduction

def main():
    print("starting");
    i, o = Pipe(False)
    parent, child = Pipe();
    reducedchild = reduce_pipe(child);
    p = Process(target=helper, args=(i,));
    p.start();
    parent.send("hi");
    o.send(reducedchild);
    print(parent.recv());
    print("finishing");
    p.join();
    print("done");

def helper(inPipe):
    childPipe = expand_reduced_pipe(inPipe.recv());
    childPipe.send("child got: " + childPipe.recv());
    return;

def reduce_pipe(pipe):
    if sys.platform == "win32":
        return reduction.reduce_pipe_connection(pipe);
    else:
        return reduction.reduce_connection(pipe);

def expand_reduced_pipe(reduced_pipe):
    return reduced_pipe[0](*reduced_pipe[1]);

if __name__ == "__main__":
    main();

--




[issue4892] Sending Connection-objects over multiprocessing connections fails

2012-04-06 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

This is an issue for me (Python 3.2). I have a custom pool that sends 
arguments for a function call over a pipe. I cannot send another pipe as an 
argument.

Tim's workaround also does not work for me (Win XP, 32-bit and 64-bit).

From what I can tell, you can only send a connection as a direct argument to a 
function call. This limits what I can do, because I cannot introduce new pipes 
to a worker process after it is instantiated.

Using this code:

def main():
    from multiprocessing import Pipe, reduction
    i, o = Pipe()
    print(i);
    reduced = reduction.reduce_connection(i)
    print(reduced);
    newi = reduced[0](*reduced[1])
    print(newi);
    newi.send("hi")
    o.recv()

if __name__ == "__main__":
    main();

This is my output:

<read-write PipeConnection, handle 1760>
(<function rebuild_connection at 0x00FD4C00>,
(('.\\pipe\\pyc-3156-1-q5wwnr', 1756, False), True, True))
<read-write Connection, handle 1720>
Traceback (most recent call last):
  File "H:\mti\secure\Flash\Reliability\Perl_Rel\Ambyx\James\bugs\test.py", line 47, in <module>
    main();
  File "H:\mti\secure\Flash\Reliability\Perl_Rel\Ambyx\James\bugs\test.py", line 43, in main
    newi.send("hi")
IOError: [Errno 10038] An operation was attempted on something that is not a 
socket

As you can see, the handle changes.

--
nosy: +Jimbofbx




[issue4892] Sending Connection-objects over multiprocessing connections fails

2012-04-06 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

Err, is it possible to edit out those file paths? I didn't intend for them to 
be in the message. I'd appreciate it if someone with the privileges to do so 
could remove them.

--




[issue14478] Decimal hashing very slow, could be cached

2012-04-05 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

I presume you mean in 3.2? Have you looked at the source code for that 
decorator? It's fundamentally a try/except, but with a lot more unnecessary 
bloat than is needed for caching a single int result from a function with no 
arguments. It's actually a lot slower.

If this is likely going to see use in 3.3, then it would probably just be a 
long int, since 3.3 uses C; 0 would indicate uncalculated.

The hash function would have to be set up to never return 0.

Also, every function would need to be tested to make sure there isn't any 
in-place modification of the Decimal object that could alter the hash value.

I like how the cached hash in 3.3 is faster than int for hashing. Shouldn't an 
int just return itself? Why would it be slower?

--




[issue14478] Decimal hashing very slow, could be cached

2012-04-02 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

Tested on 3.2

Note that I noticed Decimal is supposed to be faster in 3.3, but I thought I 
would bring this to light just in case it's still relevant.

Decimal hashing is very slow, even for simple numbers. I found that by caching 
the hash result, there is a significant speed-up whenever a Decimal value is 
reused.

I created a class that inherits from Decimal and changed the __hash__ function 
to try/except a stored hash value as such:

def __hash__(self):
    try: return self.myHash;
    except:
        self.myHash = super().__hash__();
        return self.myHash;

Example code:

d = dict();
start = time.time();
i = int(1);
j = int(2);
k = int(3);
for i in range(10):
    d[i,j,k] = 5;
print(time.time() - start);

d = dict();
start = time.time();
i = Decimal(1);
j = Decimal(2);
k = Decimal(3);
for i in range(10):
    d[i,j,k] = 5;
print(time.time() - start);

d = dict();
start = time.time();
i = CachingDecimal(1);
j = CachingDecimal(2);
k = CachingDecimal(3);
for i in range(10):
    d[i,j,k] = 5;
print(time.time() - start);

Output:
int:  0.0463133544922
Decimal:  2.31263760376
CachingDecimal:  0.111335144043

In a real-life example, I changed some of the values in my code from int to 
Decimal because non-whole numbers needed to be supported (and this seemed like 
the easiest way to do so without having to worry about my == comparisons 
breaking), and it slowed my code down massively. Changing to a CachingDecimal 
type sped up one code block from 92 seconds with Decimal to 7 seconds, which 
was much closer to the original int speed.

Note that all of the relevant operations have to be overloaded to return the 
CachingDecimal type, which is a pain, so this makes a lot of sense to 
implement in the Decimal module. My understanding is that altering a Decimal 
value will always create a new Decimal object, so there shouldn't be any 
issues with the cached hash desyncing from the correct hash. Someone may want 
to check that, though.
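
A minimal sketch of the subclass approach described above (names are made up; 
only one arithmetic overload is shown, which is exactly the pain point 
mentioned):

from decimal import Decimal

class CachingDecimal(Decimal):
    def __hash__(self):
        try:
            return self._myhash
        except AttributeError:
            self._myhash = super().__hash__()   # computed once, then cached
            return self._myhash
    def __add__(self, other):
        # every operation must re-wrap, or results fall back to plain Decimal
        return CachingDecimal(super().__add__(other))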

Thanks,

James

--
components: Library (Lib)
messages: 157369
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: Decimal hashing very slow, could be cached
type: performance
versions: Python 3.2




[issue14478] Decimal hashing very slow, could be cached

2012-04-02 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

If I increase the cycles 10x with 3.2, I get:

int:  0.421313354492
Decimal:  24.20299983024597
CachingDecimal:  1.7809998989105225

The sample you have provided is basically what I'm using. See attached.

What about the worst-case hash calculation time for Decimal? How does the C 
code handle that? This is if I increase the values themselves by 100x, with 
the same number of loops as above:

int:  0.5309998989105225
CachingDecimal:  2.07868664551
Decimal:  41.2979998588562

--
Added file: http://bugs.python.org/file25100/cachedDecimal.py




[issue14478] Decimal hashing very slow, could be cached

2012-04-02 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

"100x" above should be "e100".

--




[issue13678] way to prevent accidental variable overriding

2011-12-29 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

Is there currently a way in Python to elegantly throw an error if a variable 
is already in the current scope?
For example:

def longfunc(self, filename):
    FILE = open(filename);
    header = FILE.readline();
    ... bunch of code ...
    childfiles = self.children;
    for child in childfiles:
        FILE = open(child);
        header = FILE.readline();
        ... do something with header ...
        for line in FILE:
            ... etc ...

In this case, I'm accidentally overriding the old values of FILE and header, 
resulting in a bug. But I'm not going to catch this. I've had a couple of 
real-life bugs due to this that were a lot more subtle and lived for months 
without anyone noticing the output data was slightly wrong.

This situation could be prevented if there were a way to say something along 
the lines of "new FILE = open(child)" or "new header = FILE.readline()" and 
have Python throw an error to let me know that it already exists. This would 
also make code clearer, because it allows the intended scope of a variable to 
become more apparent. Since "new var = something" is currently a syntax error, 
adding this functionality wouldn't break old code, as long as Python would 
still allow 'new' (or whatever the keyword would end up being) to also be a 
variable name (like "new new = 1" or "new = 1").

--
components: Interpreter Core
messages: 150344
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: way to prevent accidental variable overriding
type: enhancement
versions: Python 3.2, Python 3.3, Python 3.4




[issue13678] way to prevent accidental variable overriding

2011-12-29 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

For starters, this would be the most efficient implementation:

def unique(varname, value, scope):
    assert(not varname in scope);
    scope[varname] = value;

Usage:

unique('b', 1, locals());
print(b);
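
One caveat worth flagging (an aside, not from the original message): in 
CPython, writing to locals() inside a function has no effect on the function's 
actual local variables, so this helper only behaves as intended at module 
scope, where locals() is a real namespace dict.

def demo():
    unique('b', 1, locals())   # assertion passes, but the write is discarded
    # print(b)                 # would raise NameError inside a function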

But you can't put that in a loop, or else it will false-trigger. Ideally it 
wouldn't false-trigger; this could be done by having Python internally 
associate a line number with each explicit variable declaration.

Anyway, an external Python function would be too slow for my use case. 
Likewise, since it would be something you could use a lot, it should be 
implemented in the underlying C code to give it minimal overhead.

Keeping functions small is very impractical at times. I shouldn't have to 
create 50 randomly named single-use functions in my class as a safeguard 
against accidental overwriting when I have a legitimately complicated piece of 
code that can't be dissected without becoming unreadable. In many cases I 
might need 8 or 9 locals at a time in a single line in each loop section.

I don't see how this is the area of pylint/pyflakes at all. The idea is to 
make it so the human doesn't have to carefully inspect their code in order to 
decide whether they made a mistake or not. Inspecting a long list of warnings 
is no better, and arguably I could pull up a bunch of Python language design 
decisions and ask why they were made if pylint/pyflakes exists.

If such a change would have to be implemented after much consideration and 
discussion, I don't see how closing my post helps accomplish that.

--




[issue11990] redirected output - stdout writes newline as \n in windows

2011-05-16 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

I would like to add that on Windows, input() now adds a \r at the end, which 
it didn't in 3.1. It doesn't do it in IDLE. This is using just the regular 
console window that opens up when you double-click. I'm guessing this is 
related to the issue I saw earlier:

code:
from time import sleep

item = input("Input something\n");
print("%s %i" % (item, len(item)));
sleep(15);
sleep(15);

in IDLE:
Input something
something
something 9

in console window:
Input something
something
 10ething

ouch

--




[issue12020] Attribute error with flush on stdout,stderr

2011-05-06 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

When upgrading from Python 3.1 to Python 3.2, I noticed that when my program 
closed, it printed out a non-consequential AttributeError exception. My 
program had a custom class that replaced stdout and stderr for use in a piped 
program (it flushed the buffer after every print statement).

I was able to reduce my code down to this simple test case that reproduces 
the issue. Note that this doesn't show up in IDLE.

code:
import sys
from time import sleep
import subprocess

python31loc = r"C:\python31\python.exe";
python32loc = r"C:\python32\python.exe";
myname = "attributeError.py";

class FlushFile(object):
    # Write-only flushing wrapper for file-type objects.
    def __init__(self, f):
        self.f = f
        try:
            self.encoding = f.encoding;
        except:
            pass;
    def write(self, x):
        self.f.write(x)
        self.f.flush()

# sets stdout and stderr to autoflush
def setAutoFlush():
    if sys.__stdout__ != None: # will be None in IDLE
        sys.stdout = FlushFile(sys.__stdout__);
        sys.stderr = FlushFile(sys.__stderr__);

if __name__ == "__main__":
    setAutoFlush();
    if(len(sys.argv) == 1):
        print("Testing python 3.1");
        output = subprocess.check_output("%s %s -output" % (python31loc, myname));
        print("Should see no error");
        print("Testing python 3.2");
        output = subprocess.check_output("%s %s -output" % (python32loc, myname));
        print("Should see no error");
        sleep(16);


Output:
Testing python 3.1
Should see no error
Testing python 3.2
Exception AttributeError: 'flush' in <__main__.FlushFile object at 0x00C347F0> ignored
Should see no error

--
components: IO, Windows
messages: 135347
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: Attribute error with flush on stdout,stderr
type: behavior
versions: Python 3.2




[issue12020] Attribute error with flush on stdout,stderr

2011-05-06 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

You are right; when I add:

def flush(self):
    pass;

the error goes away.

When I have this:

def flush():
    pass;

I get:

Exception TypeError: 'flush() takes no arguments (1 given)' in <__main__.FlushFile object at 0x00C2AB70> ignored

This leads me to believe that sys.stdout.flush() is being called on program 
close.

So this would be the correct implementation of my FlushFile override:

class FlushFile(object):
    # Write-only flushing wrapper for file-type objects.
    def __init__(self, f):
        self.f = f;
        self.flush = f.flush;
        try:
            self.encoding = f.encoding;
        except:
            pass;
    def write(self, x):
        self.f.write(x)
        self.f.flush()
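
An alternative sketch (an assumption, not from the thread): delegate unknown 
attributes to the wrapped file, so flush(), fileno(), encoding, etc. all 
resolve without listing each one.

class FlushFile(object):
    # Write-only flushing wrapper for file-type objects.
    def __init__(self, f):
        self.f = f
    def write(self, x):
        self.f.write(x)
        self.f.flush()
    def __getattr__(self, name):
        # called only when normal lookup fails; forwards flush, encoding, ...
        return getattr(self.f, name)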

--




[issue11990] redirected output - stdout writes newline as \n in windows

2011-05-04 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

Yes and no. I can give you a single-process, single-child example that just 
shows that Python 3.2 uses binary output while Python 3.1 used the system 
default when piping, but trying to reproduce the multiprocessing output 
inconsistencies would be... difficult. Unfortunately, the software I used to 
spot the \n / \r\n inconsistency is proprietary. After creating a sample case 
in pure Python, I couldn't reproduce the inconsistent \r\n issue in Python 
3.2, so I cannot say for certain that it wasn't caused by my C# program. I 
wouldn't worry about the inconsistent endlines for now.

Note that I don't see the What's New documentation mentioning that 3.2 changed 
the behavior of piped output. Kind of a big deal.

Sample code to compare 3.1 and 3.2 piped output:

import sys;
import os;
import subprocess;
from time import sleep;

python31loc = r"C:\python31\python.exe";
python32loc = r"C:\python32\python.exe";

myname = "IOPipetest.py";

def main(args):
    if(len(args) == 1):
        # main code
        print("Testing python 3.1 endlines.", end='\r\n');
        output = subprocess.check_output("%s %s -output" % (python31loc, myname));
        print(output);
        print("Testing python 3.2 endlines.", end='\r\n');
        output = subprocess.check_output("%s %s -output" % (python32loc, myname));
        print(output);
        sleep(7);
    else:
        for i in range(4):
            print("TESTING DEFAULT"); # endline is supposed to be automatic
            print("TESTING SLASH-EN\n", end='');
            print("TESTING WINDOW-RETURN\r\n", end='');

if __name__ == "__main__":
    main(sys.argv);

--

sample output:

Testing python 3.1 endlines.

b'TESTING DEFAULT\r\nTESTING SLASH-EN\r\nTESTING WINDOW-RETURN\r\r\nTESTING 
DEFAULT\r\nTESTING SLASH-EN\r\nTESTING WINDOW-RETURN\r\r\nTESTING 
DEFAULT\r\nTESTING SLASH-EN\r\nTESTING WINDOW-RETURN\r\r\nTESTING 
DEFAULT\r\nTESTING SLASH-EN\r\nTESTING WINDOW-RETURN\r\r\n'
Testing python 3.2 endlines.

b'TESTING DEFAULT\nTESTING SLASH-EN\nTESTING WINDOW-RETURN\r\nTESTING 
DEFAULT\nTESTING SLASH-EN\nTESTING WINDOW-RETURN\r\nTESTING DEFAULT\nTESTING 
SLASH-EN\nTESTING WINDOW-RETURN\r\nTESTING DEFAULT\nTESTING SLASH-EN\nTESTING 
WINDOW-RETURN\r\n'
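
A possible Python-side workaround (an assumption, not something proposed in 
this thread): rewrap stdout so every "\n" is translated to "\r\n" even when 
the stream is piped, matching the 3.1 behavior.

import io
import sys

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, newline='\r\n',
                              line_buffering=True)
print("TESTING DEFAULT")   # arrives in the pipe as b'TESTING DEFAULT\r\n'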

--




[issue11990] redirected output - stdout writes newline as \n in windows

2011-05-03 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

On Windows, 64-bit, Python *mostly* writes only a \n to stdout even though its 
mode is 'w'. However, it sometimes writes a \r\n on certain print statements, 
and erratically when I have multiple processes writing to stdout.

Output looks fine in the console, in IDLE, and using v3.1.

Example with multiple processes writing to stdout using the same code: 
print("TESTCODE"); (note that on Windows, the naked \n is ignored):

TESTCODETESTCODE
TESTCODE
TESTCODE
TESTCODETESTCODETESTCODE
TESTCODE

The Windows program that calls Python and receives its piped output is a C# 
.NET program.

--
components: IO, Windows
messages: 135076
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: redirected output - stdout writes newline as \n in windows
type: behavior
versions: Python 3.2




[issue11990] redirected output - stdout writes newline as \n in windows

2011-05-03 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

Sorry there isn't more info, but I'm really busy right now.

In fact, a workaround would be appreciated if known.

--




[issue11990] redirected output - stdout writes newline as \n in windows

2011-05-03 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

Never mind; I have a workaround that didn't require rewriting all the print 
statements, but it's in the C# code, not the Python code.
--




[issue10376] ZipFile unzip is unbuffered

2010-11-09 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

The zipfile module's read is always unbuffered (tested v3.1.2, Windows XP, 
32-bit). This means that if one has to do many small reads, it is a lot slower 
than reading a chunk of data into a buffer and then reading from that buffer. 
It seems logical that the zipfile module should default to buffered reading 
and/or have a buffered argument. Likewise, the documentation should clarify 
that there is no buffering involved when doing a read, which runs contrary to 
the default behavior of a normal read.

start Zipfile read
done
27432 reads done
took 0.859 seconds
start buffered Zipfile read
done
27432 reads done
took 0.072 seconds
start normal read (default buffer)
done
27432 reads done
took 0.139 seconds
start buffered normal read
done
27432
took 0.137 seconds

--
assignee: d...@python
components: Documentation, IO, Library (Lib)
messages: 120871
nosy: Jimbofbx, d...@python
priority: normal
severity: normal
status: open
title: ZipFile unzip is unbuffered
type: performance
versions: Python 2.5, Python 2.6, Python 2.7, Python 3.1




[issue10376] ZipFile unzip is unbuffered

2010-11-09 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

I should clarify that this is the zipfile constructor I am using:

zipfile.ZipFile(filename, mode='r', allowZip64=True);

--




[issue10332] Multiprocessing maxtasksperchild results in hang

2010-11-05 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

v.3.2a3

If the maxtasksperchild argument is used, the program will just hang after 
whatever that value is, rather than working as expected. Tested on Windows XP 
32-bit.

test code:

import multiprocessing

def f(x):
    return 0;

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2, maxtasksperchild=1);
    results = list();
    for i in range(10):
        results.append(pool.apply_async(f, (i)));
    pool.close();
    pool.join();
    for r in results:
        print(r);
    print("Done");
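
An aside on the repro (hedged; whether it affects the reported hang is 
untested here): (i) is just a parenthesized int, not a tuple, and print(r) 
prints the AsyncResult object rather than the result. A cleaned-up variant:

import multiprocessing

def f(x):
    return 0

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2, maxtasksperchild=1)
    results = [pool.apply_async(f, (i,)) for i in range(10)]  # (i,) is a tuple
    pool.close()
    pool.join()
    for r in results:
        print(r.get())      # get() fetches the actual return value
    print("Done")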

--
components: Library (Lib)
messages: 120547
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: Multiprocessing maxtasksperchild results in hang
versions: Python 3.2




[issue9801] Can not use append/extend to lists in a multiprocessing manager dict

2010-09-23 Thread James Hutchison

James Hutchison jamesghutchi...@gmail.com added the comment:

Is there a way to make this behave more intuitively? You'd think adding a 
managed list to a managed dictionary (or another managed list), or making a 
deep copy, would work, but it still doesn't. When you get an item from a 
managed data structure, it seems to return a data-only copy of the object 
instead of a handle to the manager of the object. The fact that += (extend) 
works but .extend() doesn't also seems to raise a flag for me (although I do 
understand why this is). I don't think it should behave this way.

i.e.:

currently:
d['l'] -> returns copy.deepcopy(d['l'])

should be:
d['l'] -> returns managerObject(d['l'])

where managerObject is a managed object that runs on the same process as the 
manager it came from.

Problem: currently there is no easy way to do random access without copying 
out and copying back in. I'd think that would be a real efficiency problem.
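
For reference, the usual workaround is reassignment through the proxy, so the 
manager process sees the change; a minimal sketch:

from multiprocessing import Manager

if __name__ == "__main__":
    man = Manager()
    d = man.dict()
    d['l'] = list()
    l = d['l']          # copies the list out of the manager process
    l.append("hey")     # mutates only the local copy
    d['l'] = l          # writes the whole list back through the proxy
    print(d['l'])       # prints ['hey']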

--




[issue9847] Binary strings never compare equal to raw/normal strings

2010-09-13 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

Tested on Python 3.1.2, Windows XP 32-bit.

Binary strings (such as what is returned by filereader.readline()) never 
compare equal to raw or normal strings, even when both strings are empty:
if(b"" == ""):
    print("Strings are equal");
else:
    if(b"" == r""):
        print("raw and binary equal, normal isn't");
    else:
        print("they aren't equal");

output: they aren't equal
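
For reference (a note on semantics, not a change to the interpreter): in 
Python 3, bytes and str never compare equal by design; decoding first makes 
the comparison behave as expected.

line = b""
print(line == "")            # False: bytes compared to str
print(line.decode() == "")   # True: str compared to str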

--
components: Interpreter Core
messages: 116331
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: Binary strings never compare equal to raw/normal strings
type: behavior
versions: Python 3.1




[issue9801] Can not use append/extend to lists in a multiprocessing manager dict

2010-09-08 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

Tested on Python 3.1.2:

man = multiprocessing.Manager();
d = man.dict();
d['l'] = list();
d['l'].append("hey");
print(d['l']);

[]

Using the debugger reveals a KeyError. extend() also does not work. The only 
thing that works is +=, which means you can't insert actual tuples or lists 
into the list. This was all done in a single process.

--
components: Library (Lib)
messages: 115891
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: Can not use append/extend to lists in a multiprocessing manager dict
type: behavior




[issue9803] IDLE closes with save while breakpoint open

2010-09-08 Thread James Hutchison

New submission from James Hutchison jamesghutchi...@gmail.com:

I have multiple versions of Python, 2.6.1 and 3.1.2. 2.6.1 is the primary 
install (i.e., right-clicking on a file and choosing "Edit with IDLE" brings 
up 2.6), and it was installed first. This issue occurs on 3.1.2, Windows XP 
32-bit.

If I highlight a breakpoint, run with the debugger, make a change, then save, 
IDLE will close all windows after saving, without an error message. If I clear 
the breakpoint and then save, IDLE will not close. Reopening the file reveals 
that the save was successful. I do not need to actually be stopped at a 
highlighted breakpoint for this to occur (i.e., this will occur if I save at 
the beginning, before the script has run). If I run with 2.6.1 I do not see 
this issue. The saved change can be something simple, such as adding a space 
somewhere random.

--
components: IDLE
messages: 115894
nosy: Jimbofbx
priority: normal
severity: normal
status: open
title: IDLE closes with save while breakpoint open
type: crash
versions: Python 3.1
