Python IO Redirection to Console

2015-12-13 Thread austin aigbe
Hello,

I am trying to redirect the IO (stdout, stdin and stderr) to the console.
Is there a Python module for this?
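A minimal sketch of the non-console part, for reference: sys.stdout, sys.stderr and
sys.stdin are ordinary attributes that can be rebound to any file-like object (the
log file name below is arbitrary); the Windows-console case is in the follow-up
post below.

    import sys

    log = open('console.log', 'w')   # hypothetical target; any file-like object works
    sys.stdout = log
    sys.stderr = log
    sys.stdout.write('this now goes to console.log\n')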

Thanks.

Regards
-- 
https://mail.python.org/mailman/listinfo/python-list


IO Redirection to Console

2015-12-10 Thread austin aigbe
Hello,

I am trying to port the following C++ code for IO redirection to console.

// C++ (from Synfig)

void redirectIOToConsole()
{
    int hConHandle;
    HANDLE lStdHandle;
    CONSOLE_SCREEN_BUFFER_INFO coninfo;
    FILE *fp;

    // allocate console
    if( GetStdHandle(STD_OUTPUT_HANDLE) != INVALID_HANDLE_VALUE )
        AllocConsole();

    // set the screen buffer to be big enough to let us scroll text
    GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &coninfo);
    coninfo.dwSize.Y = MAX_LINES;
    SetConsoleScreenBufferSize(GetStdHandle(STD_OUTPUT_HANDLE), coninfo.dwSize);

    // redirect unbuffered STDOUT to the console
    lStdHandle = GetStdHandle(STD_OUTPUT_HANDLE);
    hConHandle = _open_osfhandle((intptr_t) lStdHandle, _O_TEXT);
    fp = _fdopen( hConHandle, "w" );
    *stdout = *fp;
    setvbuf( stdout, NULL, _IONBF, 0 );

    // redirect unbuffered STDIN to the console
    lStdHandle = GetStdHandle(STD_INPUT_HANDLE);
    hConHandle = _open_osfhandle((intptr_t) lStdHandle, _O_TEXT);
    fp = _fdopen( hConHandle, "r" );
    *stdin = *fp;
    setvbuf( stdin, NULL, _IONBF, 0 );

    // redirect unbuffered STDERR to the console
    lStdHandle = GetStdHandle(STD_ERROR_HANDLE);
    hConHandle = _open_osfhandle((intptr_t) lStdHandle, _O_TEXT);
    fp = _fdopen( hConHandle, "w" );
    *stderr = *fp;
    setvbuf( stderr, NULL, _IONBF, 0 );

    // make cout, wcout, cin, wcin, wcerr, cerr, wclog and clog
    // point to console as well
    ios::sync_with_stdio();
}



My Python port:

from ctypes import windll, create_string_buffer, Structure, byref
from ctypes.wintypes import DWORD, SHORT, WORD
import os
import msvcrt
import sys

STD_INPUT_HANDLE  = -10
STD_OUTPUT_HANDLE = -11
STD_ERROR_HANDLE  = -12
INVALID_HANDLE_VALUE = DWORD(-1).value

MAX_LINES = 500

def consoleOptionEnabled(argv):
    value = False
    if ("--console" in argv) or ("-c" in argv):
        value = True
    return value

def redirectIOToConsole():
    # COORD and SMALL_RECT defined here rather than taken from ctypes.wintypes
    class COORD(Structure):
        _fields_ = [("X", SHORT),
                    ("Y", SHORT)]

    class SMALL_RECT(Structure):
        _fields_ = [("Left", SHORT),
                    ("Top", SHORT),
                    ("Right", SHORT),
                    ("Bottom", SHORT)]

    class CONSOLE_SCREEN_BUFFER_INFO(Structure):
        _fields_ = [("dwSize", COORD),
                    ("dwCursorPosition", COORD),
                    ("wAttributes", WORD),
                    ("srWindow", SMALL_RECT),
                    ("dwMaximumWindowSize", COORD)]  # COORD, not DWORD

    coninfo = CONSOLE_SCREEN_BUFFER_INFO()

    # allocate console
    if windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE) != INVALID_HANDLE_VALUE:
        windll.kernel32.AllocConsole()

    # set the screen buffer to be big enough to let us scroll text
    windll.kernel32.GetConsoleScreenBufferInfo(
        windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE), byref(coninfo))
    coninfo.dwSize.Y = MAX_LINES
    windll.kernel32.SetConsoleScreenBufferSize(
        windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE), coninfo.dwSize)

    # redirect unbuffered STDOUT to the console
    lStdHandle = windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
    hConHandle = msvcrt.open_osfhandle(lStdHandle, os.O_TEXT)
    sys.stdout = os.fdopen(hConHandle, "w", 0)  # buffering=0 replaces setvbuf(_IONBF)

    # redirect unbuffered STDIN to the console
    lStdHandle = windll.kernel32.GetStdHandle(STD_INPUT_HANDLE)
    hConHandle = msvcrt.open_osfhandle(lStdHandle, os.O_TEXT)
    sys.stdin = os.fdopen(hConHandle, "r", 0)

    # redirect unbuffered STDERR to the console
    lStdHandle = windll.kernel32.GetStdHandle(STD_ERROR_HANDLE)
    hConHandle = msvcrt.open_osfhandle(lStdHandle, os.O_TEXT)
    sys.stderr = os.fdopen(hConHandle, "w", 0)

    # no equivalent of ios::sync_with_stdio() is needed here;
    # sys.stdout, sys.stdin and sys.stderr have been rebound directly


Is there a better way to handle IO redirection to the console in Python?
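For comparison, a much shorter route on Windows is to attach a console and reopen
the standard streams on the console device names. A sketch only, not tested against
the Synfig case above:

    import sys
    import ctypes

    def redirect_io_to_console():
        # harmless if a console already exists; AllocConsole simply fails and returns 0
        ctypes.windll.kernel32.AllocConsole()
        sys.stdout = open('CONOUT$', 'w')
        sys.stderr = open('CONOUT$', 'w')
        sys.stdin = open('CONIN$', 'r')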

Thanks.

Austin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: list comparison vs integer comparison, which is more efficient?

2015-01-04 Thread austin aigbe
On Sunday, January 4, 2015 12:20:26 PM UTC+1, austin aigbe wrote:
> On Sunday, January 4, 2015 8:12:10 AM UTC+1, Terry Reedy wrote:
> > On 1/3/2015 6:19 PM, austin aigbe wrote:
> > 
> > > I am currently implementing the LTE physical layer in Python (ver 2.7.7).
> > > For the qpsk, 16qam and 64qam modulation I would like to know which is 
> > > more efficient to use, between an integer comparison and a list 
> > > comparison:
> > >
> > > Integer comparison: bit_pair as an integer value before comparison
> > >
> > >  # QPSK - TS 36.211 V12.2.0, section 7.1.2, Table 7.1.2-1
> > >  def mp_qpsk(self):
> > >  r = []
> > >  for i in range(self.nbits/2):
> > >  bit_pair = (self.sbits[i*2] << 1) | self.sbits[i*2+1]
> > >  if bit_pair == 0:
> > >  r.append(complex(1/math.sqrt(2),1/math.sqrt(2)))
> > >  elif bit_pair == 1:
> > >  r.append(complex(1/math.sqrt(2),-1/math.sqrt(2)))
> > >  elif bit_pair == 2:
> > >  r.append(complex(-1/math.sqrt(2),1/math.sqrt(2)))
> > >  elif bit_pair == 3:
> > >  r.append(complex(-1/math.sqrt(2),-1/math.sqrt(2)))
> > >  return r
> > >
> > > List comparison: bit_pair as a list before comparison
> > >
> > >  # QPSK - TS 36.211 V12.2.0, section 7.1.2, Table 7.1.2-1
> > >  def mp_qpsk(self):
> > >  r = []
> > >  for i in range(self.nbits/2):
> > >  bit_pair = self.sbits[i*2:i*2+2]
> > >  if bit_pair == [0,0]:
> > >  r.append()
> > >  elif bit_pair == [0,1]:
> > >  r.append(complex(1/math.sqrt(2),-1/math.sqrt(2)))
> > >  elif bit_pair == [1,0]:
> > >  r.append(complex(-1/math.sqrt(2),1/math.sqrt(2)))
> > >  elif bit_pair == [1,1]:
> > >  r.append(complex(-1/math.sqrt(2),-1/math.sqrt(2)))
> > >  return r
> > 
> > Wrong question.  If you are worried about efficiency, factor out all 
> > repeated calculation of constants and eliminate the multiple comparisons.
> > 
> > sbits = self.sbits
> > a = 1.0 / math.sqrt(2)
> > b = -a
> > points = (complex(a,a), complex(a,b), complex(b,a), complex(b,b))
> >  complex(math.sqrt(2),1/math.sqrt(2))
> > def mp_qpsk(self):
> >  r = [points[sbits[i]*2 + sbits[i+1]]
> >  for i in range(0, self.nbits, 2)]
> >  return r
> > 
> > -- 
> > Terry Jan Reedy
> 
> Cool. Thanks a lot.

Hi Terry,

No difference between the int and list comparison in terms of the number of
calls (24) and time (0.004s). The main cost is the repeated call to sqrt().

However, my version took a shorter time (0.004s) with 24 function calls than your
code (0.005s), which made only 13 function calls.

Why is this?
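For reference, a sketch of how stats files like the ones below can be produced and
read back; the module name is taken from the listing, and the -o file is binary
despite the .txt extension:

    # shell: python -m cProfile -o lte_phy_mod.txt lte_phy_layer.py
    import pstats

    p = pstats.Stats('lte_phy_mod.txt')
    p.strip_dirs().sort_stats('cumulative').print_stats(10)  # top 10 by cumulative time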

Integer comparison profile result:
>>> p = pstats.Stats('lte_phy_mod.txt')
>>> p.strip_dirs().sort_stats(-1).print_stats()
Sun Jan 04 12:36:32 2015    lte_phy_mod.txt

         24 function calls in 0.004 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.004    0.004    0.004    0.004 lte_phy_layer.py:16()
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:20(Scrambling)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:276(LayerMapping)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:278(Precoding)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:280(ResourceElementMapping)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:282(OFDMSignalGenerator)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:65(Modulation)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:71(__init__)
        1    0.000    0.000    0.000    0.000 lte_phy_layer.py:87(mp_qpsk)
        1    0.000    0.000    0.000    0.000 {len}
        8    0.000    0.000    0.000    0.000 {math.sqrt}
        4    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {range}



>>>

List comparison:
>>> import pstats
>>> p = pstats.Stats('lte_phy_mod2.txt')
>>> p.strip_dirs().sort_stats(-1).print_stats()
Sun Jan 04 12:57:24 2015    lte_phy_mod2.txt

 24 function calls in 0.004 seconds

   Ordered by: standard name

Re: list comparison vs integer comparison, which is more efficient?

2015-01-04 Thread austin aigbe
On Sunday, January 4, 2015 8:12:10 AM UTC+1, Terry Reedy wrote:
> On 1/3/2015 6:19 PM, austin aigbe wrote:
> 
> > I am currently implementing the LTE physical layer in Python (ver 2.7.7).
> > For the qpsk, 16qam and 64qam modulation I would like to know which is more 
> > efficient to use, between an integer comparison and a list comparison:
> >
> > Integer comparison: bit_pair as an integer value before comparison
> >
> >  # QPSK - TS 36.211 V12.2.0, section 7.1.2, Table 7.1.2-1
> >  def mp_qpsk(self):
> >  r = []
> >  for i in range(self.nbits/2):
> >  bit_pair = (self.sbits[i*2] << 1) | self.sbits[i*2+1]
> >  if bit_pair == 0:
> >  r.append(complex(1/math.sqrt(2),1/math.sqrt(2)))
> >  elif bit_pair == 1:
> >  r.append(complex(1/math.sqrt(2),-1/math.sqrt(2)))
> >  elif bit_pair == 2:
> >  r.append(complex(-1/math.sqrt(2),1/math.sqrt(2)))
> >  elif bit_pair == 3:
> >  r.append(complex(-1/math.sqrt(2),-1/math.sqrt(2)))
> >  return r
> >
> > List comparison: bit_pair as a list before comparison
> >
> >  # QPSK - TS 36.211 V12.2.0, section 7.1.2, Table 7.1.2-1
> >  def mp_qpsk(self):
> >  r = []
> >  for i in range(self.nbits/2):
> >  bit_pair = self.sbits[i*2:i*2+2]
> >  if bit_pair == [0,0]:
> >  r.append()
> >  elif bit_pair == [0,1]:
> >  r.append(complex(1/math.sqrt(2),-1/math.sqrt(2)))
> >  elif bit_pair == [1,0]:
> >  r.append(complex(-1/math.sqrt(2),1/math.sqrt(2)))
> >  elif bit_pair == [1,1]:
> >  r.append(complex(-1/math.sqrt(2),-1/math.sqrt(2)))
> >  return r
> 
> Wrong question.  If you are worried about efficiency, factor out all 
> repeated calculation of constants and eliminate the multiple comparisons.
> 
> sbits = self.sbits
> a = 1.0 / math.sqrt(2)
> b = -a
> points = (complex(a,a), complex(a,b), complex(b,a), complex(b,b))
>  complex(math.sqrt(2),1/math.sqrt(2))
> def mp_qpsk(self):
>  r = [points[sbits[i]*2 + sbits[i+1]]
>  for i in range(0, self.nbits, 2)]
>  return r
> 
> -- 
> Terry Jan Reedy

Cool. Thanks a lot.
-- 
https://mail.python.org/mailman/listinfo/python-list


list comparison vs integer comparison, which is more efficient?

2015-01-03 Thread austin aigbe
Hi,

I am currently implementing the LTE physical layer in Python (ver 2.7.7).
For the qpsk, 16qam and 64qam modulation I would like to know which is more 
efficient to use, between an integer comparison and a list comparison:

Integer comparison: bit_pair as an integer value before comparison

# QPSK - TS 36.211 V12.2.0, section 7.1.2, Table 7.1.2-1
def mp_qpsk(self):
    r = []
    for i in range(self.nbits/2):
        bit_pair = (self.sbits[i*2] << 1) | self.sbits[i*2+1]
        if bit_pair == 0:
            r.append(complex(1/math.sqrt(2),1/math.sqrt(2)))
        elif bit_pair == 1:
            r.append(complex(1/math.sqrt(2),-1/math.sqrt(2)))
        elif bit_pair == 2:
            r.append(complex(-1/math.sqrt(2),1/math.sqrt(2)))
        elif bit_pair == 3:
            r.append(complex(-1/math.sqrt(2),-1/math.sqrt(2)))
    return r

List comparison: bit_pair as a list before comparison

# QPSK - TS 36.211 V12.2.0, section 7.1.2, Table 7.1.2-1
def mp_qpsk(self):
    r = []
    for i in range(self.nbits/2):
        bit_pair = self.sbits[i*2:i*2+2]
        if bit_pair == [0,0]:
            r.append(complex(1/math.sqrt(2),1/math.sqrt(2)))
        elif bit_pair == [0,1]:
            r.append(complex(1/math.sqrt(2),-1/math.sqrt(2)))
        elif bit_pair == [1,0]:
            r.append(complex(-1/math.sqrt(2),1/math.sqrt(2)))
        elif bit_pair == [1,1]:
            r.append(complex(-1/math.sqrt(2),-1/math.sqrt(2)))
    return r
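For what it's worth, the two variants can also be timed head to head with timeit.
A standalone sketch, with random bits standing in for self.sbits:

    import math
    import random
    import timeit

    sbits = [random.randint(0, 1) for _ in range(10000)]
    nbits = len(sbits)

    def mp_qpsk_int():
        r = []
        for i in range(nbits // 2):
            bit_pair = (sbits[i * 2] << 1) | sbits[i * 2 + 1]
            if bit_pair == 0:
                r.append(complex(1 / math.sqrt(2), 1 / math.sqrt(2)))
            elif bit_pair == 1:
                r.append(complex(1 / math.sqrt(2), -1 / math.sqrt(2)))
            elif bit_pair == 2:
                r.append(complex(-1 / math.sqrt(2), 1 / math.sqrt(2)))
            else:
                r.append(complex(-1 / math.sqrt(2), -1 / math.sqrt(2)))
        return r

    def mp_qpsk_list():
        r = []
        for i in range(nbits // 2):
            bit_pair = sbits[i * 2:i * 2 + 2]
            if bit_pair == [0, 0]:
                r.append(complex(1 / math.sqrt(2), 1 / math.sqrt(2)))
            elif bit_pair == [0, 1]:
                r.append(complex(1 / math.sqrt(2), -1 / math.sqrt(2)))
            elif bit_pair == [1, 0]:
                r.append(complex(-1 / math.sqrt(2), 1 / math.sqrt(2)))
            else:
                r.append(complex(-1 / math.sqrt(2), -1 / math.sqrt(2)))
        return r

    print(timeit.timeit(mp_qpsk_int, number=100))
    print(timeit.timeit(mp_qpsk_list, number=100))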

Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


super() in class defs?

2011-05-25 Thread Jess Austin
I may be attempting something improper here, but maybe I'm just going
about it the wrong way. I'm subclassing
http.server.CGIHTTPRequestHandler, and I'm using a decorator to add
functionality to several overridden methods.

def do_decorate(func):
.   def wrapper(self):
.       if appropriate():
.           return func(self)
.       complain_about_error()
.   return wrapper

class myHandler(CGIHTTPRequestHandler):
.   @do_decorate
.   def do_GET(self):
.       return super().do_GET()
.   # also override do_HEAD and do_POST

My first thought was that I could just replace that whole method
definition with one line:

class myHandler(CGIHTTPRequestHandler):
.   do_GET = do_decorate(super().do_GET)

That generates the following error:

SystemError: super(): __class__ cell not found

So I guess that when super() is called in the context of a class def
rather than that of a method def, it doesn't have the information it
needs. Now I'll probably just say:

do_GET = do_decorate(CGIHTTPRequestHandler.do_GET)

but I wonder if there is a "correct" way to do this instead? Thanks!
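A sketch of that workaround spelled out, for reference (appropriate() and
complain_about_error() are the placeholders from the snippet above):

    def do_decorate(func):
        def wrapper(self):
            if appropriate():
                return func(self)
            complain_about_error()
        return wrapper

    class myHandler(CGIHTTPRequestHandler):
        # explicit base reference: zero-argument super() needs the __class__ cell
        # that the compiler only creates for code running inside a method body
        do_GET = do_decorate(CGIHTTPRequestHandler.do_GET)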
-- 
http://mail.python.org/mailman/listinfo/python-list


Python users in Stavanger, Norway?

2011-04-03 Thread Austin Bingham
Hei!

I'm a Python developer in Stavanger, Norway looking for other Python
users/developers/etc. who might be interested in starting a local user
group. Anyone interested? This group might actually evolve into a
general programming/computer group, depending on the mix of people,
but I think that's probably a good thing.

I know there are a lot of computer types in the area, but there
doesn't seem to be much of a "community". I'd like to change that if
we can, so let me know if you're interested.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Compiling python without ssl?

2011-04-01 Thread Austin Bingham
Is there any way to compile python (3.1.3, in case it matters) without
ssl support? OpenSSL is on my system, and configure finds it, but I
can't find a way to tell configure to explicitly ignore it.

I need a version of python without ssl for trade compliance reasons (I
don't make the dumb rules, I just follow 'em), and we used to be able
to just remove the ssl module after the build. With more recent
releases, though, this approach causes problems with e.g. hashlib.

Ideally I'd like something like "configure --disable-ssl", but I can
also do surgery on configure or something if that's what it takes.
Thanks in advance!

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C/C++ Import

2010-02-08 Thread Austin Bingham
Just to elaborate on Terry's point a bit, sys.path is influenced (in
part) by the PYTHONPATH environment variable. If you find that the
directory containing 'python' is not in sys.path (which you can check
with 'import sys; print sys.path'), add that directory to PYTHONPATH
and try again. This may not be the solution you ultimately end up
using, but it'll get you pointed in the right direction.
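Something like this at the top of the embedding script is usually enough to check
(the path is a placeholder; exporting PYTHONPATH achieves the same thing):

    import sys
    print sys.path                      # the directory *containing* the 'python' package must be listed
    sys.path.insert(0, '/path/to/dir')  # hypothetical location of that directory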

Austin

On Mon, Feb 8, 2010 at 5:52 PM, Terry Reedy  wrote:
> On 2/7/2010 10:56 PM, 7H3LaughingMan wrote:
>>
>> To make the background information short, I am trying to take a
>> program that uses Python for scripting and recompile it for Linux
>> since it originally was built to run on Win32. The program itself was
>> designed to be able to be compiled on Linux and someone made there on
>> release with source that added python scripting. After some issues I
>> got it to compile but now it is unable to import the files that it
>> needs.
>>
>> The program is running the following code...
>> PyImport_Import( PyString_FromString("python.PlayerManager") );
>>
>> This is meant to import the file PlayerManager.py inside of the python
>> folder. However it throws the following Python Error (Gotten through
>> PyErr_Print())
>> ImportError: No module named python.PlayerManager
>>
>> I am using 2.6.4 so I can't call it by the filename, does anyone know
>> how to do a proper import?
>
> Your 'python' package directory must be in a directory listed in sys.path. I
> would print that check.
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C/C++ Import

2010-02-07 Thread Austin Bingham
Does the 'python' directory contain a file named '__init__.py'? This
is required to let that directory act as a package (see:
http://docs.python.org/tutorial/modules.html#packages); without it,
you'll see the symptoms you're seeing.
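A sketch of the layout being suggested; the top-level directory name is made up,
and the __init__.py can be completely empty:

    game/                     # must be on sys.path
        python/
            __init__.py       # marks 'python' as a package
            PlayerManager.py  # importable as python.PlayerManager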

Austin

On Mon, Feb 8, 2010 at 4:56 AM, 7H3LaughingMan  wrote:
> To make the background information short, I am trying to take a
> program that uses Python for scripting and recompile it for Linux
> since it originally was built to run on Win32. The program itself was
> designed to be able to be compiled on Linux and someone made there on
> release with source that added python scripting. After some issues I
> got it to compile but now it is unable to import the files that it
> needs.
>
> The program is running the following code...
> PyImport_Import( PyString_FromString("python.PlayerManager") );
>
> This is meant to import the file PlayerManager.py inside of the python
> folder. However it throws the following Python Error (Gotten through
> PyErr_Print())
> ImportError: No module named python.PlayerManager
>
> I am using 2.6.4 so I can't call it by the filename, does anyone know
> how to do a proper import?
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some C-API functions clear the error indicator?

2010-01-29 Thread Austin Bingham
That makes a lot of sense. And if I take the approach that any Py*
function might do this, it actually looks like I can simplify my code
(rather than managing some list of ill-behaved functions or
something.) Thanks!

On Fri, Jan 29, 2010 at 3:58 PM, Duncan Booth
 wrote:
> Austin Bingham  wrote:
>
>> The functions that do this don't seem to indicate in their
>> documentation that this will happen. So first, does anyone know why
>> this is happening? Is it because of the context in which I'm making
>> the calls? Is there any pattern or reason behind which functions will
>> do this? Or am I just doing something wrong?
>>
> (Just guessing here)
> I would expect that any function that executes Python code will clear the
> error.
>
> I think that has to happen otherwise the Python code will throw an
> exception whenever it gets round to checking for errors. In the past I've
> found that if you fail to check for an error in C code before returning to
> the interpreter you get the exception thrown a few instructions later, so
> something similar would probably happen if you call other Python code from
> C.
>
> If it is anything that executes Python then that would include any function
> that creates or destroys an object with Python constructor or destructor
> code. or that compares or otherwise operates on instances defined in
> Python. In particular it might mean that any function that doesn't appear
> to clear the error could do so in a slightly different situation.
>
> --
> Duncan Booth http://kupuguy.blogspot.com
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some C-API functions clear the error indicator?

2010-01-29 Thread Austin Bingham
The original post was, in a nutshell, the "use case"; it's very scaled
down from the real example, and not intended to "make sense". The
notion on which I was (apparently incorrectly) basing my exception
translation was that I could 1) get and reset references to the error
indicators, 2) call other python methods that don't themselves throw,
and 3) later retrieve the same "active" exceptions. I was relying on
this ability to "re-fetch" exceptions insofar as the functions in my
code which dealt with python exceptions all looked up the exception
separately. The predicate that "a successful function won't modify the
error indicators" appears to be wrong, however, and I've modified my
code accordingly.

Austin

On Sat, Jan 30, 2010 at 1:11 AM, Gabriel Genellina
 wrote:
> En Fri, 29 Jan 2010 18:25:14 -0300, Austin Bingham
>  escribió:
>
>> Maybe I'm not following what you're saying. In my case, I already know
>> that an exception has been thrown. In the course of processing that
>> exception, I call another function which, for whatever reason and even
>> when it succeeds, clears the exception indicators. How can I address
>> this issue by checking function calls for failure?
>
> Maybe if you provide an actual use case we can suggest how to handle it. The
> code in your original post does not make any sense to me (except by showing
> that PyImport_ImportModule does clear the error indicator). If you already
> know there was an error, and you even have retrieved the error details, why
> do you care if the error indicator gets reset?
>
> --
> Gabriel Genellina
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Some C-API functions clear the error indicator?

2010-01-29 Thread Austin Bingham
Maybe I'm not following what you're saying. In my case, I already know
that an exception has been thrown. In the course of processing that
exception, I call another function which, for whatever reason and even
when it succeeds, clears the exception indicators. How can I address
this issue by checking function calls for failure?

Austin

On Fri, Jan 29, 2010 at 9:04 PM, Gabriel Genellina
 wrote:
> En Fri, 29 Jan 2010 11:37:09 -0300, Austin Bingham
>  escribió:
>
>> I've noticed that several (many?) python functions seem to clear the
>> error/exception indicators when they're called from a C/C++ program.
>> For example, both PyImport_ImportModule and traceback.extract_tb()
>> (called via the function call methods) do this: if error indicators
>> are set prior to their call (as indicated by PyErr_Fetch, and
>> including a call to PyErr_Restore), I see that they are unset (using
>> the same method) after the call. This happens even when the functions
>> succeed.
>
> It's simple: you have to check *every* function call for failure. Many
> functions return new object references and you have to properly decrement
> them in case of failure, so in most cases this means that you have to check
> each and every call.
>
> --
> Gabriel Genellina
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Some C-API functions clear the error indicator?

2010-01-29 Thread Austin Bingham
I've noticed that several (many?) python functions seem to clear the
error/exception indicators when they're called from a C/C++ program.
For example, both PyImport_ImportModule and traceback.extract_tb()
(called via the function call methods) do this: if error indicators
are set prior to their call (as indicated by PyErr_Fetch, and
including a call to PyErr_Restore), I see that they are unset (using
the same method) after the call. This happens even when the functions
succeed.

The functions that do this don't seem to indicate in their
documentation that this will happen. So first, does anyone know why
this is happening? Is it because of the context in which I'm making
the calls? Is there any pattern or reason behind which functions will
do this? Or am I just doing something wrong?

If the problem is context-dependent (e.g. I haven't properly
established a call stack, or something of that flavor), any pointers
on doing things properly would be great.

Here's some example code demonstrating the problem:

---

  #include 

  int main(int argc, char** argv)
  {
Py_Initialize();

// Cause an IndexError
PyObject* list = PyList_New(0);
PyObject* obj = PyList_GetItem(list, 100);

PyObject *t = NULL, *v = NULL, *tb = NULL;

// Verify that we see the error
PyErr_Fetch(&t, &v, &tb);
assert(t);
PyErr_Restore(t, v, tb);

// Import a module, which seems to be clearing the error indicator
PyObject* mod = PyImport_ImportModule("sys");
assert(PyObject_HasAttrString(mod, "path"));

// Verify that the error indicator has been cleared
PyErr_Fetch(&t, &v, &tb);
assert(!t); // <=== The error is gone!
PyErr_Restore(t, v, tb);

Py_Finalize();

return 0;
  }

---

Thanks in advance.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __eq__() inconvenience when subclassing set

2009-11-02 Thread Jess Austin
On Nov 1, 1:13 am, "Gabriel Genellina"  wrote:
> Looks like in 3.1 this can be done with bytes+str and viceversa, even if  
> bytes and str don't have a common ancestor (other than object; basestring  
> doesn't exist in 3.x):
>
> p3> Base = bytes
> p3> Other = str
> p3>
> p3> class Derived(Base):
> ...   def __eq__(self, other):
> ...     print('Derived.__eq__')
> ...     return True
> ...
> p3> Derived()==Base()
> Derived.__eq__
> True
> p3> Base()==Derived()
> Derived.__eq__
> True
> p3> Derived()==Other()
> Derived.__eq__
> True
> p3> Other()==Derived()
> Derived.__eq__            # !!!
> True
> p3> Base.mro()
> [, ]
> p3> Other.mro()
> [, ]
>
> The same example with set+frozenset (the one you're actually interested  
> in) doesn't work, unfortunately.
> After further analysis, this works for bytes and str because both types  
> refuse to guess and compare to each other; they return NotImplemented when  
> the right-side operand is not of the same type. And this gives that other  
> operand the chance of being called.
>
> set and frozenset, on the other hand, are promiscuous: their  
> tp_richcompare slot happily accepts any set of any kind, derived or not,  
> and compares their contents. I think it should be a bit more strict: if  
> the right hand side is not of the same type, and its tp_richcompare slot  
> is not the default one, it should return NotImplemented. This way the  
> other type has a chance to be called.

Thanks for this, Gabriel!  There seems to be a difference between the
two cases, however:

>>> str() == bytes()
False
>>> set() == frozenset()
True

I doubt that either of these invariants is amenable to modification,
even for purposes of "consistency".  I'm not sure how to resolve this,
but you've definitely helped me here.  Perhaps the test in
set_richcompare can return NotImplemented in particular cases but not
in others?  I'll think about this; let me know if you come up with
anything more.
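A tiny illustration of the NotImplemented mechanism described above, with toy
classes unrelated to set/frozenset:

    class A(object):
        def __eq__(self, other):
            return NotImplemented     # decline; Python then tries other.__eq__(self)

    class B(object):
        def __eq__(self, other):
            print("B.__eq__ called")
            return True

    A() == B()                        # prints "B.__eq__ called" and evaluates to True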

thanks,
Jess
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __eq__() inconvenience when subclassing set

2009-10-30 Thread Jess Austin
On Oct 29, 10:41 pm, "Gabriel Genellina" 
wrote:
> We know the last test fails because the == logic fails to recognize mySet  
> (on the right side) as a "more specialized" object than frozenset (on the  
> left side), because set and frozenset don't have a common base type  
> (although they share a lot of implementation)
>
> I think the only way would require modifying tp_richcompare of  
> set/frozenset objects, so it is aware of subclasses on the right side.  
> Currently, frozenset() == mySet() effectively ignores the fact that mySet  
> is a subclass of set.

I don't think even that would work.  By the time set_richcompare() is
called (incidentally, it's used for both set and frozenset), it's too
late.  That function is not responsible for calling the subclass's
method.  It does call PyAnySet_Check(), but only to short-circuit
equality and inequality for non-set objects.  I believe that something
higher-level in the interpreter decides to call the right-side type's
method because it's a subclass of the left-side type, but I'm not
familiar enough with the code to know where that happens.  It may be
best not to sully such generalized code with a special case for
this.

I may do some experiments with bytes, str, and unicode, since that
seems to be an analogous case.  There is a basestring type, but at
this point I don't know that it really helps with anything.

cheers,
Jess
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __eq__() inconvenience when subclassing set

2009-10-29 Thread Jess Austin
On Oct 29, 3:54 pm, Mick Krippendorf  wrote:
> Jess Austin wrote:
> > That's nice, but it means that everyone who imports my class will have
> > to import the monkeypatch of frozenset, as well.  I'm not sure I want
> > that.  More ruby than python, ne?
>
> I thought it was only a toy class?

Well, I posted a toy, but it's a stand-in for something else more
complicated.  Trying to conserve bytes, you know.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __eq__() inconvenience when subclassing set

2009-10-29 Thread Jess Austin
On Oct 28, 10:07 pm, Mick Krippendorf  wrote:
> You could just overwrite set and frozenset:
>
> class eqmixin(object):
>     def __eq__(self, other):
>         print "called %s.__eq__()" % self.__class__
>         if isinstance(other, (set, frozenset)):
>             return True
>         return super(eqmixin, self).__eq__(other)
>
> class frozenset(eqmixin, frozenset):
>     pass

That's nice, but it means that everyone who imports my class will have
to import the monkeypatch of frozenset, as well.  I'm not sure I want
that.  More ruby than python, ne?

thanks,
Jess
-- 
http://mail.python.org/mailman/listinfo/python-list


__eq__() inconvenience when subclassing set

2009-10-28 Thread Jess Austin
I'm subclassing set, and redefining __eq__().  I'd appreciate any
relevant advice.

>>> class mySet(set):
... def __eq__(self, other):
... print "called mySet.__eq__()!"
... if isinstance(other, (set, frozenset)):
... return True
... return set.__eq__(self, other)
...

I stipulate that this is a weird thing to do, but this is a toy class
to avoid the lengthy definition of the class I actually want to
write.  Now I want the builtin set and frozenset types to use the new
__eq__() with mySet symmetrically.

>>> mySet() == set([1])
called mySet.__eq__()!
True
>>> mySet() == frozenset([1])
called mySet.__eq__()!
True
>>> set([1]) == mySet()
called mySet.__eq__()!
True
>>> frozenset([1]) == mySet()
False

frozenset doesn't use mySet.__eq__() because mySet is not a subclass
of frozenset as it is for set.  I've tried a number of techniques to
mitigate this issue. If I multiple-inherit from both set and
frozenset, I get the instance lay-out conflict error.  I have similar
problems setting mySet.__bases__ directly, and hacking mro() in a
metaclass.  So far nothing has worked.  If it matters, I'm using 2.6,
but I can change versions if it will help.

Should I give up on this, or is there something else I can try?  Keep
in mind, I must redefine __eq__(), and I'd like to be able to compare
instances of the class to both set and frozenset instances.

cheers,
Jess
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 8:10 PM, Anthony Tolle  wrote:
> I think that without a practical example of what this would be used
> for, we're all going to be a little lost on this one.
>
> So far, we've not seen the original problem, only the author's
> preferred method for solving it.  My guess is there are other, more
> pythonic ways to solve the original problem.

The original problem was just a question statement: can I use
alternative uniqueness functions on a set? Indeed, I proposed an idea,
which was that sets could be constructed with user-defined hash and
equality functions, the strengths and weaknesses of which have been
gone over in some detail. The short answer is that what I was looking
for (admittedly, a desire motivated by experiences in other languages)
is not really feasible, at least not without a great deal more work.

The other proposed solutions all require linearly extra storage,
linearly extra time, both, or simply don't solve the problem. And in
any event, they miss the point of the original post which was not "How
can I get a particular behavior?" (which is fairly trivial, both in my
own estimation and as evidenced by the abundance of proposals) but
"Can I get a particular behavior in a particular way?" (the answer to
which, again, seems to be no.)

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 7:49 PM, Ethan Furman  wrote:
> Austin Bingham wrote:
> I'm feeling really dense about now... What am I missing?

What you're missing is the entire discussion up to this point. I was
looking for a way to use an alternative uniqueness criteria in a set
instance without needing to modify my class.

> So is that the behavior you're wanting, keeping the first object and
> discarding all others?  Or is there something else I'm still missing?

Yes and yes. I want "normal" set behavior, but I want the set to use
user-provided hash and equality tests, i.e. ones that don't
necessarily call __hash__ and __eq__ on the candidate elements.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 7:05 PM, Anthony Tolle  wrote:
> I wrote a quick subclass of set that does something similar, but uses
> just one function for the object uniqueness:
>
> class MySet(set):
>    def __init__(self, iterable = (), idfunc = lambda x: x):
>        self.idfunc = idfunc
>        self.ids = set()
>        for item in iterable:
>            self.add(item)
>
>    def add(self, item):
>        id = self.idfunc(item)
>        if id not in self.ids:
>            self.ids.add(id)
>            set.add(self, item)

Yes, what you've got there provides the interface of what I want. And
no doubt we could concoct countless ways to implement some version of
this. However, if set itself accepted an alternate hash function, then
I could do it with no extra overhead. Consider your implementation: it
requires an extra set (extra space) and an extra lookup on many
operations (extra time.) My original hope was to not have to do that.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 5:15 PM, Gabriel Genellina
 wrote:
> En Thu, 15 Oct 2009 11:42:20 -0300, Austin Bingham
>  escribió:
> I think you didn't understand correctly Anthony Tolle's suggestion:
>
> py> class Foo:
> ...   def __init__(self, name): self.name = name
> ...
> py> objs = [Foo('Joe'), Foo('Jim'), Foo('Tom'), Foo('Jim')]
> py> objs

I understand Anthony perfectly. Yes, I can construct a dict as you
specify, where all of the keys map to values with name attributes
equal to the key. My point is that dict doesn't really help me enforce
that beyond simply letting me set it up; it doesn't care about the
values at all, just the keys. All that I'm asking for, and I think
it's been made pretty clear, is a set that let's me define a
uniqueness criteria function other than hash(). As has been thrashed
out already, this is not as straightforward as I might have liked.

To put it in code, I want this:

  s = set(hash_func = lambda obj: hash(obj.name), eq_func = ...)
  ...
  x.name = 'foo'
  y.name = 'foo'
  s.add(x)
  s.add(y) # no-op because of uniqueness criteria
  assert len(s) == 1
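For comparison, the same behaviour spelled out on top of a dict; a sketch only
(KeyedSet and its key argument are invented names, not an existing API), and it
pays the extra storage and lookup that the built-in set wouldn't need:

    class KeyedSet(object):
        def __init__(self, key=lambda x: x, iterable=()):
            self._key = key
            self._items = {}        # key(value) -> value
            for item in iterable:
                self.add(item)

        def add(self, item):
            # the first item seen for a given key wins; later duplicates are no-ops
            self._items.setdefault(self._key(item), item)

        def __contains__(self, item):
            return self._key(item) in self._items

        def __iter__(self):
            return iter(self._items.values())

        def __len__(self):
            return len(self._items)

    s = KeyedSet(key=lambda obj: obj.name)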

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 4:06 PM, Anthony Tolle  wrote:
> Why not use a dict?  The key would be the object name.  Pretty much
> the same behavior as a set (via the key), and you can still easily
> iterate over the objects.

To reiterate, dict only gets me part of what I want. Whereas a set
with uniqueness defined over 'obj.name' would guarantee no name
collisions, dict only sorta helps me keep things straight; it doesn't
actually enforce that my values have unique names.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 3:50 PM, Mick Krippendorf  wrote:
> Austin Bingham schrieb:
> What you seem to imply is that the hash function imposes some kind of
> uniqueness constraint on the set which uses it. That's just not the
> case, the uniqueness constraint is always the (in-)equality of objects,
> and for this you can override __eq__. But then you must also ensure that
> x.__eq__(y) --> y.__eq__(x) & x.__hash() == y.__hash__().

Yes, as was pointed out earlier, I was reading too much into the hash
function's role. However, given well behaved hash and equality (which
would be a great name for a band), I don't see why those functions
need to be defined on the object itself, per se. Right now that's the
case because hash() only knows how to call obj.__hash__ (etc. for
__eq__).

I guess a good analog for what I'm thinking about is list.sort(). It's
more than happy to take a comparison operator, and in spirit that's
exactly what I'm looking for with sets.
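In code the analogy is just this, where objs is any list of named objects:

    objs.sort(key=lambda o: o.name)   # the criterion is passed in, not baked into the objects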

> Diez' solution is the pythonic way, and BTW, mathematically and
> pythonically sound, wheras, if the hash function would really impose
> uniqueness: . . .

Yes, my gray c++ roots are showing here; you're right that my brain
was secretly expecting the "compiler" to take care of things. Your
points about set operations are the strongest in this discussion, and
I haven't got a great answer.

> Python isn't strong on encapsulation, so "extrinsic" functionality is
> quite common.

I guess that's part of what's frustrating for me on this issue. Python
is generally extremely flexible and open, but in this case I feel like
I'm being shut out. Oh well...in the end it looks like there are ways
to get what I need, if not what I want. Thanks for the input.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 3:43 PM, Diez B. Roggisch  wrote:
> The context-decider isn't the same thing because it isn't designed yet :)
> And most probably won't ever be. It's just the abstract idea that
> hashing/equality change for one object depending on the circumstances they
> are used in, and that one wishes that the decision what to use is as simple
> & transparent as possible.

Fair enough :)

> Your approach is certainly viable, but I guess the current
> set-implementation is optimized on working with __hash__ and __eq__ on
> objects because for these exist slots in the python object structure in C.
> So while you can implement your idea, you will end up passing wrappers as
> Chris & me proposed into the current set implementation.

Yes, I figured that the guts of set relied on particulars to which we
are not privy at the python level. If the syntax let me describe sets
the way I've been laying out here, I could probably tolerate the
underlying implementation relying on wrappers.

> However, it might be worth thinking of proposing this change to the set-type
> in general. But then for orthogonality, dicts should have it as well I
> guess. Which opens a whole new can of worms.

dicts would certainly have to be looked at as well, but I don't think
the can would have that many worms in it if we solve the set issue to
everyone's satisfaction.

In any event, thanks for helping me work through this issue.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 3:02 PM, Diez B. Roggisch  wrote:
> Austin Bingham wrote:
> You do. Hashes can collide, and then you need equality. Sets are *based* on
> equality actually, the hash is just one optimization. ...

Right, thanks for clearing that up. Not reading closely enough ==
public shaming ;)

> Because these functions need intimate knowledge of the objects to actually
> work. Sure, in python it's easy to define them outside, and just reach into
> the object as you wish. But aside this implementation detail, a hash (and
> equality of course) always are based upon the object in question. So I
> think it's natural to have it defined right there (which __hash__ and
> __eq__ show that this seems to be accepted in general).
>
> And if there were something that would decide on context which of several
> implementations to use, you'd have less to worry. As things are, there
> isn't such thing (I don't even have the slightest idea what could work),
> you are as well off with defining two functions.

But this "context decider" you're talking about sounds exactly like
what I'm talking about.  If I had some class with __hash1__ and
__hash2__ (and associated equality operators), you're saying it would
be nice to be able to select which to use based on the context (e.g.
what type of set I'm using.) It might look like this:

   s = set(hash_func = lambda x: x.__hash2__, eq_func = lambda x, y:
x.__eq2__(y))

And if *that* works for you, do you still have a problem with:

  s = set(hash_func = lambda x: hash(x.name), eq_func = lambda x,y:
x.name == y.name)

?

If you don't like those, what would work for you? As far as I can
tell, your "context decider" and my "alternative hash functions" are
the same thing, and we just need to agree on a syntax.
-- 
http://mail.python.org/mailman/listinfo/python-list


set using alternative hash function?

2009-10-15 Thread Austin Bingham
On Thu, Oct 15, 2009 at 2:23 PM, Diez B. Roggisch  wrote:
> Austin Bingham wrote:
> This is a POV, but to to me, the set just deals with a very minimal
> protocol - hash-value & equality. Whatever you feed it, it has to cope with
> that. It strikes *me* as odd to ask for something else.

But I'm not really asking for anything that changes the paradigm of
how things currently work. All I need is the ability to say something
like this:

 s = set(hash_func = lambda x: hash(x.name))

If set could be de-hardcoded from using hash(), nothing else about how
it works would have to change. Or am I wrong on that? I see that you
mention equality in the protocol, but I don't know why a set would
need that if a) hash(x) == hash(y) --> x == y and b) hash function
must return a 32 bit value (which I think is the case)...so maybe I
misunderstand something.

I wonder...does the requirement of using hash() have to do with the
low-level implementation of set? That might explain the state of
things.

> The ideal solution would be to have several hash/eq-methods on your object,
> and somehow make the surroundings decide which to use. But there is no
> OO-pragma for that.

But why force objects to know about all of the wacky contexts in which
they might be used? Let objects be objects and hash-functions be
hash-functions. If someone wants a set where uniqueness is defined by
the number of occurrences of 'a' in an object's name, the object
should never have to know that...it should just have a name.

In any event, it sounds like set doesn't have any notion of switching
out its hash function, and that more or less answers my question.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
I guess we see things differently. I think it's quite natural to want
a set of unique objects where "unique" is defined as an operation on
some subset/conflation/etc. of the attributes of the elements. That's
all that the regular set class is, except that it always uses the
hash() function to calculate uniqueness. In any event, the notions of
set and uniqueness I'm using are well established in other languages,
so I don't see why it couldn't be made to work in python.

As far as using a dict, that doesn't really solve my problem. It could
be part of a solution, I guess, but I would still need functionality
extrinsic to the dict. What I want is to make sure that no values in
my set have the same name, and dict won't guarantee that for me. A set
that calculated uniqueness based on its elements' names, on the other
hand, would.

Austin

On Thu, Oct 15, 2009 at 1:48 PM, Duncan Booth
 wrote:
> It doesn't make sense to use just part of an object as the key for a set. A
> set is just a collection of values and there is no key separate from the
> value itself. If you build a set from x.name then that works fine, but only
> the names are stored.
>
> What you want in this particular case is a dict: a dict stores both a key
> and a value and lets you retrieve the value from the key.
>
>
> --
> Duncan Booth http://kupuguy.blogspot.com
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: set using alternative hash function?

2009-10-15 Thread Austin Bingham
That's definitely a workable solution, but it still rubs me the wrong
way. The uniqueness criteria of a set seems, to me, like a property of
the set, whereas the python model forces it onto each set element.

Another issue I have with the HashWrapper approach is its space
requirements. Logically, what I'm asking to do is switch out a single
function reference (i.e. to point at get_name() rather than hash()),
but in practice I'm forced to instantiate a new object for each of my
set members. On a large set, this could be disastrous.

Don't get me wrong...your solution is a good one, but it's just not
what I am looking for.

Austin

On Thu, Oct 15, 2009 at 1:36 PM, Chris Rebert  wrote:
> On Thu, Oct 15, 2009 at 4:24 AM, Austin Bingham
>  wrote:
>> If I understand things correctly, the set class uses hash()
>> universally to calculate hash values for its elements. Is there a
>> standard way to have set use a different function? Say I've got a
>> collection of objects with names. I'd like to create a set of these
>> objects where the hashing is done on these names. Using the __hash__
>> function seems inelegant because it means I have to settle on one type
>> of hashing for these objects all of the time, i.e. I can't create a
>> set of them based on a different uniqueness criteria later. I'd like
>> to create a set instance that uses, say, 'hash(x.name)' rather than
>> 'hash(x)'.
>>
>> Is this possible? Am I just thinking about this problem the wrong way?
>> Admittedly, I'm coming at this from a C++/STL perspective, so perhaps
>> I'm just missing the obvious. Thanks for any help on this.
>
> You could use wrapper objects that define an appropriate __hash__():
>
> #*completely untested*
> class HashWrapper(object):
>    def __init__(self, obj, criteria):
>        self._wrapee = obj
>        self._criteria = criteria
>
>    #override __hash__() / hash()
>    def __hash__(self):
>        return hash(self._criteria(self._wrapee))
>
>    #proxying code
>    def __getattr__(self, name):
>        return getattr(self._wrapee, name)
>
>    def __setattr__(self, name, val):
>        setattr(self._wrapee, name, val)
>
> #example
> def name_of(obj):
>    return obj.name
>
> def name_and_serial_num(obj):
>    return obj.name, obj.serial_number
>
> no_same_names = set(HashWrapper(obj, name_of) for obj in some_collection)
> no_same_name_and_serial = set(HashWrapper(obj, name_and_serial_num)
> for obj in some_collection)
>
> Cheers,
> Chris
> --
> http://blog.rebertia.com
>
-- 
http://mail.python.org/mailman/listinfo/python-list


set using alternative hash function?

2009-10-15 Thread Austin Bingham
If I understand things correctly, the set class uses hash()
universally to calculate hash values for its elements. Is there a
standard way to have set use a different function? Say I've got a
collection of objects with names. I'd like to create a set of these
objects where the hashing is done on these names. Using the __hash__
function seems inelegant because it means I have to settle on one type
of hashing for these objects all of the time, i.e. I can't create a
set of them based on a different uniqueness criteria later. I'd like
to create a set instance that uses, say, 'hash(x.name)' rather than
'hash(x)'.

Is this possible? Am I just thinking about this problem the wrong way?
Admittedly, I'm coming at this from a C++/STL perspective, so perhaps
I'm just missing the obvious. Thanks for any help on this.

Austin Bingham
-- 
http://mail.python.org/mailman/listinfo/python-list


Walking an object/reference graph

2009-10-05 Thread Austin Bingham
I'm looking for the proper way to "walk" a graph of python objects. My
specific task is to find all objects of a given type that are referred
to (transitively) by some starting object. My approach has been to
loop through the object's attributes, examining each, and then
recursing on them.

This approach has immediate problems in that I immediately hit
infinite recursion due to method-wrappers; I'm not sure *exactly*
what's going on, but it appears that method-wrappers refer to
method-wrappers ad infinitum. If I add code to avoid this issue,
others soon crop up, so before I wrote too much more code, I thought
I'd check with experts to see if there's a better way.

So, if I've explained what I'm looking for well enough, does anyone
know of a proper way to walk a graph of object references? I need to
support cycles in the graph, so keeping track of visited object IDs or
something will have to be employed. Also, I need to be able to follow
references stored in lists, dictionaries, etc...basically any way to
programatically get from one object to another should be followed by
the algorithm. I feel like I'm missing something simple, but perhaps
this is just a tricky problem for python. Any ideas or insight would
be great. Thanks!
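One possible approach, as a sketch: gc.get_referents() follows attributes, list
items, dict values and so on, and an id()-based visited set handles cycles
(find_instances is a made-up name):

    import gc

    def find_instances(root, cls):
        seen = set()       # ids of objects already visited
        stack = [root]
        found = []
        while stack:
            obj = stack.pop()
            if id(obj) in seen:
                continue
            seen.add(id(obj))
            if isinstance(obj, cls):
                found.append(obj)
            stack.extend(gc.get_referents(obj))
        return found

    # e.g. find_instances(some_object, dict) -> every dict reachable from some_object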

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Crypto and export laws

2009-09-24 Thread Austin Bingham
I'm trying to get a handle on how python intersects with
crypto-related export control laws in the US and elsewhere. My current
understanding, per the PSF's wiki, is that any crypto related and
potentially export-sensitive code is in the ssl wrapper, and that, in
fact, this only links to the actual encryption implementation
(presumably libssl or something.) One caveat is that windows
installations may include the ssl implementation.

Does this effectively sum up python's exposure to export laws? On a
technical level, does removing the ssl module from a distribution
remove all references to encryption? Of course I'm not asking for
actual legal advice, but can anyone think of any other part of the
code that might run afoul of export rules? Thanks.

Austin
-- 
http://mail.python.org/mailman/listinfo/python-list


Concrete Factory Pattern syntax?

2009-03-19 Thread Austin Schutz

I have a fairly simple bit of code, something like:

# This should be importing the subclasses somehow, so that the factory
# can make them.
# import Parser.One
# import Parser.Two
# or.. from Parser import *?
class Parser():
    def parse(self):
        'Implemented only in subclass'

    @staticmethod   # so it can be called as Parser.make_parser('one')
    def make_parser(which_parser):
        if which_parser == 'one':
            return One()
        else:
            return Two()

# import Parser?
class One(Parser):
    def parse(self):
        'one implementation'

class Two(Parser):
    def parse(self):
        'another implementation'

The problem I have is that I don't understand how to put this into
actual files in actual directories and have the interpreter do
something actually useful :-) . What I would like to do is something
like:

lib/
    Parser.py
    Parser/
        __init__.py (maybe?)
        One.py
        Two.py

But I'm not clear on how to structure the import statements. I'm a bit
of a newb wrt python, and I get any number of different errors depending
on how I arrange the import statements, everything from

AttributeError: 'module' object has no attribute 'make_parser'
to
ImportError: cannot import name
to
TypeError: Error when calling the metaclass bases

depending on how I use import. Nothing seems to be the correct
combination. Any help would be much appreciated!
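One arrangement that works, as a sketch; the package is renamed to lower-case
'parsers' here so it doesn't shadow the standard-library 'parser' module, and the
imports inside make_parser() are deferred to avoid a circular import:

    # parsers/__init__.py
    class Parser(object):
        def parse(self):
            raise NotImplementedError

    def make_parser(which_parser):
        from parsers.one import One   # imported here so the package is fully
        from parsers.two import Two   # initialised before these modules load
        return One() if which_parser == 'one' else Two()

    # parsers/one.py
    from parsers import Parser

    class One(Parser):
        def parse(self):
            return 'one implementation'

    # parsers/two.py
    from parsers import Parser

    class Two(Parser):
        def parse(self):
            return 'another implementation'

    # caller
    import parsers
    p = parsers.make_parser('one')
    p.parse()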

Austin





--
http://mail.python.org/mailman/listinfo/python-list


Re: Which Lisp to Learn?

2009-03-09 Thread Michael Austin

Arne Vajhøj wrote:

Xah Lee wrote:

For those of you imperative programers who kept on hearing about lisp
and is tempted to learn, then, ...


You:
* consider yourself unfairly treated by various communities
* post a long drivel about various Lisp flavors to newsgroups
  that are not in any way Lisp related
?

There seems to be a disconnect somewhere.

Arne


Hey Arne - like he even knows what LISP is... ;)
--
http://mail.python.org/mailman/listinfo/python-list


Re: GeneratorExit should derive from BaseException, not Exception

2007-08-21 Thread Chad Austin
Oops, forgot to mention this:

I wouldn't be opposed to a different extension that would effectively 
let me accomplish the same goals...  arbitrary exception filters. 
Imagine this:

try:
    raise GeneratorExit
except ExceptionFilter:
    # blah

where ExceptionFilter is any object that can be tested for containment. 
  Perhaps implemented like this:

class ExceptionFilter(object):
    def __init__(self):
        self.includes = set()
        self.excludes = set()

        self.include = self.includes.add
        self.exclude = self.excludes.add

    def __contains__(self, exc):
        return any(isinstance(exc, cls) for cls in self.includes) and \
               not any(isinstance(exc, cls) for cls in self.excludes)

ImvuExceptionFilter = ExceptionFilter()
ImvuExceptionFilter.include(Exception)
ImvuExceptionFilter.exclude(GeneratorExit)

Then, our code could just "catch" ImvuExceptionFilter.  This type of 
extension would be backwards compatible with the current except 
(FooError, BarError) tuple syntax.

I've never hacked on CPython itself, so I don't know what kind of 
changes there would be involved, but if there is sufficient pushback 
against making GeneratorExit derive from BaseException, I think this is 
a fine alternative.  Thoughts?

Chad

Chad Austin wrote:
> Hi Terry,
> 
> Thank you for your feedback.  Responses inline:
> 
> Terry Reedy wrote:
>> "Chad Austin" <[EMAIL PROTECTED]> wrote in message 
>> news:[EMAIL PROTECTED]
>> || try:
>> | result = yield chatGateway.checkForInvite({'userId': userId})
>> | logger.info('checkForInvite2 returned %s', result)
>>
>> would not
>> except GeneratorExit: 
>> solve your problem?
> 
> Yes, we could add an "except GeneratorExit: raise" clause to every place
> we currently catch Exception, but I feel like this is one of those
> things where it's hard to get it right in all places and also hard to
> cover with unit tests.  Instead, we'll have subtle bugs where finally
> clauses don't run because the GeneratorExit was swallowed.
> 
> Also, SystemExit and KeyboardInterrupt were made into BaseExceptions for
> the same reasons as I'm giving.  (As I understand it, anyway.)
> 
>> | except Exception:
>>
>> Such catchalls are known to be prone to catch too much
>> and are therefore not encouraged ;-).
>> As in 'use at your own risk'.
>> Guido encourages specific catches just for the reasons you give here.
> 
> More below:
> 
>> There was a *long* discussion of the current 2.5 exception hierarchy on 
>> pydev.  Search either python.org's or gmane's archive if you want to pursue 
>> this.  But I expect the people involved would say much the same as above.
> 
> I've actually read the background on the exception hierarchy (and agree
> with it all), especially other suggestions that GeneratorExit derive
> from BaseException.  As I understand it, Guido's objections are threefold:
> 
> 1) The previous "generators as coroutines" examples were too
> theoretical:  I've wanted GeneratorExit to derive from BaseException for
> months now, but didn't write this proposal until I actually wrote code
> that failed in the presence of task cancellation.
> 
> 2) You should avoid catching everything with except Exception:  I think
> that's too idealistic. Just do a search for try: except: through
> publicly available Python.  :)  Sometimes, you really _do_ want to catch
> everything.  When you're making a network request that involves
> xmlrpclib, urllib2, httplib, etc. you don't actually care what the error
> was.  (Well, except that the exceptions are submitted for automated
> analysis.)  Similarly, when loading a cache file with pickle, I don't
> care what went wrong, because it's not critical and should not be turned
> into a crash for the user.  (We automatically report exceptions that
> bubble into the main loop as crashes.)
> 
> 3) If GeneratorExit escapes from the generator somehow and gets raised
> in the main loop, then it will bubble out of the application like
> SystemExit and KeyboardInterrupt would:  I think this argument is
> somewhat specious, because I can't imagine how that would happen.  You'd
> have to store exceptions in your generator and explicitly bubble them
> out somehow.  Our crash handling has to specially handle
> KeyboardInterrupt and SystemExit anyway, since there are currently
> non-Exception exceptions, such as strings and custom classes tha

Re: GeneratorExit should derive from BaseException, not Exception

2007-08-21 Thread Chad Austin
Hi Terry,

Thank you for your feedback.  Responses inline:

Terry Reedy wrote:
> "Chad Austin" <[EMAIL PROTECTED]> wrote in message 
> news:[EMAIL PROTECTED]
> || try:
> | result = yield chatGateway.checkForInvite({'userId': userId})
> | logger.info('checkForInvite2 returned %s', result)
> 
> would not
> except GeneratorExit: 
> solve your problem?

Yes, we could add an "except GeneratorExit: raise" clause to every place
we currently catch Exception, but I feel like this is one of those
things where it's hard to get it right in all places and also hard to
cover with unit tests.  Instead, we'll have subtle bugs where finally
clauses don't run because the GeneratorExit was swallowed.
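A self-contained toy of that workaround pattern, for reference (nothing
IMVU-specific; the logger setup is only there so it runs standalone):

    import logging
    logging.basicConfig()
    logger = logging.getLogger(__name__)

    def task():
        try:
            result = yield 'make network request'   # stand-in for the real yield
            logger.info('request returned %s', result)
        except GeneratorExit:
            raise                                   # let cancellation reach finally/with blocks
        except Exception:
            logger.exception('request failed')      # everything else is swallowed and reported

    g = task()
    g.next()    # run up to the yield (Python 2 spelling)
    g.close()   # cancellation: GeneratorExit is re-raised, so close() returns cleanly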

Also, SystemExit and KeyboardInterrupt were made into BaseExceptions for
the same reasons as I'm giving.  (As I understand it, anyway.)

> | except Exception:
> 
> Such catchalls are known to be prone to catch too much
> and are therefore not encouraged ;-).
> As in 'use at your own risk'.
> Guido encourages specific catches just for the reasons you give here.

More below:

> There was a *long* discussion of the current 2.5 exception hierarchy on 
> pydev.  Search either python.org's or gmane's archive if you want to pursue 
> this.  But I expect the people involved would say much the same as above.

I've actually read the background on the exception hierarchy (and agree
with it all), especially other suggestions that GeneratorExit derive
from BaseException.  As I understand it, Guido's objections are threefold:

1) The previous "generators as coroutines" examples were too
theoretical:  I've wanted GeneratorExit to derive from BaseException for
months now, but didn't write this proposal until I actually wrote code
that failed in the presence of task cancellation.

2) You should avoid catching everything with except Exception:  I think
that's too idealistic. Just do a search for try: except: through
publicly available Python.  :)  Sometimes, you really _do_ want to catch
everything.  When you're making a network request that involves
xmlrpclib, urllib2, httplib, etc. you don't actually care what the error
was.  (Well, except that the exceptions are submitted for automated
analysis.)  Similarly, when loading a cache file with pickle, I don't
care what went wrong, because it's not critical and should not be turned
into a crash for the user.  (We automatically report exceptions that
bubble into the main loop as crashes.)
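
For example, the cache-loading case is roughly this (a sketch, not the
actual code; loadCache is a made-up name):

import logging
import pickle

logger = logging.getLogger(__name__)

def loadCache(path):
    # A failed cache load is not critical, so anything that is an Exception
    # just means "start with an empty cache".
    try:
        f = open(path, 'rb')
        try:
            return pickle.load(f)
        finally:
            f.close()
    except Exception:
        logger.exception('could not load cache %r', path)
        return None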

3) If GeneratorExit escapes from the generator somehow and gets raised
in the main loop, then it will bubble out of the application like
SystemExit and KeyboardInterrupt would:  I think this argument is
somewhat specious, because I can't imagine how that would happen.  You'd
have to store exceptions in your generator and explicitly bubble them
out somehow.  Our crash handling has to specially handle
KeyboardInterrupt and SystemExit anyway, since there are currently
non-Exception exceptions, such as strings and custom classes that forgot
to derive from Exception, that should count as crashes.

I personally can't think of any cases where I would _want_ to handle
GeneratorExit.  I just want finally: and with: clauses to do the right
thing when a task is cancelled.  Anyway, I haven't encountered any
serious bugs due to this yet...  I'm just worried that if a task is
holding some resource and blocking on something, then the resource won't
get released.  If this really does come up, then I do have a little bit
of python + ctypes that replaces GeneratorExit with ImvuGeneratorExit
(deriving from BaseException), but that's not very appealing.

Thanks again,

-- 
Chad Austin
http://imvu.com/technology

-- 
http://mail.python.org/mailman/listinfo/python-list


GeneratorExit should derive from BaseException, not Exception

2007-08-20 Thread Chad Austin
Hi all,

First, I'd like to describe a system that we've built here at IMVU in 
order to manage the complexity of our network- and UI-heavy application:

Our application is a standard Windows desktop application, with the main 
thread pumping Windows messages as fast as they become available.  On 
top of that, we've added the ability to queue arbitrary Python actions 
in the message pump so that they get executed on the main thread when 
it's ready.  You can think of our EventPump as being similar to Twisted's 
reactor.

On top of the EventPump, we have a TaskScheduler which runs "tasks" in 
parallel.  Tasks are generators that behave like coroutines, and it's 
probably easiest to explain how they work with an example (made up on 
the spot, so there may be minor typos):

def openContentsWindow(contents):
    # Imagine a notepad-like window with the URL's contents...
    # ...
    pass

@threadtask
def readURL(url):
    return urllib2.urlopen(url).read()

@task
def displayURL(url):
    with LoadingDialog():
        # Blocks this task from running while the contents are being
        # downloaded, but does not block the main thread because
        # readURL runs in the thread pool.
        contents = yield readURL(url)

    openContentsWindow(contents)

A bit of explanation:

The @task decorator turns a generator-returning function into a 
coroutine that is run by the scheduler.  It can call other tasks via 
"yield" and block on network requests, etc.

All blocking network calls such as urllib2's urlopen and friends and 
xmlrpclib ServerProxy calls go behind the @threadtask decorator.  This 
means those functions will run in the thread pool and allow other ready 
tasks to execute in the meantime.
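
Stripped of all the real machinery, the core of the scheduler is roughly the
following (a sketch only; resolve() is a hypothetical stand-in for "wait for
the yielded sub-task or thread-pool call to produce its result"):

def runTask(gen):
    # Drive one task generator to completion.  The first send(None) starts
    # it; each later send() feeds in the result of whatever it yielded.
    result = None
    try:
        while True:
            yielded = gen.send(result)
            result = resolve(yielded)
    except StopIteration:
        pass

def cancelTask(gen):
    # close() raises GeneratorExit at the task's current yield, which is
    # exactly the situation discussed below.
    gen.close()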

There are several benefits to this approach:

1) The logic is very readable.  The code doesn't have to go through any 
hoops to be performant or correct.
2) It's also very testable.  All of the threading-specific logic goes 
into the scheduler itself, which means our unit tests don't need to deal 
with any (many?) thread safety issues or races.
3) Exceptions bubble correctly through tasks, and the stack traces are 
what you would expect.
4) Tasks always run on the main thread, which is beneficial when you're 
dealing with external objects with thread-affinity, such as Direct3D and 
Windows.
5) Unlike threads, tasks can be cancelled.

ANYWAY, all advocacy aside, here is one problem we've run into:

Imagine a bit of code like this:

@task
def pollForChatInvites(chatGateway, userId, decisionCallback,
                       startChatCallback, timeProvider,
                       minimumPollInterval=5):
    while True:
        now = timeProvider()

        try:
            result = yield chatGateway.checkForInvite({'userId': userId})
            logger.info('checkForInvite2 returned %s', result)
        except Exception:
            logger.exception('checkForInvite2 failed')
            result = None
        # ...
        yield Sleep(10)

This is real code that I wrote in the last week.  The key portion is the 
try: except:  Basically, there are many reasons the checkForInvite2 call 
can fail.  Maybe a socket.error (connection timeout), maybe some kind of 
httplib error, maybe an xmlrpclib.ProtocolError...  I actually don't 
care how it fails.  If it fails at all, then sleep for a while and try 
again.  All fine and good.

The problem is that, if the task is cancelled while it's waiting on 
checkForInvite2, GeneratorExit gets caught and handled rather than 
(correctly) bubbling out of the task.  GeneratorExit is similar in 
practice to SystemExit here, so it would make sense for it to be a 
BaseException as well.
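
A tiny reproduction of the problem (a sketch; on 2.5, where GeneratorExit
still derives from Exception, the catch-all swallows the cancellation):

def poll():
    try:
        yield 'waiting'
    except Exception:
        pass            # oops: this also swallows GeneratorExit
    yield 'retrying'

g = poll()
g.next()
g.close()   # RuntimeError: generator ignored GeneratorExit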

So, my proposal is that GeneratorExit derive from BaseException instead 
of Exception.

p.s. Should I have sent this mail to python-dev directly?  Does what I'm 
saying make sense?  Does this kind of thing need a PEP?

-- 
Chad Austin
http://imvu.com/technology
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to push data into Ical from Python ?

2006-12-18 Thread Philip Austin
"The Night Blogger" <[EMAIL PROTECTED]> writes:

> Is there a way to pull & push data into (Apple Mac OS X Calendar) Ical from
> Python ?
>

see: http://vobject.skyhouseconsulting.com/

-- regards, Phil
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Which compiler will Python 2.5 / Windows (Intel) be built with?

2006-06-16 Thread Philip Austin
[EMAIL PROTECTED] writes:

>> This is the .NET 11 SDK, I belive it includes the 2003 compiler (*):
>
> Last time I checked the .NET SDK they had the C# compiler in there, but
> not the C++ optimizing 2003 compiler. Might be wrong though

I just downloaded and installed this, and see a directory called

c:\program files\microsoft visual studio .net 2003\vc7

with bin\cl.exe  and  lib and include directories.  So presumably
I'm good to go?  

I'm following this thread because I'll need to
compile and install some extensions I've written for linux/gcc/python2.4
in our Windows computer lab.   Presuming I succeed in setting up
vc7 correctly, is it as simple as 'python setup.py install' from here?

Thanks, Phil




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: clearerr called on NULL FILE* ?

2006-05-10 Thread Chad Austin
Sorry to respond to myself; I wanted to give an update on this crash.  It turns 
out it's a race condition with multiple threads accessing the same Python file 
object!

http://sourceforge.net/tracker/index.php?func=detail&aid=595601&group_id=5470&atid=105470

Python-dev thread at 
http://mail.python.org/pipermail/python-dev/2003-June/036537.html

I wrote about the experience at http://aegisknight.livejournal.com/128191.html. 
  I agree that our program was incorrect to be writing to a log on one thread 
while it rotated them on another, but it'd be nice to get an exception that 
unambiguously shows what's going on rather than having random crashes reported 
in the field.

Chad

Chad Austin wrote:
> Hi all,
> 
> My first post to the list.  :)  I'm debugging one of our application 
> crashes, and I thought maybe one of you has seen something similar 
> before.  Our application is mostly Python, with some work being done in 
> a native C++ module.  Anyway, I'm getting a memory access violation at 
> the following stack:
> 
> 
> CRASHING THREAD
> EXCEPTION POINTERS: 0x0012e424
>  ExceptionRecord: 0x0012e518
>  ExceptionCode: 0xc005 EXCEPTION_ACCESS_VIOLATION
>  ExceptionFlags: 0x
>  ExceptionAddress: 0x7c901010
>  NumberParameters: 2
>  ExceptionInformation[0]: 0x
>  ExceptionInformation[1]: 0x0034
>  ExceptionRecord: 0x
> 
> THREAD ID: 10b0    frame count: 4
> PYTHON23!0x000baa00 - PyFile_Type
> PYTHON23!0x0003ac27 - PyFile_SetEncoding
>MSVCRT!0x00030a06 - clearerr
> ntdll!0x1010 - RtlEnterCriticalSection
> 
> 
> Here's my understanding:  something is getting called on a PyFileObject 
> where f_fp is NULL, and clearerr in the multithreaded runtime tries to 
> enter an invalid critical section.  It looks like PyFile_SetEncoding in 
> the stack, but I can't figure out from the Python source how 
> SetEncoding calls clearerr.
> 
> Based on the timing of the crashes, I also think it might have something 
> to do with log rollovers in RotatingFileHandler.
> 
> Has anyone run into something similar?  I don't expect anyone to spend a 
> lot of time on this, but if there are any quick tips, they would be 
> greatly appreciated...
> 
> We're using Python 2.3.5 and Visual C++ 6.
> 
> --
> Chad Austin
> http://imvu.com/technology
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


clearerr called on NULL FILE* ?

2006-05-02 Thread Chad Austin
Hi all,

My first post to the list.  :)  I'm debugging one of our application 
crashes, and I thought maybe one of you has seen something similar 
before.  Our application is mostly Python, with some work being done in 
a native C++ module.  Anyway, I'm getting a memory access violation at 
the following stack:


CRASHING THREAD
EXCEPTION POINTERS: 0x0012e424
 ExceptionRecord: 0x0012e518
 ExceptionCode: 0xc005 EXCEPTION_ACCESS_VIOLATION
 ExceptionFlags: 0x
 ExceptionAddress: 0x7c901010
 NumberParameters: 2
 ExceptionInformation[0]: 0x
 ExceptionInformation[1]: 0x0034
 ExceptionRecord: 0x

THREAD ID: 10b0    frame count: 4
PYTHON23!0x000baa00 - PyFile_Type
PYTHON23!0x0003ac27 - PyFile_SetEncoding
   MSVCRT!0x00030a06 - clearerr
ntdll!0x1010 - RtlEnterCriticalSection


Here's my understanding:  something is getting called on a PyFileObject 
where f_fp is NULL, and clearerr in the multithreaded runtime tries to 
enter an invalid critical section.  It looks like PyFile_SetEncoding in 
the stack, but I can't figure out from the Python source how 
SetEncoding calls clearerr.

Based on the timing of the crashes, I also think it might have something 
to do with log rollovers in RotatingFileHandler.

Has anyone run into something similar?  I don't expect anyone to spend a 
lot of time on this, but if there are any quick tips, they would be 
greatly appreciated...

We're using Python 2.3.5 and Visual C++ 6.

--
Chad Austin
http://imvu.com/technology

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python advocacy in scientific computation

2006-03-09 Thread Philip Austin
Michael McNeil Forbes <[EMAIL PROTECTED]> writes:
>
> I find that version control (VC) has many advantages for
> scientific research (I am a physicist).
>

Greg Wilson also makes that point in this note:

http://www.nature.com/naturejobs/2005/050728/full/nj7050-600b.html

Where he describes his excellent (Python Software Foundation sponsored)
course on software carpentry for scientists:

http://www.third-bit.com/swc2/index.html

Regards, Phil
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generators shared among threads

2006-03-09 Thread jess . austin
Bryan,

You'll get the same result without the lock.  I'm not sure what this
indicates.  It may show that the contention on the lock and the race
condition on i aren't always problems.  It may show that generators, at
least in CPython 2.4, provide thread safety for free.  It does seem to
disprove my statement that, "the yield leaves the lock locked".

More than that, I don't know.  When threading is involved, different
runs of the same code can yield different results.  Can we be sure that
each thread starts where the last one left off?  Why wouldn't a thread
just start where it had left off before?  Of course, this case would
have the potential for problems that Alex talked about earlier.  Why
would a generator object be any more reentrant than a function object?
Because it has a gi_frame attribute?  Would generators be thread-safe
only in CPython?

I started the discussion with simpler versions of these same questions.
 I'm convinced that using Queue is safe, but now I'm not convinced that
just using a generator is not safe.

cheers,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generators shared among threads

2006-03-08 Thread jess . austin
I just noticed, if you don't define maxsize in _init(), you need to
override _full() as well:

def _full(self):
    return False

cheers,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generators shared among threads

2006-03-07 Thread jess . austin
Paul wrote:
>def f():
>    lock = threading.Lock()
>    i = 0
>    while True:
>        lock.acquire()
>        yield i
>        i += 1
>        lock.release()
>
> but it's easy to make mistakes when implementing things like that
> (I'm not even totally confident that the above is correct).

The main problem with this is that the yield leaves the lock locked.
If any other thread wants to read the generator it will block.  Your
class Synchronized fixes this with the "finally" hack (please note that
from me this is NOT a pejorative).  I wonder... is that future-proof?
It seems that something related to this might change with 2.5?  My
notes from GvR's keynote don't seem to include this.  Someone that
knows more than I do about the intersection between "yield" and
"finally" would have to speak to that.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generators shared among threads

2006-03-07 Thread jess . austin
Alex wrote:
> Last, I'm not sure I'd think of this as a reentrantQueue, so
> much as a ReentrantCounter;-).

Of course!  It must have been late when I named this class...  I think
I'll go change the name in my code right now.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generators shared among threads

2006-03-06 Thread jess . austin
Thanks for the great advice, Alex.  Here is a subclass that seems to
work:

from Queue import Queue
from itertools import count

class reentrantQueue(Queue):
    def _init(self, maxsize):
        self.maxsize = 0
        self.queue = []   # so we don't have to override put()
        self.counter = count()
    def _empty(self):
        return False
    def _get(self):
        return self.counter.next()
    def next(self):
        return self.get()
    def __iter__(self):
        return self
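
For what it's worth, the intended use looks like this (a quick sketch):

import threading

q = reentrantQueue()

def worker():
    for _ in range(3):
        print q.next()     # same as q.get(); no two threads see the same value

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()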

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Easy immutability in python?

2006-03-04 Thread jess . austin
I guess we think a bit differently, and we think about different
problems.  When I hear, "immutable container", I think "tuple".  When I
hear, "my own class that is an immutable container", I think, "subclass
tuple, and probably override __new__ because otherwise tuple would be
good enough as is".

I'm not sure how this relates to the clp thread that you cite.  I
didn't read the whole thing, but I didn't find it to be a flamewar so
much as a typical clp contest of tedium, which failed to devolve into a
flamewar simply due to the maturity of the interlocutors.  To
summarize: first post is a use case, second post is an implementation
of that use case, and subsequent posts alternate between "that's not
how I want to do it" and "please provide a more specific use case for
which the provided implementation is not acceptable".

good luck,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: do design patterns still apply with Python?

2006-03-04 Thread jess . austin
msoulier wrote:

> I find that DP junkies don't tend to keep things simple.

+1 QOTW.  There's something about these "political" threads that seems
to bring out the best quotes.  b^)

-- 
http://mail.python.org/mailman/listinfo/python-list


generators shared among threads

2006-03-04 Thread jess . austin
hi,

This seems like a difficult question to answer through testing, so I'm
hoping that someone will just know...  Suppose I have the following
generator, g:

def f():
    i = 0
    while True:
        yield i
        i += 1

g = f()

If I pass g around to various threads and I want them to always be
yielded a unique value, will I have a race condition?  That is, is it
possible that the cpython interpreter would interrupt one thread after
the increment and before the yield, and then resume another thread to
yield the first thread's value, or increment the stored i, or both,
before resuming the first thread?  If so, would I get different
behavior if I just set g like:

g=itertools.count()

If both of these idioms will give me a race condition, how might I go
about preventing such?  I thought about using threading.Lock, but I'm
sure that I don't want to put a lock around the yield statement.

thanks,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Easy immutability in python?

2006-03-04 Thread jess . austin
To be clear, in this simple example I gave you don't have to override
anything.  However, if you want to process the values you place in the
container in some way before turning on immutability (which I assume
you must want to do because otherwise why not just use a tuple to begin
with?), then that processing should take place in a.__new__.
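
For example (a sketch; Point and the float() coercion are only illustrative):

class Point(tuple):
    def __new__(cls, x, y):
        # do any processing here, before the values are frozen into the tuple
        return tuple.__new__(cls, (float(x), float(y)))

p = Point(1, '2.5')
print p           # (1.0, 2.5)
# p[0] = 3 would raise TypeError, just like a plain tuple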

cheers,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Easy immutability in python?

2006-03-04 Thread jess . austin
Since this is a container that needs to be "immutable, like a tuple",
why not just inherit from tuple?  You'll need to override the __new__
method, rather than the __init__, since tuples are immutable:

class a(tuple):
    def __new__(cls, t):
        return tuple.__new__(cls, t)

cheers,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pulling all n-sized combinations from a list

2006-02-17 Thread jess . austin
hi,

I'm not sure why this hasn't come up yet, but this seems to beg for
list comprehensions, if not generator expressions.  All of the
following run in under 2 seconds on my old laptop:

>>> alph = 'abcdefghijklmnopqrstuvwxyz'
>>> len([''.join((a,b,c,d)) for a in alph for b in alph for c in alph for d in alph])
456976
>>> len([''.join((a,b,c,d)) for a in alph for b in alph for c in alph for d in alph
...      if (a>=b and b>=c and c>=d)])
23751
>>> len([''.join((a,b,c,d)) for a in alph for b in alph for c in alph for d in alph
...      if (a!=b and b!=c and c!=d and d!=a and b!=d and a!=c)])
358800
>>> len([''.join((a,b,c,d)) for a in alph for b in alph for c in alph for d in alph
...      if (a>b and b>c and c>d)])
14950

cheers,
Jess

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: void * C array to a Numpy array using Swig

2006-01-12 Thread Philip Austin
"Travis E. Oliphant" <[EMAIL PROTECTED]> writes:

> Krish wrote:

> Yes, you are right that you need to use typemaps.  It's been awhile
> since I did this kind of thing, but here are some pointers.

Also, there's http://geosci.uchicago.edu/csc/numptr



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: c/c++ extensions and help()

2005-07-31 Thread Philip Austin
Robert Kern <[EMAIL PROTECTED]> writes:

> Lenny G. wrote:
>> Is there a way to make a c/c++ extension have a useful method
>> signature?  Right now, help(myCFunc) shows up like:
>> myCFunc(...)
>>   description of myCFunc
>> I'd like to be able to see:
>> myCFunc(myArg1, myArg2)
>>   description of myCFunc
>> Is this currently possible?
>
> There really isn't a way to let the inspect module know about
> extension function arguments. Just put it in the docstring.
>

The next release of boost.python should do this automatically:

(http://mail.python.org/pipermail/c++-sig/2005-July/009243.html)


>>> help(rational.lcm)

Help on built-in function lcm:

lcm(...)
C++ signature:
lcm(int, int) -> int

>>> help(rational.int().numerator)

Help on method numerator:

numerator(...) method of boost_rational_ext.int instance
C++ signature:
numerator(boost::rational {lvalue}) -> int


Regards, Phil
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question about lists

2005-07-20 Thread Austin
Well, that answers that. Thank you!

-- 
http://mail.python.org/mailman/listinfo/python-list


Newbie question about lists

2005-07-20 Thread Austin Cox
Hello, I just started with python and have run into a problem using
lists.

If I enter:
li = [.25,.10,.05,.01]
and then enter:
print li
it'll output:
[0.25, 0.10000000000000001, 0.050000000000000003, 0.01]

Can anyone tell me why it does this, and how I can get just the values
.10 and .05 into a list? Thanks.
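
In case it helps, a sketch of what is going on: the literals are stored as
binary floating point, repr() (which list printing uses) shows the full
stored value, while str() rounds it; the decimal module (Python 2.4+) stores
exact decimal values instead.

li = [.25, .10, .05, .01]
print li           # list printing uses repr(): 0.10000000000000001, ...
print str(li[1])   # '0.1' -- str() rounds for display

from decimal import Decimal
print [Decimal('0.25'), Decimal('0.10'), Decimal('0.05'), Decimal('0.01')]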

-- 
http://mail.python.org/mailman/listinfo/python-list


windows service problem

2005-06-24 Thread Austin
import win32event
import win32service
import win32serviceutil

class HelloService(win32serviceutil.ServiceFramework):
    _svc_name_ = "HelloService"
    _svc_display_name_ = "Hello Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        self.check = 1

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)
        self.check = 0

    def SvcDoRun(self):
        win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)
        while True:
            CheckPoint('SvcDoRun')
            if self.check == 0:
                break

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(HelloService)

---

I modified the demo code from Python Programming on Win32.
CheckPoint is a class to open a file and write some words.
But the problem is that no matter how I restart the service,
CheckPoint() doesn't run.
Anything wrong?
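
One likely cause, as a sketch: SvcDoRun waits on hWaitStop with INFINITE
before entering the loop, so CheckPoint only runs after the service has
already been asked to stop.  Waiting with a timeout inside the loop avoids
that:

    def SvcDoRun(self):
        while True:
            # wait up to 5 seconds for the stop event, doing work in between
            rc = win32event.WaitForSingleObject(self.hWaitStop, 5000)
            if rc == win32event.WAIT_OBJECT_0:
                break              # SvcStop signalled the event
            CheckPoint('SvcDoRun')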


-- 
http://mail.python.org/mailman/listinfo/python-list


Detect windows shutdown

2005-06-22 Thread Austin
My program runs on Windows and is written in Python and wxPython,
built with py2exe.

If my program is running minimized and the user wants to shut down or
reboot while my program still has several threads running,
the shutdown or reboot causes an error in my program.
Is there any way to detect a Windows shutdown or reboot?
If I can detect the shutdown message, I can close all of my program's
threads first.
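
One thing worth trying (a sketch only, not tested here; stopAllThreads is a
made-up name for whatever shuts your threads down): wxPython exposes the
Windows session-end notifications as events, so the threads can be stopped
from a handler before the process is killed.

import wx

class MyApp(wx.App):
    def OnInit(self):
        # Windows sends WM_QUERYENDSESSION / WM_ENDSESSION on shutdown/logoff
        self.Bind(wx.EVT_QUERY_END_SESSION, self.OnEndSession)
        self.Bind(wx.EVT_END_SESSION, self.OnEndSession)
        return True

    def OnEndSession(self, event):
        stopAllThreads()    # hypothetical: tear down the worker threads here
        event.Skip()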


-- 
http://mail.python.org/mailman/listinfo/python-list


py2exe problem

2005-06-21 Thread Austin
I use py2exe to build my Python program into "aa.exe".
If this program has a bug, then after the program is closed it leaves an
"aa.exe.log" file in the folder.
Is there any way to avoid this log file?



-- 
http://mail.python.org/mailman/listinfo/python-list


windows directory

2005-06-14 Thread Austin
I would like to write a program which creates folders in a specific
directory.
For example, I want to create a folder in Program Files. How do I know
whether it is on C:\ or D:\?
Is there any function to get the right path?

Thanks in advance.
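
A sketch of one way to do it: on Windows the Program Files location is
exposed through an environment variable, so there is no need to guess the
drive letter ('MyApp' is just a placeholder name).

import os

program_files = os.environ.get('PROGRAMFILES', r'C:\Program Files')
target = os.path.join(program_files, 'MyApp')
if not os.path.isdir(target):
    os.makedirs(target)
print target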


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: os.system

2005-06-13 Thread Austin
>> My code is "  os.system("NET SEND computer hihi") "
>> i use this funtion on Windows server 2003.
>> it's ok.
>>
>> But the same code running on Windows XP SP2, it shows the command window
>> twice.
>> How do i remove the command window?
>
> Hi,
>
> You can remove the command window which comes
> from python if you use ".pyw" as extension.
> This is not an answer why the system method
> opens a second window, but maybe it helps, too.
>
> Thomas
>

Hi, but my program is compiled with py2exe.
All the extensions are 'pyd'.
Hm... is there anything else I can do?


-- 
http://mail.python.org/mailman/listinfo/python-list


os.system

2005-06-10 Thread Austin
My code is:  os.system("NET SEND computer hihi")
I use this function on Windows Server 2003 and it's OK.

But when the same code runs on Windows XP SP2, it shows the command window
twice.
How do I remove the command window?
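
One workaround, as a sketch (needs Python 2.4+ for the subprocess module):
start the command through subprocess and ask Windows not to create a
console window at all.

import subprocess

CREATE_NO_WINDOW = 0x08000000   # Win32 process-creation flag

def net_send(computer, message):
    return subprocess.call('NET SEND %s %s' % (computer, message),
                           creationflags=CREATE_NO_WINDOW)

net_send('computer', 'hihi')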




-- 
http://mail.python.org/mailman/listinfo/python-list


wxTextCtrl problem

2005-05-30 Thread austin
I created 5 wxTextCtrl controls.
My program lets the user enter a serial number.
Each wxTextCtrl has a max length of 4.
My idea is:
if the user enters 4 digits in the first wxTextCtrl, the program should move
the cursor to the second wxTextCtrl.
I checked wxTextCtrl::SetInsertionPoint,
but this function only moves the cursor within the same control.
For example,

xxx




self.tctrl_1 = wxTextCtrl(..)
self.tctrl_2 = wxTextCtrl(..)

self.Bind(EVT_TEXT,self.OnTextProcessing_1,self.tctrl_1)
self.Bind(EVT_TEXT,self.OnTextProcessing_2,self.tctrl_2)




def OnTextProcessing_1(self,evt):
 if len(evt.GetString())==4:
  self.tctrl_2.SetInsertionPoint(0)

xxx


The code "self.tctrl_2.SetInsertionPoint(0)" doesn't work,
but if I change it to "self.tctrl_1..." it works fine.

So what's the problem?
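
If I read this right, the missing piece is keyboard focus rather than the
insertion point; a sketch of the handler:

    def OnTextProcessing_1(self, evt):
        if len(evt.GetString()) == 4:
            self.tctrl_2.SetFocus()            # move the caret to the next field
            self.tctrl_2.SetInsertionPoint(0)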



-- 
http://mail.python.org/mailman/listinfo/python-list


wxTimer problem

2005-05-13 Thread Austin
I wrote a GUI program on Windows (Python & wxPython).
One function refreshes data from a COM object continuously.
In the beginning, I used thread.start_new_thread(xxx, ()),
but no matter what I tried, it caused a win32com error.

After that, I used wx.Timer for the refresh function.
It works fine, but I found one problem.
I think a timer should be independent, just like a thread, but wxTimer isn't.

1. Does Python have a timer function (not built on a thread)?
2. As for wxTimer, is there any parameter to make it independent?
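
If the goal is a refresh loop that runs independently of the GUI, one common
pattern is to keep the worker thread, initialise COM inside that thread, and
hand results back to the GUI thread with wx.CallAfter.  A sketch only
(fetchFromCom, updateDisplay, keepRunning and mainWindow are made-up names):

import threading
import time

import pythoncom
import wx

def refreshLoop(window):
    pythoncom.CoInitialize()        # COM must be initialised per thread
    try:
        while window.keepRunning:
            data = fetchFromCom()                       # hypothetical COM call
            wx.CallAfter(window.updateDisplay, data)    # back on the GUI thread
            time.sleep(1)
    finally:
        pythoncom.CoUninitialize()

t = threading.Thread(target=refreshLoop, args=(mainWindow,))
t.setDaemon(True)
t.start()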


-- 
http://mail.python.org/mailman/listinfo/python-list


Read the windows event log

2005-04-12 Thread Austin
My codes are below:

***
import win32evtlog

def check_records(records):
    for i in range(0, len(records)):
        print records[i].SourceName

h = win32evtlog.OpenEventLog(None, "System")
flags = win32evtlog.EVENTLOG_BACKWARD_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
records = win32evtlog.ReadEventLog(h, flags, 0)

print "Total " + str(len(records))
check_records(records)



The result from my code is a total of 2,
but the event log in the Windows Event Viewer shows 24 events.
How could I get all the events?
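
ReadEventLog returns only as many records as fit in one buffer per call, so
it has to be called in a loop until it returns an empty list.  A sketch:

import win32evtlog

def read_all(server=None, source="System"):
    h = win32evtlog.OpenEventLog(server, source)
    flags = win32evtlog.EVENTLOG_BACKWARD_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
    events = []
    while True:
        records = win32evtlog.ReadEventLog(h, flags, 0)
        if not records:
            break                   # no more records to read
        events.extend(records)
    win32evtlog.CloseEventLog(h)
    return events

print "Total", len(read_all())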


-- 
http://mail.python.org/mailman/listinfo/python-list


How to minimize the window

2005-04-12 Thread Austin
I wrote a GUI program with wxPython.
In the window, there are 3 buttons at the top of the frame.
"_" minimizes the program to the task bar.
I want to let the program minimize to the system tray instead.
Is there any way to give the window 4 buttons?
"."  "_"  "O"  "x"


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to detect windows shutdown

2005-04-06 Thread Austin
I wrote a GUI program with wxPython.
The error message is:

Unhandled exception
An unhandled exception occured. Press "Abort" to terminate the program, 
"Retry" to exit the program normally and "Ignore" to try to continue.

Actually, besides the main program, there is another thread running in the
background.




> Austin wrote:
>> I wrote a program running on windows.
>> I put the link of the program in "Start up" folder and let it executed 
>> minimized.
>> Every time when I open the computer, my program will be running in system 
>> tray.
>>
>> But if the user would like to shutdown the computer, the OS will show an 
>> error about exception.
>
> Important missing information:  is this a GUI program or
> a console program, and if it's a GUI program, what framework
> did you use to write it (wxPython, PyQt, other...)?  Also,
> what is the exception that you got?  (Always report the
> specific error: we can't guess what exception you got,
> and the answer could well point directly to a cause that
> is different than you think it is.)
>
> -Peter 


-- 
http://mail.python.org/mailman/listinfo/python-list


How to detect windows shutdown

2005-04-06 Thread Austin
I wrote a program that runs on Windows.
I put a link to the program in the "Start up" folder and let it run
minimized.
Every time I start the computer, my program runs in the system tray.

But if the user wants to shut down the computer, the OS shows an error about
an exception.

At first, I thought Windows would terminate all processes when it shuts down.
So, if Python has a way to detect the shutdown, I can stop my own process in
advance.

Thanks a lot. 


-- 
http://mail.python.org/mailman/listinfo/python-list


py2app on Mac OS X 10.3

2004-12-31 Thread Austin
"""
Minimal setup.py example, run with:
% python setup.py py2app
"""

from distutils.core import setup
import py2app
setup(
app = ['main.py'],
)

That is the sample code from the wiki.
I have a file 'main.py' and several sub-folders.
After I execute 'pythonw setup.py py2exe', I see 2 folders, 'dist' & 'build'
which is the same as py2exe.
I open the 'dist' folder and see a file 'main'. Then I double-click
'main' and the error message appears:
'IOError:[Errno 2] No such file or directory:
'/user/austin/Desktop/disk/main.app/Contents/Resources/log/eventlog.xml
I feel bewildered because 'main.app' should be a file, not a folder.

I was wondering if setup.py needs some extra code.
Could anyone give me some advice?
Thanks a lot.


-- 
http://mail.python.org/mailman/listinfo/python-list


Python on Linux

2004-12-26 Thread Austin
On Red Hat 9, Python is installed by default and its version is 2.2.2.
If I want to upgrade Python to 2.3.4 (or a newer version), how can I do it?
If I compile Python from source, how do I uninstall the old version?
I tried RPM packages but failed due to dependencies.
Could anyone give me some advice?


-- 
http://mail.python.org/mailman/listinfo/python-list