[issue43529] pathlib.Path.glob causes OSError encountering symlinks to long filenames

2021-03-19 Thread Eric Frederich


Eric Frederich  added the comment:

I'm happy to create a pull request but would need some help.

Looking at that routine, it has changed over time, and I cannot simply create a 
single patch against 3.6 and have it merge cleanly into newer versions.

I'd need someone to explain the process to me.

--

___
Python tracker 
<https://bugs.python.org/issue43529>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43529] pathlib.Path.glob causes OSError encountering symlinks to long filenames

2021-03-17 Thread Eric Frederich


Eric Frederich  added the comment:

I verified against all versions available for me to select.
For 3.10 I used the 3.10-rc Docker image.

--
versions: +Python 3.10, Python 3.6, Python 3.7




[issue43529] pathlib.Path.glob causes OSError encountering symlinks to long filenames

2021-03-17 Thread Eric Frederich


Change by Eric Frederich :


--
versions: +Python 3.8, Python 3.9




[issue43529] pathlib.Path.glob causes OSError encountering symlinks to long filenames

2021-03-17 Thread Eric Frederich


New submission from Eric Frederich :

Calling pathlib.Path.glob("**/*") on a directory containing a symlink which 
resolves to a very long filename causes OSError.

This is completely avoidable since symlinks are not followed anyway.

In pathlib.py, the _RecursiveWildcardSelector has a method _iterate_directories 
which first calls entry.is_dir() prior to excluding based on entry.is_symlink().

It's the entry.is_dir() which is failing.
If the check for entry.is_symlink() were to happen first this error would be 
avoided.

It's worth noting that on Linux "ls -l bad_link" works fine, as does
"find /some/path/containing/bad/link"; you do get an error, however, when
running "ls bad_link".
I believe Python's glob() should act like "find" on Linux and not fail.
Because it is explicitly ignoring symlinks anyway, it has no business calling 
is_dir() on a symlink.

I have attached a file which reproduces this problem.  It's meant to be run 
inside of an empty directory.
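uhoh.py is attached rather than inlined; a minimal sketch of the underlying failure (my own reconstruction, assuming Linux and the usual 255-byte NAME_MAX limit) is:

```python
import os
import tempfile

# An empty directory containing one symlink whose target *name* exceeds
# NAME_MAX (typically 255 bytes on Linux); the target need not exist.
d = tempfile.mkdtemp()
link = os.path.join(d, "bad_link")
os.symlink("x" * 300, link)

os.lstat(link)  # inspects the link itself: fine

try:
    os.stat(link)  # follows the link, as entry.is_dir() does
except OSError as e:
    print("OSError:", e.errno)  # ENAMETOOLONG on Linux
```

Checking entry.is_symlink() first would skip the failing stat() entirely.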

--
files: uhoh.py
messages: 388927
nosy: eric.frederich
priority: normal
severity: normal
status: open
title: pathlib.Path.glob causes OSError encountering symlinks to long filenames
Added file: https://bugs.python.org/file49884/uhoh.py




[issue41482] docstring errors in ipaddress.IPv4Network

2020-08-04 Thread Eric Frederich


New submission from Eric Frederich :

The __init__ method for IPv4Network has a typo where it says all three of 
'192.0.2.0/24', '192.0.2.0/255.255.255.0' and '192.0.0.2/0.0.0.255' should be 
equal; the third form transposes the address and should presumably read 
'192.0.2.0/0.0.0.255'.
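A quick check of the intended equivalence, with the address corrected to 192.0.2.0 (sketch using the ip_network convenience factory):

```python
import ipaddress

# Prefix-length, netmask, and hostmask spellings of the same /24 network.
a = ipaddress.ip_network('192.0.2.0/24')
b = ipaddress.ip_network('192.0.2.0/255.255.255.0')
c = ipaddress.ip_network('192.0.2.0/0.0.0.255')
print(a == b == c)  # True -- the corrected forms really are equal

# The address as written in the docstring is not even a valid network
# for that hostmask (192.0.0.2 has host bits set).
try:
    ipaddress.ip_network('192.0.0.2/0.0.0.255')
except ValueError as e:
    print(e)
```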

--
assignee: docs@python
components: Documentation
messages: 374841
nosy: docs@python, eric.frederich
priority: normal
severity: normal
status: open
title: docstring errors in ipaddress.IPv4Network
type: enhancement
versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue41482>



Windows / ctypes issue with custom build

2017-03-27 Thread Eric Frederich
I built my own Python 2.7.13 for Windows because I'm using bindings to a
3rd party application which were built with Visual Studio 2012.
I started to code up some stuff using the "click" module and found an error
when using click.echo with any kind of unicode input.

Python 2.7.13 (default, Mar 27 2017, 11:11:01) [MSC v.1700 64 bit
(AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import click
>>> click.echo('Hello World')
Hello World
>>> click.echo(u'Hello World')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\eric\my_env\lib\site-packages\click\utils.py", line
259, in echo
file.write(message)
  File "C:\Users\eric\my_env\lib\site-packages\click\_winconsole.py",
line 180, in write
return self._text_stream.write(x)
  File "C:\Users\eric\my_env\lib\site-packages\click\_compat.py", line
63, in write
return io.TextIOWrapper.write(self, x)
  File "C:\Users\eric\my_env\lib\site-packages\click\_winconsole.py",
line 164, in write
raise OSError(self._get_error_message(GetLastError()))
OSError: Windows error 6

If I download and install the Python 2.7.13 64-bit installer I don't get
this issue.  It echoes just fine.
I have looked into this a lot and am at a loss right now.
I'm not too familiar with Windows, Visual Studio, or ctypes.

I spent some time looking at the code path to produce the smallest file
(without click) which demonstrates this problem (see below).
It produces the same "Windows error 6"... again, this works fine with the
Python installed from the 2.7.13 64-bit MSI installer.
Can someone share the process used to create the Windows installers?  Is
this a manual process or is it automated?
Maybe I'm missing some important switch to msbuild or something.  Any help
or ideas are appreciated.
I cannot use a downloaded copy of Python... it needs to be built with a
specific version, update, patch, etc of Visual Studio.

All I did was
1) clone cpython from github and checkout 2.7.13
2) edit some xp stuff out of tk stuff to get it to compile on Windows
Server 2003
In `externals\tk-8.5.15.0\win\Makefile.in` remove
`ttkWinXPTheme.$(OBJEXT)` line
In `externals\tk-8.5.15.0\win\makefile.vc` remove
`$(TMP_DIR)\ttkWinXPTheme.obj` line
In `externals\tk-8.5.15.0\win\ttkWinMonitor.c` remove 2
`TtkXPTheme_Init` lines
In `PCbuild\tcltk.props` change VC9 to VC11 at the bottom
3) PCbuild\build.bat -e -p x64 "/p:PlatformToolset=v110"

After that I created an "install" by copying .exe, .pyd, .dll files, ran
get-pip.py, then python -m pip install virtualenv, then virtualenv my_env,
then activated it, then did a pip install click.
But with this stripped down version you don't need pip, virtualenv or
click... just ctypes.
You could probably even build it without the -e switch to build.bat.

from ctypes import byref, POINTER, py_object, pythonapi, Structure, windll
from ctypes import c_char, c_char_p, c_int, c_ssize_t, c_ulong, c_void_p
c_ssize_p = POINTER(c_ssize_t)

kernel32 = windll.kernel32
STDOUT_HANDLE = kernel32.GetStdHandle(-11)

PyBUF_SIMPLE = 0
MAX_BYTES_WRITTEN = 32767

class Py_buffer(Structure):
    _fields_ = [
        ('buf', c_void_p),
        ('obj', py_object),
        ('len', c_ssize_t),
        ('itemsize', c_ssize_t),
        ('readonly', c_int),
        ('ndim', c_int),
        ('format', c_char_p),
        ('shape', c_ssize_p),
        ('strides', c_ssize_p),
        ('suboffsets', c_ssize_p),
        ('internal', c_void_p)
    ]
    # Python 2's Py_buffer has an extra smalltable field before 'internal'
    _fields_.insert(-1, ('smalltable', c_ssize_t * 2))

bites = u"Hello World".encode('utf-16-le')
bytes_to_be_written = len(bites)
buf = Py_buffer()
pythonapi.PyObject_GetBuffer(py_object(bites), byref(buf), PyBUF_SIMPLE)
buffer_type = c_char * buf.len
buf = buffer_type.from_address(buf.buf)
code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
code_units_written = c_ulong()

kernel32.WriteConsoleW(STDOUT_HANDLE, buf, code_units_to_be_written,
                       byref(code_units_written), None)
bytes_written = 2 * code_units_written.value

if bytes_written == 0 and bytes_to_be_written > 0:
    raise OSError('Windows error %s' % kernel32.GetLastError())
-- 
https://mail.python.org/mailman/listinfo/python-list


Compiling new Pythons on old Windows compilers

2017-03-12 Thread Eric Frederich
There is a commercial application which allows customizations through a C
API.
There are 3 different releases of this application each compiled with
different versions of Visual Studio, 2008, 2010, and 2012.

I'd like to release a customization which embeds a Python interpreter, but
I'd like to use the same Python release across all 3 application versions.
There may be versions of Python like 2.7 or something which compile on all
3 of those Visual Studio releases, but I'd really prefer to use Python 3.

Any idea why compatibility was dropped recently?  There used to be a PC
directory with different VS directories in the source tree, now it isn't
there any more.

Thanks,
~Eric
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue29319] Embedded 3.6.0 distribution cannot run pyz files

2017-03-07 Thread Eric Frederich

Eric Frederich added the comment:

I can confirm that this is NOT fixed in 3.6.1rc1 embeddable zip.

This is extremely easy to reproduce.  Look at the contents of foo.py and 
bar.py.  Just throw them in the same directory and try to run 
C:\path\to\extracted\python.exe foo.py

--
versions: +Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29319>



[issue29319] Embedded 3.6.0 distribution cannot run pyz files

2017-03-07 Thread Eric Frederich

Eric Frederich added the comment:

I'm wondering if I'm experiencing this same issue.
In a simple directory with a foo.py and a bar.py, where foo tries to import from 
bar, I cannot get it to work with the embeddable 3.6.0 zip, but the standard 
3.6.0 that gets "installed" works fine.  So does 3.5.3.

C:\Users\eric\Desktop\wtf>more foo.py
from bar import bar

print(bar('hi'))

C:\Users\eric\Desktop\wtf>more bar.py
def bar(s):
    return s.upper()

C:\Users\eric\Desktop\wtf>C:\Users\eric\Downloads\python-3.5.3-embed-amd64\python.exe foo.py
HI

C:\Users\eric\Desktop\wtf>C:\Users\eric\Downloads\python-3.6.0-embed-amd64\python.exe foo.py
Traceback (most recent call last):
  File "foo.py", line 1, in <module>
    from bar import bar
ModuleNotFoundError: No module named 'bar'
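If it is the same issue, the difference reportedly comes down to whether the script's directory ends up on sys.path (which the embeddable distribution's ._pth configuration controls). A minimal sketch of that dependency, not using the embeddable build itself:

```python
import os
import sys
import tempfile

# Recreate the layout: a bar.py sitting next to the "script" being run.
d = tempfile.mkdtemp()
with open(os.path.join(d, "bar.py"), "w") as f:
    f.write("def bar(s):\n    return s.upper()\n")

# The import only works once the script's directory is on sys.path --
# the step that normally happens automatically when running a script.
sys.path.insert(0, d)
from bar import bar
print(bar("hi"))  # HI
```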

--
nosy: +eric.frederich




[issue24685] collections.OrderedDict collaborative subclassing

2015-07-24 Thread Eric Frederich

Eric Frederich added the comment:

I understand that in the general case you cannot just swap the order around and 
get the same behaviour.

This LoggingDict just prints some stuff to the screen and delegates to super 
and so I believe it should work wherever it is placed in a cooperative 
hierarchy.  Do you agree?

Now, I understand that OrderedDict is not cooperative.  You stated that this is 
a design decision and I respect that choice, but you also stated that classes 
can be made to be cooperative by creating a wrapper.

The reason I re-opened this bug is because I fail to see a way to create such a 
wrapper for Python 3.  Do you believe that it should be possible to create a 
cooperative wrapper?

If it is possible (and it's just my inability to create one) then I have no 
issue and the bug should be closed.
If it is not possible, then perhaps it could be noted somewhere that OrderedDict 
is not cooperative, cannot be made cooperative, and should be listed last 
when using multiple inheritance.
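For context, a minimal sketch of the cooperative pattern from the talk, in the order that does work (LoggingDict ahead of OrderedDict in the MRO; names are illustrative):

```python
from collections import OrderedDict

logged = []

class LoggingDict(dict):
    """Log each __setitem__, then delegate to the next class in the MRO."""
    def __setitem__(self, key, value):
        logged.append((key, value))
        super().__setitem__(key, value)

# LoggingDict first: its __setitem__ runs, and its super() call reaches
# OrderedDict.__setitem__, so both behaviors compose.
class LoggingOD(LoggingDict, OrderedDict):
    pass

d = LoggingOD()
d['hello'] = 'world'
print(logged)           # [('hello', 'world')]
print(list(d.items()))  # [('hello', 'world')]

# Reversing the bases (OrderedDict first) is the case under discussion:
# whether OrderedDict's __setitem__ delegates via super() or goes straight
# to dict.__setitem__ decides whether LoggingDict ever sees the call.
```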

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue24685



[issue24685] collections.OrderedDict collaborative subclassing

2015-07-24 Thread Eric Frederich

Eric Frederich added the comment:

Éric (Araujo),

Combining defaultdict and OrderedDict is a little easier, since one of them 
(defaultdict) has special behavior on getitem while the other (OrderedDict) has 
special behavior on setitem.

I played with mixing those two myself, saw some issues, and found that I had 
to explicitly call __init__ on both base classes to get them primed properly.
--




[issue24685] collections.OrderedDict collaborative subclassing

2015-07-23 Thread Eric Frederich

Eric Frederich added the comment:

Raymond,

Thanks for the explanation of your reasoning.
Could you please provide an example of how to create a cooperative subclass of 
OrderedDict?

I have attempted to make one.
I succeeded in making it work where the previous example failed, but in doing so 
made the example that used to work now fail.
I have attached my attempt at making a cooperative OrderedDict in inj2.py:

class coop_OrderedDict(OrderedDict):
    """A cooperative version of OrderedDict"""

    def __setitem__(self, k, v, **kwargs):
        # OrderedDict calls dict.__setitem__ directly, skipping over LoggingDict;
        # fortunately we can control this with the dict_setitem keyword argument.

        # Calculate OrderedDict's real parent instead of skipping right to
        # dict.__setitem__ (though depending on the hierarchy it may actually
        # be dict.__setitem__).
        m = super(OrderedDict, self).__setitem__
        # dict_setitem wants an unbound method
        unbound_m = m.im_func
        return super(coop_OrderedDict, self).__setitem__(
            k, v, dict_setitem=unbound_m)

In Python2 it fails with:

Traceback (most recent call last):
  File "/tmp/inj2.py", line 51, in <module>
    old1['hooray'] = 'it worked'
  File "/tmp/inj2.py", line 30, in __setitem__
    return super(LoggingDict, self).__setitem__(k, v)
  File "/tmp/inj2.py", line 18, in __setitem__
    unbound_m = m.im_func
AttributeError: 'method-wrapper' object has no attribute 'im_func'

In Python3 both cases fail.

--
Added file: http://bugs.python.org/file39998/inj2.py




[issue24685] collections.OrderedDict collaborative subclassing

2015-07-23 Thread Eric Frederich

Changes by Eric Frederich eric.freder...@gmail.com:


Removed file: http://bugs.python.org/file39998/inj2.py




[issue24685] collections.OrderedDict collaborative subclassing

2015-07-23 Thread Eric Frederich

Changes by Eric Frederich eric.freder...@gmail.com:


Added file: http://bugs.python.org/file3/inj2.py




[issue24685] collections.OrderedDict collaborative subclassing

2015-07-23 Thread Eric Frederich

Eric Frederich added the comment:

Attached, as inj3.py, is a version I made which seems to work with Python2 but 
not with Python3's C implementation of OrderedDict.

I had to walk the MRO myself to get the unbound method to pass along as 
dict_setitem.

With Python 3 it doesn't look like this was left configurable.
It crashes complaining "TypeError: wrapper __setitem__ doesn't take keyword 
arguments".

Re-opening this bug since it seems impossible to make OrderedDict cooperative 
in Python 3 even with a wrapper.

Perhaps Python 3's OrderedDict should either
(a) be cooperative at the C level, or
(b) support the dict_setitem keyword argument to maintain compatibility with 
Python 2.

--
resolution: not a bug -> 
status: closed -> open
versions: +Python 3.5
Added file: http://bugs.python.org/file4/inj3.py




[issue24685] collections.OrderedDict collaborative subclassing

2015-07-22 Thread Eric Frederich

New submission from Eric Frederich:

After watching the PyCon talk Super considered super[1] and reading the 
corresponding blog post[2] I tried playing with dependency injection.

I was surprised to notice that the example he gave did not work if I swap the 
order of the classes around.  I think it should have.  See attached file.

I think this is a bug in collections.OrderedDict
OrderedDict is not well-behaved as far as cooperative subclassing is concerned.

The source code is hard-wired with a keyword argument 
dict_setitem=dict.__setitem__ which it then calls at the end with 
dict_setitem(self, key, value).

A quick search of GitHub for dict_setitem shows that this
bad practice seems to be making its way into other projects.
If dict_setitem keyword arg is really necessary to have, then maybe:

(a) have it default to None
(b) at the end of __setitem__ do something like:

if dict_setitem is not None:
return dict_setitem(self, key, value)

super(OrderedDict, self).__setitem__(key, value)

After a discussion on #py-dev this seemed like a reasonable request (not 
necessarily the implementation, but the idea that OrderedDict should cooperate).
I tested this against the C implementation of OrderedDict in Python 3.5 and 
noticed that it doesn't cooperate either.


[1] https://www.youtube.com/watch?v=EiOglTERPEo
[2] https://rhettinger.wordpress.com/2011/05/26/super-considered-super/

--
components: Library (Lib)
files: inj.py
messages: 247136
nosy: eric.frederich, eric.snow, rhettinger
priority: normal
severity: normal
status: open
title: collections.OrderedDict collaborative subclassing
versions: Python 2.7, Python 3.5
Added file: http://bugs.python.org/file39982/inj.py




Re: Passing C pionters to Python for use with cffi

2013-10-18 Thread Eric Frederich
Dieter,

Thanks for the reply.
I actually have a fully working set of bindings using Cython.
I'm looking to move away from Cython and use cffi.
My reasoning is that with cffi my binding package would be pure python.

Also, I want my all my code to be Python, not Cython.
I don't care about performance or compiling my Python to C so that it can
be used.
I just want usable bindings so that I can write extensions as well as use
an interactive Python interpreter.

So... again, just wondering if Py_BuildValue("(k)", some_structure) is
the proper way to send a void* over to Python so that cffi can cast and use
it.


On Fri, Oct 11, 2013 at 2:09 AM, dieter die...@handshake.de wrote:

 Eric Frederich eric.freder...@gmail.com writes:

  I'm extending an application that supports customization using the C
  language.
  I am able to write standalone python applications that use the C API's
  using cffi.
  This is good, but only a first step.
 
  This application allows me to register code that will run on various
 events
  but it has to be C code.

 You might want to have a look at cython.

 cython is a compiler compiling source programs in a Python
 extension into C. The corresponding C functions can
 then be called from C (you may need to annotate the
 functions used in this way to get proper GIL (Global Interpreter Lock)
 handling).



Passing C pionters to Python for use with cffi

2013-10-10 Thread Eric Frederich
Hello,

I'm extending an application that supports customization using the C
language.
I am able to write standalone python applications that use the C API's
using cffi.
This is good, but only a first step.

This application allows me to register code that will run on various events
but it has to be C code.
I'd like to write Python code instead.
So basically, my C code will use the Python C API to get a handle to the
module and function (like how they do it here
http://docs.python.org/2/extending/embedding.html#pure-embedding)
What would be the proper way to pass a pointer to a C structure from C to
Python so that I can use ffi.cast and be able to use it from within Python?

I have got this to work but I'm not certain that it is correct, fool-proof,
or portable.

This is how I got it to work from the C side

PyObject* pArgs = Py_BuildValue("(k)", some_structure);
PyObject_CallObject(pFunc, pArgs);

... and from the Python side...

def my_function(struct_ptr):
    struct = ffi.cast("mystruct_t *", struct_ptr)


Like I said, this works fine.  I am able to manipulate the structure from
within Python.
I just want to know the correct way to do this.
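One way to sanity-check the round-trip without the 3rd party library is to do the same integer cast with ctypes (a sketch; MyStruct stands in for the real mystruct_t). Note that "k" packs the pointer as an unsigned long, which can truncate on platforms where pointers are wider than unsigned long (e.g. 64-bit Windows), so hedging toward "K" or PyLong_FromVoidPtr may be safer:

```python
import ctypes

class MyStruct(ctypes.Structure):
    _fields_ = [("a", ctypes.c_int), ("b", ctypes.c_int)]

s = MyStruct(1, 2)

# What Py_BuildValue("(k)") does on the C side: hand the structure's
# address to Python as a plain integer.
addr = ctypes.addressof(s)

# What ffi.cast("mystruct_t *", struct_ptr) does on the Python side:
# reinterpret that integer as a typed pointer.
p = ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(MyStruct))
p.contents.a = 42   # mutate through the pointer
print(s.a)          # 42 -- the original structure changed
```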

Thanks,
~Eric


Re: embedding interactive python interpreter

2013-09-11 Thread Eric Frederich
Scott,

Yes.
As Mark Hammond suggested I took a look into the code/console modules.
I wound up using InteractiveConsole from the code module.
I'm sure my requirements are completely different from yours but I'll
explain what I did anyway...

I was working with a client/server architecture where I wanted the client
to be able to interact with a server-side Python console.
Yes, I know this is a huge security hole.
It is only used as a debugging console to be able to interactively inspect
the state of the remote server process from within the client application
itself.

This 3rd party client server application I was customizing allows me to
write my own services.
So I wrote a service that accepts a string (the python line) and returns
two strings and a boolean (stdout, stderr, and whether or not the
interpreter expects more input).
That boolean basically controls whether the console should show a ">>>" or
a "..." and comes from InteractiveConsole.push.

To get stdout and stderr I monkey patched sys.stdout and sys.stderr with an
instance of MyBuffer.

class MyBuffer(object):
    def __init__(self):
        self.buffer = []

    def write(self, data):
        self.buffer.append(data)

    def get(self):
        ret = ''.join(self.buffer)
        self.buffer = []
        return ret

In my subclass of InteractiveConsole I defined the following...

    def __init__(self, *args, **kwargs):
        InteractiveConsole.__init__(self, *args, **kwargs)
        self.mb_out = MyBuffer()
        self.mb_err = MyBuffer()

    def process(self, s):
        sys.stdout, sys.stderr = self.mb_out, self.mb_err
        more = self.push(s)
        sys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__
        return self.mb_out.get(), self.mb_err.get(), more

My service (written in C++), upon initialization, gets a handle to a
singleton instance of this InteractiveConsole subclass using the Python C
APIs.
When the service responds to a request it uses the Python C APIs to call
the process method above and returns the two strings and the boolean.

The client, which happens to be Java/Eclipse based, then has something that
resembles a Python console in the GUI and uses that single service to
interact with the remote Python console.

Lots of stuff going on but it works very well.
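Condensed into a runnable sketch (MyBuffer and process as described above; RemoteConsole is a stand-in name for the subclass):

```python
import sys
from code import InteractiveConsole

class MyBuffer(object):
    def __init__(self):
        self.buffer = []
    def write(self, data):
        self.buffer.append(data)
    def get(self):
        ret = ''.join(self.buffer)
        self.buffer = []
        return ret

class RemoteConsole(InteractiveConsole):
    def __init__(self, *args, **kwargs):
        InteractiveConsole.__init__(self, *args, **kwargs)
        self.mb_out = MyBuffer()
        self.mb_err = MyBuffer()

    def process(self, s):
        # Route prints and tracebacks into our buffers while push() runs.
        sys.stdout, sys.stderr = self.mb_out, self.mb_err
        more = self.push(s)
        sys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__
        return self.mb_out.get(), self.mb_err.get(), more

con = RemoteConsole()
print(con.process("1 + 1"))     # ('2\n', '', False)
print(con.process("def f():"))  # ('', '', True) -- console wants more input
```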



On Wed, Sep 11, 2013 at 1:42 PM, Scott ihearthondu...@gmail.com wrote:

 Eric,

 https://mail.python.org/pipermail/python-list/2011-March/600441.html

 Did you ever figure this out?  I'm trying to embed an interactive
 interpreter, and I'm sure that python has an easy way to do this, but I
 can't seem to find it...

 Thanks,
 -C. Scott! Brown



Re: distributing a binary package

2013-05-07 Thread Eric Frederich
I see where I can specify a module that distutils will try to compile,
but I already have the .so files compiled.

I'm sure it's simple, I just can't find it or don't know what to look for.

On Mon, May 6, 2013 at 9:13 PM, Miki Tebeka miki.teb...@gmail.com wrote:

 Basically, I'd like to know how to create a proper setup.py script
 http://docs.python.org/2/distutils/setupscript.html


distributing a binary package

2013-05-06 Thread Eric Frederich
Hello,

Hopefully a simple question.
Basically, I'd like to know how to create a proper setup.py script to
install a package.
The package exists as a single directory with a single __init__.py
file and (currently) 93 .so files.

Right now I just copy it into the site-packages directory but I'd like
to start using virtualenv / pip so I'd like to do the installation via
python setup.py install.

I need to keep my build process external to this for performance
reasons (with a Makefile I can do parallel builds and I have a machine
with 12 cores).

My Makefile does all the work.  It produces a directory that simply
needs to be copied to site-packages, but how do I craft a setup.py
script to do the actual installation?

Thanks,
~Eric


Python platform/framework for new RESTful web app

2013-04-25 Thread Eric Frederich
If I wanted to create a new web application (RESTful) today with Python
what are my options given the following requirements.

* Google Account authentication
* Facebook authentication
* Managed hosting (like Google App Engine or Heroku) but with the ability
to be self-hosted later down the road.

I am familiar with Django (well I was a 3 years ago).
I have played a little with web.py and like the ideas there.

I like the idea of using something like GAE but don't want to be locked in.

Is Django the answer?
I think you can run Django on GAE (and obviously you can self-host it).
I see there is a Django REST framework.  Is this a good framework?
Are there good Google and Facebook authentication extensions?

Thanks,
~Eric


empty object from C

2012-12-07 Thread Eric Frederich
Hello,

From C, I'd like to call a Python function that takes an object and sets
some attributes on it.
Let's say this is the function...

def foo(msg):
    msg.bar = 123
    msg.spam = 'eggs'

How do I create an empty object in C?
In Python I would do something like this...

class Msg(object):
    pass

... and then instantiate an instance, and call the function.

msg = Msg()
foo(msg)



I know how to create an empty dictionary and I could get by with that, but
I'd like to create an object.
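For reference, the object the C code needs to produce is just an instance of a do-nothing class; in Python terms:

```python
# A bare new-style class whose instances accept arbitrary attributes --
# the same thing "class Msg(object): pass" defines.
Msg = type('Msg', (), {})

def foo(msg):
    msg.bar = 123
    msg.spam = 'eggs'

msg = Msg()
foo(msg)
print(msg.bar, msg.spam)  # 123 eggs
```

From C, one way (a sketch, not the only route) is to fetch such a class, or the 'type' builtin itself, through the C API and call it with PyObject_CallObject to get the instance, then pass that instance to foo().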

Thanks,
~Eric


Re: error importing smtplib

2012-11-19 Thread Eric Frederich
I can do this in stand alone programs because my code does the import and
calls the login function so I can control the order of things.
Unfortunately stand alone programs are not the only ways in which I am
using these Python bindings.

You can customize and extend this 3rd party application at various
extension points all of which are invoked after login.
We have a lot of extensions written in Python.

I guess I will have to go back to the BAR vendor and ask if it is okay to
remove their old .so file.
Perhaps their code will just work with the newer 0.9.8e, or perhaps they'll
have to relink or recompile.
have to relink or recompile.

On Fri, Nov 16, 2012 at 5:00 PM, Terry Reedy tjre...@udel.edu wrote:

 [easy] Do the import before the function call, which is the proper order
 and the one that works.

 --
 Terry Jan Reedy



Re: error importing smtplib

2012-11-16 Thread Eric Frederich
So I inspected the process through /proc/pid/maps
That seemed to show what libraries had been loaded (though there is
probably an easier way to do this).

In any case, I found that if I import smtplib before logging in I see these
get loaded...

/opt/foo/python27/lib/python2.7/lib-dynload/_ssl.so
/lib64/libssl.so.0.9.8e

Then after logging in, I see this other .so get loaded...

/opt/bar/lib64/libssl.so.0.9.7

So that is what happens when things go well and I don't get any error
messages.
However, when I do the log in first I see the /opt/bar .so file loaded first

/opt/bar/lib64/libssl.so.0.9.7

Then after importing smtplib I see the other two show up...

/opt/foo/python27/lib/python2.7/lib-dynload/_ssl.so
/lib64/libssl.so.0.9.8e

So I'm guessing the problem is that after I log in, the process has a
conflicting libssl.so file loaded.
Then when I try to import smtplib it tries getting things from there and
that is where the errors are coming from.

The question now is how do I fix this?
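For what it's worth, the /proc inspection can be scripted from Python itself (a Linux-only sketch; this just parses the pathname column of /proc/self/maps):

```python
import os

def loaded_mappings():
    """File-backed mappings of the current process, read from /proc (Linux)."""
    paths = set()
    with open('/proc/self/maps') as f:
        for line in f:
            parts = line.split()
            # address perms offset dev inode [pathname]; keep real files only
            if len(parts) >= 6 and parts[5].startswith('/'):
                paths.add(parts[5])
    return paths

# e.g. look for the two conflicting libssl versions:
for p in sorted(loaded_mappings()):
    if 'ssl' in p:
        print(p)
```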




On Thu, Nov 15, 2012 at 4:37 PM, Terry Reedy tjre...@udel.edu wrote:

 On 11/15/2012 1:48 PM, Eric Frederich wrote:

 Thanks for the idea.
 sys.path was the same before and after the login


 Too bad. That seems to be a typical cause of import failure.


  What else should I be checking?


 No idea. You are working beyond my knowledge. But I might either look at
 the foo-login code carefully, or disable (comment out) parts of it to see
 what makes the import fail.


 --
 Terry Jan Reedy



error importing smtplib

2012-11-15 Thread Eric Frederich
Hello,

I created some bindings to a 3rd party library.
I have found that when I run Python and import smtplib it works fine.
If I first log into the 3rd party application using my bindings however I
get a bunch of errors.

What do you think this 3rd party login could be doing that would affect the
ability to import smtplib?

Any suggestions for debugging this further?  I am lost.

This works...

import smtplib
FOO_login()

This doesn't...

FOO_login()
import smtplib

Errors.

>>> import smtplib
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
    return __get_builtin_constructor(name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
    return __get_builtin_constructor(name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
    return __get_builtin_constructor(name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
    return __get_builtin_constructor(name)
  File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha512
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: error importing smtplib

2012-11-15 Thread Eric Frederich
Thanks for the idea.
sys.path was the same before and after the login

What else should I be checking?
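
Beyond sys.path, a few other pieces of interpreter state are worth diffing around the foreign call. A hedged sketch (written in modern Python 3 syntax, whereas the thread uses 2.7; `FOO_login` is the thread's placeholder for the 3rd-party binding):

```python
import sys

def snapshot():
    # Capture interpreter state a foreign library call might disturb.
    return {
        "path": list(sys.path),
        "modules": set(sys.modules),
    }

def diff(before, after):
    # Report what changed between two snapshots.
    changes = {}
    if before["path"] != after["path"]:
        changes["path"] = (before["path"], after["path"])
    new_mods = after["modules"] - before["modules"]
    if new_mods:
        changes["new_modules"] = sorted(new_mods)
    return changes

before = snapshot()
import json  # stands in for whatever FOO_login() might pull in
after = snapshot()
print(diff(before, after))
```

If `new_modules` shows a second copy of `hashlib` or `_hashlib` appearing (or `path` changing), that points at the login call rather than at smtplib itself.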

On Thu, Nov 15, 2012 at 11:57 AM, Terry Reedy tjre...@udel.edu wrote:

 On 11/15/2012 9:38 AM, Eric Frederich wrote:

 Hello,

 I created some bindings to a 3rd party library.
 I have found that when I run Python and import smtplib it works fine.
 If I first log into the 3rd party application using my bindings however
 I get a bunch of errors.

 What do you think this 3rd party login could be doing that would affect
 the ability to import smtp lib.


 I don't know what 'login' actually means,...


  This works...

 import smtplib
 FOO_login()

 This doesn't...

 FOO_login()
 import smtplib


 but my first guess is that FOO_login alters the module search path so that
 at least one of smtplib, hashlib, or the _xxx modules imported by hashlib
 is being imported from a different place. To check that

 import sys
 before = sys.path
 FOO_login()
 print sys.path==before

 Similar code can check anything else accessible through sys.


  Errors.

 >>> import smtplib
 ERROR:root:code for hash sha224 was not found.


 I am puzzled by this line before the traceback. I cannot find 'ERROR' in
 either smtplib or hashlib.


 Traceback (most recent call last):
   File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
     globals()[__func_name] = __get_hash(__func_name)
   File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
     return __get_builtin_constructor(name)
   File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
     raise ValueError('unsupported hash type %s' % name)
 ValueError: unsupported hash type sha224

 [snip similar messages]

 It is also unusual to get multiple tracebacks. *Exactly* how are you
 running python and is 2.7 what you intend to run?

 --
 Terry Jan Reedy

 --
 http://mail.python.org/mailman/listinfo/python-list



Re: error importing smtplib

2012-11-15 Thread Eric Frederich
Sorry, only saw your first response, didn't see the others.

I compiled Python 2.7.2 myself with --enable-shared
To create standalone applications that interact with this 3rd party program
your main C file instead of having a main function has a FOO_user_main
function.
When you link your program you link against a provided foo_main.o file.

My python executable (called FOO_user_main) was created from python.c but
with main replaced with FOO_user_main no other differences.

I have been using this type of installation for a couple of years and
everything works fine.

I only get these errors/warnings on one environment out of 10 or so
development and qa machines.

Any help trying to figure out what is different before and after the login
would be appreciated.
Is there some place in /proc I could look to see what happened?

Thanks,
~Eric
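
One concrete thing to compare before and after the login on Linux: the shared libraries mapped into the process. The hashlib errors are consistent with the 3rd-party code dlopen()ing its own (incompatible) OpenSSL. A hedged sketch using /proc — Linux-only, and the "crypto" pattern below is an assumption about what to look for:

```python
def mapped_libraries(substring=""):
    # Return sorted file-backed mappings of this process whose path
    # contains `substring`; returns [] where /proc is unavailable.
    libs = set()
    try:
        with open("/proc/self/maps") as maps:
            for line in maps:
                fields = line.split()
                if len(fields) >= 6 and fields[5].startswith("/"):
                    if substring in fields[5]:
                        libs.add(fields[5])
    except OSError:
        pass
    return sorted(libs)

before = set(mapped_libraries())
# FOO_login()  # the 3rd-party call under suspicion (placeholder)
after = set(mapped_libraries())
print(after - before)              # anything newly dlopen()ed
print(mapped_libraries("crypto"))  # competing libcrypto copies, if any
```

Two different `libcrypto` paths showing up after the login would explain why `_hashlib`'s OpenSSL constructors stop resolving.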


On Thu, Nov 15, 2012 at 11:57 AM, Terry Reedy tjre...@udel.edu wrote:

 On 11/15/2012 9:38 AM, Eric Frederich wrote:

 Hello,

 I created some bindings to a 3rd party library.
 I have found that when I run Python and import smtplib it works fine.
 If I first log into the 3rd party application using my bindings however
 I get a bunch of errors.

 What do you think this 3rd party login could be doing that would affect
 the ability to import smtp lib.


 I don't know what 'login' actually means,...


  This works...

 import smtplib
 FOO_login()

 This doesn't...

 FOO_login()
 import smtplib


 but my first guess is that FOO_login alters the module search path so that
 at least one of smtplib, hashlib, or the _xxx modules imported by hashlib
 is being imported from a different place. To check that

 import sys
 before = sys.path
 FOO_login()
 print sys.path==before

 Similar code can check anything else accessible through sys.


  Errors.

  >>> import smtplib
 ERROR:root:code for hash sha224 was not found.


 I am puzzled by this line before the traceback. I cannot find 'ERROR' in
 either smtplib or hashlib.


 Traceback (most recent call last):
   File "/opt/foo/python27/lib/python2.7/hashlib.py", line 139, in <module>
     globals()[__func_name] = __get_hash(__func_name)
   File "/opt/foo/python27/lib/python2.7/hashlib.py", line 103, in __get_openssl_constructor
     return __get_builtin_constructor(name)
   File "/opt/foo/python27/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
     raise ValueError('unsupported hash type %s' % name)
 ValueError: unsupported hash type sha224

 [snip similar messages]

 It is also unusual to get multiple tracebacks. *Exactly* how are you
 running python and is 2.7 what you intend to run?

 --
 Terry Jan Reedy

 --
 http://mail.python.org/mailman/listinfo/python-list



Re: remote read eval print loop

2012-08-21 Thread Eric Frederich
This isn't really for users.  It is for developers like me.
Yes it is a security hole but again, it is a debugger.

The people who will be using it can all ssh into the server machine with
the same ID that the server process is running on.
In fact, this is quite normal.

As it is right now, we log into these machines and start an interactive
Python and use the bindings to debug things.
This works well most of the time but when you do that you need to start
another session of the application.
It is useful to run code interactively from within an actual client session
where something has gone wrong.

In any case I got this working with a rudimentary SWT Java client
(yuck, but the application is based on Eclipse).

Below is the code I used.  It has a singleton interactive console object.
I sub-classed it and defined another method process which simply calls
the push method after wrapping stdout and stderr.
It returns anything that was printed to stdout, stderr, and the return
value of the push method.

So now from the client I can process one line at a time and it behaves much
like the real interactive console... you wouldn't even realize there is all
this client / server / WSDL / xml / https junk going on.

 BEGIN CODE

import sys
from code import InteractiveConsole

class MyBuffer(object):
def __init__(self):
self.buffer = []
def write(self, data):
self.buffer.append(data)
def get(self):
ret = ''.join(self.buffer)
self.buffer = []
return ret

class MyInteractiveConsole(InteractiveConsole):

def __init__(self, *args, **kwargs):
InteractiveConsole.__init__(self, *args, **kwargs)
self.mb_out = MyBuffer()
self.mb_err = MyBuffer()

def process(self, s):
sys.stdout, sys.stderr = self.mb_out, self.mb_err
more = self.push(s)
sys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__
return self.mb_out.get(), self.mb_err.get(), more

print 'creating new interactive console'
mic = MyInteractiveConsole()
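
For reference, on Python 3 the same capture can be done without swapping sys.stdout by hand, using contextlib. A sketch of the equivalent pattern (class and method names here are illustrative, not from the original):

```python
import io
from code import InteractiveConsole
from contextlib import redirect_stdout, redirect_stderr

class CapturingConsole(InteractiveConsole):
    def process(self, line):
        out, err = io.StringIO(), io.StringIO()
        # push() compiles and runs the line; `more` is true while the
        # console is still waiting for the rest of a compound statement.
        with redirect_stdout(out), redirect_stderr(err):
            more = self.push(line)
        return out.getvalue(), err.getvalue(), more

con = CapturingConsole()
print(con.process("x = 2 + 2"))
print(con.process("print(x)"))
```

Tracebacks raised inside `push()` are written through sys.stderr at call time, so the `redirect_stderr` context captures them too.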


On Fri, Aug 17, 2012 at 10:06 AM, Chris Angelico ros...@gmail.com wrote:

 On Fri, Aug 17, 2012 at 11:28 PM, Eric Frederich
 eric.freder...@gmail.com wrote:
  Within the debugging console, after importing all of the bindings, there
  would be no reason to import anything whatsoever.
  With just the bindings I created and the Python language we could do
  meaningful debugging.
  So if I block the ability to do any imports and calls to eval I should be
  safe right?

 Nope. Python isn't a secured language in that way. I tried the same
 sort of thing a while back, but found it effectively impossible. (And
 this after people told me It's not possible, don't bother trying. I
 tried anyway. It wasn't possible.)

 If you really want to do that, consider it equivalent to putting an
 open SSH session into your debugging console. Would you give that much
 power to your application's users? And if you would, is it worth
 reinventing SSH?

 ChrisA
 --
 http://mail.python.org/mailman/listinfo/python-list



Re: remote read eval print loop

2012-08-17 Thread Eric Frederich
What I wanted to implement was a debugging console that runs right on the
client rather than on the server.
You'd have to be logged into the application to do anything meaningful or
even start it up.
All of the C functions that I created bindings for respect the security of
the logged in user.

Within the debugging console, after importing all of the bindings, there
would be no reason to import anything whatsoever.
With just the bindings I created and the Python language we could do
meaningful debugging.
So if I block the ability to do any imports and calls to eval I should be
safe right?
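
Unfortunately no — as the replies note, blocking `import` and `eval` is not a security boundary in CPython. Any code can walk from a plain object to the full object graph and recover already-imported modules. A classic (CPython-specific) demonstration of why; the helper name is illustrative:

```python
import os  # the *host* imports os, as any real process already has

def reach_os_system():
    # Recover os.system with no import statement and no eval(), by
    # walking object.__subclasses__() for a class defined in the os
    # module and reading its function globals.
    for cls in object.__subclasses__():
        init = getattr(cls, "__init__", None)
        globs = getattr(init, "__globals__", None)
        if globs and globs.get("__name__") == "os":
            return globs.get("system")
    return None

func = reach_os_system()
print(func)  # a live os.system, reached without 'import' in user code
```

Filtering source text for the words `import` and `eval` does nothing against this; the only real boundaries are OS-level ones (separate user, container, or restricted shell).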

On Fri, Aug 17, 2012 at 7:09 AM, rusi rustompm...@gmail.com wrote:

 On Aug 17, 12:25 pm, Chris Angelico ros...@gmail.com wrote:
  On Fri, Aug 17, 2012 at 12:27 PM, Steven D'Aprano
 
  steve+comp.lang.pyt...@pearwood.info wrote:
   There is already awesome protocols for running Python code remotely
 over
   a network. Please do not re-invent the wheel without good reason.
 
   See pyro, twisted, rpyc, rpclib, jpc, and probably many others.
 
  But they're all tools for building protocols. I like to make
  line-based protocols

 Don't know if this is relevant.  If it is, it's more in the heavyweight
 direction.
 Anyway just saw this book yesterday

 http://springpython.webfactional.com/node/39
 --
 http://mail.python.org/mailman/listinfo/python-list



remote read eval print loop

2012-08-16 Thread Eric Frederich
Hello,

I have a bunch of Python bindings for a 3rd party software running on the
server side.
I can add client side extensions that communicate over some http / xml type
requests.
So I can define functions that take a string and return a string.
I would like to get a simple read eval print loop working.

Without adding a bunch of syntax checking on the client side can I get the
behavior of the regular interpreter?
What I mean is things like going from >>> to ... after you start a block
(like if, while, for, etc).

Is this possible or can I not send over one line at a time and I'd have to
send over a complete block?

Thanks,
~Eric
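
The stdlib exposes exactly this check: codeop.compile_command() (which the interactive interpreter itself uses) returns None while the accumulated source is an incomplete statement — i.e. while the real REPL would show `...` — so the server can accept one line at a time and append it to a buffer. A minimal sketch:

```python
from codeop import compile_command

def needs_more(source):
    # True while the accumulated source is an incomplete block (the
    # REPL's "..." state); raises SyntaxError if it can never compile.
    return compile_command(source) is None

print(needs_more("1 + 1"))                     # False: complete statement
print(needs_more("if x > 0:"))                 # True: block just opened
print(needs_more("if x > 0:\n    x -= 1\n"))   # False: block closed
```

So the client only needs to send raw lines; the server-side buffer plus `needs_more()` decides whether to execute or to keep prompting with `...`.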


Re: properly catch SIGTERM

2012-07-20 Thread Eric Frederich
On Fri, Jul 20, 2012 at 1:51 AM, Jason Friedman ja...@powerpull.net wrote:

  This seems to work okay but just now I got this while hitting ctrl-c
  It seems to have caught the signal at or in the middle of a call to
  sys.stdout.flush()
 
 
  --- Caught SIGTERM; Attempting to quit gracefully ---
  Traceback (most recent call last):
  File "/home/user/test.py", line 125, in <module>
  sys.stdout.flush()
  IOError: [Errno 4] Interrupted system call
 
 
  How should I fix this?
  Am I doing this completely wrong?

 Instead of rolling your own others have written Python code to
 implement daemons.  Try a search on Python daemon.


I found a stackoverflow question that linked to this...
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
... I'm not sure how that would help.  They don't attempt to exit
gracefully.
I don't think my problem is the daemon part, it is the handling the SIGTERM
signal where I'm having trouble.


properly catch SIGTERM

2012-07-19 Thread Eric Frederich
So I wrote a script which acts like a daemon.
And it starts with something like this

### Begin Code

import signal

STOPIT = False

def my_SIGTERM_handler(signum, frame):
global STOPIT
print '\n--- Caught SIGTERM; Attempting to quit gracefully ---'
STOPIT = True

signal.signal(signal.SIGTERM, my_SIGTERM_handler)
signal.signal(signal.SIGINT , my_SIGTERM_handler)

### End Code

My main loop looks something like this...

login()
while not STOPIT:
foo1()
foo2()
foo3()
if STOPIT:
break
bar1()
bar2()
bar3()

print 'bye'
logout()

This seems to work okay but just now I got this while hitting ctrl-c
It seems to have caught the signal at or in the middle of a call to
sys.stdout.flush()


--- Caught SIGTERM; Attempting to quit gracefully ---
Traceback (most recent call last):
  File "/home/user/test.py", line 125, in <module>
sys.stdout.flush()
IOError: [Errno 4] Interrupted system call


How should I fix this?
Am I doing this completely wrong?

Thanks,
~Eric
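
Two hedged fixes for the `IOError: [Errno 4]` (EINTR) seen above, both aimed at the Python 2.7 behavior in the thread: wrap interruptible calls in a retry loop, or ask the kernel to restart system calls across the handler with signal.siginterrupt(sig, False). (Since Python 3.5 / PEP 475 the interpreter retries EINTR automatically, so the traceback above cannot occur there.) A runnable Python 3 sketch of the flag-based handler:

```python
import os
import signal

STOPIT = False

def handle_sigterm(signum, frame):
    # Only set a flag; do the actual cleanup in the main loop.
    global STOPIT
    STOPIT = True

signal.signal(signal.SIGTERM, handle_sigterm)
signal.signal(signal.SIGINT, handle_sigterm)
# Restart slow syscalls instead of failing with EINTR -- the
# Python 2 era fix for the interrupted sys.stdout.flush().
signal.siginterrupt(signal.SIGTERM, False)

os.kill(os.getpid(), signal.SIGTERM)   # simulate `kill <pid>`
print("STOPIT =", STOPIT)
```

The loop structure in the original post is fine; the only structural change worth making is keeping the handler minimal (set a flag, return) and letting the loop notice the flag.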


Re: Get stack trace from C

2012-04-18 Thread Eric Frederich
There are several things I'd like to do with the exceptions.
Printing the 3rd party applications log via their own printf-like function.
Also, calling one of their functions that stores an error string on a stack.
In either case, I need access to the error string as a char*.

On Tue, Apr 17, 2012 at 1:15 AM, Stefan Behnel stefan...@behnel.de wrote:

 Eric Frederich, 16.04.2012 20:14:
  I embed Python in a 3rd party application.
  I need to use their conventions for errors.
 
  Looking here...
  http://docs.python.org/extending/embedding.html#pure-embedding
  the example uses PyErr_Print() but that goes to stdout or stderr or
  something.
 
  I need to put the error somewhere else.  How can I get at the traceback
  text?

 You can use the traceback module to format it, just as you would in Python.

 If you need further help, you may want to provide more information, e.g.
 what their conventions are and what kind of application we are talking
 about (graphic, headless server, ...).

 Stefan

 --
 http://mail.python.org/mailman/listinfo/python-list



Get stack trace from C

2012-04-16 Thread Eric Frederich
I embed Python in a 3rd party application.
I need to use their conventions for errors.

Looking here...
http://docs.python.org/extending/embedding.html#pure-embedding
...the example uses PyErr_Print() but that goes to stdout or stderr or
something.

I need to put the error somewhere else.  How can I get at the traceback
text?

Thanks,
~Eric
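
On the Python side this reduces to the traceback module, as suggested in the reply: a small helper the embedding C code could import and call (via PyObject_CallObject) to get the error text as a plain string, which it can then hand to the host's printf-like logger. The helper name is illustrative:

```python
import traceback

def format_current_exception():
    # Return the text PyErr_Print() would have written, as a str,
    # so the host application can route it to its own logging.
    return traceback.format_exc()

try:
    1 / 0
except ZeroDivisionError:
    report = format_current_exception()

print(report.splitlines()[-1])  # ZeroDivisionError: division by zero
```

From C, calling this instead of PyErr_Print() yields a `char*` (after PyUnicode/PyString conversion) that can go onto the application's error stack.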


Re: multiprocessing, what am I doing wrong?

2012-02-28 Thread Eric Frederich
If I do a time.sleep(0.001) right at the beginning of the run() method,
then it completes fine.
I was able to run it through a couple hundred times without problem.
If I sleep for less time than that or not at all, it may or may not
complete.
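
That the sleep "fixes" it points at the underlying race: multiprocessing.Queue hands items to a background feeder thread, so immediately after put() a worker's get_nowait() can raise Queue.Empty even though work exists — the worker then exits early and the parent blocks forever in result_queue.get(). Sending one sentinel per worker removes the polling entirely. A Python 3 sketch of the same program restructured that way (names are illustrative):

```python
import multiprocessing as mp

SENTINEL = None

def worker(inbox, outbox):
    # Block on get(); quit only on an explicit sentinel, never on a
    # racy "queue looks empty right now" test.
    while True:
        task = inbox.get()
        if task is SENTINEL:
            break
        a, b, c = task
        outbox.put((a + b) * c)

def run_demo(n_tasks=18, n_workers=2):
    todo, results = mp.Queue(), mp.Queue()
    for i in range(n_tasks):
        todo.put((i, i + 1, i + 2))
    for _ in range(n_workers):
        todo.put(SENTINEL)                 # one sentinel per worker
    procs = [mp.Process(target=worker, args=(todo, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    out = sorted(results.get() for _ in range(n_tasks))  # drain first
    for p in procs:
        p.join()                           # then join; no deadlock
    return out

if __name__ == "__main__":
    print(run_demo()[:5])  # [2, 9, 20, 35, 54]
```

Draining the result queue before join() also matters: joining a process that still has queued data buffered in its feeder thread can itself deadlock.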

On Mon, Feb 27, 2012 at 9:38 PM, MRAB pyt...@mrabarnett.plus.com wrote:

 On 27/02/2012 16:57, Eric Frederich wrote:

 Still freezing sometimes, like 1 out of 10 times that I run it.
 Here is updated code and a couple of outputs.

  [snip]
 I don't know what the problem is. All I can suggest is a slightly
 modified version.

 If a worker that says it's terminating without first saying that it's
 got nothing, then I can only assume that the worker had some uncaught
 exception.



 #!/usr/bin/env python

 import sys
 import Queue
 import multiprocessing
 import time

 def FOO(a, b, c):
    print 'foo', a, b, c
    return (a + b) * c

 class MyWorker(multiprocessing.Process):
    def __init__(self, name, inbox, outbox):
        super(MyWorker, self).__init__()
        self.name = name
        self.inbox = inbox
        self.outbox = outbox
        print >> sys.stderr, 'Created %s' % self.name; sys.stderr.flush()
    def run(self):
        print >> sys.stderr, 'Running %s' % self.name; sys.stderr.flush()
        try:
            while True:
                try:
                    args = self.inbox.get_nowait()
                    print >> sys.stderr, '%s got something to do' % self.name; sys.stderr.flush()
                except Queue.Empty:
                    print >> sys.stderr, '%s got nothing' % self.name; sys.stderr.flush()
                    break
                self.outbox.put(FOO(*args))
        finally:
            print >> sys.stderr, '%s is terminating' % self.name; sys.stderr.flush()


 if __name__ == '__main__':
# This file is being run as the main script. This part won't be
# run if the file is imported.

    print >> sys.stderr, 'Creating todo queue'; sys.stderr.flush()
    todo = multiprocessing.Queue()

    for i in xrange(100):
        todo.put((i, i + 1, i + 2))

    print >> sys.stderr, 'Creating results queue'; sys.stderr.flush()
    result_queue = multiprocessing.Queue()

    print >> sys.stderr, 'Creating Workers'; sys.stderr.flush()
    w1 = MyWorker('Worker 1', todo, result_queue)
    w2 = MyWorker('Worker 2', todo, result_queue)

    print >> sys.stderr, 'Starting Worker 1'; sys.stderr.flush()
    w1.start()
    print >> sys.stderr, 'Starting Worker 2'; sys.stderr.flush()
    w2.start()

    for i in xrange(100):
        print result_queue.get()
 --
 http://mail.python.org/mailman/listinfo/python-list



Re: multiprocessing, what am I doing wrong?

2012-02-27 Thread Eric Frederich
Still freezing sometimes, like 1 out of 10 times that I run it.
Here is updated code and a couple of outputs.

### code

#!/usr/bin/env python

import sys
import Queue
import multiprocessing
import time

def FOO(a, b, c):
    print 'foo', a, b, c
    return (a + b) * c

class MyWorker(multiprocessing.Process):
    def __init__(self, name, inbox, outbox):
        super(MyWorker, self).__init__()
        self.name = name
        self.inbox = inbox
        self.outbox = outbox
        print >> sys.stderr, 'Created %s' % self.name; sys.stderr.flush()
    def run(self):
        print >> sys.stderr, 'Running %s' % self.name; sys.stderr.flush()
        while True:
            try:
                args = self.inbox.get_nowait()
                print >> sys.stderr, '%s got something to do' % self.name; sys.stderr.flush()
            except Queue.Empty:
                break
            self.outbox.put(FOO(*args))

if __name__ == '__main__':
    # This file is being run as the main script. This part won't be
    # run if the file is imported.

    print >> sys.stderr, 'Creating todo queue'; sys.stderr.flush()
    todo = multiprocessing.Queue()

    for i in xrange(100):
        todo.put((i, i+1, i+2))

    print >> sys.stderr, 'Creating results queue'; sys.stderr.flush()
    result_queue = multiprocessing.Queue()

    print >> sys.stderr, 'Creating Workers'; sys.stderr.flush()
    w1 = MyWorker('Worker 1', todo, result_queue)
    w2 = MyWorker('Worker 2', todo, result_queue)

    print >> sys.stderr, 'Starting Worker 1'; sys.stderr.flush()
    w1.start()
    print >> sys.stderr, 'Starting Worker 2'; sys.stderr.flush()
    w2.start()

    for i in xrange(100):
        print result_queue.get()


### output 1 (I ctrl-c'd it after it froze)

Creating todo queue
Creating results queue
Creating Workers
Created Worker 1
Created Worker 2
Starting Worker 1
Running Worker 1
Starting Worker 2
Running Worker 2
Traceback (most recent call last):
  File "./multi.py", line 53, in <module>
    print result_queue.get()
  File "/home/frede00e/software/python/lib/python2.7/multiprocessing/queues.py", line 91, in get
    res = self._recv()
KeyboardInterrupt


### output 2
Creating todo queue
Creating results queue
Creating Workers
Created Worker 1
Created Worker 2
Starting Worker 1
Running Worker 1
Worker 1 got something to do
Starting Worker 2
foo 0 1 2
Worker 1 got something to do
foo 1 2 3
Worker 1 got something to do
foo 2 3 4
Worker 1 got something to do
foo 3 4 5
Worker 1 got something to do
foo 4 5 6
Worker 1 got something to do
foo 5 6 7
Worker 1 got something to do
foo 6 7 8
Worker 1 got something to do
foo 7 8 9
Worker 1 got something to do
foo 8 9 10
Worker 1 got something to do
foo 9 10 11
Worker 1 got something to do
foo 10 11 12
Worker 1 got something to do
foo 11 12 13
Worker 1 got something to do
foo 12 13 14
Worker 1 got something to do
foo 13 14 15
Worker 1 got something to do
foo 14 15 16
Worker 1 got something to do
foo 15 16 17
Worker 1 got something to do
foo 16 17 18
Worker 1 got something to do
foo 17 18 19
Running Worker 2
2
9
20
35
54
77
104
135
170
209
252
299
350
405
464
527
594
665
Traceback (most recent call last):
  File "./multi.py", line 53, in <module>
    print result_queue.get()
  File "/home/frede00e/software/python/lib/python2.7/multiprocessing/queues.py", line 91, in get
    res = self._recv()
KeyboardInterrupt




On Fri, Feb 24, 2012 at 1:36 PM, MRAB pyt...@mrabarnett.plus.com wrote:

 On 24/02/2012 17:00, Eric Frederich wrote:

 I can still get it to freeze and nothing is printed out from the other
 except block.
 Does it look like I'm doing anything wrong here?

  [snip]
 I don't normally use multiprocessing, so I forgot about a critical
 detail. :-(

 When the multiprocessing module starts a process, that process
 _imports_ the module which contains the function which is to be run, so
 what's happening is that when your script is run, it creates and starts
 workers, the multiprocessing module makes a new process for each
 worker, each of those processes then imports the script, which creates
 and starts workers, etc, leading to an ever-increasing number of
 processes.

 The solution is to ensure that the script/module distinguishes between
 being run as the main script and being imported as a module:


 #!/usr/bin/env python

 import sys
 import Queue
 import multiprocessing
 import time

 def FOO(a, b, c):
print 'foo', a, b, c
return (a + b) * c

 class MyWorker(multiprocessing.Process):
    def __init__(self, inbox, outbox):
        super(MyWorker, self).__init__()
        self.inbox = inbox
        self.outbox = outbox
        print >> sys.stderr, '1' * 80; sys.stderr.flush()
    def run(self):
        print >> sys.stderr, '2' * 80; sys.stderr.flush()
        while True:
            try:
                args = self.inbox.get_nowait()
            except Queue.Empty:
                break
            self.outbox.put(FOO(*args))

 if __name__ == '__main__':
# This file

Re: multiprocessing, what am I doing wrong?

2012-02-24 Thread Eric Frederich
I can still get it to freeze and nothing is printed out from the other
except block.
Does it look like I'm doing anything wrong here?

On Thu, Feb 23, 2012 at 3:42 PM, MRAB pyt...@mrabarnett.plus.com wrote:

 On 23/02/2012 17:59, Eric Frederich wrote:

 Below is some pretty simple code and the resulting output.
 Sometimes the code runs through but sometimes it just freezes for no
 apparent reason.
 The output pasted is where it just got frozen on me.
 It called start() on the 2nd worker but the 2nd worker never seemed to
 enter the run method.

  [snip]

 The 2nd worker did enter the run method; there are 2 lines of 2.

 Maybe there's an uncaught exception in the run method for some reason.
 Try doing something like this:


 try:
args = self.inbox.get_nowait()
 except Queue.Empty:
break
 except:
import traceback
    print "*** Exception in worker"
    print >> sys.stderr, traceback.print_exc()
    sys.stderr.flush()
    print "***"
raise
 --
 http://mail.python.org/mailman/listinfo/python-list



multiprocessing, what am I doing wrong?

2012-02-23 Thread Eric Frederich
Below is some pretty simple code and the resulting output.
Sometimes the code runs through but sometimes it just freezes for no
apparent reason.
The output pasted is where it just got frozen on me.
It called start() on the 2nd worker but the 2nd worker never seemed to
enter the run method.

### the code

#!/usr/bin/env python

import sys
import Queue
import multiprocessing
import time
todo = multiprocessing.Queue()

for i in xrange(100):
todo.put((i, i+1, i+2))

def FOO(a, b, c):
print 'foo', a, b, c
return (a + b) * c

class MyWorker(multiprocessing.Process):
def __init__(self, inbox, outbox):
super(MyWorker, self).__init__()
self.inbox = inbox
self.outbox = outbox
        print >> sys.stderr, '1' * 80; sys.stderr.flush()
    def run(self):
        print >> sys.stderr, '2' * 80; sys.stderr.flush()
        while True:
            try:
                args = self.inbox.get_nowait()
            except Queue.Empty:
                break
            self.outbox.put(FOO(*args))

print >> sys.stderr, 'a' * 80; sys.stderr.flush()
result_queue = multiprocessing.Queue()

print >> sys.stderr, 'b' * 80; sys.stderr.flush()
w1 = MyWorker(todo, result_queue)
print >> sys.stderr, 'c' * 80; sys.stderr.flush()
w2 = MyWorker(todo, result_queue)

print >> sys.stderr, 'd' * 80; sys.stderr.flush()
w1.start()
print >> sys.stderr, 'e' * 80; sys.stderr.flush()
w2.start()
print >> sys.stderr, 'f' * 80; sys.stderr.flush()

for i in xrange(100):
print result_queue.get()



### the output









foo 0 1 2
foo 1 2 3
foo 2 3 4
foo 3 4 5
foo 4 5 6


2
9
20
35
54


nested embedding of interpreter

2012-02-06 Thread Eric Frederich
Hello,

I work with a 3rd party tool that provides a C API for customization.

I created Python bindings for this C API so my customizations are nothing
more than this example wrapper code almost verbatim:
http://docs.python.org/extending/embedding.html#pure-embedding
I have many .c files just like that only differing by the module and
function that they load and execute.

This has been working fine for a long time.  Now, as we add more and more
customizations that get triggered at various events we have come to a
problem.
If some of the Python customization code gets triggered from inside another
Python customization I get a segfault.
I thought this might have something to do with the nesting which is the
equivalent of calling Py_Initialize() twice followed by Py_Finalize() twice.
I went into the C wrapper code for the inner-most customization and
commented out the Py_Initialize and Py_Finalize calls and it worked nicely.
Further testing showed that I only needed to remove the Py_Finalize call
and that calling Py_Initialize twice didn't cause a segfault.

So, now that I think I verified that this is what was causing the segfault,
I'd like to ask some questions.

1)
Is calling Py_Initialize twice correct, or will I run into other problems
down the road?

2)
Another option I have is that I can remove all Py_Initialize / Py_Finalize
calls from the individual customizations and just call Py_Initialize once
when a user first starts the program.  I am not sure if there is a
mechanism to get something called at the end of the user's session with the
program though, so is it a problem if I don't call Py_Finalize at the end?

3)
Is there a proper way to nest these things?  I imagine not since
Py_Finalize doesn't take any arguments.  If I could do...
int session = Py_Initialize()
Py_Finalize(session)
But obviously, CPython is not coded that way so it is not supported.

Thanks,
~Eric


portable multiprocessing code

2011-05-17 Thread Eric Frederich
I have written some code using Python 2.7 but I'd like these scripts
to be able to run on Red Hat 5's 2.4.3 version of Python which doesn't
have multiprocessing.
I can try to import multiprocessing and set a flag as to whether it is
available.  Then I can create a Queue.Queue instead of a
multiprocessing.Queue for the arg_queue and result_queue.
Without actually trying this yet it seems like things would work okay
except for the Worker class.  It seems I can conditionally replace
multiprocessing.Queue with Queue.Queue, but is there anything to
replace multiprocessing.Process with?

Are there any best practices for doing something like this?
Below is a dumb example that just counts lines in files.
What would be the best way to make this runnable in older (2.4.3)
versions of Python?


#!/usr/bin/env python

import sys
import os
import multiprocessing
import Queue

fnames = sys.argv[1:]

def SimpleWorker(func):
class SimpleWorker_wrapped(multiprocessing.Process):
def __init__(self, arg_queue, result_queue):
super(SimpleWorker_wrapped, self).__init__()
            self.arg_queue = arg_queue
self.result_queue = result_queue
def run(self):
while True:
try:
args = self.arg_queue.get_nowait()
except Queue.Empty:
break
self.result_queue.put(func(*args))
return SimpleWorker_wrapped

@SimpleWorker
def line_counter(fname):
lc = len(open(fname).read().splitlines())
return fname, lc

arg_queue = multiprocessing.Queue()
result_queue = multiprocessing.Queue()
for fname in fnames:
arg_queue.put((fname,))

for i in range(multiprocessing.cpu_count()):
w = line_counter(arg_queue, result_queue)
w.start()

results = {}
for fname in sorted(fnames):
while fname not in results:
n, i = result_queue.get()
results[n] = i
    print "%-40s %d" % (fname, results[fname])
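
A common pattern for this, sketched below in modern syntax (the original targets Python 2.4, where the queue module is spelled `Queue`): threading.Thread deliberately shares the Process API (`target`/`args`, `start()`, `join()`), and queue.Queue shares `put`/`get` with multiprocessing.Queue, so one conditional import can select the backend and the rest of the program stays unchanged:

```python
try:
    from multiprocessing import Process as Worker, Queue as TaskQueue
    from multiprocessing import cpu_count
except ImportError:                        # e.g. Python 2.4, no backport
    from threading import Thread as Worker
    from queue import Queue as TaskQueue   # the 'Queue' module on Python 2
    def cpu_count():
        return 1                           # conservative fallback

def _square(inbox, outbox):
    # Same worker body regardless of backend: consume until sentinel.
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(item * item)

def demo():
    inbox, outbox = TaskQueue(), TaskQueue()
    for n in (1, 2, 3):
        inbox.put(n)
    inbox.put(None)                        # sentinel: shut down
    w = Worker(target=_square, args=(inbox, outbox))
    w.start()
    results = sorted(outbox.get() for _ in range(3))
    w.join()
    return results

if __name__ == "__main__":
    print(demo())  # [1, 4, 9]
```

The thread fallback gives no CPU parallelism under the GIL, but for I/O-bound work like line counting it behaves acceptably, and the calling code is identical.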


Re: installing setuptools on Windows custom python install

2011-04-19 Thread Eric Frederich
I do not have a DLLs folder.
I created this installation of Python myself since I needed it built
with Visual Studio 2005.
I followed instructions under PC\readme.txt
This file mentioned nothing about a DLLs folder.

From PC\readme.txt .

The best installation strategy is to put the Python executable (and
DLL, for Win32 platforms) in some convenient directory such as
C:/python, and copy all library files and subdirectories (using XCOPY)
to C:/python/lib.

I see that there is a _socket project in the Visual Studio solution.
I built this manually and copied the _socket.pyx file do a manually
created DLLs folder.
This seems to have worked but it bothers me that there is no mention
of this stuff in the readme.txt file.

The readme file did mention that there is a config.c file that I am to
edit to enable other modules.
If I added a call to init_socket() in this file would the socket
library then be built into the main dll file?
I'm not sure exactly how to use this config.c file.

Thanks,
~Eric

On Mon, Apr 18, 2011 at 2:30 PM, Wolfgang Rohdewald
wolfg...@rohdewald.de wrote:
 On Montag 18 April 2011, Eric Frederich wrote:
   File "F:\My_Python27\lib\socket.py", line 47, in <module>
     import _socket
 ImportError: No module named _socket

 F:\pyside\setuptools-0.6c11

 I have C:\Python27

 and within that, DLLS\_socket.pyd

 this is what import _socket should find

 do you have that?

 --
 Wolfgang



multiple Python 2.7 Windows installations

2011-04-19 Thread Eric Frederich
Hello,

I am trying to get an installer built with distutils to recognize
multiple installations.
The installer currently finds my installation at C:\Python27
I have a custom Python27 built myself with Visual Studio sitting
somewhere else, say C:\MyPython27.

I looked at PC/bdist_wininst/install.c in the GetPythonVersions
routine and see that it is searching Software\Python\PythonCore.

So, I assume I need to get my Python installation listed in the registry.
I am unfamiliar with the Windows Registry
I tried to create another 2.7 key but regedit wouldn't let me.
So, if I can only have one 2.7 key, it would seem that the routine
GetPythonVersions will only ever get 1 version of 2.7.
Does this mean that it is unsupported to have more than one Python 2.7
installation on Windows?

Again, that GetPythonVersions routine looks pretty alien to me so
I may be wrong.

Some help please?

Thanks,
~Eric


installing setuptools on Windows custom python install

2011-04-18 Thread Eric Frederich
Hello,

I have a python installation that I built myself using Visual Studio 2005.
I need this version because I need to link Python bindings to a 3rd
party library that uses VS 2005.

I want to get setuptools installed to this Python installation but the
installer won't find my version of Python even if it is on the PATH
and PYTHONHOME is set.
So, I am trying to build setuptools but when I run python setup.py
install I get the following error.
Any ideas either on how to get the installer to find my installation
or on how to get this to compile?

F:\pyside\setuptools-0.6c11python setup.py install
running install
Traceback (most recent call last):
  File "setup.py", line 94, in <module>
    scripts = scripts,
  File "F:\My_Python27\lib\distutils\core.py", line 152, in setup
    dist.run_commands()
  File "F:\My_Python27\lib\distutils\dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "F:\My_Python27\lib\distutils\dist.py", line 972, in run_command
    cmd_obj.run()
  File "F:\pyside\setuptools-0.6c11\setuptools\command\install.py", line 76, in run
    self.do_egg_install()
  File "F:\pyside\setuptools-0.6c11\setuptools\command\install.py", line 85, in do_egg_install
    easy_install = self.distribution.get_command_class('easy_install')
  File "F:\pyside\setuptools-0.6c11\setuptools\dist.py", line 395, in get_command_class
    self.cmdclass[command] = cmdclass = ep.load()
  File "F:\pyside\setuptools-0.6c11\pkg_resources.py", line 1954, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "F:\pyside\setuptools-0.6c11\setuptools\command\easy_install.py", line 21, in <module>
    from setuptools.package_index import PackageIndex, parse_bdist_wininst
  File "F:\pyside\setuptools-0.6c11\setuptools\package_index.py", line 2, in <module>
    import sys, os.path, re, urlparse, urllib2, shutil, random, socket, cStringIO
  File "F:\My_Python27\lib\urllib2.py", line 94, in <module>
    import httplib
  File "F:\My_Python27\lib\httplib.py", line 71, in <module>
    import socket
  File "F:\My_Python27\lib\socket.py", line 47, in <module>
    import _socket
ImportError: No module named _socket

F:\pyside\setuptools-0.6c11
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue6498] Py_Main() does not return on SystemExit

2011-03-28 Thread Eric Frederich

Eric Frederich eric.freder...@gmail.com added the comment:

So there is a disconnect.

You can either change the documentation to match the behavior or you can change 
the code to match the documentation.

I would prefer to leave the documentation alone and make Py_Main return rather 
than exit on sys.exit.  That's just my 2 cents.

--
nosy: +eric.frederich

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6498
___



Re: embedding interactive python interpreter

2011-03-27 Thread Eric Frederich
This behavior contradicts the documentation, which says the value
passed to sys.exit will be returned from Py_Main.
Py_Main doesn't return anything; it just exits.
This is a bug.

On Sun, Mar 27, 2011 at 3:10 AM, Mark Hammond mhamm...@skippinet.com.au wrote:
 On 26/03/2011 4:37 AM, Eric Frederich wrote:
 exit() will wind up calling the C exit() function after
 finalizing, hence the behaviour you see.


Re: embedding interactive python interpreter

2011-03-27 Thread Eric Frederich
I'm not talking about the documentation for sys.exit()
I'm talking about the documentation for Py_Main(int argc, char **argv)

http://docs.python.org/c-api/veryhigh.html?highlight=py_main#Py_Main

This C function never returns, whether I type exit(123) or
sys.exit(123) in the interpreter.
I cannot call any of my C cleanup code because of this.

On Sun, Mar 27, 2011 at 1:55 PM, Jerry Hill malaclyp...@gmail.com wrote:
 On Sun, Mar 27, 2011 at 9:33 AM, Eric Frederich
 eric.freder...@gmail.com wrote:
 This is behavior contradicts the documentation which says the value
 passed to sys.exit will be returned from Py_Main.
 Py_Main doesn't return anything, it just exits.
 This is a bug.

 Are you sure that calling the builtin exit() function is the same as
 calling sys.exit()?

 You keep talking about the documentation for sys.exit(), but that's
 not the function you're calling.  I played around in the interactive
 interpreter a bit, and the two functions do seem to behave a bit
 differently from each other.  I can't seem to find any detailed
 documentation for the builtin exit() function though, so I'm not sure
 exactly what the differences are.

 A little more digging reveals that the builtin exit() function is
 getting set up by site.py, and it does more than sys.exit() does.
 Particularly, in 3.1 it tries to close stdin then raises SystemExit().
  Does that maybe explain the behavior you're seeing?  I didn't go
 digging in 2.7, which appears to be what you're using, but I think you
 need to explore the differences between sys.exit() and the builtin
 exit() functions.

 --
 Jerry
 --
 http://mail.python.org/mailman/listinfo/python-list

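The distinction being discussed can be seen from pure Python: sys.exit() merely raises SystemExit, which any caller (including an embedding layer) can intercept to recover the exit code. A minimal sketch — the helper name here is ours, not from the thread:

```python
import sys

def run_and_capture_exit_code(func):
    """Call func; if it raises SystemExit, return the exit code
    instead of letting the process die (roughly what an embedding
    application would like Py_Main to do)."""
    try:
        func()
    except SystemExit as exc:
        return exc.code
    return None

# sys.exit() just raises SystemExit, so the caller can intercept it:
code = run_and_capture_exit_code(lambda: sys.exit(123))
print(code)  # 123
```

The site-installed exit() does extra work (closing stdin in some versions) before raising SystemExit, which is why the two can behave differently under Py_Main.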


Re: embedding interactive python interpreter

2011-03-27 Thread Eric Frederich
I'm not sure how to run this function in such a way that
it gives me an interactive session.
I passed stdin as the first parameter and NULL as the second, and
I'd get seg faults when running exit() or even import sys.

I don't want to pass a file.  I want to run some C code, start an
interactive session, then run some more C code once the session is
over, but I cannot find a way to start an interactive Python session
within C that won't exit prematurely, before I have a chance to run my
cleanup code in C.

On Sun, Mar 27, 2011 at 5:59 PM, eryksun () eryk...@gmail.com wrote:
 On Friday, March 25, 2011 12:02:16 PM UTC-4, Eric Frederich wrote:

 Is there something else I should call besides exit() from within the
 interpreter?
 Is there something other than Py_Main that I should be calling?

 Does PyRun_InteractiveLoop also have this problem?


embedding interactive python interpreter

2011-03-25 Thread Eric Frederich
I am able to embed the interactive Python interpreter in my C program
except that when the interpreter exits, my entire program exits.

    #include <stdio.h>
    #include <Python.h>

    int main(int argc, char *argv[]){
        printf("line %d\n", __LINE__);
        Py_Initialize();
        printf("line %d\n", __LINE__);
        Py_Main(argc, argv);
        printf("line %d\n", __LINE__);
        Py_Finalize();
        printf("line %d\n", __LINE__);
        return 0;
    }

When I run the resulting binary I get the following

$ ./embedded_python
line 5
line 7
Python 2.7.1 (r271:86832, Mar 25 2011, 11:56:07)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> print 'hi'
hi
>>> exit()


I never see line 9 or 11 printed.
I need to embed python in an application that needs to do some cleanup
at the end so I need that code to execute.
What am I doing wrong?

Is there something else I should call besides exit() from within the
interpreter?
Is there something other than Py_Main that I should be calling?

Thanks,
~Eric
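For what it's worth, the behaviour being asked for — run a REPL, then get control back — exists at the Python level in the code module, whose interact() returns to its caller on EOF instead of exiting the process (PyRun_InteractiveLoop is the C-level cousin). A small sketch, with a fake read function simulating an immediate Ctrl-D:

```python
import code

def fake_input(prompt=""):
    # Simulate the user pressing Ctrl-D immediately.
    raise EOFError

# interact() returns to the caller on EOF instead of exiting the
# process, so code after it still runs -- the behaviour the embedding
# program above wants from Py_Main.
code.interact(banner="", readfunc=fake_input, exitmsg="")
finished = True
print("control returned after the REPL")
```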


Re: embedding interactive python interpreter

2011-03-25 Thread Eric Frederich
So I found that if I type ctrl-d then the other lines will print.

It must be a bug then that the exit() function doesn't do the same thing.
The documentation says "The return value will be the integer passed to
the sys.exit() function" but clearly nothing is returned, since the
call to Py_Main exits rather than returning (even when calling
sys.exit instead of just exit).

In the meantime, is there a way to redefine the exit function in
Python to do the same behavior as ctrl-d?
I realize that doing that (if it's even possible) still won't
provide a way to pass a value back from the interpreter via sys.exit.

Thanks,
~Eric



On Fri, Mar 25, 2011 at 12:02 PM, Eric Frederich
eric.freder...@gmail.com wrote:
 I am able to embed the interactive Python interpreter in my C program
 except that when the interpreter exits, my entire program exits.

     #include <stdio.h>
     #include <Python.h>

     int main(int argc, char *argv[]){
         printf("line %d\n", __LINE__);
         Py_Initialize();
         printf("line %d\n", __LINE__);
         Py_Main(argc, argv);
         printf("line %d\n", __LINE__);
         Py_Finalize();
         printf("line %d\n", __LINE__);
         return 0;
     }

 When I run the resulting binary I get the following

 $ ./embedded_python
 line 5
 line 7
 Python 2.7.1 (r271:86832, Mar 25 2011, 11:56:07)
 [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> print 'hi'
 hi
 >>> exit()


 I never see line 9 or 11 printed.
 I need to embed python in an application that needs to do some cleanup
 at the end so I need that code to execute.
 What am I doing wrong?

 Is there something else I should call besides exit() from within the
 interpreter?
 Is there something other than Py_Main that I should be calling?

 Thanks,
 ~Eric



Re: embedding interactive python interpreter

2011-03-25 Thread Eric Frederich
Added a fflush(stdout) after each printf and, as I expected... still
only the first 2 prints.


On Fri, Mar 25, 2011 at 1:47 PM, MRAB pyt...@mrabarnett.plus.com wrote:
 On 25/03/2011 17:37, Eric Frederich wrote:

 So I found that if I type ctrl-d then the other lines will print.

 It must be a bug then that the exit() function doesn't do the same thing.
 The documentation says The return value will be the integer passed to
 the sys.exit() function but clearly nothing is returned since the
 call to Py_Main exits rather than returning (even when calling
 sys.exit instead of just exit).

 In the mean time is there a way to redefine the exit function in
 Python to do the same behavior as ctrl-d?
 I realize that in doing that (if its even possible) still won't
 provide a way to pass a value back from the interpreter via sys.exit.

 You could flush stdout after each print or turn off buffering on stdout
 with:

    setvbuf(stdout, NULL, _IONBF, 0);

 Thanks,
 ~Eric



 On Fri, Mar 25, 2011 at 12:02 PM, Eric Frederich
 eric.freder...@gmail.com  wrote:

 I am able to embed the interactive Python interpreter in my C program
 except that when the interpreter exits, my entire program exits.

    #include <stdio.h>
    #include <Python.h>

    int main(int argc, char *argv[]){
        printf("line %d\n", __LINE__);
        Py_Initialize();
        printf("line %d\n", __LINE__);
        Py_Main(argc, argv);
        printf("line %d\n", __LINE__);
        Py_Finalize();
        printf("line %d\n", __LINE__);
        return 0;
    }

 When I run the resulting binary I get the following

 $ ./embedded_python
 line 5
 line 7
 Python 2.7.1 (r271:86832, Mar 25 2011, 11:56:07)
 [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> print 'hi'
 hi
 >>> exit()


 I never see line 9 or 11 printed.
 I need to embed python in an application that needs to do some cleanup
 at the end so I need that code to execute.
 What am I doing wrong?

 Is there something else I should call besides exit() from within the
 interpreter?
 Is there something other than Py_Main that I should be calling?

 Thanks,
 ~Eric


 --
 http://mail.python.org/mailman/listinfo/python-list



Creating custom Python objects from C code

2011-01-05 Thread Eric Frederich
I have read through all the documentation here:

http://docs.python.org/extending/newtypes.html

I have not seen any documentation anywhere else explaining how to
create custom defined objects from C.
I have this need to create custom objects from C and pass them as
arguments to a function call.

Question 1: how am I to create those objects from C code?

The other thing I would like to know is how I can create helper
functions in my extension so they can be created and manipulated
easily.
I am thinking along the lines of the built-in helper functions
PyList_New and PyList_Append.
Once I have an answer to question 1, the problem won't be creating the
helper functions, but making them available from something built with
distutils.
To use the builtin python functions from C I need to link against
python27.lib but when I create my own package using distutils it
creates dll or pyd files.

Question 2: How do I make C helper functions that are part of my
extension available to other C projects in the same way that PyList_*,
PyString_*, PyInt_* functions are available?
Is it possible to have distutils make a .lib file for me?

Thanks,
~Eric


Re: Creating custom Python objects from C code

2011-01-05 Thread Eric Frederich
On Wed, Jan 5, 2011 at 11:39 AM, Antoine Pitrou solip...@pitrou.net wrote:
 On Wed, 5 Jan 2011 11:27:02 -0500
 Eric Frederich eric.freder...@gmail.com wrote:
 I have read through all the documentation here:

     http://docs.python.org/extending/newtypes.html

 I have not seen any documentation anywhere else explaining how to
 create custom defined objects from C.
 I have this need to create custom objects from C and pass them as
 arguments to a function call.

 What do you mean? Create instances of a type defined in Python code?

 The C API is not very different from Python-land. When you want to
 instantiate a type, just call that type (as a PyObject pointer) with the
 right arguments (using PyObject_Call() and friends). Whether that type
 has been defined in C or in Python does not make a difference.

No, the custom types are defined in C.
I need to create the objects in C.
I need to pass those custom C objects created in C to a python
function via PyObject_CallObject(pFunc, pArgs).
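What Antoine describes can be mirrored in pure Python: a type object — whether defined in C or in Python — is just a callable, and calling it with an argument tuple is exactly what PyObject_Call() does at the C level. A sketch with a hypothetical Point type standing in for the C-defined one:

```python
# Hypothetical stand-in for a type defined in a C extension; at the C
# level the same instantiation is PyObject_Call(type_object, args, NULL).
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

point_type = Point      # a type is itself a callable object
args = (3, 4)           # the argument tuple Py_BuildValue would package
p = point_type(*args)   # same effect as PyObject_Call(point_type, args, NULL)
print(p.x, p.y)
```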


Re: Creating custom types from C code

2010-12-20 Thread Eric Frederich
Thanks for the reply.

I remember reading about named tuples when they were back-ported to
the 2.X series but I never played with them.
Is there a way to instantiate a named tuple from C code?

Maybe I'm over-thinking this whole thing.
Is there a simple way that I can define a class in Python and
instantiate that type from C?
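The named-tuple route Stefan mentions looks like this in pure Python (field names taken from the earlier struct snippet, values made up); from C it is the same two steps — import the factory, then call the resulting type object:

```python
from collections import namedtuple

# Field names mirror the five unsigned ints in the wrapped structure;
# the values below are made up for illustration.
SomeStruct = namedtuple("SomeStruct", "var1 var2 var3 var4 var5")

s = SomeStruct(1, 2, 3, 4, 5)
print(s.var1, s.var5)   # dot access, as wanted
print(tuple(s))         # still unpacks like an ordinary tuple
```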

On Sat, Dec 18, 2010 at 1:18 AM, Stefan Behnel stefan...@behnel.de wrote:
 Eric Frederich, 17.12.2010 23:58:

 I have an extension module for a 3rd party library in which I am
 wrapping some structures.
 My initial attempt worked okay on Windows but failed on Linux.
 I was doing it in two parts.
 The first part on the C side of things I was turning the entire
 structure into a char array.
 The second part in Python code I would unpack the structure.

 Anyway, I decided I should be doing things a little cleaner so I read
 up on Defining New Types
 http://docs.python.org/extending/newtypes.html

 I got it to work but I'm not sure how to create these new objects from C.

 You may want to take a look at Cython. It makes writing C extensions easy.
 For one, it will do all sorts of type conversions for you, and do them
 efficiently and safely (you get an exception on overflow, for example). It's
 basically Python, so creating classes and instantiating them is trivial.

 Also note that it's generally not too much work to rewrite an existing C
 wrapper in Cython, but it's almost always worth it. You immediately get more
 maintainable code that's much easier to extend and work on. It's also often
 faster than hand written code.

 http://cython.org


 My setup is almost exactly like the example on that page except
 instead of 2 strings and an integer I have 5 unsigned ints.

 I do not expect to ever be creating these objects in Python.  They
 will only be created as return values from my wrapper functions to the
 3rd party library.

 In Cython 0.14, you can declare classes as final and internal using a
 decorator, meaning that they cannot be subtyped from Python and do not show
 up in the module dict. However, note that there is no way to prevent users
 from getting their hands at the type once you give them an instance.


 I could return a tuple from those functions but I want to use dot
 notation (e.g. somestruct.var1).

 Then __getattr__ or properties are your friend.


 So, question number 1:
     Is defining my own type like that overkill just to have an object
 to access using dots?

 Creating wrapper objects is totally normal.

 Also note that recent Python versions have named tuples, BTW.


     I'll never create those objects from Python.
     Is there a shortcut to creating objects and setting attributes
 from within C?

 The Cython code for instantiating classes is identical to Python.


 In any case, I was able to create my own custom object from C code like
 so...

     PyObject *foo(SomeCStruct bar){
         PyObject *ret;
         ret = _PyObject_New(mymodule_SomeStructType);
         PyObject_SetAttrString(ret, "var1", Py_BuildValue("I", bar.var1));
         PyObject_SetAttrString(ret, "var2", Py_BuildValue("I", bar.var2));
         PyObject_SetAttrString(ret, "var3", Py_BuildValue("I", bar.var3));
         PyObject_SetAttrString(ret, "var4", Py_BuildValue("I", bar.var4));
         PyObject_SetAttrString(ret, "var5", Py_BuildValue("I", bar.var5));
         return ret;
     }

 When using _PyObject_New I notice that neither my new or init function
 are ever called.
 I verified that they are getting called when creating the object from
 Python

 Things often work a little different in Python and C. Directly calling
 _PyObject_New() is a lot less than what Python does internally. The
 canonical way is to PyObject_Call() the type (or to use one of the other
 call functions, depending on what your arguments are).


 (which I would never do anyway).

 Your users could do it, though, so you should make sure that won't crash the
 interpreter that way by leaving internal data fields uninitialised.


 Question number 2:
     Do I need to be calling PyObject_SetAttrString or is there a way
 to set the unsigned ints on the structure directly?
     It seems overkill to create a Python object for an unsigned int
 just to set it as an attribute on a custom defined type.

 You will have to do it at some point, though, either at instantiation time
 or at Python access time. Depending on the expected usage, either of the two
 can be more wasteful.


 Question number 3:
     In the above code, is there a memory leak?  Should I be
 Py_DECREF'ing the return value from Py_BuildValue after I'm done using
 it.

 You can look that up in the C-API docs. If a function doesn't say that it
 steals a reference, you still own the reference when it returns and have
 to manually decref it (again, a thing that you won't usually have to care
 about in Cython). So, yes, the above leaks one reference for each call to
 Py_BuildValue().

 Stefan

 --
 http://mail.python.org/mailman/listinfo/python-list


Creating custom types from C code

2010-12-17 Thread Eric Frederich
Hello,

I have an extension module for a 3rd party library in which I am
wrapping some structures.
My initial attempt worked okay on Windows but failed on Linux.
I was doing it in two parts.
The first part on the C side of things I was turning the entire
structure into a char array.
The second part in Python code I would unpack the structure.

Anyway, I decided I should be doing things a little cleaner so I read
up on Defining New Types
http://docs.python.org/extending/newtypes.html

I got it to work but I'm not sure how to create these new objects from C.

My setup is almost exactly like the example on that page except
instead of 2 strings and an integer I have 5 unsigned ints.

I do not expect to ever be creating these objects in Python.  They
will only be created as return values from my wrapper functions to the
3rd party library.
I could return a tuple from those functions but I want to use dot
notation (e.g. somestruct.var1).

So, question number 1:
Is defining my own type like that overkill just to have an object
to access using dots?
I'll never create those objects from Python.
Is there a shortcut to creating objects and setting attributes
from within C?

In any case, I was able to create my own custom object from C code like so...

PyObject *foo(SomeCStruct bar){
    PyObject *ret;
    ret = _PyObject_New(mymodule_SomeStructType);
    PyObject_SetAttrString(ret, "var1", Py_BuildValue("I", bar.var1));
    PyObject_SetAttrString(ret, "var2", Py_BuildValue("I", bar.var2));
    PyObject_SetAttrString(ret, "var3", Py_BuildValue("I", bar.var3));
    PyObject_SetAttrString(ret, "var4", Py_BuildValue("I", bar.var4));
    PyObject_SetAttrString(ret, "var5", Py_BuildValue("I", bar.var5));
    return ret;
}

When using _PyObject_New I notice that neither my new or init function
are ever called.
I verified that they are getting called when creating the object from
Python (which I would never do anyway).

Question number 2:
Do I need to be calling PyObject_SetAttrString or is there a way
to set the unsigned ints on the structure directly?
It seems overkill to create a Python object for an unsigned int
just to set it as an attribute on a custom defined type.

Question number 3:
In the above code, is there a memory leak?  Should I be
Py_DECREF'ing the return value from Py_BuildValue after I'm done using
it?


Reference counting problems?

2010-12-09 Thread Eric Frederich
I am attempting to automate the building of bindings for a 3rd party library.
The functions I'm wrapping all return an integer indicating whether they
failed, and outputs are passed back through pointers.
There can be multiple return values.
So the code that I generate has a PyObject* called python__return_val
that I use for returning.
In the 'wrapped_foo' function below you see I build the return value
with Py_BuildValue and OO as the format.
For every output I have I do a C to Python conversion and add another
'O' to the format.
What I'm wondering is if I can use the same logic when wrapping
functions that only return one value like the wrapped_bar function
below.

So for multiples, I wind up doing this which is fine.

python__x = Py_BuildValue("s", x);
python__y = Py_BuildValue("s", y);
python__return_val = Py_BuildValue("OO", python__x, python__y);

But for single returns I do something like this
I realize that the 2 lines below are pointless, but are they causing a
memory leak or problems with reference counting?

python__x = Py_BuildValue("s", x);
python__return_val = Py_BuildValue("O", python__x);


Are python__x and python__return_val the same object, or a copy of the object?
Would python__x ever get garbage collected?
Should my code generator detect when there is only one output and not
go through the extra step?

Thanks,
~Eric



static PyObject *
wrapped_foo(PyObject *self, PyObject *args)
{
    int wrapp_fail;
    // C types
    int          x;
    const char*  some_str;
    int          y;
    char*        abc;

    // Python types
    PyObject*    python__return_val;
    PyObject*    python__y;
    PyObject*    python__abc;

    // Get Python args
    if (!PyArg_ParseTuple(args, "is", &x, &some_str))
        return NULL;

    // Wrapped call
    wrapp_fail = foo(x, some_str, &y, &abc);
    if(wrapp_fail != 0){
        return NULL;
    }

    // Convert output to Python types
    python__y = Py_BuildValue("i", y);
    python__abc = Py_BuildValue("s", abc);

    // Build Python return value
    python__return_val = Py_BuildValue("OO", python__y, python__abc);

    // Memory frees
    MEM_free(abc);

    return python__return_val;
}


static PyObject *
wrapped_bar(PyObject *self, PyObject *args)
{
    int wrapp_fail;
    // C types
    int          a;
    const char*  b;
    char*        c;

    // Python types
    PyObject*    python__return_val;
    PyObject*    python__c;

    // Get Python args
    if (!PyArg_ParseTuple(args, "is", &a, &b))
        return NULL;

    // Wrapped call
    wrapp_fail = bar(a, b, &c);
    if(wrapp_fail != 0){
        return NULL;
    }

    // Convert output to Python types
    python__c = Py_BuildValue("s", c);

    // Build Python return value
    python__return_val = Py_BuildValue("O", python__c);

    // Memory frees
    MEM_free(c);

    return python__return_val;
}


Re: multiple modules from single c extension

2010-12-02 Thread Eric Frederich
Can you explain how to do this with distutils then?
Would I need a separate setup.py for SpamABC and SpamXYZ?
How would I get them included in the parent module Spam?

Could you explain what you mean when you say "The Python import
mechanism will be looking for an appropriately-named .pyd file for
each module"?

Are you saying that in python when I say "from Spam.ABC import *" I
need a file called Spam.ABC.[so|pyd]?

On Wed, Dec 1, 2010 at 8:39 PM, Robert Kern robert.k...@gmail.com wrote:
 On 12/1/10 4:12 PM, Eric Frederich wrote:

 I have an extension to some C library that I created using the guide
 found here...

     http://docs.python.org/extending/extending.html

 I am starting to have A LOT of functions being wrapped.

 The library that I'm creating bindings for is organized into modules.
 In fact, all of their function calls start with a prefix like
 ABC_do_something, XYZ_something_else.

 I'd like to start putting the bindings for each module into a separate
 C file and have each set of bindings end up in its own Python module
 as well.

 Is this possible to do using a single .dll / .pyd file so that I can
 use a single Visual Studio project for these bindings?

 No, I don't think so. The Python import mechanism will be looking for an
 appropriately-named .pyd file for each module. In any case, you shouldn't be
 using Visual Studio directly to build the .pyd. Instead, use distutils.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it
 had
  an underlying truth.
  -- Umberto Eco

 --
 http://mail.python.org/mailman/listinfo/python-list



multiple modules from single c extension

2010-12-01 Thread Eric Frederich
I have an extension to some C library that I created using the guide
found here...

http://docs.python.org/extending/extending.html

I am starting to have A LOT of functions being wrapped.

The library that I'm creating bindings for is organized into modules.
In fact, all of their function calls start with a prefix like
ABC_do_something, XYZ_something_else.

I'd like to start putting the bindings for each module into a separate
C file and have each set of bindings end up in its own Python module
as well.

Is this possible to do using a single .dll / .pyd file so that I can
use a single Visual Studio project for these bindings?

Is it just a matter of initializing more than one module in the
initspam function?

PyMODINIT_FUNC
initspam(void)
{
    (void) Py_InitModule("spam", SpamMethods);
    // can I do this?
    (void) Py_InitModule("spam.ABC", SpamABCMethods);
}

Would this even work?
How would I get a hold of SpamABCMethods now that it's in a different file?
I tried creating a .h file for the ABC functions which had the
prototypes defined as well as the static PyMethodDef SpamABCMethods[]
array.
This way I could include SpamABC.h which had the array.
When I try doing that though, I get link errors saying things like
functions are declared but not defined.
The example online has everything in a single C file.
If I split things up into a .h and a .c file... what should be
declared as static? Should anything be declared as extern?
This is where I'm stuck.

Thanks,
~Eric
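Whatever the packaging, what the import machinery needs is simply that a module named spam and one named spam.ABC can be found (sys.modules is checked first). That requirement can be sketched in pure Python with types.ModuleType — roughly the registrations a single init function would have to make for "import spam.ABC" to succeed without a separate ABC binary on disk (names hypothetical):

```python
import sys
import types

# Build a parent package and a submodule by hand.
spam = types.ModuleType("spam")
spam.__path__ = []                    # mark it as a package
abc_mod = types.ModuleType("spam.ABC")
abc_mod.do_something = lambda: "ABC_do_something called"

# Register both dotted names, and expose the submodule as an attribute.
sys.modules["spam"] = spam
sys.modules["spam.ABC"] = abc_mod
spam.ABC = abc_mod

import spam.ABC                       # satisfied straight from sys.modules
print(spam.ABC.do_something())
```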


C struct to Python

2010-11-30 Thread Eric Frederich
I am not sure how to proceed.
I am writing a Python interface to a C library.
The C library uses structures.
I was looking at the struct module but struct.unpack only seems to
deal with data that was packed using struct.pack or some other buffer.
All I have is the struct itself, a pointer in C.
Is there a way to unpack directly from a memory address?

Right now on the C side of things I can create a buffer of the struct
data like so...

MyStruct ms;
unsigned char buffer[sizeof(MyStruct) + 1];
memcpy(buffer, &ms, sizeof(MyStruct));
return Py_BuildValue("s#", buffer, sizeof(MyStruct));

Then on the Python side I can unpack it using struct.unpack.

I'm just wondering if I need to jump through these hoops of packing it
on the C side or if I can do it directly from Python.

Thanks,
~Eric
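Both routes can be sketched in pure Python — struct.unpack over the copied buffer, and a zero-copy read straight from a memory address via ctypes. The two-field layout below is a hypothetical stand-in for MyStruct:

```python
import ctypes
import struct

# Hypothetical layout standing in for MyStruct on the C side.
class MyStruct(ctypes.Structure):
    _fields_ = [("a", ctypes.c_int), ("b", ctypes.c_double)]

ms = MyStruct(a=7, b=2.5)

# Route 1: the memcpy-a-buffer approach from the C snippet above --
# bytes out on the C side, struct.unpack on the Python side.
buf = bytes(ms)                  # the same bytes memcpy would copy
a, b = struct.unpack("id", buf)  # native layout matches ctypes' layout
print(a, b)

# Route 2: no copying at all -- map the struct type over the address.
addr = ctypes.addressof(ms)
view = MyStruct.from_address(addr)
print(view.a, view.b)
```

The from_address route only stays valid while the underlying C memory does, so it is best suited to reading inside a call, not for keeping references around.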


Re: Extension on Windows

2010-11-19 Thread Eric Frederich
On Fri, Nov 19, 2010 at 7:28 AM, Ulrich Eckhardt
ulrich.eckha...@dominolaser.com wrote:

 Now when I created a 2nd function to wrap a library function I get the
 following.

 ImportError: DLL load failed: The specified module could not be found.

 This can mean that the module itself couldn't be loaded or that one of the
 DLLs it depends on couldn't be found. Use dependencywalker to check.

What do I do when I find these dependencies?
Do I put them in some environment variable?
Do I put them in site-packages along with the .pyd file, or in some
other directory?


Re: Extension on Windows

2010-11-19 Thread Eric Frederich
On Fri, Nov 19, 2010 at 8:12 AM, Ulrich Eckhardt
ulrich.eckha...@dominolaser.com wrote:
 Eric Frederich wrote:
 Do I put them [DLL dependencies] in some environment variable?
 Do I put them in site-packages along with the .pyd file, or in some
 other directory?

 Take a look at the LoadLibrary() docs:
  http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx

 These further lead on to:
  http://msdn.microsoft.com/en-us/library/ms682586(v=VS.85).aspx
 which details in what order different places are searched for DLLs.

 If you put the DLL in the same directory as your PYD, it should work. This
 is not the most elegant solution though, see above for more info.


Well... I used the tool and found missing dependencies and just copied
them into site-packages.
There were two.  Then when I put those two in there and ran
dependencywalker again, I had 6 more missing.
I found all of these dependencies in a bin directory of the program
which I'm trying to create bindings for.
The one I couldn't find was MSVCR80.DLL, but my python module imported
fine so maybe it's not a big deal (actually, if I throw one of their
dll files into dependency walker it shows the same thing).
This was just with me wrapping one (very basic) routine.  I would
imagine as I wrap more and more, I'd need more and more dll files.

I think rather than copying .dll files around, I'll just put my .pyd
file in their 'bin' directory and set PYTHONPATH environment variable.

Things are starting to look promising.

I now have to deal with other issues (coming up in a new python-list thread).

Thanks a bunch, Ulrich.


Round Trip: C to Python to C Module

2010-11-19 Thread Eric Frederich
I have a proprietary software PropSoft that I need to extend.
They support extensions written in C that can link against PropLib to
interact with the system.

I have a Python C module that wraps a couple PropLib functions that I
call PyProp.
From an interactive Python shell I can import PyProp and call a function.
None of these functions really do anything outside the context of
being logged into the PropSoft software; so all the functions fail
when running from Python alone.

To my amazement, I was able to run
PyRun_SimpleString("import PyProp\nPyProp.some_function()") without
setting PYTHONPATH or anything.  How this works, I don't know and I
don't really care (at the moment anyway).

The problem I'm having now is how do I return things from my Python
script back to C?
Ultimately I won't be hard coding python inside of PyRun_SimpleString
but loading the script from a file.
So, how do I return values back to C?  Python functions return values
but running a python script?... doesn't that just have an exit status?
Is there a mechanism for doing this?

Thanks in advance,
~Eric
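One common convention, sketched from the Python side: execute the script into a dictionary and read named results back out. The C mirror of this is PyRun_String() with a globals dict, followed by PyDict_GetItemString() on the names you care about. The script text and variable names here are hypothetical:

```python
# Hypothetical extension script; in the real setup this text would be
# loaded from a file before being handed to the interpreter.
script = """
status = 0
message = "cleanup ok"
"""

namespace = {}
exec(compile(script, "<ext-script>", "exec"), namespace)

# C-side equivalent: PyRun_String(src, Py_file_input, globals, globals)
# then PyDict_GetItemString(globals, "status") etc.
print(namespace["status"], namespace["message"])
```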


Extension on Windows

2010-11-18 Thread Eric Frederich
Hello,

I am trying to create an extension on Windows and I may be in over my
head, but I have made it pretty far.

I am trying to create bindings for some libraries which require me to
use Visual Studio 2005.

I set up the spammodule example and in VS set the output file to be a .pyd file.
When I copy that .pyd file into site-packages I can use it just fine
and call system('dir').

Now when I created a 2nd function to wrap a library function I get the
following.

ImportError: DLL load failed: The specified module could not be found.

I added the correct additional include directories and specified the
correct .lib file to get it to compile fine without any errors.

What is going on here?  I tried running python with -vvv and got no
meaningful info... it just fails.

What else do I need to do?

Thanks,
~Eric