Re: [Python-Dev] Adding support to curses library

2009-02-25 Thread Ulrich Berning

Heracles wrote:


Hello,

I am working on a patch to add to the _cursesmodule.c file of the Python
core libraries.  I figured I would take on one of the implemented functions
to try to get my feet wet contributing to the project.  At any rate, I have
the following function defined in the 2.7.a version updated from SVN this
morning:

- Snippet ---
// Insert new method color_set Steve Owens 2/24/2009
//   The curses library color_set function has the following signature:
//   int color_set(short color_pair_number, void* opts); 
static PyObject *

PyCurses_color_set(PyObject *self, PyObject *args)
{
  short color_pair_number;
  void * opts;
  int erg;

  // These macros ought to be documented in the API docs
  // but they aren't yet.
  PyCursesInitialised
  PyCursesInitialisedColor

  // Per ncurses Man Page: 
  //   The routine color_set sets the current color of the given window to

  // the foreground/background combination described by the
color_pair_number. 
  // The parameter opts is reserved for future use, applications must
supply a 
  // null pointer. 
  switch(PyTuple_Size(args))

  {
  case 1:
   // Don't make them pass a useless null pointer.
   if (!PyArg_ParseTuple(args, "h", &color_pair_number)) return NULL;
   break;
  case 2:
   // Allow them to pass the opts pointer so that this method still
   // works when ncurses is later updated.
   if (!PyArg_ParseTuple(args, "hO&", &color_pair_number, &opts)) return NULL;
   break;
  default:
   PyErr_SetString(PyExc_TypeError,
                   "color_set requires 1 or 2 arguments (color_pair_number[, opts])");
   return NULL;
  }

  erg = color_set(color_pair_number, opts); // Debating on forcing NULL here.
  
  if (erg == ERR)
      return PyCursesCheckERR(erg, "color_set");
  else
      return PyInt_FromLong(1L);
}

-End  Snippet ---

I also have the following added in (see last line of the snippet):

- Snippet ---
static PyMethodDef PyCurses_methods[] = {
 {"baudrate",(PyCFunction)PyCurses_baudrate, METH_NOARGS},
 {"beep",(PyCFunction)PyCurses_beep, METH_NOARGS},
 {"can_change_color",(PyCFunction)PyCurses_can_change_color, METH_NOARGS},
 {"cbreak",  (PyCFunction)PyCurses_cbreak, METH_VARARGS},
 {"color_content",   (PyCFunction)PyCurses_Color_Content, METH_VARARGS},
 {"color_pair",  (PyCFunction)PyCurses_color_pair, METH_VARARGS},
 {"color_set",   (PyCFunction)PyCurses_color_set, METH_VARARGS},
-End  Snippet ---

The code compiles and installs fine, but when I run the following unit test,
I get a segmentation fault:

- Snippet ---
import unittest, curses
from test import test_support

def testCursesColorSet(stdscrn):
  curses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE)
  curses.init_pair(2, curses.COLOR_WHITE, curses.COLOR_BLUE)
  i = curses.color_set(1, None)
  stdscrn.addstr("RED/WHITE ({0})\n".format(i))
  i = curses.color_set(2, None)
  stdscrn.addstr("WHITE/BLUE ({0})\n".format(i))
  i = curses.color_set(0, None)
  stdscrn.addstr("Default ({0})\n".format(i))


def test_main(stdscrn):
  curses.savetty()
  if curses.has_colors():
    testCursesColorSet(stdscrn)
  else:
    stdscrn.addstr("Test Aborted: Color not supported on this terminal.")


if __name__ == '__main__':
   curses.wrapper(test_main)
-End  Snippet ---

It turns out that commenting out the following line in the _cursesmodule.c
code allows the unit test to run, which then reports the error as expected:


- Snippet ---
//erg = color_set(color_pair_number, opts); // Debating on forcing NULL here.
-End  Snippet ---

At any rate, I am stuck.  I am still trying to build a plain C file that will
test the color_set function outside of Python, but that is another task.


Any suggestions?


 

As long as Python is written in C, please don't use C++ comments; some C 
compilers don't like them.


Ulli
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Obtaining short file path

2008-05-30 Thread Ulrich Berning

Hartwell Bryan wrote:


Hi,

Purpose: obtaining the system (“short”) path from a full path

Background: File dialogs (Visual Studio) return a full path (e.g. 
f=“C:\this path has spaces\thisfilenameislongerthan8char.txt”). If 
this value is provided to Python, it will not recognize this as a 
file. In fact os.path.isfile(f) doesn’t return False; it crashes. 
Likewise, when calling executables (from Python) with files as 
arguments, a short path is required. VB FileSystemObject has the 
ShortPath method, while the os.path and path (www.jorendorff.com) modules 
do not (at least as far as my googling could determine). Why bother 
creating a COM interface when you’re just going to pass, as shell 
run-time arguments, all the values the server is better at computing?


System: Python 2.3; Windows XP

Sample Code:

import win32com.client
import time
import os, sys
import os.path

#-
def shortpath(x):
    z = ''
    for y in x.split('\\'):
        if len(y.split('.')[0]) > 8:
            if '.' in y:
                z = z + '\\' + y.split('.')[0][:6].upper() + '~1' + '.' + y.split('.')[1]
            else:
                z = z + '\\' + y[:6].upper() + '~1'
        else:
            z = z + '\\' + y
    return z[1:]
#-

xlApp = win32com.client.Dispatch("Excel.Application")
xlBook = xlApp.ActiveWorkbook
savFile = str(sys.argv[1])
rawFile = str(xlBook.Sheets("Timestamp").TextBox2)
#print os.path.isfile(savFile)
r = shortpath(rawFile)
print r
try:
    print os.path.isfile(r)
except:
    print 'something rude'
time.sleep(7)

Notes: This code does not account for peer paths or files that share 
the first 8 characters (and file extension). I’m also aware that this 
is not the normal means for submitting a “patch”, but in my job 
function I don’t see myself regularly participating in python 
development (and I’m probably not savvy enough) so the effort wasn’t 
worth it. However I still thought others might benefit from what seems 
to be (to me) a fundamental path function. Do with it, or ignore it, 
as you please.


Cheers,

Bryan Hartwell









Why not win32api.GetShortPathName() and win32api.GetLongPathName()?

>>> import os, win32api
>>> path = "C:\\this path has spaces\\thisfilehasmorethan8char.txt"
>>> short_path = win32api.GetShortPathName(path)
>>> short_path
'C:\\THISPA~1\\THISFI~1.TXT'
>>> os.path.isfile(short_path)
True
>>> full_path = win32api.GetLongPathName(short_path)
>>> full_path
'C:\\this path has spaces\\thisfilehasmorethan8char.txt'
>>> os.path.isfile(full_path)
True
>>> path == full_path
True
>>>
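The same result is reachable without pywin32, using only the stdlib ctypes module. A sketch, assuming the Win32 GetShortPathNameW API; on non-Windows platforms it simply returns the path unchanged as a fallback:

```python
import ctypes
import sys

def short_path(path):
    """Return the DOS 8.3 form of path via the Win32 API; pass-through
    fallback on platforms that have no short-name concept."""
    if sys.platform != "win32":
        return path
    buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
    n = ctypes.windll.kernel32.GetShortPathNameW(path, buf, 260)
    return buf.value if n else path

print(short_path(r"C:\this path has spaces\thisfilehasmorethan8char.txt"))
```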

Ulli




Re: [Python-Dev] Addition of "pyprocessing" module to standard lib.

2008-05-21 Thread Ulrich Berning

Brett Cannon wrote:


On Mon, May 19, 2008 at 2:03 AM, Ulrich Berning
<[EMAIL PROTECTED]> wrote:
 


Gregory P. Smith wrote:

   


On Fri, May 16, 2008 at 1:32 AM, Ulrich Berning
<[EMAIL PROTECTED]> wrote:

 


As long as the ctypes extension doesn't build on major Un*x platforms (AIX,
HP-UX), I don't like to see ctypes-dependent modules included in the
stdlib. Please keep the stdlib as portable as possible.

   


Nice in theory, but ctypes already works on at least the top 3 popular
platforms.  Let's not hold Python's stdlib back because nobody who uses
IBM and HP proprietary stuff has contributed the necessary support.
Making nice libraries available for other platforms is a good way to
encourage people to either pitch in and add support or consider their
platform choices in the future.

-gps


 


It's not my platform choice, it's the choice of our customers. I'm not using
these platforms just for fun (in fact it isn't fun compared to Linux or
Windows).

If porting libffi to AIX, HP-UX, IRIX, Solaris... (especially using vendor
compilers) were an easy job, I'm sure it would have been done already.
   



Well, ctypes isn't simple. =)

 


If more and more essential packages depend on ctypes, we should make a clear
statement that Python is no longer supported on platform/compiler
combinations where libffi/ctypes doesn't build. This would give me arguments
to drop support of our software on those platforms.
   



You are mixing the stdlib in with the language in terms of what is
required for Python to work, which I think is unfair. Just because
some part of the stdlib isn't portable to some OS does not mean Python
is not supported on that platform. If you can run a pure Python module
that does not depend on any C extension, then that platform has the
support needed to run Python. Everything else is extra (which is why
we have modules in the stdlib only available on specific platforms).

-Brett

 

I don't think it is unfair. If the development team decides one day to 
reimplement essential extensions like math, _socket, select, _ssl, pwd, 
grp, time, _locale, zlib... based on ctypes, because it may be much 
easier to maintain Python modules than to deal with complicated C 
code, Python will become pretty useless. It's like a cool radio without 
any chance to get batteries for it.


Platform specific modules are documented as such and nobody would expect 
a _winreg module on AIX or HP-UX.


As said before, PyOpenGL is an example of an extension that moved from C 
code to Python/ctypes. Luckily we don't use it, but what if the 
maintainers of MySQL-Python or cx_Oracle decide to move to ctypes?
Having the ctypes extension in the stdlib doesn't imply that it runs on every 
platform where Python runs. Extension writers should keep this in mind 
when they decide to use ctypes. They should document that their 
extension depends on ctypes and therefore doesn't run on platforms where 
ctypes doesn't work.


Ulli



Re: [Python-Dev] Addition of "pyprocessing" module to standard lib.

2008-05-19 Thread Ulrich Berning

Gregory P. Smith wrote:


On Fri, May 16, 2008 at 1:32 AM, Ulrich Berning
<[EMAIL PROTECTED]> wrote:
 


As long as the ctypes extension doesn't build on major Un*x platforms (AIX,
HP-UX), I don't like to see ctypes-dependent modules included in the
stdlib. Please keep the stdlib as portable as possible.
   



Nice in theory, but ctypes already works on at least the top 3 popular
platforms.  Let's not hold Python's stdlib back because nobody who uses
IBM and HP proprietary stuff has contributed the necessary support.
Making nice libraries available for other platforms is a good way to
encourage people to either pitch in and add support or consider their
platform choices in the future.

-gps

 

It's not my platform choice, it's the choice of our customers. I'm not 
using these platforms just for fun (in fact it isn't fun compared to 
Linux or Windows).


If porting libffi to AIX, HP-UX, IRIX, Solaris... (especially using 
vendor compilers) were an easy job, I'm sure it would have been done 
already. If more and more essential packages depend on ctypes, we should 
make a clear statement that Python is no longer supported on 
platform/compiler combinations where libffi/ctypes doesn't build. This 
would give me arguments to drop support of our software on those platforms.


Ulli



Re: [Python-Dev] Addition of "pyprocessing" module to standard lib.

2008-05-19 Thread Ulrich Berning

Nick Coghlan wrote:


Ulrich Berning wrote:

More and more people tend to say "Runs on Un*x" when they really mean 
"Tested on Linux". Un*x is not Linux.



Hmm, perhaps that would be why there are Solaris, FreeBSD and Tru64 
machines amongst the main Python buildbots, to go along with the 
assorted OS X, Windows and Linux boxes - and as far as I know 
test_ctypes runs quite happily on all of them.


On the specific problems with AIX, HP-UX and ctypes, was it ctypes 
itself that was failing to build, or the underlying libffi?


Cheers,
Nick.


On HP-UX-11.00, HP ANSI C++ B3910B A.03.73, Python-2.5.2, I get
 configure: error: "libffi has not been ported to hppa2.0w-hp-hpux11.00."

On AIX-4.3.3, C for AIX Compiler Version 6, Python-2.5.2, I get
 "build/temp.aix-4.3-2.5/libffi/include/ffi.h", line 123.4: 1506-205 
(S) #error "no 64-bit data type supported"


On Solaris-10/x86, Sun C 5.8 Patch 121016-07 2007/10/03, Python-2.5.2, I get
 "build/temp.solaris-2.10-i86pc-2.5/libffi/include/ffitarget.h", line 
64: undefined symbol: FFI_DEFAULT_ABI


On Solaris-8/sparc, Sun C 5.8 2005/10/13, Python-2.5.2, I get
 "build/temp.solaris-2.8-sun4u-2.5/libffi/include/ffi.h", line 225: 
syntax error before or at: __attribute__


On IRIX-6.5, gcc-3.4.4, Python-2.5.2, ffi_closure is undefined, because 
only the old O32 binary format is supported, not the new N32/N64 format.


I'm trying to use the vendor-specific compilers whenever possible, 
because using gcc adds a dependency (libgcc) that I want to 
avoid; and even if I could live with that dependency, it's not easy 
to get/build the 'right' gcc version if your software also depends on 
other big packages like Qt and PyQt.


I'm not using these platforms for my own pleasure (in fact, I would be 
happy if these platforms disappeared from the market), but as long 
as our customers use these platforms, we want to promise that our software 
runs on them.


I have no problem with the fact that ctypes doesn't build on those 
platforms, because I don't use it, but if more and more essential 
packages depend on ctypes, I'm running into trouble. PyOpenGL is an 
example of an extension that moved completely from C source (SWIG-generated) 
to ctypes usage.


Ulli



Re: [Python-Dev] Addition of "pyprocessing" module to standard lib.

2008-05-16 Thread Ulrich Berning

Nick Craig-Wood wrote:


Jesse Noller <[EMAIL PROTECTED]> wrote:
 


I am looking for any questions, concerns or benchmarks python-dev has
regarding the possible inclusion of the pyprocessing module to the
standard library - preferably in the 2.6 timeline.  In March, I began
working on the PEP for the inclusion of the pyprocessing (processing)
module into the python standard library[1]. The original email to the
stdlib-sig can be found here, it includes a basic overview of the
module:

http://mail.python.org/pipermail/stdlib-sig/2008-March/000129.html

The processing module mirrors/mimics the API of the threading module -
and with simple import/subclassing changes depending on the code,
allows you to leverage multi core machines via an underlying forking
mechanism. The module also supports the sharing of data across groups
of networked machines - a feature obviously not part of the core
threading module, but useful in a distributed environment.
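To make the "import/subclassing changes" concrete, here is a small sketch written against multiprocessing, the stdlib name this proposal eventually took; the pyprocessing API it standardized has the same shape as threading:

```python
# threading-style code, run across processes: essentially only the
# import line changes compared to threading.Thread / queue.Queue.
from multiprocessing import Process, Queue

def worker(q):
    q.put(sum(range(10)))

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    p.join()
    print(q.get())  # 45
```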
   



I think processing looks interesting and useful, especially since it
works on Windows as well as Un*x.

However I'd like to see a review of the security - anything which can
run across networks of machines has security implications and I didn't
see these spelt out in the documentation.

Networked running should certainly be disabled by default and need
explicitly enabling by the user - I'd hate for a new version of python
to come with a remote exploit by default...

 

As long as the ctypes extension doesn't build on major Un*x platforms 
(AIX, HP-UX), I don't like to see ctypes-dependent modules included in 
the stdlib. Please keep the stdlib as portable as possible.
More and more people tend to say "Runs on Un*x" when they really mean 
"Tested on Linux". Un*x is not Linux.


Ulli





Re: [Python-Dev] towards a stricter definition of sys.executable

2006-03-16 Thread Ulrich Berning
Fredrik Lundh schrieb:

>how about this alternative ?  (extended (b)).
>
>d) If Python was started from a standard Python interpreter,
>sys.executable contains the full path to this interpreter.  If not,
>or if the path could not be determined, sys.executable is set to
>None.
>  
>
Our registration code for Windows services and COM servers, and some 
other specific things, relies on the fact that sys.executable contains the 
name of the binary that is actually running (either the full path of 
python[.exe] or the full path of the frozen application executable), so 
please don't touch sys.executable.

Ulli


Re: [Python-Dev] C++ for CPython 3? (Re: str.count is slow)

2006-02-28 Thread Ulrich Berning
Fredrik Lundh schrieb:

>should we perhaps switch to (careful use of) C++ in 3.0 ?
>
>  
>
I can't see many advantages in moving to C++, but a lot of disadvantages:

- Size increase, especially when we start using templates
- Performance decrease
- Problems with name mangling together with dynamic loading and cross 
module API's
- Everything has to be built with the same compiler; binaries created 
with different compilers can't interoperate
- Possibly all extensions modules have to be (re)written in C++
- Moving to C++ will change Python's well known API substantially

---

IMHO, if a Python major version change implies forgetting everything that 
has been established over the years, ignoring backward compatibility, 
and breaking nearly every Python script, then we should definitely find 
another name for it, or I will stay with Python 2 for the rest of my life.

---

Ulli




Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-14 Thread Ulrich Berning
Mark Hammond schrieb:

>>release. The main reason why I changed the import behavior was
>>pythonservice.exe from the win32 extensions. pythonservice.exe imports
>>the module that contains the service class, but because
>>pythonservice.exe doesn't run in optimized mode, it will only import a
>>.py or a .pyc file, not a .pyo file. Because we always generate bytecode
>>with -OO at distribution time, we either had to change the behavior of
>>pythonservice.exe or change the import behavior of Python.
>>
>>
>
>While ignoring the question of how Python should in the future handle
>optimizations, I think it safe to state that that pythonservice.exe should
>have the same basic functionality and operation in this regard as python.exe
>does.  It doesn't sound too difficult to modify pythonservice to accept -O
>flags, and to modify the service installation process to allow this flag to
>be specified.  I'd certainly welcome any such patches.
>
>Although getting off-topic for this list, note that for recent pywin32
>releases, it is possible to host a service using python.exe directly, and
>this is the technique py2exe uses to host service executables.  It would
>take a little more work to set things up to work like that, but that's
>probably not too unreasonable for a custom application with specialized
>distribution requirements.  Using python.exe obviously means you get full
>access to the  command-line facilities it provides.
>  
>
Although off-topic for this list, I should give a reply.

I have done both.
My first approach was to change pythonservice.exe to accept -O and -OO 
and set Py_OptimizeFlag accordingly.
Today, we aren't using pythonservice.exe any longer. I have made nearly 
all the required changes in win32serviceutil.py to let python.exe host 
the services. It requires no changes to the services; everything should 
work as before. The difference is that the service module is always 
executed as a script now. This requires an additional (first) argument 
'--as-service' when the script runs as a service.

NOTE: Debugging services doesn't work yet.

---
Installing the service C:\svc\testService.py is done the usual way:
C:\svc>C:\Python23\python.exe testService.py install

The resulting ImagePath value in the registry is then:
"C:\Python23\python.exe" C:\svc\testService.py --as-service

After finishing development and testing, we convert the script into an 
executable with our own tool sib.py:
C:\svc>C:\Python23\python.exe C:\Python23\sib.py -n testService -d . 
testService.py
C:\svc>nmake

Now, we just do:
C:\svc>testService.exe update

The resulting ImagePath value in the registry is then changed to:
"C:\testService.exe" --as-service

Starting, stopping and removing works as usual:
C:\svc>testService.exe start
C:\svc>testService.exe stop
C:\svc>testService.exe remove
---

Because not everything works as before (debugging doesn't work, but we 
do not use it), I haven't provided a patch yet. As soon as I have 
completed it, I will have a patch available.

Ulli





Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-14 Thread Ulrich Berning
Guido van Rossum schrieb:

>On 11/11/05, Ulrich Berning <[EMAIL PROTECTED]> wrote:
>  
>
>>For instance, nobody would give the output of a C compiler a different
>>extension when different compiler flags are used.
>>
>>
>
>But the usage is completely different. With C you explicitly manage
>when compilation happens. With Python you don't. When you first run
>your program with -O but it crashes, and then you run it again without
>-O to enable assertions, you would be very unhappy if the bytecode
>cached in a .pyo file would be reused!
>
>  
>
The other way round definitely makes more sense. At development time, I 
would never use Python with -O or -OO; I use it only at distribution 
time, after doing all the tests, to generate optimized bytecode.

However, this problem could easily be solved if the value of 
Py_OptimizeFlag were stored together with the generated bytecode. At 
import time, the cached bytecode would not be reused if the current 
value of Py_OptimizeFlag didn't match the stored value (if the .py file 
isn't there any longer, we could either raise an exception or emit a 
warning and reuse the bytecode anyway). And if we did this a little more 
cleverly, we could refuse to reuse optimized bytecode when running 
without -O or -OO, and ignore assertions and docstrings in unoptimized 
bytecode when running with -O or -OO.
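Later Python versions grew essentially this knob at the compile() level: an explicit optimize argument decouples bytecode generation from how the interpreter was started. A quick sketch:

```python
src = "def f():\n    'docstring'\n    assert True\n    return 1\n"

ns_plain, ns_opt = {}, {}
exec(compile(src, "<s>", "exec", optimize=0), ns_plain)
exec(compile(src, "<s>", "exec", optimize=2), ns_opt)   # like -OO

print(ns_plain["f"].__doc__)  # the docstring survives at level 0
print(ns_opt["f"].__doc__)    # None: docstrings stripped at level 2
```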

>>I would appreciate to see the generation of .pyo files completely
>>removed in the next release.
>>
>>
>
>You seem to forget the realities of backwards compatibility. While
>there are ways to cache bytecode without having multiple extensions,
>we probably can't do that until Python 3.0.
>
>  
>
Please can you explain what backwards compatibility means in this 
context? Generated bytecode is neither upwards nor backwards compatible. 
No matter what I try, I always get a 'Bad magic number' when I try to 
import bytecode generated with a different Python version.
The most obvious software that may depend on the existence of .pyo 
files is the various freeze/packaging tools like py2exe, py2app, 
cx_Freeze and Installer.  I haven't checked them in detail, but after a 
short inspection they seem to be independent of the existence of .pyo 
files. I can't imagine that there is any other Python software that 
depends on the existence of .pyo files, but maybe I'm totally wrong in 
this wild guess.
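The 'Bad magic number' is by design: each interpreter version stamps its own magic cookie into the first bytes of cached bytecode, so a mismatched file is rejected rather than misinterpreted. In modern Python the cookie is visible directly (a sketch):

```python
import importlib.util

# The per-version magic cookie written at the front of every .pyc file;
# 4 bytes, ending in b"\r\n" so that text-mode corruption is detectable.
print(importlib.util.MAGIC_NUMBER)

# A different interpreter version writes a different cookie, which is
# exactly what surfaces as "Bad magic number" on import.
```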

Ulli



Re: [Python-Dev] Inconsistent behaviour in import/zipimport hooks

2005-11-11 Thread Ulrich Berning
Phillip J. Eby schrieb:

>At 04:33 PM 11/9/2005 -0800, Guido van Rossum wrote:
>  
>
>>On 11/9/05, Phillip J. Eby <[EMAIL PROTECTED]> wrote:
>>
>>
>>>By the way, while we're on this subject, can we make the optimization
>>>options be part of the compile() interface?  Right now the distutils has to
>>>actually exec another Python process whenever you want to compile
>>>code with
>>>a different optimization level than what's currently in effect, whereas if
>>>it could pass the desired level to compile(), this wouldn't be necessary.
>>>  
>>>
>>Makes sense to me; we need a patch of course.
>>
>>
>
>But before we can do that, it's not clear to me if it should be part of the 
>existing "flags" argument, or whether it should be separate.  Similarly, 
>whether it's just going to be a level or an optimization bitmask in its own 
>right might be relevant too.
>
>For the current use case, obviously, a level argument suffices, with 'None' 
>meaning "whatever the command-line level was" for backward 
>compatibility.  And I guess we could go with that for now easily enough, 
>I'd just like to know whether any of the AST or optimization mavens had 
>anything they were planning in the immediate future that might affect how 
>the API addition should be structured.
>
>  
>
I'm using a totally different approach for the above problem. I have 
implemented two functions in the sys module, that make the startup flags 
accessible at runtime. This also solves some other problems I had, as 
you will see in the examples below:


The first function makes most of the flags readable (I have omitted the 
flags that are documented as deprecated in the code):

sys.getrunflag(name) -> integer

Return one of the interpreter run flags. Possible names are 'Optimize', 
'Verbose', 'Interactive', 'IgnoreEnvironment', 'Debug', 
'DivisionWarning', 'NoSite', 'NoZipImport', 'UseClassExceptions', 
'Unicode', 'Frozen', 'Tabcheck'. getrunflag('Optimize') for example 
returns the current value of Py_OptimizeFlag.


The second function makes a few flags writable:

sys.setrunflag(name, value) -> integer

Set an interpreter run flag. The only flags that can be changed at 
runtime are Py_VerboseFlag ('Verbose') and Py_OptimizeFlag ('Optimize'). 
Returns the previous value of the flag.


As you can see, I have also introduced the new flag Py_NoZipImport, which 
can be activated with -Z at startup. This bypasses the activation of 
zipimport and is very handy if you edit modules stored in the 
filesystem that are normally imported from a zip archive and you want 
to test your modifications. With this flag, there is no need to delete, 
rename or update the zip archive, or to modify sys.path, to ensure that 
your changed modules are imported from the filesystem and not from the 
zip archive.


And here are a few usable examples for the new functions:

1.)  You have an application, that does a huge amount of imports and 
some of them are mysterious, so you want to track them in verbose mode. 
You could start python with -v or -vv, but then you get hundreds or 
thousands of lines of output. Instead, you can do the following:

import sys
import ...
import ...
oldval = sys.setrunflag('Verbose', 1) # -v, use 2 for -vv
import ...
import ...
sys.setrunflag('Verbose', oldval)
import ...
import ...

Now, you get only verbose messages for the imports that you want to track.
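For comparison, stock CPython later exposed the startup flags read-only through sys.flags; that covers the inspection half of the proposal above, though not the runtime toggling:

```python
import sys

# Read-only struct-sequence view of the interpreter's startup flags.
print(sys.flags.optimize)  # 0 unless started with -O / -OO
print(sys.flags.verbose)   # 0 unless started with -v / -vv
```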

2.) You need to generate optimized bytecode (without assertions and 
docstrings) from source code, no matter how the interpreter was started:

import sys
...
source = ...
oldval = sys.setrunflag('Optimize', 2) # -OO, use 1 for -O
bytecode = compile(source, ...)
sys.setrunflag('Optimize', oldval)
...

3.) You have to build a command line for the running application (e.g. 
for registration in the registry) and need to check whether you are running 
a script or a frozen executable (this assumes that your freeze tool 
sets the Py_FrozenFlag):

import sys
...
if sys.getrunflag('Frozen'):
commandline = sys.executable
else:
commandline = '%s %s' % (sys.executable, sys.argv[0])
...

NOTE: My own freeze tool sib.py, which is part of the VendorID package 
(www.riverbankcomputing.co.uk/vendorid) doesn't set the Py_FrozenFlag 
yet. I will provide an update soon.



And now back to the original subject:

I have done nearly the same changes that Osvaldo provided with his 
patch, and I would highly appreciate it if this patch goes into the next 
release. The main reason why I changed the import behavior was 
pythonservice.exe from the win32 extensions. pythonservice.exe imports 
the module that contains the service class, but because 
pythonservice.exe doesn't run in optimized mode, it will only import a 
.py or a .pyc file, not a .pyo file. Because we always generate bytecode 
with -OO at distribution time, we either had to change the behavior of 
pythonservice.exe or change the import behavior of Python.
It is essential for us to remove assertions and docstrings in our 
commercial Python applications.

Re: [Python-Dev] Linux Python linking with G++?

2005-07-09 Thread Ulrich Berning
David Abrahams schrieb:

>Ulrich Berning <[EMAIL PROTECTED]> writes:
>
>  
>
>>If you build C++ extensions on HP-UX with aCC, Python must be compiled 
>>and linked as a C++ program. This is documented.
>>
>>
>
>You mean dynamically loaded C++ extensions, or the kind that are
>linked into the Python executable?
>
>  
>
Dynamically loaded extensions, especially SIP/PyQt 
(http://www.riverbankcomputing.co.uk).

>I'm willing to believe almost anything about HP-UX.  Until recently,
>aCC was so broken as a C++ compiler that there was little point in
>trying to get Boost.Python to work on it, and I don't have much data
>for that system.
>
>  
>
I'm using the HP aC++ compiler C.03.50 together with the patches 
PHSS_29483 and PHSS_30967 on HP-UX B.11.00 and had no problems building 
Python (2.3.5), Qt, SIP, PyQt and all other extensions with it.

>>It will not work if Python is compiled and linked as a normal C
>>program (I have tried it).
>>
>>
>
>Even if you take out the use of C++ constructs in ccpython.cc?  I just
>need to check all the obvious angles.
>
>  
>
What do you mean? The only C++ construct in ccpython.cc is the extern 
"C" declaration of Py_Main(), and this is necessary if a C++ program 
references symbols from a C library. HP says that a C++ shared library 
or a C++ shared object can only be loaded by a C++ main program. I can't 
remember the error message/symptoms, but I tried to build Python using 
python.c and couldn't load any C++ extensions. Because I'm going on 
vacation for the next three weeks, I can't try anything on HP-UX at the 
moment.


Re: [Python-Dev] Linux Python linking with G++?

2005-07-09 Thread Ulrich Berning
David Abrahams schrieb:

>"Martin v. Löwis" <[EMAIL PROTECTED]> writes:
>
>  
>
>>David Abrahams wrote:
>>
>>
>>>Unless, of course, I'm missing something.  So if I am missing
>>>something, what is it?
>>>  
>>>
>>You are missing something, and I can only repeat myself. Some systems
>>require main() to be compiled as C++, or else constructors may not work
>>(and perhaps other things fail as well). 
>>
>>
>
>Yes, and that becomes important in programs that have constructors.
>I.e., C++ programs.  The Python executable is not such a program,
>except for one C++ file: ccpython.cc.  There is no reason that file
>couldn't be rewritten as a pure 'C' file and any need for Python to be
>linked with G++ would disappear.
>
>  
>
>>The configure option --with-cxx (documented as "enable C++ support")
>>make Python C++ options 
>>
>>
>
>What are "Python C++ options?"
>
>  
>
>>work on such systems. It is automatically enabled if a C++ compiler
>>is found.
>>
>>There is configure auto-detection for what linker is used when
>>ccpython.o becomes main().
>>
>>This is the state of the things as it is. In what way would you like to
>>see that state changed?
>>
>>
>
>I would like the Python executable never to be linked (or compiled
>either) by g++ unless that is explicitly requested by the person
>invoking configure or make.
>
>  
>
>>I could personally accept if ccpython and --with-cxx would be dropped
>>entirely (i.e. deliberately breaking systems which require it); 
>>
>>
>
>I don't believe any systems require it.  I realize you have said
>otherwise, but after years of working with Boost.Python I'm very
>familiar with the issues of dynamic linking and C/C++ interoperability
>on a wide variety of platforms, and I'm not convinced by your
>assertion.  If such a system exists, it should be easy for someone to
>point me at it, and show that something breaks.
>
>  
>
If you build C++ extensions on HP-UX with aCC, Python must be compiled 
and linked as a C++ program. This is documented.
It will not work if Python is compiled and linked as a normal C program 
(I have tried it). I haven't tried gcc on this platform, but I guess it 
is the same (compile and link with g++).

Ulli


