Re: [Python-Dev] Investigating time for `import requests`

2017-10-08 Thread David Cournapeau
On Mon, Oct 2, 2017 at 6:42 PM, Raymond Hettinger <
raymond.hettin...@gmail.com> wrote:

>
> > On Oct 2, 2017, at 12:39 AM, Nick Coghlan  wrote:
> >
> >  "What requests uses" can identify a useful set of
> > avoidable imports. A Flask "Hello world" app could likely provide
> > another such sample, as could some example data analysis notebooks).
>
> Right.  It is probably worthwhile to identify which parts of the library
> are typically imported but are not ever used.  And likewise, identify a
> core set of commonly used tools that are going to be almost unavoidable in
> sufficiently interesting applications (like using requests to access a REST
> API, running a micro-webframework, or invoking mercurial).
>
> Presumably, if any of this is going to make a difference to end users, we
> need to see if there is any avoidable work that takes a significant
> fraction of the total time from invocation through the point where the user
> first sees meaningful output.  That would include loading from nonvolatile
> storage, executing the various imports, and doing the actual application.
>
> I don't expect to find anything that would help users of Django, Flask,
> and Bottle since those are typically long-running apps where we value
> response time more than startup time.
>
> For scripts using the requests module, there will be some fruit because
> not everything that is imported is used.  However, that may not be
> significant because scripts using requests tend to be I/O bound.  In the
> timings below, 6% of the running time is used to load and run python.exe,
> another 16% is used to import requests, and the remaining 78% is devoted to
> the actual task of running a simple REST API query. It would be interesting
> to see how much of the 16% could be avoided without major alterations to
> requests, to urllib3, and to the standard library.
>

It is certainly true that for a CLI tool that does any network
I/O, especially SSL, import times quickly become negligible. It gets
trickier for complex tools, because of error management. For example, a
common pattern I have used in the past is to have a high-level "catch all
exceptions" function that dispatches the CLI command:

try:
    main_function(...)
except ErrorKind1:
    ...
except requests.exceptions.SSLError:
    # give a complete message about options when receiving SSL errors,
    # e.g. an invalid certificate
    ...

This pattern requires importing requests every time the command is run,
even if no network I/O is actually done. For complex CLI tools, maybe most
commands don't use network I/O (the tool in question was a complete package
manager), but you pay the ~100 ms of the requests import for every command.
It is particularly visible because command latency starts to be felt around
100-150 ms, and while you can do a lot in python in 100-150 ms, you
can't do much in 0-50 ms.
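One mitigation is to defer the import to the error path (a minimal sketch,
not what the tool actually did; it assumes the handler only needs requests
after a failure, and the --cacert option name is made up):

def dispatch(argv):
    try:
        return main_function(argv)
    except Exception as exc:
        # Only pay the ~100 ms requests import once an error has
        # actually occurred; the common path never loads it.
        import requests
        if isinstance(exc, requests.exceptions.SSLError):
            print("SSL error: check the --cacert option")
            return 1
        raise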

David


> For mercurial, "hg log" or "hg commit" will likely be instructive about
> what portion of the imports actually get used.  A push or pull will likely
> be I/O bound so those commands are less informative.
>
>
> Raymond
>
>
> - Quick timing for a minimal script using the requests module
> ---
>
> $ cat > demo_github_rest_api.py
> import requests
> info = requests.get('https://api.github.com/users/raymondh').json()
> print('%(name)s works at %(company)s. Contact at %(email)s' % info)
>
> $ time python3.6 demo_github_rest_api.py
> Raymond Hettinger works at SauceLabs. Contact at None
>
> real    0m0.561s
> user    0m0.134s
> sys     0m0.018s
>
> $ time python3.6 -c "import requests"
>
> real    0m0.125s
> user    0m0.104s
> sys     0m0.014s
>
> $ time python3.6 -c ""
>
> real    0m0.036s
> user    0m0.024s
> sys     0m0.005s
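To attribute the 16% more precisely, individual imports can also be timed
in-process; a trivial sketch (Python 3.7's -X importtime option gives a
much finer per-module breakdown):

import time

t0 = time.perf_counter()
import requests
print("import requests: %.3f s" % (time.perf_counter() - t0))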
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> cournape%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] SSL certificates recommendations for downstream python packagers

2017-01-31 Thread David Cournapeau
On Tue, Jan 31, 2017 at 9:19 AM, Cory Benfield  wrote:

>
> On 30 Jan 2017, at 21:00, David Cournapeau  wrote:
>
>
>
> On Mon, Jan 30, 2017 at 8:50 PM, Cory Benfield  wrote:
>
>>
>>
>> > On 30 Jan 2017, at 13:53, David Cournapeau  wrote:
>> >
>> > Are there any official recommendations for downstream packagers beyond
>> PEP 476 ? Is it "acceptable" for downstream packagers to patch python's
>> default cert locations ?
>>
>> There *are* no default cert locations on Windows or macOS that can be
>> accessed by OpenSSL.
>>
>> I cannot stress this strongly enough: you cannot provide a
>> platform-native certificate validation logic for Python *and* use OpenSSL
>> for certificate validation on Windows or macOS. (macOS can technically do
>> this when you link against the system OpenSSL, at the cost of using a
>> catastrophically insecure version of OpenSSL.)
>>
>
> Ah, thanks, that's already useful information.
>
> Just making sure I understand: this means there is no way to use python's
> SSL library to use the system store on windows, in particular private
> certificates that are often deployed by internal IT in large orgs ?
>
>
> If only it were that simple!
>
> No, you absolutely *can* do that. You can extract the trust roots from the
> system trust store, convert them into PEM/DER-encoded files, and load them
> into OpenSSL. That will work.
>

Right, I guess it depends on what one means by "can". In my context, it was
to be taken as "can it work without the end user having to do anything". We
provide them a python-based tool, and it has to work with the system store
out of the box. If the system store is updated through e.g. group policy,
our python tool automatically gets that update.

From the sound of it, it looks like this is simply not possible ATM with
python, at least not without 3rd party libraries.
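For reference, the extraction route Cory describes below can be done with
the stdlib alone on Windows (ssl.enum_certificates() has been there since
Python 3.4); a minimal sketch, which inherits all the chain-building
caveats he lists:

import ssl

# Windows-only: copy trust anchors from the system "CA" and "ROOT"
# stores into an OpenSSL-backed context. Each entry is a
# (cert_bytes, encoding_type, trust) triple.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
for store in ("CA", "ROOT"):
    for cert, encoding, trust in ssl.enum_certificates(store):
        if encoding == "x509_asn" and trust:
            ctx.load_verify_locations(cadata=cert)

This is roughly what SSLContext.load_default_certs() already does on
Windows, which matches Christian's point elsewhere in the thread: only
anchors already present in the store get picked up.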

David


> The problem is that both SecureTransport and SChannel have got a number of
> differences from OpenSSL. In no particular order:
>
> 1. Their chain building logic is different. This means that, given a
> collection of certificates presented by a server and a bundle of
> already-trusted certs, each implementation may build a different trust
> chain. This may cause one implementation to refuse to validate where the
> others do, or vice versa. This is very common with older OpenSSLs.
> 2. SecureTransport and SChannel both use the system trust DB, which on
> both Windows and mac allows the setting of custom policies. OpenSSL won’t
> respect these policies, which means you can fail-open (that is, export and
> use a root certificate that the OS believes should not be trusted for a
> given use case). There is no way to export these trust policies into
> OpenSSL.
> 3. SecureTransport, SChannel, and OpenSSL all support different X.509
> extensions and understand them differently. This means that some certs may
> be untrusted for certain uses by Windows but trusted for those uses by
> OpenSSL, for example.
>
> In general, it is unwise to mix trust stores. If you want to use your OS’s
> trust store, the best approach is to use the OS’s TLS stack as well. At
> least that way when a user says “It works in my browser”, you know it
> should work for you too.
>
> Cory
>
>


Re: [Python-Dev] SSL certificates recommendations for downstream python packagers

2017-01-30 Thread David Cournapeau
On Mon, Jan 30, 2017 at 9:14 PM, Christian Heimes 
wrote:

> On 2017-01-30 22:00, David Cournapeau wrote:
> >
> >
> > On Mon, Jan 30, 2017 at 8:50 PM, Cory Benfield  wrote:
> >
> >
> >
> > > On 30 Jan 2017, at 13:53, David Cournapeau  wrote:
> > >
> > > Are there any official recommendations for downstream packagers
> beyond PEP 476 ? Is it "acceptable" for downstream packagers to patch
> python's default cert locations ?
> >
> > There *are* no default cert locations on Windows or macOS that can
> > be accessed by OpenSSL.
> >
> > I cannot stress this strongly enough: you cannot provide a
> > platform-native certificate validation logic for Python *and* use
> > OpenSSL for certificate validation on Windows or macOS. (macOS can
> > technically do this when you link against the system OpenSSL, at the
> > cost of using a catastrophically insecure version of OpenSSL.)
> >
> >
> > Ah, thanks, that's already useful information.
> >
> > Just making sure I understand: this means there is no way to use
> > python's SSL library to use the system store on windows, in particular
> > private certificates that are often deployed by internal IT in large
> > orgs ?
>
> That works with CPython because we get all trust anchors from the cert
> store. However Python is not able to retrieve *additional* certificates.
> A new installation of Windows starts off with a minimal set of trust
> anchors. Chrome, IE and Edge use the proper APIs.
>

Hm. Is this documented anywhere ? We have customers needing
"private/custom" certificates, and I am unsure where to look.

David


Re: [Python-Dev] SSL certificates recommendations for downstream python packagers

2017-01-30 Thread David Cournapeau
On Mon, Jan 30, 2017 at 8:50 PM, Cory Benfield  wrote:

>
>
> > On 30 Jan 2017, at 13:53, David Cournapeau  wrote:
> >
> > Are there any official recommendations for downstream packagers beyond
> PEP 476 ? Is it "acceptable" for downstream packagers to patch python's
> default cert locations ?
>
> There *are* no default cert locations on Windows or macOS that can be
> accessed by OpenSSL.
>

Also, doesn't that contradict the wording of PEP 476, specifically " Python
would use the system provided certificate database on all platforms.
Failure to locate such a database would be an error, and users would need
to explicitly specify a location to fix it." ?

Or is that PEP a long term goal, and not a description of the current
status ?

David


Re: [Python-Dev] SSL certificates recommendations for downstream python packagers

2017-01-30 Thread David Cournapeau
On Mon, Jan 30, 2017 at 8:50 PM, Cory Benfield  wrote:

>
>
> > On 30 Jan 2017, at 13:53, David Cournapeau  wrote:
> >
> > Are there any official recommendations for downstream packagers beyond
> PEP 476 ? Is it "acceptable" for downstream packagers to patch python's
> default cert locations ?
>
> There *are* no default cert locations on Windows or macOS that can be
> accessed by OpenSSL.
>
> I cannot stress this strongly enough: you cannot provide a platform-native
> certificate validation logic for Python *and* use OpenSSL for certificate
> validation on Windows or macOS. (macOS can technically do this when you
> link against the system OpenSSL, at the cost of using a catastrophically
> insecure version of OpenSSL.)
>

Ah, thanks, that's already useful information.

Just making sure I understand: this means there is no way to use python's
SSL library to use the system store on windows, in particular private
certificates that are often deployed by internal IT in large orgs ?


> The only program I am aware of that does platform-native certificate
> validation on all three major desktop OS platforms is Chrome. It does this
> using a fork of OpenSSL to do the actual TLS, but the platform-native
> crypto library to do the certificate validation. This is the only
> acceptable way to do this, and Python does not expose the appropriate hooks
> to do it from within Python code. This would require that you carry
> substantial patches to the standard library to achieve this, all of which
> would be custom code. I strongly recommend you don't undertake to do this
> unless you are very confident of your ability to write this code correctly.
>

That's exactly what I was afraid of and why I asked before attempting
anything.


>
> The best long term solution to this is to stop using OpenSSL on platforms
> that don't consider it the 'blessed' approach. If you're interested in
> following that work, we're currently discussing it on the security-SIG, and
> you'd be welcome to join.
>

Thanks, I will see if it looks like I have anything to contribute.

David


[Python-Dev] SSL certificates recommendations for downstream python packagers

2017-01-30 Thread David Cournapeau
Hi,

I am managing the team responsible for providing python packaging at
Enthought, and I would like to make sure we are providing a good (and
secure) out of the box experience for SSL.

My understanding is that PEP 476 is the latest PEP that concerns this
issue, and that PEP recommends using the system store:
https://www.python.org/dev/peps/pep-0476/#trust-database. But looking at
binary python distributions from python.org, that does not seem to always
be the case. I looked at the following:

* 3.5.3 from python.org for OS X (64 bits): this uses the old, system
openssl
* 3.6.0 from python.org for OS X: this embeds a recent openssl, but ssl
seems to be configured to use non-existing paths
(ssl.get_default_verify_paths()), and indeed, cert validation seems to
fail by default with those installers
* 3.6.0 from python.org for windows: I have not found how the ssl module
finds the certificates, but certificate validation seems to work

Are there any official recommendations for downstream packagers beyond PEP
476 ? Is it "acceptable" for downstream packagers to patch python's default
cert locations ?
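For anyone reproducing the checks above, the compiled-in locations and the
number of roots actually loaded can be inspected directly from the stdlib
(a quick sketch):

import ssl

# Where this build of the ssl module looks for a CA bundle by default.
print(ssl.get_default_verify_paths())

# How many CA certificates a default context actually loaded.
ctx = ssl.create_default_context()
print(ctx.cert_store_stats())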

David


Re: [Python-Dev] Making the new dtrace support work on OS X

2017-01-13 Thread David Cournapeau
On Fri, Jan 13, 2017 at 9:12 PM, Lukasz Langa  wrote:

> Looks like function-entry and function-return give you the C-level frame
> names for some reason. This was implemented on OS X 10.11 if that makes any
> difference. I will look at this in the evening, the laptop I'm on now is
> macOS Sierra with SIP which cripples dtrace.
>

On that hint, I tried on OS X 10.11. sw_vers says

ProductName: Mac OS X
ProductVersion: 10.11.6
BuildVersion: 15G1108

And there, the example worked as advertised w/ my build of 3.6.0. I will
try on more versions of OS X in our test lab.

David

>
> On Jan 12, 2017, at 5:08 AM, David Cournapeau  wrote:
>
> Hi,
>
> I was excited to see official dtrace support for python 3.6.0 on OS X, but
> I have not been able to make it work:
>
> 1. I built my own python from sources on OS X 10.9,  with the
> --with-dtrace support
> 2. if I launch `python3.6 -q &` and then `sudo dtrace -l -P python$!`, I
> get the following output:
>
>    ID   PROVIDER            MODULE                  FUNCTION NAME
>  2774   python48084      python3.6  _PyEval_EvalFrameDefault function-entry
>  2775   python48084      python3.6  _PyEval_EvalFrameDefault function-return
>  2776   python48084      python3.6                   collect gc-done
>  2777   python48084      python3.6                   collect gc-start
>  2778   python48084      python3.6  _PyEval_EvalFrameDefault line
>
> Which looks similar but not the same as the example given in the doc at
> https://docs.python.org/dev/howto/instrumentation.html#enabling-the-static-markers
>
> 3. When I try to test anything with the given call_stack.d example, I
> can't make it work at all:
>
> """
> # script.py
> def start():
> foo()
>
> def foo():
> pass
>
> start()
> """
>
> I am not very familiar with dtrace, so maybe I am missing a step, there is
> a documentation bug, or it depends on which OS X version you are using ?
>
> Thanks,
> David


[Python-Dev] Making the new dtrace support work on OS X

2017-01-12 Thread David Cournapeau
Hi,

I was excited to see official dtrace support for python 3.6.0 on OS X, but
I have not been able to make it work:

1. I built my own python from sources on OS X 10.9,  with the --with-dtrace
support
2. if I launch `python3.6 -q &` and then `sudo dtrace -l -P python$!`, I
get the following output:

   ID   PROVIDER            MODULE                  FUNCTION NAME
 2774   python48084      python3.6  _PyEval_EvalFrameDefault function-entry
 2775   python48084      python3.6  _PyEval_EvalFrameDefault function-return
 2776   python48084      python3.6                   collect gc-done
 2777   python48084      python3.6                   collect gc-start
 2778   python48084      python3.6  _PyEval_EvalFrameDefault line

Which looks similar but not the same as the example given in the doc at
https://docs.python.org/dev/howto/instrumentation.html#enabling-the-static-markers

3. When I try to test anything with the given call_stack.d example, I can't
make it work at all:

"""
# script.py
def start():
foo()

def foo():
pass

start()
"""

I am not very familiar with dtrace, so maybe I am missing a step, there is a
documentation bug, or it depends on which OS X version you are using ?
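One sanity check worth doing first (this assumes configure's --with-dtrace
ends up in pyconfig.h as WITH_DTRACE, which is worth verifying): sysconfig
should report whether a given build has the probes compiled in:

import sysconfig

# Expected to be 1 on a build configured with --with-dtrace.
print(sysconfig.get_config_var("WITH_DTRACE"))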

Thanks,
David


Re: [Python-Dev] PEP 514: Python environment registration in the Windows Registry

2016-03-01 Thread David Cournapeau
On Tue, Mar 1, 2016 at 5:46 PM, Steve Dower  wrote:

> On 01Mar2016 0524, Paul Moore wrote:
>
>> On 1 March 2016 at 11:37, David Cournapeau  wrote:
>>
>>> I am not clear about 3., especially on what should be changed. I know
>>> that
>>> for 2.7, we need to change PC\getpathp.c for sys.path, but are there any
>>> other places where the registry is used by python itself ?
>>>
>>
>> My understanding from the earlier discussion was that you should not
>> patch Python at all. The sys.path building via PythonPath is not
>> covered by the PEP and you should continue as at present. The new keys
>> are all for informational purposes - your installer should write to
>> them, and read them if looking for your installations. But the Python
>> interpreter itself should not know or care about your new keys.
>>
>> Steve can probably clarify better than I can, but that's how I recall
>> it being intended to work.
>> Paul
>>
>
> Yes, the intention was to not move sys.path building out of the PythonCore
> key. It's solely about discovery by external tools.
>

Right. For us, continuing to populate sys.path from the registry "owned" by
the python.org official installers is more and more untenable, because every
distribution writes there, and this is especially problematic when you have
both 32-bit and 64-bit distributions on the same machine.


> If you want to patch your own distribution to move the paths you are
> welcome to do that - there is only one string literal in getpathp.c that
> needs to be updated - but it's not a requirement and I deliberately avoided
> making a recommendation either way. (Though as discussed earlier in the
> thread, I'm very much in favour of deprecating and removing any use of the
> registry by the runtime itself in 3.6+, but still working out the
> implications of that.)
>

Great, I just wanted to make sure that removing it ourselves does not put us
in a corner, or further away from where python itself is going.

Would it make sense to indicate in the PEP that doing so is allowed
(neither recommended nor frowned upon) ?

David


Re: [Python-Dev] PEP 514: Python environment registration in the Windows Registry

2016-03-01 Thread David Cournapeau
Hi Steve,

I have looked into this PEP to see what we need to do on Enthought's side
of things. I have a few questions:

1. Is it recommended to follow this for any python version we may provide,
or just new versions (3.6 and above) ? Most of our customers still heavily
use 2.7, and I wonder whether backporting this to 2.7 would cause more
trouble than it is worth.
2. The main issue for us in practice has been the `PythonPath` entry used
to build `sys.path`. I understand this is not the point of the PEP, but
would it make sense to give more precise recommendations for 3rd party
providers there ?

IIUC, PEP 514 would recommend that we do the following:

1. Use HKLM for "system install" or HKCU for "user install" as the root key
2. Register under "\Software\Python\Enthought"
3. We should patch our pythons to look in 2. and not in
"\Software\Python\PythonCore", especially for `sys.path`
constructions.
4. When a python from enthought is installed, it should never register
anything in the official "\Software\Python\PythonCore" key, only in the
key defined in 2.

Is this correct ?
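For concreteness, my reading of 1., 2. and 4. above would translate into
something like the sketch below; the value and subkey names follow my
reading of the PEP 514 draft quoted further down, and the company, tag and
paths are purely illustrative:

import winreg

# Register a hypothetical per-user ("user install") environment under
# the company key; all names and paths below are made up.
with winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                      r"Software\Python\Enthought\2.7") as tag:
    winreg.SetValueEx(tag, "DisplayName", 0, winreg.REG_SZ,
                      "Enthought 2.7 (64-bit)")
    # The default value of InstallPath is the install directory.
    winreg.SetValue(tag, "InstallPath", winreg.REG_SZ, r"C:\Enthought")
    with winreg.CreateKey(tag, "InstallPath") as ip:
        winreg.SetValueEx(ip, "ExecutablePath", 0, winreg.REG_SZ,
                          r"C:\Enthought\python.exe")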

I am not clear about 3., especially on what should be changed. I know that
for 2.7, we need to change PC\getpathp.c for sys.path, but are there any
other places where the registry is used by python itself ?

Thanks for working on this,

David

On Sat, Feb 6, 2016 at 9:01 PM, Steve Dower  wrote:

> I've posted an updated version of this PEP that should soon be visible at
> https://www.python.org/dev/peps/pep-0514.
>
> Leaving aside the fact that the current implementation of Python relies on
> *other* information in the registry (that is not specified in this PEP),
> I'm still looking for feedback or concerns from developers who are likely
> to create or use the keys that are described here.
>
> 
>
> PEP: 514
> Title: Python registration in the Windows registry
> Version: $Revision$
> Last-Modified: $Date$
> Author: Steve Dower 
> Status: Draft
> Type: Informational
> Content-Type: text/x-rst
> Created: 02-Feb-2016
> Post-History: 02-Feb-2016
>
> Abstract
> 
>
> This PEP defines a schema for the Python registry key to allow third-party
> installers to register their installation, and to allow applications to
> detect
> and correctly display all Python environments on a user's machine. No
> implementation changes to Python are proposed with this PEP.
>
> Python environments are not required to be registered unless they want to
> be
> automatically discoverable by external tools.
>
> The schema matches the registry values that have been used by the official
> installer since at least Python 2.5, and the resolution behaviour matches
> the
> behaviour of the official Python releases.
>
> Motivation
> ==
>
> When installed on Windows, the official Python installer creates a
> registry key
> for discovery and detection by other applications. This allows tools such
> as
> installers or IDEs to automatically detect and display a user's Python
> installations.
>
> Third-party installers, such as those used by distributions, typically
> create
> identical keys for the same purpose. Most tools that use the registry to
> detect
> Python installations only inspect the keys used by the official installer.
> As a
> result, third-party installations that wish to be discoverable will
> overwrite
> these values, resulting in users "losing" their Python installation.
>
> By describing a layout for registry keys that allows third-party
> installations
> to register themselves uniquely, as well as providing tool developers
> guidance
> for discovering all available Python installations, these collisions
> should be
> prevented.
>
> Definitions
> ===
>
> A "registry key" is the equivalent of a file-system path into the
> registry. Each
> key may contain "subkeys" (keys nested within keys) and "values" (named and
> typed attributes attached to a key).
>
> ``HKEY_CURRENT_USER`` is the root of settings for the currently logged-in
> user,
> and this user can generally read and write all settings under this root.
>
> ``HKEY_LOCAL_MACHINE`` is the root of settings for all users. Generally,
> any
> user can read these settings but only administrators can modify them. It is
> typical for values under ``HKEY_CURRENT_USER`` to take precedence over
> those in
> ``HKEY_LOCAL_MACHINE``.
>
> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a
> special key
> that 32-bit processes transparently read and write to rather than
> accessing the
> ``Software`` key directly.
>
> Structure
> =
>
> We consider there to be a single collection of Python environments on a
> machine,
> where the collection may be different for each user of the machine. There
> are
> three potential registry locations where the collection may be stored
> based on
> the installation options of each environment::
>
> HKEY_CURRENT_USER\Software\Python\<Company>\<Tag>
> HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag>
> HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag>
>
> Environments are uniquely identified ...

Re: [Python-Dev] Single-file Python executables (was: Computed Goto dispatch for Python 2)

2015-05-28 Thread David Cournapeau
On Fri, May 29, 2015 at 1:28 AM, Chris Barker  wrote:

> On Thu, May 28, 2015 at 9:23 AM, Chris Barker 
> wrote:
>
>> Barry Warsaw wrote:
>> >> I do think single-file executables are an important piece to Python's
>> >> long-term competitiveness.
>>
>> Really? It seems to me that desktop development is dying. What are the
>> critical use-cases for a single file executable?
>>
>
> oops, sorry -- I see this was addressed in another thread. Though I guess
> I still don't see why "single file" is critical, over "single thing to
> install" -- like a OS-X app bundle that can just be dragged into the
> Applications folder.
>

It is much simpler to deploy in an automated, recoverable way (and also
much faster), because you can't have parts of the artefact "unsynchronized"
with other parts of the program. Note also that moving a python
installation around your filesystem is actually quite unlikely to work in
interesting use cases on unix, because of the relocatability issue.

Another advantage: it makes it impossible for users to tamper with an
application's content and be surprised things don't work anymore (a very
common source of issues, familiar to anybody deploying complex python
applications in the "enterprise world").

I recently started using some services written in go, and the single file
approach is definitely a big +. It makes *using* applications written in it
so much easier than python, even though I am complete newbie in go and
relatively comfortable with python.

One should keep in mind that go has some inherent advantages over python in
those contexts even if python were to gain single file distribution
tomorrow. Most of go stdlib is written in go now I believe, and it is much
more portable across linux systems on a given CPU arch compared to python.
IOW, it is more robust against ABI variability.

David


Re: [Python-Dev] Why does python use relative instead of absolute path when calling LoadLibrary*

2015-03-13 Thread David Cournapeau
Thank you both for your answers.

I will go away with this modification, and see how it goes.

David

On Thu, Mar 12, 2015 at 2:41 AM, Wes Turner  wrote:

>
> On Mar 11, 2015 3:36 PM, "David Cournapeau"  wrote:
> >
> > Hi,
> >
> > While looking at the import code of python for C extensions, I was
> wondering why we pass a relative path instead of an absolute path to
> LoadLibraryEx (see bottom for some context).
> >
> > In python 2.7, the full path existence was even checked before calling
> into LoadLibraryEx (
> https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L189),
> but it looks like this check was removed in python 3.x branch.
> >
> > Is there any defined behaviour that depends on this path to be relative ?
>
> Just a guess: does it have to do with resolving symlinks (w/ POSIX
> filesystems)?
>


[Python-Dev] Why does python use relative instead of absolute path when calling LoadLibrary*

2015-03-11 Thread David Cournapeau
Hi,

While looking at the import code of python for C extensions, I was
wondering why we pass a relative path instead of an absolute path to
LoadLibraryEx (see bottom for some context).

In python 2.7, the full path existence was even checked before calling into
LoadLibraryEx (
https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L189), but
it looks like this check was removed in python 3.x branch.

Is there any defined behaviour that depends on this path to be relative ?

Context
---

The reason why I am interested in this is the potential use of
SetDllDirectory to share dlls between multiple python extensions.
Currently, the only solutions I am aware of are:

1. putting the dlls in the PATH
2. bundling the dlls side by side the .pyd
3. patching packages to use preloading (using e.g. ctypes)
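For reference, the preloading in 3. is usually just a couple of lines,
sketched here with hypothetical names:

import ctypes

# Load the shared dependency from an explicit location first; the
# extension import below then resolves it from the already-loaded
# modules instead of searching PATH.
ctypes.WinDLL(r"C:\private\dlls\libfoo.dll")

import _uses_libfoo  # hypothetical extension linked against libfoo.dll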

I am investigating a solution 4, where the dlls would be put in a separate
"private" directory only known of python itself, without the need to modify
PATH.

Patching python to use SetDllDirectory("some private paths specific to a
python interpreter") works perfectly, except that it slightly changes the
semantics of LoadLibraryEx not to look for dlls in the current directory.
This breaks importing extensions built in place, unless I modify the call
in https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L195
from:

hDLL = LoadLibraryEx(pathname, NULL, LOAD_WITH_ALTERED_SEARCH_PATH)

to

hDLL = LoadLibraryEx(pathbuf, NULL, LOAD_WITH_ALTERED_SEARCH_PATH)

That seems to work, but I am quite worried about changing any import
semantics by accident.

David


Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-29 Thread David Cournapeau
On Wed, Oct 29, 2014 at 5:17 PM, David Cournapeau 
wrote:

>
>
> On Wed, Oct 29, 2014 at 3:25 PM, Antoine Pitrou 
> wrote:
>
>> On Thu, 30 Oct 2014 01:09:45 +1000
>> Nick Coghlan  wrote:
>> >
>> > Lots of folks are happy with POSIX emulation layers on Windows, as
>> > they're OK with "basically works" rather than "works like any other
>> > native application". "Basically works" isn't sufficient for many
>> > Python-on-Windows use cases though, so the core ABI is a platform
>> > native one, rather than a POSIX emulation.
>> >
>> > This makes Python fit in more cleanly with other Windows applications,
>> > but makes it harder to write Python applications that span both POSIX
>> > and Windows.
>>
>> I don't really understanding why that's the case. Only the
>> building and packaging may be more difficult, and that assumes you're
>> familiar with mingw32. But mingw32, AFAIK, doesn't make the Windows
>> runtime magically POSIX-compatible (Cygwin does, to some extent).
>>
>
> mingw32 is a more compliant C compiler (VS2008 does not implement much
> from C89)
>

That should read much C99, of course, otherwise VS 2008 would have been a
completely useless C compiler !

David


Re: [Python-Dev] Status of C compilers for Python on Windows

2014-10-29 Thread David Cournapeau
On Wed, Oct 29, 2014 at 3:25 PM, Antoine Pitrou  wrote:

> On Thu, 30 Oct 2014 01:09:45 +1000
> Nick Coghlan  wrote:
> >
> > Lots of folks are happy with POSIX emulation layers on Windows, as
> > they're OK with "basically works" rather than "works like any other
> > native application". "Basically works" isn't sufficient for many
> > Python-on-Windows use cases though, so the core ABI is a platform
> > native one, rather than a POSIX emulation.
> >
> > This makes Python fit in more cleanly with other Windows applications,
> > but makes it harder to write Python applications that span both POSIX
> > and Windows.
>
> I don't really understanding why that's the case. Only the
> building and packaging may be more difficult, and that assumes you're
> familiar with mingw32. But mingw32, AFAIK, doesn't make the Windows
> runtime magically POSIX-compatible (Cygwin does, to some extent).
>

mingw32 is a more compliant C compiler (VS2008 does not implement much from
C89), and it does implement quite a few things not implemented in the C
runtime, especially for math.

But TBH, those are not compelling cases to build python itself on mingw,
only to better support C extensions with mingw.

David


> Regards
>
> Antoine.
>
>


Re: [Python-Dev] Software integrators vs end users (was Re: Language Summit notes)

2014-04-19 Thread David Cournapeau
On Fri, Apr 18, 2014 at 11:28 PM, Donald Stufft  wrote:

>
> On Apr 18, 2014, at 6:24 PM, Nick Coghlan  wrote:
>
> > On 18 April 2014 18:17, Paul Moore  wrote:
> >> On 18 April 2014 22:57, Donald Stufft  wrote:
> >>> Maybe Nick meant ``pip install ipython[all]`` but I don’t actually
> know what that
> >>> includes. I’ve never used ipython except for the console.
> >>
> >> The hard bit is the QT Console, but that's because there aren't wheels
> >> for PySide AFAICT.
> >
> > IPython, matplotlib, scikit-learn, NumPy, nltk, etc. The things that
> > let you break programming out of the low level box of controlling the
> > computer, and connect it directly to the more universal high level
> > task of understanding and visualising the world.
> >
> > Regards,
> > Nick.
> >
> >>
> >> Paul
> >
> >
> >
> > --
> > Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
> FWIW It’s been David Cournapeau’s opinion (on Twitter at least) that
> some/all/most
> (I’m not sure exactly which) of these can be handled by Wheels (they just
> aren’t right now!).
>

Indeed, and the scipy community has been working on making wheels for new
releases. The details of the format do not matter as much as having one
format: at Enthought, we have been using the egg format for years to deploy
python, C/C++ libraries and other assets, but we would have been using
wheels had they existed at the time. Adding features like pre-remove and
post-install hooks to wheels would be great, but that's a relatively
simpler discussion.

I agree with your sentiment that the main value of sumo distributions like
Anaconda, ActivePython or our own Canopy is the binary packaging + making
sure it all works together. There will always be some limitations in making
those sumo distributions work seamlessly with 'standard' python, but those
are pretty much the same issues as e.g. linux integrators have.

If the python packaging efforts help linux distribution integration, they
are very likely to help us (us == sumo distribution builders) too.

David


Re: [Python-Dev] Python Language Summit at PyCon: Agenda

2013-03-04 Thread David Cournapeau
On Mon, Mar 4, 2013 at 4:34 PM, Brett Cannon  wrote:
>
>
>
> On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw  wrote:
>>
>> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote:
>>
>> >It is of course possible for subunit and related tools to run their
>> >own implementation, but it seems ideal to me to have a common API
>> >which regular unittest, nose, py.test and others can all agree on and
>> >use : better reuse for pretty printers, GUI displays and the like
>> >depend on some common API.
>>
>> And One True Way of invoking and/or discovering how to invoke, a package's
>> test suite.
>
>
> How does unittest's test discovery not solve that?

It is not always obvious how to test a package when one is not
familiar with it. Are the tests in pkgname/tests or tests or ... ?

In the scientific community, we have used the convention of making the
test suite available at runtime with pkgname.tests().

David


Re: [Python-Dev] BDFL delegation for PEP 426 (PyPI metadata 1.3)

2013-02-04 Thread David Cournapeau
On Mon, Feb 4, 2013 at 2:01 AM, Vinay Sajip  wrote:
> David Cournapeau  writes:
>
>> You are taking those words out of the context in which they were
>> written: it is stated that the focus is on the general architecture
>
> OK, no offence was meant. Thanks for the clarification.

No worries, none taken :)

David


Re: [Python-Dev] BDFL delegation for PEP 426 (PyPI metadata 1.3)

2013-02-03 Thread David Cournapeau
On Sun, Feb 3, 2013 at 10:34 PM, Vinay Sajip  wrote:
> Simon Cross  writes:
>
>> For the record, all the reasons listed at [1] appear trivial.
>
> In Bento's author's own words - "Weak documentation", "Mediocre code quality",
> "at a lower level, a lot of code leaves to be desired" may be trivial if David
> is just being self-deprecating, but what if he isn't? Or perhaps that part of
> the page is out of date, and needs updating? I can certainly agree with the
> "Weak documentation" part of the assessment, but this makes it hard to assess
> the package as a whole. Note that I'm not sniping - writing good documentation
> is hard work.

You are taking those words out of the context in which they were
written: it is stated that the focus is on the general architecture
and low coupling, which are the main issues I saw with distutils. Bento
is designed to use multiple build backends (it can use distutils to
build C extensions, or waf, the latter being how numpy/scipy is
built with bento).

FWIW, I am not in favor of having bento blessed (or any other tool for
that matter). The fundamental mistake of the previous attempts at
packaging has been to formalize too early, or to impose de-facto
standards without much specification. That's why wheel and similar
efforts are the way forward: they tackle a narrow but well-defined
sub-problem of packaging. Thus, they can be reused by other libraries
to build higher abstractions. They are also less prone to the
'fatigue' that often arises in packaging efforts.

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-23 Thread David Cournapeau
On Sat, Jun 23, 2012 at 12:25 PM, Nick Coghlan  wrote:
> On Sat, Jun 23, 2012 at 8:37 PM, Lennart Regebro  wrote:
>> In the end, I think this discussion is very similar to all previous
>> packaging/building/installing discussions: There is a lot of emotions,
>> and a lot of willingness to declare that "X sucks" but very little
>> concrete explanation of *why* X sucks and why it can't be fixed.
>
> If you think that, you haven't read the whole thread. Thanks to this
> discussion, I now have a *much* clearer idea of what's broken, and a
> few ideas on what can be done to fix it.
>
> However, distutils-sig and python-ideas will be the place to post about those.

Nick, I am unfamiliar with python-ideas rules: should we continue
discussion in distutils-sig entirely, or are there some specific
topics that are more appropriate for python-ideas ?

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-22 Thread David Cournapeau
On Fri, Jun 22, 2012 at 9:11 PM, PJ Eby  wrote:
> On Fri, Jun 22, 2012 at 5:22 AM, Dag Sverre Seljebotn
>  wrote:
>>
>> On 06/22/2012 10:40 AM, Paul Moore wrote:
>>>
>>> On 22 June 2012 06:05, Nick Coghlan  wrote:

 distutils really only plays at the SRPM level - there is no defined OS
 neutral RPM equivalent. That's why I brought up the bdist_simple
 discussion earlier in the thread - if we can agree on a standard
 bdist_simple format, then we can more cleanly decouple the "build"
 step from the "install" step.
>>>
>>>
>>> That was essentially the key insight I was trying to communicate in my
>>> "think about the end users" comment. Thanks, Nick!
>>
>>
>> The subtlety here is that there's no way to know before building the
>> package what files should be installed. (For simple extensions, and perhaps
>> documentation, you could get away with ad-hoc rules or special support for
>> Sphinx and what-not, but there's no general solution that works in all
>> cases.)
>>
>> What Bento does is have one metadata file for the source-package, and
>> another metadata file (manifest) for the built-package. The latter is
>> normally generated by the build process (but follows a standard
>> nevertheless). Then that manifest is used for installation (through several
>> available methods).
>
>
> This is the right thing to do, IMO.
>
> Also, I think rather than bikeshedding the One Serialization To Rule Them
> All, it should only be the *built* manifest that is standardized for tool
> consumption, and leave source descriptions to end-user tools.  setup.cfg,
> bento.info, or whatever...  that part should NOT be the first thing
> designed, and should not be the part that's frozen in a spec, since it
> otherwise locks out the ability to enhance that format.

agreed. I may not have been very clear before, but the bento.info
format is really peripheral to what bento is about (it just happens
that what would become bento was started as a 2-hour proof of
concept for another packaging discussion 3 years ago :) ).

As for the build manifest, I have a few very outdated notes here:

http://cournape.github.com/Bento/html/hacking.html#build-manifest-and-building-installers

I will try to update them this weekend. I do have code to install, and to
produce eggs, msi, .exe and .mpkg installers from this format. The API is
kind of crappy/inconsistent, but the features are there, and there are even
some tests around it. I don't think it would be very difficult to hack
distutils2 to produce this build manifest.

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-22 Thread David Cournapeau
On Fri, Jun 22, 2012 at 2:24 PM, Paul Moore  wrote:

>
> I suppose if you're saying that "pip install lxml" should download and
> install for me Visual Studio, libxml2 sources and any dependencies,
> and run all the builds, then you're right. But I assume you're not. So
> why should I need to install Visual Studio just to *use* lxml?
>
> On the other hand, I concede that there are some grey areas between
> the 2 extremes. I don't know enough to do a proper review of the
> various cases. But I do think that there's a risk that the discussion,
> because it is necessarily driven by developers, forgets that "end
> users" really don't have some tools that a developer would consider
> "trivial" to have.

Binary installers are important: if you think lxml is hard on windows,
think about what it means to build fortran libraries and link them
with visual studio for scipy :) That's one of the reasons virtualenv +
pip is not that useful for numpy/scipy end users. Bento has code to
build basic binary installers in all the formats supported by
distutils except for RPM, and that code is by design mostly independent
of the rest. I would be happy to clean up that code to make it more
reusable (most of it is extracted from distutils/setuptools anyway).

But it should be completely orthogonal to the issue of package
description: if there is one thing that distutils got horribly wrong,
it is tying everything together. The decoupling is the key, because
otherwise one keeps discussing all the issues together, which is part
of what makes the discussion so hard. Different people have different
needs.

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-22 Thread David Cournapeau
On Fri, Jun 22, 2012 at 10:38 AM, Donald Stufft  wrote:
> On Friday, June 22, 2012 at 5:22 AM, Dag Sverre Seljebotn wrote:
>
>
> What Bento does is have one metadata file for the source-package, and
> another metadata file (manifest) for the built-package. The latter is
> normally generated by the build process (but follows a standard
> nevertheless). Then that manifest is used for installation (through
> several available methods).
>
> From what I understand, this dist.(yml|json|ini) would be replacing the
> manifest, not the bento.info, then. When bento builds a package compatible
> with the proposed format, instead of generating its own manifest it would
> generate the dist.(yml|json|ini).

If by manifest you mean the build manifest, then that's not desirable:
the manifest contains the explicit filenames, and those are
platform/environment specific. You don't want this to be user-facing.

The way it should work is:
   - package description (dist.yaml, setup.cfg, bento.info, whatever)
   - use this as input to the build process
   - build process produces a build manifest that is platform
specific. It should be extremely simple, no conditionals or anything,
and should ideally be fed to both python and non-python programs.
   - build manifest is then the sole input to the process building
installers (besides the actual build tree, of course).

Conceptually, after the build, you can do:

manifest = BuildManifest.from_file("build_manifest.json")
# needed so the path scheme can be changed depending on the installer format
manifest.update_path(path_configuration)
for category, source, target in manifest.iter_files():
    # the simple case is copying source to target, potentially using
    # the category label for category-specific handling
    ...

This was enough for me to do straight install, eggs, .exe and .msi
windows installers and .mpkg from that with a relatively simple API.
Bonus point: if you include this file inside the installers, you can
actually losslessly convert from one to the other.
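To make the iteration above concrete, a purely illustrative manifest (not
bento's actual schema) could map categories to (source, target) pairs, with
the target directories left as variables so each installer format can
relocate them:

{
    "executables": [["build/scripts/foo", "$bindir/foo"]],
    "extensions": [["build/lib/_spam.so", "$sitedir/spam/_spam.so"]],
    "datafiles": [["doc/foo.1", "$mandir/man1/foo.1"]]
}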

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-22 Thread David Cournapeau
On Fri, Jun 22, 2012 at 6:05 AM, Nick Coghlan  wrote:

> On Fri, Jun 22, 2012 at 10:01 AM, Donald Stufft 
> wrote:
> > The idea i'm hoping for is to stop worrying about one implementation over
> > another and
> > hoping to create a common format that all the tools can agree upon and
> > create/install.
>
> Right, and this is where it encouraged me to see in the Bento docs
> that David had cribbed from RPM in this regard (although I don't
> believe he has cribbed *enough*).
>
> A packaging system really needs to cope with two very different levels
> of packaging:
> 1. Source distributions (e.g. SRPMs). To get from this to useful
> software requires developer tools.
> 2. "Binary" distributions (e.g. RPMs). To get from this to useful
> software mainly requires a "file copy" utility (well, that and an
> archive decompressor).
>
> An SRPM is *just* a SPEC file and source tarball. That's it. To get
> from that to an installed product, you have a bunch of additional
> "BuildRequires" dependencies, along with %build and %install scripts
> and a %files definition that define what will be packaged up and
> included in the binary RPM. The exact nature of the metadata format
> doesn't really matter, what matters is that it's a documented standard
> that multiple tools can read.
>
> An RPM includes files that actually get installed on the target
> system. An RPM can be arch specific (if they include built binary
> bits) or "noarch" if they're platform neutral.
>
> distutils really only plays at the SRPM level - there is no defined OS
> neutral RPM equivalent. That's why I brought up the bdist_simple
> discussion earlier in the thread - if we can agree on a standard
> bdist_simple format, then we can more cleanly decouple the "build"
> step from the "install" step.
>
> I think one of the key things to learn from the SPEC file format is
> the configuration language it used for the various build phases: sh
> (technically, any shell on the system, but almost everyone just uses
> the default system shell)
>
> This is why you can integrate whatever build system you like with it:
> so long as you can invoke the build from the shell, then you can use
> it to make your RPM.
>
> Now, there's an obvious problem with this: it's completely useless
> from a *cross-platform* building point of view. Isn't it a shame
> there's no language we could use that would let us invoke build
> systems in a cross platform way? Oh, wait...
>
> So here's some sheer pie-in-the-sky speculation. If people like
> elements of this idea enough to run with it, great. If not... oh well:
>
> - I believe the "egg" term has way too much negative baggage (courtesy
> of easy_install), and find the full term Distribution to be too easily
> confused with "Linux distribution". However, "Python dist" is
> unambiguous (since the more typical abbreviation for an aggregate
> distribution is "distro"). Thus, I attempt to systematically refer to
> the objects used to distribute Python software from developers to
> users as "dists". In practice, this terminology is already used in
> many places (distutils, sdist, bdist_msi, bdist_rpm, the .dist-info
> format in PEP 376 etc). Thus, Python software is distributed as dists
> (either sdists or bdists), which may in turn be converted to distro
> packages (e.g. SRPMs and RPMs) for deployment to particular
> environments.
>
> - I reject setup.cfg, as I believe ini-style configuration files are
> not appropriate for a metadata format that needs to include file
> listings and code fragments
>
> - I reject bento.info, as I think if we accept
> yet-another-custom-configuration-file-format into the standard library
> instead of just using YAML, we're even crazier than is already
> apparent
>

I agree having yet another format is a bit crazy, and am actually
considering changing bento.info to be YAML. I initially went toward a
cabal-like syntax instead for the following reasons:
  - lack of conditionals (a must IMO; they are even more useful for
cross-platform stuff than they are for RPM only)
  - yaml becomes quite a bit verbose for some cases

I find JSON to be inappropriate because beyond the above issues, it does
not support comments, and it is significantly more verbose. That being
said, that's just syntax and what matters more is the features we allow:
  - I like the idea of categorizing like you did better than how it works
in bento, but I think one needs to be able to create one's own categories as
well. A category is just a mapping from a name to an install directory (see
http://cournape.github.com/Bento/html/tutorial.html#installed-data-files-datafiles-section,
but we could find another syntax of course).
  - I don't find the distinction between source and build very useful in
the yet-to-be-implemented description. Or maybe that's just a naming issue,
and it is just the same distinction as extra files vs installed files I
made in bento ? See next point
  - regarding build, I don't think we want to force people to implement
target ...

Re: [Python-Dev] Status of packaging in 3.3

2012-06-21 Thread David Cournapeau
On Thu, Jun 21, 2012 at 11:00 PM, Antoine Pitrou wrote:

> On Thu, 21 Jun 2012 22:46:58 +0200
> Dag Sverre Seljebotn  wrote:
> > > The other thing is, the folks in distutils2 and myself, have zero
> > > knowledge about compilers. That's why we got very frustrated not to see
> > > people with that knowledge come and help us in this area.
> >
> > Here's the flip side: If you have zero knowledge about compilers, it's
> > going to be almost impossible to have a meaningful discussion about a
> > compilation PEP.
>
> If a PEP is being discussed, even a packaging PEP, it involves all of
> python-dev, so Tarek and Éric not being knowledgeable in compilers is
> not a big problem.
>
> > The necessary prerequisites in this case is not merely "knowledge of
> > compilers". To avoid repeating mistakes of the past, the prerequisites
> > for a meaningful discussion is years of hard-worn experience building
> > software in various languages, on different platforms, using different
> > build tools.
>
> This is precisely the kind of knowledge that a PEP is aimed at
> distilling.
>

What would you imagine such a PEP would contain ? If you don't need to
customize the compilation, then I would say refactoring what's in distutils
is good enough. If you need customization, then I am convinced one should
just use one of the existing build tools (waf, fbuild, scons, etc…). Python
has more than enough of them already.

By refactoring, I mean extracting it completely from the command machinery,
and having an API similar to e.g. fbuild (
https://github.com/felix-lang/fbuild/blob/master/examples/c/fbuildroot.py),
i.e. you basically have a class PythonBuilder with a method
build_extension(name, sources, options). The key point is to remove any
dependency on commands.
If fbuild were not python3-specific, I would say just use that. It would
cover most use cases.
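To be explicit about the shape I mean (purely a sketch of the kind of API,
not fbuild's actual signatures):

class PythonBuilder:
    """Build C extensions without any reference to distutils commands."""

    def __init__(self, cc="cc", cflags=(), include_dirs=()):
        self.cc = cc
        self.cflags = list(cflags)
        self.include_dirs = list(include_dirs)

    def build_extension(self, name, sources, options=None):
        # Compile `sources` and link them into an extension module
        # called `name`, returning the path of the built artefact.
        # The point is the functional interface: no Command objects,
        # no global Distribution state.
        ...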

Actually,




> Regards
>
> Antoine.
>
>


Re: [Python-Dev] Status of packaging in 3.3

2012-06-21 Thread David Cournapeau
On Thu, Jun 21, 2012 at 10:04 PM, Tarek Ziadé  wrote:

> On 6/21/12 10:46 PM, Dag Sverre Seljebotn wrote:
> ...
>
>  I think we should, as you proposed, list a few projects w/ compilation
>>> needs -- from the simplest to the more complex, then see how a standard
>>> *description* could be used by any tool
>>>
>>
>> It's not clear to me what you mean by description. Package metadata,
>> install information or description of what/how to build?
>>
>> I hope you don't mean the latter, that would be insane...it would
>> effectively amount to creating a build tool that's both more elegant and
>> more powerful than any option that's currently already out there.
>>
>> Assuming you mean the former, that's what David did to create Bento.
>> Reading and understanding Bento and the design decisions going into it
>> would be a better use of time than redoing a discussion, and would at least
>> be a very good starting point.
>>
>
> What I mean is : what would it take to use Bento (or another tool) as the
> compiler in a distutils-based project, without having to change the
> distutils metadata.


I think there is a misunderstanding of what bento is: bento is not a
compiler or anything like that. It is a set of libraries that work together
to configure, build and install a python project.

Concretely, in bento, there is:
  - a part that builds a package description (Distribution-like in
distutils parlance) from a bento.info (a bit like setup.cfg)
  - a set of tools and commands around this package description
  - a set of "backends" to e.g. use waf to build C extensions with full and
automatic dependency analysis (rebuild this if this other thing is out of
date), parallel builds and configuration. Bento scripts build numpy more
efficiently and reliably while being 50% shorter than our setup.py.
  - a small library to build a distutils-compatible Distribution so that
you can write a three-line setup.py that takes all its info from
bento.info and allows pip to work.

Now, you could produce a similar package description from the setup.cfg to
be fed to bento, but I don't really see the point since AFAIK, bento.info
is strictly more powerful as a format than setup.cfg.

Another key point is that the commands around this package description are
almost entirely decoupled from each other: this is the hard part, and
something that is not really possible to do incrementally within the
current distutils design.

  - Commands don't know about each other, and dependencies between commands
are *external* to the commands. You say command "build" depends on command
"configure"; those dependencies are resolved at runtime. This allows 3rd
parties to insert new commands without interfering with each other (see
the sketch after this list).
  - options are registered and handled outside commands as well: each
command can query any other command's options. I believe something similar
is now available in distutils2, though. Bento allows adding arbitrary
configure options to customize library directories (à la autoconf).
  - bento internally has an explicit "database" of built files, with
associated categories, and the build command produces a build "manifest".
The build manifest plus the build tree completely define the input for the
install and installer commands. The different binary installers use the
same build manifest, and the build manifest is actually designed to
allow lossless conversion between different installers (e.g. wininst <->
msi, egg <-> mpkg on mac, etc…). This is what in principle allows using
make, gyp, etc… to produce this build manifest.
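To illustrate the command decoupling, here is a minimal sketch of the
idea (the names are hypothetical, not bento's actual API):

COMMANDS = {}
DEPENDENCIES = {}   # dependencies live *outside* the commands themselves

def register_command(name, run, depends_on=()):
    COMMANDS[name] = run
    DEPENDENCIES[name] = list(depends_on)

def run_command(name, done=None):
    done = set() if done is None else done
    for dep in DEPENDENCIES[name]:   # resolved at runtime, not hardcoded
        if dep not in done:
            run_command(dep, done)
    if name not in done:
        COMMANDS[name]()
        done.add(name)

register_command("configure", lambda: None)
register_command("build", lambda: None, depends_on=["configure"])
run_command("build")   # runs configure first, then build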


>
> "It can deal with simple distutils-like builds using a bundled build tool"
>  => If I understand this correctly, does that mean that Bento can build a
> distutils project with the distutils Metadata ?
>

I think Dag meant that bento has a system where you can basically do

# setup.py
from distutils.core import setup
import bento.distutils
bento.distutils.monkey_patch()
setup()

and this setup.py will automatically build a distutils Distribution
populated from bento.info. This allows a bento package to be installable
with pip or anything else that expects a setup.py.

This allows for interoperability without having to inherit all the
distutils issues.

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-21 Thread David Cournapeau
On Thu, Jun 21, 2012 at 12:58 PM, Nick Coghlan  wrote:

> On Thu, Jun 21, 2012 at 7:28 PM, David Cournapeau 
> wrote:
> > If specifying install dependencies is the killer feature of setuptools,
> why
> > can't we have a very simple module that adds the necessary 3 keywords to
> > record it, and let 3rd party tools deal with it as they wish? That would
> > not even require specifying the format, and would leave us more time to deal
> > with the other, more difficult questions.
>
> That low level role is filled by PEP 345 (the latest PyPI metadata
> format, which adds the new fields), PEP 376 (local installation
> database) and PEP 386 (version numbering schema).
>
> The corresponding packaging submodules are the ones that were being
> considered for retention as a reference implementation in 3.3, but are
> still slated for removal along with the rest of the package (the
> reference implementations will remain available as part of distutils2
> on PyPI).
>

I understand the code is already implemented, but I meant that it may be a
good idea to have a simple, self-contained module that provides just the
necessary bits for the "setuptools killer feature", and lets competing
tools deal with it as they please.



> Whatever UI a Python packaging solution presents to a user, it needs
> to support those 3 PEPs on the back end for interoperability with
> other tools (including, eventually, the packaging module in the
> standard library).
>
> Your feedback on the commands/compilers design sounds valuable, and I
> would be very interested in seeing a PEP targeting that aspect of the
> new packaging module (if you look at the start of this thread, the
> failure to improve the compiler API is one of the reasons for pulling
> the code from 3.3).


The problem with compilation is not just the way the compiler classes work.
It is how they interact with commands and the like, which ends up being
most of the original distutils code. What's wrong with distutils is the
whole underlying model, if one can call it that. No PEP will fix the issue
if the premise is to work within that model.

There are similar kinds of arguments around the extensibility of distutils:
it is not just about monkey-patching, but about what kind of API you offer
to allow for extensibility, and I think the only way to design this sensibly
is to work on real packages and iterate, not to write a PEP as a first step.

David


Re: [Python-Dev] Status of packaging in 3.3

2012-06-21 Thread David Cournapeau
On Thu, Jun 21, 2012 at 9:45 AM, Nick Coghlan  wrote:

> On Thu, Jun 21, 2012 at 2:44 PM, Chris McDonough  wrote:
> > All of these are really pretty minor issues compared with the main
> benefit
> > of not needing to ship everything with everything else. The killer
> feature
> > is that developers can specify dependencies and users can have those
> > dependencies installed automatically in a cross-platform way.  Everything
> > else is complete noise if this use case is not served.
>
> Cool. This is the kind of thing we need recorded in a PEP - there's a
> lot of domain knowledge floating around in the heads of packaging
> folks that needs to be captured so we can know *what the addition of
> packaging to the standard library is intended to fix*.
>
> And, like it or not, setuptools has a serious PR problem due to the
> fact it monkeypatches the standard library, uses *.pth files to alter
> sys.path for every installed application by default, actually *uses*
> the ability to run code in *.pth files and has hard to follow
> documentation to boot. I *don't* trust that I fully understand the
> import system on any machine with setuptools installed, because it is
> demonstrably happy to install state to the file system that will
> affect *all* Python programs running on the machine.
>
> A packaging PEP needs to explain:
> - what needs to be done to eliminate any need for monkeypatching
> - what's involved in making sure that *.pth are *not* needed by default
> - making sure that executable code in implicitly loaded *.pth files
> isn't used *at all*
>

It is not a PEP, but here are a few reasons why extending distutils is
difficult (taken from our experience in the scipy community, which has by
far the biggest extension of distutils AFAIK):

http://cournape.github.com/Bento/html/faq.html#why-not-extending-existing-tools-distutils-etc

While I believe setuptools has been a net negative for the scipy community
because of the way it works and for the reason you mentioned, I think it is
fair to say it is not really possible to do it any differently if you rely
on distutils.

If specifying install dependencies is the killer feature of setuptools, why
can't we have a very simple module that adds the necessary 3 keywords to
record it, and let 3rd party tools deal with it as they wish? That would
not even require specifying the format, and would leave us more time to deal
with the other, more difficult questions.
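Such a module could be tiny; a sketch of the idea (the keyword names
follow setuptools, the rest is made up):

from distutils.core import setup as _setup

_DEP_KEYWORDS = ("install_requires", "extras_require", "dependency_links")

def setup(**kwargs):
    # record the dependency keywords, pass everything else to distutils
    deps = dict((k, kwargs.pop(k)) for k in _DEP_KEYWORDS if k in kwargs)
    dist = _setup(**kwargs)
    dist.dependencies = deps   # left for 3rd party tools to consume
    return dist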

David


Re: [Python-Dev] SWIG (was Re: Ctypes and the stdlib)

2011-08-29 Thread David Cournapeau
On Mon, Aug 29, 2011 at 7:14 PM, Eli Bendersky  wrote:
> 
>>
>> I've sometimes thought it might be interesting to create a Swig
>> replacement purely in Python.  When I work on the PLY project, this is often
>> what I think about.   In that project, I've actually built a number of the
>> parsing tools that would be useful in creating such a thing.   The only
>> catch is that when I start thinking along these lines, I usually reach a
>> point where I say "nah, I'll just write the whole application in Python."
>>
>> Anyways, this is probably way more than anyone wants to know about Swig.
>> Getting back to the original topic of using it to make standard library
>> modules, I just don't know.   I think you probably could have some success
>> with an automatic code generator of some kind.  I'm just not sure it should
>> take the Swig approach of parsing C++ headers.  I think you could do better.
>>
>
> Dave,
>
> Having written a full C99 parser (http://code.google.com/p/pycparser/) based
> on your (excellent) PLY library, my impression is that the problem is with
> the problem, not with the solution. Strange sentence, I know :-) What I mean
> is that parsing C++ (even its headers) is inherently hard, which is why the
> solutions tend to grow so complex. Even with the modest C99, clean and
> simple solutions based on theoretical approaches (like PLY with its
> generated LALR parsers) tend to run into walls [*]. C++ is an order of
> magnitude harder.
>
> If I went to implement something like SWIG today, I would almost surely base
> my implementation on Clang (http://clang.llvm.org/). They have a full C++
> parser (carefully hand-crafted, quite admirably keeping a relatively
> comprehensible code-base for such a task) used in a real compiler front-end,
> and a flexible library structure aimed at creating tools. There are also
> Python bindings that would allow to do most of the interesting
> Python-interface-specific work in Python - parse the C++ headers using
> Clang's existing parser into ASTs - then generate ctypes / extensions from
> that, *in Python*.
>
> The community is also gladly accepting contributions. I've had some fixes
> committed for the Python bindings and the C interfaces that tie them to
> Clang, and got the impression from Clang's core devs that further
> contributions will be most welcome. So whatever is missing from the Python
> bindings can be easily added.

Agreed, I know some people have looked in that direction in the
scientific python community (to generate .pxd for cython). I wrote one
of the hacks Stefan referred to (based on ctypeslib using gccxml), and
using clang makes much more sense.

To go back to the initial issue, using cython to wrap C code makes a
lot of sense. In the scipy community, I believe there is broad
agreement that most code which would require C/C++ should be written
in cython instead (numpy and scipy already do so a bit). I personally
cannot see many situations where writing wrappers in C by hand works
better than cython (especially since cython handles python2/3
automatically for you).
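For illustration, a cython wrapper is typically just a few declarations
(the library, header and function names below are made up):

# spam.pyx
cdef extern from "spamlib.h":
    int spam_count(char *filename)

def count(filename):
    # cython generates all the CPython glue code for you
    return spam_count(filename)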

cheers,

David


Re: [Python-Dev] Pass possibly imcompatible options to distutil's ccompiler

2011-04-12 Thread David Cournapeau
On Tue, Apr 12, 2011 at 8:32 AM, Nick Coghlan  wrote:
> On Tue, Apr 12, 2011 at 7:41 AM, Lukas Lueg  wrote:
>> Any other ideas on how to solve this in a better way?
>
> Have you tried with distutils2? If it can't help you, it should really
> be looked into before the packaging API is locked for 3.3.

distutils2 is almost identical to distutils as far as compilation goes,
so I am not sure why it would help the OP.

@Lukas: if you want to check for compiler flag support, the best way
to do it in distutils is to use the config support: look in particular
at the try_compile/try_link methods. The scheme is basically:

# a runnable sketch (assumes a unix-like compiler, where the options
# live in the compiler_so list)
from distutils.command.config import config

class check_flags(config):
    def run(self):
        body = "int main(void) { return 0; }"  # trivial test program
        self.try_compile(body)  # check that the current option set is sane
        for flag in ("-fno-strict-aliasing",):  # flags you are interested in
            saved = list(self.compiler.compiler_so)  # save compiler options
            self.compiler.compiler_so.append(flag)   # add the candidate flag
            if not self.try_compile(body):
                self.compiler.compiler_so = saved    # restore on failure

cheers,

David


Re: [Python-Dev] Hg: inter-branch workflow

2011-03-21 Thread David Cournapeau
On Tue, Mar 22, 2011 at 1:20 AM, Barry Warsaw  wrote:
> On Mar 20, 2011, at 04:39 PM, Georg Brandl wrote:
>
>>On 20.03.2011 16:21, Guido van Rossum wrote:
>>> What is "rebase"? Why does everyone want it and hate it at the same time?
>>
>>Basically, rebase is a way to avoid having pointless merge commits on the
>>same branch.
>
> There's something I don't understand about rebase.  It seems like most git and
> hg users I hear from advocate rebase, while (ISTM) few Bazaar users do.

I cannot speak for the communities in general, but that's mostly
cultural from what I have seen, although the tools reflect the
cultural aspect (like fast-forward being the default for merge in
git). Hg until recently did not have rebase at all, so it seems less
ingrained.

The reason I like using rebase in general is that it fits the
way I like thinking about git: as a patch management system when I
work alone on a branch. I don't think private history is that useful
in general, but that depends on the project of course. Another aspect
is that some people learnt git first through git-svn, where rebase is
the de facto way of working.

The issue of committing something that does not correspond to a tested
working tree is moot IMO: the only way to do it correctly and reliably
is to have some kind of gateway that actually tests the commit before
adding it for good to the reference repo. In my experience, the most
common way to commit a broken working tree is to forget to add a new
file or to remove an old one (e.g. the code depends on some old file
which has been removed while the old .pyc is still there), and none of
svn/bzr/hg/git prevents that.

David


Re: [Python-Dev] Backport troubles with mercurial

2010-12-29 Thread David Cournapeau
On Wed, Dec 29, 2010 at 5:02 PM, Amaury Forgeot d'Arc
 wrote:
> 2010/12/29 David Cournapeau 
>>
>> The easiest way I found to emulate git cherry-pick (which does exactly
>> what you want) with hg is to use import/export commands:
>> http://mercurial.selenic.com/wiki/CommunicatingChanges
>>
>> It is indeed quite a pain to use in my experience, because you cannot
>> easily refer to the original commit the cherry pick is coming from
>> (i.e. no equivalent to git cherry-pick -x), and the conflict
>> resolution is quite dumb.
>
> This is precisely why I proposed a specific example.
> Which precise steps do you take?
> How much typing or manual copy/paste is required for this very simple case?
> Can you do the merge in less than 10 minutes?

I don't know in this specific case. As I said, when I have to use hg,
that's the technique I use, and you get the issues you mention. That's
an hg limitation AFAICS.

> And finally the biased question:
> can you find one improvement over the current situation with svn?

I am not involved in the hg conversion process nor its decision (I am
not even a python committer). Cherry picking is actually easier to do
with svn by "accident", because its merge method, up to 1.5 at least,
was really dumb and never remembered the ancestors of a previous
merge.

As for a few data points which may or may not be relevant: in numpy
we converted from svn to git recently, and it has worked pretty
well, with numerous new contributions happening and, better, new
contributors appearing. I have been the release manager for numpy for
several years, and as such had to do the kind of tasks you mention
numerous times with svn, and the only words that come to mind when
remembering this period would not be appropriate on a public mailing
list: I always found svn to be horrible. I started using git to make
my life as release manager simpler, actually. I would be surprised if
python's situation did not end up being similar to numpy's. Other
projects related to numpy moved to a DVCS before (ipython, nipy,
the learn scikit) and none of them ever regretted it AFAIK, and
sometimes the people who became the most vocal proponents of the new
tool were the most sceptical ones before. Certainly, very few people if
any asked to revert the process.

cheers,

David


Re: [Python-Dev] Backport troubles with mercurial

2010-12-28 Thread David Cournapeau
On Wed, Dec 29, 2010 at 9:13 AM, Amaury Forgeot d'Arc
 wrote:
> Hello,
> The PyPy project recently switched from svn to mercurial. Since this day I
> have some
> difficulties to perform simple tasks, and my questions did not receive
> satisfying answers.
> I was sure the Python project would have the same issues;
> so I started from the repositories in http://code.python.org/hg/ and
> tried simple
> merges between Python versions.
> I would like several people to try the same example, and tell how they
> handle it.
> I'm new to mercurial, and I may have missed something important!
>
> Let's say you have to do the simple change shown in the diff below,
> and want to "fix" the 3 usual versions: py3k, release31-maint,
> release27-maint.
> The diff was made against py3k.
> How would you do it with Mercurial? Please try to do it for real!

The easiest way I found to emulate git cherry-pick (which does exactly
what you want) with hg is to use import/export commands:
http://mercurial.selenic.com/wiki/CommunicatingChanges

It is indeed quite a pain to use in my experience, because you cannot
easily refer to the original commit the cherry pick is coming from
(i.e. no equivalent to git cherry-pick -x), and the conflict
resolution is quite dumb. One alternative is to be careful about where
you apply your patch
(http://stackoverflow.com/questions/3926906/what-is-the-most-efficient-way-to-handle-hg-import-rejects),
but that's not very convenient either.
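Concretely, the import/export dance looks something like this (the
revision and branch names are made up):

$ hg export -r 1234 > fix.patch    # on the branch that has the fix
$ hg update release27-maint        # switch to the target branch
$ hg import fix.patch              # apply it as a new, unrelated commit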

cheers,

David


Re: [Python-Dev] [Catalog-sig] egg_info in PyPI

2010-09-18 Thread David Cournapeau
On Sat, Sep 18, 2010 at 7:50 PM, Michael Foord
 wrote:
>  On 18/09/2010 11:48, David Cournapeau wrote:
>>
>> On Sat, Sep 18, 2010 at 7:39 PM, Michael Foord
>>   wrote:
>>>
>>>  On 18/09/2010 11:03, "Martin v. Löwis" wrote:
>>>>
>>>> That's really sad. So people will have to wait a few years to
>>>> efficiently
>>>> implement tools that they could implement today.
>>>
>>> Why a few years?
>>
>> That's the time it will take for all packages to support distutils2 ?
>
> Not "all packages" support setuptools.

Sure, but supporting setuptools was kind of possible for packages
relying heavily on distutils, even if it was neither simple nor robust.
Distutils2 being incompatible API-wise with distutils, I am not sure
it will be as "easy" as with setuptools. It may be, but the only way
to know is to do it, and the incentive is rather unclear. It means that,
in any case, a lot of infrastructure will have to support both "standards"
for the time being.

cheers,

David


Re: [Python-Dev] [Catalog-sig] egg_info in PyPI

2010-09-18 Thread David Cournapeau
On Sat, Sep 18, 2010 at 7:39 PM, Michael Foord
 wrote:
>  On 18/09/2010 11:03, "Martin v. Löwis" wrote:

>> That's really sad. So people will have to wait a few years to efficiently
>> implement tools that they could implement today.
>
> Why a few years?

That's the time it will take for all packages to support distutils2?

cheers,

David


Re: [Python-Dev] PEP 384 status

2010-09-08 Thread David Cournapeau
On Wed, Sep 8, 2010 at 7:59 PM, Nick Coghlan  wrote:
> On Wed, Sep 8, 2010 at 6:34 PM, David Cournapeau  wrote:
>> In other words, the problem mainly arises when you need to integrate
>> libraries which you can not recompile with the compiler used by
>> python, because the code is not visual-studio compatible, or because
>> the library is only available in binary form.
>
> In my case, I was building on an old dev system which only has VC6,
> but needed to link against Python 2.4 (which was compiled with MSVC
> 2005). The build process didn't use distutils, so that didn't affect
> anything.

ok, I was confused by "I just recompiled".

>
> It works, you just have to know what APIs you have to avoid.

The critical point is that you cannot always do that. Returning to my
example of mkstemp: I have a C library which has an fdopen-like
function (built with one C runtime, not the same one as python); there
is no way that I know of to use this API with a file descriptor obtained
from the tempfile.mkstemp function. The only solution is to build my own
C extension around the C mkstemp, built with the same runtime as my
library, and make that available to python.

cheers,

David


Re: [Python-Dev] PEP 384 status

2010-09-08 Thread David Cournapeau
On Wed, Sep 8, 2010 at 5:19 PM, Nick Coghlan  wrote:
> On Wed, Sep 8, 2010 at 8:34 AM, David Cournapeau  wrote:
>> I would turn the question around: what are the cases where you manage
>> to mix CRT and not getting any issues ? This has never worked in my
>> own experience in the context of python extensions,
>
> I've done it quite a bit over the years with SWIG-wrapped libraries
> where all I did was recompile my extension for new versions of Python.
> It works fine, so long as you avoid the very APIs that PEP 384 is
> blocking from the stable ABI.

What do you mean by recompiling? Using Mingw? Normally, if you just
recompile your extensions without using special distutils options, you
use the same compiler as the one used by python, which generally
avoids mixing runtimes.

In other words, the problem mainly arises when you need to integrate
libraries which you cannot recompile with the compiler used by
python, because the code is not visual-studio compatible, or because
the library is only available in binary form.

cheers,

David


Re: [Python-Dev] PEP 384 status

2010-09-07 Thread David Cournapeau
On Wed, Sep 8, 2010 at 2:48 AM, M.-A. Lemburg  wrote:
> "Martin v. Löwis" wrote:
>>
>>> This sounds like the issues such a mix can cause are mostly
>>> theoretical and don't really bother much in practice, so
>>> PEP 384 on Windows does have a chance :-)
>>
>> Actually, the CRT issues (FILE* in particular) have been
>> causing real crashes for many years, for many people.
>
> Do you have some pointers ?

I don't have a bug report, but I have had issues in my audiolab
package, where the package itself is built with visual studio, but the
library it is linked against is built with mingw (libsndfile, which
cannot be built with visual studio). The problem happens when you want
to use the mingw-built library which accepts a file descriptor that
you create with mkstemp inside code built with Visual Studio.

This issue is actually quite easy to reproduce.

> I don't remember this being a real practical issue, at least
> not for different versions of the MS CRTs.

I would turn the question around: what are the cases where you manage
to mix CRTs and not get any issues? This has never worked in my
own experience in the context of python extensions,

cheers,

David


Re: [Python-Dev] PEP 3149 thoughts

2010-09-05 Thread David Cournapeau
On Mon, Sep 6, 2010 at 3:16 AM, Georg Brandl  wrote:
> Am 05.09.2010 19:22, schrieb "Martin v. Löwis":
>> I know the PEP is accepted, but I would still like to see some
>> changes/clarifications.
>>
>> 1. What is the effect of this PEP on Windows? Is this a Linux-only
>>    feature? If not, who is going to provide the changes for Windows?
>>    (More specifically: if this is indeed meant for Windows, and
>>    if no Windows implementation arrives before 3.2b1, I'd ask that
>>    the changes be rolled back, and integration is deferred until there
>>    is Windows support)
>
> I don't think Windows support is planned or necessary; after all, isn't the
> default installation mode on Windows to install every Python version into
> its own root directory (C:\PythonXY)?
>
>> 2. Why does the PEP recommend installing stuff into /usr/share/pyshared?
>>    According to the Linux FHS, /usr/share is for Architecture-
>>    independent data, see
>>
>> http://www.pathname.com/fhs/pub/fhs-2.3.html#USRSHAREARCHITECTUREINDEPENDENTDATA
>>    In particular, it's objective is that you can NFS-share it across,
>>    say, both SPARC Linux and x86 Linux. I believe the PEP would break
>>    this, as SPARC and x86 executables would override each other.
>
> Indeed.  I think this is probably just an oversight and should be corrected
> in the PEP.  However, it's the distributions' call anyway.

Reading the related paragraph in the PEP, it seems to me that the use
of "package" as in "these distributions install third party (i.e.
non-standard library) packages ..." is too vague. On Ubuntu at least,
the package content is spread out over different paths, and only
*some* files of the package are put into ...pyshared (namely, the ones
that can indeed be shared across different versions, that is only the
.py files in general, with the .so and the .pyc in /usr/lib/...). I
guess this is obvious for Barry and other people accustomed to
packaging on debian-like systems, but not so much for others.
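To make this concrete, a package foo with one C extension ends up spread
out roughly like this (from memory, so the details may be off for any
given release):

/usr/share/pyshared/foo/__init__.py                # shared .py source
/usr/lib/python2.6/dist-packages/foo/__init__.py   # symlink into pyshared
/usr/lib/python2.6/dist-packages/foo/__init__.pyc  # version-specific
/usr/lib/python2.6/dist-packages/foo/_cext.so      # version-specific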

Maybe the PEP would benefit from a stronger example (for example, how
a simple package with a C extension is actually installed on the
system), but OTOH, this keeps changing between debian/ubuntu versions,
so a complete example may be more confusing.

cheers,

David


Re: [Python-Dev] PEP 384 status

2010-08-30 Thread David Cournapeau
On Tue, Aug 31, 2010 at 6:54 AM, Nick Coghlan  wrote:


> Hmm... that last point is a bit of any issue actually, since it also
> flows the other way (changes made via the locale module won't be
> visible to any extension modules using a different C runtime). So I
> suspect mixing C runtimes is still going to come with the caveat of
> potential locale related glitches.

As far as IO is concerned, FILE* is just a special case of a more
generic issue, though, so maybe this could be reworded a bit. For
example, file descriptors cannot be shared between runtimes either.

cheers,

David


Re: [Python-Dev] PEP 384 status

2010-08-29 Thread David Cournapeau
On Mon, Aug 30, 2010 at 6:43 AM, Antoine Pitrou  wrote:
> On Mon, 30 Aug 2010 07:31:34 +1000
> Nick Coghlan  wrote:
>> Since part of the point of
>> PEP 384 is to support multiple versions of the C runtime in a single
>> process, [...]
>
> I think that's quite a maximalist goal. The point of PEP 384 should be
> to define a standard API for Python, (hopefully) spanning multiple
> versions. Whether the API effectively guarantees a standard ABI can
> only depend on whether the system itself hasn't changed its own
> conventions (including, for example, function call conventions, or the
> binary representation of standard C types).
>
> In other words, PEP 384 should only care to stabilize the ABI as
> long as the underlying system doesn't change. It sounds a bit foolish
> for us to try to hide potential unstabilities in the underlying
> platform. And it's equally foolish to try to forbid people from using
> well-known system facilities such as FILE* or (worse) errno.
>
> So, perhaps the C API docs can simply mention the caveat of using FILE*
> (and perhaps errno, if there's a problem there as well) for C extensions
> under Windows. C extension writers are (usually) consenting adults, for
> all.

This significantly decreases the value of such an API, to the point of
making it useless on windows, since historically different python
versions have been built with different runtimes. And I would think that
windows is the platform where PEP 384 would be the most useful - at
least it would be for numpy/scipy, where those runtime issues have
bitten us several times (and are painful to debug, especially when you
don't know windows so well).

cheers,

David


Re: [Python-Dev] Possible bug in randint when importing pylab?

2010-08-19 Thread David Cournapeau
On Fri, Aug 20, 2010 at 1:02 AM, Amaury Forgeot d'Arc
 wrote:
> Hi,
>
> 2010/8/19 Timothy Kinney :
>> I am getting some unexpected behavior in Python 2.6.4 on a WinXP SP3 box.
>
> This mailing list is for development *of* python, not about
> development *with* python.
> Your question should be directed to the comp.lang.python newsgroup, or
> the python-list mailing list.

actually, the numpy and/or matplotlib ML would be even better in that case :)

David


Re: [Python-Dev] Fixing #7175: a standard location for Python config files

2010-08-12 Thread David Cournapeau
On Fri, Aug 13, 2010 at 7:29 AM, Antoine Pitrou  wrote:
> On Thu, 12 Aug 2010 18:14:44 -0400
> Glyph Lefkowitz  wrote:
>>
>> On Aug 12, 2010, at 6:30 AM, Tim Golden wrote:
>>
>> > I don't care how many stats we're doing
>>
>> You might not, but I certainly do.  And I can guarantee you that the
>> authors of command-line tools that have to start up in under ten
>> seconds, for example 'bzr', care too.
>
> The idea that import time is dominated by stat() calls sounds rather
> undemonstrated (and unlikely) to me.

It may be, depending on what you import. I certainly have seen (and
profiled) it. In my experience, stat calls and regex compilation often
come out at the top of the list of culprits for slow imports. In the
case of setuptools namespace packages, there was a thread on April 23rd
on distutils-sig about this issue: most of the slowdown came from
unneeded stats (and symlink translations).
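This is easy to check on Linux; something along these lines gives a
per-syscall count for a single import (the module is arbitrary):

$ strace -c -e trace=stat,open python -c "import httplib" 2>&1 | tail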

cheers,

David


Re: [Python-Dev] mingw support?

2010-08-12 Thread David Cournapeau
On Wed, Aug 11, 2010 at 10:21 PM, Sturla Molden  wrote:
>
> "David Cournapeau":
>> Autotools only help for posix-like platforms. They are certainly a big
>> hindrance on windows platform in general,
>
> That is why mingw has MSYS.

I know of MSYS, but it is not very pleasant to use, if only because it
is extremely slow. When I need to build things for windows, I much
prefer cross compiling to using MSYS. I also think that cross
compilation is more useful than a native mingw build alone - there are
patches for cross compilation, but I don't know their current status,

cheers,

David


Re: [Python-Dev] mingw support?

2010-08-10 Thread David Cournapeau
On Tue, Aug 10, 2010 at 11:06 PM,   wrote:
> On Mon, Aug 09, 2010 at 06:55:29PM -0400, Terry Reedy wrote:
>> On 8/9/2010 2:47 PM, Sturla Molden wrote:
>> >> Terry Reedy:
>> >
>> >>     MingW has become less attractive in recent years by the difficulty
>> >> in downloading and installing a current version and finding out how to
>> >> do so. Some projects have moved on to the TDM packaging of MingW.
>> >>
>> >> http://tdm-gcc.tdragon.net/
>>
>> Someone else deserves credit for writing that and giving that link ;-)
>
> Yes, that was a great link, thanks. It works fine for me.
>
> The reason I was bringing up this topic again was that I think the gnu
> autotools have been made for exactly this purpose, to port software to
> different platforms,

Autotools only help on posix-like platforms. They are certainly a big
hindrance on the windows platform in general,

cheers,

David


Re: [Python-Dev] PEP 376 proposed changes for basic plugins support

2010-08-03 Thread David Cournapeau
On Tue, Aug 3, 2010 at 11:35 PM, Michael Foord
 wrote:
> On 03/08/2010 15:19, David Cournapeau wrote:
>>
>> On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou
>>  wrote:
>>
>>>
>>> On Tue, 03 Aug 2010 10:28:07 +0200
>>> "M.-A. Lemburg"  wrote:
>>>
>>>>>
>>>>> Don't forget system packaging tools like .deb, .rpm, etc., which do not
>>>>> generally take kindly to updating such things.  For better or worse,
>>>>> the
>>>>> filesystem *is* our "central database" these days.
>>>>>
>>>>
>>>> I don't think that's a problem: the SQLite database would be a cache
>>>> like e.g. a font cache or TCSH command cache, not a replacement of
>>>> the meta files stored in directories.
>>>>
>>>> Such a database would solve many things at once: faster access to
>>>> the meta-data of installed packages, fewer I/O calls during startup,
>>>> more flexible ways of doing queries on the meta-data, needed for
>>>> introspection and discovery, etc.
>>>>
>>>
>>> If the cache can become stale because of system package management
>>> tools, how do you avoid I/O calls while checking that the database is
>>> fresh enough at startup?
>>>
>>
>> There is a tension between the two approaches: either you want
>> "auto-discovery", or you want a system with explicit registration and
>> only the registered plugins would be visible to the system.
>>
>>
>
> Not true. Auto-discovery provides an API for applications to tell users
> which plugins are *available* whilst still allowing the app to decide which
> are active / enabled. It still leaves full control in the hands of the
> application.

Maybe I was not clear, but I don't understand how your statement
contradicts mine. The issue is how to determine which plugins are
available: if you don't have explicit registration, you need to
constantly re-stat every potential location (short of using OS-specific
facilities to get notifications of filesystem changes). The current
python solutions that I am familiar with are prohibitively
compute-intensive for this reason (think about what happens when you
stat locations on NFS shares).

David


Re: [Python-Dev] PEP 376 proposed changes for basic plugins support

2010-08-03 Thread David Cournapeau
On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou  wrote:
> On Tue, 03 Aug 2010 10:28:07 +0200
> "M.-A. Lemburg"  wrote:
>> >
>> > Don't forget system packaging tools like .deb, .rpm, etc., which do not
>> > generally take kindly to updating such things.  For better or worse, the
>> > filesystem *is* our "central database" these days.
>>
>> I don't think that's a problem: the SQLite database would be a cache
>> like e.g. a font cache or TCSH command cache, not a replacement of
>> the meta files stored in directories.
>>
>> Such a database would solve many things at once: faster access to
>> the meta-data of installed packages, fewer I/O calls during startup,
>> more flexible ways of doing queries on the meta-data, needed for
>> introspection and discovery, etc.
>
> If the cache can become stale because of system package management
> tools, how do you avoid I/O calls while checking that the database is
> fresh enough at startup?

There is a tension between the two approaches: either you want
"auto-discovery", or you want a system with explicit registration where
only the registered plugins are visible to the system.

System-wise, I much prefer the latter, and auto-discovery should be
left to the application's discretion IMO. A library to deal with this
at the *app* level may be fine. But the current system of loading
packages and co is already complex enough in python that anything that
adds complexity at the system (interpreter) level sounds like a bad idea.

David


Re: [Python-Dev] proto-pep: plugin proposal (for unittest)

2010-07-30 Thread David Cournapeau
On Fri, Jul 30, 2010 at 10:23 PM, Michael Foord
 wrote:
> For those of you who found this document perhaps just a little bit too long,
> I've written up a *much* shorter intro to the plugin system (including how
> to get the prototype) on my blog:
>
>     http://www.voidspace.org.uk/python/weblog/arch_d7_2010_07_24.shtml#e1186

This looks nice and simple, but I am a bit worried about the
configuration file for registration. My experience is that end users
don't like editing files much. I understand that may be considered
bikeshedding, but have you considered a system analogous to bzr's
instead? A plugin is a directory somewhere, which means that disabling
it is just a matter of removing a directory. In my experience, it is
more reliable from a user POV than e.g. the hg way of doing things. The
plugin system of bzr is one of the things that I still consider the best
in its category, even though I stopped using bzr quite some time ago.
The registration was incredibly robust and easy to use from both a user
and a developer POV,
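A minimal sketch of that kind of registration, assuming a bzr-style
layout where each plugin is a directory under a well-known path (the
path is hypothetical):

import os

PLUGIN_DIR = os.path.expanduser("~/.myapp/plugins")

def discover_plugins():
    # a plugin is enabled iff its directory exists; rm -r disables it
    if not os.path.isdir(PLUGIN_DIR):
        return []
    return [name for name in sorted(os.listdir(PLUGIN_DIR))
            if os.path.isdir(os.path.join(PLUGIN_DIR, name))]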

David


Re: [Python-Dev] Python 3 optimizations...

2010-07-22 Thread David Cournapeau
On Thu, Jul 22, 2010 at 10:08 PM, stefan brunthaler
 wrote:
>> Is the source code under an open source non-copyleft license?
>>
> I am (unfortunately) not employed or funded by anybody, so I think
> that I can license/release the code as I see fit.

If you did this work under your PhD program, you may be more
restricted than you think. You may want to check with your adviser
first,

cheers,

David


Re: [Python-Dev] More detailed build instructions for Windows

2010-07-02 Thread David Cournapeau
On Sat, Jul 3, 2010 at 2:26 PM, Reid Kleckner  wrote:
> Hey folks,
>
> I'm trying to test out a patch to add a timeout in subprocess.py on
> Windows, so I need to build Python with Visual Studio.  The docs say
> the files in PCBuild/ work with VC 9 and newer.  I downloaded Visual
> C++ 2010 Express, and it needs to convert the .vcproj files into
> .vcxproj files, but it fails.
>
> I can't figure out where to get VC 9, all I see is 2008 and 2010.

VS 2008 == VC 9 == MSVC 15

David


Re: [Python-Dev] SVN <-> HG workflow to split Python Library by Module

2010-07-02 Thread David Cournapeau
On Sat, Jul 3, 2010 at 9:34 AM, Brett Cannon  wrote:
> On Fri, Jul 2, 2010 at 17:17, David Cournapeau  wrote:
>> On Sat, Jul 3, 2010 at 6:37 AM, Brett Cannon  wrote:
>>> On Fri, Jul 2, 2010 at 12:25, anatoly techtonik  wrote:
>>>> I planned to publish this proposal when it is finally ready and tested
>>>> with an assumption that Subversion repository will be online and
>>>> up-to-date after Mercurial migration. But recent threads showed that
>>>> currently there is no tested mechanism to sync Subversion repository
>>>> back with Mercurial, so it will probably quickly outdate, and the
>>>> proposal won't have a chance to be evaluated. So now is better than
>>>> never.
>>>>
>>>> So, this is a way to split modules from monolithic Subversion
>>>> repository into several Mercurial mirrors - one mirror for each module
>>>> (or whatever directory structure you like). This will allow to
>>>> concentrate your work on only one module at a time ("distutils",
>>>> "CGIHTTPServer" etc.) without caring much about anything else.
>>>> Exceptionally useful for occasional external "contributors" like me,
>>>> and folks on Windows, who don't possess Visual Studio to compile
>>>> Python and are forced to use whatever version they have installed to
>>>> create and test patches.
>>>
>>> But modules do not live in an isolated world; they are dependent on
>>> changes made to other modules. Isolating them from other modules whose
>>> semantics change during development will lead to skew and improper
>>> patches.
>>
>> I cannot comment on the original proposal, but this issue has known
>> solutions in git, in the form of submodules. I believe hg has
>> something similar with the forest extension
>>
>> http://mercurial.selenic.com/wiki/ForestExtension
>
> Mercurial has subrepo support, but that doesn't justify the need to
> have every module in its own repository so they can be checked out
> individually.

It does not justify it, but it makes it possible to keep several
repositories in sync, and ensures you get a consistent state when
cloning the top repo. If there is a need to often move code from one
repo to another, or if a change in one repo often causes a change in
another one, then certainly that's a sign that they should be in the
same repo.

But for the windows issue, using subrepos so that when you clone the
python repo you get the exact same versions of the C libraries as used
for the official msi (tk, tcl, openssl, bzip2, etc...), that would be
very useful. At least I would have preferred this to the current
situation where I need to build python myself on windows.
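With subrepos, this would boil down to a .hgsub file at the top of the
repository mapping local paths to the external repositories (mercurial
then records the exact pinned revisions in .hgsubstate; the URLs below
are made up):

# .hgsub
externals/tcl     = http://hg.example.org/tcl
externals/tk      = http://hg.example.org/tk
externals/openssl = http://hg.example.org/openssl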

David


Re: [Python-Dev] SVN <-> HG workflow to split Python Library by Module

2010-07-02 Thread David Cournapeau
On Sat, Jul 3, 2010 at 6:37 AM, Brett Cannon  wrote:
> On Fri, Jul 2, 2010 at 12:25, anatoly techtonik  wrote:
>> I planned to publish this proposal when it is finally ready and tested
>> with an assumption that Subversion repository will be online and
>> up-to-date after Mercurial migration. But recent threads showed that
>> currently there is no tested mechanism to sync Subversion repository
>> back with Mercurial, so it will probably quickly outdate, and the
>> proposal won't have a chance to be evaluated. So now is better than
>> never.
>>
>> So, this is a way to split modules from monolithic Subversion
>> repository into several Mercurial mirrors - one mirror for each module
>> (or whatever directory structure you like). This will allow to
>> concentrate your work on only one module at a time ("distutils",
>> "CGIHTTPServer" etc.) without caring much about anything else.
>> Exceptionally useful for occasional external "contributors" like me,
>> and folks on Windows, who don't possess Visual Studio to compile
>> Python and are forced to use whatever version they have installed to
>> create and test patches.
>
> But modules do not live in an isolated world; they are dependent on
> changes made to other modules. Isolating them from other modules whose
> semantics change during development will lead to skew and improper
> patches.

I cannot comment on the original proposal, but this issue has known
solutions in git, in the form of submodules. I believe hg has
something similar with the forest extension

http://mercurial.selenic.com/wiki/ForestExtension

David


Re: [Python-Dev] How are the bdist_wininst binaries built ?

2010-07-01 Thread David Cournapeau
On Thu, Jul 1, 2010 at 2:00 PM, "Martin v. Löwis"  wrote:
>>> See PC/bdist_wininst.
>>
>> Hm, my question may not have been clear: *how* is the wininst-9.0
>> built from the bdist_wininst sources ? I see 6, 7.0, 7.1 and 8.0
>> versions of the visual studio build scripts, but nothing for VS 9.0.
>
> Ah. See PCbuild/bdist_wininst.vcproj.

I thought I checked there, but I obviously missed it. thanks,

David


Re: [Python-Dev] How are the bdist_wininst binaries built ?

2010-06-30 Thread David Cournapeau
On Thu, Jul 1, 2010 at 1:22 PM, "Martin v. Löwis"  wrote:
>> I would like to modify the code of the bdist installers, but I don't
>> see any VS project for VS 9.0. How are the wininst-9.0*exe built ?
>
> See PC/bdist_wininst.

Hm, my question may not have been clear: *how* is the wininst-9.0
built from the bdist_wininst sources? I see 6, 7.0, 7.1 and 8.0
versions of the visual studio build scripts, but nothing for VS 9.0.

cheers,

David


[Python-Dev] How are the bdist_wininst binaries built ?

2010-06-30 Thread David Cournapeau
Hi,

I would like to modify the code of the bdist installers, but I don't
see any VS project for VS 9.0. How are the wininst-9.0*.exe built?

thanks,

David


Re: [Python-Dev] what to do if you don't want your module in Debian

2010-04-26 Thread David Cournapeau
On Tue, Apr 27, 2010 at 5:10 AM, Piotr Ożarowski  wrote:

> if there's no other way (--install-data is ignored right now, and I know
> you're doing a great work to change that, thanks BTW), one could always
> use it in *one* place and later import the result in other parts of
> the code (instead of using __file__ again)

May I ask why this is not actually the solution to resource location?
For example, let's say we have (a hypothetical version of distutils
supporting autoconf-style paths):

python setup.py install --prefix=/usr --datadir=/var/lib/foo
--manpath=/somefunkypath

Then the install step would generate a file __install_path.py such as:

PREFIX = "/usr"
DATADIR = "/var/lib/foo"
MANPATH = "/somefunkypath"

There remains then the problem of relocatable packages, but solving
this would be easy through a conditional in this generated file:

if RELOCATABLE:
    PREFIX = "$prefix"
    ...
else:
    ...

and define $prefix and co from __file__ if necessary. All this would
be an implementation detail, so that the package developer effectively
does

from mypkg.file_paths import PREFIX, DATADIR, etc...
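In the relocatable case, the generated module would compute the paths
from its own location, along these lines (a sketch; the directory
layout is an assumption):

import os
_here = os.path.dirname(os.path.abspath(__file__))
# assume mypkg lives in $prefix/lib/pythonX.Y/site-packages/mypkg
PREFIX = os.path.normpath(os.path.join(_here, "..", "..", "..", ".."))
DATADIR = os.path.join(PREFIX, "share", "mypkg")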

This is both simple and flexible: it is not mandatory, and it does not
make life more complicated for python developers who don't care about
platform X. FWIW, that's the scheme I intend to support in my own
packaging solution,

cheers,

David


Re: [Python-Dev] Binary Compatibility Issue with Python v2.6.5 and v3.1.2

2010-04-20 Thread David Cournapeau
On Tue, Apr 20, 2010 at 9:19 PM, Phil Thompson
 wrote:
> When I build my C++ extension on Windows (specifically PyQt with MinGW)
> against Python v2.6.5 it fails to run under v2.6.4. The same problem exists
> when building against v3.1.2 and running under v3.1.1.
>
> The error message is...
>
> ImportError: DLL load failed: The specified procedure could not be found.
>
> ...though I don't know what the procedure is.
>
> When built against v2.6.4 it runs fine under all v2.6.x. When built under
> v3.1.1 it runs fine under all v3.1.x.
>
> I had always assumed that an extension built with vX.Y.Z would always run
> under vX.Y.Z-1.

I don't know how well it is handled in python, but this is extremely
hard to do in general - you are asking about forward compatibility,
not backward compatibility.

Is there a reason why you need to do this? The usual practice is to
build against the *oldest* compatible version you can, so that it
remains compatible with everything afterwards,

cheers,

David


Re: [Python-Dev] Automatic installer builds (was Re: Fwd: Broken link to download (Mac OS X))

2010-04-15 Thread David Cournapeau
On Thu, Apr 15, 2010 at 3:54 AM,   wrote:
>
>    Bill> In any case, they shouldn't be needed on buildbots maintained by
>    Bill> the PSF.
>
> Sure.  My question was related to humans building binary distributions
> though.  Unless that becomes fully automated so the release manager can just
> push a button and have it built on and as-yet-nonexistent Mac OSX buildbot
> machine, somebody will have to generate that installer.  Ronald says Fink,
> MacPorts and /usr/local are poison.  If that's truly the case that's fine.
> It's just that it reduces the size of the potential binary installer build
> machines.

Actually, you can just use a chroot "jail" to build the binary - I use
this process to build the official numpy/scipy binaries, and it works
very well regardless of whatever crap there is otherwise on my laptop.

cheers,

David


Re: [Python-Dev] python compiler

2010-04-06 Thread David Cournapeau
On Mon, Apr 5, 2010 at 11:54 PM,   wrote:
> for a college project, I proposed to create a compiler for python. I've
> read something about it, and maybe I have made a bad choice. I would like
> to hear everyone's opinion.

Depending on your taste, you may want to tackle something like a
static analyser for python. This is not a compiler proper, but it
could potentially be more useful than yet another compiler compiling
50% of python, and you would get results more quickly (no need
to generate code, etc...). See e.g. http://bugs.jython.org/issue1541
for an actual implementation of a similar idea (but for jython),

cheers,

David


Re: [Python-Dev] [Distutils] Bootstrap script for package management tool in Python 2.7 (Was: Re: At least one package management tool for 2.7)

2010-03-29 Thread David Cournapeau
On Mon, Mar 29, 2010 at 10:45 PM, anatoly techtonik  wrote:
> On Mon, Mar 29, 2010 at 12:15 PM, Tarek Ziadé  wrote:
>> [..]
>>> distutils is not a `package management` tool, because it doesn't know
>>> anything even about installed packages, not saying anything about
>>> dependencies.
>>
>> At this point, no one knows anything about installed packages at the
>> Python level.
>
> Users do not care about this, and `distutils` doesn't know this even
> at package level.
>
>> Keeping track of installed projects is a feature done within each
>> package managment system.
>>
>> And the whole purpose of PEP 376 is to define a database of what's
>> installed, for the sake of interoperability.
>
> That's great. When it will be ready everybody would be happy to make
> their package management tool compliant.
>
>>>
>>> `pip` and `distribute` are unknown for a vast majority of Python
>>> users, so if you have a perspective replacement for `easy_install` -
>>
>> Depending on how you call a Python user, I disagree here. Many people
>> use pip and distribute.
>>
>> The first one because it has an uninstall feature among other things.
>> The second one because it fixes some bugs of setuptools and provides
>> Python 3 support
>
> I do not mind if we can distribute three stubs, they will also serve
> as pointers for a better way of packaging when an ultimate tool is
> finally born. Me personally is willing to elaborate for `easy_install`
> stub in 2.7.
>
>>>
>>> For now there are two questions:
>>> 1. Are they stable enough for the replacement of user command line
>>> `easy_install` tool?
>>> 2. Which one is the recommended?
>>>
>>> P.S. Should there be an accessible FAQ in addition to ML?
>>
>> This FAQ work has been started in the "Hitchhiker's Guide to
>> Packaging" you can find here:
>>
>> http://guide.python-distribute.org
>
> I can't see any FAQ. To me the FAQ is something that could be posted to
> distutils ML once a month to reflect current state of packaging. It
> should also carry version number. So anybody can comment on the FAQ,
> ask another question or ask to make a change.
>
>> Again, any new code work will not happen because 2.7 is due in less
>> than a week. Things are happening in Distutils2.
>
> That doesn't solve the problem. Bootstrap script can be written in one
> day. What we need is a consensus whatever this script is welcomed in
> 2.7 or not? Who is the person to make the decision?
>
>> Now, for the "best practice" documentation, I think the guide is the
>> best place to look at.
>
> Let's refer to original user story:
> "I installed Python and need a quick way to install my packages on top of it."

python setup.py install works well, and has for almost a decade.

If you need setuptools, you can include ez_setup.py, which does
exactly what you want, without adding a hugely controversial feature
to python proper. You do something like:

try:
    import setuptools
except ImportError:
    print "Run ez_setup.py first"
 

And you're done,
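Or, since ez_setup.py is designed to be imported, you can have it
bootstrap setuptools automatically via its use_setuptools entry point:

from ez_setup import use_setuptools
use_setuptools()   # downloads and installs setuptools if it is missing
from setuptools import setup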

cheers,

David


Re: [Python-Dev] Why is nan != nan?

2010-03-28 Thread David Cournapeau
On Sun, Mar 28, 2010 at 9:28 AM, Robert Kern  wrote:
> On 2010-03-27 00:32 , David Cournapeau wrote:
>>
>> On Sat, Mar 27, 2010 at 8:16 AM, Raymond Hettinger
>>   wrote:
>>>
>>> On Mar 26, 2010, at 2:16 PM, Xavier Morel wrote:
>>>
>>> How about raising an exception instead of creating nans in the first
>>> place,
>>> except maybe within specific contexts (so that the IEEE-754 minded can
>>> get
>>> their nans working as they currently do)?
>>>
>>> -1
>>> The numeric community uses NaNs as placeholders in vectorized
>>> calculations.
>>
>> But is this relevant to python itself? In Numpy, we indeed do use and
>> support NaN, but we have much more control over what happens compared to
>> python float objects. We can control whether invalid operations raise
>> an exception or not, we have had isnan/isfinite for a long time, and the
>> fact that nan != nan has never been a real problem AFAIK.
>
> Nonetheless, the closer our float arrays are to Python's float type, the
> happier I will be.

Me too, but I don't see how to reconcile this with the goal of this
discussion, which seems to be simplifying NaN handling on the grounds
that NaNs are not intuitive.

David


Re: [Python-Dev] Why is nan != nan?

2010-03-26 Thread David Cournapeau
On Sat, Mar 27, 2010 at 8:16 AM, Raymond Hettinger
 wrote:
>
> On Mar 26, 2010, at 2:16 PM, Xavier Morel wrote:
>
> How about raising an exception instead of creating nans in the first place,
> except maybe within specific contexts (so that the IEEE-754 minded can get
> their nans working as they currently do)?
>
> -1
> The numeric community uses NaNs as placeholders in vectorized calculations.

But is this relevant to python itself? In Numpy, we indeed do use and
support NaN, but we have much more control over what happens compared to
python float objects. We can control whether invalid operations raise
an exception or not, we have had isnan/isfinite for a long time, and the
fact that nan != nan has never been a real problem AFAIK.
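
For illustration, a minimal NumPy session contrasting the two
behaviours (a sketch; the exact error message varies across NumPy
versions):

    >>> import numpy as np
    >>> np.nan == np.nan                   # IEEE 754: NaN never compares equal
    False
    >>> old = np.seterr(invalid='raise')   # make invalid operations raise
    >>> np.array([np.inf]) - np.array([np.inf])
    Traceback (most recent call last):
      ...
    FloatingPointError: invalid value encountered in subtract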

David


Re: [Python-Dev] Why is nan != nan?

2010-03-26 Thread David Cournapeau
On Fri, Mar 26, 2010 at 10:19 AM, P.J. Eby  wrote:
> At 11:57 AM 3/26/2010 +1100, Steven D'Aprano wrote:
>>
>> But they're not -- they're *signals* for "your calculation has gone screwy
>> and the result you get is garbage", so to speak. You shouldn't even think of
>> a specific NAN as a piece of specific garbage, but merely a label on the
>> *kind* of garbage you've got (the payload): INF-INF is, in some sense, a
>> different kind of error to log(-1). In the same way you might say "INF-INF
>> could be any number at all, therefore we return NAN", you might say "since
>> INF-INF could be anything, there's no reason to think that INF-INF ==
>> INF-INF."
>
> So, are you suggesting that maybe the Pythonic thing to do in that case
> would be to cause any operation on a NAN (including perhaps comparison) to
> fail, rather than allowing garbage to silently propagate?

NaN behavior being tightly linked to FPU exception handling, I think
this is a good idea. One of the goals of NaN is to avoid testing in
intermediate computations (for efficiency reasons), which may not
really apply to python. Generally, you want to detect
errors/exceptional situations as early as possible, and if you use
python, you don't care about the potential slowdown caused by those
checks.

David


Re: [Python-Dev] Why is nan != nan?

2010-03-25 Thread David Cournapeau
On Thu, Mar 25, 2010 at 9:39 PM, Jesus Cea  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 03/25/2010 12:22 PM, Nick Coghlan wrote:
>>   "Not a Number" is not a single floating point value. Instead each
>>   instance is a distinct value representing the precise conditions that
>>   created it. Thus, two "NaN" values x and y will compare equal iff they
>>   are the exact same NaN object (i.e. "if isnan(x) then x == y iff
>>   x is y".
>>
>> As stated above, such a change would allow us to restore reflexivity
>> (eliminating a bunch of weirdness) while still honouring the idea of NaN
>> being a set of values rather than a single value.
>
> Sounds good.
>
> But IEEE 754 was created by pretty clever guys, and surely they had a
> reason to define things the way they are. Probably we are missing
> something.

Yes, indeed. I don't claim to have a deep understanding myself, but up
to now, every time I thought something in IEEE 754 was weird, it ended
up being that way for good reasons.

I think the fundamental missing point in this discussion about NaN is
exception handling: a lot of NaN's quirky behavior becomes much more
natural once you take into account which operations are invalid under
which conditions. Unless I am mistaken, python itself does not support
FPU exception handling.

For example, the reason why x != x for a NaN x is that != (and ==)
are about the only operations where you can have NaN operands
without risking an exception, and support for creating and
detecting NaN in languages has been coming only quite lately (e.g.
C99).
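
For illustration, this is easy to see from plain python (a minimal
sketch; math.isnan only appeared in 2.6):

    >>> nan = float('nan')    # portable creation of a NaN is itself recent
    >>> nan == nan
    False
    >>> nan != nan
    True
    >>> import math
    >>> math.isnan(nan)       # detection, new in python 2.6
    True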

Concerning the lack of rationale: a relatively short reference
concerned with FPU exceptions and NaN handling is from Kahan himself:

http://www.eecs.berkeley.edu/~wkahan/ieee754status/ieee754.ps

David


Re: [Python-Dev] PEP 3188: Implementation Questions

2010-03-03 Thread David Cournapeau
On Fri, Feb 26, 2010 at 1:51 PM, Meador Inge  wrote:
> Hi All,
>
> Recently some discussion began in the issue 3132 thread
> (http://bugs.python.org/issue3132) regarding
> implementation of the new struct string syntax for PEP 3118.  Mark Dickinson
> suggested that I bring the discussion on over to Python Dev.  Below is a
> summary of the questions/comments from the thread.
>
> Unpacking a long-double
> ===
>
> 1. Should this return a Decimal object or a ctypes 'long double'?
> 2. Using ctypes 'long double' is easier to implement, but precision is
>     lost when needing to do arithmetic, since the value for ctypes 'long
>     double' is converted to a Python float.
> 3. Using Decimal keeps the desired precision, but the implementation would
>     be non-trivial and architecture specific (unless we just picked a
>     fixed number of bytes regardless of the architecture).
> 4. What representation should be used for standard size and alignment?
>     IEEE 754 extended double precision?

I think supporting even basic arithmetic correctly for long double
would be a tremendous amount of work in python. First, as you know,
there are many different formats, which depend not only on the CPU but
also on the OS and the compiler, and there are quite a few issues
specific to long double (like converting to an integer which
cannot fit in any C integer type on most implementations).

Also, IEEE 754 does not define any alignment as far as I know; that's
up to the CPU implementer, I believe. In Numpy, long double usually
maps to either 12 bytes (np.float96) or 16 bytes (np.float128).

I would expect the long double to be mostly useful for data exchange -
if you want to do arithmetic on long double, then the user of the
buffer protocol would have to implement it himself (like NumPy does
ATM). So the important thing is to have enough information to use the
long double: alignment and size alone are not enough.
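
To make the format zoo concrete, here is a quick check (a sketch; the
sizes are platform-dependent, which is exactly the problem):

    >>> import ctypes
    >>> ctypes.sizeof(ctypes.c_longdouble)   # 12 on x86 linux, 16 on x86-64
    12
    >>> import numpy as np
    >>> np.dtype(np.longdouble).itemsize     # hence np.float96 / np.float128
    12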

cheers,

David


Re: [Python-Dev] IO module improvements

2010-02-06 Thread David Cournapeau
On Fri, Feb 5, 2010 at 10:28 PM, Antoine Pitrou  wrote:
> Pascal Chambon  gmail.com> writes:
>>
>> By the way, I'm having trouble with the "name" attribute of raw files,
>> which can be string or integer (confusing), ambiguous if containing a
>> relative path, and which isn't able to handle the new case of my
>> library, i.e. opening a file from an existing file handle (which is ALSO
>> an integer, like C file descriptors...)
>
> What is the difference between "file handle" and a regular C file descriptor?
> Is it some Windows-specific thing?
> If so, then perhaps it deserves some Windows-specific attribute ("handle"?).

When wondering about the same issue, I found the following useful:

http://www.codeproject.com/KB/files/handles.aspx

The C library file descriptor, as returned by C open, is emulated on
win32. Only a HANDLE is considered "native" (it can be passed freely
however you want within one process).
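
The stdlib already exposes that emulation layer, so converting between
the two is straightforward (a windows-only sketch; spam.txt is just a
placeholder):

    import os, msvcrt

    fd = os.open("spam.txt", os.O_RDONLY)     # CRT-level file descriptor
    handle = msvcrt.get_osfhandle(fd)         # the underlying Win32 HANDLE
    fd2 = msvcrt.open_osfhandle(handle, os.O_RDONLY)  # HANDLE -> new fd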

cheers,

David


Re: [Python-Dev] IO module improvements

2010-02-05 Thread David Cournapeau
On Sat, Feb 6, 2010 at 4:31 PM, Tres Seaver  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Antoine Pitrou wrote:
>> Pascal Chambon  gmail.com> writes:
>>> By the way, I'm having trouble with the "name" attribute of raw files,
>>> which can be string or integer (confusing), ambiguous if containing a
>>> relative path, and which isn't able to handle the new case of my
>>> library, i.e. opening a file from an existing file handle (which is ALSO
>>> an integer, like C file descriptors...)
>>
>> What is the difference between "file handle" and a regular C file descriptor?
>> Is it some Windows-specific thing?
>> If so, then perhaps it deserves some Windows-specific attribute ("handle"?).
>
> File descriptors are integer indexes into a process-specific table.

AFAIK, they aren't simple indexes on windows, and that's partly why
even file descriptors cannot be safely passed between C runtimes on
windows (whereas they can on most unices).

David


Re: [Python-Dev] buildtime vs runtime in Distutils

2009-11-15 Thread David Cournapeau
On Sun, Nov 15, 2009 at 10:32 PM, Tarek Ziadé  wrote:

>
> Ok. Fair enough, I'll work with them this way.

Although packagers should certainly fix the problems they introduce in
the first place, the second suggestion in the bug report would be
useful, independently of how linux distributions package things.

Especially if the data can be obtained for every build (autoconf- and
VS-based), this would help packages which use something other than
distutils for their build.

David


Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?

2009-11-04 Thread David Cournapeau
On Thu, Nov 5, 2009 at 4:02 AM, "Martin v. Löwis"  wrote:

>
> That's not my experience. I see a change in source (say, on Django)
> available for 3.x within 5 seconds.

Which version of 2to3 is this for? I have had a similar experience
(several minutes), but maybe I am using 2to3 the wrong way. On my
machine, with 2to3 from 3.1.1, it takes ~1 s to convert a single file
of 200 lines, and converting a tiny subset of numpy takes more than
one minute.

David


Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?

2009-11-03 Thread David Cournapeau
On Wed, Nov 4, 2009 at 3:25 AM, "Martin v. Löwis"  wrote:

> But only if NumPy would drop support for 2.x, for x < 7, right?
> That would probably be many years in the future.

Yes. Given the choice between supporting py 3.x while dropping python
< 2.7, and continuing support for 2.4, the latter is by far my
preferred choice today (RHEL still requires 2.4, for example).

cheers,

David


Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?

2009-11-03 Thread David Cournapeau
On Tue, Nov 3, 2009 at 9:55 PM, Barry Warsaw  wrote:

>
> Then clearly we can't back port features.
>
> I'd like to read some case studies of people who have migrated applications
> from 2.6 to 3.0.

+1, especially for packages which have a lot of C code: the current
documentation is sparse :) The only helpful reference I have found so
far is an email by MvL concerning the psycopg2 port.

David


Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?

2009-11-03 Thread David Cournapeau
On Tue, Nov 3, 2009 at 8:40 PM, Antoine Pitrou  wrote:
> Sturla Molden  molden.no> writes:
>>
>> Porting NumPy is not a trivial issue. It might take
>> a complete rewrite of the whole C base using Cython.
>
> I don't see why they would need a rewrite.

(let me know if these numpy-specific discussions are considered OT)

There is certainly no need for a full rewrite, no. I am still unclear
on the range of things to change for 3.x, but the C changes are not
small, especially since numpy uses "dark" areas of the python C
extension API. The long vs. int and str vs. bytes changes will take
some time.

AFAIK, the only thing which has been attempted so far is porting our
own distutils extension to python 3.x, but I have not integrated those
changes yet.

> between 2.x and 3.x. Cython itself is supposed to support both 2.x and 3.x,
> isn't it?

Yes - but no numpy code uses cython ATM, except for the random
generators, which would almost certainly be trivial to convert.

The idea which has been discussed so far is that for *some* code which
needs significant changes or a rewrite, using cython instead of C may
be beneficial, as it would give us the 3.x code "for free". Having more
cython and less C could also bring more contributors - that would
actually be the biggest incentive, as the number of people who know
the core C code of numpy is too small.

> That's interesting, because PEP 3118 was pushed mainly by a prominent
> member of the NumPy community and some of its features are almost
> dedicated to NumPy.

I have not been involved with the PEP 3118 discussion, so I cannot
comment on why it is not yet fully supported by numpy.

But I think that's a different issue altogether - PEP 3118's goal is
interoperation with other packages. We can port to PEP 3118 without
porting to 3.x, and we can port to 3.x without taking care of PEP
3118.

David


Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?

2009-11-03 Thread David Cournapeau
On Tue, Nov 3, 2009 at 6:13 PM, Michael Foord  wrote:
> Sturla Molden wrote:
>>
>> I'd just like to mention that the scientific community is highly dependent
>> on NumPy. As long as NumPy is not ported to Py3k, migration is out of the
>> question. Porting NumPy is not a trivial issue. It might take a complete
>> rewrite of the whole C base using Cython. NumPy's ABI is not even PEP 3118
>> compliant. Changing the ABI for Py3k might break extension code written for
>> NumPy using C. And scientists tend to write CPU-bound routines in languages
>> like C and Fortran, not Python, so that is a major issue as well. If we port
>> NumPy to Py3k, everyone using NumPy will have to port their C code to the
>> new ABI. There are lot of people stuck with Python 2.x for this reason. It
>> does not just affect individual scientists, but also large projects like IBM
>> and CERN's blue brain and NASA's space telecope. So please, do not cancel
>> 2.x support before we have ported NumPy, Matplotlib and most of their
>> dependant extensions to Py3k.
>
> What will it take to *start* the port? (Or is it already underway?) For many
> projects I fear that it is only the impending obsolescence (real rather than
> theoretical) of Python 2 that will convince projects to port.

I feel the same way. Given how many resources it will take to port to
py3k, I doubt the port will happen soon. I don't know what other numpy
developers think, but I consider py3k simply not worth the hassle -
I know we will have to port eventually, though.

To answer your question, the main issues are:
 - are two branches necessary or not? If two branches are
necessary, I think we simply do not have the resources at the moment.
 - how to maintain a compatible C API across 2.x and 3.x
 - is it practically possible to support and maintain numpy from 2.4
to 3.x ? For example, I don't think the python 2.6 py3k warnings are
very useful when you need to maintain compatibility with 2.4 and 2.5.

There is also little documentation on how to port a significant C
codebase to py3k.

David


Re: [Python-Dev] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)

2009-10-20 Thread David Cournapeau
On Wed, Oct 21, 2009 at 5:49 AM, Paul Moore  wrote:
> 2009/10/20 Chris Withers :
>> I wouldn't have a problem if integrating with the windows package manager
>> was an optional extra, but I think it's one of many types of package
>> management that need to be worried about, so might be easier to get the
>> others working and let anyone who wants anything beyond a pure-python
>> packaging system that works across platforms, regardless of whether binary
>> extensions are needed, do the work themselves...
>
> There are many (I believe) Windows users for whom bdist_wininst is
> just what they want. For those people, where's the incentive to switch
> in what you propose? You're not providing the features they currently
> have, and frankly "do the work yourself" is no answer (not everyone
> can, often for entirely legitimate reasons).

I am not so familiar with msi or wininst internals, but isn't it
possible to install relative to a given prefix? Basically, making it
possible to use a wininst installer in a virtualenv if required (in
which case I guess it would not register with the windows db - at
least it should be possible to disable that).

The main problem with bdist_wininst installers is that they don't work
with the setuptools dependency machinery (at least, that's the reason
windows users give for wanting a numpy egg on windows, whereas we used
to provide only an exe). But you could argue it is a setuptools
problem as much as a wininst problem, I guess.

David


Re: [Python-Dev] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)

2009-10-08 Thread David Cournapeau
On Fri, Oct 9, 2009 at 1:35 AM, Masklinn  wrote:
> On 8 Oct 2009, at 18:17 , Toshio Kuratomi wrote:
>>
>>> This is not at all how I use virtualenv. For me virtualenv is a
>>> sandbox so that I don't have to become root whenever I need to install
>>> a Python package for testing purposes
>>
>> This is needing to install multiple versions and use the newly installed
>> version for testing.
>>
> No it's not. It's keeping the python package *being tested* out of the
> system's or user's site-package because it's potentially unstable or
> unneeded. It provides a trivial way of *removing* the package to get rid of
> it: delete the virtualenv. No trace anywhere that the package was ever
> installed, no impact on the system (apart from the potential side-effects
> of executing the system).
>
> The issue here isn't "multiple installed packages", it will more than likely
> be the only version of itself: note that it's a package being tested, not an
> *upgrade* being tested.
>
> The issues solved are:
> * not having to become root (solved by PEP 370 if it ever lands)
> * minimizing as much as possible the impact of testing the package on the
> system (not solved by any other solution)

This is not true - stow solves the problem in a more general way (in
the sense that it is not restricted to python), at least on platforms
which support softlinks. The only inconvenience of stow compared to
virtualenv is namespace packages, but that's because of a design flaw
in namespace packages (as implemented in setuptools, and hopefully
fixed in the upcoming namespace package PEP).

Virtualenv provides a possible solution to some deployment problems,
and is useful in those cases, but it is too specialized to be included
in python itself IMO.

cheers,

David


Re: [Python-Dev] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)

2009-10-08 Thread David Cournapeau
On Thu, Oct 8, 2009 at 5:31 PM, Tarek Ziadé  wrote:

> = Virtualenv and the multiple version support in Distribute =
>
> (I am not saying "We" here because this part was not discussed yet
> with everyone)
>
> Virtualenv allows you to create an isolated environment to install
> some distribution without polluting the
> main site-packages, a bit like a user site-packages.
>
> My opinion is that this tool exists only because Python doesn't
> support the installation of multiple versions for the same
> distributions.

I am really worried about this, because it may encourage people to use
multiple versions as a band-aid for maintaining backward compatibility.
At least with virtualenv, the problem is restricted to the user.

Generalized multiple, side-by-side installation has been tried in
many different contexts, and I have never seen a single case that
worked without bringing more problems than it solved. One core problem
is the exponential number of combinations (package A depends on B and
C, B depends on one version of D, C on another version of D). Being
able to install *some* libraries in multiple versions is OK, but
generalizing it is very dangerous IMHO.
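
The failure mode is easy to reproduce with setuptools' runtime (a
sketch with a hypothetical distribution D installed in two versions):

    import pkg_resources

    pkg_resources.require("D == 1.0")   # activates D 1.0 on sys.path
    pkg_resources.require("D == 2.0")   # raises VersionConflict, since
                                        # D 1.0 is already active here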

David


Re: [Python-Dev] Package install failures in 2.6.3

2009-10-06 Thread David Cournapeau
2009/10/6 P.J. Eby :
> At 02:22 PM 10/5/2009 +0200, Tarek Ziadé wrote:
>>
>> Setuptools development has been discontinued for a year, and does
>> patches on Distutils code. Some of these patches are sensitive to any
>> change
>> made on Distutils, wether those changes are internal or not.
>
> Setuptools is  also not the only thing out there that subclasses distutils
> commands in general, or the build_ext command in particular.  Numpy,
> Twisted, the mx extensions...  there are plenty of things out there that
> subclass distutils commands, quite in adherence to the rules.  (Note too
> that subclassing != patching, and the ability to subclass and substitute
> distutils commands is documented.)
>
> It's therefore not appropriate to treat the issue as if it were
> setuptools-specific; it could have broken any other major (or minor)
> package's subclassing of the build_ext command.

The internal vs. published API distinction does not make much sense in
distutils' case anyway, since a lot of implementation details are
necessary to make non-trivial extensions work.

When working on numpy.distutils, I almost always have to look at
distutils sources since the docs are vastly insufficient, and even
then, the code is so bad that quite often the only way to interact
with distutils is to "reverse engineer" its behavior by trial and
error.

cheers,

David


Re: [Python-Dev] VC++ versions to match python versions?

2009-08-18 Thread David Cournapeau
On Mon, Aug 17, 2009 at 2:01 PM, David Bolen wrote:
> Chris Withers  writes:
>
>> Is the Express Edition of Visual C++ 2008 suitable for compiling
>> packages for Python 2.6 on Windows?
>> (And Python 2.6 itself for that matter...)
>
> Yes - it's currently being used on my buildbot, for example, to build
> Python itself.  Works for 2.6 and later.
>
>> Ditto for 2.5, 3.1 and the trunk (which I guess becomes 3.2?)
>
> 2.5 needs VS 2003.

The 64-bit version of 2.5 is built with VS 2005, though.

cheers,

David


Re: [Python-Dev] Update to Python Documentation Website Request

2009-07-27 Thread David Cournapeau
On Mon, Jul 27, 2009 at 7:20 PM, David Lyon wrote:

> My only point is that Windows ain't no embedded system. It's not
> short on memory or disk space. If a package manager is 5 megabytes
> extra, say, with its libraries, what's the extra download time on
> that? Compared to three days+ stuffing around trying to find out
> how to install packages for a new user.

The problem is not so much the size by itself as that more code means
more maintenance burden for python developers. Including new code
means it has to work everywhere python currently works, and that
other people can understand/support the related code. Adding code to a
project is far from free from the python maintainers' POV.

cheers,

David


Re: [Python-Dev] mingw32 and gc-header weirdness

2009-07-23 Thread David Cournapeau
On Thu, Jul 23, 2009 at 6:49 PM, Paul Moore wrote:
> 2009/7/22 Christian Tismer :
>> Maybe the simple solution is to prevent building extensions
>> with mingw, if the python executable was not also built with it?
>> Then, all would be fine I guess.
>
> I have never had problems in practice with extensions built with mingw
> rather than MSVC - so while I'm not saying that the issue doesn't
> exist, it certainly doesn't affect all extensions, so disabling mingw
> support seems a bit of an extreme measure.

I am strongly against this as well. We build numpy with mingw on
windows, and disabling it would make my life even more miserable on
windows. One constant source of pain with MS compilers is supporting
different versions of python - 2.4, 2.5 and 2.6 each require a
different VS version (and free versions are usually available only for
the latest VS).

I am far from a windows specialist, but I understand that quite a few
problems with mingw-built extensions for python are caused by some
Python decisions as well (a C API with runtime-dependent structures
like FILE, etc.). So mingw is not the only one to blame :)

David


Re: [Python-Dev] mingw32 and gc-header weirdness

2009-07-22 Thread David Cournapeau
On Thu, Jul 23, 2009 at 4:40 AM, Antoine Pitrou wrote:

>
> The size of long double is also 12 under 32-bit Linux. Perhaps mingw disagrees
> with Visual Studio

Yes, mingw and VS do not have the same long double type. This has been
the source of some problems in numpy as well, since mingw uses the MS
runtime, and everything involving long double and the runtime is
broken (printf, math library calls). I wish there was a way to disable
this in mingw, but there isn't AFAIK.

> on some ABI subtleties (is it expected? is mingw supposed to
> be ABI-compatible with Visual Studio? if yes, you may report a bug to them 
> :-)).

I think mostly ABI compatible is the best description :)

David


Re: [Python-Dev] [Distutils] PEP 376 - from PyPM's point of view

2009-07-15 Thread David Cournapeau
On Wed, Jul 15, 2009 at 11:00 PM, Tarek Ziadé wrote:
> On Wed, Jul 15, 2009 at 12:10 PM, Paul Moore wrote:
>>
>> Disclaimer: I've only read the short version, so if some of this is
>> covered in the longer explanation, I apologise now.
>
> Next time I won't put a short version ;)
>
>
>>
>> PEP 376 support has added a requirement for 3 additional methods to
>> the existing 1 finder method in PEP 302. That's already a 300%
>> increase in complexity. I'm against adding any further complexity to
>> PEP 302 - in all honesty, I'd rather move towards PEP 376 defining its
>> *own* locator protocol and avoid any extra burden on PEP 302. I'm not
>> sure implementers of PEP 302 importers will even provide the current
>> PEP 376 extensions.
>>
>> I propose that before the current prototype is turned into a final
>> (spec and) implementation, the PEP 302 extensions are extracted and
>> documented as an independent protocol, purely part of PEP 376. (This
>> *helps* implementers, as they can write support for, for example,
>> eggs, without needing to modify the existing egg importer). I'll
>> volunteer to do that work - but I won't start until the latest
>> iteration of questions and discussions has settled down and PEP 376
>> has achieved a stable form with the known issues addressed.
>
> Sure that makes sense. I am all for having these 302 extensions
> flipped on PEP 376
> side, then think about the "locator" protocol.
>
> I am lagging a bit in the discussions, I have 10 messages left to read or so,
> but the known issues I've listed so far are about the RECORD file and
> absolute paths,
> I am waiting for PJE's example of the syntax he proposed for prefixes,
> on the docutils example.
>
>> Of course, this is moving more and more towards saying that the design
>> of setuptools, with its generic means for locating distributions, etc
>> etc, is the right approach.
>> We're reinventing the wheel here. But the
>> problem is that too many people dislike setuptools as it stands for it
>> to gain support.
>
> I don't think it's about setuptools design. I think it's more likely
> to be about the fact
> that there's no way in Python to install two different versions of the
> same distribution
> without "hiding" one from each other, using setuptools, virtualenv or
> zc.buildout.
>
> "installing" a distribution in Python means that its activated
> globally, whereas people
> need it locally at the application level.
>
>> My understanding is that the current set of PEPs were
>> intended to be a stripped down, more generally acceptable subset of
>> setuptools. Let's keep them that way (and omit the complexities of
>> multi-version support).
>>
>> If you want setuptools, you know where to get it...
>
> Sure, but let's not forget that the multiple-version issue is a global
> issue OS packagers
> also meet. (setuptools is not the problem) :
>
> - application Foo uses docutils 0.4 and doesn't work with docutils 0.5
> - application Bar uses docutils 0.5
>
> if docutils 0.5 is installed, Foo is broken, unless docutils 0.4 is
> shipped with it.

As was stated by Debian packagers on the distutils ML, the problem is
that docutils 0.5 breaks packages which work with docutils 0.4 in the
first place.

http://www.mail-archive.com/distutils-...@python.org/msg05775.html

And the current hacks to work around the lack of explicit version
handling for module imports are a maintenance burden:

http://www.mail-archive.com/distutils-...@python.org/msg05742.html

setuptools has given people an incentive to use versioning as a
workaround for breaking API/ABI compatibility. That's the core
problem, and most problems brought by setuptools (sys.path and .pth
hacks, with the unreliability that ensued) are consequences of it. I
don't see how virtualenv solves anything in that regard for deployment
issues. I doubt that using things like virtualenv will make OS
packagers happy.

David


Re: [Python-Dev] PEP 376 - Open questions

2009-07-09 Thread David Cournapeau
On Thu, Jul 9, 2009 at 4:18 PM, Paul Moore wrote:

>>
>> There might be a library (and I call dibs on the name "distlib" :) that
>> provides support routines to parse setup.info, but there's no framework
>> involved. And no need for a plugin system.
>
> +1. Now who's going to design & write it?

I started a new thread on distutils-sig ("setup.py needs to go away")
to avoid jeopardizing this thread. I added the context as well as my
own suggestions for such a design.

David


Re: [Python-Dev] PEP 376 - Open questions

2009-07-08 Thread David Cournapeau
On Thu, Jul 9, 2009 at 7:07 AM, Eric Smith wrote:
> Paul Moore wrote:
>>
>> 2009/7/8 P.J. Eby :
>>>
>>> If it were being driven by setuptools, I'd have just implemented it
>>> myself
>>> and presented it as a fait accompli.  I can't speak to Tarek's motives,
>>> but
>>> I assume that, as stated in the PEP, the primary driver is supporting the
>>> distutils being able to uninstall things, and secondarily to allow other
>>> tools to be built on top of the API.
>>
>> My understanding is that all of the various distutils PEPs were driven
>> by the "packaging summit" ay PyCon. The struggle here seems to be to
>> find *anyone* from that summit who will now comment on the discussion
>> :-(
>
> I was there, and I've been commenting!
>
> There might have been more discussion after the language summit and the one
> open space event I went to. But the focus as I recall was static metadata
> and version specification. When I originally brought up static metadata at
> the summit, I meant metadata describing the sources in the distribution, so
> that we can get rid of setup.py's. From that metadata, I want to be able to
> generate .debs, .rpms, .eggs, etc.

I agree wholeheartedly. Getting rid of setup.py for most packages
should be a goal IMHO. Most packages don't need anything fancy, and
static metadata is so much easier to use than setup.py/distutils for
3rd-party interop.
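
A sketch of the idea - purely declarative metadata, readable without
executing any code (the setup.info name and its fields are
hypothetical, not an agreed format):

    # reading a hypothetical static-metadata file (python 2 spelling)
    from ConfigParser import ConfigParser

    cfg = ConfigParser()
    cfg.read('setup.info')
    name = cfg.get('metadata', 'name')        # e.g. 'foo'
    version = cfg.get('metadata', 'version')  # e.g. '0.1'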

There was a discussion about how to describe/find the list of files
forming a distribution (for the different sdist/bdist_* commands), but
no agreement was reached. In particular, some people strongly defend
the setuptools feature of getting the list of files from the source
control system.

http://mail.python.org/pipermail/distutils-sig/2009-April/011226.html

David


Re: [Python-Dev] py3k build broken

2009-05-08 Thread David Cournapeau
On Fri, May 8, 2009 at 7:23 AM, Tarek Ziadé  wrote:

> I have fixed configure by runing autoconf, everything should be fine now
>
> Sorry for the inconvenience.

I am the one responsible for this - I did not realize that the
generated configure/Makefile were also in the trunk, and my patch did
not include the generated files. My apologies,

cheers,

David


Re: [Python-Dev] Adding a "sysconfig" module in the stdlib

2009-05-07 Thread David Cournapeau
On Fri, May 8, 2009 at 9:36 AM, Tarek Ziadé  wrote:
> Hello,
>
> I am trying to refactor distutils.log in order to use logging but I
> have been bugged by the fact that site.py uses
> distutils.util.get_platform() in "addbuilddir".
> The problem is the order of imports at initialization time : importing
> "logging" into distutils will make the initialization/build fail
> because site.py wil break when
> trying to import "logging", then "time".
>
> Anyways,
> So why does site.py look into distutils? Because distutils has a few
> functions to get some info about the platform and about the Makefile
> and some
> other header files like pyconfig.h etc.
>
> But I don't think it's the best place for this, and I have a proposal :
>
> let's create a dedicated "sysconfig" module in the standard library
> that will provide all the (refactored) functions located in
> distutils.sysconfig (but not customize_compiler)
> and disutils.util.get_platform.

If we are talking about putting this into the stdlib proper, I would
suggest thinking about putting information for every platform in
sysconfig, instead of just Unix. I understand it is not an easy
problem (because windows builds are totally different from every other
platform's), but it would really help interoperability with other
build tools. If sysconfig is to become independent of distutils, it
should be cross-platform and not unix-specific.
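
For reference, these are the functions in question as they live today
(a sketch; the values shown are typical linux ones):

    >>> from distutils import sysconfig
    >>> sysconfig.get_python_inc()         # where pyconfig.h lives
    '/usr/include/python2.6'
    >>> sysconfig.get_config_var('CC')     # parsed from the Unix Makefile
    'gcc'
    >>> from distutils.util import get_platform
    >>> get_platform()                     # what site.py's addbuilddir uses
    'linux-i686'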

cheers,

David


Re: [Python-Dev] Help on issue 5941

2009-05-07 Thread David Cournapeau
On Thu, May 7, 2009 at 8:49 PM, Tarek Ziadé  wrote:

>
> Notice that from the beginning, the unixcompiler class options are
> never used if the option has been customized
> in distutils.sysconfig and present in the Makefile, so we need to
> clean up this behavior at some point, and document
> the customization features.

Indeed, I have never bothered much with this part, though. Flags
customization with distutils is too awkward to be useful in general
for something like numpy IMHO; I just use scons instead when I need
fine-grained control.

> By the way, do you happen to have a buildbot or something that builds numpy ?

We have a buildbot:

http://buildbot.scipy.org/

But I don't know if it's easy to set it up such that both python and
numpy are built from source.

> If not it'll be very interesting:  I wouldn't mind having one numpy
> track running on the Python trunk and receiving
> mails if something is broken.

Well, I would not mind either :)

David


Re: [Python-Dev] Help on issue 5941

2009-05-07 Thread David Cournapeau
On Wed, May 6, 2009 at 6:01 PM, Tarek Ziadé  wrote:
> Hello,
>
> I need some help on http://bugs.python.org/issue5941
>
> The bug is quite simple: the Distutils unixcompiler used to set the
> archiver command to "ar -rc".
>
> For quite a while now, this behavior has changed in order to be able
> to customize the compiler behavior from
> the environment. That introduced a regression because the mechanism in
> Distutils that looks for the
> AR variable in the environment also looks into the Makefile of Python.
> (in the Makefile then is os.environ)
>
> And as a matter of fact, AR is set to "ar" in there, so the -cr option
> is not set anymore.
>
> So my question is : should I make a change into the Makefile by adding
> for example a variable called AR_OPTIONS
> then build the ar command with AR + AR_OPTIONS

I think for consistency it could be named ARFLAGS (this is the name
usually used by configure scripts), and both should be overridable
like the other variables in distutils.sysconfig.customize_compiler.
Those flags should be used in Makefile.pre as well, instead of the
hardcoded cr used currently.

Here is what I would try:
 - check for AR (already done in the configure script AFAICT)
 - if ARFLAGS is defined in the environment, use it, otherwise set
ARFLAGS to cr
 - use ARFLAGS in the makefile

Then, in the customize_compiler function, set archiver to $AR +
$ARFLAGS. IOW, just copy the logic used for e.g. ldshared.
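
A minimal sketch of that logic (the ARFLAGS config variable is the
proposed addition, mirroring what customize_compiler already does for
LDSHARED):

    import os
    from distutils import sysconfig

    ar = os.environ.get('AR', sysconfig.get_config_var('AR') or 'ar')
    arflags = os.environ.get('ARFLAGS',
                             sysconfig.get_config_var('ARFLAGS') or 'cr')
    archiver = '%s %s' % (ar, arflags)   # e.g. "ar cr", each part overridable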

I can prepare a patch if you want,

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-08 Thread David Cournapeau
On Thu, Apr 9, 2009 at 4:45 AM, Alexander Neundorf
 wrote:

> I think cmake can do all of the above (cpack supports creating packages).

I am sure it is - it is just a lot of work, especially if you want to
stay compatible with distutils-built extensions :)

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-07 Thread David Cournapeau
On Wed, Apr 8, 2009 at 7:54 AM, Alexander Neundorf
 wrote:
> On Wed, Apr 8, 2009 at 12:43 AM, Greg Ewing  
> wrote:
>> David Cournapeau wrote:
>>>
>>> Having a full
>>> fledged language for complex builds is nice, I think most familiar
>>> with complex makefiles would agree with this.
>>
>> Yes, people will still need general computation in their
>> build process from time to time whether the build tool
>> they're using supports it or not.
>
> I have been maintaining the CMake-based buildsystem for KDE4 for 3 years
> now in my spare time: millions of lines of code, multiple code generators,
> all major operating systems. My experience is that people don't need
> general computation in their build process.
> CMake now supports more general-purpose programming features than it
> did 2 years ago, e.g. it now has functions with local variables, it
> can do simple math, regexps and other things.
> If we get to the point where this is not enough, it usually means a
> real program which does real work is required.
> In this case it's actually a good thing to have this as a separate
> tool, and not mixed into the buildsystem.
> Having a not very powerful, but therefor domain specific language for
> the buildsystem is really a feature :-)
> (even if it sounds wrong in the first moment).

Yes, there are some advantages to that. The point of using python, in
my mind, is to have the same language for the build specification and
for the extensions. For extensions, you really need a full language -
for example, if you want to add support for tools which generate files
that are not known in advance, and handle this correctly from a build
POV, a macro-like language is not sufficient.
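
A sketch of what this looks like in scons, where a plain python
function computes the extra outputs (mytool and the generated-.h
convention are hypothetical):

    from SCons.Script import Builder, Environment

    def gen_emitter(target, source, env):
        # the actual output list depends on the sources, so it is
        # computed here, in plain python, at dependency-analysis time
        extra = [str(s) + '.h' for s in source]
        return target + extra, source

    env = Environment()
    env.Append(BUILDERS={'Gen': Builder(action='mytool $SOURCE',
                                        emitter=gen_emitter)})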

>
> From what I saw when I was building Python I didn't actually see too
> complicated things. In KDE4 we are not only building and installing
> programs, but we are also installing and shipping a development
> platform. This includes CMake files which contain functionality which
> helps in developing KDE software, i.e. variables and a bunch of
> KDE-specific macros. They are documented here:
> http://api.kde.org/cmake/modules.html#module_FindKDE4Internal
> (this is generated automatically from the cmake file we ship).
> I guess something similar could be useful for Python, maybe this is
> what distutils actually do ?

distutils does roughly everything that autotools does, and more:
 - configuration: not often used in extensions, we (numpy) are the
exception I would guess
 - build
 - installation
 - tarball generation
 - bdist_ installers (msi, .exe on windows, .pkg/.mpkg on mac os x,
rpm/deb on Linux)
 - registration to pypi
 - more things which just elude me at the moment

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-07 Thread David Cournapeau
On Wed, Apr 8, 2009 at 6:42 AM, Alexander Neundorf
 wrote:

> What options ?

Compilation options. If you build an extension with distutils, the
extension is built with the same flags as the ones used to build
python; the options are taken from distutils.sysconfig (except for MS
compilers, which have their own options - one of the big pains in
distutils).
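
Concretely, these are the values a third-party build tool has to pick
up to stay compatible (typical linux output, as a sketch):

    >>> from distutils import sysconfig
    >>> sysconfig.get_config_var('CFLAGS')
    '-fno-strict-aliasing -g -O2 -Wall -Wstrict-prototypes'
    >>> sysconfig.get_config_var('LDSHARED')
    'gcc -pthread -shared'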

>
> Can you please explain ?

If you want to stay compatible with distutils, you have to do quite a
lot of things. Cmake (and scons, and waf) only handle the build, but
they can't handle all the packaging done by distutils (tarball
generation, binary generation, in-place builds, setuptools' develop
mode, eggs, .pyc and .pyo generation, etc.), so you have two choices:
add support for all this in the build tool (a lot of work) or just
hand back to distutils once everything is built with your tool of
choice.

> It is easy to run external tools with cmake at cmake time and at build
> time, and it is also possible to run them at install time.

Sure, what kind of build tool could not do that :) But given the design
of distutils, if you want to keep all its packaging features, you
can't just launch a few commands; you have to integrate them somewhat.
With cmake, every time you need something from distutils, you would
need to launch python, whereas with scons/waf you can just use
distutils as you would use any python library. That's just inherent to
the fact that waf/scons are in the same language as distutils; if we
were doing ocaml builds, having a build tool in ocaml would have been
easier, etc.

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-07 Thread David Cournapeau
On Wed, Apr 8, 2009 at 2:24 AM, Heikki Toivonen
 wrote:
> David Cournapeau wrote:
>> The hard (or rather time-consuming) work is to do everything else that
>> distutils does related to packaging. That's where scons/waf are
>> more interesting than cmake IMO, because you can "easily" hand this
>> task back to distutils, whereas it is inherently more difficult with
>> cmake.
>
> I think this was the first I heard about using SCons this way. Do you
> have any articles or examples of this? If not, could you perhaps write one?

I developed numscons as an experiment to build numpy, scipy, and other
complex python projects depending on many libraries/compilers:
http://github.com/cournape/numscons/tree/master

The general ideas are somewhat explained on my blog

http://cournape.wordpress.com/?s=numscons

And also the slides from SciPy08 conf:

http://conference.scipy.org/static/wiki/numscons.pdf

It is plugged into distutils through a scons command (which bypasses
all the compiled build_* ones, so that the whole build is done through
scons for correct dependency handling). It is not really meant as a
general replacement (it is too fragile, partly because of distutils,
partly because of scons, partly because of me), but it shows that this
is possible, and not only in theory.

cheers,

David


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-07 Thread David Cournapeau
On Tue, Apr 7, 2009 at 11:58 PM, M.-A. Lemburg  wrote:

>>
>> This means your proposal actually doesn't add any benefit over the
>> status quo, where you can have an __init__.py that does nothing but
>> declare the package a namespace.  We already have that now, and it
>> doesn't need a new filename.  Why would we expect OS vendors to start
>> supporting it, just because we name it __pkg__.py instead of __init__.py?
>
> I lost you there.
>
> Since when do we support namespace packages in core Python without
> the need to add some form of magic support code to __init__.py ?

I think P. Eby refers to the problem that most packaging systems don't
like several packages owning the same file - be it empty or not.
That's my main personal gripe against namespace packages, and from
this POV, I think it is fair to say the proposal does not solve
anything. Not that I have a solution, of course :)

cheers,

David
>
> My suggestion basically builds on the same idea as Martin's PEP,
> but uses a single __pkg__.py file as opposed to some non-Python
> file yaddayadda.pkg.
>
> Here's a copy of the proposal, with some additional discussion
> bullets added:
>
> """
> Alternative Approach:
> -
>
> Wouldn't it be better to stick with a simpler approach and look for
> "__pkg__.py" files to detect namespace packages using that O(1) check ?
>
> This would also avoid any issues you'd otherwise run into if you want
> to maintain this scheme in an importer that doesn't have access to a list
> of files in a package directory, but is well capable of checking
> the existence of a file.
>
> Mechanism:
> --
>
> If the import mechanism finds a matching namespace package (a directory
> with a __pkg__.py file), it then goes into namespace package scan mode and
> scans the complete sys.path for more occurrences of the same namespace
> package.
>
> The import loads all __pkg__.py files of matching namespace packages
> having the same package name during the search.
>
> One of the namespace packages, the defining namespace package, will have
> to include a __init__.py file.
>
> After having scanned all matching namespace packages and loading
> the __pkg__.py files in the order of the search, the import mechanism
> then sets the packages .__path__ attribute to include all namespace
> package directories found on sys.path and finally executes the
> __init__.py file.
>
> (Please let me know if the above is not clear, I will then try to
> follow up on it.)
>
> Discussion:
> ---
>
> The above mechanism allows the same kind of flexibility we already
> have with the existing normal __init__.py mechanism.
>
> * It doesn't add yet another .pth-style sys.path extension (which are
> difficult to manage in installations).
>
> * It always uses the same naive sys.path search strategy. The strategy
> is not determined by some file contents.
>
> * The search is only done once - on the first import of the package.
>
> * It's possible to have a defining package dir and add-on package
> dirs.
>
> * The search does not depend on the order of directories in sys.path.
> There's no requirement for the defining package to appear first
> on sys.path.
>
> * Namespace packages are easy to recognize by testing for a single
> resource.
>
> * There's no conflict with existing files using the .pkg extension
> such as Mac OS X installer files or Solaris packages.
>
> * Namespace __pkg__.py modules can provide extra meta-information,
> logging, etc. to simplify debugging namespace package setups.
>
> * It's possible to freeze such setups, to put them into ZIP files,
> or only have parts of it in a ZIP file and the other parts in the
> file-system.
>
> * There's no need for a package directory scan, allowing the
> mechanism to also work with resources that do not permit to
> (easily and efficiently) scan the contents of a package "directory",
> e.g. frozen packages or imports from web resources.
>
> Caveats:
>
> * Changes to sys.path will not result in an automatic rescan for
> additional namespace packages, if the package was already loaded.
> However, we could have a function to make such a rescan explicit.
> """
>
> --
> Marc-Andre Lemburg

Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-07 Thread David Cournapeau
On Tue, Apr 7, 2009 at 10:08 PM, Alexander Neundorf
 wrote:

>
> What is involved in building python extensions ? Can you please explain ?

Not much: at the core, a python extension is nothing more than a
dynamically loaded library + a couple of options. One choice is
whether to take options from distutils or to set them up
independently. In my own scons tool to build python extensions, both
are possible.
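
For reference, the minimal distutils spelling of "a dynamically loaded
library + a couple of options" (module and file names hypothetical):

    # setup.py -- builds spam.so (or spam.pyd) from one C source file
    from distutils.core import setup, Extension

    setup(name='spam',
          version='0.1',
          ext_modules=[Extension('spam', sources=['spammodule.c'])])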

The hard (or rather time-consuming) work is to do everything else that
distutils does related to packaging. That's where scons/waf are
more interesting than cmake IMO, because you can "easily" hand this
task back to distutils, whereas it is inherently more difficult with
cmake.

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-07 Thread David Cournapeau
On Tue, Apr 7, 2009 at 9:14 PM,   wrote:
>
>    Ondrej> ... while scons and other Python solutions imho encourage to
>    Ondrej> write full Python programs, which imho is a disadvantage for the
>    Ondrej> build system, as then every build system is nonstandard.
>
> Hmmm...  Like distutils setup scripts?

fortunately, waf and scons are much better than distutils, at least
for the build part :)

I think it is hard to overestimate the importance of a python solution
for python software (python itself is different). Having a
full-fledged language for complex builds is nice; I think most people
familiar with complex makefiles would agree with this.

>
> I don't know thing one about cmake, but if it's good for the goose (building
> Python proper) would it be good for the gander (building extensions)?

For complex software, especially anything relying on a lot of C and
platform idiosyncrasies, distutils is just too cumbersome and limited.
Both Ondrej and I use python for scientific work, and I think it is no
accident that we both looked for something else. In those cases, scons
- and cmake, it seems - are very nice; build tools are incredibly hard
to get right once you want to manage dependencies automatically.

For simple python projects (pure python, a few .c source files without
many dependencies), I think it is just overkill.

cheers,

David
>
> --
> Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/
>        "XML sucks, dictionaries rock" - Dave Beazley
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/cournape%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Mercurial?

2009-04-05 Thread David Cournapeau
On Sun, Apr 5, 2009 at 6:06 PM, "Martin v. Löwis"  wrote:
>> Off the top of my head, the following is needed for a successful migration:
>>
>>    - Verify that the repository at http://code.python.org/hg/ is
>> properly converted.
>
> I see that this has four branches. What about all the other branches?
> Will they be converted, or not? What about the stuff outside /python?
>
> In particular, the Stackless people have requested that they move along
> with what core Python does, so their code should also be converted.

I don't know the capabilities of hg w.r.t. svn conversion, so this may
well be overkill, but git has a really good tool for svn conversion
(svn-all-fast-export, developed by KDE). It can handle almost any svn
layout (e.g. outside the usual trunk/tags/branches), convert
committers' email addresses, split one big svn repo into subprojects,
etc. The git repo could then be converted to hg relatively easily, I
believe.
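
For reference, svn-all-fast-export is driven by a rules file that maps
svn paths onto git repositories and branches. A hypothetical minimal
setup (names and paths invented for illustration) would look roughly
like this:

# rules.txt - map svn paths onto git repositories/branches
create repository python
end repository

match /python/trunk/
    repository python
    branch master
end match

match /python/branches/([^/]+)/
    repository python
    branch \1
end match

$ svn-all-fast-export --identity-map authors.txt --rules rules.txt /path/to/svn-repo
$ hg convert python python-hg    # second step; needs hg's convert extension

(authors.txt maps svn usernames to full names and email addresses.)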

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-03-30 Thread David Cournapeau
On Tue, Mar 31, 2009 at 3:16 AM, Alexander Neundorf
 wrote:
>
> Can you please explain ? What is "those" ?

Everything in Lib. On Windows, I believe this is done through project
files, but on Linux at least, and I guess on most other OSes, those
are handled by distutils. I guess the lack of autoconf on Windows is
one reason for this difference?

>
>> Also, when converting a project from one build system to another,
>> doing the first 80% takes 20% of the time in my experience.
>
> Getting it working took me like 2 days, if that's 20% it's not too bad ;-)

So it means ten days of work to convert to a new system that most
Python maintainers may not know. What does it bring?

I think that, in the build department, supporting cross-compilation,
for example, would be more worthwhile.

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-03-30 Thread David Cournapeau
On Tue, Mar 31, 2009 at 2:37 AM, Alexander Neundorf
 wrote:
> On Mon, Mar 30, 2009 at 12:09 AM, Neil Hodgson  wrote:
> ...
>> while so I can't remember the details. The current Python project
>> files are hierarchical, building several DLLs and an EXE and I think
>> this was outside the scope of the tools I looked at.
>
> Not sure I understand.
> Having a project which builds (shared) libraries and executables which
> use them (and which maybe have to be executed later on during the
> build) is no problem for CMake, also with the VisualStudio projects.
> From what I remember when I wrote the CMake files for python it was
> quite straight forward.

I think Neil meant that since on Windows those are built with Visual
Studio project files, but everywhere else they are built with
distutils, you can't get a common system without first converting
everything to cmake on all the other platforms as well.

Also, when converting a project from one build system to another,
doing the first 80% takes 20% of the time in my experience. The most
time-consuming part is all the small details on the less common
platforms.

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-03-29 Thread David Cournapeau
On Mon, Mar 30, 2009 at 3:18 AM, Antoine Pitrou  wrote:

> What are the compilation requirements for cmake itself? Does it only need a
> standard C compiler and library, or are there other dependencies?

CMake is written in C++. IIRC, that's the only dependency.

cheers,

David


Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-03-29 Thread David Cournapeau
On Mon, Mar 30, 2009 at 2:59 AM, Antoine Pitrou  wrote:
> Jeffrey Yasskin  gmail.com> writes:
>>
>> The other popular configure+make replacement is scons.
>
> I can only give uninformed information (!) here, but in one company I worked
> with, the main project decided to switch from scons to cmake due to some huge
> performance problems in scons. This was in 2005-2006, though, and I don't know
> whether things have changed.

They haven't - scons is still slow. Python is not that big (from a
build POV), though, is it?

I would think the bootstrap problem is much more significant. I don't
find the argument that "many desktops already have python" very
convincing - what if you can't install it, for example? AFAIK, scons
does not run on Jython or IronPython.

>
> If you want to investigate Python-based build systems, there is waf (*), which
> apparently started out as a fork of scons (precisely due to the aforementioned
> performance problems). Again, I have never tried it.

Waf is definitely faster than scons - something like one order of
magnitude. I am not yet very familiar with waf, but I like what I saw -
the architecture is much nicer than scons' (waf's core is almost ten
times smaller than scons' core), but I would not call it a mature
project yet.

About cmake: I haven't looked at it recently, but I have a hard time
believing Python requires more from a build system than KDE does. The
claim that it lacks an autoheader equivalent is not accurate, if only
because KDE projects rely on one:

http://www.cmake.org/Wiki/CMake_HowToDoPlatformChecks
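
The pattern from that page, as a small sketch (the header and macro
names are just examples):

# CMakeLists.txt fragment - autoheader-style platform checks
INCLUDE(CheckIncludeFiles)
CHECK_INCLUDE_FILES(unistd.h HAVE_UNISTD_H)
CHECK_INCLUDE_FILES("sys/param.h;sys/mount.h" HAVE_SYS_MOUNT_H)

# Generates config.h from a template containing lines such as
#   #cmakedefine HAVE_UNISTD_H
CONFIGURE_FILE(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in
               ${CMAKE_CURRENT_BINARY_DIR}/config.h)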

Whether it would really be a win for Python compared to the current
system, I have no idea.

David

