[issue40249] __import__ doesn't honour globals

2020-04-14 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

I'm not entirely sure, but have to admit that the sentence

"The function imports the module name, potentially using the given globals and 
locals to determine how to interpret the name in a package context."

is a bit obscure. What does "determine how to interpret the name" actually mean? 
Is the algorithm described anywhere in detail? If so, a simple reference might 
be enough.
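
To make the question more concrete, here is a sketch of my current (possibly 
incomplete) understanding, using a made-up package `pkg` with a submodule 
`helper`:

```
# Inside a module that belongs to the package "pkg", this is roughly what
# "from . import helper" expands to.  The globals() argument only tells
# __import__ which package the relative name is resolved against (it looks
# at entries such as __package__ and __name__); nothing from globals() is
# injected into the imported module.
parent = __import__('', globals(), locals(), ['helper'], 1)
helper = parent.helper
```

If that is indeed all the globals and locals arguments are used for, a sentence 
to that effect (or a reference to the importlib documentation) would already help.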

--




[issue40249] __import__ doesn't honour globals

2020-04-10 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

OK, thanks for the clarification. I think this is obscure enough to warrant at 
least a paragraph or two to clarify the semantics of these arguments.
I changed the issue "components" and "type" to reflect that.

--
assignee:  -> docs@python
components: +Documentation -Interpreter Core
nosy: +docs@python
type: behavior -> enhancement




[issue40249] __import__ doesn't honour globals

2020-04-10 Thread Stefan Seefeld


New submission from Stefan Seefeld :

I'm trying to import custom scripts that expect the presence of certain 
variables in the global namespace.
It seems `__import__('script', globals=dict(foo='bar'))` doesn't have the 
expected effect of injecting "foo" into the namespace and making it accessible 
to the script being loaded.

Is this a bug in `__import__` or am I missing something ? If the behaviour is 
expected, how can I achieve the desired behaviour ?
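
For context, the effect I'm after can of course be had by executing the script's 
source in a prepared namespace myself; a minimal sketch (the file name is just a 
placeholder):

```
namespace = {'foo': 'bar'}          # names the script expects to find as globals
with open('script.py') as f:        # placeholder path
    exec(f.read(), namespace)       # the executed code sees `foo`
```

But I was hoping `__import__` (or importlib) would let me do this for a proper 
module import.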

--
components: Interpreter Core
messages: 366160
nosy: stefan
priority: normal
severity: normal
status: open
title: __import__ doesn't honour globals
type: behavior
versions: Python 3.6




[issue35830] building multiple (binary) packages from a single project

2019-03-08 Thread Stefan Seefeld


Change by Stefan Seefeld :


--
resolution:  -> works for me
stage:  -> resolved
status: open -> closed




[issue35830] building multiple (binary) packages from a single project

2019-01-25 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

Yes. Depending on the answer to my question(s), the request either becomes: 
"please add support for this use-case", or "this use-case isn't documented 
properly", i.e. a feature request or a bug report. You choose. :-)

--




[issue35830] building multiple (binary) packages from a single project

2019-01-25 Thread Stefan Seefeld


New submission from Stefan Seefeld :

I'm working on a project that I'd like to split into multiple separately 
installable components. The main component is a command-line tool without any 
external dependencies. Another component is a GUI frontend that adds some 
third-party dependencies.
Therefore, I'd like to distribute the code in a single source package, but 
separate binary packages (so users can install only what they actually need).

I couldn't find any obvious way to support such a scenario with either 
`distutils` or `setuptools`. Is there an easy solution to this? (I'm currently 
thinking of adding two `setup()` calls to my `setup.py` script. That would then 
run all commands twice, so I'd need to override the `sdist` command to only 
build a single, joint source package.)
Is there a better way to achieve what I want?
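
For comparison, a hedged sketch of one conventional alternative (setuptools 
`extras_require`, which keeps a single package but makes the GUI's third-party 
dependencies optional); all names below are made up, and it still produces a 
single binary package rather than the separate ones described above:

```
from setuptools import setup, find_packages

setup(
    name='mytool',                # hypothetical project name
    version='0.1',
    packages=find_packages(),
    entry_points={
        'console_scripts': ['mytool = mytool.cli:main'],  # hypothetical entry point
    },
    extras_require={
        # GUI-only dependencies, installed with: pip install mytool[gui]
        'gui': ['PyQt5'],
    },
)
```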

--
assignee: docs@python
components: Distutils, Documentation
messages: 334381
nosy: docs@python, dstufft, eric.araujo, stefan
priority: normal
severity: normal
status: open
title: building multiple (binary) packages from a single project
type: behavior




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-07 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

> The limitation is a consequence of how Linux works.
> Unix has no cross-platform API for non-blocking waiting for child process 
> finish except handling SIGCHILD signal.

Why does the `wait()` have to be non-blocking ? We can call it once in 
response to the reception of a `SIGCHILD`, where we know the call 
wouldn't block. Then we can pass the `pid` to whatever event loop 
created the subprocess to do the cleanup there...

> On the other hand signal handlers in Python should work in the main thread.

That's fine.

> Your trick with a loop creation in the main thread and actual running in 
> another thread can work, but asyncio doesn't guarantee it.
> The behavior can be broken in next releases, sorry.

Yeah, I observed some strange issues that looked like they could be 
fixed by someone intimately familiar with `asyncio`. But given the 
documented limitation, it seemed wise not to descend into that rabbit 
hole, and so I (at least temporarily) abandoned the entire approach.

--

Stefan

   ...ich hab' noch einen Koffer in Berlin...

--




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-07 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

OK, so while I have been able to work around the issues (by using `quamash` to 
bridge between `asyncio` and `Qt`), I'd still like to understand the rationale 
behind the limitation that any subprocess-managing event-loop has to run in the 
main thread. Is this really an architectural limitation or a limit of the 
current implementation ?

And to your question: As I wasn't really expecting such a limitation, I would 
have expected 
"To handle signals and to execute subprocesses, the event loop must be run 
in the main thread."

to be written much more prominently (as a warning admonition even, perhaps).

--




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-04 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

That's quite an unfortunate limitation! I'm working on a GUI frontend to a 
Python tool I wrote using asyncio, and the GUI (Qt-based) itself insists on 
running its own event loop in the main thread.

I'm not sure how to work around this limitation, but I can report that my 
previously reported strategy appears to be working well (so far).

What are the issues I should expect to encounter running an asyncio event loop 
in a worker thread ?

--




[issue35635] asyncio.create_subprocess_exec() only works in main thread

2019-01-01 Thread Stefan Seefeld


New submission from Stefan Seefeld :

This is an addendum to issue35621:

To be able to call `asyncio.create_subprocess_exec()` from another thread, a 
separate event loop needs to be created. To make the child watcher aware of 
this new loop, I have to call `asyncio.get_child_watcher().attach_loop(loop)`. 
However, in the current implementation this call needs to be made by the main 
thread (or else the `signal` module will complain as handlers may only be 
registered in the main thread).

So, to work around the above limitations, the following workflow needs to be 
used:

1) create a new loop in the main thread
2) attach it to the child watcher
3) spawn a worker thread
4) set the previously created event loop as default loop

After that, I can run `asyncio.create_subprocess_exec()` in the worker thread. 
However, I suppose the worker thread will be the only thread able to call that 
function, given the child watcher's limitation to a single loop.
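
In code, the workflow sketched above looks roughly like this (a minimal, 
untested sketch; `'echo'` is just a stand-in command):

```
import asyncio
import threading

async def run():
    proc = await asyncio.create_subprocess_exec('echo', 'hello')
    await proc.wait()

def worker(loop):
    asyncio.set_event_loop(loop)        # step 4: make the loop the default in this thread
    loop.run_until_complete(run())

loop = asyncio.new_event_loop()                   # step 1: create the loop in the main thread
asyncio.get_child_watcher().attach_loop(loop)     # step 2: attach it to the child watcher
thread = threading.Thread(target=worker, args=(loop,))
thread.start()                                    # step 3: spawn the worker thread
thread.join()
```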

Am I missing something ? Given the complexity of this, I would expect this to 
be better documented in the sections explaining how `asyncio.subprocess` and 
`threading` interact.

--
components: asyncio
messages: 332855
nosy: asvetlov, stefan, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.create_subprocess_exec() only works in main thread
type: behavior




[issue35621] asyncio.create_subprocess_exec() only works with main event loop

2019-01-01 Thread Stefan Seefeld


Change by Stefan Seefeld :


--
nosy: +stefan




[issue35449] documenting objects

2018-12-10 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

ad 3) sorry, I picked a bad example - I didn't mean to suggest that immutable 
objects should in fact become mutable by modifying their `__doc__` attribute.

ad 1) good, glad to hear that.

ad 2) fine. In fact, I'm not even proposing that per-instance docstring 
generation should be "on" by default. I'm merely asking whether the Python 
community can (or even should) agree on a single convention for how to 
represent them, such that specialized tools can then support them, rather than 
different tools each inventing their own syntax / conventions.

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

On 2018-12-09 19:48, Karthikeyan Singaravelan wrote:
> There was a related proposal in 
> https://www.python.org/dev/peps/pep-0258/#attribute-docstrings

Right, but that was rejected (for unrelated reasons). The idea itself
was rejected by Guido
(https://www.python.org/dev/peps/pep-0224/#comments-from-our-bdfl), and
I'm not aware of anyone having addressed his concerns by proposing a
different syntax.

It's sad, as right now there doesn't appear to be any way to address
this need...

Stefan

-- 

  ...ich hab' noch einen Koffer in Berlin...

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

On 2018-12-09 18:35, Steven D'Aprano wrote:
> Steven D'Aprano  added the comment:
>
>> Is there any discussion concerning what syntax might be used for 
>> docstrings associated with objects ?
> I don't know about PyDoc in general, but I would expect help(obj) to 
> just use obj.__doc__ which will return the instance docstring if it 
> exists, and if not, the type docstring (if it exists). No new syntax is 
> required, the standard ``help(obj)`` is sufficient.

That's why I distinguished between points 1) and 2) in my original mail:
The syntax is about how certain tokens in the parse tree are associated
as "docstring" with a given object (i.e., point 1), while the pydoc's
behaviour (to either accept any `__doc__` attributes, or only those of
specific types of objects) is entirely orthogonal to that (thus point 2).

I now understand that the current `pydoc` behaviour is considered
erroneous, and it sounds like a fix would be simple and focused in scope.

>> (There seem to be some partial 
>> solutions added on top of the Python parser (I think `epydoc` offered 
>> one), but it would be nice to have a built-in solution to avoid having 
>> to re-invent wheels.
> Are you suggesting we need new syntax to automatically assign docstrings 
> to instances? I don't think we do.

No, I'm not suggesting that. I'm suggesting that within the current
syntax, some additional semantic rules might be required to bind
comments (or strings) to objects as "docstrings". For example:

```
foo = 123
"""This is foo's docstring"""
```

might be one convention to add a docstring to a variable.

```
foo = 123
# This is foo's docstring
```

might be another.

None of this is syntactically new, but the construction of the AST from
the parse tree is. (I have seen both of these conventions used in custom
tools to associate documentation with variables, which of course requires
hacking into the parser internals to add the given docstring to the
object's `__doc__` attribute.)

It would be great to establish a convention for this, so in the future
tools don't have to invent their own (non-portable) convention.

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


Stefan Seefeld  added the comment:

Exactly! I'm fully aware of the ubiquity of objects in Python, and it is for 
that reason that I had naively expected `pydoc` to simply DoTheRightThing when 
encountering an object containing a `__doc__` attribute, rather than only 
working for types and function objects.

OK, assuming that this is a recognized bug / limitation, it seems easy to 
address.

Is there any discussion concerning what syntax might be used for docstrings 
associated with objects? (There seem to be some partial solutions added on top 
of the Python parser (I think `epydoc` offered one), but it would be nice to 
have a built-in solution to avoid having to re-invent wheels.)

--




[issue35449] documenting objects

2018-12-09 Thread Stefan Seefeld


New submission from Stefan Seefeld :

On multiple occasions I have wanted to add documentation not only to Python 
classes and functions, but also to instance variables. This seems to involve 
(at least) two orthogonal questions:

1) what is the proper syntax to associate documentation (docstrings?) with 
objects?
2) what changes need to be applied to Python's infrastructure (e.g., the help 
system) to support it ?


I have attempted to work around 1) in my custom code by explicitly setting an 
object's `__doc__` attribute. However, calling `help()` on such an object would 
simply ignore that attribute, and instead list the documentation associated 
with the instance type.
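
A minimal sketch of what I mean (class and attribute names are made up):

```
class Widget:
    """Widget class docstring."""

w = Widget()
w.__doc__ = "Documentation specific to this widget instance."

print(w.__doc__)   # the per-instance docstring is there...
help(w)            # ...but help()/pydoc shows the Widget class docstring instead
```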

Am I missing something here, i.e. am I approaching the problem the wrong way, 
or am I the first to want to use object-specific documentation ?

--
messages: 331443
nosy: stefan
priority: normal
severity: normal
status: open
title: documenting objects
type: enhancement




[issue33389] argparse redundant help string

2018-04-29 Thread Stefan Seefeld

New submission from Stefan Seefeld :

I'm using Python's `argparse` module to define optional arguments.
I'm calling the parser's `add_argument` method to add both a short and a long 
option string, but I notice that the generated help message lists some 
information twice. For example:
```
parser.add_argument('-s', '--service', ...)
```
will generate

```
-s SERVICE, --service SERVICE
```
and when I add a `choices` argument, even the choices list is repeated. I think 
it would be more useful to suppress the repetition to produce output such as
```
-s|--service SERVICE ...
```
instead, with both the metavar and the choices etc. printed only once.
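
For what it's worth, something close to that output can apparently be produced 
today by overriding the help formatter; a hedged sketch that relies on private 
`HelpFormatter` internals (so it may break between Python versions):

```
import argparse

class CompactHelpFormatter(argparse.HelpFormatter):
    """List the option strings once, followed by a single metavar."""
    def _format_action_invocation(self, action):
        if not action.option_strings or action.nargs == 0:
            return super()._format_action_invocation(action)
        default = self._get_default_metavar_for_optional(action)
        args_string = self._format_args(action, default)
        return ', '.join(action.option_strings) + ' ' + args_string

parser = argparse.ArgumentParser(formatter_class=CompactHelpFormatter)
parser.add_argument('-s', '--service', choices=['web', 'db'])
parser.print_help()   # shows "-s, --service {web,db}" only once
```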

--
components: Library (Lib)
messages: 315917
nosy: stefan
priority: normal
severity: normal
status: open
title: argparse redundant help string
type: enhancement




[issue32036] error mixing positional and non-positional arguments with `argparse`

2017-11-15 Thread Stefan Seefeld

Stefan Seefeld  added the comment:

It looks like https://bugs.python.org/issue14191 is a conversation about the 
same inconsistent behaviour. It is set to "fixed". Can you comment on this ? 
Should I follow the advice mentioned there about how to work around the issue ?

--




[issue32036] error mixing positional and non-positional arguments with `argparse`

2017-11-15 Thread Stefan Seefeld

Stefan Seefeld  added the comment:

On 15.11.2017 12:54, R. David Murray wrote:
> Can you reproduce this without your PosArgsParser?
I can indeed (by simply commenting out the `action` argument to the
`add_argument()` calls).
That obviously results in all positional arguments being accumulated in
the `goal` member, as there is no logic to distinguish `a` from `a=b`
semantically.

--




[issue32036] error mixing positional and non-positional arguments with `argparse`

2017-11-15 Thread Stefan Seefeld

New submission from Stefan Seefeld :

I'm trying to mix positional and non-positional arguments with a script using 
`argparse`, but I observe inconsistent behaviour.
The attached test runs fine when invoked with

test_argparse.py --info a a=b
test_argparse.py a a=b --info

but produces the error `error: unrecognized arguments: a=b` when invoked as

test_argparse.py a --info a=b

Is this intended behaviour ? If yes, is this documented ? If not, is there a 
way to make this work with existing `argparse` versions ?
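
A reduced sketch of the behaviour (using a plain `nargs='*'` positional rather 
than the custom action in the attached test_argparse.py); note that newer Python 
versions (3.7+) added `parse_intermixed_args()` for exactly this kind of mixing:

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--info', action='store_true')
parser.add_argument('goal', nargs='*')

parser.parse_args(['--info', 'a', 'a=b'])             # works
# parser.parse_args(['a', '--info', 'a=b'])           # error: unrecognized arguments: a=b
parser.parse_intermixed_args(['a', '--info', 'a=b'])  # works on Python 3.7+
```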

--
components: Library (Lib)
files: test_argparse.py
messages: 306283
nosy: stefan
priority: normal
severity: normal
status: open
title: error mixing positional and non-positional arguments with `argparse`
type: behavior
Added file: https://bugs.python.org/file47268/test_argparse.py




[issue30496] Incomplete traceback with `exec` on SyntaxError

2017-05-29 Thread Stefan Seefeld

Stefan Seefeld added the comment:

OK, fair enough, that makes sense. As I said in my last message, I was mainly 
trying to figure out the exact location of the error in the executed 
script, which I got from inspecting the SyntaxError.
So if all of this is expected behaviour, I think we can close this issue.

Many thanks for following up.

--
resolution:  -> not a bug
stage: needs patch -> resolved
status: open -> closed




[issue30496] Incomplete traceback with `exec` on SyntaxError

2017-05-29 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Answering my own question:

It appears I can get the location of a syntax error by inspecting the
raised `SyntaxError`, which solves my specific use-case. 
The bug remains, though: The traceback is incomplete if it stems from a syntax 
error.
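
Concretely, the attributes on the exception give the inner location even though 
the traceback doesn't; a minimal sketch:

```
import sys
import traceback

try:
    with open('script') as f:
        exec(compile(f.read(), 'script', 'exec'), {})
except SyntaxError as e:
    # the location of the syntax error lives on the exception object
    print(e.filename, e.lineno, e.offset)
    print(e.text)
except Exception:
    traceback.print_tb(sys.exc_info()[2])
```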

--




[issue30496] Incomplete traceback with `exec` on SyntaxError

2017-05-29 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Some further experiments:

Replacing the `exec(f.read(), env)` line by
```
code = compile(f.read(), 'script', 'exec')
exec(code, env)
```
```
exhibits the same behaviour. If I remove the `try...except`, the correct
(full) traceback is printed out. So it looks like the issue is with the 
traceback propagation through exception handlers when the error happens during 
parsing.

--




[issue30496] Incomplete traceback with `exec`

2017-05-28 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The following code is supposed to catch and report errors encountered during 
the execution of a (python) script:

```
import traceback
import sys

try:
    env = {}
    with open('script') as f:
        exec(f.read(), env)
except:
    type_, value_, tb = sys.exc_info()
    print(traceback.print_tb(tb))
```
However, depending on the nature of the error, the traceback may contain the 
location of the error *within* the executed `script` file, or it may only 
report the above `exec(f.read(), env)` line.

The attached tarball contains both the above as well as a 'script' that exhibit 
the problem.

Is this a bug or am I missing something ? Are there ways to work around this, 
i.e. determine the correct (inner) location of the error ?

(I'm observing this with both Python 2.7 and Python 3.5)

--
files: pyerror.tgz
messages: 294645
nosy: stefan
priority: normal
severity: normal
status: open
title: Incomplete traceback with `exec`
type: behavior
Added file: http://bugs.python.org/file46908/pyerror.tgz




[issue26481] unittest discovery process not working without .py source files

2016-03-04 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The unittest test discovery right now only looks into sub-packages if they 
contain an `__init__.py` file. That's an unnecessary requirement, as packages 
are also importable if only an `__init__.pyc` is present.

--
components: Library (Lib)
messages: 261192
nosy: stefan
priority: normal
severity: normal
status: open
title: unittest discovery process not working without .py source files




[issue25520] unittest load_tests protocol not working as documented

2016-01-21 Thread Stefan Seefeld

Stefan Seefeld added the comment:

I believe what I actually want is for the discovery mechanism to be fully 
implicit. It turns out that already *almost* works right now.

What doesn't work (and what this bug report really was about initially) is the 
use of the 'discover' command with the '-p "*.py"' argument, which for some 
reason makes certain tests (all?) get counted twice. It looks like packages are 
visited twice, once as modules, and once via their contained '__init__.py' 
file...

(For the implicit discovery to work better, I believe, the discovery-specific 
options need to be made available through the main parser, so they can be used 
even without the 'discover' command.)

--




[issue25520] unittest load_tests protocol not working as documented

2016-01-20 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Hi, I'm investigating more issues related to test loading, and thus I have 
discovered issue #16662.

I have found quite a number of inconsistencies and bugs in the loading of 
tests. But without getting a sense of what the intended behavior is, I find it 
difficult to come up with workarounds and fixes.

Is there a place where these issues can be discussed (rather than just looking 
at each bug individually) ?

I'd ultimately like to be able to invoke

  `python -m unittest my.test.subpackage` 

and have unittest pick up all the tests within that recursively, with and 
without load_tests() functions.

(On a tangential note, I would also like to have a mode where the found tests 
are listed without being executed. I have hacked a pseudo-TestRunner that does 
that, but I'm not sure this is the best approach.)

Is there any place where the bigger picture can be discussed ?

Thanks,

--




[issue26033] distutils default compiler API is incomplete

2016-01-06 Thread Stefan Seefeld

New submission from Stefan Seefeld:

I'm trying to use the distutils compiler framework to preprocess a header (to 
be used with the cffi package).
The code is

import sys
import distutils.ccompiler
from os.path import join

compiler = distutils.ccompiler.new_compiler()
compiler.add_include_dir(join(sys.prefix, 'include'))
compiler.preprocess(source)

This raises this exception (on Linux):

  File ".../distutils/unixccompiler.py", line 88, in preprocess
pp_args = self.preprocessor + pp_opts
TypeError: unsupported operand type(s) for +: 'NoneType' and 'list'

caused by `self.preprocessor` being set to None. The attribute is defined with the preceding comment:

# The defaults here
# are pretty generic; they will probably have to be set by an outsider
# (eg. using information discovered by the sysconfig about building
# Python extensions).

It seems that code path never got fully implemented.
Further, the MSVC version of the compiler (msvccompiler.py) doesn't even 
implement a "preprocess()" method, so it falls back to the 
CCompiler.preprocess() default, which does nothing!
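
A sketch of a possible workaround on Unix, assuming 
`distutils.sysconfig.customize_compiler()` fills in the executables (including 
the preprocessor) from the build configuration, which is what it appears to do 
for the unix compiler:

```
import sys
from os.path import join
from distutils.ccompiler import new_compiler
from distutils.sysconfig import customize_compiler

compiler = new_compiler()
customize_compiler(compiler)     # populate compiler/preprocessor executables from sysconfig
compiler.add_include_dir(join(sys.prefix, 'include'))
compiler.preprocess('header.h')  # placeholder header name
```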

--
components: Distutils
messages: 257663
nosy: dstufft, eric.araujo, stefan
priority: normal
severity: normal
status: open
title: distutils default compiler API is incomplete
type: behavior
versions: Python 2.7, Python 3.5




[issue25726] sys.setprofile / sys.getprofile asymetry

2015-11-24 Thread Stefan Seefeld

New submission from Stefan Seefeld:

I'm using the `cProfile` module to profile my code.

I tried to temporarily disable the profiler by using:

  prof = sys.getprofile()
  sys.setprofile(None)
  ...
  sys.setprofile(prof)

resulting in an error.

The reason is that with `cProfile`, `sys.getprofile` returns the profile object 
itself, which isn't suitable as an argument for `sys.setprofile` (which expects 
a callable).

Notice that if I use the `profile` module instead of `cProfile`, the above 
works fine.
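
A sketch of an alternative that sidesteps the asymmetry altogether, using the 
`cProfile.Profile` object's own enable()/disable() methods to pause collection:

```
import cProfile

prof = cProfile.Profile()
prof.enable()
# ... profiled code ...
prof.disable()       # temporarily stop collecting
# ... code that should not be profiled ...
prof.enable()        # resume
prof.disable()
prof.print_stats()
```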

--
messages: 255301
nosy: stefan
priority: normal
severity: normal
status: open
title: sys.setprofile / sys.getprofile asymetry
type: behavior
versions: Python 3.4




[issue25520] unittest load_tests protocol not working as documented

2015-10-30 Thread Stefan Seefeld

New submission from Stefan Seefeld:

As described in the README contained in the attached tarball, I'm observing 
wrong behavior. I have based this code on my understanding of 
https://docs.python.org/2/library/unittest.html#load-tests-protocol, but the 
effect isn't as expected (I see duplicate appearances of tests whenever I use 
the load_tests() mechanism).
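
For reference, the shape of the hook as I understand it from the documentation 
(a minimal sketch, not the code from the tarball):

```
import unittest

# in a test module (or a package's __init__.py):
def load_tests(loader, standard_tests, pattern):
    suite = unittest.TestSuite()
    suite.addTests(standard_tests)   # keep the tests discovered so far
    # ... add or filter tests here ...
    return suite
```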

--
files: unittest_bug.tgz
messages: 253757
nosy: stefan
priority: normal
severity: normal
status: open
title: unittest load_tests protocol not working as documented
type: behavior
versions: Python 2.7, Python 3.4
Added file: http://bugs.python.org/file40905/unittest_bug.tgz




[issue2101] xml.dom documentation doesn't match implementation

2008-02-13 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The docs at http://docs.python.org/lib/dom-element-objects.html
claim that removeAttribute(name) silently ignores the attempt to
remove an unknown attribute. However, the current implementation
in the minidom module (part of _xmlplus) raises an xml.dom.NotFoundErr
exception.
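
A minimal reproduction sketch of the mismatch:

```
import xml.dom
from xml.dom.minidom import parseString

element = parseString('<root/>').documentElement
# The documentation says this is silently ignored for an unknown attribute;
# minidom instead raises xml.dom.NotFoundErr:
element.removeAttribute('missing')
```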

--
components: Documentation, XML
messages: 62359
nosy: stefan
severity: normal
status: open
title: xml.dom documentation doesn't match implementation
type: behavior
versions: Python 2.5




[issue2041] __getslice__ still called

2008-02-07 Thread Stefan Seefeld

Stefan Seefeld added the comment:

Mark,

thanks for the quick follow-up.
OK, I now understand the situation better. The documentation I had read 
originally didn't talk about special-casing built-in objects. (And since 
I want to extend a tuple, I do have to override __getslice__ to make 
sure the returned object still has the derived type.)

Yes, I believe this issue can be closed as invalid.
(Though I believe the docs could be a bit more clear about this.)
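
To illustrate the kind of override I mean (a minimal sketch for Python 2, not 
taken from my actual code):

```
class Tuple(tuple):
    def __getitem__(self, index):
        result = tuple.__getitem__(self, index)
        if isinstance(index, slice):
            # preserve the derived type for (extended) slices, e.g. t[0:4:2]
            return Tuple(result)
        return result

    def __getslice__(self, i, j):
        # simple slices t[i:j] on a subclass of a built-in sequence still go
        # through __getslice__, so it has to be overridden as well
        return Tuple(tuple.__getslice__(self, i, j))
```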

Thanks,
Stefan

-- 

   ...ich hab' noch einen Koffer in Berlin...




[issue2041] __getslice__ still called

2008-02-07 Thread Stefan Seefeld

New submission from Stefan Seefeld:

The Python documentation states that since Python 2.0 __getslice__ is
obsoleted by __getitem__. However, testing with Python 2.3 as well as
2.5, I find the following surprising behavior:

class Tuple(tuple):

  def __getitem__(self, i): print '__getitem__', i
  def __getslice__(self, i, j): print '__getslice__', i, j

t = Tuple()
t[0] # __getitem__ called with type(i) == int
t[0:2] # __getslice__ called with type(i) == type(j) == int
t[0:2:1] # __getitem__ called with type(i) == slice

--
components: Interpreter Core
messages: 62162
nosy: stefan
severity: major
status: open
title: __getslice__ still called
type: behavior
versions: Python 2.3, Python 2.5
