[issue44524] __name__ attribute in typing module

2021-08-05 Thread Bas van Beek


Bas van Beek  added the comment:

All right, the `__name__` bug fix is up at 
https://github.com/python/cpython/pull/27614.

--




[issue44524] __name__ attribute in typing module

2021-08-05 Thread Bas van Beek


Change by Bas van Beek :


--
pull_requests: +26108
stage: resolved -> patch review
pull_request: https://github.com/python/cpython/pull/27614




[issue44524] __name__ attribute in typing module

2021-08-05 Thread Bas van Beek


Bas van Beek  added the comment:

I do agree that it's nice to have a `__name__` for special forms, as they do 
very much behave like types even though they're strictly speaking not distinct 
classes.

Whether we should have this feature is a distinct "problem" from its `__name__` 
being `None` (as can happen in the current implementation), the latter of which 
is actively breaking tests over in https://github.com/numpy/numpy/pull/19612.
I don't recall ever seeing a non-string `__name__` before, so I'd argue that 
this is a bug.

It seems to be easy to fix though (see below for a `Union` example), so if 
there are no objections I'd like to submit a PR.

```
-return _UnionGenericAlias(self, parameters)
+return _UnionGenericAlias(self, parameters, name="Union")
```

--




[issue44524] __name__ attribute in typing module

2021-08-04 Thread Bas van Beek


Bas van Beek  added the comment:

The PRs herein have created a situation wherein the `__name__`/`__qualname__` 
attributes of certain typing objects can be `None`.
Is this behavior intentional?


```
>>> from typing import Union

>>> print(Union[int, float].__name__)
None
```
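
Until this is settled, downstream code can avoid relying on the attribute 
directly; a small defensive sketch (my own illustration, not from the report; 
the fallback string is just an example):

```
from typing import Union

alias = Union[int, float]
# getattr with a fallback keeps working whether __name__ is missing, None,
# or a proper string on the running Python version.
name = getattr(alias, "__name__", None) or "Union"
```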

--
nosy: +BvB93




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-21 Thread Bas van Beek


Bas van Beek  added the comment:

> Do any of the other deleted types strike you as possibly needing to come back?

I don't think so, no. The only other one that stands out to me is 
`DictProxyType`, which has already been reintroduced as `MappingProxyType`.

--




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-21 Thread Bas van Beek


Bas van Beek  added the comment:

According to the relevant commit 
(https://github.com/python/cpython/commit/c9543e42330e5f339d6419eba6a8c5a61a39aeca):
"Removed all types from the 'types' module that are easily accessible through 
builtins."

--




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-21 Thread Bas van Beek


Bas van Beek  added the comment:

`NoneType` and `NotImplementedType` have been added to the PR as of the latest 
set of pushes.

--




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-21 Thread Bas van Beek


Bas van Beek  added the comment:

Apparently pyright has some interest in `NoneType` 
(https://github.com/python/typeshed/pull/4519), so it seems there are already 
some actual use cases.

In any case, I'm ok with adding `NoneType` and `NotImplementedType` to the PR.

--




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-21 Thread Bas van Beek


Bas van Beek  added the comment:

If we're going ahead with this: PR https://github.com/python/cpython/pull/22336 
contains a concrete implementation of the proposed changes.

--




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-21 Thread Bas van Beek


Change by Bas van Beek :


--
keywords: +patch
pull_requests: +21381
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/22336




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-19 Thread Bas van Beek


Bas van Beek  added the comment:

If you're asking whether or not one can infer the return type of 
`type(Ellipsis)`, then yes: in that case the inferred type is 
`builtins.ellipsis`, which is a private stub-only class (see the referenced 
typeshed issue in my original post).

If you're asking if a valid annotation can be constructed from `type(Ellipsis)` 
then the answer is unfortunately no (see below for a few examples).

```
EllipsisType = type(Ellipsis)

# Both examples are considered invalid
def func1(a: type(Ellipsis)): ...
def func2(a: EllipsisType): ...

```
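
For comparison, once `types.EllipsisType` exists again (the reintroduction 
proposed in issue41810, discussed further on), the annotation becomes 
straightforward; a hedged sketch assuming that addition has landed (the 
function name is illustrative):

```
from types import EllipsisType  # assumes the reintroduced alias is available

def accepts_ellipsis(a: EllipsisType) -> None: ...
```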

--




[issue41810] Consider reintroducing `types.EllipsisType` for the sake of typing

2020-09-18 Thread Bas van Beek


New submission from Bas van Beek :

`Ellipsis` is one of the few builtin objects whose type is not exposed via the 
`types` module. 
This is not really an issue at runtime, as you can always call 
`type(Ellipsis)`, but for the purpose of typing it is detrimental; the lack of a 
suitable type means that it is impossible to properly annotate a function which 
takes or returns `Ellipsis` (unless one is willing to resort to the use of 
non-public types: https://github.com/python/typeshed/issues/3556).

In order to resolve this issue I propose to reintroduce `types.EllipsisType`. 
This should be a fairly simple process, so if there are no objections I'd be 
willing to give it a shot.

--
components: Library (Lib)
messages: 377135
nosy: BvB93
priority: normal
severity: normal
status: open
title: Consider reintroducing `types.EllipsisType` for the sake of typing
type: enhancement
versions: Python 3.10




[issue38856] asyncio ProactorEventLoop: wait_closed() can raise ConnectionResetError

2020-07-16 Thread Bas Nijholt


Bas Nijholt  added the comment:

I have noticed the following on Linux too:
```
Traceback (most recent call last):
  File "/config/custom_components/kef_custom/aiokef.py", line 327, in _disconnect
    await self._writer.wait_closed()
  File "/usr/local/lib/python3.7/asyncio/streams.py", line 323, in wait_closed
    await self._protocol._closed
  File "/config/custom_components/kef_custom/aiokef.py", line 299, in _send_message
    await self._writer.drain()
  File "/usr/local/lib/python3.7/asyncio/streams.py", line 339, in drain
    raise exc
  File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 814, in _read_ready__data_received
    data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
```
after which every invocation of `await asyncio.open_connection` fails with 
`ConnectionRefusedError` until I restart the entire Python process.

IIUC, just ignoring the exception will not be enough.

This issue is about the `ProactorEventLoop` on Windows specifically; my 
process, however, uses the default event loop on Linux.

--
nosy: +basnijholt




[issue36281] OSError: handle is closed for ProcessPoolExecutor and run_in_executor

2020-04-10 Thread Bas Nijholt


Bas Nijholt  added the comment:

Using `git bisect` I've discovered the commit (b713adf27a) 
(https://github.com/python/cpython/commit/b713adf27a) that broke the code.

I've used one script:
```test.py
import sys
sys.path.append("/Users/basnijholt/Downloads/cpython/Lib/concurrent/futures/")
from random import random
from process import ProcessPoolExecutor
import asyncio

ioloop = asyncio.get_event_loop()

async def func(ioloop, executor):
    result = await ioloop.run_in_executor(executor, random)
    executor.shutdown(wait=False)  # bug doesn't occur when `wait=True`

if __name__ == "__main__":
    executor = ProcessPoolExecutor()
    task = ioloop.run_until_complete(func(ioloop, executor))
```
and `test2.py`
```
import pexpect
import sys

child = pexpect.spawn("python /Users/basnijholt/Downloads/cpython/test.py")
try:
    # Matching the error output makes this script exit non-zero, which
    # `git bisect run` treats as a bad commit.
    child.expect(["OSError", "AssertionError"], timeout=1)
    raise Exception
except pexpect.EOF as e:
    sys.exit(0)
```

Then did
```
git checkout master
git reset --hard 9b6c60cbce  # bad commit
git bisect start
git bisect bad
git bisect good ad2c2d380e  # good commit
git bisect run python test2.py
```

I will see if I can fix it.

--




[issue34075] asyncio: We should prohibit setting a ProcessPoolExecutor in with set_default_executor

2019-03-26 Thread Bas Nijholt


Bas Nijholt  added the comment:

I think this issue is related to the problem in 
https://bugs.python.org/issue36281.

If that is indeed the case, then the fix proposed here and implemented in 
https://github.com/python/cpython/commit/22d25085db2590932b3664ca32ab82c08f2eb2db
won't really help.

--
nosy: +basnijholt




[issue36281] OSError: handle is closed for ProcessPoolExecutor and run_in_executor

2019-03-18 Thread Bas Nijholt


Change by Bas Nijholt :


--
type:  -> crash




[issue36281] OSError: handle is closed for ProcessPoolExecutor and run_in_executor

2019-03-13 Thread Bas Nijholt


New submission from Bas Nijholt :

The following code in Python 3.7.1
```
import random
import concurrent.futures
import asyncio

executor = concurrent.futures.ProcessPoolExecutor()
ioloop = asyncio.get_event_loop()

async def func():
    result = await ioloop.run_in_executor(executor, random.random)
    executor.shutdown(wait=False)  # bug doesn't occur when `wait=True`

task = ioloop.create_task(func())
```


prints the following error:
```
Exception in thread QueueManagerThread:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/opt/conda/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/lib/python3.7/concurrent/futures/process.py", line 368, in _queue_management_worker
    thread_wakeup.clear()
  File "/opt/conda/lib/python3.7/concurrent/futures/process.py", line 92, in clear
    while self._reader.poll():
  File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 255, in poll
    self._check_closed()
  File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
```

I think this is related to https://bugs.python.org/issue34073 and 
https://bugs.python.org/issue34075

This happens in the Adaptive package 
https://adaptive.readthedocs.io/en/latest/docs.html#examples and the related 
issue is https://github.com/python-adaptive/adaptive/issues/156
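
For reference, a minimal mitigation on the user side (my own sketch, not from 
this report: it simply avoids the racy `wait=False` shutdown while the executor 
may still have work in flight):

```
import asyncio
import concurrent.futures
import random

async def func(loop, executor):
    return await loop.run_in_executor(executor, random.random)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    executor = concurrent.futures.ProcessPoolExecutor()
    try:
        print(loop.run_until_complete(func(loop, executor)))
    finally:
        executor.shutdown(wait=True)  # wait=True sidesteps the race shown above
```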

--
components: asyncio
messages: 337868
nosy: asvetlov, basnijholt, yselivanov
priority: normal
severity: normal
status: open
title: OSError: handle is closed for ProcessPoolExecutor and run_in_executor
versions: Python 3.7, Python 3.8




[issue13354] tcpserver should document non-threaded long-living connections

2016-02-20 Thread Bas Wijnen

Bas Wijnen added the comment:

Thank you for your fast response as well.

I overlooked that paragraph indeed.  It doesn't mention anything about avoiding 
a socket shutdown however.  Keeping a list of requests isn't very useful if all 
the sockets in the list are closed. ;-)

So I would indeed suggest an addition: I would change this paragraph:

These four classes process requests synchronously; each request must be 
completed before the next request can be started. This isn’t suitable if each 
request takes a long time to complete, because it requires a lot of 
computation, or because it returns a lot of data which the client is slow to 
process. The solution is to create a separate process or thread to handle each 
request; the ForkingMixIn and ThreadingMixIn mix-in classes can be used to 
support asynchronous behaviour.

into:

By default, these four classes process requests synchronously; each request 
must be completed before the next request can be started. This isn’t suitable 
if each request takes a long time to complete, because it requires a lot of 
computation, or because it returns a lot of data which the client is slow to 
process, or because the information that should be sent to the client isn't 
available yet when the request is made. One possible solution is to create a 
separate process or thread to handle each request; the ForkingMixIn and 
ThreadingMixIn mix-in classes can be used to support asynchronous behaviour. 
Another option is to store the socket for later use, and disable the shutting 
down of the socket by overriding process_request with a function that only 
calls finish_request, and not shutdown_request. In that case, the socket must 
be shut down by the program when it is done with it.

At the end of the last paragraph you refer to, it should also be mentioned that 
the program must do something to prevent the socket from being shut down.  In 
the description of process_request, it would probably also be useful to add 
that the default implementation (as opposed to *MixIn) calls shutdown_request() 
after finish_request().

--




[issue13354] tcpserver should document non-threaded long-living connections

2016-02-19 Thread Bas Wijnen

Bas Wijnen added the comment:

For example, I have some programs which are HTTP servers that implement 
WebSockets.  For regular web page requests, it is acceptable if the connection 
is closed after the page is sent.  But for a WebSocket it isn't: the whole 
point of that protocol is to allow the server to send unsolicited messages to 
the browser.  If the connection is closed, it cannot do that.  The 
documentation currently suggests that the only way to avoid handle() closing 
the connection is to not return.  That means that the only way to do other 
things is by using multi-processing or (shiver) multi-threading.

My suggestion is to add a short explanation about how connections can be kept 
open after handle() returns, so that a single threaded event loop can be used.  
Of course the socket must then be manually closed when the program is done with 
it.

If I understand you correctly, overriding process_request would allow handle() 
to return without closing the socket.  That does sound better than overriding 
shutdown_request.

In the current documentation (same link as before, now for version 3.5.1), 
there is no mention at all about close() or shutdown() of the accepted sockets. 
 The only information on the subject is that if you want asynchronous handling 
of data, you must start a new thread or process.  This is not true, and in many 
cases it is not what I would recommend.  I think event loops are much easier to 
maintain and debug than multi-process applications, and infinitely easier than 
multi-threading applications.  I don't mind that other people disagree, and I'm 
not suggesting that those ways of handling should be removed (multi-process has 
its place, and I can't convince everyone that multi-threading is evil).  What 
I'm saying is that the ability to use an event loop to handle asynchronous data 
with this class deserves to be mentioned as well.

Obviously, it does then need to have the explanation about what to override to 
make it work.  My suggestion is simply what I ended up with after seeing it 
fail and reading the code.  If what you describe is the recommended way, please 
say that instead.  My point is that the scenario should be presented as an 
option; I don't have an opinion on what it should say.

--
status: pending -> open




[issue21430] Document ssl.pending()

2014-05-18 Thread Bas Wijnen

Bas Wijnen added the comment:

Alexey: please be more civil.

Antoine: In that case, can you please explain how you would recommend I 
implement my use case, where most of my calls are master-initiated and 
blocking, but some slave-initiated events must be non-blocking?  Should I make 
a lot of calls to sslsocket.setblocking() to switch it on and off all the time? 
 AFAIK that is a system call (or isn't it?); while that won't make any real 
difference in performance in Python, it doesn't feel right to make system calls 
when there's technically no need for it.

Also, as I suggested previously, if you don't document the method, could you 
please add the word "pending" somewhere in the text?  This ensures that people 
looking for documentation of what they see in the source will find this 
explanation.  It may also be good to add a note to the source code that this 
function should not be used.

--




[issue21430] Document ssl.pending()

2014-05-17 Thread Bas Wijnen

Bas Wijnen added the comment:

After trying to use this, I think ssl.pending is a valuable function that 
should be supported and documented.

My use case is a half-synchronous system, where I want most calls to block but 
asynchronous events are possible as well.  Switching the socket between 
blocking and non-blocking all the time is annoying and results in code that is 
harder to read.

With ssl.pending, I can simply repeat the read as long as data is pending.  
Setting my buffer size high might technically work, but that seems too fragile. 
 I don't like using an undocumented function either, but I'm hoping that gets 
fixed. ;-)
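
A rough sketch of that read loop (my own illustration of the pattern, not 
tracker-endorsed advice; the helper name is made up):

def read_available(ssl_sock, bufsize=4096):
    # After select() reports the socket readable, drain both the socket and
    # the SSL object's internal buffer, which select() cannot see.
    chunks = [ssl_sock.recv(bufsize)]
    while ssl_sock.pending():
        chunks.append(ssl_sock.recv(bufsize))
    return b"".join(chunks)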

--




[issue21430] Document ssl.pending()

2014-05-12 Thread Bas Wijnen

Bas Wijnen added the comment:

The documentation about non-blocking is clear enough, thank you for pointing it 
out.

However, I would ask that something be written in there that contains the word 
"pending".  The reason is that I didn't find anything in the documentation, 
looked in the source, found the pending() method and searched the documentation 
to see how it was defined.  If I had found an explanation that I shouldn't be 
using it, even if that wasn't the documentation of the method, I would have 
done the right thing in my program.

--




[issue21430] Document ssl.pending()

2014-05-04 Thread Bas Wijnen

New submission from Bas Wijnen:

In order to use ssl sockets asynchronously, it is important to use the 
pending() method; otherwise the internal buffer will be ignored, and select may 
block waiting for new data while data is already sitting in that buffer.  See 
bug #16976 and 
http://stackoverflow.com/questions/21663705/how-to-use-select-with-python-ssl-socket-buffering

Using pending() works fine, but is entirely undocumented.  
https://docs.python.org/2.7/library/ssl.html (and all other versions) don't 
even mention the existence of the method.  I hope this can be changed; using an 
undocumented feature isn't nice, but in this case there is no other good 
solution.

--
assignee: docs@python
components: Documentation
messages: 217884
nosy: docs@python, shevek
priority: normal
severity: normal
status: open
title: Document ssl.pending()
versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5




OrderedEnum examples

2013-07-30 Thread Bas van der Wulp
Using the enum34 0.9.13 package from PyPi in Python 2.7.3, the examples 
for OrderedEnum seem to be broken.


The example in the package documentation reads:

class OrderedEnum(Enum):
    def __ge__(self, other):
        if self.__class__ is other.__class__:
            return self._value >= other._value
        return NotImplemented
    def __gt__(self, other):
        if self.__class__ is other.__class__:
            return self._value > other._value
        return NotImplemented
    def __le__(self, other):
        if self.__class__ is other.__class__:
            return self._value <= other._value
        return NotImplemented
    def __lt__(self, other):
        if self.__class__ is other.__class__:
            return self._value < other._value
        return NotImplemented

class Grade(OrderedEnum):
    __ordered__ = 'A B C D F'
    A = 5
    B = 4
    C = 3
    D = 2
    F = 1

Grade.C < Grade.A

to which Python replies with:

Traceback (most recent call last):
  File "test.py", line 35, in <module>
    print Grade.C < Grade.A
  File "test.py", line 23, in __lt__
    return self._value < other._value
AttributeError: 'Grade' object has no attribute '_value'

Also, the example in the Python 3.4 library documentation (section 
8.17.2) has the __ordered__ attribute removed (presumably because, in 
contrast to Python 2.x, Python 3 will respect the order of attribute 
definition). This example gives the same AttributeError when using the 
enum34 package in Python 2.7.3. It is the same example, after all.


Replacing each occurrence of self._value with either self._value_ or 
self.value in the examples seems to make them work as expected.
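
For illustration, here is __lt__ with that substitution applied (the other 
three comparison methods change the same way):

def __lt__(self, other):
    if self.__class__ is other.__class__:
        return self.value < other.value
    return NotImplemented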


Are both examples incorrect, or not intended to work in Python 2.x?

--
S. van der Wulp


Re: OrderedEnum examples

2013-07-30 Thread Bas van der Wulp

On 30-7-2013 21:30, Ethan Furman wrote:

On 07/30/2013 11:58 AM, Ian Kelly wrote:

On Tue, Jul 30, 2013 at 12:18 PM, Bas van der Wulp
bas.vdw...@gmail.com wrote:

Replacing each occurrence of self._value with either self._value_ or
self.value in the examples seems to make them work as expected.

Are both examples incorrect, or not intended to work in Python 2.x?


The _value attribute was renamed _value_ in:

http://hg.python.org/cpython/rev/511c4daac102

It looks like the example wasn't updated to match.  You should
probably just use self.value here since the name of the private
attribute is an implementation detail.


In `__new__` it has to be `_value_`, but in the other methods `.value`
works fine.  Updated the 3.4 example with `.value`.

--
~Ethan~


That was quick! Thanks Ethan and Ian.

Regards,
Bas


Re: Prime number generator

2013-07-10 Thread Bas
On Wednesday, July 10, 2013 4:00:59 PM UTC+2, Chris Angelico wrote:
[...]
 So, a few questions. Firstly, is there a stdlib way to find the key
 with the lowest corresponding value? In the above map, it would return
 3, because 18 is the lowest value in the list. I want to do this with
 a single pass over the dictionary. 

In [1]: prime = {2: 20, 3: 18, 5: 20, 7: 21, 11: 22, 13: 26}

In [2]: smallest_key = min(prime.iteritems(), key=lambda k_v: k_v[1])[0]

In [3]: smallest_key
Out[3]: 3

Still trying to figure out your algorithm ...


Re: Prime number generator

2013-07-10 Thread bas
On Wednesday, July 10, 2013 5:12:19 PM UTC+2, Chris Angelico wrote:
 Well, that does answer the question. Unfortunately the use of lambda
 there has a severe performance cost [ ...]
If you care about speed, you might want to check the heapq module. Removing the 
smallest item and inserting a new item in a heap both cost O(log(N)) time, 
while finding the minimum in a dictionary requires iterating over the whole 
dictionary, which cost O(N) time.

(untested)
#before loop
from heapq import *
primes = [(2,2)] #heap of tuples (multiple, prime). start with 1 item, so no 
need for heapify

#during loop
smallest, prm = heappop(primes)
heappush(primes, (smallest+prm, prm))

#when new prime found
heappush(primes, (i+i, i))

  Still trying to figure out your algorithm ...
 It's pretty simple.  [...]
I understand what you are trying, but it is not immediately clear to me that it 
works correctly if, for example, a smallest factor appears twice in the list. I 
don't have time for it now, but it can for sure be simplified. The while loop, 
for example, can be replaced by an if, since it will never execute more than 
once (since if i is a prime > 2, i+1 will be divisible by 2).




Re: baffled classes within a function namespace. Evaluation order.

2013-04-25 Thread Bas
On Thursday, April 25, 2013 11:27:37 PM UTC+2, Alastair Thompson wrote:
 I am completely baffled by the behavior of this code with regards to the 
 evaluation order of namespaces when assigning the class attributes.  Both 
 classes are nested within a function I called whywhywhy.
[snip weird namespace problem]

Hi,

I am not a namespace expert, but I think the following example shows the same 
problem in an easier way without any classes, and gives a more helpful error 
message:

In [1]: a = 123

In [2]: def f():
   ...:     print a
   ...:     b = 456

In [3]: f()
123

In [4]: def g():
   ...:     print a
   ...:     a = 456

In [5]: g()
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
/home/xxx/<ipython-input-5-d65ffd94a45c> in <module>()
----> 1 g()

/home/xxx/<ipython-input-4-c304da696fc2> in g()
      1 def g():
----> 2     print a
      3     a = 456
      4 

UnboundLocalError: local variable 'a' referenced before assignment


My guess is that in f(), the compiler sees no definition of a, so it assumes it 
must come from the outer namespace. In g(), however, it sees on the second line 
that a is used as an assignment target, so it makes the variable a local. When 
it is executed, it rightfully complains that the local variable is not yet 
defined. A smarter compiler might prevent this problem, but then again a 
smarter programmer would not have local and global variables with the same name.

In your example, something similar is probably happening, since you assign 
something to third inside example2, thereby making it 'local'. Since you are 
dealing with classes, the error message is probably not so clear (but whywhywhy 
would you define a class inside a function?) Does that make sense?

HTH,
Bas


real-time monitoring of propriety system: embedding python in C or embedding C in python?

2013-02-05 Thread Bas
Hi Group,

at work, we are thinking to replace some legacy application, which is a 
home-grown scripting language for monitoring and controlling a large 
experiment. It is able to read live data from sensors, do some simple logic and 
calculations, send commands to other subsystems and finally generate some new 
signals. The way it is implemented is that it gets a chunk of 1 second of data 
(thousands of signals at sample rates from 1Hz to several kHz), does some 
simple calculations on selected signals, does some simple logic, sends some 
commands and finally computes some 1Hz output signals, all before the next 
chunk of data arrives. The purpose is mainly to monitor other fast processes 
and adjust things like process gains and set-points, like in a SCADA system. (I 
know about systems like Epics and Tango, but I cannot use those in the near 
future.) It can be considered soft-real time: it is desirable that the 
computation finishes within the next second most of the time, but if the 
deadline is missed
  occasionally, nothing bad should happen. The current system is hard to 
maintain and is limited in capabilities (no advanced math, no sub-functions, 
...).

I hope I don't have to convince you that python would be the perfect language 
to replace such a home-grown scripting language, especially since you than get 
all the power of tools like numpy, logging and interface to databases for free. 
Convincing my colleagues might cost some more effort, so I want to write a 
quick (and dirty?) demonstration project. Since all the functions I have to 
interface with (read and write of live data, sending commands, ...) are 
implemented in C, the solution will require writing both C and python. I have 
to choose between two architectures:

A) Implement the main program in C. In a loop, get a chunk of data using direct 
call of C functions, convert data to python variables and call an embedded 
python interpreter that runs one iteration of the user's algorithm. When the 
script finishes, you read some variables from the interpreter and then call 
some other C-function to write the results.

B) Implement the main loop in python. At the beginning of the loop, you call an 
embedded C function to get new data (using ctypes?), make the result readable 
from python (memoryview?), do the user's calculation and finally call another C 
function to write the result.

Are there any advantages for using one method over the other? Note that I have 
more experience with python than with C.
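
To make option B concrete, a very rough sketch using ctypes (all library and 
function names here — libdaq.so, get_chunk, send_command — are made up for 
illustration and are not an existing API):

import ctypes
import numpy as np

daq = ctypes.CDLL("libdaq.so")                     # hypothetical C library
daq.get_chunk.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
daq.get_chunk.restype = ctypes.c_int

buf = np.zeros(1000, dtype=np.float64)             # one second worth of signals

while True:
    # Blocking call into C that fills the buffer with the next 1-second chunk.
    n = daq.get_chunk(buf.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                      buf.size)
    if n <= 0:
        break
    setpoint = buf[:n].mean()                      # placeholder for the user's logic
    daq.send_command(ctypes.c_double(setpoint))    # write results back via C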

Thanks,
Bas


[issue13354] tcpserver should document non-threaded long-living connections

2011-11-06 Thread Bas Wijnen

New submission from Bas Wijnen wij...@debian.org:

http://docs.python.org/py3k/library/socketserver.html says:

The solution is to create a separate process or thread to handle each request; 
the ForkingMixIn and ThreadingMixIn mix-in classes can be used to support 
asynchronous behaviour.

There is another way, which doesn't bring multi-threading hell over you: keep a 
copy of the file descriptor and use it when events occur. I recall that this 
suggestion used to be in the documentation as well, but I can't find it 
anymore. It would be good to add this suggestion to the documentation.

However, there is a thing you must know before you can use this approach: 
tcpserver calls shutdown() on the socket when handle() returns. This means that 
the network connection is closed. Even dup2() doesn't keep it open (it lets you 
keep a file descriptor, but it returns EOF). The solution for this is to 
override shutdown_request of your handler to only call close_request (or not 
call anything at all) for sockets which must remain open. That way, as long as 
there is a reference to the socket, the network connection will not be shut 
down. Optionally the socket can be shutdown() explicitly when you're done with 
the connection.

Something like the paragraph above would be useful in the documentation IMO.
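
A minimal sketch of that override (my own illustration of the approach; the 
keep_open bookkeeping is made up, not part of the issue):

import socketserver

class KeepAliveServer(socketserver.TCPServer):
    keep_open = set()   # sockets the application still wants to use

    def shutdown_request(self, request):
        # The default implementation calls request.shutdown() and then closes
        # the socket; skipping that for selected requests keeps the connection
        # alive for as long as the program holds a reference to the socket.
        if request not in self.keep_open:
            socketserver.TCPServer.shutdown_request(self, request)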

--
assignee: docs@python
components: Documentation
messages: 147147
nosy: docs@python, shevek
priority: normal
severity: normal
status: open
title: tcpserver should document non-threaded long-living connections
type: feature request
versions: Python 3.2




Re: Too much code - slicing

2010-09-17 Thread Bas
On Sep 17, 10:01 pm, Andreas Waldenburger use...@geekmail.invalid
wrote:
 On Thu, 16 Sep 2010 16:20:33 -0400 AK andrei@gmail.com wrote:

  I also like this construct that works, I think, since 2.6:

  code = dir[int(num):] if side == 'l' else dir[:-1*int(num)]

 I wonder when this construct will finally start to look good.


Using IFs is just plain ugly. Why not go for the much more pythonic

code = (lambda s:dir[slice(*(s*int(num),None)[::s])])(cmp('o',side))

Much easier on the eyes and no code duplication ... ;)

Bas


Re: lambda with floats

2010-04-09 Thread Bas
On Apr 7, 6:15 am, Patrick Maupin pmau...@gmail.com wrote:
 I should stop making a habit of responding to myself, BUT.  This isn't
 quite an acre in square feet.  I just saw the 43xxx and assumed it
 was, and then realized it couldn't be, because it wasn't divisible by
 10.  (I used to measure land with my grandfather with a 66 foot long
 chain, and learned at an early age that an acre was 1 chain by 10
 chains, or 66 * 66 * 10 = 43560 sqft.)
 That's an exact number, and 208 is a poor approximation of its square
 root.

There is no need to remember those numbers for the imperially
challenged people:

In [1]: import scipy.constants as c

In [2]: def acre2sqft(a):
   ...: return a * c.acre / (c.foot * c.foot)
   ...:

In [3]: acre2sqft(1)
Out[3]: 43560.0


Cheers,
Bas


Re: medians for degree measurements

2010-01-25 Thread Bas
On Jan 23, 1:09 am, Steve Howell showel...@yahoo.com wrote:
[snip problem with angle data wrapping around at 360 degrees]

Hi,

This problem is trivial to solve if you can assume that you that your
data points are measured consecutively and that your boat does not
turn by more than 180 degrees between two samples, which seems a
reasonable use case. If you cannot make this assumption, the answer
seems pretty arbitrary to me anyhow. The standard trick in this
situation is to 'unwrap' the data (fix > 180 deg jumps by adding or
subtracting 360 to subsequent points), do your thing and then 'rewrap'
to your desired interval ([0-355] or [-180,179] degrees).

In [1]: from numpy import *

In [2]: def median_degree(degrees):
   ...:     return mod(rad2deg(median(unwrap(deg2rad(degrees)))), 360)
   ...:

In [3]: print(median_degree([1, 2, 3, 4, 5, 6, 359]))
3.0

In [4]: print(median_degree([-179, 174, 175, 176, 177, 178, 179]))
177.0

If the deg2rad and rad2deg bothers you, you should write your own
unwrap function that handles data in degrees.

Hope this helps,
Bas

P.S.
Slightly off-topic rant against both numpy and matlab implementation
of unwrap: They always assume data is in radians. There is some option
to specify the maximum jump size in radians, but to me it would be
more useful to specify the interval of a complete cycle, so that you
can do

unwrapped_radians = unwrap(radians)
unwrapped_degrees = unwrap(degrees, 360)
unwrapped_32bit_counter = unwrap(overflowing_counter, 2**32)


Re: medians for degree measurements

2010-01-25 Thread Bas
 On 2010-01-25 10:16 AM, Bas wrote:

  P.S.
  Slightly off-topic rant against both numpy and matlab implementation
  of unwrap: They always assume data is in radians. There is some option
  to specify the maximum jump size in radians, but to me it would be
  more useful to specify the interval of a complete cycle, so that you
  can do

  unwrapped_radians = unwrap(radians)
  unwrapped_degrees = unwrap(degrees, 360)
  unwrapped_32bit_counter = unwrap(overflowing_counter, 2**32)

On Jan 25, 5:34 pm, Robert Kern robert.k...@gmail.com wrote:
 Rants accompanied with patches are more effective. :-)

As you wish (untested):

def unwrap(p, cycle=2*pi, axis=-1):
    """docstring to be updated"""
    p = asarray(p)
    half_cycle = cycle / 2
    nd = len(p.shape)
    dd = diff(p, axis=axis)
    slice1 = [slice(None, None)]*nd     # full slices
    slice1[axis] = slice(1, None)
    ddmod = mod(dd + half_cycle, cycle) - half_cycle
    _nx.putmask(ddmod, (ddmod == -half_cycle) & (dd > 0), half_cycle)
    ph_correct = ddmod - dd
    _nx.putmask(ph_correct, abs(dd) < half_cycle, 0)
    up = array(p, copy=True, dtype='d')
    up[slice1] = p[slice1] + ph_correct.cumsum(axis)
    return up

I never saw a use case for the discontinuity argument, so in my
preferred version it would be removed. Of course this breaks old code
(by who uses this option anyhow??) and breaks compatibility between
matlab and numpy.

Cheers,
Bas




Re: medians for degree measurements

2010-01-25 Thread Bas
  On 2010-01-25 10:16 AM, Bas wrote:
  P.S.
  Slightly off-topic rant against both numpy and matlab implementation
  of unwrap: They always assume data is in radians. There is some option
  to specify the maximum jump size in radians, but to me it would be
  more useful to specify the interval of a complete cycle, so that you
  can do

[snip]

  I never saw a use case for the discontinuity argument, so in my
  preferred version it would be removed. Of course this breaks old code
  (by who uses this option anyhow??) and breaks compatibility between
  matlab and numpy.

On Jan 25, 6:39 pm, Robert Kern robert.k...@gmail.com wrote:
 Sometimes legitimate features have phase discontinuities greater than pi.

We are dwelling more and more off-topic here, but anyhow: According to
me, the use of unwrap is inherently related to measurement instruments
that wrap around, like rotation encoders, interferometers or up/down
counters. Say you have a real phase step of +1.5 pi, how could you
possibly discern if from a real phase step of -pi/2? This is like an
aliasing problem, so the only real solution would be to increase the
sampling speed of your system. To me, the discontinuity parameter is
serving some hard to explain corner case (see matlab manual), which is
better left to be solved by hand in cases it appears. I regret matlab
ever added the feature.

 If you want your feature to be accepted, please submit a patch that does not 
 break
 backwards compatibility and which updates the docstring and tests 
 appropriately.
 I look forward to seeing the complete patch! Thank you.

I think my 'cycle' argument does have real uses, like the degrees in
this thread and the digital-counter example (which comes from own
experience and required me to write my own unwrap). I'll try to submit
a non-breaking patch if I ever have time.

Bas


Re: getting properly one subprocess output

2009-11-20 Thread Bas
On Nov 18, 12:25 pm, Jean-Michel Pichavant jeanmic...@sequans.com
wrote:
 Hi python fellows,

 I'm currently inspecting my Linux process list, trying to parse it in
 order to get one particular process (and kill it).
 I ran into an annoying issue:
 The stdout display is somehow truncated (maybe a terminal length issue,
 I don't know), breaking my parsing.

Below is the script I use to automatically kill firefox if it is not
behaving, maybe you are looking for something similar.

HTH,
Bas


#!/usr/bin/env python

import os

# List firefox processes, drop the grep line itself, and kill the first match.
lines = os.popen('ps ax|grep firefox').readlines()
lines = [line for line in lines if 'grep' not in line]
print lines[0]
pid = int(lines[0][:5])   # the PID is in the first column of `ps ax` output
print 'Found pid: %d' % pid
os.system('kill -9 %d' % pid)


Re: Need some advice

2008-10-21 Thread Bas
On Oct 21, 5:43 pm, azrael [EMAIL PROTECTED] wrote:
 Need some advice

I advise coming up with a more specific subject line for your posts;
it might give you some more answers.


Re: type-checking support in Python?

2008-10-07 Thread Bas
On Oct 7, 8:36 am, Lawrence D'Oliveiro [EMAIL PROTECTED]
central.gen.new_zealand wrote:
 In message [EMAIL PROTECTED], Gabriel

 Genellina wrote:
  As an example, in the oil industry here in my country there is a mix of
  measurement units in common usage. Depth is measured in meters, but pump
  stroke in inches; loads in lbs but pressures in kg/cm².

 Isn't the right way to handle that to attach dimensions to each number?

What they taught me as a physics undergrad is to always convert random
units given as input to SI units as soon as possible, do all your
calculations internally in SI units and (only if really needed)
convert back to arbitrary units on output. Now, 15 years later, I am
still trying to stick to this rule whenever possible. This convention
allows you to 'forget' about units and save your pre/post-fixes for
the exceptional case:
inch = 0.0254
length_inch = some_user_input_in_inch()
length = length_inch * inch

I have heard about some python package which overloads numbers and
calculations to include units (quick google found unum, not sure if
that is the only one). I guess that unless you are dealing with life-
critical equipment or are using extreme programming, this is overkill
(but I guess python is not really the right language for that anyway,
imagine a garbage collection just when you want to launch your
shuttle).

Bas


Re: Matplotlib Polar Plot Angles/Axes

2008-09-26 Thread Bas
I only have experience with the matlab version of polar, but my wild
guess is that you have to convert your degrees to radians. Go to the
Matplotlib newsgroup if you need any better help.

HTH,
Bas


Re: Matplotlib Polar Plot Angles/Axes

2008-09-26 Thread Bas
On Sep 26, 10:33 pm, afrogazer [EMAIL PROTECTED] wrote:
 rad_angles = [elem*(pi/180) for elem in angles]
You are missing some more on a friday afternoon: angles is created by
arange, so it is a numpy array. In that case you simply can do
rad_angles = pi/180 * angles
No need to use list-comprehensions, that is the whole idea about using
these kick-ass objects!

Bas


Re: matrix algebra

2008-09-24 Thread Bas
On Sep 22, 10:02 am, Al Kabaila [EMAIL PROTECTED] wrote:
 There are several packages for matrix algebra. I tried Numeric, numpy and
 numarray. All three are very good, but each uses different syntax.
That argument might have been valid 3 years ago, but as already said
by others, Numeric and Numarray are deprecated. Numpy should be the
only thing needed for new users. I suggest you investigate a little
bit more the next time you make such efforts, since this fact should
be widely known among the users of the mentioned packages, see e.g.
the huge warning at the numarray page:
http://www.stsci.edu/resources/software_hardware/numarray/numarray.html

 1. Is there any interest in matrix algebra for the masses (I mean interest
 in a wrapper for a subset of functions of the packages with a unified
 simple syntax)?
In my opinion, no. I might be biased, since with my matlab background
I find numpy simple enough as is. But I don't see how A = B*C+D or
E=dot(F,G) is complicated for a beginner of linear algebra.

 My OS is Linux (openSUSE 10.3) and my interest in retirement is Python
 applications to Structural Analysis of  Civil Engineering structures,
 currently in 2 dimensions only (under GPL). Modern Structural Analysis is
 highly matrix oriented, but requires only a few basic matrix operations,
 namely matrix creation, transposition, multiplication, invertion and
 linear equation solution. For stability analysis one would require
 Eigenvalues and Eigenvectors. In 3 dimensions, additionally highly
 desirable would be vector algebra. The packages do have all these
 functions, but currently only the basic functions are in the wrapper.
If you care about contributing something useful to the community, I
think your time and skills are better spent writing some cool
mechanical analysis tool for inclusion in Scipy.

Bas


Re: Profiling, sum-comprehension vs reduce

2008-09-13 Thread Bas
On Sep 13, 10:06 am, cnb [EMAIL PROTECTED] wrote:
 This must be because of implementation right? Shouldn't reduce be
 faster since it iterates once over the list?
 doesnt sum first construct the list then sum it?

No, sum also iterates the sequence just once and doesn't create a
list. It is probably implemented similar to

def sum(sequence, start=0):
    it = iter(sequence)
    total = start
    for i in it:
        total += i
    return total

but then implemented in C for speed. Reduce is probably implemented
pretty similar, but then with a random function instead of addition.
Make sure that you understand the difference between generator
expression and list comprehension, and that [f(x) for x in something]
is (almost) equal to list(f(x) for x in something), so you can emulate
a LC by using the list constructor on the equivalent GE.
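
Spelled out as code, the equivalence mentioned above looks like this:

squares_lc = [x*x for x in range(5)]        # list comprehension builds a list
squares_ge = list(x*x for x in range(5))    # list() over a generator expression
assert squares_lc == squares_ge
total = sum(x*x for x in range(5))          # sum() consumes the generator lazily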

HTH,
Bas


Re: Using SWIG to build C++ extension

2008-07-11 Thread Bas Michielsen
mk wrote:

 Hello,
 
 I'm having terrible problems building C++ extension to Python 2.4 using
 SWIG. I'd appreciate if somebody knowledgeable at the subject took a
 look at it. swig-1.3.29, g++ (GCC) 4.1.1 20070105 (Red Hat 4.1.1-52).
 
 I used following commands to build C++ extension:
 
 # swig -c++ -python edit_distance.i
 # c++ -c edit_distance.c edit_distance_wrap.cxx edit_distance.cpp -I.
 -I/usr/include/python2.4
 
 
 Linux RH (9.156.44.105) root ~/tmp # c++ -c edit_distance.c
 edit_distance_wrap.cxx edit_distance.cpp -I. -I/usr/include/python2.4
 c++: edit_distance.cpp: No such file or directory
 edit_distance_wrap.cxx: In function ‘PyObject*
 _wrap_edit_distance(PyObject*, PyObject*)’:
 edit_distance_wrap.cxx:2579: error: ‘string’ was not declared in this
 scope edit_distance_wrap.cxx:2579: error: ‘arg1’ was not declared in this
 scope edit_distance_wrap.cxx:2580: error: ‘arg2’ was not declared in this
 scope edit_distance_wrap.cxx:2597: error: expected type-specifier before
 ‘string’ edit_distance_wrap.cxx:2597: error: expected `' before ‘string’
 edit_distance_wrap.cxx:2597: error: expected `(' before ‘string’
 edit_distance_wrap.cxx:2597: error: expected primary-expression before
 ‘’ token
 edit_distance_wrap.cxx:2597: error: expected `)' before ‘;’ token
 edit_distance_wrap.cxx:2605: error: expected type-specifier before
 ‘string’ edit_distance_wrap.cxx:2605: error: expected `' before ‘string’
 edit_distance_wrap.cxx:2605: error: expected `(' before ‘string’
 edit_distance_wrap.cxx:2605: error: expected primary-expression before
 ‘’ token
 edit_distance_wrap.cxx:2605: error: expected `)' before ‘;’ token
 
 What's weird is that I _did_ use std:: namespace prefix carefully in the
 code:
 
 #include <string>
 #include <vector>
 #include <iostream>
 
   const unsigned int cost_del = 1;
   const unsigned int cost_ins = 1;
   const unsigned int cost_sub = 1;
 
 
   unsigned int edit_distance( std::string s1, std::string s2 )
   {
  const size_t len1 = s1.length(), len2 = s2.length();
  std::vector<std::vector<unsigned int> > d(len1 + 1,
 std::vector<unsigned int>(len2 + 1));
 
  for(int i = 1; i <= len1; ++i)
  for(int j = 1; j <= len2; ++j)
  d[i][j] = std::min(d[i - 1][j] + 1,
 std::min(d[i][j - 1] + 1, d[i - 1][j - 1] + (s1[i - 1] == s2[j - 1] ? 0
 : 1)));
 
  return d[len1][len2];
 }
 
 Ok, anyway I fixed it in the generated code (edit_distance_wrap.cxx). It
 compiled to .o file fine then. It linked to _edit_distance.so as well:
 
 # c++ -shared edit_distance_wrap.o -o _edit_distance.so
 
 But now I get import error in Python!
 
 Linux RH root ~/tmp # python
 Python 2.4.3 (#1, Dec 11 2006, 11:38:52)
 [GCC 4.1.1 20061130 (Red Hat 4.1.1-43)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import edit_distance
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "edit_distance.py", line 5, in ?
     import _edit_distance
 ImportError: ./_edit_distance.so: undefined symbol: _Z13edit_distanceRSsS_
 
 
 
 What did I do to deserve this? :-)
 
 
 edit_distance.i file just in case:
 
 %module edit_distance
 %{
 #include "edit_distance.h"
 %}
 
 extern unsigned int edit_distance(string s1, string s2);

Hello, 

I took your example files and did the following:
changed the #include "edit_distance.h" to #include "edit_distance.c"
in the edit_distance.i file. 
Then I changed the first few lines of your function definition
unsigned int edit_distance( const char* c1, const char* c2 )
  {
std::string s1( c1), s2( c2);
and also adapted the signature in the edit_distance.i file.
Then
swig -shadow -c++ -python edit_distance.i
g++ -c -fpic  -I/usr/include/python edit_distance_wrap.cxx
g++ -shared edit_distance_wrap.o -o _edit_distance.so

I could import edit_distance without any error messages:
>>> import edit_distance
>>> print edit_distance.edit_distance( "toto", "titi")
2
>>> print edit_distance.edit_distance( "toto", "toti")
1

Perhaps I changed too many things, but this may get you started, 

Regards, 

Bas

Re: Plotting Graphs + Bestfit lines

2008-06-13 Thread Bas
I am not going to reverse engineer your code, but it looks like your
writing your own least-squares fitting algorithm and doing some simple
plots. Also, from your many posts last days, it looks like you are a
newbie struggling with a python interface to gnuplot.

May I suggest that you have a look at numpy/scipy/matplotlib instead?
With those, the things that you are trying to do could be done
trivially in a just a few lines. What you want to do could be done
with something like (untested!):

from pylab import *
xdata = ...  # wherever you get it from
ydata = ...

p = polyfit(xdata, ydata, 1)  # fit 1st order polynomial
plot(xdata, ydata, 'o', xdata, polyval(p, xdata))  # plot original data with fit
show()

Things like fitting algorithms are already included, so you can focus
your energy on the real work. E.g., if you later want to change to a
quadratic fit instead of a linear, you just change 1 to 2 in the
polyfit. As you said in one of your other posts, you need to find the
intersection point of two lines. If poly1 and poly2 are the
polynomials describing the lines, you would get your answer as

x_intersect = roots(poly1 - poly2)
y_intersect = polyval(poly1,x_intersect)
plot(x,polyval(poly1,x),x,polyval(poly2,x),x_intersect,y_intersect,'o')
show()

HTH,
Bas


Re: FM synthesis using Numpy

2007-08-15 Thread Bas
You got your math wrong. What you are calculating is:
sin(2*pi*(1000+15*sin(2*pi*6*t))*t) = sin(2*pi*1000*t +
2*pi*15*sin(2*pi*6*t)*t)
The 'instantaneous frequency' can be calculated by differentiating the
argument of the sine and dividing by 2pi:
x = sin(phi(t))  ->  f_inst = d(phi(t)) / dt / (2*pi)
So in your case:
f_inst = 1000 + 15*sin(2*pi*6*t) + 2*pi*t*6*15*cos(2*pi*6*t)
the last term explains the effect you hear.

What you want is:
f_inst = f0 + df*cos(2*pi*fm*t)
Integrating this and multiplying by 2pi gives the phase:
phi(t) = 2*pi*f0*t + sin(2*pi*fm*t)*df/fm

So you can achieve the frequency modulation by using phase modulation
(these two are related). You can do this with your own code by

phi = oscillator(t, freq=6, amp=15./6)
tone = oscillator(t, freq=1000, amp=0.1, phase=phi)
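
The same idea written out directly with numpy (a self-contained sketch; the 
oscillator() helper above belongs to the original poster's code):

import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1.0/fs)
f0, fm, df = 1000.0, 6.0, 15.0                   # carrier, modulation rate, depth (Hz)
phi = (df/fm) * np.sin(2*np.pi*fm*t)             # phase deviation
tone = 0.1 * np.sin(2*np.pi*f0*t + phi)          # instantaneous freq = f0 + df*cos(2*pi*fm*t)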

cheers,
Bas



Is it possible to merge xrange and slice?

2007-04-30 Thread Bas
Hi,

stupid question, but would it be possible to somehow merge xrange
(which is supposed to replace range in py3k) and slice? Both have very
similar start, stop and step arguments and both are lightweight
objects to indicate a range. But you can't do a[xrange(10,20)] and
'for i in slice(10,20)'. The only difference is see is some behavior
with infinities (e.g. object[3:] gives a slice(3,maxint) inside
_getitem_ , but I think this should not be a large problem
(xrange(0,infinity) could just yield a generator that never stops).

Which problems am I overlooking that prevent this?

Cheers,
Bas



Re: emulate a serial port in windows (create a virtual 'com' port)

2007-01-31 Thread Bas-i
On Jan 30, 7:34 am, Pom [EMAIL PROTECTED] wrote:

 how can I emulate a serial port in windows?

Google for ComEmulDrv3

This may do what you want.



Re: ratfun-2.3 Polynomials and Rational Functions

2006-08-19 Thread Bas
Are there any differences between this module and the one already
present in numpy?

http://www.scipy.org/doc/numpy_api_docs/numpy.lib.polynomial.html

Cheers,
Bas



Re: Math package

2006-07-29 Thread Bas
I think you need one of these:

http://www-03.ibm.com/servers/deepcomputing/bluegene.html

Don't know if it runs python. If that doesn't work try to reformulate
your problem and have a look at

http://scipy.org/

Cheers,
Bas

[EMAIL PROTECTED] wrote:
 I want to write a program which would have a 2 dimensional array of 1
 billion by 1 billion. This is for computational purposes and evaluating
 a mathematical concept similar to Erdos number.

 Which is the best package for such programs (that would be fast
 enough). 
 
 Every help is appreciated.
 
 Thanks



Re: returning index of minimum in a list of lists

2006-06-21 Thread Bas
[EMAIL PROTECTED] wrote:
 Thanks so much for your help.  I was wondering if there was anything
 even simpler, but this will be great.

>>> from numpy import *
>>> a = array([[3,3,3,3], [3,3,3,1], [3,3,3,3]])
>>> where(a==a.min())
(array([1]), array([3]))

Probably overkill for your simple problem, but this is a nice
alternative if you do a lot of matrix work.

Bas



Re: comparing two arrays

2006-06-19 Thread Bas
You are comparing a normal python list to a constant, which are
obviously unequal. Try converting your lists to arrays first
(untested):

import numpy as N   # or the older Numeric
a = N.array([0,1,2,5,6,6])
b = N.array([5,4,1,6,4,6])
print N.logical_and(a==6, b==6)            # element-wise; plain `and` won't work on arrays
print N.where(N.logical_and(a==6, b==6))

hth,
Bas



Sheldon wrote:
 Hi,

 I have two arrays that are identical and contain 1s and zeros. Only the
 ones are valid and I need to know where both arrays have ones in the
 same position. I thought logical_and would work but this example proves
 otherwise:
 >>> a = [0,1,2,5,6,6]
 >>> b = [5,4,1,6,4,6]
 >>> Numeric.logical_and(a==6,b==6)
 0
 >>> Numeric.where(a==b,1,0)
 0
 >>> Numeric.where(a==6 and b==6,1,0)
 0

 The where() statement is also worhtless here. Does anyone have any
 suggestion on how to do this?
 
 Thanks in advance,
 Sheldon



Re: a good explanation

2006-05-25 Thread Bas
I guess that your version is faster, although the difference would be
negligible in this small example. The for loop is probably optimized
for speed under the hood (it is written in C), while the 'while' loop
is performed in python, which is much slower.

Much more important than the speed difference is the clarity: your
version is the accepted practice in the python world, so people will
understand it immediately. It also saves two lines of code. And most of
all, it prevents you from making mistakes: your friend, for example,
has forgotten to increase cnt, so he created an infinite loop!
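
To illustrate (the original snippets are not quoted here, so the names
below are made up):

items = ['a', 'b', 'c']

# while version: you have to manage the counter yourself
cnt = 0
while cnt < len(items):
    print(items[cnt])
    cnt += 1            # forget this line and the loop never ends

# for version: the iteration is handled for you
for item in items:
    print(item)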

Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


List of all syntactic sugar?

2006-04-14 Thread Bas
Hi group,

just out of curiosity, is there a list of all the syntactic sugar that
is used in python? If there isn't such a list, could it be put on a
wiki somewhere? The bit of sugar that I do know have helped me a lot in
understanding the inner workings of python.

To give a few examples (might not be totally correct):

x[i] -> x.__getitem__(i)
x[a:b] -> x.__getitem__(slice(a,b,None))
x+y -> x.__add__(y)
x.method(a) -> type(x).__dict__['method'](x, a) ??
for i in x: f(i) -> it = iter(x), then repeatedly i = it.next(); f(i)
until a StopIteration exception ends the loop
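
As a quick check of the first two (a minimal, hypothetical session;
the numbers are chosen arbitrarily):

x = list(range(10))
print(x[3])                              # 3
print(x.__getitem__(3))                  # same thing
print(x[2:5])                            # [2, 3, 4]
print(x.__getitem__(slice(2, 5, None)))  # same thing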

TIA,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List of all syntactic sugar?

2006-04-14 Thread Bas
That kind of depends on what you mean by syntactic sugar.

Maybe I was misusing the term syntactic sugar, but what I intended to
say was: all the possible 'transformations' that can be made to reduce
the 'advanced' syntax to some sort of minimal core of the language.

Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Matplotlib logarithmic scatter plot

2006-02-27 Thread Bas
Try this; I don't know if it works for all versions:

from pylab import *
x=10**linspace(0,5,100)
y=1/(1+x/1000)
loglog(x,y)
show()

If you only need a logarithm along one axis you can use semilogx() or
semilogy(). For more detailed questions go to the matplotlib mailing
list.

Cheers,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: algorithm, optimization, or other problem?

2006-02-21 Thread Bas
Hi,

as the others have already said, move all the invariants out of the
loop. It is also not useful to randomly draw a vector out of random
array. Some other small things:

for i in range(len(x)): do something with x[i]
should probably be replaced by
for xi in x: do something with xi
This saves an index operation on every iteration; I don't know whether
it works the same way for a Numeric array.

You use the y**2 twice. A good compiler might notice this, but Python
is interpreted...

Use += instead of + for w: the calculation can then be done in place,
which might save the creation of a new array/variable.

I am not sure what you are doing with x, but it seems that you are
transposing it a few times too many. Maybe you can declare x as the
transpose of what you have now, thereby saving the transpose in the loop?

so my guess (not tested):

# assuming numpy-style imports; adjust for Numeric if that is what you use
from numpy import dot
from numpy.random import random

x = random((1000, 100))      # 1000 input vectors, declared differently (transposed)
for xx in x:                 # w, th, ETA and INV_TAU as in your own code
    y = dot(xx, w)
    y2 = y * y
    w += ETA * (y * xx - y2 * w)
    th += INV_TAU * (y2 - th)


Hope it helps,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: a numarray question

2006-02-15 Thread Bas
I believe it is something like

a[a==0] = 5
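
Untested, but a minimal example of what I mean (written with numpy
names; numarray should be analogous):

import numpy as N
a = N.array([0, 1, 0, 2])
a[a == 0] = 5
print(a)        # -> [5 1 5 2]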

Note that numarray will eventually be replaced by SciPy/NumPy at some
point, but that wouldn't change the basic stuff.

Cheers,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Numeric and matlab

2006-02-06 Thread Bas
Hi,

I am also considering a switch from Matlab to NumPy/SciPy at some
point.

Note that in the latest version of Matlab (7?) you don't have to use
'find' any more, but can now use 'conditional arrays' as an index, so
instead of
  idx = find(a > 5);
  a(idx) = 6;
you can do:
  cond = a > 5;
  a(cond) = 6;
or even shorter
  a(a > 5) = 6;

Does someone know if the same trick is possible in NumPy?
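
A minimal sketch of what I would hope works in NumPy (untested by me):

import numpy as N
a = N.array([1, 7, 3, 9])
a[a > 5] = 6            # boolean mask as an index, like Matlab's a(a>5)
print(a)                # -> [1 6 3 6]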

Cheers,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Arithmetic sequences in Python

2006-01-16 Thread Bas
I like the use of the colon as in the PEP better: it is consistant with
the slice notation and also with the colon operator in Matlab.

I like the general idea and I would probably use it a lot if available,
but the functionality is already there with range and irange.

Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sudoku solver: reduction + brute force

2006-01-14 Thread Bas
There is more in this thread:

http://groups.google.com/group/comp.lang.python/browse_frm/thread/479c1dc768f740a3/9252dab14e8ecabb?q=sudokurnum=2#9252dab14e8ecabb

Enjoy,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: important for me!!

2006-01-02 Thread Bas
Read this first:
http://www.catb.org/~esr/faqs/smart-questions.html
and then try again.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Vector math library

2005-12-31 Thread Bas
I am not a regular user of the libraries that you mention, but I played
around with some of them because I need a replacement for Matlab.

Numeric, NumArray and SciPy should be more or less compatible. All the
functions you mention should be in there, or otherwise should be
trivial to implement. Have a look at the functions cross(), dot(),
inner(), outer(). Addition is just a+b.
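
A minimal sketch of the kind of calls I mean (numpy/SciPy naming;
untested against the older libraries):

import numpy as N
a = N.array([1.0, 2.0, 3.0])
b = N.array([4.0, 5.0, 6.0])
print(a + b)          # element-wise addition
print(N.dot(a, b))    # dot product
print(N.cross(a, b))  # cross product
print(N.inner(a, b))  # same as dot() for 1-d arrays
print(N.outer(a, b))  # 3x3 outer product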

As far as I know Numeric was the original vector lib. NumArray was
written as a successor but ended up as a fork due to some speed
concerns. Scipy is the latest and tries to unite the previous two by
implementing the best of both worlds. For future work you should stick
to SciPy. Right now it is probably somewhere in a beta stage, but
expect a final version in half a year or so. Hopefully it ends up being
THE vector lib for python to avoid confusing beginners like you.

Cheers,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Accessing next/prev element while for looping

2005-12-18 Thread Bas
Just make a custom generator function:

def prevcurnext(seq):
    it = iter(seq)
    prev = it.next()
    cur = it.next()
    for next in it:
        yield (prev, cur, next)
        prev, cur = cur, next


for (a, b, c) in prevcurnext(range(10)):
    print a, b, c


0 1 2
1 2 3
2 3 4
3 4 5
4 5 6
5 6 7
6 7 8
7 8 9

Cheers,
Bas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Brute force sudoku cracker

2005-09-17 Thread Bas
 def all(seq, pred=bool):

What's this? What is bool?

That came straight out of the manual for itertools:
http://docs.python.org/lib/itertools-recipes.html

-- 
http://mail.python.org/mailman/listinfo/python-list


Brute force sudoku cracker

2005-09-16 Thread Bas
Hi group,

I came across some of these online sudoku games and thought after
playing a game or two that I'd better waste my time writing a solver
than play the game itself any longer. I managed to write a pretty dumb
brute force solver that can at least solve the easy cases pretty fast.

It basically works by listing all 9 possible numbers for all 81 fields
and keeps on striking out possibilities until it is done.

-any ideas how to easily incorporate advanced solving strategies?
solve(problem1) and solve(problem2) give solutions, but solve(problem3)
gets stuck...

-any improvements possible for the current code? I haven't played a lot
with Python yet, so I probably missed some typical Python tricks, like
converting for-loops to list comprehensions.

TIA,
Bas

***

from itertools import ifilterfalse

problem1 = [' 63   7  ',
'   69   8',
'97  2',
'  2 1  8 ',
' 5 8 6 9 ',
' 9  7 2  ',
'6  13',
'7   45   ',
'  9   14 ']

problem2 = [' 3   9  7',
' 1  8',
'   1   9 ',
'  49 5  6',
' 2 1 ',
'5  6 74  ',
' 5   1   ',
'4  2 ',
'7  5   3 ']

problem3 = [' 3 5  81 ',
'   76  9 ',
'4',
' 439 5  6',
' 1 7 ',
'6  8 193 ',
'9',
' 9  86   ',
' 61  2 8 ']

#define horizontal lines, vertical lines and 3x3 blocks
groups = [range(9*i, 9*i+9) for i in range(9)] + \
         [range(i, 81, 9) for i in range(9)] + \
         [range(0+27*i+3*j, 3+27*i+3*j) +
          range(9+27*i+3*j, 12+27*i+3*j) +
          range(18+27*i+3*j, 21+27*i+3*j)
          for i in range(3) for j in range(3)]

def display(fields):
    for i in range(9):
        line = ''
        for j in range(9):
            if len(fields[9*i+j]) == 1:
                line += ' ' + str(fields[9*i+j][0])
            else:
                line += '  '
        print line


def all(seq, pred=bool):
    """Returns True if pred(x) is True for every element in the iterable"""
    for elem in ifilterfalse(pred, seq):
        return False
    return True

def product(seq):
    prod = 1
    for item in seq:
        prod *= item
    return prod

def solve(problem):
    # fill all 81 fields with all possibilities
    fields = [range(1,10) for i in range(81)]
    for i,line in enumerate(problem):
        for j,letter in enumerate(line):
            if letter != ' ':
                fields[9*i+j] = [int(letter)]  # seed with numbers from problem
    oldpos = 0
    while True:
        pos = product(len(field) for field in fields)
        if pos == oldpos:  # no new possibilities eliminated, so stop
            break
        display(fields)
        print pos, 'possibilities'
        oldpos = pos
        for group in groups:
            for index in group:
                field = fields[index]
                if len(field) == 1:
                    # only one number possible for this field:
                    # remove it from the other fields in the group
                    for ind in group:
                        if ind != index:
                            try:
                                fields[ind].remove(field[0])
                            except ValueError:
                                pass
                else:
                    # check if the field contains a number that does not
                    # exist anywhere else in the group
                    for f in field:
                        if all(f not in fields[ind]
                               for ind in group if ind != index):
                            fields[index] = [f]
                            break

-- 
http://mail.python.org/mailman/listinfo/python-list


Obj.'s writing self-regeneration script ?

2005-07-08 Thread Bas Michielsen
Hello,

Is there a good/standard way of having (composite)
objects write a Python script which will regenerate
the very same object?

This problem arises when I construct, for example,
a ComputationalProblem object, possibly through
an object editor GUI, importing data structures
from external geometric and material modelers etc.
Once the object has been constructed, one wants to
write it to a file on disk, for example to do the
computations later on.

In order to help users, familiar with the
(in)famous input-file monolithic-code output-file
sequence I would like to have this diskfile take
the form of recognisable and editable Python code
(instead of a dump solution with Pickle for
example).

I think there are problems with uniqueness and
ordering of the component instantiations.
I was thinking of something like a depth-first
recursive write_script() over the object's
attributes, using each attribute's __class__ to
construct generic names for the instantiations.
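
Roughly along these lines (a very rough, untested sketch; it assumes
every object can be rebuilt by calling its class with keyword arguments
taken from its instance attributes, which will of course not hold in
general):

def write_script(obj, out, seen=None):
    # Depth-first: emit the attributes' constructor calls first, then
    # the object's own, reusing names for objects already written.
    if seen is None:
        seen = {}
    if id(obj) in seen:
        return seen[id(obj)]
    parts = []
    for attr, value in vars(obj).items():
        if hasattr(value, '__dict__'):   # composite attribute: recurse
            parts.append('%s=%s' % (attr, write_script(value, out, seen)))
        else:                            # leaf value: rely on repr()
            parts.append('%s=%r' % (attr, value))
    name = '%s_%d' % (obj.__class__.__name__.lower(), len(seen))
    seen[id(obj)] = name
    out.write('%s = %s(%s)\n' % (name, obj.__class__.__name__, ', '.join(parts)))
    return name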

Has anyone given this a thought already?

Thank you in advance for any remarks,

-- 
Bas Michielsen
ONERA, Electromagnetics and Radar Department
2, avenue Edouard Belin, 31055 TOULOUSE cedex, France
Tel. (++33)(0)5 62 25 26 77
Fax. (++33)(0)5 62 25 25 77

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Obj.'s writing self-regeneration script ?

2005-07-08 Thread Bas Michielsen
Jerome Alet wrote:
 Hi,
 
 On Fri, 08 Jul 2005 15:16:21 +0200, Bas Michielsen wrote:
 
 
Is there a good/standard way of having (composite)
objects write a Python script which will regenerate
the very same object ?
 
 
 I've done something like this for the ReportLab toolkit.
 
 Last time I checked, this was still part of the project under
 the name pycanvas. You use a pycanvas.Canvas() instance
 just like you would use a canvas.Canvas() instance, but you can decide to
 regenerate an equivalent Python source program to your original program
 when rendering.
 
 The docstring explains how to use it. Also
 reportlab/test/test_pdfgen_pycanvas.py shows if it works or not.
 
 NB : this is not generic code, but maybe this can help you.
 
 bye
 
 Jerome Alet

Thank you very much, I will have a look at it.

Bas


-- 
Bas Michielsen
ONERA, Electromagnetics and Radar Department
2, avenue Edouard Belin, 31055 TOULOUSE cedex, France
Tel. (++33)(0)5 62 25 26 77
Fax. (++33)(0)5 62 25 25 77

-- 
http://mail.python.org/mailman/listinfo/python-list


pytone / _bsddb

2005-03-03 Thread Bas van Gils
Hi all,

I've been using the `pytone' tool for playing my mp3's for a while. Great
tool. However, after upgrading Python to version 2.4 it stopped working. The
traceback that I get is this:

-snip-
Traceback (most recent call last):  
  File src/pytone.py, line 104, in ?  
songdbid = songdbmanager.addsongdb(songdbname,  
+config.database[songdbname])   
  File /home/basvg/bin/PyTone-2.2.1/src/services/songdb.py, line 147, in  
+addsongdb  
import songdbs.local
  File /home/basvg/bin/PyTone-2.2.1/src/services/songdbs/local.py, line   
+24, in ?   
import bsddb.dbshelve   
  File /usr/lib/python2.4/bsddb/__init__.py, line 47, in ?
import _bsddb   
ImportError: No module named _bsddb 
-/snip-

The mentioned line in __init__.py is:

-snip-
import _bsddb  
-/snip-

I had a look around in /usr/lib/python2.4 but indeed I couldn't find a
_bsddb module. Can anyone help me get my favorite py-tool working again ;-)?

Bas


-- 
[EMAIL PROTECTED] - GPG Key ID: 2768A493  -  http://www.cs.ru.nl/~basvg
Radboud University Nijmegen Institute for Computing and Information Sciences

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pytone / _bsddb

2005-03-03 Thread Bas van Gils
On 2005-03-03, Bas van Gils [EMAIL PROTECTED] wrote:
 Hi all,

 I've been using the `pytone' tool for playing my mp3's for a while. Great
 tool. However, after upgrading Python to version 2.4 it stopped working. The
 traceback that I get is this:
[ ... ]

Great, I found it myself. The problem was that my installed version of
Berkeley DB was `too new'; apparently Python didn't like it. After
downgrading it and rebuilding Python, everything worked as expected. Yay!

Bas


-- 
[EMAIL PROTECTED] - GPG Key ID: 2768A493  -  http://www.cs.ru.nl/~basvg
Radboud University Nijmegen Institute for Computing and Information Sciences

-- 
http://mail.python.org/mailman/listinfo/python-list