[issue46379] itertools.product reference implementation creates temporaries

2022-01-14 Thread Markus Wallerberger


Markus Wallerberger  added the comment:

> To a person well versed in recursion and in generator chains it makes sense 
> but not so much for anyone else.

There I pretty much fundamentally disagree.  I find the version in the docs 
much more magical in the sense that it builds up "laterally", i.e., 
level-by-level, rather than element-by-element.

Also, I think from a functional programming perspective, which, let's face it, 
is what these iteration/generator tools are really modelling, a recursive 
version is much more natural.  It also generalizes nicely to other problems 
which people may be having -- so it has the added benefit of explaining the 
code and teaching people useful patterns.

Take itertools.permutations as an example: written as it is in the reference 
implementation, the code is IMHO pretty opaque and hard to reason about.  Write 
it in a recursive style and both its workings and its correctness are 
immediately obvious.
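To illustrate (my own sketch, not the actual reference implementation, and ignoring the optional r-length parameter): a permutation is any element of the pool followed by a permutation of the remaining elements.

```python
from itertools import permutations as itertools_permutations

def permutations(pool):
    """Recursive sketch: pick each element as the head, then recurse
    on what is left over."""
    pool = tuple(pool)
    if not pool:
        yield ()
        return
    for i, head in enumerate(pool):
        for tail in permutations(pool[:i] + pool[i + 1:]):
            yield (head,) + tail

# Same elements, in the same order as the stdlib version.
assert list(permutations("abc")) == list(itertools_permutations("abc"))
```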

>  Plus it is hard to step through by hand to see what it is doing.

This I agree with.

Anyway, thanks for taking the time to explain the rejection.

--

___
Python tracker 
<https://bugs.python.org/issue46379>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46379] itertools.product reference implementation creates temporaries

2022-01-14 Thread Markus Wallerberger


Change by Markus Wallerberger :


--
keywords: +patch
pull_requests: +28804
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30605

___
Python tracker 
<https://bugs.python.org/issue46379>
___



[issue46379] itertools.product reference implementation creates temporaries

2022-01-14 Thread Markus Wallerberger


New submission from Markus Wallerberger :

The reference implementation of itertools.product creates large temporaries, 
which we need to remind people of at the top of the code block.

However, using generator magic, we don't need to do this and can even simplify 
the code in the process!  Basically, we iterate over a generator of 
product(*seq[:-1]) and extend each of its values by every value in seq[-1].
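A minimal sketch of that recursion (my own illustration of the idea, not necessarily the exact code proposed in the patch):

```python
from itertools import product as itertools_product

def product(*seqs):
    # Base case: the product of zero sequences is a single empty tuple.
    if not seqs:
        yield ()
        return
    # Recurse on all but the last sequence; extend each resulting tuple
    # by every value of the last one. No large temporary lists are built.
    rest, last = seqs[:-1], tuple(seqs[-1])
    for prefix in product(*rest):
        for x in last:
            yield prefix + (x,)

assert list(product([0, 1], "ab")) == list(itertools_product([0, 1], "ab"))
```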

--
assignee: docs@python
components: Documentation
messages: 410573
nosy: docs@python, mwallerb
priority: normal
severity: normal
status: open
title: itertools.product reference implementation creates temporaries
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46379>
___



[issue14965] super() and property inheritance behavior

2021-12-06 Thread Markus Kitsinger (he/him/his)


Change by Markus Kitsinger (he/him/his) :


--
nosy: +SwooshyCueb
nosy_count: 23.0 -> 24.0
pull_requests: +28175
pull_request: https://github.com/python/cpython/pull/29950

___
Python tracker 
<https://bugs.python.org/issue14965>
___



[issue42037] Documentation confusion in CookieJar functions

2020-11-02 Thread Markus Israelsson


Markus Israelsson  added the comment:

I am currently updating the documentation source code.
On the cookiejar page, 'unverifiable' is described as a method.
However, I cannot find that method on the request page, because it seems to be 
just a normal attribute.

I will make updates for that as well, if that is OK with you.

--

___
Python tracker 
<https://bugs.python.org/issue42037>
___



[issue42037] Documentation confusion in CookieJar functions

2020-10-21 Thread Markus Israelsson


Markus Israelsson  added the comment:

I got an OK from the higher-ups.
I will plan this into the next sprint, so it will take a week or two before I get to it.

--

___
Python tracker 
<https://bugs.python.org/issue42037>
___



[issue42037] Documentation confusion in CookieJar functions

2020-10-21 Thread Markus Israelsson


Markus Israelsson  added the comment:

I guess, due to something having to be signed, I would have to create a personal 
GitHub account :/

--

___
Python tracker 
<https://bugs.python.org/issue42037>
___



[issue42037] Documentation confusion in CookieJar functions

2020-10-21 Thread Markus Israelsson


Markus Israelsson  added the comment:

Sure.

But I will need to get an OK from my company to spend some time on this, because 
I really am not very used to Git yet (we recently switched).

Also, is it possible to make the request/changes through the company GitHub 
account, or must it in that case be tied directly to this bug-report account?

--

___
Python tracker 
<https://bugs.python.org/issue42037>
___



[issue42037] Documentation confusion in CookieJar functions

2020-10-19 Thread Markus Israelsson


Markus Israelsson  added the comment:

The way I read the documentation for add_cookie_header is:

These methods must exist in the Request object:
- get_full_url()
- get_host()
- get_type()
- unverifiable... and so on.


The documentation for the request objects claims however that:
These methods are removed since version 3.4:
- add_data
- has_data
- get_data
- get_type
- get_host (this method, and some others, are listed as requirements for the 
add_cookie_header and extract_cookies functions; see the list above, or the 
link I posted above to the correct places in the docs)


So it is only the documentation that is inconsistent. Not the code.
Unless, like I said, I misunderstand something in the documentation.

--

___
Python tracker 
<https://bugs.python.org/issue42037>
___



[issue42037] Documentation confusion in CookieJar functions

2020-10-14 Thread Markus Israelsson


New submission from Markus Israelsson :

The documentation in 
https://docs.python.org/3.8/library/http.cookiejar.html#http.cookiejar.CookieJar
claims the following for functions add_cookie_header and extract_cookies.

***
The request object (usually a urllib.request.Request instance) must support the 
methods get_full_url(), get_host(), get_type(), unverifiable(), has_header(), 
get_header(), header_items(), add_unredirected_header() and origin_req_host 
attribute as documented by urllib.request.
***


When reading the documentation for Request Objects 
https://docs.python.org/3.8/library/urllib.request.html?highlight=requests#request-objects
there is this:
***
Changed in version 3.4: The request methods add_data, has_data, get_data, 
get_type, get_host, get_selector, get_origin_req_host and is_unverifiable that 
were deprecated since 3.3 have been removed.
***

So basically, the documentation claims that if the request object does not 
support functions that have been removed, then the headers will not be added. 
The code itself seems to do the correct thing, however, and adds the header.
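The discrepancy is easy to check against a current interpreter; a quick sanity-check sketch of my own:

```python
from urllib.request import Request

req = Request("http://example.com/")

# The methods removed in 3.4 really are gone from the Request object:
assert not hasattr(req, "get_host")
assert not hasattr(req, "get_type")

# The same information is available as plain attributes, plus the
# methods that still exist:
assert req.host == "example.com"
assert req.type == "http"
assert req.unverifiable is False          # an attribute, not a method
assert req.get_full_url() == "http://example.com/"
```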

--
assignee: docs@python
components: Documentation
messages: 378624
nosy: docs@python, markus
priority: normal
severity: normal
status: open
title: Documentation confusion in CookieJar functions
type: enhancement
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue42037>
___



[issue40327] list(sys.modules.items()) can throw RuntimeError: dictionary changed size during iteration

2020-04-19 Thread Markus Mohrhard


New submission from Markus Mohrhard :

We have hit an issue in the pickle module where the code throws an exception in 
a threaded environment:

The interesting piece of the backtrace is:

  File "/xxx/1004060/lib/python3.7/site-packages/numpy/core/__init__.py", line 
130, in _ufunc_reduce
return _ufunc_reconstruct, (whichmodule(func, name), name)
  File "/xxx/lib/python3.7/pickle.py", line 309, in whichmodule
for module_name, module in list(sys.modules.items()):
RuntimeError: dictionary changed size during iteration

I tried to find a code path that would explain how the dict could be changed 
while the list is created but have not been able to find a code path that 
releases the GIL.

The executable is using many threads with imports happening in random threads 
and a custom class loader but we already make sure that the class loader is 
always holding the GIL.

The issue happens quite rarely (maybe once in every thousand executions), so I 
don't have a reproducer right now.
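For what it's worth, a common workaround (and, if I read the eventual fix correctly, roughly what was adopted) is to snapshot the dict before iterating:

```python
import sys

# dict.copy() takes the snapshot in one step under the GIL, so a
# concurrent import in another thread cannot resize the dict we
# subsequently iterate over.
snapshot = sys.modules.copy()
found = None
for name, module in snapshot.items():
    if module is sys:
        found = name
assert found == "sys"
```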

--
components: Extension Modules
messages: 366762
nosy: Markus Mohrhard
priority: normal
severity: normal
status: open
title: list(sys.modules.items()) can throw RuntimeError: dictionary changed 
size during iteration
type: behavior
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue40327>
___



[issue31539] asyncio.sleep may sleep less time then it should

2020-02-25 Thread Markus Roth


Markus Roth  added the comment:

When the fine tuning options for install-directories are set, the default 
directories "lib", "bin" and "include" are still created with essential 
content. Even if options like --libdir are set.

--
nosy: +CatorCanulis

___
Python tracker 
<https://bugs.python.org/issue31539>
___



[issue39609] Set the thread_name_prefix for asyncio's default executor ThreadPoolExecutor

2020-02-17 Thread Markus Mohrhard


Markus Mohrhard  added the comment:

We have by now changed to a custom executor. Asyncio is used in some of our 
dependencies and therefore it took some work to figure out what is creating the 
thousands of threads that we were seeing.

This patch was part of the debugging, and we thought it would be useful for 
anyone else to immediately see what is creating the threads.

--

___
Python tracker 
<https://bugs.python.org/issue39609>
___



[issue39609] Set the thread_name_prefix for asyncio's default executor ThreadPoolExecutor

2020-02-11 Thread Markus Mohrhard


Change by Markus Mohrhard :


--
keywords: +patch
pull_requests: +17832
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/18458

___
Python tracker 
<https://bugs.python.org/issue39609>
___



[issue39609] Set the thread_name_prefix for asyncio's default executor ThreadPoolExecutor

2020-02-11 Thread Markus Mohrhard


New submission from Markus Mohrhard :

The ThreadPoolExecutor in BaseEventLoop.run_in_executor should set a 
thread_name_prefix to simplify debugging.

It might also be worth limiting the maximum number of threads. On our 256-core 
machines we sometimes get 1000+ threads due to the cpu_count() * 5 default 
limit.
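Until the default changes, one can install a named, bounded executor explicitly; a sketch (the prefix and pool size below are arbitrary choices of mine):

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

async def main():
    loop = asyncio.get_running_loop()
    # Replace the anonymous default executor with a named, bounded one.
    loop.set_default_executor(
        ThreadPoolExecutor(max_workers=8, thread_name_prefix="asyncio")
    )
    # Passing None uses the default executor we just installed.
    return await loop.run_in_executor(
        None, lambda: threading.current_thread().name
    )

name = asyncio.run(main())
assert name.startswith("asyncio")
```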

--
components: asyncio
messages: 361799
nosy: Markus Mohrhard, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: Set the thread_name_prefix for asyncio's default executor 
ThreadPoolExecutor
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39609>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37502] Fix default argument handling for buffers argument in pickle.loads

2019-07-04 Thread Markus Mohrhard


Markus Mohrhard  added the comment:

Sorry, I somehow managed to overwrite my title before submitting with the one 
from one of the results of my searches.

--

___
Python tracker 
<https://bugs.python.org/issue37502>
___



[issue37502] Fix default argument handling for buffers argument in pickle.loads

2019-07-04 Thread Markus Mohrhard


Change by Markus Mohrhard :


--
title: Pure Python pickle module should not depend on _pickle.PickleBuffer -> 
Fix default argument handling for buffers argument in pickle.loads

___
Python tracker 
<https://bugs.python.org/issue37502>
___



[issue37502] Pure Python pickle module should not depend on _pickle.PickleBuffer

2019-07-04 Thread Markus Mohrhard


Change by Markus Mohrhard :


--
keywords: +patch
pull_requests: +14409
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/14593

___
Python tracker 
<https://bugs.python.org/issue37502>
___



[issue37502] Pure Python pickle module should not depend on _pickle.PickleBuffer

2019-07-04 Thread Markus Mohrhard


New submission from Markus Mohrhard :

The following piece of code

import pickle
pickle.loads(pickle.dumps(1, protocol=pickle.HIGHEST_PROTOCOL), buffers=None)

fails with "TypeError: 'NoneType' object is not iterable"

The corresponding PEP (https://www.python.org/dev/peps/pep-0574/) specifies 
that buffers=None is the default, but the C implementation does not check for 
Py_None.

The PR contains a test for this case that fails without the fix.
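The failing call, for reference; on an interpreter with the fix applied, both lines below should behave identically:

```python
import pickle

payload = pickle.dumps(1, protocol=pickle.HIGHEST_PROTOCOL)

# PEP 574: buffers=None must be equivalent to omitting the argument.
assert pickle.loads(payload, buffers=None) == 1
assert pickle.loads(payload) == 1
```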

--
components: Library (Lib)
messages: 347299
nosy: Markus Mohrhard
priority: normal
severity: normal
status: open
title: Pure Python pickle module should not depend on _pickle.PickleBuffer
versions: Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37502>
___



Re: Checking refusal of a network connection

2019-06-03 Thread Markus Elfring
> How would this conversion take place?  Localhost is 127.0.0.1.
> Localhost6 is ::1.  They are different

My configuration file “/etc/hosts” provides the following information
as usual.

“…
::1 localhost ipv6-localhost ipv6-loopback
…”


> and you cannot route between the two.

I got other expectations for the corresponding software behaviour.


> What I can see is that your server binds to localhost6 and your client
> is trying to connect to localhost.

I am curious to clarify the circumstances further if such a combination
can also work finally.

If my software test client would pass the IPv6 address family for a connection,
both processes would use the same network protocol version.

Regards,
Markus
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Checking support for network connections from Python client to IPv6 server

2019-06-02 Thread Markus Elfring
>> How would you like to explain the error message “socket.gaierror: [Errno -9]
>> Address family for hostname not supported” on my Linux test system then?
>
> Can you supply a tiny standalone piece of code demonstrating this error 
> please?

The following script part would be relevant.

…
def send_data(x):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as so:
        global args
        so.connect((args.server_id, args.server_port))
…


If the address family “AF_INET6” is passed instead, the identification “::1” 
also works as a command parameter.
The data transmission then seems to succeed with my script 
“socket-send_json_data2.py”.
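The family mismatch can be shown without any server at all; a small sketch:

```python
import socket

# An IPv6 literal cannot be resolved under AF_INET; this is the same
# "Address family for hostname not supported" gaierror as above.
try:
    socket.getaddrinfo("::1", 1234, socket.AF_INET, socket.SOCK_STREAM)
    raised = False
except socket.gaierror:
    raised = True
assert raised

# Under AF_INET6 the same literal resolves fine.
info = socket.getaddrinfo("::1", 1234, socket.AF_INET6, socket.SOCK_STREAM)
assert info[0][0] == socket.AF_INET6
```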

Regards,
Markus




Re: Checking support for network connections from Python client to IPv6 server

2019-06-01 Thread Markus Elfring
> "Handled transparently" means that an ipv6 server can handle connections
> from ipv4 clients without doing anything special.

It is nice if this conversion is working.


> They just appear to come from a specific IPv6 address range.

I got expected connections by my small script “socket-send_test_data1.tcl”.


>> Under which circumstances will the Python programming interfaces
>> support the direct usage of the identification “::1”?
>
> I'm not sure I understand the question. They do.

How would you like to explain the error message “socket.gaierror: [Errno -9]
Address family for hostname not supported” on my Linux test system then?

Regards,
Markus


Re: Checking refusal of a network connection

2019-06-01 Thread Markus Elfring
> It looks like the service isn't listening at the time the so.connect is 
> called.

* I get another impression from the programs “/usr/bin/netstat” and 
“/usr/bin/ss”.

* The data transmission also seems to work for my small script 
“socket-send_test_data1.tcl”
  (even when the identification “::1” was passed as a command parameter).

Regards,
Markus


Re: Checking refusal of a network connection

2019-06-01 Thread Markus Elfring
> Which specific information in that man page contradicts what I wrote?

We can agree that the mentioned IP addresses are distinct.
But the corresponding functionality should be equivalent.


> If you think of
>
> | IPv4 connections can be handled with the v6 API by using the
> | v4-mapped-on-v6 address type; thus a program needs to support only
> | this API  type to  support  both  protocols.
>
> please note that 127.0.0.1 mapped to IPv6 is ::7f00:1, not ::1.

I also find other information interesting, like “This is handled transparently 
by the address handling functions in the C library.”


> So you still need to bind to two addresses.

I am unsure about this conclusion.

Under which circumstances will the Python programming interfaces
support the direct usage of the identification “::1”?

Regards,
Markus


Re: Checking refusal of a network connection

2019-06-01 Thread Markus Elfring
>> I would expect that the IPv4 address from such a connection attempt
>> would be automatically converted to an IPv6 loopback address.
>
> You haven't said which OS you are using, but as far as I know this
> expectation will be frustrated at least on Linux: There ::1 and
> 127.0.0.1 are distinct addresses.

How does this view fit to information from the Linux programmer's manual?
See also: command “man 7 ipv6”

Regards,
Markus


Re: Checking refusal of a network connection

2019-06-01 Thread Markus Elfring
>> connect(3, {sa_family=AF_INET, sin_port=htons(37351), 
>> sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
>
>   Without seeing the code, I'd be suspicious of that difference.

I would expect that the IPv4 address from such a connection attempt
would be automatically converted to an IPv6 loopback address.

Unfortunately, the direct specification “… socket-send_json_data.py --server_id 
::1 …”
does not work at the moment because of the error message “socket.gaierror: 
[Errno -9]
Address family for hostname not supported”.

Regards,
Markus


Re: Checking refusal of a network connection

2019-06-01 Thread Markus Elfring
> Also, it can be very useful to strace the client process, eg:

Do you find the following background information more helpful
for the desired clarification of unexpected software behaviour?

elfring@Sonne:~/Projekte/Python> LANG=C strace -e trace=network 
/usr/bin/python3 socket-send_json_data.py --server_id localhost --server_port 
37351
Using Python version:
3.7.2 …
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_IP) = 3
socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 5
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 4
connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = 0
sendto(4, "\2\0\0\0\r\0\0\0\6\0\0\0hosts\0", 18, MSG_NOSIGNAL, NULL, 0) = 18
recvmsg(4, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="hosts\0", 
iov_len=6}, {iov_base="\310O\3\0\0\0\0\0", iov_len=8}], msg_iovlen=2, 
msg_control=[{cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, 
cmsg_data=[5]}], msg_controllen=20, msg_flags=MSG_CMSG_CLOEXEC}, 
MSG_CMSG_CLOEXEC) = 14
connect(3, {sa_family=AF_INET, sin_port=htons(37351), 
sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
Traceback …:
…
  File "socket-send_json_data.py", line 17, in send_data
so.connect((args.server_id, args.server_port))
ConnectionRefusedError: [Errno 111] Connection refused
+++ exited with 1 +++


> You can also strace the running service process:

I do not observe additional function calls for the TCP client connection
attempt here.


> Also, on the service side it isn't enough to create the service socket,
> you also need to do an accept IIRC.

This should be performed by my implementation of the C++ function “setup”.

Regards,
Markus


Re: Checking refusal of a network connection

2019-05-31 Thread Markus Elfring
>   Well, providing minimal code samples that produce the problem would be 
> a start.

I prefer another approach to clarify relevant software configuration 
differences.


>   Otherwise we are just guessing...

I can offer other data beforehand.


> Maybe you have a firewall problem.

I hope not.

I can try another server variant out as expected.


elfring@Sonne:~/Projekte/Python> /usr/bin/python3 test-server2.py &
[1] 14067
elfring@Sonne:~/Projekte/Python> /usr/bin/ss -t -l -p -H|grep python
LISTEN0  5  127.0.0.1:search-agent 0.0.0.0:*
 users:(("python3",pid=14067,fd=3))
elfring@Sonne:~/Projekte/Python> /usr/bin/python3 socket-send_json_data.py 
--server_id localhost --server_port 1234
Using Python version:
3.7.2 (default, Dec 30 2018, 16:18:15) [GCC]
elfring@Sonne:~/Projekte/Python> Result:
…


Can connections also work with a network service address like “[::1]:35529”
(which would be used by the C++ server implementation so far)?
What does the software situation look like regarding support for the IPv6 
loopback address?

Regards,
Markus


Checking refusal of a network connection

2019-05-31 Thread Markus Elfring
Hello,

I can start a service as desired.

elfring@Sonne:~/Projekte/Bau/C++/test-statistic-server1/local> 
./test-statistic-server2 & /usr/bin/ss -t -l -p -H|grep test
[1] 8961
waiting for connections
server_id: localhost
server_port: 35529
LISTEN 0   123  [::1]:35529  [::]:* 
 users:(("test-statistic-",pid=8961,fd=3))
elfring@Sonne:~/Projekte/Bau/C++/test-statistic-server1/local> 0 connections 
were handled.


But I wonder about the following error message then.

elfring@Sonne:~/Projekte/Python> /usr/bin/python3 
~/Projekte/Python/socket-send_json_data.py --server_id localhost --server_port 
35529
Using Python version:
3.7.2 …
Traceback …:
…
  File "/home/elfring/Projekte/Python/socket-send_json_data.py", line 17, in 
send_data
so.connect((args.server_id, args.server_port))
ConnectionRefusedError: [Errno 111] Connection refused


How should this inter-process communication difficulty be resolved?
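One client-side resolution is to stop hard-coding AF_INET and instead try every address family that getaddrinfo reports, which is what socket.create_connection() does internally; a sketch with names of my own:

```python
import socket

def connect_any_family(host, port):
    """Try each (family, address) pair getaddrinfo reports until one
    works, mirroring what socket.create_connection() does internally."""
    last_error = None
    for family, type_, proto, _, addr in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        try:
            so = socket.socket(family, type_, proto)
            so.connect(addr)
            return so
        except OSError as exc:
            last_error = exc
    raise last_error
```

With this, a server bound only to [::1] is reached as soon as the resolver returns the IPv6 loopback entry for localhost.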

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-24 Thread Markus Elfring
> The file name for the client script is passed by a parameter to a command
> which is repeated by this server in a loop.
> It is evaluated then how often a known record set count was sent.

In what time range would you expect to receive the complete JSON data
that was sent by the child process (on the local host during my test)?
Can program termination be taken into account when waiting for data
still incoming from the server socket?
Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-19 Thread Markus Elfring
>> I get another impression from the statements “self._threads.append(t)” 
>> (process_request)
>> and “thread.join()” (server_close).
>
>   Okay -- v3.7 has added more logic that didn't exist in the v3.5 code
> I was examining... (block_on_close is new).

Thanks for such a version comparison.


>   However, I need to point out that this logic is part of server_close(),
> which is not the same as shutdown(). You have been calling shutdown() which
> only ends the loop accepting new connections, but leaves any in-process
> threads to finish handling their requests.
>
>   server_close() needs to be called by your code, I would expect AFTER
> calling shutdown() to stop accepting new requests (and starting new threads
> which may not be in the list that the close is trying to join).

Should this aspect be taken into account by the code specification “with 
server:”?
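As far as I can tell, yes, partly: `with server:` guarantees server_close() on exit, but shutdown() still has to be called explicitly inside the block. A sketch:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"ok")

# __exit__ calls server_close(), which (since 3.7) joins the handler
# threads; calling shutdown() is still our job, and it must come first.
with socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler) as server:
    t = threading.Thread(target=server.serve_forever)
    t.start()
    with socket.create_connection(server.server_address) as conn:
        reply = conn.recv(2)
    server.shutdown()      # stop the serve_forever() loop...
    t.join()               # ...then __exit__ performs server_close()

assert reply == b"ok"
```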


> And after calling server_close() you will not be able to simply "restart" the 
> server
> -- you must go through the entire server initialization process
> (ie: create a whole new server instance).

This should finally be achieved by the implementation of my method 
“perform_command”.
I hope that the corresponding data processing can then be cleanly repeated
as desired for test purposes.

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-19 Thread Markus Elfring
> socketserver threading model is that the main server loops waiting for
> connection requests, when it receives a request it creates a handler thread,

This data processing style can generally be fine as long as you would like
to work without a thread (or process) pool.


> and then it completely forgets about the thread

I have taken another look at the implementation of the corresponding methods.
https://github.com/python/cpython/blob/3.7/Lib/socketserver.py#L656

I get another impression from the statements “self._threads.append(t)” 
(process_request) and “thread.join()” (server_close).


> -- the thread is independent and completes the handling of the request.

Will a corresponding return value be set?


> If you need to ensure everything has finished before starting the next server 
> instance,

I am still looking for the support of software constraints in this direction.


> you will have to write the extensions to socketserver.

Will the development situation evolve any more here?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-14 Thread Markus Elfring
> Nowadays, I develop typically web applications.
> There, something similar to "CORBA" is used: WSDL described
> "web services". Typically, they use the web infrastructure
> and its protocols ("http", "https") for communication.

The popularity of these programming interfaces varies over time.

* I am trying to get also a data processing variant working
  based on the JSON format together with an extended socket
  server class.

* Do you eventually know any proxy services which would provide
  useful data type conversions?

Regards,
Markus




Re: Checking network input processing by Python for a multi-threaded server

2019-05-13 Thread Markus Elfring
> And in addition, you can derive your own class from `socketserver`
> and override methods to provide whatever additional functionality
> you think is necessary.

Such a programming possibility remains generally.
I am curious whether software development attention can also be directed
to this area.

I imagine that it can be easier to depend on a higher level
system infrastructure.
What do you think about reusing software extensions around the “CORBA” 
standard, like “omniORB”?

Regards,
Markus
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Checking network input processing by Python for a multi-threaded server

2019-05-04 Thread Markus Elfring
>>> Server.shutdown() sets a flag that tells the main server to /stop
>>> accepting new requests/.
>>
>> Can it be that this method should perform a bit more resource management
>> (according to the selected configuration like “socketserver.ThreadingMixIn”)?
>>
>   There isn't much more it can do

I see further software design possibilities.


> -- it has been a long standing axiom that killing threads is not recommended.

This data processing approach will trigger various development challenges.


>>> You may also need to code logic to ensure any handler threads have completed
>>
>> Can a class like “BaseServer” be responsible for the determination
>> if all extra started threads (or background processes) finished their work
>> as expected?
>
>   You are asking for changes to the Python library for a use case
> that is not common.

I find this view questionable.


> Normally connections to a server are independent and do not share common data

Would you like to clarify corresponding application statistics any further?


> -- if there is anything in common,

Do you identify any more shared functionality?


> it is likely stored in a database management system

A use case evolved into my need to also work with an ordinary Python list
variable as a simple storage interface.


> which itself will provide locking for updates,

It is nice when you can reuse such software.


> and the connection handler will have to be coded to handle retries
> if multiple connections try to update the same records.

I am not concerned about this aspect for my test case.


> Servers aren't meant to be started and shutdown at a rapid rate

Their run times can vary considerably.


> (it's called "serve_forever" for a reason).

Would another term be more appropriate?


>   If the socketserver module doesn't provide what you need,

It took a while to understand the observed software behaviour better.


> you are free to copy socketserver.py to some other file (myserver.py?),
> and modify it to fit your needs.

Will it help to clarify any software extensions with corresponding maintainers?


> Maybe have the function that spawns handler threads append the thread ID
> to a list, have the function that cleans up a handler thread at the end
> send its ID via a Queue object,

This approach can be reasonable to some degree.


> and have the master periodically (probably the same loop that checks
> for shutdown), read the Queue, and remove the received ID from the list
> of active threads.

I imagine that there are nicer design options available for notifications
according to thread terminations.


> On shutdown, you loop reading the Queue and removing IDs from the list
> of active threads until the list is empty.

Will a condition variable (or a semaphore) be more helpful here?


>> How much do the programming interfaces from the available classes support
>> the determination that submitted tasks were completely finished?
>>
>   Read the library reference manual and, for those modules with Python
> source, the source files (threading.py, socketserver.py, queue.py...)
>
>   The simpler answer is that these modules DON'T...

I suggest improving the software situation a bit more.


> It is your responsibility.

This view can be partly appropriate.


> socketserver threading model is that the main server loops waiting
> for connection requests, when it receives a request it creates
> a handler thread, and then it completely forgets about the thread

I find that this technical detail can be better documented.


> -- the thread is independent and completes the handling of the request.

Would you occasionally like to wait on the return value from such
data processing?
(Are these threads joinable?)


> If you need to ensure everything has finished before starting
> the next server instance, you will have to write the extensions
> to socketserver.

I might achieve something myself while it can be also nice to achieve
adjustments together with other developers.


>> Should mentioned system constraints be provided already by the Python
>> function (or class) library?
>
>   Again, the model for socketserver, especially in threaded mode, is that
> requests being handled are completely independent. shutdown() merely stops
> the master from responding to new requests.

I find this understanding of the software situation also useful.


>   In a more normal situation, .shutdown() would be called
> and then the entire program would call exit.

* Do you expect any more functionality here than an exit from a single thread?

* Does this wording include the aborting of threads which were left over?


> It is NOT normal for a program to create a server, shut it down,
> only to then repeat the sequence.

Will your understanding of such a use case grow, too?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-04 Thread Markus Elfring
> If you have a multi-threaded application and you want to be on
> the "safe side", you always use your own locks.

I suggest reconsidering your software expectations around
the word “always”.
There are more software design options available.


> Python uses locks to protect its own data structures.
> Whether this protection is enough for your use of Python types
> depends on details you may not want to worry about.

I agree with such a general view.


> For example: most operations on Python types are atomic
> (if they do not involve some kind of "waiting" or "slow operation")
> *BUT* if they can destroy objects, then arbitrary code
> (from destructors) can be executed and then they are not atomic.

The safe handling of finalizers can trigger development challenges.


> As an example "list.append" is atomic (no object is destroyed),
> but "list[:] = ..." is not: while the list operation itself
> is not interrupted by another thread, the operation may destroy
> objects (the old list components) and other threads may get control
> before the assignment has finished.

How would you determine (with the help of the Python function/class library)
that previously submitted tasks were successfully executed?
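One way to answer this question is to join the worker threads and then read the shared list under its lock. A minimal sketch with hypothetical names (`records`, `store`), not code from the scripts discussed here:

```python
import threading

records = []
records_lock = threading.Lock()

def store(record):
    # list.append is atomic under the GIL, but holding an explicit lock
    # also covers compound operations such as replacing the whole list
    with records_lock:
        records.append(record)

threads = [threading.Thread(target=store, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # after join(), every submitted task has finished

with records_lock:
    snapshot = list(records)   # take a consistent copy for analysis
print("stored:", len(snapshot))
```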

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-04 Thread Markus Elfring
>   Server.shutdown() sets a flag that tells the main server to /stop
> accepting new requests/.

Can it be that this method should perform a bit more resource management
(according to the selected configuration like “socketserver.ThreadingMixIn”)?


>   So far as I can tell, for a threaded server, any threads/requests that
> were started and haven't completed their handlers will run to completion --
> however long that handler takes to finish the request. Whether
> threaded/forked/single-thread -- once a request has been accepted, it will
> run to completion.

This is good to know and such functionality fits also to my expectations
for the software behaviour.


> Executing .shutdown() will not kill unfinished handler threads.

I got a result like the following from another test variant
on a Linux system.

elfring@Sonne:~/Projekte/Python> time python3 test-statistic-server2.py
incidence|"available records"|"running threads"|"return code"|"command output"
80|6|1|0|
1|4|3|0|
1|4|4|0|
3|5|2|0|
1|5|3|0|
5|7|1|0|
1|3|4|0|
1|8|1|0|
1|4|1|0|
3|5|1|0|
1|4|2|0|
1|3|2|0|
1|6|2|0|

real0m48,373s
user0m6,682s
sys 0m1,337s


>> How do you think about to improve the distinction for the really
>> desired lock granularity in my use case?
>
>   If your request handler updates ANY shared data, YOU have to code the
> needed MUTEX locks into that handler.

I suggest stating such a software requirement more precisely.


> You may also need to code logic to ensure any handler threads have completed

Can a class like “BaseServer” be responsible for the determination
if all extra started threads (or background processes) finished their work
as expected?


> before your main thread accesses the shared data

I became unsure at which point a specific Python list variable
will reflect the received record sets from a single test command
in a consistent way according to the discussed data processing.


> -- that may require a different type of lock;

I am curious about corresponding software adjustments.


> something that allows multiple threads to hold in parallel
> (unfortunately, Event() and Condition() aren't directly suitable)

Which data and process management approaches will be needed finally?


>   Condition() with a global counter (the counter needs its own lock)
> might work: handler does something like

How much do the programming interfaces from the available classes support
the determination that submitted tasks were completely finished?


> There may still be a race condition on the last request if it is started
> between the .shutdown call and the counter test (ie; the main submits
> .shutdown, server starts a thread which doesn't get scheduled yet,
> main does the .acquire()s, finds counter is 0 so assumes everything is done,
> and THEN the last thread gets scheduled and increments the counter.

Should mentioned system constraints be provided already by the Python
function (or class) library?
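The counter scheme quoted above can be sketched as follows; letting the master increment the counter *before* spawning each handler closes the race where a not-yet-scheduled thread has not registered itself. The names are illustrative, not part of socketserver:

```python
import threading

counter = 0                   # number of requests not yet finished
cond = threading.Condition()  # also serves as the counter's lock

def handler():
    global counter
    # ... handle one request ...
    with cond:
        counter -= 1
        cond.notify_all()

for _ in range(10):
    with cond:
        counter += 1   # master records the request before spawning the thread
    threading.Thread(target=handler).start()

# after .shutdown(): wait until no recorded request is outstanding
with cond:
    cond.wait_for(lambda: counter == 0)
print("outstanding:", counter)
```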

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-03 Thread Markus Elfring
>   I suggested UDP as a TEST, not for the end use...

I can understand such a suggestion.
Can it distract from other software surprises?


> If UDP gives you the results you expect, it most likely means there is a 
> problem

There is a questionable software behaviour still waiting for a proper solution
(without switching the data transmission technique for this use case).


> in how you are processing TCP data.

Which impressions did you get from the implementation of the published
Python functions “receive_data” and “receive_message” (according to
the previously mentioned links)?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-03 Thread Markus Elfring
> In any multi-threaded application, you must be carefull when
> accessing (and especially modifying) shared (e.g. "global") objects.
> In general, you will need locks to synchronize the access.

I agree with this general view.


> Appending to a list (and some other elementary operations
> on Python data structures) is protected by Python's "GIL"
> (= "Global Interpreter Lock") and thereby becomes "atomic".
> Thus, if all you do is appending - then the append has no need
> of explicite locking.

Thanks for such information.


> It does not protect the integrity of your data structures.

It can be that the data protection needs to be extended occasionally.
But I am more interested in the detail that a specific Python list variable
should reflect the received record sets from a single test command
in a consistent way.

Did all extra worker threads exit after the statement “server.shutdown()”
was successfully executed?


> Thus, if your threads modify shared objects, then you are responsible
> to protect access to them with appropriate locking.

How do you think about to improve the distinction for the really
desired lock granularity in my use case?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-02 Thread Markus Elfring
>> An instance for the class “threaded_TCP_server” and a call of 
>> “subprocess.run(…)”.
>>
>   And how many connections does the subprocess make?

A new TCP connection is established with the information from the passed
parameters for each call of the function “sendall”. The test command then
sends JSON data over these six connections.


> A full connect(), send(), close() for each output?

Yes, for this test variant.


> OTOH, if you are making multiple connect() calls, then OS scheduling
> could result in multiple server threads running.

This functionality is desired.


…
> Of course, you still have to use your head!
…
> For instance, it makes no sense to use a forking server
…

I would expect that the main test process will not be forked here.


>> I hope that the execution of the statement “server.shutdown()” triggers
>> a separation in the reported inter-process communication.
>
>   .shutdown() stops the server from processing new connections...

I hope so, too.
(Additional server objects can be created in subsequent loop iterations.)

How does this interpretation fit the wording “Tell the serve_forever() loop
to stop and wait until it does.” from the documentation of
the class “socketserver.BaseServer”?


> It does nothing for the nature of TCP streams

This aspect should be fine.


>   I'd be tempted to convert the master and subprocess from TCP to UDP,
> just to see if there is a difference.

I do not want to change the data transmission technique for this use case.


I suggest taking another look at a related discussion topic like
“Data exchange over network interfaces by SmPL scripts” if you would prefer
to review a concrete source code example instead of recognising
data processing consequences from a known test algorithm.
https://systeme.lip6.fr/pipermail/cocci/2019-April/005792.html
https://lore.kernel.org/cocci/6ec5b70f-39c3-79e5-608f-446a870f0...@web.de/


Will it help to add a bug report in an issue tracker?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-02 Thread Markus Elfring
>> May I expect that data were completely received from clients and accordingly
>> processed by the request handler in the started threads after
>> the statement “server.shutdown()” was successfully executed?
>
> Python delegates those low level services to the underlaying
> network library, which likely will implement the TCP specification.

* How do you think about data processing consequences when record sets are
  concurrently received and are appended to a global list (for a Python 
variable)?

* Would you expect that the run time system takes care for data consistency
  because of involved multi-threading?

Regards,
Markus


Re: Dynamic selection for network service ports?

2019-05-01 Thread Markus Elfring
> For the truly lazy, we have hash links.

Thanks for your reminder.


> https://docs.python.org/3/library/socketserver.html#socketserver.BaseServer.allow_reuse_address

The relationship of this class attribute with the identifier (or option) 
“SO_REUSEADDR”
might become easier to find.
The use of such a configuration parameter might need further considerations.

Regards,
Markus


Re: Dynamic selection for network service ports?

2019-05-01 Thread Markus Elfring
>   https://docs.python.org/3.4/library/socketserver.html
> about the middle of the page...

It seems that you would like to refer to another document.
https://docs.python.org/3/library/socket.html#socket-example

Under which circumstances will the execution of the method “socket.setsockopt”
be triggered by a class like “socketserver.TCPServer”?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-01 Thread Markus Elfring
>   Why not provide them so that anyone trying to analyze what you are
> seeing can try them for themselves.

I assume that another information system can be more appropriate
for file distribution than this mailing list.


>   Starting a full process takes time

This is usual; such an action would be necessary for the test case.


>   Do you have ANY synchronization enabled,

I would expect that this is provided by the run time system for the
Python scripting language.


>   so that you start all the children

The main process waits on the exit for the single started command
in each loop iteration.


> and they all wait for a single "go" signal to be set

No.


> -- so that they all start processing at the same time?

I suggest taking another look at the description for the test algorithm.


>   No idea what is in each loop...

An instance for the class “threaded_TCP_server” and a call of 
“subprocess.run(…)”.


>   If there is only one list (implied by #1)

There are two lists adjusted.

1. Storage for received records before the statement “server.shutdown()”
   (and a reset to an empty list) is executed at the end of each loop iteration.

2. List lengths are recorded for analysis after the data collection phase.


>   Again, it is not clear what is being counted here...

How many record sets were stored and are accessible at one point in time
in such a data structure?


>   What is an "incidence", how is "available records" determined?

The generated table shows how often a specific number of record sets
was temporarily available for further data processing.
I would expect that the correct count should be 100 (once) here
(instead of the display of deviations from the original six test records).


>   Also, TCP is a stream protocol, not a record protocol.

This view is generally fine.

JSON data structures are transmitted by the discussed approach.


> A client could use, say, 6 separate .send() operations,

This should be happening in this test.


> while the server receives everything in a single .recv() call.

I hope that the execution of the statement “server.shutdown()” triggers
a separation in the reported inter-process communication.

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-01 Thread Markus Elfring
> https://docs.python.org/3/library/socketserver.html#asynchronous-mixins

I have constructed a pair of small scripts for another test.
A command (which refers to the second Python script) is executed 100 times
by “subprocess.run()” with parameters so that the child process can send six
test records back to the caller over a TCP connection.

1. The received records are appended to a global list variable during
   each loop iteration.

2. The list length is appended to another global list variable.

3. The stored list lengths are counted by grouping of this information.

   Now I wonder again about a test result like the following for
   the software “Python 3.7.2-3.1” (for an openSUSE system).


elfring@Sonne:~/Projekte/Python> time /usr/bin/python3 test-statistic-server1.py
incidence|"available records"|"return code"|"command output"
44|6|0|
12|5|0|
13|4|0|
16|3|0|
2|7|0|
8|2|0|
3|8|0|
1|1|0|
1|9|0|

real0m29,123s
user0m5,925s
sys 0m1,073s


Does this data processing approach indicate a need for further software 
corrections?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-05-01 Thread Markus Elfring
> https://docs.python.org/3/library/socketserver.html#asynchronous-mixins

An example is provided also in this software documentation.
May I expect that data were completely received from clients and accordingly
processed by the request handler in the started threads after
the statement “server.shutdown()” was successfully executed?

Regards,
Markus


Re: Dynamic selection for network service ports?

2019-05-01 Thread Markus Elfring
> If your server listens on a random port how does the client know
> which port to connect to?

I am fiddling also with the data processing variant that such connection
properties are passed as parameters for a command which is executed
as a child process so that desired data can be sent back to its caller.

Regards,
Markus


Re: Dynamic selection for network service ports?

2019-05-01 Thread Markus Elfring
> You'll get anything in the ephemeral ports range.

From which documentation source did you get this information about
the handling of a zero as a useful setting?


> But the other cause is that you recently shut the server down,
> and the port is still in the TIME_WAIT state.

Does this technical detail occasionally increase the need to choose
an additional number for a quickly restarted service in a dynamic way?


> You can avoid this with the SO_REUSEADDR flag.

Can such a configuration parameter be used also together with
programming interfaces from the module “socketserver”?
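Yes: socketserver exposes this as the class attribute `allow_reuse_address`. A minimal sketch (the class names are illustrative):

```python
import socket
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(1024))

class ReusableTCPServer(socketserver.TCPServer):
    # TCPServer.server_bind() applies setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    # before bind() whenever this class attribute is true
    allow_reuse_address = True

server = ReusableTCPServer(("127.0.0.1", 0), EchoHandler)
reuse = server.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
server.server_close()
print("SO_REUSEADDR:", reuse)
```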

Regards,
Markus


Re: Dynamic selection for network service ports?

2019-04-30 Thread Markus Elfring
> In Python, there's a certain amount of support. You can attempt to
> bind to a port, and if you fail, try the next one in a sequence.

* Zero seems to be a usable parameter here as well.
  Is the system configuration documentation unclear about the applied
  value range for the port allocation?
  Can it happen that special permissions would be needed for a setting?

* How are the chances to improve programming interfaces around
  the module “socketserver”?
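For the first point, a sketch of how binding to port 0 interacts with socketserver (the handler is a hypothetical placeholder):

```python
import socketserver

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        pass

# port 0 lets the kernel pick a free ephemeral port; the allocated number
# can then be read back from server_address and handed to clients (e.g. as
# a parameter of a child process command)
server = socketserver.TCPServer(("127.0.0.1", 0), Handler)
host, port = server.server_address
server.server_close()
print("allocated port:", port)
```

No special permissions are needed for ephemeral ports, in contrast to ports below 1024.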


> It's simple, it's straight-forward, and it doesn't cause problems.

Would you like to reduce connection management difficulties because of
an error message like “OSError: [Errno 98] Address already in use”?

Regards,
Markus


Re: Dynamic selection for network service ports?

2019-04-30 Thread Markus Elfring
> * Which challenges from inter-process communication should be taken better
>   into account for the involved programming interfaces (because of
>   multi-threading)?

I would like to clarify another recurring challenge with network configuration.
A port number should be selected somehow for the service where the desired
input data will be received.

It can be nice when a fixed port can be specified for such data exchange.
Advanced service management can achieve a reasonably safe number allocation.

But I would occasionally prefer a dynamic selection for some data processing
approaches on my test systems.
How does the support look like for the handling of ephemeral ports by
programming interfaces for Python?

Regards,
Markus


Re: Checking network input processing by Python for a multi-threaded server

2019-04-30 Thread Markus Elfring
> Does this mean that the second number in a listing like the above
> should be approximatively equal?

The expectation (or hope) was that an identical number of record sets
would be available for each loop iteration in such a data processing approach
by the server.
But the reality is different for the observed input value distribution.


> Then, obviously, your threads must communicate among themselves
> and wait in case they have too much run ahead.

This is a software design option. I am looking again for adjustments
around programming interfaces for efficient appending to mapped data
according to required context management.


> As you say to have created a "multi-threaded TCP server", I assume
> that your server spawns a thread for each TCP connection.

Is this the software behaviour we usually get from
the class “socketserver.ThreadingMixIn”?


> Then, you can concentrate on proper multi-thread synchronization
> and forget about "inter-process communication challenges".

I showed an experiment where I put a loop around a test command.
I am wondering also about varying data processing results from
the execution of a single command. The visualisation of output
statistics seems to be more challenging for such a test case.
Can any other test tools help a bit more here?

Regards,
Markus


Re: Checking support for efficient appending to mapped data

2019-04-30 Thread Markus Elfring
> The file name for the client script is passed by a parameter to a command
> which is repeated by this server in a loop.

Additional explanations can be helpful for the shown software situation.


> It is evaluated then how often a known record set count was sent.

It was actually determined for a specific function variant how many input
record sets were available for data processing by the server in a loop 
iteration.


> I got a test result like the following.
>
>
> elfring@Sonne:~/Projekte/Coccinelle/janitor> time /usr/bin/python3 
> list_last_two_statements_from_if_branches-statistic-server3.py
> "return code"|"received records"|"command output"|incidence
> 0|9||63
> 0|5||5
> 0|13||2
…

Such a test result shows that some records could be handled by the server
within the time range from the start of a command and the exit of
the corresponding child process which sends desired data back to
its caller.
A fraction of records can be handled only at a subsequent iteration
if no session management was applied as in this test case.


> * How should this data processing approach be adjusted so that the same number
>   of records will be handled in a consistent way?

There is a need to add session management so that required context information
will be connected with the provided input data. Each session needs to be
identified by an unique key. The Python programming language provides the data
type “dict” for such a mapping as standard functionality.
https://docs.python.org/3/library/stdtypes.html#mapping-types-dict

Python's support for associative containers is generally nice.
But I find that consequences from modification of mapped data can become more
interesting for the mentioned use case.
The operator “+=” (augmented assignment) can be used there, can't it?
The appending of data is supported for some types by this operation.
I am told that some data types expect a well-known method like “append”
to be called instead for such an “addition” (if you care about efficient
data processing).
https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types
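The session mapping described above can be sketched as follows; the key and record values are illustrative:

```python
sessions = {}   # session key -> list of received record sets

def store(session_key, record):
    # setdefault creates the per-session list on first use;
    # list.append then extends it in place -- no temporary copies,
    # unlike building a new list with "sessions[key] + [record]"
    sessions.setdefault(session_key, []).append(record)

for key, record in [("a", 1), ("b", 2), ("a", 3)]:
    store(key, record)
print(sessions)
```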

Now I am looking again for further clarifications and improvements
in this area.
Would you like to share any more ideas here?

Regards,
Markus


Checking network input processing by Python for a multi-threaded server

2019-04-29 Thread Markus Elfring
Hello,

I constructed another multi-threaded TCP server for my needs
(based on the available software documentation).
https://docs.python.org/3/library/socketserver.html#asynchronous-mixins
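A minimal version of such a threaded server, along the lines of the documentation example linked above (handler and class names are illustrative, not the actual test scripts):

```python
import socket
import socketserver
import threading

class UpperHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data.upper())

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass   # each accepted connection is handled in its own thread

server = ThreadedTCPServer(("127.0.0.1", 0), UpperHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection((host, port)) as sock:
    sock.sendall(b"ping")
    reply = sock.recv(1024)

server.shutdown()       # stop accepting new requests
server.server_close()
print(reply)
```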

I constructed also a corresponding script which should send nine record sets
(which get extracted from a simple JSON file) to this server on my local host
by the software “Python 3.7.2-3.1” (for an openSUSE system).
Related background information can be seen for a discussion topic like
“Data exchange over network interfaces by SmPL scripts”.
https://systeme.lip6.fr/pipermail/cocci/2019-April/005792.html
https://lore.kernel.org/cocci/6ec5b70f-39c3-79e5-608f-446a870f0...@web.de/

The file name for the client script is passed by a parameter to a command
which is repeated by this server in a loop.
It is evaluated then how often a known record set count was sent.
I got a test result like the following.


elfring@Sonne:~/Projekte/Coccinelle/janitor> time /usr/bin/python3 
list_last_two_statements_from_if_branches-statistic-server3.py
"return code"|"received records"|"command output"|incidence
0|9||63
0|5||5
0|13||2
0|10||5
0|11||7
0|8||3
0|7||3
0|14||3
0|6||4
0|12||3
0|4||2

real1m23,301s
user0m6,434s
sys 0m1,430s


* How should this data processing approach be adjusted so that the same number
  of records will be handled in a consistent way?

* Which challenges from inter-process communication should be taken better into
  account for the involved programming interfaces (because of multi-threading)?

Regards,
Markus


[issue36382] socket.getfqdn() returns domain "mshome.net"

2019-03-20 Thread Markus


Markus  added the comment:

I found the IP of mshome.net in an Ethernet adapter "vEthernet"
It seems that this adapter stems from Hyper-V.


It therefore seems that socket.getfqdn() reported the wrong network adapter 
once.

Because I cannot reproduce this, I leave this issue closed.

--
resolution: not a bug -> works for me

___
Python tracker 
<https://bugs.python.org/issue36382>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36382] socket.getfqdn() returns domain "mshome.net"

2019-03-20 Thread Markus


Markus  added the comment:

Dear Steve,
in fact not a Python bug at all.

I used 2 commands: 
  ipconfig /release
  ipconfig /renew
and rebooted.

This fixed the issue, for now.

Also, I found this domain in c:\Windows\System32\drivers\etc\hosts.ics

Unclear who created that entry.


Documenting this here could help others with the same problem.

Closing this issue.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue36382] socket.getfqdn() returns domain "mshome.net"

2019-03-20 Thread Markus


New submission from Markus :

In a corporate network, `wmic computersystem get domain` returns the correct 
domain.

On some clients, the Python query "socket.getfqdn()" returns the wrong domain, 
namely "mshome.net"

>>> import socket
>>> socket.getfqdn()
'*.mshome.net'

I have found only very old mentions of that domain.

The problem persists after reboot.

Tried versions 
Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit 
(AMD64)] on win32
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit 
(Intel)] on win32

Currently, I suspect this is a DHCP configuration problem.

--
components: Windows
messages: 338472
nosy: markuskramerIgitt, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: socket.getfqdn() returns domain "mshome.net"
type: behavior
versions: Python 2.7, Python 3.7




[issue35671] reserved identifier violation

2019-01-07 Thread Markus Elfring


Markus Elfring  added the comment:

* What do you think about reducing the tampering with the reserved name space 
in the mentioned source code (and also in related components)?
* Would you like to avoid that this software depends on undefined behaviour?

--




[issue35671] reserved identifier violation

2019-01-06 Thread Markus Elfring

New submission from Markus Elfring :

I would like to point out that identifiers like “__DYNAMIC_ANNOTATIONS_H__” and 
“_Py_memory_order” do not fit to the expected naming convention of the C++ 
language standard.
https://www.securecoding.cert.org/confluence/display/cplusplus/DCL51-CPP.+Do+not+declare+or+define+a+reserved+identifier

Would you like to adjust your selection for unique names?
* 
https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Include/dynamic_annotations.h#L56
* 
https://github.com/python/cpython/blob/130893debfd97c70e3a89d9ba49892f53e6b9d79/Include/internal/pycore_atomic.h#L36

--
components: Interpreter Core
messages: 333105
nosy: elfring
priority: normal
severity: normal
status: open
title: reserved identifier violation
type: security
versions: Python 3.7




[issue30545] Enum equality across modules: comparing objects instead of values

2018-08-12 Thread Markus Wegmann


Markus Wegmann  added the comment:

Hi Ethan

> Your Enum example in flawless is not an IntEnum, so the error (unable to add 
> an integer to None) seems entirely unrelated.

The TypeError is just a consequence of the faulty Enum identity comparison some 
lines before. I mentioned the TypeError so you can verify whether your Python 
version takes the same program flow.


I also did further research. The Enums are definitely different regarding the 
module path -- the instance comparison will therefore return False. I checked 
__module__ + __qualname__:

`flora_tools.radio_configuration.RadioModem.LORA`

vs.

`radio_configuration.RadioModem.LORA`


The cause is the wrong import statement in `flora_tools/codegen/codegen.py`:

`from radio_configuration import RadioConfiguration,\ 
RADIO_CONFIGURATIONS`

It should have been

`from flora_tools.radio_configuration import RadioConfiguration\
RADIO_CONFIGURATIONS`

The real deal here is why I was allowed to directly import from 
`radio_configuration` in the first place. In my toy project I am not allowed to 
directly import a submodule without the proper root package name prepended. 
Maybe I don't see the big picture, or have some crude options/monkey-patching 
enabled.

Nevertheless, the behaviour regarding Enum comparisons and different import 
paths seems to me quite misleading.
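The double-import effect described above can be reproduced without the project 
itself. A minimal sketch (module and member names are borrowed from the report; 
the loading mechanism is a stand-in for Python's import machinery): loading the 
same source under two module names produces two distinct Enum classes, so 
member comparison fails even though the values match.

```python
import sys
import types

# The same Enum definition, as it would appear in radio_configuration.py.
SOURCE = "import enum\nclass RadioModem(enum.Enum):\n    LORA = 1\n"

def load(name):
    # Simulate importing the same file under two different module paths.
    module = types.ModuleType(name)
    exec(SOURCE, module.__dict__)
    sys.modules[name] = module
    return module

a = load("radio_configuration")
b = load("flora_tools.radio_configuration")

print(a.RadioModem.LORA is b.RadioModem.LORA)              # False: distinct classes
print(a.RadioModem.LORA == b.RadioModem.LORA)              # False: Enum equality is identity-based
print(a.RadioModem.LORA.value == b.RadioModem.LORA.value)  # True: the hotfix comparison
```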

Best regards
Atokulus

--

___
Python tracker 
<https://bugs.python.org/issue30545>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30545] Enum equality across modules: comparing objects instead of values

2018-08-11 Thread Markus Wegmann


Markus Wegmann  added the comment:

Hi there!

I also ran into this kind of bug. Sadly, I could not replicate it with a 
simplified toy example.

You can experience the bug if you check out

https://github.com/Atokulus/flora_tools/tree/56bb17ea33c910915875214e916ab73f567b3b0c

and run

`python3.7 main.py generate_code -d ..`

You find the crucial part where the identity check fails at 'radio_math.py:102' 
in function `get_message_toa`. The program then fails, due to the wrong program 
flow.

`
C:\Users\marku\AppData\Local\Programs\Python\Python37\python.exe 
C:/Users/marku/PycharmProjects/flora_tools/flora_tools/__main__.py 
generate_code -d C:\Users\marku\Documents\flora
Traceback (most recent call last):
  File "C:/Users/marku/PycharmProjects/flora_tools/flora_tools/__main__.py", 
line 110, in 
main()
  File "C:/Users/marku/PycharmProjects/flora_tools/flora_tools/__main__.py", 
line 104, in main
generate_code(args.path)
  File "C:/Users/marku/PycharmProjects/flora_tools/flora_tools/__main__.py", 
line 58, in generate_code
code_gen = CodeGen(flora_path)
  File 
"C:\Users\marku\PycharmProjects\flora_tools\flora_tools\codegen\codegen.py", 
line 37, in __init__
self.generate_all()
  File 
"C:\Users\marku\PycharmProjects\flora_tools\flora_tools\codegen\codegen.py", 
line 40, in generate_all
self.generate_radio_constants()
  File 
"C:\Users\marku\PycharmProjects\flora_tools\flora_tools\codegen\codegen.py", 
line 72, in generate_radio_constants
radio_toas.append([math.get_message_toa(payload) for payload in payloads])
  File 
"C:\Users\marku\PycharmProjects\flora_tools\flora_tools\codegen\codegen.py", 
line 72, in 
radio_toas.append([math.get_message_toa(payload) for payload in payloads])
  File "C:\Users\marku\PycharmProjects\flora_tools\flora_tools\radio_math.py", 
line 130, in get_message_toa
) / self.configuration.bitrate
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'

Process finished with exit code 1
`

In the appendix, you find a toy project with similar structure, which in 
contrast to `flora_tools` works flawlessly.

I hope there is somebody out there with a little more experience in spotting 
the cause. If there is a bug in Python or CPython, it might have quite a 
security & safety impact.

Meanwhile I will hotfix my program by comparing the underlying enum values.

Best regards
Atokulus

--
nosy: +Markus Wegmann
Added file: https://bugs.python.org/file47747/enum_bug-flawless_example.zip

___
Python tracker 
<https://bugs.python.org/issue30545>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33246] Note in 18.2. json — JSON encoder and decoder is incorrect

2018-04-09 Thread Markus Järvisalo

New submission from Markus Järvisalo <markus.jarvis...@gmail.com>:

The note in https://docs.python.org/2/library/json.html section "18.2. json — 
JSON encoder and decoder".

The note incorrectly says that JSON is a subset of YAML 1.2, but the YAML 1.2 
specification itself states:

"The primary objective of this revision is to bring YAML into compliance with 
JSON as an official subset."

So it should be that YAML is a subset of JSON and not the other way around.

--
assignee: docs@python
components: Documentation
messages: 315111
nosy: Markus Järvisalo, docs@python
priority: normal
severity: normal
status: open
title: Note in 18.2. json — JSON encoder and decoder is incorrect
type: behavior
versions: Python 2.7

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33246>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31103] Windows Installer Product Version 3.6.2150.0 Offset By 0.0.150.0

2017-08-04 Thread Markus Kramer

Markus Kramer added the comment:

Steve, yes: from my point of view, the version in "Programs and Features" 
should be  3.6.2 or 3.6.2.0 or 3.6.2.150. 


Eryk, thank you for your explanation - I had misremembered the third field in 
ProductVersion as being the micro version, and learned something. 


If that would be helpful, I could change the title of this issue.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue31103>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31103] Windows Installer Product Version 3.6.2150.0 Offset By 0.0.150.0

2017-08-02 Thread Markus Kramer

Markus Kramer added the comment:

Screenshot of Product Version 3.6.2150.0

--
Added file: http://bugs.python.org/file47056/3.6.2.png

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue31103>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31103] Windows Installer Product Version 3.6.2150.0 Offset By 0.0.150.0

2017-08-02 Thread Markus Kramer

New submission from Markus Kramer:

Each Windows installation has a “product version”.

The Windows installer python-3.6.2.exe has product version "3.6.2150.0"  
(accessible with context menu / Properties / Details).

This causes at least 2 problems:
 - Automated software inventory relies on product version and therefore does 
not detect version 3.6.2
 - Microsoft installation guidelines require the first three fields to be 
smaller than 256.

Proposed alternatives for the value of product version:
- "3.6.2.0" to indicate the final release build.
- "3.6.2.150" to indicate the build number. The build number may be higher than 
256, but this is unusual for a final release.



Side note: 
This is a sub-problem of http://bugs.python.org/issue31077

--
components: Windows
messages: 299651
nosy: Damon Atkins, Markus Kramer, paul.moore, steve.dower, tim.golden, 
zach.ware
priority: normal
severity: normal
status: open
title: Windows Installer Product Version 3.6.2150.0 Offset By 0.0.150.0
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue31103>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29740] Visual C++ CRT security update from 14 June 2011

2017-03-07 Thread Markus

Markus added the comment:

I beg your pardon for being pedantic.
The issue is not MFC, but CRT.

The related safety bulletin 
(https://technet.microsoft.com/library/security/ms11-025) says

Your application may be an attack vector if all of the following conditions 
are true:

 - Your application makes use of the Microsoft Foundation Class (MFC) 
Library
 - Your application allows the loading of dynamic link libraries from 
untrusted locations, such as WebDAV shares

This is clearly **not** the case for Python.
So far so good.

I am concerned that the security update contains an updated vc90.crt 
9.0.30729.6161. 
If Python find the 6161 update, it will use it.

I found no information on the change between the 4940 version (from Python 
2.7.13) and the 6161 update (from the security update).

But as Python uses the 6161 update (if it is installed), I would like to raise 
the question of whether Python should ship it.

I am not a security expert, so this issue is based completely on the above 
observations and a crumb of logic.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29740>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29740] Visual C++ CRT security update from 14 June 2011

2017-03-06 Thread Markus

New submission from Markus:

In 14 June 2011 Microsoft released Visual C++ 2008 runtime MFC Security Update 
https://www.microsoft.com/en-us/download/details.aspx?id=26368

The Security Update also updates the CRT runtime (used by Python 2.7)

Without the security update, Python 2.7.13 uses vc90.crt 9.0.30729.4940
With the security  update, Python 2.7.13 uses vc90.crt 9.0.30729.6161
(Use e.g. Sysinternals procexp to see)

Why does Python not install the vc90.crt of the security update?

--
components: Build, Windows
messages: 289135
nosy: markuskramerIgitt, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Visual C++ CRT security update from 14 June 2011
type: security
versions: Python 2.7

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29740>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29351] absolute imports for logging

2017-01-23 Thread Markus Gerstel

Markus Gerstel added the comment:

Yes, this is indeed the same for other stdlib modules, too. Logging is just the 
first one that came to our attention during our investigations.

I haven't prepared any other patches yet though, because your answer could 
easily be "No, we cannot consider these changes under any circumstances for 2.7 
because $reason".

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29351>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29351] absolute imports for logging

2017-01-23 Thread Markus Gerstel

Changes by Markus Gerstel <markus.gers...@diamond.ac.uk>:


--
nosy: +vinay.sajip

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29351>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29351] absolute imports for logging

2017-01-23 Thread Markus Gerstel

New submission from Markus Gerstel:

Running 'import logging' causes at minimum 46 failing 'open' and 12 failing 
'stat' calls, because Python looks for modules inside python/Lib/logging which 
will never be there, in particular: sys, os, time, cStringIO, traceback, 
warnings, weakref, collections, codecs, thread, threading, atexit.

The impact of this is limited when python is installed locally, but noticeable 
when run on a networked file system.


How to reproduce: 
run 
$ strace python -c "import logging;" 2>&1 | grep ENOENT | grep "\/logging\/"


How to fix:
Add 'from __future__ import absolute_import' to all files in the logging 
directory.
A relevant patch is attached.
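For illustration, a sketch of what the one-line change does in each file (on 
Python 3 the future import is a no-op, since absolute imports are already the 
default there):

```python
# With Python 2's implicit relative imports, a bare "import time" inside
# Lib/logging/__init__.py first probes Lib/logging/time.py and fails before
# falling back to the stdlib module. The future import makes the lookup
# absolute from the start, skipping those doomed filesystem probes.
from __future__ import absolute_import

import time

print(time.__name__)  # the stdlib module, never a sibling logging/time.py
```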

--
components: Library (Lib)
files: 0001-absolute-import.patch
keywords: patch
messages: 286083
nosy: mgerstel
priority: normal
severity: normal
status: open
title: absolute imports for logging
type: resource usage
versions: Python 2.7
Added file: http://bugs.python.org/file46390/0001-absolute-import.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29351>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28310] Mixing yield and return with value is allowed

2016-09-29 Thread Markus Unterwaditzer

New submission from Markus Unterwaditzer:

The attached example code raises a SyntaxError in Python 2, but is 
syntactically valid in Python 3. The return value is ignored and an empty 
generator is produced. I see no reason for this behavioral change.
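A hypothetical reconstruction of the attached lol.py (the file itself is not 
shown in the archive). Note that on Python 3.3+ the returned value is not 
discarded entirely: it is attached to the StopIteration that terminates the 
generator.

```python
def gen():
    yield 1
    return 42  # SyntaxError on Python 2; accepted on Python 3

g = gen()
print(next(g))          # 1

try:
    next(g)
except StopIteration as exc:
    print(exc.value)    # 42 -- carried on the StopIteration, not lost
```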

--
components: Interpreter Core
files: lol.py
messages: 277691
nosy: untitaker
priority: normal
severity: normal
status: open
title: Mixing yield and return with value is allowed
versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6
Added file: http://bugs.python.org/file44877/lol.py

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28310>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-13 Thread Markus Holtermann

Markus Holtermann added the comment:

Thanks for your input. I removed the versionchanged block.

--
Added file: 
http://bugs.python.org/file44649/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Markus Holtermann added the comment:

Thanks for your input, Berker. Updated as suggested. I still include the 
versionchanged annotation, as I suspect more people look at the docs than at 
the entire changelog.

--
Added file: 
http://bugs.python.org/file44615/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Changes by Markus Holtermann <i...@markusholtermann.eu>:


Removed file: 
http://bugs.python.org/file44609/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.3.6.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Changes by Markus Holtermann <i...@markusholtermann.eu>:


Removed file: 
http://bugs.python.org/file44610/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.master.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com




[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Markus Holtermann added the comment:

Since the patch applies cleanly to the 3.5, 3.6 and master branches, I only 
attached one updated version of it.

--
Added file: 
http://bugs.python.org/file44612/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Changes by Markus Holtermann <i...@markusholtermann.eu>:


Removed file: 
http://bugs.python.org/file44608/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.3.5.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Changes by Markus Holtermann <i...@markusholtermann.eu>:


Added file: 
http://bugs.python.org/file44609/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.3.6.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Changes by Markus Holtermann <i...@markusholtermann.eu>:


Added file: 
http://bugs.python.org/file44610/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.master.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Changes by Markus Holtermann <i...@markusholtermann.eu>:


Added file: 
http://bugs.python.org/file44608/0001-Enable-WebSocket-URL-schemes-in-urllib.parse.urljoin.3.5.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25895] urllib.parse.urljoin does not handle WebSocket URLs

2016-09-12 Thread Markus Holtermann

Markus Holtermann added the comment:

As discussed with rbcollins during the KiwiPyCon sprints, I'll add patches for 
3.5, 3.6 and master, with docs mentioning the addition of `ws` and `wss` as of 
3.5.3.
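For context, a quick sketch of what the patch changes: urljoin only resolves 
relative references for schemes listed in urllib.parse.uses_relative and 
uses_netloc, which did not include ws/wss before this fix (the outputs below 
assume a Python with the patch applied):

```python
from urllib.parse import urljoin, uses_relative

print("ws" in uses_relative)                   # True once ws/wss are registered
print(urljoin("http://example.com/a/b", "c"))  # http://example.com/a/c
print(urljoin("ws://example.com/a/b", "c"))    # ws://example.com/a/c
```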

--
nosy: +MarkusH

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25137] Behavioral change / regression? with nested functools.partial

2015-09-18 Thread Markus Holtermann

Markus Holtermann added the comment:

Interesting thing though: this optimization is nowhere mentioned in the 3.5 
release notes ;)

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25137>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Elektra 0.8.13 with python plugins

2015-09-18 Thread Markus Raab
## Notes

There are some misconceptions about Elektra and semi-structured data (like 
XML, JSON). Elektra is a key/value storage that internally represents 
everything with keys and values. Even though Elektra can use XML and JSON 
files elegantly, there are limitations in what XML and JSON can represent. 
XML, e.g., cannot have holes within its structure, while this is obviously 
easily possible with key/value. And JSON, e.g., cannot have non-array entries 
within an array. This is a more general issue: configuration file formats are 
constrained in what they are able to express. The solution to this problem is 
validation, i.e. keys that do not fit the underlying format are rejected. Note 
there is no issue the other way round: special characteristics of 
configuration files can always be captured in Elektra's metadata.



## Get It!

You can download the release from
[here](http://www.libelektra.org/ftp/elektra/releases/elektra-0.8.13.tar.gz)
and now also [here on github]
(https://github.com/ElektraInitiative/ftp/tree/master/releases/elektra-0.8.13.tar.gz)

- name: elektra-0.8.13.tar.gz
- size: 2141758
- md5sum: 6e7640338f440e67aba91bd64b64f613
- sha1: ca58524d78e5d39a540a4db83ad527354524db5e
- sha256: f5c672ef9f7826023a577ca8643d0dcf20c3ad85720f36e39f98fe61ffe74637



This release tarball now is also available
[signed by me using gpg]
(http://www.libelektra.org/ftp/elektra/releases/elektra-0.8.13.tar.gz.gpg)

already built API-Docu can be found [here]
(http://doc.libelektra.org/api/0.8.13/html/)


## Stay tuned! ##

Subscribe to the
[RSS feed](http://doc.libelektra.org/news/feed.rss)
to always get the release notifications.

For any questions and comments, please contact the
[Mailing List](https://lists.sourceforge.net/lists/listinfo/registry-list)
the issue tracker [on github](http://git.libelektra.org/issues)
or by mail elek...@markus-raab.org.

[Permalink to this NEWS 
entry](http://doc.libelektra.org/news/3c00a5f1-c017-4555-92b5-a2cf6e0803e3.html)

For more information, see [http://libelektra.org](http://libelektra.org)

Best regards,
Markus
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue25137] Behavioral change / regression? with nested functools.partial

2015-09-15 Thread Markus Holtermann

New submission from Markus Holtermann:

Since #7830 nested partials are flattened. This is a behavioral change that 
causes a test failure in Django because we use nested partials to resolve 
relationships between models: 
https://github.com/django/django/pull/4423#issuecomment-138996095

In my opinion this is a regression since there's no way to turn off the new 
behavior.
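A small sketch of the flattening in question (behavior as of Python 3.5; 
earlier versions keep the nesting):

```python
from functools import partial

def add(a, b, c):
    return a + b + c

inner = partial(add, 1)
outer = partial(inner, 2)  # flattened on 3.5+ into partial(add, 1, 2)

print(outer(3))            # 6 on any version
print(outer.func is add)   # True on 3.5+; before, outer.func was the inner partial
print(outer.args)          # (1, 2) on 3.5+
```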

--
components: Library (Lib)
messages: 250814
nosy: MarkusH, belopolsky
priority: normal
severity: normal
status: open
title: Behavioral change / regression? with nested functools.partial
type: behavior
versions: Python 3.5

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25137>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20438] inspect: Deprecate getfullargspec?

2015-09-15 Thread Markus Unterwaditzer

Markus Unterwaditzer added the comment:

It should be properly noted that the API isn't actually going to be removed 
anytime soon.

Also I think issuing a warning about this was a mistake. For software that 
wants to stay compatible with both Python 2 and 3 it's basically useless.

--
nosy: +untitaker

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20438>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20438] inspect: Deprecate getfullargspec?

2015-09-15 Thread Markus Unterwaditzer

Markus Unterwaditzer added the comment:

My last comment was in reference to getfullargspec, which is, as far as I 
understand, not going to be deprecated until after 3.7.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20438>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24711] Document getpass.getpass behavior on ^C

2015-07-24 Thread Markus Unterwaditzer

New submission from Markus Unterwaditzer:

getpass.getpass doesn't print a newline when the user aborts input with ^C, 
while input/raw_input does.

This behavior is surprising and can lead to mis-formatting of subsequent 
output. However, since this behavior has existed since 2.7 and applications may 
have started to rely on it, I'd add a note to the documentation.

--
assignee: docs@python
components: Documentation
messages: 247302
nosy: docs@python, untitaker
priority: normal
severity: normal
status: open
title: Document getpass.getpass behavior on ^C
versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue24711
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23266] Faster implementation to collapse non-consecutive ip-addresses

2015-01-20 Thread Markus

Markus added the comment:

My initial patch was wrong wrt. _find_address_range.
It did not loop over equal addresses.
That's why performance with many equal addresses was degraded when dropping 
the set().

Here is a patch to fix _find_address_range, drop the set, and improve 
performance again.

python3 -m timeit -s "import bipaddress; ips = [bipaddress.ip_address('2001:db8::1000') for i in range(1000)]" "bipaddress.collapse_addresses(ips)"
1000 loops, best of 3: 1.76 msec per loop

python3 -m timeit -s "import aipaddress; ips = [aipaddress.ip_address('2001:db8::1000') for i in range(1000)]" "aipaddress.collapse_addresses(ips)"
1000 loops, best of 3: 1.32 msec per loop

--
Added file: http://bugs.python.org/file37794/ipaddress_faster_collapse4.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23266
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23266] Faster implementation to collapse non-consecutive ip-addresses

2015-01-20 Thread Markus

Markus added the comment:

Eliminating duplicates before processing is faster once the overhead of the set 
operation is less than the time required to sort the larger dataset with 
duplicates.

So we are basically comparing sort(data) to sort(set(data)).
The optimum depends on the input data.

python3 -m timeit -s "import random; import bipaddress; ips = [bipaddress.ip_address('2001:db8::') + i for i in range(10)]; random.shuffle(ips)" "bipaddress.collapse_addresses(ips)"

10 loops, best of 3: 1.49 sec per loop
vs.
10 loops, best of 3: 1.59 sec per loop

If the data is pre-sorted, as is possible if you retrieve it from a database, 
things are drastically different:

python3 -m timeit -s "import random; import bipaddress; ips = [bipaddress.ip_address('2001:db8::') + i for i in range(10)]" "bipaddress.collapse_addresses(ips)"
10 loops, best of 3: 136 msec per loop
vs
10 loops, best of 3: 1.57 sec per loop

So for my use case, I basically have less than 0.1% duplicates (if at all), so 
dropping the set would be better, but ... other use cases will exist.

Still, it is easy to emulate the use of sorted(set()) from a user's 
perspective - just call collapse_addresses(set(data)) in case you expect 
duplicates, and experience a speedup by inserting unique, possibly even 
sorted, data.

On the other hand, if you have a huge load of 99.99% sorted non-collapsible 
addresses, it is not possible to drop the set() operation in your sorted(set()) 
from a user's perspective; there is no way to speed things up, and the slowdown 
you get is x10.

That said, I'd drop the set().
Optimization depends on the input data; dropping the set() allows the user to 
optimize based on the nature of his input data.
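For readers following along, a brief illustration of the function under 
discussion: consecutive addresses merge into one network, while non-consecutive 
"island" addresses (the slow case reported here) each remain a separate host 
network.

```python
import ipaddress

# Four consecutive addresses collapse into a single /30.
consecutive = [ipaddress.ip_address('192.0.2.0') + i for i in range(4)]
print(list(ipaddress.collapse_addresses(consecutive)))
# [IPv4Network('192.0.2.0/30')]

# Non-consecutive "island" addresses cannot be merged at all.
islands = [ipaddress.ip_address('192.0.2.0') + 2 * i for i in range(3)]
print(list(ipaddress.collapse_addresses(islands)))
# [IPv4Network('192.0.2.0/32'), IPv4Network('192.0.2.2/32'), IPv4Network('192.0.2.4/32')]
```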

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23266
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23266] Faster implementation to collapse non-consecutive ip-addresses

2015-01-18 Thread Markus

New submission from Markus:

I found the code used to collapse addresses to be very slow on a large number 
(64k) of island addresses which are not collapsible.

The code at
https://github.com/python/cpython/blob/0f164ccc85ff055a32d11ad00017eff768a79625/Lib/ipaddress.py#L349
was found to be guilty, especially the index lookup.
The patch changes the code to discard the index lookup and have 
_find_address_range return the number of items consumed.
That way the set operation to dedup the addresses can be dropped as well.

Numbers from the testrig I adapted from http://bugs.python.org/issue20826 with 
8k non-consecutive addresses:

Execution time: 0.6893927365541458 seconds
vs.
Execution time: 12.116527611762285 seconds


MfG
Markus Kötter

--
components: Library (Lib)
files: ipaddress_collapse_non_consecutive_performance.diff
keywords: patch
messages: 234239
nosy: cmn
priority: normal
severity: normal
status: open
title: Faster implementation to collapse non-consecutive ip-addresses
type: performance
versions: Python 3.5
Added file: 
http://bugs.python.org/file37762/ipaddress_collapse_non_consecutive_performance.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23266
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23266] Faster implementation to collapse non-consecutive ip-addresses

2015-01-18 Thread Markus

Markus added the comment:

I just signed the agreement, ewa@ is processing it.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23266
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23266] Faster implementation to collapse non-consecutive ip-addresses

2015-01-18 Thread Markus

Markus added the comment:

Added the testrig.

--
Added file: http://bugs.python.org/file37763/testrig.tar.gz

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23266
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22956] Improved support for prepared SQL statements

2015-01-03 Thread Markus Elfring

Markus Elfring added the comment:

Are you really against the benefits of reusing existing application 
programming interfaces for the explicit preparation and compilation of SQL 
statements?

It seems that other software contributors like Marc-Andre Lemburg and Tony 
Locke show more constructive opinions.
https://mail.python.org/pipermail/db-sig/2014-December/006133.html
https://www.mail-archive.com/db-sig@python.org/msg01829.html
http://article.gmane.org/gmane.comp.python.db/3784

--
resolution: rejected - later

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22956
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22956] Improved support for prepared SQL statements

2014-11-27 Thread Markus Elfring

New submission from Markus Elfring:

An interface for parameterised SQL statements (working with placeholders) is 
provided by the execute() method from the Cursor class at the moment.
https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute

I assume that the SQL Statement Object from the SQLite C interface is reused 
there already.
http://sqlite.org/c3ref/stmt.html

I imagine that it would occasionally be more efficient to also offer a base 
class like prepared_statement so that the parameter specification does not 
need to be parsed for every passed command.
I suggest improving the corresponding preparation and compilation 
possibilities.
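It may be worth noting that the sqlite3 module already keeps an internal 
per-connection statement cache, so repeated executions of the same 
parameterised SQL reuse the compiled statement object without re-parsing; the 
cache size can be tuned via the cached_statements parameter of connect(). A 
minimal sketch:

```python
import sqlite3

# The statement cache is per connection; 128 is already the default size and
# is spelled out here only to show where the knob lives.
con = sqlite3.connect(":memory:", cached_statements=128)
con.execute("CREATE TABLE t (x INTEGER)")

# The INSERT below is parsed and compiled once, then reused for every tuple.
con.executemany("INSERT INTO t (x) VALUES (?)", [(i,) for i in range(5)])

total, = con.execute("SELECT sum(x) FROM t").fetchone()
print(total)  # 10
con.close()
```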

--
components: Extension Modules
messages: 231759
nosy: elfring
priority: normal
severity: normal
status: open
title: Improved support for prepared SQL statements

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22956
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22957] Multi-index Containers Library

2014-11-27 Thread Markus Elfring

New submission from Markus Elfring:

I find a data structure like the one provided by the Boost Multi-index 
Containers Library interesting for efficient data processing.
http://www.boost.org/doc/libs/1_57_0/libs/multi_index/doc/index.html

What are the chances of integrating a class library with similar functionality 
into the Python software infrastructure?

--
components: Extension Modules
messages: 231761
nosy: elfring
priority: normal
severity: normal
status: open
title: Multi-index Containers Library
type: enhancement

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22957
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22956] Improved support for prepared SQL statements

2014-11-27 Thread Markus Elfring

Changes by Markus Elfring elfr...@users.sourceforge.net:


--
type:  - enhancement

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22956
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22328] ur'foo\d' raises SyntaxError

2014-09-02 Thread Markus Unterwaditzer

New submission from Markus Unterwaditzer:

The string literal `ur'foo\d'` causes a SyntaxError, while the equivalent 
`r'foo\d'` produces the correct string.
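The behaviour is easy to demonstrate with compile(): under Python 3 the combined 
"ur" prefix is rejected at compile time, while a plain raw string works as 
expected.

```python
# The 'ur' prefix combination is not accepted by the Python 3 tokenizer,
# even though 'u' (since 3.3) and 'r' are each valid on their own.
try:
    compile("ur'foo\\d'", '<test>', 'eval')
    ur_raises = False
except SyntaxError:
    ur_raises = True

print(ur_raises)          # True
print(eval("r'foo\\d'"))  # foo\d
```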

--
components: Interpreter Core
messages: 226281
nosy: untitaker
priority: normal
severity: normal
status: open
title: ur'foo\d' raises SyntaxError
type: behavior
versions: Python 3.1, Python 3.2, Python 3.3, Python 3.4

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22328
___



Elektra 0.8.7 improved Python support

2014-07-30 Thread Markus Raab
Hello list,

Elektra provides a universal and secure framework to store configuration 
parameters in a global, hierarchical key database. The core is a small 
library implemented in C. The plugin-based framework handles many 
configuration-related tasks, avoiding unnecessary code duplication across 
applications while keeping the core free of external dependencies. Elektra 
abstracts away cross-platform issues behind a consistent API and allows 
applications to be aware of other applications' configurations, enabling 
easy application integration.
http://www.libelektra.org

While the core is in C, both applications and (soon) plugins can be written in 
Python. The API is complete and fully functional, but not yet stable, so if 
you have any suggestions, please let us know.

Additionally the latest release added:
- python2 next to python3 bindings + lots of improvements there
- ini plugin
- 3 way merge
- tab completion
- for more details see:
http://sourceforge.net/p/registry/mailman/message/32655639/

You can download the release at:
 http://www.markus-raab.org/ftp/elektra/releases/elektra-0.8.7.tar.gz
 size: 1566800
 md5sum: 4996df62942791373b192c793d912b4c

Make sure to enable BUILD_SWIG and BUILD_SWIG_PYTHON2 or BUILD_SWIG_PYTHON3.

Docu (C/C++) can be found here:
 http://doc.libelektra.org/api/0.8.7/html/

Best regards,
Markus
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt

2014-06-25 Thread Markus Unterwaditzer

Markus Unterwaditzer added the comment:

Can this issue or #9205 be reopened, as this particular instance of the problem 
doesn't seem to be resolved? I still need the workaround from 
http://stackoverflow.com/a/1408476

--
nosy: +untitaker

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8296
___


