[Python-ideas] Re: Revise using the colon ( : )

2023-09-20 Thread Celelibi
2023-09-12 7:10 UTC+02:00, Dom Grigonis :
> I don’t think your example depicts the ambiguity well.
>
> 1. is the correct parsing of the Original1, 2. is the correct parsing of
> Original2 and 3. is not correct for either of those.
>
> Original1
>> while a < b (x := c) - 42
> Original2
>> while a < b(x := c) - 42

They're the same. AFAIK `foo(...)` and `foo (...)` are parsed exactly
the same way. A space in this place is irrelevant. And it's unlikely
to change in the next few decades.

>
> 1. In this case, it’s obviously this one
>> while a < b:
>>(x := c) - 42

Why though? Why would this one be more obvious than the other
alternatives? What would be the rules to decide? If I may quote
PEP 20:
> In the face of ambiguity, refuse the temptation to guess.

> 2. In here your removed the space, which is not ambiguity, but rather
> mistype
Again, the space in this place doesn't matter. I removed it to show
more explicitly how it could be parsed.

> 3. Same as in 2. But also, this is not an option, because if there was no
> indent on the next line, then body exists. So this is incorrect.
>> Or:
>> while a < b(x := c) - 42:
>># ???

I actually think this is the best parsing of the three: a SyntaxError.


Celelibi


[Python-ideas] Re: Revise using the colon ( : )

2023-09-11 Thread Celelibi
2023-09-05 18:26 UTC+02:00, Dom Grigonis :
> I like the idea of eliminating it if it was possible. This is one of my most
> common syntax errors (although mostly after function signature) and I am not
> C/C++ programmer. I am not sure about the core technical issues, but
> readability of code, such as:
> a = 1
> while a < 5: a += 1
> , would suffer:
> a = 1
> while a < 5 a += 1
> So IMO, it does improve readability in certain cases.
>
> Although, removing colon does sound like an idea worth thinking about. I
> might even dare to suggest disallowing body statement on the same line, but
> can’t do that for backwards compatibility ofc...
>
> So my point is, if anyone is going to give a thought about this, please add
> function signature to the list.
>

I just want to mention that without colons, one-liners could become ambiguous:
while a < b (x := c) - 42

Is it:
while a < b:
(x := c) - 42

Or:
while a < b(x := c):
-42

Or:
while a < b(x := c) - 42:
# ???

If anything, the colon would become optional but would definitely
remain in the grammar. Kinda like the semicolon: only used in
one-liners.

When teaching Python I noticed that the colon is mostly redundant for
parsing correctly indented code.
But I've never seen it as a burden and rarely, if ever, forgot one. It
also allows code editors to be very simple: you see a line ending with
a colon, you indent the next line. The keyword at the beginning of the
line doesn't matter.

Best regards,
Celelibi


[Python-ideas] Re: while(tt--)

2023-08-19 Thread Celelibi
2023-08-04 8:18 UTC+02:00, daniil.arashkev...@gmail.com
:
> Currently in Python we have construction like this:
>
> tt = 5
> while tt:
> # do something
> tt -= 1
>
> It would be great if in Python we have something like this:
> tt = 5
> while (tt--):
> # do something
>
> It is exists in C++. And in my opinion it is very Pythonic

Just as a reminder of C/C++: {pre,post}-{inc,dec}rement operators
bring their share of issues to the language, mostly the infamous
*undefined behaviors*. If you've done some C or C++, this very
expression might send shivers down your spine.

Assuming a = 0, what's the value of a after: a = a++ ?
What's the value of the expression (a++) + (++a) ?
What values are passed to the function with: f(a, a++, a) ?

I am not sure these would be welcome in Python. And allowing such
syntax wouldn't feel really Pythonic.

The worst part is not that the evaluation order is not always defined.
It's that even if it were, most people wouldn't remember the rules
since they're not obvious from the syntax. I'm not even sure most C
programmers know what a "sequence point" is.

Some of the backlash about the walrus operator was precisely that it
makes the order of evaluation much more important to know in order to
understand a program.
Assuming a = 0, what does this do: print(a, (a := a + 1), a) ?
At least Python makes it defined and obvious *when* the side effect
occurs... as long as you know that expressions are evaluated left to
right.
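
For what it's worth, checking that left-to-right rule interactively:

a = 0
print(a, (a := a + 1), a)  # prints: 0 1 1
# The first argument is evaluated before the walrus runs, so it's
# still 0; the last one is evaluated after and sees the updated value.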

Best regards,
Celelibi


[Python-ideas] Re: else without except

2023-08-07 Thread Celelibi
2023-08-01 21:46 UTC+02:00, Ronald Oussoren via Python-ideas
:
>
>
>> On 1 Aug 2023, at 19:59, Mitch  wrote:
>>
>> Is this a relevant argument (either way) here?
>
> Adding features to the language always has a cost in that it slightly
> complicates the language and makes it harder to teach. Because of that the
> bar for adding features is high.
>
> Showing how a new feature would improve realistic code patterns helps to
> defend to proposal.
>
> Ronald

To me it's a simplification of the language and its teaching because
it removes an exception (pun not intended).

A 'try' might be followed by several kinds of blocks:
- except with an exception class
- bare except
- else
- finally

If present, they must appear in that order, and they are all optional
except for the 'else', which currently requires a preceding 'except'
(either with an exception class or bare).
I think this could be made simpler by allowing 'else' without any 'except'.
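
Spelled out as a minimal sketch (the second form is what the proposal
would allow; it's a SyntaxError today):

# Valid today: 'else' is only allowed after at least one 'except'.
try:
    do_stuff()
except Exception:
    raise
else:
    no_error_occurred()
finally:
    cleanup()

# Proposed: the same thing without the do-nothing 'except'.
try:
    do_stuff()
else:
    no_error_occurred()
finally:
    cleanup()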

Celelibi


[Python-ideas] Re: else without except

2023-08-07 Thread Celelibi
2023-08-01 18:14 UTC+02:00, Chris Angelico :
> On Wed, 2 Aug 2023 at 02:02, Celelibi  wrote:
>> If later on an "except" is added, the developper doing the
>> modification should be reminded to move the call to
>> no_error_occurred() into an "else". With real-world non-trivial code,
>> it might not be so simple to see.
>>
>
> Can you give us an example of real-world non-trivial code that would
> benefit from this?

I guess Daniel Walker (OP) would be the best to answer that since he's
the one who came up with this need. I've rarely used 'else' with a
'try'.
Actually, now that I think about it, I guess a 'try' block without an
'except' and just a 'finally' would probably be a good candidate for a
context manager. (Which I'm a big fan of. ^^)

Here's what I think could be a not-so-far-fetched example: writing a
structured file through a serializing class. Conceptually, the code
could look like this:

myfile = MySerializer("foo.bin")
try:
    # Write data with myfile.write()
else:
    # Write footer with myfile.write()
finally:
    myfile.close()

You might want to keep it as a 'try' statement instead of a 'with'
because in the future you might want to handle some of the exceptions
generated by MySerializer but not those generated by the OS. (Although
I agree, you can have a 'try' in a 'with'.)

And in the real world, that code might look something like this.
It writes two blocks of data, each preceded by the data length and
followed by a checksum. And the footer is the checksum of the whole
data.

assert len(data) <= 256
myfile = MySerializer("foo.bin")
try:
    myfile.write(128)
    myfile.write(data[:128])
    myfile.write(crc32(data[:128]))
    myfile.write(len(data) - 128)
    myfile.write(data[128:])
    myfile.write(crc32(data[128:]))
    myfile.write(crc32(data))
finally:
    myfile.close()

I don't know about you, but I don't think it's obvious which 'write'
should be put in an 'else' block when an 'except' gets added. Even if
we added comments to identify the data blocks and footers, it's not
obvious that the exceptions that could be raised by writing the footer
shouldn't be captured by the future 'except'.

This example is not perfect, but here it is.
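
For illustration, here's roughly how I would want to split it if
'else' without 'except' were allowed (a sketch of the proposed syntax,
so not valid Python today):

assert len(data) <= 256
myfile = MySerializer("foo.bin")
try:
    # Data blocks: a future 'except' would cover these writes.
    myfile.write(128)
    myfile.write(data[:128])
    myfile.write(crc32(data[:128]))
    myfile.write(len(data) - 128)
    myfile.write(data[128:])
    myfile.write(crc32(data[128:]))
else:
    # Footer: runs only if the data blocks were written successfully,
    # and a future 'except' would not swallow exceptions raised here.
    myfile.write(crc32(data))
finally:
    myfile.close()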


Celelibi


[Python-ideas] Re: else without except

2023-08-01 Thread Celelibi
2023-06-30 19:54 UTC+02:00, MRAB :
> On 2023-06-30 14:55, Daniel Walker wrote:
>> As most of you probably know, you can use else with try blocks:
>>
>> try:
>>  do_stuff()
>> except SomeExceptionClass:
>> handle_error()
>> else:
>> no_error_occurred()
>>
>> Here, no_error_occurred will only be called if do_stuff() didn't raise
>> an exception.
>>
>> However, the following is invalid syntax:
>>
>> try:
>> do_stuff()
>> else:
>> no_error_occurred()
>>
>> Now you might say that this isn't needed as you can achieve the same
>> result with
>>
>> do_stuff()
>> no_error_occurred()
>>
>> However, what if I want to use finally as well:
>>
>> try:
>> do_stuff()
>> else:
>> no_error_occurred()
>> finally:
>> cleanup()
>>
>> and I need no_error_occurred to be called before cleanup?  For my actual
>> use case, I've done
>>
>> try:
>> do_stuff()
>> except Exception:
>>  raise
>> else:
>> no_error_occurred()
>> finally:
>> cleanup()
>>
>> This seems very non-Pythonic.  Is there a reason why else without except
>> has to be invalid syntax?
>>
> What would be the difference between
>
>  try:
> do_stuff()
>  else:
> no_error_occurred()
>  finally:
> cleanup()
>
> and
>
>  try:
> do_stuff()
> no_error_occurred()
>  finally:
> cleanup()
>
> ?
>

One could argue that it's conceptually not the same although the
actual execution is the same.

In the first case you intend to only catch the exceptions generated by
do_stuff().
In the second case, you intend to catch those generated by do_stuff()
and those generated by no_error_occurred().

If an "except" is added later on, the developer doing the
modification should be reminded to move the call to
no_error_occurred() into an "else". With real-world non-trivial code,
it might not be so simple to see.

I'm not an expert, but I actually see no downside to having an "else"
without an "except".
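
To make that concrete, here's a small sketch (same hypothetical
do_stuff/no_error_occurred names as above) of what changes once an
"except" is added:

# With 'else': the new 'except' only covers do_stuff().
try:
    do_stuff()
except SomeExceptionClass:
    handle_error()
else:
    no_error_occurred()   # exceptions raised here are NOT caught above
finally:
    cleanup()

# Without 'else': the same 'except' now also covers no_error_occurred().
try:
    do_stuff()
    no_error_occurred()   # exceptions raised here ARE caught above
except SomeExceptionClass:
    handle_error()
finally:
    cleanup()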


Best regards,
Celelibi


[Python-ideas] Re: [Python-Dev] Making asyncio more thread-friendly

2020-06-08 Thread Celelibi
On Sun, Jun 07, 2020 at 03:47:05PM +0900, Stephen J. Turnbull wrote:
> Hi, Celelibi,
> 
> Welcome to Python Ideas.
> 
> Python Dev is more for discussions of implementations of proposed
> features, typically clearly on their way to an accepted pull request
> into master.  Python-Ideas is a better place for a request for
> enhancement without an implementation patch, so I'm moving it here.

If it's just that, I can probably provide a patch for
run_until_complete.

>  > This makes it difficult to make threads and asyncio coexist in a
>  > single application
> 
> True.  Do you know of a programming environment that makes it easy
> that we can study?
> 

Unfortunately no.
But that doesn't mean we can't try to design asyncio to be more
thread-friendly.

>  > Unless I'm mistaken, there's no obvious way to have a function run a
>  > coroutine in a given event loop, no matter if the loop in running or
>  > not, no matter if it's running in the current context or not.
>  > There are at least 3 cases that I can see:
>  > 1) The loop isn't running -> run_until_complete is the solution.
>  > 2) The loop is running in another context -> we might use
>  >call_soon_threadsafe and wait for the result.
>  > 3) The loop is running in the current context but we're not in a
>  >coroutine -> there's currently not much we can do without some major
>  >code change.
>  > 
>  > What I'd like to propose is that run_until_complete handle all three
>  > cases.
> 
> I don't see how the three can be combined safely and generically.  I
> can imagine approaches that might work in a specific application, but
> the general problem is going to be very hard.  async programming is
> awesome, and may be far easier to get 100% right than threading in
> some applications, but it has its pitfall too, as Nathaniel Smith
> points out:
> https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/
> 

I'm not sure what's hard about combining the three cases.
The distinction between the three cases can be made easily with
loop.is_running() and comparing asyncio.get_event_loop() to self.

Implementing the third case can be a bit tricky but should be just a
matter of making sure the loop itself behaves nicely with its nested and
parent instances. With a few important notes:
- Infinite recursion is easily possible, but it can be mitigated by:
  - Transitively giving a higher priority to tasks the current one is
    awaiting on. (Likely non-trivial to implement.)
  - Emitting a warning when the recursion depth goes beyond a given
    threshold.
- This case as a whole would also likely deserve a warning if it's
  called directly from a coroutine.


>  > Did I miss an obvious way to make the migration from threads to asyncio
>  > easier?
> 
> Depends on what your threads are doing.  If I were in that situation,
> I'd probably try to decouple the thread code from the async code by
> having the threads feed one or more queues into the async code base,
> and vice versa.
> 
>  > It becomes harder and harder to not believe that python is purposfully
>  > trying to make this kind of migration more painful.
> 
> Again, it would help a lot if you had an example of known working APIs
> we could implement, and/or more detail about your code's architecture
> to suggest workarounds.
> 

That's the thing. I don't know all the threads nor all the machinery in
the application. If I did, I could probably make stronger assumptions.
The application connects to several servers and relays messages between
them. There's at least one thread per connection and one that
processes commands.

I'm working on switching out a thread-based client protocol lib for an
asyncio-based lib instead. To make the switch mostly invisible, I run
an event loop in its own thread.

When the server-connection-thread wants to send a message, it calls the
usual method, which I modified to use call_soon_threadsafe to run a
coroutine in the event loop and wait for the result.
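
As a rough sketch of that bridging pattern (simplified names, not the
actual application code; the standard-library helper for this is
run_coroutine_threadsafe):

import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def asend(msg):
    # Stand-in for the asyncio-based library call.
    return len(msg)

def send(msg):
    # Called from any thread *other* than the loop's: schedule the
    # coroutine on the loop and block until its result is available.
    future = asyncio.run_coroutine_threadsafe(asend(msg), loop)
    return future.result()

print(send("PONG"))  # 4

Calling send() from the loop's own thread would just dead-lock, which
is exactly the problem described next.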

When an async callback is triggered by the lib, I just run the same old
code which calls the whole machinery I do not control. By chance, it
never ends up wanting to call a coroutine. Therefore no need for a
recursive call to the event loop... apparently...
But if it did, I'd end up calling call_soon_threadsafe from the thread
that runs the event loop and waiting for the result, which would just
lock up. And that's what bugs me the most. To work around that, I would
probably have to create a thread for the whole callback so that the
event loop can get control back immediately. (And possibly create more
threads. With all the bad things this entails.)

Another issue I found is that the client protocol lib class isn't always
instantiated by the same thread, and its constructo

[Python-ideas] asyncio: recursive event loops

2020-06-04 Thread Celelibi
Hello world,

I've recently been trying to switch from a library using gevent to a
library using asyncio, and to do so without contaminating the whole
code with the "async" keyword in one go.

As expected, calling the synchronous code from a coroutine is pretty
straightforward. But the other way around is a bit more tricky.
If the function is not running in the thread that runs the event loop,
I can use something like this:

asyncio.run_coroutine_threadsafe(...).result()

Which seems to work, despite the documentation saying that "If the
future's result isn't yet available, raises InvalidStateError.".

But if the function that wants to call a coroutine is in the thread
that runs the event loop, then there's not much I can do to run a
coroutine in the event loop that indirectly called that function.

As a somewhat more concrete example, let's consider a class that sends
and receives messages:

import asyncio

class Client:
    async def on_msg(self, msg):
        process(msg)

    async def asend(self, msg):
        print("async sending", msg)
        return len(msg)

    def send(self, msg):
        print("sending", msg)
        # return await self.asend(msg)

    async def run(self):
        await self.on_msg("PING")

def process(msg):
    # Some legacy code that indirectly ends up wanting to call:
    print("Received", msg)
    print(client.send("PONG"))

client = Client()
asyncio.run(client.run())


Unless I missed something, it's pretty difficult to have `Client.send`
"await" on `Client.asend` without contaminating the whole code with
"async" and "await".
I only see two solutions right now.
Either have `Client.on_msg` call `process` in a new thread and then
use the trick above. But running `process` in a separate thread might
have some consequences since it doesn't expect that.
Or have `Client.send` start a new event loop in a new thread, which
might behave surprisingly when sharing file descriptors with the
existing loop.


What I'm proposing is to change a bit the behavior of
`loop.run_until_complete` as follows:
- If the loop isn't running and there's no loop in the current
context, keep the current behavior.
- If the loop is running, but not in the current context, use the
`run_coroutine_threadsafe` trick above.
- If the loop is running in the current context, then loop until the
coroutine is done.

Additionally, `asyncio.run` could be changed similarly.
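
A rough sketch of that dispatch (helper name and error handling are
mine; the third branch is exactly the missing piece this proposal is
about):

import asyncio

def run_in_loop(loop, coro):
    if not loop.is_running():
        # Case 1: nothing is driving the loop; current behaviour.
        return loop.run_until_complete(coro)

    try:
        current = asyncio.get_running_loop()
    except RuntimeError:
        current = None

    if current is not loop:
        # Case 2: the loop runs in another thread; schedule and block.
        return asyncio.run_coroutine_threadsafe(coro, loop).result()

    # Case 3: we're called from within the loop's own execution (e.g. a
    # synchronous callback). The proposal is to drive the loop here
    # until `coro` completes; asyncio currently has no supported way
    # to do this.
    raise NotImplementedError("recursive run_until_complete")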

There's of course a strong caveat that it's easy to create an infinite
recursion with this. But I think it's worth it, as it would allow a
much easier integration of asyncio into existing code, something that
(I think) currently limits its adoption.

What do you think about it?

Best regards


[Python-ideas] hybrid regex engine: backtracking + Thompson NFA

2018-11-25 Thread Celelibi
Hello,

I found this topic discussed on the python-dev ML back in 2010 [1].
I'm bringing it up 8 years later with a variation.

In short: the article [2] highlights that backtracking-based regex
engines (like SRE in Python) have pathological cases that run in time
exponential in the input length, while they could run in linear time.
Not mentioned by the article is that even quadratic time can be a
problem with large inputs, which happens pretty often when you're
looking for delimited stuff like this:
re.match(r'.*?,(.*),,', "a,"*1)
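
A rough way to see it (my own snippet with an explicit size parameter;
the timings obviously depend on the machine and the re implementation):

import re
import timeit

pattern = re.compile(r'.*?,(.*),,')
for n in (2000, 4000, 8000):
    text = "a," * n   # no ",," anywhere, so the match has to fail
    print(n, timeit.timeit(lambda: pattern.match(text), number=1))
# On a backtracking engine, doubling n roughly quadruples the time.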

Of course, there's a catch. Backreferences most likely cannot be
implemented in guaranteed linear time, and some cases of look-behind
assertions might prove difficult as well. But other features like
alternatives (foo|bar) and repetitions (.*) are no problem.

The general idea of Thompson's algorithm is that the simulation of
the NFA is basically in all the reachable states at the same time
while parsing the input string character by character. Of course, for
regex engines that use a VM (like Python's SRE) you can also make the
execution of the equivalent byte-code be in several states at once.
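
As a toy illustration of that set-of-states idea (my own sketch, with
the NFA for (a|b)*abb built by hand rather than compiled from the
regex):

# NFA transitions: state -> {symbol: set of next states};
# None marks epsilon (empty) moves.
NFA = {
    0: {None: {1, 3}},        # start: loop on (a|b) or begin "abb"
    1: {'a': {2}, 'b': {2}},  # one (a|b)
    2: {None: {0}},           # back to the start of the loop
    3: {'a': {4}},            # "a"
    4: {'b': {5}},            # "ab"
    5: {'b': {6}},            # "abb"
    6: {},                    # accepting state
}
ACCEPT = {6}

def epsilon_closure(states):
    # All states reachable from `states` through epsilon transitions.
    stack, closure = list(states), set(states)
    while stack:
        for t in NFA[stack.pop()].get(None, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def nfa_match(text):
    current = epsilon_closure({0})
    for ch in text:
        # One pass per character: advance every live state at once,
        # never backtrack.
        current = epsilon_closure(
            {t for s in current for t in NFA[s].get(ch, ())}
        )
    return bool(current & ACCEPT)

print(nfa_match("abababb"))  # True
print(nfa_match("ababab"))   # False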

The 2010 discussion seems to be about having two separate engines and
selecting the best one for a given regex. What I'm proposing here is
to resort to backtracking only for the regex features that need it.
Meaning that within a regex like r'(.*),.* .*,\1' the evaluation of
the middle part would use Thompson's algorithm, while the \1 could
trigger the backtracking mechanism if the string doesn't match.

What do you think about it?
Has this already been discussed and rejected?
Is it just a matter of showing the code? (_sre.c seems... non-trivial)
Has this already been judged not worth the maintenance effort?

AFAIK, there aren't many hybrid regex engines out there. But one
notable implementation is the third version of Henry Spencer's regex
engine [3]. He doesn't seem to have documented it publicly, but
PostgreSQL has done a pretty good job at reverse-engineering a
high-level view of it [4]. I'm still unsure how the backtracking of
some parts of the regex interacts with the multi-state evaluation of
the other parts. But at least it exists and works, so it is feasible.


Best regards,
Celelibi

[1] https://mail.python.org/pipermail/python-dev/2010-March/098354.html
[2] https://swtch.com/~rsc/regexp/regexp1.html
[3] https://github.com/garyhouston/hsrex
[4] https://github.com/postgres/postgres/tree/master/src/backend/regex


[Python-ideas] Make threading.Event support test-and-set

2016-11-26 Thread Celelibi
Hello,

I have a case of a multithreaded program where, under some condition, I
would want to test the flag of the Event object and clear it atomically.
This would allow me to know which thread has actually cleared the flag
and thus should perform some more work before setting it again.

The isSet method is of course not good enough because it's subject to
race conditions. (Not even sure the GIL would make it safe.)

The changes to threading.py would be pretty trivial:

@@ -515,8 +515,10 @@
 
         """
         with self._cond:
+            prevflag = self._flag
             self._flag = True
             self._cond.notify_all()
+            return prevflag
 
     def clear(self):
         """Reset the internal flag to false.
@@ -526,7 +528,9 @@
 
         """
         with self._cond:
+            prevflag = self._flag
             self._flag = False
+            return prevflag
 
     def wait(self, timeout=None):
         """Block until the internal flag is true.


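The way I'd use it would be something like this (hypothetical usage,
assuming the patch above; do_extra_work() is just a placeholder):

if event.clear():
    # The flag was set and *this* thread is the one that cleared it,
    # so it gets to do the extra work before setting the flag again.
    do_extra_work()
    event.set()
else:
    # Some other thread already cleared the flag; nothing to do here.
    pass
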

What do you think about it?


Best regards,
Celelibi