Data model and attribute resolution in subclasses

2020-02-27 Thread Adam Preble
I have been making some progress on my custom interpreter project, but I found 
I had totally botched implementing proper subclassing in the data model. What I 
have right now is PyClass defining what a PyObject is. When I make a PyObject 
from a PyClass, the PyObject sets up a __dict__ that is used for attribute 
lookup. When I realized I needed to worry about looking up attributes from 
parent namespaces, this fell apart, because my PyClass had no real notion of a 
namespace.

I'm looking at the Python data model for inspiration. While I don't have to 
implement the full specification, it helps me where I don't have an 
alternative. However, the data model is definitely a programmer document; it's 
one of those things where the prose is very precise in what it's saying, and 
that can foil a casual reading.

Here's what I think is supposed to exist:
1. PyObject is the base.
2. It has an "internal dictionary." This isn't exposed as __dict__.
3. PyClass subclasses PyObject.
4. PyClass has a __dict__.
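
To make that concrete, here is a hypothetical skeleton of the two types as I 
currently picture them (the names and fields are mine, not CPython's):

    class PyObject:
        def __init__(self, py_class=None):
            self.py_class = py_class
            self.internal_dict = {}  # per-instance storage; not exposed as __dict__

    class PyClass(PyObject):
        def __init__(self, name, base=None):
            super().__init__()
            self.name = name
            self.base = base         # single inheritance only, for now
            self.dict = {}           # the class namespace, i.e. __dict__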

Is there a term for PyObject's internal dictionary? It wasn't called __dict__, 
and I think that's for good reasons. I guess the idea is that a PyObject 
doesn't have a namespace, but a PyClass does (?).

Now to look something up. I assume that __getattribute__ is supposed to do 
something like:
1. The PyClass __dict__ for the given PyObject is consulted.
2. The implementation of __getattribute__ for the PyObject will default to 
looking into the "internal dictionary."
3. Assuming the attribute is not found, the base classes are then consulted 
using their __getattribute__ calls. We might recurse on this. There's probably 
some trivia here regarding multiple inheritance; I'm not entirely concerned 
(yet).
4. Assuming it's never found, the user sees an AttributeError.
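
As a sanity check, here is a rough sketch of that lookup over the hypothetical 
types above (it folds the steps into one walk; this is just my mental model, 
not CPython's actual algorithm, which also involves descriptors):

    def getattribute(obj, name):
        # Check the instance's own internal dictionary first.
        if name in obj.internal_dict:
            return obj.internal_dict[name]
        # Then walk the class and its base classes in order.
        klass = obj.py_class
        while klass is not None:
            if name in klass.dict:
                return klass.dict[name]
            klass = klass.base
        # Never found: the user sees an AttributeError.
        raise AttributeError(name)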

Would each of these failed lookups result in an AttributeError? I don't know 
how much it matters right now that I implement it exactly like that, but I was 
curious whether that's really how it goes under the hood.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Mental model of lookahead assertions

2020-02-27 Thread Python

Stefan Ram wrote:

   One can count overlapping occurrences as follows.

|>>> from re import findall
|>>> print(len(findall('(?=aa)','aaaa')))
|3

   Every web page says that lookahead assertions do
   not consume nor move the "current position".

   But what mental model can I make of the regex
   engine that explains why it is not caught in an
   endless loop matching "aa" at the same position
   again and again and never advancing to the other
   occurrences?

   (Something with nondeterminism?)


Old but EXCELLENT insight into how regular expressions work:

https://perl.plover.com/yak/regex/

(pdf and html slides)
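
As for the mental model: the engine attempts a match at each starting index, 
and after a zero-width (empty) match the scanner advances the position by one 
before trying again, which is what prevents the endless loop. A quick 
demonstration of the three distinct positions:

    import re

    # The lookahead consumes nothing, but each match is anchored at a
    # different index because the scanner bumps past empty matches.
    for m in re.finditer(r'(?=aa)', 'aaaa'):
        print(m.start())  # prints 0, then 1, then 2
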
--
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio question (rmlibre)

2020-02-27 Thread rmlibre
What resources are you trying to conserve? 


If you want to conserve time, you shouldn't have to worry about starting too 
many background tasks. That's because asyncio was designed to be extremely 
time efficient at handling large numbers of concurrent tasks.

For your application, starting background tasks that each await work from 
their designated queue seems like a good idea. This is time efficient, since 
it takes full advantage of async concurrency while also letting you control 
the order of execution.

There may be other efficiency boosts to be had as well, for instance, running 
everything concurrently except the precise changes that need to be atomic.
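
For instance, a minimal sketch of that per-session worker pattern (the names 
here are illustrative, not from your code):

    import asyncio

    async def handle_request(request):
        await asyncio.sleep(0)  # stand-in for the real handler
        print("handled", request)

    async def session_worker(queue):
        # One long-lived task per session: its requests are processed
        # strictly in arrival order, so each one is effectively atomic.
        while True:
            request = await queue.get()
            try:
                await handle_request(request)
            finally:
                queue.task_done()

    async def main():
        queue = asyncio.Queue()
        worker = asyncio.create_task(session_worker(queue))
        for i in range(3):
            await queue.put(i)
        await queue.join()  # wait for queued requests to finish
        worker.cancel()     # session closed; tear the worker down

    asyncio.run(main())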


However, if you want to conserve CPU cycles per unit time, then processing 
requests sequentially is the best option, although there's little need for 
async code in that case.


Or, if you'd like to conserve memory, making the code more generator-based is 
a good option. Lazy computation is quite efficient in both memory and time. 
However, rewriting your codebase to run on generators can be a lot of work, 
and the gains won't really be felt unless your code is handling "big data" or 
very large requests.
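
For example, a tiny sketch of the generator style (the transform is a 
stand-in):

    def expensive_transform(item):
        return item * 2  # stand-in for real per-item work

    def process(items):
        # Yields one result at a time instead of materialising the whole
        # result list, so memory use stays flat regardless of input size.
        for item in items:
            yield expensive_transform(item)

    for result in process(range(5)):  # computed lazily, on demand
        print(result)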


In any case, you'd probably want to run benchmarking and profiling tools 
against a mock-up of your code, and optimize or experiment only after you've 
confirmed there's an efficiency problem and deduced its causes. Barring that, 
it's just guesswork and may be a waste of time.
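
For a first look, the standard library's cProfile is enough (a minimal 
sketch; main() stands in for your workload):

    import cProfile

    def main():
        sum(i * i for i in range(100_000))  # stand-in for your workload

    # Sort by cumulative time to see where the time actually goes.
    cProfile.run("main()", sort="cumulative")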




On 2020-02-21 17:00, python-list-requ...@python.org wrote:
> Hi all

> I use asyncio in my project, and it works very well without my having to 
> understand what goes on under the hood. It is a multi-user client/server 
> system, and I want it to scale to many concurrent users. I have a situation 
> where I have to decide between two approaches, and I want to choose the least 
> resource-intensive, but I find it hard to reason about which, if either, is 
> better.
>
> I use HTTP. On the initial connection from a client, I set up a session 
> object, and the session id is passed to the client. All subsequent requests 
> from that client include the session id, and the request is passed to the 
> session object for handling.
>
> It is possible for a new request to be received from a client before the 
> previous one has been completed, and I want each request to be handled 
> atomically, so each session maintains its own asyncio.Queue(). The main 
> routine gets the session id from the request and 'puts' the request in the 
> appropriate queue. The session object 'gets' from the queue and handles the 
> request. It works well.
>
> The question is, how to arrange for each session to 'await' its queue. My 
> first attempt was to create a background task for each session which runs for 
> the life-time of the session, and 'awaits' its queue. It works, but I was 
> concerned about having a lot of background tasks active at the same time.
>
> Then I came up with what I thought was a better idea. On the initial 
> connection, I create the session object, send the response to the client, and 
> then 'await' the method that sets up the session's queue. This also works, 
> and there is no background task involved. However, I then realised that the 
> initial response handler never completes, and will 'await' until the session 
> is closed.
>
> Is this better, worse, or does it make no difference? If it makes no 
> difference, I will lean towards the first approach, as it is easier to reason 
> about what is going on.
>
> Thanks for any advice.
>
> Frank Millman
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Logging all the requests into a specific file

2020-02-27 Thread DL Neil via Python-list

On 28/02/20 9:29 AM, valon.januza...@gmail.com wrote:

I am new to python and all of this. I am using FastAPI to build an API, and I 
want it so that when users hit any endpoint, for example /products, the hit 
is written to a file. How do I do it?


The Python Standard Library offers a logging library.
It has "handlers" to decide where each message goes, "filters" and levels to 
determine which messages get through, and "formatters" to organise the output.
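
A minimal sketch with just the standard library (the file name and helper are 
illustrative; call it from your endpoint handlers or from FastAPI middleware):

    import logging

    # Route every record for this logger to requests.log, one line each.
    logger = logging.getLogger("requests")
    logger.setLevel(logging.INFO)

    handler = logging.FileHandler("requests.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)

    def log_hit(endpoint):
        logger.info("endpoint hit: %s", endpoint)

    log_hit("/products")  # appends a timestamped line to requests.log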


--
Regards =dn
--
https://mail.python.org/mailman/listinfo/python-list


Logging all the requests into a specific file

2020-02-27 Thread valon . januzaj98
Hello guys,

I am new to python and all of this. I am using FastAPI to build an API, and I 
want it so that when users hit any endpoint, for example /products, the hit 
is written to a file. How do I do it?
-- 
https://mail.python.org/mailman/listinfo/python-list


Managing concurrent.futures exit-handlers

2020-02-27 Thread Remy NOEL
Hello !

I am currently using concurrent.futures' ThreadPoolExecutor, but I am annoyed 
by its exit handler preventing program exit if any of the jobs it is running 
is blocked.

Currently I can work around it either by unregistering the exit handler 
concurrent.futures.thread._python_exit, or by subclassing ThreadPoolExecutor 
and overriding the _adjust_thread_count method with one that does not 
register its threads' queues.
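
For reference, a minimal sketch of the first workaround (3.8-era; 
_python_exit is a private internal registered with atexit, so this may break 
on other versions):

    import atexit
    import concurrent.futures.thread
    from concurrent.futures import ThreadPoolExecutor

    # Unhook the module's exit handler so a blocked worker can no longer
    # keep the interpreter alive at shutdown. _python_exit is private and
    # its registration moved in later Pythons, so treat this as fragile.
    atexit.unregister(concurrent.futures.thread._python_exit)

    executor = ThreadPoolExecutor(max_workers=4)
    executor.submit(print, "work still runs normally").result()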

Both seem kinda ugly though.

I was wondering if there was a better way.
Also, would adding an option to executors so that their worker threads are 
not globally joined at exit be conceivable?

Thanks !

Remy Noel
-- 
https://mail.python.org/mailman/listinfo/python-list