Re: Using a background thread with asyncio/futures with flask

2024-03-24 Thread Frank Millman via Python-list

On 2024-03-23 3:25 PM, Frank Millman via Python-list wrote:



It is not pretty! call_soon_threadsafe() is a loop function, but the 
loop is not accessible from a different thread. Therefore I include a 
reference to the loop in the message passed to in_queue, which in turn 
passes it to out_queue.




I found that you can retrieve the loop from the future using 
future.get_loop(), so the above is not necessary.
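
For illustration, a minimal runnable sketch of that simplification (the queue and thread setup here is a boiled-down assumption, not a copy of the final code): because `Future.get_loop()` returns the loop the future belongs to, only the future itself needs to travel through the queue.

```python
import asyncio
import threading
from queue import Queue

out_queue = Queue()

def finalizer():
    # runs in a plain thread; recover the loop from the future itself
    future = out_queue.get()
    future.get_loop().call_soon_threadsafe(future.set_result, "completed")

async def main():
    future = asyncio.get_running_loop().create_future()
    threading.Thread(target=finalizer, daemon=True).start()
    out_queue.put(future)
    return await future

if __name__ == "__main__":
    print(asyncio.run(main()))  # completed
```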


Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: Using a background thread with asyncio/futures with flask

2024-03-23 Thread Frank Millman via Python-list

On 2024-03-22 12:08 PM, Thomas Nyberg via Python-list wrote:

Hi,

Yeah, so flask does support async (when installed with `pip3 install 
flask[async]`), but you are making a good point that flask in this case 
is a distraction. Here's an example using just the standard library that 
exhibits the same issue:


`app.py`
```
import asyncio
import threading
import time
from queue import Queue


in_queue = Queue()
out_queue = Queue()


def worker():
    print("worker started running")
    while True:
        future = in_queue.get()
        print(f"worker got future: {future}")
        time.sleep(5)
        print("worker sleeped")
        out_queue.put(future)


def finalizer():
    print("finalizer started running")
    while True:
        future = out_queue.get()
        print(f"finalizer got future: {future}")
        future.set_result("completed")
        print("finalizer set result")


threading.Thread(target=worker).start()
threading.Thread(target=finalizer).start()


async def main():
    future = asyncio.get_event_loop().create_future()
    in_queue.put(future)
    print(f"main put future: {future}")
    result = await future
    print(result)


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
```

If I run that I see the following printed out (after which it just hangs):

```
(output elided in the original message)
```
Combining Dieter's and Mark's ideas, here is a version that works.

It is not pretty! call_soon_threadsafe() is a loop function, but the 
loop is not accessible from a different thread. Therefore I include a 
reference to the loop in the message passed to in_queue, which in turn 
passes it to out_queue.


Frank

===

import asyncio
import threading
import time
from queue import Queue


in_queue = Queue()
out_queue = Queue()


def worker():
    print("worker started running")
    while True:
        loop, future = in_queue.get()
        print(f"worker got future: {future}")
        time.sleep(5)
        print("worker sleeped")
        out_queue.put((loop, future))


def finalizer():
    print("finalizer started running")
    while True:
        loop, future = out_queue.get()
        print(f"finalizer got future: {future}")
        loop.call_soon_threadsafe(future.set_result, "completed")
        print("finalizer set result")


threading.Thread(target=worker, daemon=True).start()
threading.Thread(target=finalizer, daemon=True).start()


async def main():
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    in_queue.put((loop, future))
    print(f"main put future: {future}")
    result = await future
    print(result)


if __name__ == "__main__":
    # loop = asyncio.get_event_loop()
    # loop.run_until_complete(main())
    asyncio.run(main())



Re: Using a background thread with asyncio/futures with flask

2024-03-22 Thread Frank Millman via Python-list

On 2024-03-22 1:23 PM, Frank Millman via Python-list wrote:

On 2024-03-22 12:09 PM, Frank Millman via Python-list wrote:


I am no expert. However, I do have something similar in my app, and it 
works.


I do not use 'await future', I use 'asyncio.wait_for(future)'.



I tested it and it did not work.

I am not sure, but I think the problem is that you have a mixture of 
blocking and non-blocking functions.


Here is a version that works. However, it is a bit different, so I don't 
know if it fits your use case.


I have replaced the threads with background asyncio tasks.

I have replaced instances of queue.Queue with asyncio.Queue.

Frank

===

import asyncio

in_queue = asyncio.Queue()
out_queue = asyncio.Queue()

async def worker():
    print("worker started running")
    while True:
        future = await in_queue.get()
        print(f"worker got future: {future}")
        await asyncio.sleep(5)
        print("worker sleeped")
        await out_queue.put(future)

async def finalizer():
    print("finalizer started running")
    while True:
        future = await out_queue.get()
        print(f"finalizer got future: {future}")
        future.set_result("completed")
        print("finalizer set result")

async def main():
    asyncio.create_task(worker())  # start a background task
    asyncio.create_task(finalizer())  # ditto
    future = asyncio.get_event_loop().create_future()
    await in_queue.put(future)
    print(f"main put future: {future}")
    result = await asyncio.wait_for(future, timeout=None)
    print(result)

if __name__ == "__main__":
    # loop = asyncio.get_event_loop()
    # loop.run_until_complete(main())

    # this is the preferred way to start an asyncio app
    asyncio.run(main())




One more point.

If I change 'await asyncio.wait_for(future, timeout=None)' back to your 
original 'await future', it still works.




Re: Using a background thread with asyncio/futures with flask

2024-03-22 Thread Frank Millman via Python-list

On 2024-03-22 12:09 PM, Frank Millman via Python-list wrote:


I am no expert. However, I do have something similar in my app, and it 
works.


I do not use 'await future', I use 'asyncio.wait_for(future)'.



I tested it and it did not work.

I am not sure, but I think the problem is that you have a mixture of 
blocking and non-blocking functions.


Here is a version that works. However, it is a bit different, so I don't 
know if it fits your use case.


I have replaced the threads with background asyncio tasks.

I have replaced instances of queue.Queue with asyncio.Queue.

Frank

===

import asyncio

in_queue = asyncio.Queue()
out_queue = asyncio.Queue()

async def worker():
    print("worker started running")
    while True:
        future = await in_queue.get()
        print(f"worker got future: {future}")
        await asyncio.sleep(5)
        print("worker sleeped")
        await out_queue.put(future)

async def finalizer():
    print("finalizer started running")
    while True:
        future = await out_queue.get()
        print(f"finalizer got future: {future}")
        future.set_result("completed")
        print("finalizer set result")

async def main():
    asyncio.create_task(worker())  # start a background task
    asyncio.create_task(finalizer())  # ditto
    future = asyncio.get_event_loop().create_future()
    await in_queue.put(future)
    print(f"main put future: {future}")
    result = await asyncio.wait_for(future, timeout=None)
    print(result)

if __name__ == "__main__":
    # loop = asyncio.get_event_loop()
    # loop.run_until_complete(main())

    # this is the preferred way to start an asyncio app
    asyncio.run(main())
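
As an aside, on Python 3.9+ the blocking work could also stay in a real worker thread without any hand-rolled queues, using asyncio.to_thread. A minimal sketch (the 5-second sleep is shortened here, and blocking_work is an invented stand-in for the worker):

```python
import asyncio
import time

def blocking_work():
    time.sleep(0.1)  # stands in for the worker's 5-second sleep
    return "completed"

async def main():
    # to_thread runs the blocking function in a thread pool and
    # awaits its result without blocking the event loop
    return await asyncio.to_thread(blocking_work)

if __name__ == "__main__":
    print(asyncio.run(main()))  # completed
```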




Re: Using a background thread with asyncio/futures with flask

2024-03-22 Thread Frank Millman via Python-list

On 2024-03-20 10:22 AM, Thomas Nyberg via Python-list wrote:


Hello,

I have a simple (and not working) example of what I'm trying to do. This 
is a simplified version of what I'm trying to achieve (obviously the 
background workers and finalizer functions will do more later):


`app.py`

```
import asyncio
import threading
import time
from queue import Queue

from flask import Flask

in_queue = Queue()
out_queue = Queue()


def worker():
    print("worker started running")
    while True:
        future = in_queue.get()
        print(f"worker got future: {future}")
        time.sleep(5)
        print("worker sleeped")
        out_queue.put(future)


def finalizer():
    print("finalizer started running")
    while True:
        future = out_queue.get()
        print(f"finalizer got future: {future}")
        future.set_result("completed")
        print("finalizer set result")


threading.Thread(target=worker, daemon=True).start()
threading.Thread(target=finalizer, daemon=True).start()

app = Flask(__name__)


@app.route("/")
async def root():
    future = asyncio.get_event_loop().create_future()
    in_queue.put(future)
    print(f"root put future: {future}")
    result = await future
    return result


if __name__ == "__main__":
    app.run()
```

If I start up that server, and execute `curl http://localhost:5000`, it 
prints out the following in the server before hanging:


```
$ python3 app.py
worker started running
finalizer started running
  * Serving Flask app 'app'
  * Debug mode: off
WARNING: This is a development server. Do not use it in a production 
deployment. Use a production WSGI server instead.

  * Running on http://127.0.0.1:5000
Press CTRL+C to quit
root put future: 
worker got future: 
worker sleeped
finalizer got future: 
finalizer set result
```

Judging by what's printing out, the final `result = await future` 
doesn't seem to be happy here.


Maybe someone sees something obvious I'm doing wrong here? I presume I'm 
mixing threads and asyncio in a way I shouldn't be.


Here's some system information (just freshly installed with pip3 install 
flask[async] in a virtual environment for python version 3.11.2):


```
$ uname -a
Linux x1carbon 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 
(2024-02-01) x86_64 GNU/Linux


$ python3 -V
Python 3.11.2

$ pip3 freeze
asgiref==3.7.2
blinker==1.7.0
click==8.1.7
Flask==3.0.2
itsdangerous==2.1.2
Jinja2==3.1.3
MarkupSafe==2.1.5
Werkzeug==3.0.1
```

Thanks for any help!

Cheers,
Thomas


Hi Thomas

I am no expert. However, I do have something similar in my app, and it 
works.


I do not use 'await future', I use 'asyncio.wait_for(future)'.

HTH

Frank Millman




Re: Question about garbage collection

2024-01-16 Thread Frank Millman via Python-list

On 2024-01-17 3:01 AM, Greg Ewing via Python-list wrote:

On 17/01/24 1:01 am, Frank Millman wrote:
I sometimes need to keep a reference from a transient object to a more 
permanent structure in my app. To save myself the extra step of 
removing all these references when the transient object is deleted, I 
make them weak references.


I don't see how weak references help here at all. If the transient
object goes away, all references from it to the permanent objects also
go away.

A weak reference would only be of use if the reference went the other
way, i.e. from the permanent object to the transient object.



You are right. I got my description above back-to-front. It is a pub/sub 
scenario. A transient object makes a request to the permanent object to 
be notified of any changes. The permanent object stores a reference to 
the transient object and executes a callback on each change. When the 
transient object goes away, the reference must be removed.
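
A hedged sketch of that pub/sub shape using a weakref.WeakSet, so the subscription disappears automatically when the transient object is garbage-collected (the class and method names here are invented for illustration):

```python
import gc
import weakref

class Publisher:
    def __init__(self):
        # weak references: a dead subscriber drops out automatically
        self._subscribers = weakref.WeakSet()

    def subscribe(self, obj):
        self._subscribers.add(obj)

    def notify(self, change):
        for sub in list(self._subscribers):
            sub.on_change(change)

class Transient:
    def on_change(self, change):
        print("got", change)

pub = Publisher()
t = Transient()
pub.subscribe(t)
pub.notify("update")          # Transient receives the change
del t
gc.collect()
print(len(pub._subscribers))  # 0 - no manual unsubscribe needed
```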


Frank



Re: Question about garbage collection

2024-01-16 Thread Frank Millman via Python-list

On 2024-01-16 2:15 PM, Chris Angelico via Python-list wrote:


Where do you tend to "leave a reference dangling somewhere"? How is
this occurring? Is it a result of an incomplete transaction (like an
HTTP request that never finishes), or a regular part of the operation
of the server?



I have a class that represents a database table, and another class that 
represents a database column. There is a one-to-many relationship and 
they maintain references to each other.


In another part of the app, there is a class that represents a form, and 
another class that represents the gui elements on the form. Again there 
is a one-to-many relationship.


A gui element that represents a piece of data has to maintain a link to 
its database column object. There can be a many-to-one relationship, as 
there could be more than one gui element referring to the same column.


There are added complications which I won't go into here. The bottom 
line is that on some occasions a form which has been closed does not get 
gc'd.


I have been trying to reproduce the problem in my toy app, but I cannot 
get it to fail. There is a clue there! I think I have just 
over-complicated things.


I will start with a fresh approach tomorrow. If you don't hear from me 
again, you will know that I have solved it!


Thanks for the input, it definitely helped.

Frank




Re: Question about garbage collection

2024-01-16 Thread Frank Millman via Python-list

On 2024-01-15 3:51 PM, Frank Millman via Python-list wrote:

Hi all

I have read that one should not have to worry about garbage collection 
in modern versions of Python - it 'just works'.


I don't want to rely on that. My app is a long-running server, with 
multiple clients logging on, doing stuff, and logging off. They can 
create many objects, some of them long-lasting. I want to be sure that 
all objects created are gc'd when the session ends.




I did not explain myself very well. Sorry about that.

My problem is that my app is quite complex, and it is easy to leave a 
reference dangling somewhere which prevents an object from being gc'd.


This can create (at least) two problems. The obvious one is a memory 
leak. The second is that I sometimes need to keep a reference from a 
transient object to a more permanent structure in my app. To save myself 
the extra step of removing all these references when the transient 
object is deleted, I make them weak references. This works, unless the 
transient object is kept alive by mistake and the weak ref is never removed.


I feel it is important to find these dangling references and fix them, 
rather than wait for problems to appear in production. The only method I 
can come up with is to use the 'delwatcher' class that I used in my toy 
program in my original post.


I am surprised that this issue does not crop up more often. Does nobody 
else have these problems?


Frank





Question about garbage collection

2024-01-15 Thread Frank Millman via Python-list

Hi all

I have read that one should not have to worry about garbage collection 
in modern versions of Python - it 'just works'.


I don't want to rely on that. My app is a long-running server, with 
multiple clients logging on, doing stuff, and logging off. They can 
create many objects, some of them long-lasting. I want to be sure that 
all objects created are gc'd when the session ends.


I do have several circular references. My experience is that if I do not 
take some action to break the references when closing the session, the 
objects remain alive. Below is a very simple program to illustrate this.


Am I missing something? All comments appreciated.

Frank Millman

==

import gc

class delwatcher:
    # This stores enough information to identify the object being watched.
    # It does not store a reference to the object itself.
    def __init__(self, obj):
        self.id = (obj.type, obj.name, id(obj))
        print('***', *self.id, 'created ***')
    def __del__(self):
        print('***', *self.id, 'deleted ***')

class Parent:
    def __init__(self, name):
        self.type = 'parent'
        self.name = name
        self.children = []
        self._del = delwatcher(self)

class Child:
    def __init__(self, parent, name):
        self.type = 'child'
        self.parent = parent
        self.name = name
        parent.children.append(self)
        self._del = delwatcher(self)

p1 = Parent('P1')
p2 = Parent('P2')

c1_1 = Child(p1, 'C1_1')
c1_2 = Child(p1, 'C1_2')
c2_1 = Child(p2, 'C2_1')
c2_2 = Child(p2, 'C2_2')

input('waiting ...')

# if next 2 lines are included, parent and child can be gc'd
# for ch in p1.children:
#     ch.parent = None

# if next line is included, child can be gc'd, but not parent
# p1.children = None

del c1_1
del p1
gc.collect()

input('wait some more ...')
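
One way to avoid breaking the references by hand, sketched with stripped-down versions of the classes above (the delwatcher machinery is omitted), is to make the child's back-reference weak; with no strong cycle, plain reference counting frees both objects as soon as the last outside name goes:

```python
import weakref

class Parent2:
    def __init__(self, name):
        self.name = name
        self.children = []

class Child2:
    def __init__(self, parent, name):
        self.name = name
        # weak back-reference: no Parent <-> Child strong cycle
        self._parent = weakref.ref(parent)
        parent.children.append(self)

    @property
    def parent(self):
        return self._parent()  # None once the parent is gone

p = Parent2('P1')
c = Child2(p, 'C1')
watcher = weakref.ref(p)
del p, c                  # no cycle, so refcounting frees both at once
print(watcher() is None)  # True - collected without calling gc.collect()
```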



Type hints - am I doing it right?

2023-12-12 Thread Frank Millman via Python-list

Hi all

I am adding type hints to my code base.

I support three databases - sqlite3, Sql Server, PostgreSQL. The db 
parameters are kept in an ini file, under the section name 'DbParams'. 
This is read on program start, using configparser, and passed to a 
function config_database() in another module with the argument 
cfg['DbParams'].


In the other module I have this -

 def config_database(db_params):

To add a type hint, I now have this -

 def config_database(db_params: configparser.SectionProxy):

To get this to work, I have to add 'import configparser' at the top of 
the module.


I have three separate modules, one for each database, with a subclass 
containing the methods and attributes specific to that database. Each 
one has a connect() method which receives db_params as a parameter. Now 
I have to add 'import configparser' at the top of each of these modules 
in order to type hint the method.


This seems verbose. If it is the correct way of doing it I can live with 
it, but I wondered if there was an easier way.


BTW I have realised that I can bypass the problem by converting 
db_params to a dict, using dict(cfg['DbParams']). But I would still like 
an answer to the original question, as I am sure similar situations will 
occur without such a simple solution.
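
One lighter-weight option, offered as a suggestion rather than what the original code does: since the database modules only read db_params like a read-only dict of strings, they could hint it as a generic Mapping and avoid importing configparser at all. A configparser.SectionProxy satisfies Mapping[str, str], and so does the dict produced by dict(cfg['DbParams']).

```python
from collections.abc import Mapping

def config_database(db_params: Mapping[str, str]) -> str:
    # hypothetical body: just read one parameter
    return db_params['dbname']

print(config_database({'dbname': 'accounts'}))  # accounts
```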


Thanks

Frank Millman



Re: Simple webserver

2023-10-25 Thread Frank Millman via Python-list

On 2023-10-22 7:35 PM, Dieter Maurer via Python-list wrote:


The web server in Python's runtime library is fairly simple,
focusing only on the HTTP requirements.

You might want additional things for an HTTP server
exposed on the internet which should potentially handle high trafic:
e.g.

  * detection of and (partial) protection against denial of service attacks,
  * load balancing,
  * virtual hosting
  * proxing
  * URL rewriting
  * high throughput, low latency

Depending on your requirements, other web servers might be preferable.


Dieter's response was very timely for me, as it provides some answers to 
a question that I was thinking of posting. My use-case is reasonably 
on-topic for this thread, so I won't start a new one, if that is ok.


I am writing a business/accounting application. The server uses Python 
and asyncio, the client is written in Javascript. The project is inching 
towards a point where I may consider releasing it. My concern was 
whether my home-grown HTTP server was too simplistic for production, and 
if so, whether I should be looking into using one of the more 
established frameworks. After some brief investigation into Dieter's 
list of additional requirements, here are my initial thoughts. Any 
comments will be welcome.


I skimmed through the documentation for flask, Django, and FastAPI. As 
far as I can tell, none of them address the points listed above 
directly. Instead, they position themselves as one layer in a stack of 
technologies, and rely on other layers to provide additional 
functionality. If I read this correctly, there is nothing to stop me 
doing the same.


Based on this, I am considering the following -

1. Replace my HTTP handler with Uvicorn. Functionality should be the 
same, but performance should be improved.


2. Instead of running as a stand-alone server, run my app as a 
reverse-proxy using Nginx. I tested this a few years ago using Apache, 
and it 'just worked', so I am fairly sure that it will work with Nginx 
as well. Nginx can then provide the additional functionality that Dieter 
has mentioned.


My main concern is that, if I do release my app, I want it to be taken 
seriously and not dismissed as 'Mickey Mouse'. Do you think the above 
changes would assist with that?


When I talk about releasing it, it is already available on Github here - 
https://github.com/FrankMillman/AccInABox.


You are welcome to look at it, but it needs a lot of tidying up before 
it will be ready for a wider audience.


Frank Millman



Re: Changing the original SQLite version to the latest

2023-02-14 Thread Frank Millman

On 2023-02-15 5:59 AM, Thomas Passin wrote:
>
> "Download the latest release from http://www.sqlite.org/download.html
> and manually copy sqlite3.dll into Python's DLLs subfolder."
>

I have done exactly this a number of times and it has worked for me.

Frank Millman




Re: asyncio questions

2023-01-27 Thread Frank Millman

On 2023-01-27 2:14 PM, Frank Millman wrote:


I have changed it to async, which I call with 'asyncio.run'. It now 
looks like this -


    server = await asyncio.start_server(handle_client, host, port)
    await setup_companies()
    session_check = asyncio.create_task(
        check_sessions())  # start background task

    print('Press Ctrl+C to stop')

    try:
        await server.serve_forever()
    except asyncio.CancelledError:
        pass
    finally:
        session_check.cancel()  # tell session_check to stop running
        await asyncio.wait([session_check])
        server.close()



I don't think I need the 'finally' clause - the cleanup can all happen 
in the 'except' block.


Frank



Re: asyncio questions

2023-01-27 Thread Frank Millman

On 2023-01-26 7:16 PM, Dieter Maurer wrote:

Frank Millman wrote at 2023-1-26 12:12 +0200:

I have written a simple HTTP server using asyncio. It works, but I don't
always understand how it works, so I was pleased that Python 3.11
introduced some new high-level concepts that hide the gory details. I
want to refactor my code to use these concepts, but I am not finding it
easy.

In simple terms my main loop looked like this -

    loop = asyncio.get_event_loop()
    server = loop.run_until_complete(
        asyncio.start_server(handle_client, host, port))
    loop.run_until_complete(setup_companies())
    session_check = asyncio.ensure_future(
        check_sessions())  # start background task
    print('Press Ctrl+C to stop')
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        print()
    finally:
        session_check.cancel()  # tell session_check to stop running
        loop.run_until_complete(asyncio.wait([session_check]))
        server.close()
        loop.stop()


Why does your code uses several `loop.run*` calls?

In fact, I would define a single coroutine and run that
with `asyncio.run`.
This way, the coroutine can use all `asyncio` features,
including `loop.create_task`.


You are right, Dieter. The function that I showed above is a normal 
function, not an async one. There was no particular reason for this - I 
must have got it working like that at some point in the past, and 'if it 
ain't broke ...'


I have changed it to async, which I call with 'asyncio.run'. It now 
looks like this -


server = await asyncio.start_server(handle_client, host, port)
await setup_companies()
session_check = asyncio.create_task(
    check_sessions())  # start background task

print('Press Ctrl+C to stop')

try:
    await server.serve_forever()
except asyncio.CancelledError:
    pass
finally:
    session_check.cancel()  # tell session_check to stop running
    await asyncio.wait([session_check])
    server.close()

It works exactly the same as before, and it is now much neater.

Thanks for the input.

Frank



asyncio questions

2023-01-26 Thread Frank Millman

Hi all

I have written a simple HTTP server using asyncio. It works, but I don't 
always understand how it works, so I was pleased that Python 3.11 
introduced some new high-level concepts that hide the gory details. I 
want to refactor my code to use these concepts, but I am not finding it 
easy.


In simple terms my main loop looked like this -

    loop = asyncio.get_event_loop()
    server = loop.run_until_complete(
        asyncio.start_server(handle_client, host, port))
    loop.run_until_complete(setup_companies())
    session_check = asyncio.ensure_future(
        check_sessions())  # start background task
    print('Press Ctrl+C to stop')
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        print()
    finally:
        session_check.cancel()  # tell session_check to stop running
        loop.run_until_complete(asyncio.wait([session_check]))
        server.close()
        loop.stop()

Using 3.11 it now looks like this -

    with asyncio.Runner() as runner:
        server = runner.run(asyncio.start_server(
            handle_client, host, port))
        runner.run(setup_companies())
        session_check = asyncio.ensure_future(
            check_sessions())  # start background task
        print('Press Ctrl+C to stop')
        try:
            runner.run(server.serve_forever())
        except KeyboardInterrupt:
            print()
        finally:
            session_check.cancel()  # tell session_check to stop running
            runner.run(asyncio.wait([session_check]))
            server.close()
It works, and I guess it looks a bit neater.

Problem 1.

The docs to 'asyncio.ensure_future' state 'See also the create_task() 
function which is the preferred way for creating new Tasks'


If I change 'ensure_future' to 'create_task', I get
    RuntimeError: no running event loop

I don't know how to fix this.
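
For what it's worth, a minimal sketch of the usual fix: asyncio.create_task() needs a running event loop, so it has to be called from inside a coroutine that the loop is already driving (here via asyncio.run(); this check_sessions is a trivial stand-in for the real one):

```python
import asyncio

async def check_sessions():
    await asyncio.sleep(0)  # stand-in for the real background work
    return "checked"

async def main():
    # inside main() a loop is running, so create_task() succeeds
    task = asyncio.create_task(check_sessions())
    return await task

print(asyncio.run(main()))  # checked
```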

Problem 2.

The docs have a section on 'Handling Keyboard Interruption'

https://docs.python.org/3.11/library/asyncio-runner.html#asyncio.Runner

I have not figured out how to adapt my code to use this new approach.

Any suggestions appreciated.

Frank Millman

P.S. Might it be better to ask these questions on the Async_SIG 
Discussion Forum?



Re: To clarify how Python handles two equal objects

2023-01-14 Thread Frank Millman

On 2023-01-15 4:36 AM, Roel Schroeven wrote:



Chris Angelico schreef op 15/01/2023 om 1:41:

On Sun, 15 Jan 2023 at 11:38, Jen Kris  wrote:
>
> Yes, in fact I asked my original question – "I discovered something 
about Python array handling that I would like to clarify" -- because I 
saw that Python did it that way.

>

Yep. This is not specific to arrays; it is true of all Python objects.
Also, I suspect you're still thinking about things backwards, and am
trying to lead you to a completely different way of thinking that
actually does align with Python's object model.
Indeed, I also still have the impression that Jen is thinking in terms 
of variables that are possibly aliased, such as you can have in a 
language like C, instead of objects with one or more names like we have 
in Python. Jen, in the Python model you really have to think of the 
objects largely independently of the names that are or are not 
referencing them.




My 'aha' moment came when I understood that a python object has only 
three properties - a type, an id, and a value. It does *not* have a name.


Frank Millman



Re: Trying to understand nested loops

2022-08-05 Thread Frank Millman

On 2022-08-05 9:34 AM, ojomooluwatolami...@gmail.com wrote:

Hello, I’m new to learning python and I stumbled upon a question about nested loops.
This is the question below. Can you please explain how they arrived at 9 as the answer?
Thanks

var = 0
for i in range(3):
   for j in range(-2,-7,-2):
 var += 1
  print(var)



Welcome to Python. I am sure you are going to enjoy it.

To learn Python, you must learn to use the Python interactive prompt 
(also known as the REPL).


Type 'python' at your console, and it should bring up something like this -

C:\Users\E7280>python
Python 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64 
bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.
>>>

The confusing part of your example above is 'for j in range(-2,-7,-2)'.

To find out what it does, enter it at the '>>>' prompt -

>>> for j in range(-2, -7, -2):
...     print(j)
...
-2
-4
-6
>>>

For the purposes of your exercise, all you need to know at this stage is 
that it loops three times.


Does that help answer your question? If not, feel free to come back with 
more questions.


BTW, there is an indentation error in your original post - line 5 should 
line up with line 4. It is preferable to copy/paste your code into any 
messages posted here rather than type it in, as that avoids the 
possibility of any typos.


Frank Millman



Re: list indices must be integers or slices, not str

2022-07-21 Thread Frank Millman

On 2022-07-20 4:45 PM, Chris Angelico wrote:

On Wed, 20 Jul 2022 at 23:50, Peter Otten <__pete...@web.de> wrote:


I found

https://peps.python.org/pep-3101/

"""
PEP 3101 – Advanced String Formatting
...
An example of the ‘getitem’ syntax:

"My name is {0[name]}".format(dict(name='Fred'))

It should be noted that the use of ‘getitem’ within a format string is
much more limited than its conventional usage. In the above example, the
string ‘name’ really is the literal string ‘name’, not a variable named
‘name’. The rules for parsing an item key are very simple. If it starts
with a digit, then it is treated as a number, otherwise it is used as a
string.



Cool. I think this is a good justification for a docs patch, since
that really should be mentioned somewhere other than a historical
document.

ChrisA


I have submitted the following -

https://github.com/python/cpython/issues/95088

Frank


Re: list indices must be integers or slices, not str

2022-07-20 Thread Frank Millman



On 2022-07-20 12:31 PM, Frank Millman wrote:

On 2022-07-20 11:37 AM, Chris Angelico wrote:

On Wed, 20 Jul 2022 at 18:34, Frank Millman  wrote:


Hi all

C:\Users\E7280>python
Python 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64
bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
  >>>
  >>> x = list(range(10))
  >>>
  >>> '{x[1]}'.format(**vars())
'1'
  >>>
  >>> '{x[-1]}'.format(**vars())
Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
TypeError: list indices must be integers or slices, not str
  >>>

Can anyone explain this error? It seems that a negative index is deemed
to be a string in this case.



Yeah, that does seem a little odd. What you're seeing is the same as
this phenomenon:


"{x[1]} {x[spam]}".format(x={1: 42, "spam": "ham"})

'42 ham'

"{x[1]} {x[spam]}".format(x={"1": 42, "spam": "ham"})

Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
KeyError: 1

But I can't find it documented anywhere that digits-only means
numeric. The best I can find is:

https://docs.python.org/3/library/string.html#formatstrings
"""The arg_name can be followed by any number of index or attribute
expressions. An expression of the form '.name' selects the named
attribute using getattr(), while an expression of the form '[index]'
does an index lookup using __getitem__()."""

and in the corresponding grammar:

field_name    ::=  arg_name ("." attribute_name | "[" element_index "]")*

index_string  ::=  <any source character except "]"> +

In other words, any sequence of characters counts as an argument, as
long as it's not ambiguous. It doesn't seem to say that "all digits is
interpreted as an integer, everything else is interpreted as a
string". ISTM that a negative number should be interpreted as an
integer too, but that might be a backward compatibility break.



Thanks for investigating this further. I agree it seems odd.

As quoted above, an expression of the form '[index]' does an index 
lookup using __getitem()__.


The only __getitem__() that I can find is in the operator module, and 
that handles negative numbers just fine.


Do you think it is worth me raising an issue, if only to find out the 
rationale if there is one?


Frank


I saw this from Paul Rubin - for some reason his posts appear in google 
groups, but not python-list.


"It seems to only want integer constants. x[2+2] and x[k] where k=2
don't work either.

I think the preferred style these days is f'{x[-1]}' which works."

Unfortunately the 'f' option does not work for me in this case, as I am 
using a string object, not a string literal.


Frank
--
https://mail.python.org/mailman/listinfo/python-list
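A short sketch of the behaviour discussed in this thread, plus one workaround usable when the template is an ordinary string object rather than an f-string (the `template` name here is purely illustrative):

```python
x = list(range(10))

# str.format treats the '[index]' contents as an integer only when they
# are all digits; '-1' therefore becomes the *string* key '-1', which a
# list rejects.
try:
    '{x[-1]}'.format(x=x)
except TypeError as exc:
    print(exc)  # list indices must be integers or slices, not str

# When an f-string is not an option because the template is a plain
# string object, do the indexing before formatting:
template = 'last item: {0}'
print(template.format(x[-1]))  # last item: 9
```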


Re: list indices must be integers or slices, not str

2022-07-20 Thread Frank Millman

On 2022-07-20 11:37 AM, Chris Angelico wrote:

On Wed, 20 Jul 2022 at 18:34, Frank Millman  wrote:


Hi all

C:\Users\E7280>python
Python 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64
bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
  >>>
  >>> x = list(range(10))
  >>>
  >>> '{x[1]}'.format(**vars())
'1'
  >>>
  >>> '{x[-1]}'.format(**vars())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list indices must be integers or slices, not str
  >>>

Can anyone explain this error? It seems that a negative index is deemed
to be a string in this case.



Yeah, that does seem a little odd. What you're seeing is the same as
this phenomenon:


"{x[1]} {x[spam]}".format(x={1: 42, "spam": "ham"})

'42 ham'

"{x[1]} {x[spam]}".format(x={"1": 42, "spam": "ham"})

Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
KeyError: 1

But I can't find it documented anywhere that digits-only means
numeric. The best I can find is:

https://docs.python.org/3/library/string.html#formatstrings
"""The arg_name can be followed by any number of index or attribute
expressions. An expression of the form '.name' selects the named
attribute using getattr(), while an expression of the form '[index]'
does an index lookup using __getitem__()."""

and in the corresponding grammar:

field_name    ::=  arg_name ("." attribute_name | "[" element_index "]")*
index_string  ::=  <any source character except "]"> +

In other words, any sequence of characters counts as an argument, as
long as it's not ambiguous. It doesn't seem to say that "all digits is
interpreted as an integer, everything else is interpreted as a
string". ISTM that a negative number should be interpreted as an
integer too, but that might be a backward compatibility break.



Thanks for investigating this further. I agree it seems odd.

As quoted above, an expression of the form '[index]' does an index 
lookup using __getitem__().


The only __getitem__() that I can find is in the operator module, and 
that handles negative numbers just fine.


Do you think it is worth me raising an issue, if only to find out the 
rationale if there is one?


Frank
--
https://mail.python.org/mailman/listinfo/python-list


list indices must be integers or slices, not str

2022-07-20 Thread Frank Millman

Hi all

C:\Users\E7280>python
Python 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64 
bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> x = list(range(10))
>>>
>>> '{x[1]}'.format(**vars())
'1'
>>>
>>> '{x[-1]}'.format(**vars())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: list indices must be integers or slices, not str
>>>

Can anyone explain this error? It seems that a negative index is deemed 
to be a string in this case.


Thanks

Frank Millman
--
https://mail.python.org/mailman/listinfo/python-list


Re: One-liner to merge lists?

2022-02-22 Thread Frank Millman

On 2022-02-22 5:45 PM, David Raymond wrote:

Is there a simpler way?



d = {1: ['aaa', 'bbb', 'ccc'], 2: ['fff', 'ggg']}
[a for b in d.values() for a in b]

['aaa', 'bbb', 'ccc', 'fff', 'ggg']






Now that's what I was looking for.

I am not saying that I will use it, but as an academic exercise I felt 
sure that there had to be a one-liner in pure python.


I had forgotten about nested comprehensions. Thanks for the reminder.

Frank
--
https://mail.python.org/mailman/listinfo/python-list


Re: One-liner to merge lists?

2022-02-22 Thread Frank Millman

On 2022-02-22 11:30 AM, Chris Angelico wrote:

On Tue, 22 Feb 2022 at 20:24, Frank Millman  wrote:


Hi all

I think this should be a simple one-liner, but I cannot figure it out.

I have a dictionary with a number of keys, where each value is a single
list -

  >>> d = {1: ['aaa', 'bbb', 'ccc'], 2: ['fff', 'ggg']}

I want to combine all values into a single list -

  >>> ans = ['aaa', 'bbb', 'ccc', 'fff', 'ggg']

I can do this -

  >>> a = []
  >>> for v in d.values():
...   a.extend(v)
...
  >>> a
['aaa', 'bbb', 'ccc', 'fff', 'ggg']

I can also do this -

  >>> from itertools import chain
  >>> a = list(chain(*d.values()))
  >>> a
['aaa', 'bbb', 'ccc', 'fff', 'ggg']
  >>>

Is there a simpler way?



itertools.chain is a good option, as it scales well to arbitrary
numbers of lists (and you're guaranteed to iterate over them all just
once as you construct the list). But if you know that the lists aren't
too large or too numerous, here's another method that works:


sum(d.values(), [])

['aaa', 'bbb', 'ccc', 'fff', 'ggg']

It's simply adding all the lists together, though you have to tell it
that you don't want a numeric summation.



Thanks, that is neat.

However, I did see this -

>>> help(sum)
Help on built-in function sum in module builtins:

sum(iterable, /, start=0)
Return the sum of a 'start' value (default: 0) plus an iterable of 
numbers


When the iterable is empty, return the start value.
This function is intended specifically for use with numeric values
and may reject non-numeric types.
>>>

So it seems that it is not recommended.

I think I will stick with itertools.chain.

Frank

--
https://mail.python.org/mailman/listinfo/python-list
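For completeness, a sketch of one more option not spelled out above: `itertools.chain.from_iterable`, which takes the iterable of lists directly, so there is no `*` unpacking and none of the caveats attached to `sum()`:

```python
from itertools import chain

d = {1: ['aaa', 'bbb', 'ccc'], 2: ['fff', 'ggg']}

# chain.from_iterable accepts one iterable of iterables, so d.values()
# is passed as-is instead of being unpacked into separate arguments.
merged = list(chain.from_iterable(d.values()))
print(merged)  # ['aaa', 'bbb', 'ccc', 'fff', 'ggg']

# Equivalent nested comprehension, for comparison:
assert merged == [item for sub in d.values() for item in sub]
```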


One-liner to merge lists?

2022-02-22 Thread Frank Millman

Hi all

I think this should be a simple one-liner, but I cannot figure it out.

I have a dictionary with a number of keys, where each value is a single 
list -


>>> d = {1: ['aaa', 'bbb', 'ccc'], 2: ['fff', 'ggg']}

I want to combine all values into a single list -

>>> ans = ['aaa', 'bbb', 'ccc', 'fff', 'ggg']

I can do this -

>>> a = []
>>> for v in d.values():
...   a.extend(v)
...
>>> a
['aaa', 'bbb', 'ccc', 'fff', 'ggg']

I can also do this -

>>> from itertools import chain
>>> a = list(chain(*d.values()))
>>> a
['aaa', 'bbb', 'ccc', 'fff', 'ggg']
>>>

Is there a simpler way?

Thanks

Frank Millman


--
https://mail.python.org/mailman/listinfo/python-list


A bit of nostalgia

2022-01-30 Thread Frank Millman

Hi all

Sadly my ex-boss died recently. I learnt a huge amount from him. I also 
enjoyed and got valuable insights from his stories of 'the good old 
days', so I thought I would share a couple of them.


Back in the 1960's (long before we met) he worked for and eventually 
became manager of a large ICL bureau here in South Africa, running 
accounting and other applications for a number of corporate customers.


One day some new equipment arrived from UK. The engineers spent some 
time moving things around in the computer room and installing the new 
kit, but nobody told the software department what was going on. When it 
was finished, they were told 'Right, now you have disk drives.'


Up to that point, their programs used magnetic tape for data storage. A 
typical debtors run would involve loading a master file onto one tape 
drive, and a transaction file, sorted in account number sequence, onto a 
second tape drive. To print statements, the program would read the first 
master record, then read the first transaction record, if it found a 
match keep reading transactions until it hit a new account number, etc. 
When all reports were run, the last step was to create a new master file 
for the following month. This required mounting a scratch tape onto a 
third tape drive, and the program would run in a similar fashion but 
write a new record with an updated balance to the scratch tape on every 
change of account number.


When they asked how they were supposed to use the new disk drives, the 
answer was 'No idea, but there are the manuals' (no doubt several inches 
thick). After some discussion someone came up with an idea - load the 
master file onto one disk drive, the transaction file onto a second one, 
and a blank disk onto a third one. They ran it, it worked, and it ran 
much faster, so everyone was happy. Over time they realised that they 
could write the master record back onto the first drive, and slowly they 
adapted to a new way of developing applications.


The second story involves a rumour coming down the grapevine from UK 
that a new product was coming out that would put all their jobs at risk. 
It would make programming so easy that there would be no need to hire 
specialist programmers any more. Eventually this new product was 
released. It was called Cobol.


Up to that point they programmed everything in assembler. I recall my 
boss telling me that the ICL assembler was called PLAN, which was an 
acronym, but I forget what it stood for.


Frank Millman
--
https://mail.python.org/mailman/listinfo/python-list


Re: Negative subscripts

2021-11-26 Thread Frank Millman

On 2021-11-26 11:24 PM, dn via Python-list wrote:

On 26/11/2021 22.17, Frank Millman wrote:

In my program I have a for-loop like this -


for item in x[:-y]:

...    [do stuff]

'y' may or may not be 0. If it is 0 I want to process the entire list
'x', but of course -0 equals 0, so it returns an empty list.

...



[...]

That was an interesting read - thanks for spelling it out.




for y in [ 0, 1, 2, 3, 4, 5 ]:

... print( y, x[ :len( x ) - y ] )
...
0 ['a', 'b', 'c', 'd', 'e']
1 ['a', 'b', 'c', 'd']
2 ['a', 'b', 'c']
3 ['a', 'b']
4 ['a']
5 []

and yes, if computing y is expensive/ugly, for extra-credit, calculate
the 'stop' value outside/prior-to the for-loop!



Ignoring the 'ugly' for the moment, what if computing y is expensive?

To check this, I will restate the example to more closely match my use case.

>>> x = [1, 2, 3, 4, 5, 6, 7]
>>> y = [5, 4, 3]
>>> z = []
>>>
>>> for i in x[ : len(x) - len(y) ]:
...   i
...
1
2
3
4
>>>
>>> for i in x[ : len(x) - len(z) ]:
...   i
...
1
2
3
4
5
6
7
>>>

So it works perfectly (not that I had any doubts).

But what if it is expensive to compute y? Or to rephrase it, is y 
computed on every iteration, or only on the first one?


Without knowing the internals, it is not possible to tell just by 
looking at it. But there is a technique I learned from Peter Otten 
(haven't heard from him for a while - hope he is still around).


>>> def lng(lst):
...   print(f'*{len(lst)}*')
...   return len(lst)
...
>>>
>>> for i in x[ : lng(x) - lng(y) ]:
...   i
...
*7*
*3*
1
2
3
4
>>>
>>> for i in x[ : lng(x) - lng(z) ]:
...   i
...
*7*
*0*
1
2
3
4
5
6
7
>>>

From this it is clear that y is only computed once, when the loop is 
started. Therefore I think it follows that there is no need to 
pre-compute y.


Hope this makes sense.

Frank
--
https://mail.python.org/mailman/listinfo/python-list
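One more idiom for the y == 0 case, offered here as a hedged sketch rather than the thread's settled answer: a slice stop of None means "to the end", and `-y or None` evaluates to None exactly when y is 0:

```python
x = ['a', 'b', 'c', 'd', 'e']

for y in (0, 2):
    # '-y or None' is None when y == 0 (so the slice keeps the whole
    # list); otherwise it behaves exactly like x[:-y].
    print(y, x[:-y or None])
# 0 ['a', 'b', 'c', 'd', 'e']
# 2 ['a', 'b', 'c']
```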


Re: Negative subscripts

2021-11-26 Thread Frank Millman

On 2021-11-26 11:17 AM, Frank Millman wrote:

Hi all

In my program I have a for-loop like this -

 >>> for item in x[:-y]:
...    [do stuff]

'y' may or may not be 0. If it is 0 I want to process the entire list 
'x', but of course -0 equals 0, so it returns an empty list.


In theory I can say

 >>> for item in x[:-y] if y else x:
...    [do stuff]

But in my actual program, both x and y are fairly long expressions, so 
the result is pretty ugly.


Are there any other techniques anyone can suggest, or is the only 
alternative to use if...then...else to cater for y = 0?




Thanks for all the replies. A selection of neat ideas for me to choose from.

Much appreciated.

Frank

--
https://mail.python.org/mailman/listinfo/python-list


Negative subscripts

2021-11-26 Thread Frank Millman

Hi all

In my program I have a for-loop like this -

>>> for item in x[:-y]:
...    [do stuff]

'y' may or may not be 0. If it is 0 I want to process the entire list 
'x', but of course -0 equals 0, so it returns an empty list.


In theory I can say

>>> for item in x[:-y] if y else x:
...    [do stuff]

But in my actual program, both x and y are fairly long expressions, so 
the result is pretty ugly.


Are there any other techniques anyone can suggest, or is the only 
alternative to use if...then...else to cater for y = 0?


Thanks

Frank Millman
--
https://mail.python.org/mailman/listinfo/python-list


Re: Fun Generators

2021-04-22 Thread Frank Millman

On 2021-04-23 7:34 AM, Travis Griggs wrote:

Doing an "industry experience" talk to an incoming class at nearby university tomorrow. 
Have a couple points where I might do some "fun things" with python. Said students have 
been learning some python3.

I'm soliciting any *fun* generators people may have seen or written? Not so much the cool 
or clever ones. Or the mathematical ones (e.g. fib). Something more inane and 
"fun". But still showcasing generators uniqueness. Short and simple is good.

Thanks in advance!



Have you looked at this?

http://www.dabeaz.com/generators/

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Yield after the return in Python function.

2021-04-05 Thread Frank Millman

On 2021-04-05 2:25 PM, Bischoop wrote:

The return suspends the function execution so how is it that in below
example I got output: 

def doit():
 return 0
 yield 0
 
print(doit())




The 'yield' in the function makes the function a 'generator' function.

'Calling' a generator function does not execute the function, it returns 
a generator object.


You have to iterate over the generator object (e.g. by calling next() on 
it) in order to execute the function and return values.


Frank Millman



--
https://mail.python.org/mailman/listinfo/python-list
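A sketch making the explanation above concrete: iterating the generator runs the body, and the `return 0` ends it immediately by raising StopIteration with 0 attached as its value:

```python
def doit():
    return 0
    yield 0  # any 'yield' in the body makes this a generator function

g = doit()  # calling it only creates a generator object; nothing runs yet

# next() executes the body; 'return 0' terminates it at once, raising
# StopIteration whose .value carries the returned 0.
try:
    next(g)
except StopIteration as exc:
    print(exc.value)  # 0
```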


Re: Question about generators

2021-03-05 Thread Frank Millman

On 2021-03-06 8:21 AM, Frank Millman wrote:

Hi all

This is purely academic, but I would like to understand the following -

 >>>
 >>> a = [('x', 'y')]
 >>>
 >>> s = []
 >>> for b, c in a:
...   s.append((b, c))
...
 >>> s
[('x', 'y')]


This is what I expected.

 >>>
 >>> s = []
 >>> s.append(((b, c) for b, c in a))
 >>> s
[<generator object <genexpr> at 0x019FC3F863C0>]
 >>>

I expected the same as the first one.

I understand the concept that a generator does not return a value until 
you call next() on it, but I have not grasped the essential difference 
between the above two constructions.





Thanks, Alan and Ming.

I think I have got it now.

In my first example, a 'for' loop both creates an iterable object *and* 
iterates over it.


In my second example, a generator expression creates an iterable object, 
but does nothing with it until asked.


Changing 'append' to 'extend' has the effect of iterating over the 
generator expression and appending the results.


Frank

--
https://mail.python.org/mailman/listinfo/python-list
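The append/extend difference can be shown in a few lines (a sketch of the point made above):

```python
a = [('x', 'y')]

s = []
s.append(((b, c) for b, c in a))    # stores the generator object itself
print(len(s), type(s[0]).__name__)  # 1 generator

s = []
s.extend(((b, c) for b, c in a))    # iterates it, appending each item
print(s)  # [('x', 'y')]
```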


Question about generators

2021-03-05 Thread Frank Millman

Hi all

This is purely academic, but I would like to understand the following -

>>>
>>> a = [('x', 'y')]
>>>
>>> s = []
>>> for b, c in a:
...   s.append((b, c))
...
>>> s
[('x', 'y')]


This is what I expected.

>>>
>>> s = []
>>> s.append(((b, c) for b, c in a))
>>> s
[<generator object <genexpr> at 0x019FC3F863C0>]
>>>

I expected the same as the first one.

I understand the concept that a generator does not return a value until 
you call next() on it, but I have not grasped the essential difference 
between the above two constructions.


TIA for any insights.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: name for a mutually inclusive relationship

2021-02-24 Thread Frank Millman

On 2021-02-24 6:12 PM, Ethan Furman wrote:
I'm looking for a name for a group of options that, when one is 
specified, all of them must be specified.


For contrast,

- radio buttons: a group of options where only one can be specified 
(mutually exclusive)
- check boxes:   a group of options that are independent of each other 
(any number of

  them may be specified)

- ???: a group of options where, if one is specified, all must be 
specified (mutually

    inclusive)

So far, I have come up with:

- the Three Musketeers
- clique
- club
- best friends
- tight knit
- group

Is there a name out there already to describe that concept?



I have something vaguely similar, but my approach is different, so I 
don't know if this will be helpful or not.


Take an example of a hospital admission form. One of the fields is 
'Sex'. If the answer is Male, then certain fields are required, if the 
answer is Female, certain other fields are required.


There can be more than one such occurrence on the same form. A field for 
'Has Insurance?' could require insurance details if True, or payment 
method if False.


I construct a dialog for each possibility, with all the fields 
associated with that possibility.


Initially the dialogs are hidden. When the selection is made, the 
appropriate dialog is shown.


The controlling field can have multiple options, so rather than a 
check-box, it is the equivalent of a radio-button.


The term I use for this arrangement is 'sub-types'.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


etree, gzip, and BytesIO

2021-01-20 Thread Frank Millman

Hi all

This question is mostly to satisfy my curiosity.

In my app I use xml to represent certain objects, such as form 
definitions and process definitions.


They are stored in a database. I use etree.tostring() when storing them 
and etree.fromstring() when reading them back. They can be quite large, 
so I use gzip to compress them before storing them as a blob.


The sequence of events when reading them back is -
   - select gzip'd data from database
   - run gzip.decompress() to convert to a string
   - run etree.fromstring() to convert to an etree object

I was wondering if I could avoid having the unzipped string in memory, 
and create the etree object directly from the gzip'd data. I came up 
with this -


   - select gzip'd data from database
   - create a BytesIO object - fd = io.BytesIO(data)
   - use gzip to open the object - gf = gzip.open(fd)
   - run etree.parse(gf) to convert to an etree object

It works.

But I don't know what goes on under the hood, so I don't know if this 
achieves anything. If any of the steps involves decompressing the data 
and storing the entire string in memory, I may as well stick to my 
present approach.


Any thoughts?

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
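A self-contained round-trip sketch of the sequence described above (the 'form' element is illustrative, not the poster's actual schema):

```python
import gzip
import io
from xml.etree import ElementTree as etree

# Build a small tree, serialize it, and gzip the bytes - standing in
# for the blob read from the database.
root = etree.Element('form')
etree.SubElement(root, 'field', name='code')
blob = gzip.compress(etree.tostring(root))

# Reading back: gzip.open() wraps the BytesIO object and decompresses
# incrementally as etree.parse() pulls data from it in chunks, so our
# code never holds the full decompressed string.
with gzip.open(io.BytesIO(blob)) as gf:
    tree = etree.parse(gf)

print(tree.getroot().tag)  # form
```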


Re: strip() method makes me confused

2020-11-07 Thread Frank Millman

On 2020-11-07 1:28 PM, Frank Millman wrote:

On 2020-11-07 1:03 PM, Bischoop wrote:



[...]


another example:

text = "this is text, there should be not commas, but as you see there
are still"
y = txt.strip(",")
print(text)

output:
this is text, there should be not commas, but as you see there are still





P.S. If you wanted to remove all the commas, you could do it like this -

y = txt.replace(',', '')

Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: strip() method makes me confused

2020-11-07 Thread Frank Millman

On 2020-11-07 1:03 PM, Bischoop wrote:


According to documentation strip method removes heading and trailing
characters.


Both are explained in the docs -



Why then:

txt = ",rrttggs...,..s,bananas...s.rrr"

x = txt.strip(",s.grt")

print(x)

output: banana


"The chars argument is not a prefix or suffix; rather, all combinations 
of its values are stripped"


As you can see, it has removed every leading or trailing ',', 's', '.', 
'g', 'r', and 't'. It starts at the beginning and removes characters 
until it finds a character that is not in the chars arguments. Then it 
starts at the end and, working backwards, removes characters until it 
finds a character that is not in the chars arguments.




another example:

text = "this is text, there should be not commas, but as you see there
are still"
y = txt.strip(",")
print(text)

output:
this is text, there should be not commas, but as you see there are still



As you yourself said above, it removes 'leading and trailing 
characters'. In your example, the commas are all embedded in the string. 
They are not leading or trailing, so they are not removed.


Note that in Python 3.9, 2 new string methods have been added -

str.removeprefix(prefix, /)

If the string starts with the prefix string, return 
string[len(prefix):]. Otherwise, return a copy of the original string


str.removesuffix(suffix, /)

If the string ends with the suffix string and that suffix is not empty, 
return string[:-len(suffix)]. Otherwise, return a copy of the original 
string


HTH

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
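The contrast between the two behaviours in one sketch (the last two calls require Python 3.9+):

```python
txt = ",rrttggs...,..s,bananas...s.rrr"

# strip() treats its argument as a *set* of characters to peel from
# both ends, in any order and combination:
print(txt.strip(",s.grt"))                # banana

# removeprefix/removesuffix treat the argument as one exact string:
print("banana.wav".removesuffix(".wav"))  # banana
print("banana.wav".removeprefix("ba"))    # nana.wav
```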


Re: Is there a conflict of libraries here?

2020-11-05 Thread Frank Millman

On 2020-11-06 9:25 AM, Steve wrote:

In my program, I have the following lines of code:

 import random

 import re

 import time

 import datetime

 from datetime import timedelta

 from time import gmtime, strftime ##define strftime as time/date right
now

 import winsound as ws

 import sys

These may or may not affect my new program code but here is the issue:

  


If I add the code:

   from datetime import datetime

these new lines work:

dt = datetime.fromisoformat(ItemDateTime)

dt_string = dt.strftime(' at %H:%M on %A %d %B %Y')

and will fail without that "datetime import datetime" line

  


however;

  


With that "datetime import datetime" line included,

all of the lines of code throughout the program that contain
"datetime.datetime" fail.
These have been in use for over three years and there are at least a dozen
of them.

The error produced is:


 time1  = datetime.datetime.strptime(T1, date_format)

 AttributeError: type object 'datetime.datetime' has no attribute
'datetime'



I think all you have to do is -

1. Remove the line 'from datetime import datetime'.

2. Change dt = datetime.fromisoformat(ItemDateTime) to
  dt = datetime.datetime.fromisoformat(ItemDateTime)

Unless I have missed something, that should work.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
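A sketch of the suggested fix: keep the module as the only name called `datetime` in scope and qualify everything through it (the ISO string below is illustrative):

```python
import datetime

# With plain 'import datetime', the module owns the name, so both the
# class and its constructors are reached as datetime.datetime.*:
dt = datetime.datetime.fromisoformat('2020-11-06 09:25:00')
print(dt.strftime(' at %H:%M on %A %d %B %Y'))

# 'from datetime import datetime' would rebind the name to the class,
# making every existing 'datetime.datetime...' reference fail with
# AttributeError, as reported above.
```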


asyncio question

2020-11-03 Thread Frank Millman

Hi all

My app runs an HTTP server using asyncio. A lot of the code dates back 
to Python 3.4, and I am trying to bring it up to date. There is one 
aspect I do not understand.


The 'old' way looks like this -

import asyncio

def main():
loop = asyncio.get_event_loop()
server = loop.run_until_complete(
asyncio.start_server(handle_client, host, port))
loop.run_forever()

if __name__ == '__main__':
main()

According to the docs, the preferred way is now like this -

import asyncio

async def main():
loop = asyncio.get_running_loop()
server = await asyncio.start_server(
 handle_client, host, port)
async with server:
server.serve_forever()

if __name__ == '__main__':
asyncio.run(main())

It works, and it does look neater. But I want to start some background 
tasks before starting the server, and cancel them on Ctrl+C.


Using the 'old' method, I can wrap 'loop.run_forever()' in a 
try/except/finally, check for KeyboardInterrupt, and run my cleanup in 
the 'finally' block.


Using the 'new' method, KeyboardInterrupt is not caught by 
'server.serve_forever()' but by 'asyncio.run()'. It is too late to do 
any cleanup at this point, as the loop has already been stopped.


Is it ok to stick to the 'old' method, or is there a better way to do this.

Thanks

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
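One possible pattern, offered as a sketch rather than a settled answer to the question above: keep the cleanup in a try/finally inside the coroutine handed to asyncio.run(), so it executes while the loop is still alive. Here asyncio.sleep() stands in for server.serve_forever(), and the background task is a trivial placeholder:

```python
import asyncio

cleaned_up = False

async def background():
    while True:                    # stand-in for a real background task
        await asyncio.sleep(3600)

async def main():
    global cleaned_up
    task = asyncio.create_task(background())
    try:
        await asyncio.sleep(0.01)  # stands in for server.serve_forever()
    finally:
        # This runs while the loop is still running, so awaiting the
        # cancelled task here is safe.
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass
        cleaned_up = True

asyncio.run(main())
print(cleaned_up)  # True
```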


Re: Basic Python help

2020-10-23 Thread Frank Millman

On 2020-10-23 12:41 PM, mikael petterson wrote:

Hi,

I need to use the following code but in java.

  END_DELIM = '\n##\n'
  def start_delim(data_len): return '\n#%s\n' % (data_len)
   data = "%s%s%s" % (start_delim(len(data)), data, END_DELIM)

Can anyone help me to understand what it means:

I am guessing now:

a function defined "start_delim" takes the length of a data string.
function does modulo on something. This something I am not sure of
:-)
Will '\n#%s\n' be evaluated to a number when %s is replaced with 
data_len?

Then the result is used as one parameter in "%s%s%s"
start_delim then for the other
data
END_DELIM



I think it is simpler than that.

>>>
>>> '\n#%s\n' % 2
'\n#2\n'
>>>

All it is doing is replacing '%s' with the length of the string.

So the result is the concatenation of -

1. '\n' + '#' + length of string + '\n'  as the start delimiter

2. the string itself

3. '\n' + '#' + '#' + '\n'  as the end delimiter

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
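Running the three lines with a concrete string makes the framing visible:

```python
END_DELIM = '\n##\n'

def start_delim(data_len):
    # '%s' here is string interpolation, not modulo arithmetic: it
    # substitutes the length into the start-delimiter template.
    return '\n#%s\n' % data_len

data = "hello"
framed = "%s%s%s" % (start_delim(len(data)), data, END_DELIM)
print(repr(framed))  # '\n#5\nhello\n##\n'
```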


Re: How do I get datetime to stop showing seconds?

2020-10-16 Thread Frank Millman

On 2020-10-16 9:42 AM, Steve wrote:

d2 =  datetime.datetime.now() #Time Right now

Show this: 2020-10-16 02:53
and not this: 2020-10-16 02:53:48.585865



>>>
>>> str(d2)
'2020-10-16 10:29:38.423371'
>>>
>>> d2.strftime('%Y-%m-%d %H:%M')
'2020-10-16 10:29'
>>>


Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Debugging technique

2020-10-03 Thread Frank Millman

On 2020-10-03 8:58 AM, Chris Angelico wrote:

On Sat, Oct 3, 2020 at 4:53 PM Frank Millman  wrote:


Hi all

When debugging, I sometimes add a 'breakpoint()' to my code to examine
various objects.

However, I often want to know how I got there, so I replace the
'breakpoint()' with a '1/0', to force a traceback at that point. Then I
can rerun the previous step using the extra info from the traceback.

Is there a way to combine these into one step, so that, while in the
debugger, I can find out how I got there?



Not sure if it's what you're looking for, but in the debugger, you can
type "bt" to show a backtrace.



That is exactly what I was looking for :-)

Thanks very much.

Frank

--
https://mail.python.org/mailman/listinfo/python-list


Debugging technique

2020-10-03 Thread Frank Millman

Hi all

When debugging, I sometimes add a 'breakpoint()' to my code to examine 
various objects.


However, I often want to know how I got there, so I replace the 
'breakpoint()' with a '1/0', to force a traceback at that point. Then I 
can rerun the previous step using the extra info from the traceback.


Is there a way to combine these into one step, so that, while in the 
debugger, I can find out how I got there?


Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: list comprehension namespace problem

2020-09-24 Thread Frank Millman

On 2020-09-25 7:46 AM, Chris Angelico wrote:

On Fri, Sep 25, 2020 at 3:43 PM Frank Millman  wrote:


Hi all

I have a problem related (I think) to list comprehension namespaces. I
don't understand it enough to figure out a solution.

In the debugger, I want to examine the contents of the current instance,
so I can type

  (Pdb) dir(self)

and get the result with no problem.

However, it is a long list containing attribute names and method names,
and I only want to see the attribute names. So I tried this -

  (Pdb) [x for x in dir(self) if not callable(getattr(self, x))]
  *** NameError: name 'self' is not defined
  (Pdb)

Q1. Can someone explain what is going on?

Q2. Is there a way to get what I want?



If you put that line of code into your actual source code, does it
work? I think this might be a pdb-specific issue, since normally the
comprehension should have no difficulty seeing names from its
surrounding context.

A minimal case will probably involve the debugger and a function with
a local, unless in some way this depends on 'self' being special.



Yes, is does work from within my source code.

Frank


--
https://mail.python.org/mailman/listinfo/python-list


list comprehension namespace problem

2020-09-24 Thread Frank Millman

Hi all

I have a problem related (I think) to list comprehension namespaces. I 
don't understand it enough to figure out a solution.


In the debugger, I want to examine the contents of the current instance, 
so I can type


(Pdb) dir(self)

and get the result with no problem.

However, it is a long list containing attribute names and method names, 
and I only want to see the attribute names. So I tried this -


(Pdb) [x for x in dir(self) if not callable(getattr(self, x))]
*** NameError: name 'self' is not defined
(Pdb)

Q1. Can someone explain what is going on?

Q2. Is there a way to get what I want?

Thanks

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
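Outside pdb the comprehension sees `self` normally, as this sketch shows; inside pdb, one workaround worth trying (check it on your version) is the `interact` command, which opens an interpreter where comprehensions can resolve the frame's locals:

```python
class Demo:
    def __init__(self):
        self.a = 1
        self.b = 2

    def method(self):
        pass

self = Demo()
# In ordinary code the comprehension's own scope can still see 'self':
attrs = [x for x in dir(self) if not callable(getattr(self, x))]
print([a for a in attrs if not a.startswith('_')])  # ['a', 'b']
```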


Re: Pythonic style

2020-09-21 Thread Frank Millman

On 2020-09-21 3:46 PM, Chris Angelico wrote:

On Mon, Sep 21, 2020 at 11:37 PM Tim Chase
 wrote:


On 2020-09-20 18:34, Stavros Macrakis wrote:

Consider a simple function which returns the first element of an
iterable if it has exactly one element, and throws an exception
otherwise. It should work even if the iterable doesn't terminate.
I've written this function in multiple ways, all of which feel a
bit clumsy.

I'd be interested to hear thoughts on which of these solutions is
most Pythonic in style. And of course if there is a more elegant
way to solve this, I'm all ears! I'm probably missing something
obvious!


You can use tuple unpacking assignment and Python will take care of
the rest for you:

   >>> x, = tuple() # no elements
   Traceback (most recent call last):
 File "", line 1, in 
   ValueError: not enough values to unpack (expected 1, got 0)
   >>> x, = (1, )  # one element
   >>> x, = itertools.repeat("hello") # 2 to infinite elements
   Traceback (most recent call last):
 File "", line 1, in 
   ValueError: too many values to unpack (expected 1)

so you can do

   def fn(iterable):
 x, = iterable
 return x

The trailing comma can be hard to spot, so I usually draw a little
extra attention to it with either

   (x, ) = iterable

or

   x, = iterable # unpack one value

I'm not sure it qualifies as Pythonic, but it uses Pythonic features
like tuple unpacking and the code is a lot more concise.


Or:

[x] = iterable

I'd definitely recommend using unpacking as the most obvious way to do
this. Among other advantages, it gives different messages for the "too
many" and "too few" cases.



I used something similar years ago, but I made the mistake of relying on 
the error message in my logic, to distinguish between 'too few' and 'too 
many'. Guess what happened - Python changed the wording of the messages, 
and my logic failed.


After messing about with some alternatives, I ended up with the OP's 
first option (with some added comments), and have stuck with it ever 
since. It is not pretty, but it is readable and unambiguous.


Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
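A sketch of the unpacking approach wrapped as a function, relying only on the exception type and never on the message text (the pitfall described above):

```python
def only(iterable):
    # Unpacking enforces "exactly one element"; zero elements and many
    # elements both raise ValueError, which we catch by type.
    [x] = iterable
    return x

print(only([42]))   # 42
for bad in ([], [1, 2]):
    try:
        only(bad)
    except ValueError:
        print('ValueError for', bad)
```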


Re: Need tests of turtledemo.colordemo on Windows installations

2020-09-14 Thread Frank Millman

On 2020-09-14 7:07 AM, Frank Millman wrote:

On 2020-09-14 3:18 AM, Terry Reedy wrote:
User Tushar Sadhwani and I both have Win 10 with 3.8.5 installed.  
When he runs

...> py -3.8 -m turtledemo.colormixer
and moves the sliders a reasonable amount, he repeatably gets
Fatal Python error: Cannot recover from stack overflow.
...
https://bugs.python.org/issue41758

I have no problem, regardless of version, PowerShell or Command 
Prompt, installation or repository build.  I suspect that the issue is 
specific to his machine, but before closing, we need more evidence 
either way.




I am running 3.8.2 on Windows 10, and I can run the test with no issues 
at all.


I will upgrade to 3.8.5 later today and try again.

Frank Millman



3.8.5 (64-bit) also runs with no problems.

My machine is a Lenovo T420 laptop.

Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: Need tests of turtledemo.colordemo on Windows installations

2020-09-13 Thread Frank Millman

On 2020-09-14 3:18 AM, Terry Reedy wrote:
User Tushar Sadhwani and I both have Win 10 with 3.8.5 installed.  When 
he runs

...> py -3.8 -m turtledemo.colormixer
and moves the sliders a reasonable amount, he repeatably gets
Fatal Python error: Cannot recover from stack overflow.
...
https://bugs.python.org/issue41758

I have no problem, regardless of version, PowerShell or Command Prompt, 
installation or repository build.  I suspect that the issue is specific 
to his machine, but before closing, we need more evidence either way.




I am running 3.8.2 on Windows 10, and I can run the test with no issues 
at all.


I will upgrade to 3.8.5 later today and try again.

Frank Millman



Access last element after iteration

2020-07-07 Thread Frank Millman

Hi all

After iterating over a sequence, the final element is still accessible. 
In this case, the variable 'i' still references the integer 4.


Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 
bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> for i in range(5):
... print(i)
...
0
1
2
3
4
>>> print(i)
4
>>>

Is this guaranteed in Python, or should it not be relied on?

If the latter, and you wanted to do something additional using the last 
element, I assume that this would be the way to do it -


>>> for i in range(5):
... print(i)
... j = i
...
0
1
2
3
4
>>> print(j)
4
>>>

Alternatively, this also works, but is this one guaranteed?

>>> for i in range(5):
... print(i)
... else:
... print()
... print(i)
...
0
1
2
3
4

4
>>>
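For the record, the language reference does guarantee the first 
behaviour: a for loop does not create a new scope, so the loop target 
keeps its final binding after the loop ends - with one caveat when the 
iterable is empty. A minimal sketch:

```python
# The for-loop target is an ordinary assignment in the enclosing scope,
# so it survives after the loop finishes.
for i in range(5):
    pass
assert i == 4

# Caveat: if the iterable is empty, the target is never bound at all,
# so referencing it afterwards raises NameError.
for j in range(0):
    pass
try:
    j
except NameError:
    pass  # j was never assigned
```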

Frank Millman



Re: Bulletproof json.dump?

2020-07-06 Thread Frank Millman

On 2020-07-06 3:08 PM, Jon Ribbens via Python-list wrote:

On 2020-07-06, Frank Millman  wrote:

On 2020-07-06 2:06 PM, Jon Ribbens via Python-list wrote:

While I agree entirely with your point, there is however perhaps room
for a bit more helpfulness from the json module. There is no sensible
reason I can think of that it refuses to serialize sets, for example.
Going a bit further and, for example, automatically calling isoformat()
on date/time/datetime objects would perhaps be a bit more controversial,
but would frequently be useful, and there's no obvious downside that
occurs to me.


I may be missing something, but that would cause a downside for me.

I store Python lists and dicts in a database by calling dumps() when
saving them to the database and loads() when retrieving them.

If a date was 'dumped' using isoformat(), then on retrieval I would not
know whether it was originally a string, which must remain as is, or was
originally a date object, which must be converted back to a date object.

There is no perfect answer, but my solution works fairly well. When
dumping, I use 'default=repr'. This means that dates get dumped as
'datetime.date(2020, 7, 6)'. I look for that pattern on retrieval to
detect that it is actually a date object.


There is no difference whatsoever between matching on the repr output
you show above and matching on ISO-8601 datetimes, except that at least
ISO-8601 is an actual standard. So no, you haven't found a downside.



I don't understand. As you say, ISO-8601 is a standard, so the original 
object could well have been a string in that format. So how do you 
distinguish between an object that started out as a string, and an 
object that started out as a date/datetime object?


Frank


Re: Bulletproof json.dump?

2020-07-06 Thread Frank Millman

On 2020-07-06 2:06 PM, Jon Ribbens via Python-list wrote:

On 2020-07-06, Chris Angelico  wrote:

On Mon, Jul 6, 2020 at 8:36 PM Adam Funk  wrote:

Is there a "bulletproof" version of json.dump somewhere that will
convert bytes to str, any other iterables to list, etc., so you can
just get your data into a file & keep working?


That's the PHP definition of "bulletproof" - whatever happens, no
matter how bad, just keep right on going.


While I agree entirely with your point, there is however perhaps room
for a bit more helpfulness from the json module. There is no sensible
reason I can think of that it refuses to serialize sets, for example.
Going a bit further and, for example, automatically calling isoformat()
on date/time/datetime objects would perhaps be a bit more controversial,
but would frequently be useful, and there's no obvious downside that
occurs to me.



I may be missing something, but that would cause a downside for me.

I store Python lists and dicts in a database by calling dumps() when 
saving them to the database and loads() when retrieving them.


If a date was 'dumped' using isoformat(), then on retrieval I would not 
know whether it was originally a string, which must remain as is, or was 
originally a date object, which must be converted back to a date object.


There is no perfect answer, but my solution works fairly well. When 
dumping, I use 'default=repr'. This means that dates get dumped as 
'datetime.date(2020, 7, 6)'. I look for that pattern on retrieval to 
detect that it is actually a date object.


I use the same trick for Decimal objects.

Maybe the OP could do something similar.
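The repr round-trip trick described above could be sketched like this 
(the regex and helper function are illustrative, not from the original 
code):

```python
# Sketch of the technique described above: dump with default=repr so
# dates become 'datetime.date(2020, 7, 6)', then detect that pattern
# on retrieval and rebuild the date object. Plain strings - even
# ISO-8601 ones - pass through untouched.
import datetime
import json
import re

DATE_PAT = re.compile(r'^datetime\.date\((\d+), (\d+), (\d+)\)$')

def restore(obj):
    if isinstance(obj, str):
        m = DATE_PAT.match(obj)
        if m:
            return datetime.date(*map(int, m.groups()))
    if isinstance(obj, list):
        return [restore(x) for x in obj]
    if isinstance(obj, dict):
        return {k: restore(v) for k, v in obj.items()}
    return obj

row = {'due': datetime.date(2020, 7, 6), 'note': '2020-07-06'}
dumped = json.dumps(row, default=repr)
loaded = restore(json.loads(dumped))
assert loaded['due'] == datetime.date(2020, 7, 6)  # back to a date object
assert loaded['note'] == '2020-07-06'              # string left as is
```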

Frank Millman



Re: [Beginner] Spliting input

2020-06-25 Thread Frank Millman

On 2020-06-25 2:13 PM, Bischoop wrote:

On 2020-06-25, Andrew Bell  wrote:

Without knowing the problem you're having, it's hard to answer.
This seems generally correct.



Error track:
Traceback (most recent call last):
   File "splitting.py", line 1, in 
   numb1,numb2=input("enter 1st and 2nd no ").split()
   ValueError: not enough values to unpack (expected 2, got 1)



Without arguments, split() splits on whitespace.

If you entered 2 numbers separated by a comma, but no spaces, there is 
no split.


Maybe you meant split(',') which will split on a comma.
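A quick illustration of the difference:

```python
# split() with no argument splits on runs of whitespace;
# split(',') splits on commas.
s = '3,4'
assert s.split() == ['3,4']         # no whitespace, nothing to split on
assert s.split(',') == ['3', '4']   # split on the comma instead
assert '3 4'.split() == ['3', '4']  # whitespace-separated input works

numb1, numb2 = '3,4'.split(',')
assert (numb1, numb2) == ('3', '4')
```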

Frank Millman




Re: Strings: double versus single quotes

2020-05-24 Thread Frank Millman

On 2020-05-24 9:58 AM, DL Neil via Python-list wrote:

On 24/05/20 5:43 PM, Frank Millman wrote:

On 2020-05-23 9:45 PM, DL Neil via Python-list wrote:


My habit with SQL queries is to separate them from other code, cf the 
usual illustration of having them 'buried' within the code, 
immediately before, or even part of, the query call.




I like that idea, as I find that I am embedding more and more SQL in 
my code.


How do you handle parameters? Do you leave placeholders ('?' or '%s') 
in the query, and leave it to the 'importer' of the query to figure 
out what is required?



Yes. Most "connector" software includes a feature which auto-magically 
escapes all variable-data - a valuable safety feature!


I've been experimenting by going further and providing app.devs with 
functions/methods, a mini-API if you will. Given that many/most don't 
like having to deal with SQL, the extra 'insulation' boosts my personal 
popularity...

(and I need as much of that as I can get!)


Ok. I will have to give it some thought.

I generate most of my SQL dynamically, constructing the query 
programmatically using the meta-data in my system.


But now I am constructing some more complex queries, which I can't 
generate automatically yet. I am hoping that a pattern emerges which I 
can use to automate them, but for now I am doing it by hand.


There are a number of parameters required, and it will not be obvious at 
first sight what values are required. If I am going to keep the queries 
in a separate module, I think that I will have to provide some sort of 
accompanying documentation with each query explaining what the required 
parameters are.


Thinking aloud, I may set up a separate module for the queries, but make 
each one a 'function', which specifies what data is required. The caller 
calls the function with the data as an argument, and the function uses 
it to build the parameter list and returns the SQL along with the 
parameters. The function can also contain documentation explaining how 
the query works.
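The 'query as a function' idea might look like this (the table, columns 
and function name are invented for illustration; the placeholder style 
depends on the DB-API driver in use):

```python
# Hypothetical sketch of a separate 'queries' module: each query is a
# documented function that returns (sql, params), ready for execute().

def overdue_invoices(cutoff_date, min_amount):
    """Invoices unpaid as at cutoff_date with a balance >= min_amount.

    Returns (sql, params) suitable for cursor.execute(sql, params).
    Placeholder style here is '?' (sqlite3); use '%s' for drivers
    such as psycopg2.
    """
    sql = (
        "SELECT inv_no, cust_id, balance "
        "FROM invoices "
        "WHERE paid = 0 AND inv_date <= ? AND balance >= ? "
        "ORDER BY inv_date"
    )
    params = (cutoff_date, min_amount)
    return sql, params
```

The caller then does e.g. `cur.execute(*overdue_invoices('2020-05-24', 100))`, 
and the docstring travels with the query.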


As you say, this has the benefit of separating the SQL from the Python 
code, so I will definitely pursue this idea.


Thanks

Frank



Re: Strings: double versus single quotes

2020-05-23 Thread Frank Millman

On 2020-05-23 9:45 PM, DL Neil via Python-list wrote:


My habit with SQL queries is to separate them from other code, cf the 
usual illustration of having them 'buried' within the code, immediately 
before, or even part of, the query call.




I like that idea, as I find that I am embedding more and more SQL in my 
code.


How do you handle parameters? Do you leave placeholders ('?' or '%s') in 
the query, and leave it to the 'importer' of the query to figure out 
what is required?


Frank Millman




Re: "pip" error message

2020-04-21 Thread Frank Millman

On 2020-04-21 12:02 PM, Simone Bravin wrote:


I found that I had downloaded Python from what I would call "automatic 
check version link" and that downloaded the 32-bit version, but my 
notebook has 64-bit, so I changed the version to the 64-bit one.




I have had the same problem in the past. Most software download pages 
seem to detect whether you are running 32 or 64 bit, and default to the 
correct download option.


Python shows a big yellow 'Download the latest version for Windows' 
button, which most people will select by default. However, it actually 
downloads the 32 bit version, without any indication that this is what 
it is doing.


I could have the details wrong - I am not going to repeat the install 
just to check. But this is what I recall from the last time I did this.


This seems like a good time to bring this up, as in another thread we 
are discussing how to improve the download experience on Windows for 
newbies.


Frank Millman




Re: Intermittent bug with asyncio and MS Edge

2020-03-25 Thread Frank Millman

On 2020-03-24 8:39 PM, Barry Scott wrote:




On 24 Mar 2020, at 11:54, Frank Millman  wrote:


I decided to concentrate on using Wireshark to detect the difference between a 
Python3.7 session and a Python3.8 session. Already I can see some differences.

There is only one version of my program. I am simply running it with either 'py 
-3.7 ' or 'py -3.8'. And I am ignoring Chrome at this stage, as it is only that 
Edge shows the problem.

First point - Python3.7 also shows a lot of [RST, ACK] lines. My guess is that 
this is caused by my 'protocol violation' of sending a 'Keep-Alive' header and 
then closing the connection. Python3.7 does not suffer from dropping files, so 
I now think this is a sidetrack. I will fix my program when this is all over, 
but for now I don't want to touch it.


Yes your protocol violation is why you see [RST, ACK].

I'm confused you know that the code has a critical bug in it and you have not 
fixed it?
Just send "Connection: close" and I'd assume all will work.



Well, the reason is simply that I wanted to understand why my code that 
worked all the way from 3.4 through 3.7 stopped working in 3.8. I 
realise that my code is faulty, but I still wanted to know what the 
trigger was that caused the bug to appear.


From my testing with Wireshark, I can see that both Edge and Chrome 
create 20 connections to GET 20 files. The difference seems to be that 
Chrome does not attempt to re-use a connection, even though both client 
and server have sent Keep-Alive headers. Edge does attempt to re-use the 
connection.


The difference between 3.7 and 3.8 is that 3.7 sends the data in 
separate packets for the status, each header, and then each chunk, 
whereas 3.8 sends the whole lot in a single packet.


My guess is that 3.7 is slower to send the files, so Edge starts up all 
20 connections before it has finished receiving the first one, whereas 
with 3.8, by the time it has opened a few connections the first file has 
been received, so it tries to re-use the same connection to receive the 
next one. By then I have closed the connection. If I am right, it is 
surprising that my program worked *some* of the time.


The same reasoning would explain why it worked when connecting from a 
remote host. There would be enough delay to force it into the same 
behaviour as 3.7.


It has been an interesting ride, and I have learned a lot. I will now 
look into fixing my program. The easy fix is to just send 'Connection: 
Close', but I will do it properly and implement 'Keep-Alive'.


Thanks all

Frank


Re: Intermittent bug with asyncio and MS Edge

2020-03-24 Thread Frank Millman

On 2020-03-24 1:54 PM, Frank Millman wrote:

On 2020-03-23 1:56 PM, Frank Millman wrote:

I have one frustration with Wireshark. I will mention it in case anyone 
has a solution.


I can see that Edge opens multiple connections. I am trying to track the 
activity on each connection separately. I can export the data to csv, 
which makes it easier to work on. But while the TCP lines include the 
source and destination ports, the HTTP lines do not, so I don't know 
which connection they belong to. If I view the data in Wireshark's gui 
it does show the ports, so the data is there somewhere. Does anyone know 
how to include it in the csv output?




I solved my Wireshark problem by exporting the data as text. A much more 
wordy format, and fiddly to parse, but the info I am looking for is there.


I am getting some interesting results now, such as a second GET 
interrupting an existing session. However, my brain is now frazzled, so 
I will continue tomorrow.


Frank


Re: Intermittent bug with asyncio and MS Edge

2020-03-24 Thread Frank Millman

On 2020-03-23 1:56 PM, Frank Millman wrote:

On 2020-03-23 12:57 PM, Chris Angelico wrote:

On Mon, Mar 23, 2020 at 8:03 PM Frank Millman  wrote:


On 2020-03-22 12:11 PM, Chris Angelico wrote:
On Sun, Mar 22, 2020 at 8:30 PM Frank Millman  
wrote:


On 2020-03-22 10:45 AM, Chris Angelico wrote:


If you can recreate the problem with a single socket and multiple
requests, that would be extremely helpful. I also think it's highly
likely that this is the case.



I am working on a stripped-down version, but I realise there are a few
things I have not grasped.

Hope you don't mind, but can you scan through what follows and tell me
if I am on the right lines?


No probs!



[...]

Really appreciate the one-on-one tuition. I am learning a lot!




If this all makes sense, I should write two versions of the client
program, one using a single connection, and one using a pool of 
connections.




Possibly! I think you'll most likely see that one of those behaves
perfectly normally, and you only trigger the issue in the other. So
you could move forward with just one test program.



Well, I have got the first one working - single connection - and so far 
it has not gone wrong.


However, it is difficult to be sure that I am comparing apples with 
apples. I have written my test server to handle 'Keep-Alive' correctly, 
but as I mentioned earlier, my live program closes the connection after 
each transfer. So now I have to make my test server do the same, and 
change my test client to react to that and re-open the connection each 
time. I will make the changes and see how that behaves.


Of course now I am in the murky waters of trying to second-guess how 
Edge reacts to that. Presumably that is where Wireshark will be useful. 
I will keep you posted.


Here is a progress report.

I decided to concentrate on using Wireshark to detect the difference 
between a Python3.7 session and a Python3.8 session. Already I can see 
some differences.


There is only one version of my program. I am simply running it with 
either 'py -3.7 ' or 'py -3.8'. And I am ignoring Chrome at this stage, 
as it is only that Edge shows the problem.


First point - Python3.7 also shows a lot of [RST, ACK] lines. My guess 
is that this is caused by my 'protocol violation' of sending a 
'Keep-Alive' header and then closing the connection. Python3.7 does not 
suffer from dropping files, so I now think this is a sidetrack. I will 
fix my program when this is all over, but for now I don't want to touch it.


When I send the response to the initial connection, I write a status 
line, then multiple header lines, then an html file. I then close the 
connection. Python3.7 sends a packet with the status, then a separate 
packet for each header, then a packet with the file. Python3.8 sends a 
packet with the status, then merges everything else into a single packet 
and sends it in one go. I just mention this as an indication that quite 
a lot has changed between my two versions of Python.


I have one frustration with Wireshark. I will mention it in case anyone 
has a solution.


I can see that Edge opens multiple connections. I am trying to track the 
activity on each connection separately. I can export the data to csv, 
which makes it easier to work on. But while the TCP lines include the 
source and destination ports, the HTTP lines do not, so I don't know 
which connection they belong to. If I view the data in Wireshark's gui 
it does show the ports, so the data is there somewhere. Does anyone know 
how to include it in the csv output?


That's all for now. I will keep you posted.

Frank



Re: Intermittent bug with asyncio and MS Edge

2020-03-23 Thread Frank Millman

On 2020-03-23 12:57 PM, Chris Angelico wrote:

On Mon, Mar 23, 2020 at 8:03 PM Frank Millman  wrote:


On 2020-03-22 12:11 PM, Chris Angelico wrote:

On Sun, Mar 22, 2020 at 8:30 PM Frank Millman  wrote:


On 2020-03-22 10:45 AM, Chris Angelico wrote:


If you can recreate the problem with a single socket and multiple
requests, that would be extremely helpful. I also think it's highly
likely that this is the case.



I am working on a stripped-down version, but I realise there are a few
things I have not grasped.

Hope you don't mind, but can you scan through what follows and tell me
if I am on the right lines?


No probs!



[...]

Really appreciate the one-on-one tuition. I am learning a lot!




If this all makes sense, I should write two versions of the client
program, one using a single connection, and one using a pool of connections.



Possibly! I think you'll most likely see that one of those behaves
perfectly normally, and you only trigger the issue in the other. So
you could move forward with just one test program.



Well, I have got the first one working - single connection - and so far 
it has not gone wrong.


However, it is difficult to be sure that I am comparing apples with 
apples. I have written my test server to handle 'Keep-Alive' correctly, 
but as I mentioned earlier, my live program closes the connection after 
each transfer. So now I have to make my test server do the same, and 
change my test client to react to that and re-open the connection each 
time. I will make the changes and see how that behaves.


Of course now I am in the murky waters of trying to second-guess how 
Edge reacts to that. Presumably that is where Wireshark will be useful. 
I will keep you posted.


Frank



Re: Intermittent bug with asyncio and MS Edge

2020-03-23 Thread Frank Millman

On 2020-03-22 12:11 PM, Chris Angelico wrote:

On Sun, Mar 22, 2020 at 8:30 PM Frank Millman  wrote:


On 2020-03-22 10:45 AM, Chris Angelico wrote:


If you can recreate the problem with a single socket and multiple
requests, that would be extremely helpful. I also think it's highly
likely that this is the case.



I am working on a stripped-down version, but I realise there are a few 
things I have not grasped.


Hope you don't mind, but can you scan through what follows and tell me 
if I am on the right lines?


Both the client and the server can send a header with 'Keep-alive', but 
what does it actually mean?


If the client sends it, does that mean that it wants the server to keep 
the connection open, and only close it when the client closes it from 
the other end?


Conversely, if the server sends it, does it mean that it wants the 
client to keep the connection open? If so, under what condition would 
the server close the connection (other than a timeout). Should the 
server only send 'Keep-alive' if it receives 'Keep-alive'?


In my program, when I send a file in response to a GET, I send a header 
with 'Keep-alive', then I send the file, then I close the connection. 
This now seems wrong. It could even be the cause of my bug, but then why 
has it only appeared now? Both Edge and Chrome send 'Keep-alive' headers.


If I am thinking along the right lines, then the exchange should go like 
this -


Client sends request, with 'Keep-alive' header.

Server sends response, but does not close the connection. The server 
should reply with a 'Keep-alive' header. If it does not, the client will 
close the connection, in which case the server will also close.


Assuming that they both send 'Keep-alive', the onus is on the client to 
close the connection when it has no more requests. The server should 
have a timeout in case the client goes away.
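The server-side timeout mentioned above can be sketched with 
asyncio.wait_for (the handler shape and names here are invented, not 
taken from the original program):

```python
# Sketch: a keep-alive-aware handler that drops idle connections.
# Each read is wrapped in asyncio.wait_for, so a client that holds
# the connection open but goes quiet is eventually disconnected.
import asyncio

async def handle(reader, writer, idle_timeout=30):
    try:
        while True:
            try:
                line = await asyncio.wait_for(reader.readline(), idle_timeout)
            except asyncio.TimeoutError:
                break                 # client went quiet - close our side
            if not line:
                break                 # client closed the connection
            # ... parse the request line/headers and send a response here ...
    finally:
        writer.close()
        await writer.wait_closed()
```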


Assuming that the above is correct, the client will rely on 
'Content-length' to determine when it has received the entire request. 
If the client has more than one request, it will send the first, wait 
for the response, when fully received as per the 'Content-length' it 
will send the next one, until all requests have been sent and all 
responses received, at which point it will close the connection.


All this assumes only one connection. Alternatively the client could 
open multiple connections for the requests. In that case it would make 
sense to use 'Connection: Close', so that the server can close the 
connection straight away, making it available for reuse.


If this all makes sense, I should write two versions of the client 
program, one using a single connection, and one using a pool of connections.
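A minimal sketch of the first variant - one connection, sequential 
requests, using Content-Length to delimit each response (all names are 
invented; a real client would also need to handle chunked bodies and 
error statuses):

```python
# Sketch: a single-connection keep-alive client. It sends one request
# at a time, reads status line + headers until the blank line, then
# reads exactly Content-Length bytes of body before sending the next
# request. The client closes the connection when it has no more requests.
import asyncio

async def fetch_all(host, port, paths):
    reader, writer = await asyncio.open_connection(host, port)
    bodies = []
    for path in paths:
        request = (f'GET {path} HTTP/1.1\r\n'
                   f'Host: {host}\r\n'
                   'Connection: keep-alive\r\n'
                   '\r\n')
        writer.write(request.encode())
        await writer.drain()
        content_length = 0
        while True:                      # status line + headers
            line = await reader.readline()
            if line in (b'\r\n', b''):
                break
            if line.lower().startswith(b'content-length:'):
                content_length = int(line.split(b':', 1)[1])
        bodies.append(await reader.readexactly(content_length))
    writer.close()                       # client decides when it is done
    await writer.wait_closed()
    return bodies
```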


All comments appreciated!

Frank



Re: Intermittent bug with asyncio and MS Edge

2020-03-22 Thread Frank Millman

On 2020-03-22 1:01 PM, Chris Angelico wrote:

On Sun, Mar 22, 2020 at 12:45 AM Frank Millman  wrote:


Hi all

I have a strange intermittent bug.

The role-players -
  asyncio on Python 3.8 running on Windows 10
  Microsoft Edge running as a browser on the same machine

The bug does not occur with Python 3.7.
It does not occur with Chrome or Firefox.
It does not occur when MS Edge connects to another host on the network,
running the same Python program (Python 3.8 on Fedora 31).


What exact version of Python 3.7 did you test? I'm looking through the
changes to asyncio and came across this one, which may have some
impact.


Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 
64 bit (AMD64)] on win32


Frank



Re: Intermittent bug with asyncio and MS Edge

2020-03-22 Thread Frank Millman

On 2020-03-22 11:00 AM, Barry Scott wrote:




On 22 Mar 2020, at 07:56, Frank Millman  wrote:

On 2020-03-21 8:04 PM, Barry Scott wrote:

I'd look at the network traffic with wireshark to see if there is anything 
different between edge and the other browsers.


You are leading me into deep waters here :-)  I have never used Wireshark 
before. I have now downloaded it and am running it - it generates a *lot* of 
data, most of which I do not understand yet!


You can tell wireshark to only capture on one interface and to only capture 
packets for port 80.
(Captureing HTTPS means you cannot decode the packets without going deeper I 
recall)

Then you can tell wireshark to decode the captured data for http to drop a lot 
of the lower level details.



Thanks. I am more or less doing that. Interestingly the [RST,ACK] 
messages appear on the tcp packets, so if I filter on http I do not see 
them.






One thing immediately stands out. When I run it with MS Edge and Python3.8, it 
shows a lot of lines highlighted in red, with the symbols [RST,ACK]. They do 
not appear when running Chrome, and they do not appear when running Python3.7.


As Chris said that should not happen.



As I replied to Chris, they appear in packets sent *from* Python *to* Edge.



I have another data point. I tried putting an asyncio.sleep() after sending 
each file. A value of 0.01 made no difference, but a value of 0.1 makes the 
problem go away.


What is the async way to wait for the transmit buffers to drain?



Not sure what you are asking. I am just doing what it says in the docs -

=

write(data)
The method attempts to write the data to the underlying socket 
immediately. If that fails, the data is queued in an internal write 
buffer until it can be sent.


The method should be used along with the drain() method:

stream.write(data)
await stream.drain()

=

coroutine drain()
Wait until it is appropriate to resume writing to the stream. Example:

writer.write(data)
await writer.drain()
This is a flow control method that interacts with the underlying IO 
write buffer. When the size of the buffer reaches the high watermark, 
drain() blocks until the size of the buffer is drained down to the low 
watermark and writing can be resumed. When there is nothing to wait for, 
the drain() returns immediately.


=

Frank



Re: Intermittent bug with asyncio and MS Edge

2020-03-22 Thread Frank Millman

On 2020-03-22 10:45 AM, Chris Angelico wrote:

On Sun, Mar 22, 2020 at 6:58 PM Frank Millman  wrote:

I'd look at the network traffic with wireshark to see if there is anything 
different between edge and the other browsers.



You are leading me into deep waters here :-)  I have never used
Wireshark before. I have now downloaded it and am running it - it
generates a *lot* of data, most of which I do not understand yet!

One thing immediately stands out. When I run it with MS Edge and
Python3.8, it shows a lot of lines highlighted in red, with the symbols
[RST,ACK]. They do not appear when running Chrome, and they do not
appear when running Python3.7.


Interesting. RST means "Reset" and is sent when the connection is
closed. Which direction were these packets sent (Edge to Python or
Python to Edge)? You can tell by the source and destination ports -
one of them is going to be the port Python is listening on (eg 80 or
443), so if the destination port is 80, it's being sent *to* Python,
and if the source port is 80, it's being sent *from* Python.



They are all being sent *from* Python *to* Edge.


I have another data point. I tried putting an asyncio.sleep() after
sending each file. A value of 0.01 made no difference, but a value of
0.1 makes the problem go away.


Interesting also.

Can you recreate the problem without Edge? It sounds like something's
going on with concurrent transfers, so it'd be awesome if you can
replace Edge with another Python program, and then post both programs.



Do you mean write a program that emulates a browser - make a connection, 
receive the HTML page, send a GET request for each file, and receive the 
results?


I will give it a go!


Also of interest: Does the problem go away if you change "Connection:
Keep-Alive" to "Connection: Close" in your headers?



Yes, the problem does go away.

Frank



Re: Intermittent bug with asyncio and MS Edge

2020-03-22 Thread Frank Millman

On 2020-03-21 8:04 PM, Barry Scott wrote:




On 21 Mar 2020, at 13:43, Frank Millman  wrote:

Hi all

I have a strange intermittent bug.

The role-players -
asyncio on Python 3.8 running on Windows 10
Microsoft Edge running as a browser on the same machine

The bug does not occur with Python 3.7.
It does not occur with Chrome or Firefox.
It does not occur when MS Edge connects to another host on the network, running 
the same Python program (Python 3.8 on Fedora 31).

The symptoms -
On receiving a connection, I send an HTML page to the browser,
which has 20 lines like this -



...

Intermittently, one or other of the script files is not received by MS Edge.


[...]
I don't know whether the problem lies with Python or MS Edge, but as 
it does not happen with Python 3.7, I am suspecting that something 
changed in 3.8 which does not match MS Edge's expectations.


I'd look at the network traffic with wireshark to see if there is anything 
different between edge and the other browsers.



You are leading me into deep waters here :-)  I have never used 
Wireshark before. I have now downloaded it and am running it - it 
generates a *lot* of data, most of which I do not understand yet!


One thing immediately stands out. When I run it with MS Edge and 
Python3.8, it shows a lot of lines highlighted in red, with the symbols 
[RST,ACK]. They do not appear when running Chrome, and they do not 
appear when running Python3.7.


I have another data point. I tried putting an asyncio.sleep() after 
sending each file. A value of 0.01 made no difference, but a value of 
0.1 makes the problem go away.


I will keep digging, but I thought I would post this information now in 
case it helps with diagnosis.


Frank



Intermittent bug with asyncio and MS Edge

2020-03-21 Thread Frank Millman

Hi all

I have a strange intermittent bug.

The role-players -
asyncio on Python 3.8 running on Windows 10
Microsoft Edge running as a browser on the same machine

The bug does not occur with Python 3.7.
It does not occur with Chrome or Firefox.
It does not occur when MS Edge connects to another host on the network, 
running the same Python program (Python 3.8 on Fedora 31).


The symptoms -
On receiving a connection, I send an HTML page to the browser,
which has 20 lines like this -



...

Intermittently, one or other of the script files is not received by MS Edge.

I have checked the Network tab in Developer Tools in MS Edge. It shows 
the first few requests getting a Status 200 OK, then some are shown as 
'Pending'. This seems to be where the problem occurs.


I am hoping that someone can give me some hints about how to debug this.

My function to send the script file looks like this -

async def send_file(writer, fname):

    status = 'HTTP/1.1 200 OK\r\n'
    writer.write(status.encode())

    headers = []
    headers.append(('CONNECTION', 'keep-alive'))
    headers.append(('DATE', email.utils.formatdate(usegmt=True)))
    headers.append(('SERVER', f'Python {sys.version.split()[0]} asyncio'))
    headers.append(('Content-type', 'text/javascript'))
    headers.append(('Transfer-Encoding', 'chunked'))
    for key, val in headers:
        writer.write(f'{key}: {val}\r\n'.encode())
    writer.write('\r\n'.encode())
    await writer.drain()

    with open(fname, 'rb') as fd:
        chunk = fd.read(8192)
        while chunk:
            # chunked encoding: hex length, CRLF, data, CRLF
            writer.write(hex(len(chunk))[2:].encode() + b'\r\n')
            writer.write(chunk + b'\r\n')
            await writer.drain()
            chunk = fd.read(8192)
        writer.write(b'0\r\n\r\n')  # zero-length chunk ends the body
        await writer.drain()

    writer.close()
    await writer.wait_closed()

I have asked the same question on StackOverflow, from an MS Edge 
perspective -


https://stackoverflow.com/questions/60785767/ms-edge-randomly-does-not-load-script

I don't know whether the problem lies with Python or MS Edge, but as it 
does not happen with Python 3.7, I am suspecting that something changed 
in 3.8 which does not match MS Edge's expectations.


Any hints much appreciated.

Frank Millman



Re: Asyncio question (rmlibre)

2020-02-28 Thread Frank Millman



On 2020-02-28 1:37 AM, rmli...@riseup.net wrote:
> What resources are you trying to conserve?
>
> If you want to try conserving time, you shouldn't have to worry about
> starting too many background tasks. That's because asyncio code was
> designed to be extremely time efficient at handling large numbers of
> concurrent async tasks.
>

Thanks for the reply.

That is exactly what I want, and in an earlier response Greg echoes what 
what you say here - background tasks are lightweight and are ideal for 
my situation.


Frank



Re: python

2020-02-22 Thread Frank Millman

On 2020-02-23 9:26 AM, luka beria wrote:

hi guys i have another question.how to  find the minimum and maximum numbers in 
this list without using the min () and max () functions use   while loop



Hi Luka

Did you read the reply to your previous question from Dennis Lee Bieber?

If not, here it is again -

"""
This smells like a homework assignment. We do NOT provide solutions to
homework. In particular, we will not provide you with an algorithm to 
solve such a simple problem.


We WILL provide help with Python syntax and semantics, but YOU need to
provide a starting point of what you've tried, what the results are, and
what you expected for the results.

Suggestion: pretend you are the computer. Use a piece of paper to track
current MAX and current MIN. You get one item from the list at a time --
how do you determine if it is to become a new MAX, a new MIN, or is ignored
as neither.

Now -- write a program does just that...

"""

Try his suggestion, and come back here if you get stuck.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Why is passing loop argument to asyncio.Event deprecated?

2020-02-22 Thread Frank Millman

Hi all

Why is 'explicit passing of a loop argument to asyncio.Event' deprecated 
(see What's new in Python 3.8)?


I use this in my project. I can find a workaround, but it is not elegant.

I can explain my use case if requested, but I was just curious to find 
out the reason.


Thanks

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio question

2020-02-21 Thread Frank Millman

On 2020-02-21 11:13 PM, Greg Ewing wrote:

On 21/02/20 7:59 pm, Frank Millman wrote:
My first attempt was to create a background task for each session 
which runs for the life-time of the session, and 'awaits' its queue. 
It works, but I was concerned about having a lot of background tasks 
active at the same time.


The whole point of asyncio is to make tasks very lightweight, so you
can use as many of them as is convenient without worries. One task
per client sounds like the right thing to do here.



Perfect. Thanks so much.

Frank

--
https://mail.python.org/mailman/listinfo/python-list


Asyncio question

2020-02-20 Thread Frank Millman

Hi all

I use asyncio in my project, and it works very well without my having to 
understand what goes on under the hood. It is a multi-user client/server 
system, and I want it to scale to many concurrent users. I have a 
situation where I have to decide between two approaches, and I want to 
choose the least resource-intensive, but I find it hard to reason about 
which, if either, is better.


I use HTTP. On the initial connection from a client, I set up a session 
object, and the session id is passed to the client. All subsequent 
requests from that client include the session id, and the request is 
passed to the session object for handling.


It is possible for a new request to be received from a client before the 
previous one has been completed, and I want each request to be handled 
atomically, so each session maintains its own asyncio.Queue(). The main 
routine gets the session id from the request and 'puts' the request in 
the appropriate queue. The session object 'gets' from the queue and 
handles the request. It works well.
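
A minimal sketch of that arrangement (names and the shutdown sentinel are illustrative, not from the original post):

```python
import asyncio

async def session_worker(queue, handled):
    # One long-lived task per session; requests are taken off the queue
    # one at a time, so each request is handled atomically.
    while True:
        request = await queue.get()
        if request is None:  # sentinel: session closed
            break
        handled.append(request)  # stand-in for real request handling

async def main():
    handled = []
    queue = asyncio.Queue()
    task = asyncio.create_task(session_worker(queue, handled))
    for request in ('req-1', 'req-2', 'req-3'):
        await queue.put(request)
    await queue.put(None)
    await task
    return handled

print(asyncio.run(main()))  # ['req-1', 'req-2', 'req-3']
```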


The question is, how to arrange for each session to 'await' its queue. 
My first attempt was to create a background task for each session which 
runs for the life-time of the session, and 'awaits' its queue. It works, 
but I was concerned about having a lot of background tasks active at the 
same time.


Then I came up with what I thought was a better idea. On the initial 
connection, I create the session object, send the response to the 
client, and then 'await' the method that sets up the session's queue. 
This also works, and there is no background task involved. However, I 
then realised that the initial response handler never completes, and 
will 'await' until the session is closed.


Is this better, worse, or does it make no difference? If it makes no 
difference, I will lean towards the first approach, as it is easier to 
reason about what is going on.


Thanks for any advice.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Change in behaviour Python 3.7 > 3.8

2020-02-07 Thread Frank Millman

On 2020-02-07 1:06 PM, Barry Scott wrote:




On 7 Feb 2020, at 05:27, Frank Millman  wrote:

@Barry
I agree that __del__() is rarely useful, but I have not come up with an 
alternative to achieve what I want to do. My app is a long-running server, and 
creates many objects on-the-fly depending on user input. They should be 
short-lived, and be removed when they go out of scope, but I am concerned that 
I may leave a dangling reference somewhere that keeps one of them alive, 
resulting in a memory leak over time. Creating a __del__() method and displaying 
a message to say 'I am being deleted' has proved effective, and has in fact 
highlighted a few cases where there was a real problem.


I have faced the same problem with leaking of objects.

What I did was use the python gc to find all the objects.

First I call gc.collect() to clean up all the objects what can be deleted.
Then I call gc.get_objects() to get a list of all the objects the gc is 
tracking.
I process that list into a collections.Counter() with counts of each type of 
object
that I call a snapshot.

By creating the snapshots at interesting times in the services life I can diff 
the
snapshots and see how the object usage changes over time.

The software I work on has an admin HTTP interface used internally that I use 
to request
snapshots while the service is running.
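
The snapshot technique described here can be sketched along these lines (a rough illustration, not the poster's actual code; the 1,000 throwaway lists merely simulate a leak):

```python
import gc
from collections import Counter

def snapshot():
    # Collect first so only genuinely live objects are counted.
    gc.collect()
    return Counter(type(obj).__name__ for obj in gc.get_objects())

def diff_snapshots(before, after):
    # Net change in live-object counts between two snapshots.
    return {name: after[name] - before[name]
            for name in before.keys() | after.keys()
            if after[name] != before[name]}

before = snapshot()
leak = [[] for _ in range(1000)]   # simulate objects kept alive
after = snapshot()
print(diff_snapshots(before, after).get('list'))  # at least the 1000 leaked lists
```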



This is really useful. I will experiment with this and try to 
incorporate it into my project.


Thanks very much.

Frank

--
https://mail.python.org/mailman/listinfo/python-list


Re: Change in behaviour Python 3.7 > 3.8

2020-02-06 Thread Frank Millman

On 2020-02-06 2:58 PM, Frank Millman wrote:

[...]


I have a module (A) containing common objects shared by other modules. I 
have a module (B) which imports one of these common objects - a set().



[...]


This has worked for years, but now when the __del__ method is called, 
the common object, which was a set(), has become None.


My assumption is that Module A gets cleaned up before Module B, and when 
Module B tries to access the common set() object it no longer exists.


I have a workaround, so I am just reporting this for the record.


Thanks to all for the replies.

@Serhiy
I import the common object *from* Module A.

@Dennis
Thanks for the references. I knew that __del__() should not be relied 
on, but I have not seen the reasons spelled out so clearly before.


@Barry
I agree that __del__() is rarely useful, but I have not come up with an 
alternative to achieve what I want to do. My app is a long-running 
server, and creates many objects on-the-fly depending on user input. 
They should be short-lived, and be removed when they go out of scope, 
but I am concerned that I may leave a dangling reference somewhere that 
keeps one of them alive, resulting in a memory leak over time. Creating 
a __del__() method and displaying a message to say 'I am being deleted' 
has proved effective, and has in fact highlighted a few cases where 
there was a real problem.


My solution to this particular issue is to explicitly delete the global 
instance at program end, so I do not rely on the interpreter to clean it 
up. It works.


Thanks again

Frank
--
https://mail.python.org/mailman/listinfo/python-list


Change in behaviour Python 3.7 > 3.8

2020-02-06 Thread Frank Millman

Hi all

I have noticed a change in behaviour in Python 3.8 compared with 
previous versions of Python going back to at least 2.7. I am pretty sure 
that it is not a problem, and is caused by my relying on a certain 
sequence of events at shutdown, which of course is not guaranteed. 
However, any change in behaviour is worth reporting, just in case it was 
unintended, so I thought I would mention it here.


I have a module (A) containing common objects shared by other modules. I 
have a module (B) which imports one of these common objects - a set().


Module B defines a Class, and creates a global instance of this class 
when the module is created. This instance is never explicitly deleted, 
so I assume it gets implicitly deleted at shutdown. It has a __del__() 
method (only for temporary debugging purposes, so will be removed for 
production) and the __del__ method uses the set() object imported from 
Module A.


This has worked for years, but now when the __del__ method is called, 
the common object, which was a set(), has become None.


My assumption is that Module A gets cleaned up before Module B, and when 
Module B tries to access the common set() object it no longer exists.


I have a workaround, so I am just reporting this for the record.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: lxml - minor problem appending new element

2020-02-03 Thread Frank Millman

On 2020-02-03 10:39 AM, Peter Otten wrote:

Frank Millman wrote:


This is a minor issue, and I have found an ugly workaround, but I
thought I would mention it.


Like this?

children = list(xml)
for y in children:
    print(etree.tostring(y))
    if y.get('z') == 'c':
        child = etree.Element('y', attrib={'z': 'd'})
        xml.append(child)
        children.append(child)

It doesn't look /that/ ugly to me.
  


That is not bad. My actual solution was to revert to the non-pythonic 
method of iterating over a list -


pos = 0
while pos < len(xml):
    y = xml[pos]
    print(etree.tostring(y))
    if y.get('z') == 'c':
        xml.append(etree.Element('y', attrib={'z': 'd'}))
    pos += 1

That way, if I append to xml, I automatically pick up the appended element.


In Python I can iterate through a list, and on a certain condition
append a new item to the list, which is then included in the iteration.


Personally I follow the rule "never mutate a list you are iterating over",
even for appends, where the likelihood of problems is small:

items = ["a"]
for item in items:
    if item == "a": items.append("a")
  


I did feel a bit uneasy doing it, but once I had got it working it did 
not feel too bad. I did not test for appending from the last item, so 
that bug has just bitten me now, but I will run with my workaround 
unless/until lxml is fixed.




Is there any chance that this can be looked at, or is it just the way it
works?


File a bug report and see if the author is willing to emulate the list
behaviour.
  


Will do.


BTW, I see that ElementTree in the standard library does not have this
problem.


Maybe uses a list under the hood.



Thanks for the advice.

Frank

--
https://mail.python.org/mailman/listinfo/python-list


lxml - minor problem appending new element

2020-02-02 Thread Frank Millman

Hi all

I usually send lxml queries to the lxml mailing list, but it appears to 
be not working, so I thought I would try here.


This is a minor issue, and I have found an ugly workaround, but I 
thought I would mention it.


In Python I can iterate through a list, and on a certain condition 
append a new item to the list, which is then included in the iteration.


>>> x = ['a', 'b', 'c']
>>> for y in x:
...   print(y)
...   if y == 'b':
... x.append('d')
...
a
b
c
d
>>> x
['a', 'b', 'c', 'd']
>>>

The same thing works in lxml -

>>> lmx = '<x><y z="a"/><y z="b"/><y z="c"/></x>'
>>> xml = etree.fromstring(lmx)
>>> for y in xml:
...   print(etree.tostring(y))
...   if y.get('z') == 'b':
...     xml.append(etree.Element('y', attrib={'z': 'd'}))
...
b'<y z="a"/>'
b'<y z="b"/>'
b'<y z="c"/>'
b'<y z="d"/>'
>>> etree.tostring(xml)
b'<x><y z="a"/><y z="b"/><y z="c"/><y z="d"/></x>'

However, if it happens that the condition is met on the last item in the 
list, Python still works, but lxml does not include the appended item in 
the iteration. In the following, the only change is checking for 'c' 
instead of 'b'.


>>> x = ['a', 'b', 'c']
>>> for y in x:
...   print(y)
...   if y == 'c':
... x.append('d')
...
a
b
c
d
>>> x
['a', 'b', 'c', 'd']
>>>

>>> lmx = '<x><y z="a"/><y z="b"/><y z="c"/></x>'
>>> xml = etree.fromstring(lmx)
>>> for y in xml:
...   print(etree.tostring(y))
...   if y.get('z') == 'c':
...     xml.append(etree.Element('y', attrib={'z': 'd'}))
...
b'<y z="a"/>'
b'<y z="b"/>'
b'<y z="c"/>'
>>> etree.tostring(xml)
b'<x><y z="a"/><y z="b"/><y z="c"/><y z="d"/></x>'

As you can see, the last element is correctly appended, but is not 
included in the iteration.


Is there any chance that this can be looked at, or is it just the way it 
works?


BTW, I see that ElementTree in the standard library does not have this 
problem.


Thanks

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Print statement

2020-01-28 Thread Frank Millman

On 2020-01-28 12:14 PM, L A Smit wrote:

Please help me with this.

squares =input("\nSquares: ")

print(float((squares) *float(.15)) *(1.3))

Cant print answer.

   print(float((squares) * float(.15)) *(1.3))
TypeError: can't multiply sequence by non-int of type 'float'



You have some superfluous brackets around 'squares' and '1.3', which 
hinder readability.


Remove them and you get -

float(squares * float(.15)) * 1.3

Now you can see that you have the brackets in the wrong place - you are 
trying to multiply 'squares', which at this stage is still a string, by 
float(.15).


You can multiply a string by an integer, but not by a float -

>>> 'abc' * 3
'abcabcabc'
>>>

>>> 'abc' * 1.5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't multiply sequence by non-int of type 'float'
>>>

You probably meant
float(squares) * float(.15)

or more simply
float(squares) * .15

HTH

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Fun with IO

2020-01-21 Thread Frank Millman

On 2020-01-21 6:17 PM, Maxime S wrote:

Hi,

Le ven. 17 janv. 2020 à 20:11, Frank Millman  a écrit :



It works perfectly. However, some pdf's can be large, and there could be
concurrent requests, so I wanted to minimise the memory footprint. So I
tried passing the client_writer directly to the handler -

  await pdf_handler(client_writer)
  client_writer.write(b'\r\n')

It works! ReportLab accepts client_writer as a file-like object, and
writes to it directly. I cannot use chunking, so I just let it do its
thing.

Can anyone see any problem with this?



If the socket is slower than the PDF generation (which is probably always
the case, unless you have a very fast network), it will still have to be
buffered in memory (in this case in the writer buffer). Since writer.write
is non-blocking but is not a coroutine, it has to buffer. There is an
interesting blog post about that here that I recommend reading:
https://lucumr.pocoo.org/2020/1/1/async-pressure/



Thanks for the comments and for the link - very interesting.

Following the link led me to another essay -

https://github.com/guevara/read-it-later/issues/4558

I only understood a portion of it, but it forces you to question your 
assumptions, which is always a good thing.


Frank

--
https://mail.python.org/mailman/listinfo/python-list


Re: Sandboxing eval()

2020-01-21 Thread Frank Millman

On 2020-01-21 3:14 PM, inhahe wrote:

I have written a simple parser/evaluator that is sufficient for my
simple requirements, and I thought I was safe.

Then I saw this comment in a recent post by Robin Becker of ReportLab -

  "avoiding simple things like ' '*(10**200) seems quite difficult"

I realised that my method is vulnerable to this  and, like Robin, I have
not come up with an easy way to guard against it.

Frank Millman



Just use floats instead of integers.



I like that idea. I will probably use Decimal instead of float, but the 
principle is the same.


Thanks for the suggestion.

Frank

--
https://mail.python.org/mailman/listinfo/python-list


Fun with IO

2020-01-17 Thread Frank Millman

Hi all

I just tried something that I did not think had a snowball's chance of 
working, but to my surprise it did. I thought I would share it, partly 
for interest, and partly in case anyone can foresee any problems with it.


I use ReportLab to generate pdf files. I do not write them to disk, but 
use BytesIO to store them in memory. There are two main scenarios - send 
an email and attach a pdf (which I have not addressed yet), and stream a 
pdf via HTTP for display in a browser (which is the subject of this post).


I use asyncio, which accepts a request from a client and passes a 
client_reader and a client_writer to my handler. This is how I handle a 
pdf request at present -


with io.BytesIO() as pdf_fd:
    await pdf_handler(pdf_fd)
    pdf_fd.seek(0)  # rewind
    client_writer.write(pdf_fd.read())
    client_writer.write(b'\r\n')

I actually write it in 'chunks', but omitted that to keep it simple.

It works perfectly. However, some pdf's can be large, and there could be 
concurrent requests, so I wanted to minimise the memory footprint. So I 
tried passing the client_writer directly to the handler -


await pdf_handler(client_writer)
client_writer.write(b'\r\n')

It works! ReportLab accepts client_writer as a file-like object, and 
writes to it directly. I cannot use chunking, so I just let it do its thing.


Can anyone see any problem with this?

Thanks

Frank Millman
--
https://mail.python.org/mailman/listinfo/python-list


Re: name 'sys' is not defined

2019-12-29 Thread Frank Millman

On 2019-12-30 7:20 AM, safiq...@gmail.com wrote:

Deal all,
Could you please help me how can I avoid this problem
my Jupyter Notebook code was
help pls


[snip]


NameError: name 'sys' is not defined



I know nothing about Jupyter Notebook but somewhere, usually at the top, 
you have to add this line -


import sys

HTH

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Lists And Missing Commas

2019-12-23 Thread Frank Millman

On 2019-12-24 6:20 AM, Tim Daneliuk wrote:

On 12/23/19 7:52 PM, DL Neil wrote:


WebRef: https://docs.python.org/3/reference/lexical_analysis.html



Yep, that explains it, but it still feels non-regular to me.  From a pointy 
headed academic
POV, I'd like to see behavior consistent across types. Again ... what do I know?



From the Zen, 'Practicality beats purity'.

From the docs -

"""
String literals that are part of a single expression and have only 
whitespace between them will be implicitly converted to a single string 
literal. That is, ("spam " "eggs") == "spam eggs".

"""

I do not see it as 'concatenation', rather as a way of constructing a 
single string from a number of smaller chunks. The docs talk about 
'whitespace', but I would guess that the use of a single space is 
uncommon. More likely is the use of a newline.


I use this from time to time when constructing long string literals -

long_string = (
    "this is the first chunk "
    "this is the second chunk "
    "etc etc"
)

My 0.02c

Frank Millman
--
https://mail.python.org/mailman/listinfo/python-list


Re: datetime gotcha

2019-12-12 Thread Frank Millman

On 2019-12-11 10:51 PM, Skip Montanaro wrote:

Why is a dtm instance also an instance of dt?


The datetime type is, in fact, a subclass of the date type:


>>> import datetime
>>> datetime.date.__bases__
(<class 'object'>,)
>>> datetime.datetime.__bases__
(<class 'datetime.date'>,)
>>> datetime.time.__bases__
(<class 'object'>,)

Skip



Thanks for that.

I found a workaround.

>>> from datetime import date as dt, datetime as dtm
>>> type(dtm.now()) is dtm
True
>>> type(dtm.now()) is dt
False
>>>

I will run with this.

Frank


--
https://mail.python.org/mailman/listinfo/python-list


datetime gotcha

2019-12-10 Thread Frank Millman

Hi all

It took me a while to track down a bug that stemmed from this -

>>> from datetime import date as dt, time as tm, datetime as dtm
>>> x = dt.today()
>>> isinstance(x, dt)
True
>>> isinstance(x, dtm)
False
>>> y = dtm.now()
>>> isinstance(y, dt)
True   <--- ??
>>> isinstance(y, dtm)
True
>>> isinstance(y, tm)
False
>>>

Why is a dtm instance also an instance of dt?

From the docs, "A datetime object is a single object containing all the 
information from a date object and a time object."


If it was using multiple inheritance, a dtm should also be an instance 
of tm, but it is not.


This is using Python 3.7.2.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Instantiating sub-class from super

2019-10-24 Thread Frank Millman

On 2019-10-19 12:37 AM, DL Neil via Python-list wrote:

On 16/10/19 6:33 PM, Frank Millman wrote:

On 2019-10-14 10:55 PM, DL Neil via Python-list wrote:
Is there a technique or pattern for taking a (partially-) populated 
instance of a class, and re-creating it as an instance of one of its 
sub-classes?


Here is a link to an article entitled 'Understanding Hidden Subtypes'. 
It dates back to 2004, but I think it is still relevant. It addresses 
precisely the issues that you raise, but from a data-modelling 
perspective, not a programming one.


http://tdan.com/understanding-hidden-subtypes/5193

I found it invaluable, and applied the concepts in my own 
business/accounting application. Having created the ability to make 
subtypes visible and explicit, I found all kinds of unexpected uses 
for them.


The article seems to be missing a couple of images (Figure 1 and 
Figure 2) showing the data relationships. I downloaded the original 
article onto my computer years ago, and my local copy does have the 
images, so if you would like to see them let me know and I will upload 
my version somewhere to make it accessible.


Superb!

Yes please Frank - I've also approached it from the data/DB side, and 
thus presumably why I was puzzling over how one implements in Python.


(alternatively, email a PDF/similar directly)


Hi

I have just got back from a few days break and have only seen your 
message now.


Did you see the message I posted on the 17th with a download link? If 
not, would you like me to post it again?


Frank

--
https://mail.python.org/mailman/listinfo/python-list


Re: Instantiating sub-class from super

2019-10-17 Thread Frank Millman

On 2019-10-16 7:33 AM, Frank Millman wrote:


Here is a link to an article entitled 'Understanding Hidden Subtypes'. 
It dates back to 2004, but I think it is still relevant. It addresses 
precisely the issues that you raise, but from a data-modelling 
perspective, not a programming one.


http://tdan.com/understanding-hidden-subtypes/5193

I found it invaluable, and applied the concepts in my own 
business/accounting application. Having created the ability to make 
subtypes visible and explicit, I found all kinds of unexpected uses for 
them.


The article seems to be missing a couple of images (Figure 1 and Figure 
2) showing the data relationships. I downloaded the original article 
onto my computer years ago, and my local copy does have the images, so 
if you would like to see them let me know and I will upload my version 
somewhere to make it accessible.




I received a couple of requests for a link to the original article, so I 
uploaded it to filebin. Here is the link -


https://filebin.net/1ia5kcq2sbp57t1o

The file will be deleted automatically in one week. I don't know if that 
is good netiquette. Should I use a site where it never expires? Google 
Drive? Recommendations welcome.


Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: Instantiating sub-class from super

2019-10-15 Thread Frank Millman

On 2019-10-14 10:55 PM, DL Neil via Python-list wrote:
Is there a technique or pattern for taking a (partially-) populated 
instance of a class, and re-creating it as an instance of one of its 
sub-classes?



In a medically-oriented situation, we have a Person() class, and start 
collecting information within an instance (person = Person(), etc).


During the data-collection process the person's sex may become obvious, 
eg few males have become/been pregnant.


We could stick with Person() and implement specific methods therein, 
rather than separate Man and Woman sub-classes, but...


It seemed better (at the design-level) to have Man( Person ) and Woman( 
Person ) sub-classes to contain the pertinent attributes, source more 
detailed and specific questions, and collect such data; by gender.


In coding-practice, once gender becomes apparent, how should the 
instance of the Man() or Woman() sub-class be created - and established 
with the ID and other attributes previously collected as a Person instance?


This attempt seems hack-y:

 man = Man()
 man.__dict__.update( person.__dict__ )


Is there a pythonic tool for such, or is the task outlined 
fundamentally-inappropriate?
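
The `__dict__`-copying approach above can at least be packaged as an alternate constructor, which keeps the copy in one well-documented place (a sketch, not from the thread; class and attribute names are illustrative):

```python
class Person:
    def __init__(self, name=None):
        self.name = name

class Woman(Person):
    @classmethod
    def from_person(cls, person):
        # Re-create an existing Person as a subclass instance,
        # carrying over all attributes collected so far.
        obj = cls.__new__(cls)
        obj.__dict__.update(person.__dict__)
        return obj

p = Person('Alex')
w = Woman.from_person(p)
print(type(w).__name__, w.name)  # Woman Alex
```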




Here is a link to an article entitled 'Understanding Hidden Subtypes'. 
It dates back to 2004, but I think it is still relevant. It addresses 
precisely the issues that you raise, but from a data-modelling 
perspective, not a programming one.


http://tdan.com/understanding-hidden-subtypes/5193

I found it invaluable, and applied the concepts in my own 
business/accounting application. Having created the ability to make 
subtypes visible and explicit, I found all kinds of unexpected uses for 
them.


The article seems to be missing a couple of images (Figure 1 and Figure 
2) showing the data relationships. I downloaded the original article 
onto my computer years ago, and my local copy does have the images, so 
if you would like to see them let me know and I will upload my version 
somewhere to make it accessible.


Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: sqlalchemy & #temp tables

2019-10-07 Thread Frank Millman

On 2019-10-07 5:30 PM, Albert-Jan Roskam wrote:

Hi,

I am using sqlalchemy (SA) to access a MS SQL Server database (python 3.5, Win 
10). I would like to use a temporary table (preferably #local, but ##global 
would also be an option) to store results of a time-consuming query. In other 
queries I'd like to access the temporary table again in various places in my 
Flask app. How do I do that, given that SA closes the connection after each 
request?

I can do:
with engine.connect() as con:
 con.execute('select * into #tmp from tbl')
 con.execute('select  * from #tmp')

... but that's limited to the scope of the context manager.

Oh, I don't have rights to create a 'real' table. :-(

Thanks!

Albert-Jan



This does not answer your question directly, but FWIW this is what I do.

I do not use SA, but I have written my app to support Sql Server, 
PostgreSQL and sqlite3 as backend databases. However, no matter which 
one is in use, I also use sqlite3 as an in-memory database to store 
temporary information. It took me a little while to get it all working 
smoothly, but now it works well.
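
A minimal sketch of the idea (table and column names invented for illustration); the key point is that one `:memory:` connection, held for the life of the process, serves as the temp store regardless of the main backend:

```python
import sqlite3

# One in-memory database for the life of the process.
mem = sqlite3.connect(':memory:')
mem.execute('CREATE TABLE temp_results (key TEXT PRIMARY KEY, value INTEGER)')

# Cache the results of an expensive query ...
mem.executemany('INSERT INTO temp_results VALUES (?, ?)',
                [('a', 1), ('b', 2), ('c', 3)])
mem.commit()

# ... and read them back later from any other request handler.
row = mem.execute("SELECT value FROM temp_results WHERE key = 'b'").fetchone()
print(row)  # (2,)
```

Note that by default a sqlite3 connection may only be used from the thread that created it, so a multi-threaded server would need `check_same_thread=False` plus its own locking, or a connection per thread.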


Of course this may not work for you if you have a large volume of temp 
data, but it may be worth trying.


Frank Millman
--
https://mail.python.org/mailman/listinfo/python-list


Python in The Economist

2019-09-25 Thread Frank Millman
The latest Technology Quarterly in The Economist is about "The Internet 
Of Things".


Python gets a mention in an article on "How to build a disposable 
microchip". It is quite a long article, so here are the relevant extracts.


"The goal is to produce a robust, bendable, mass-producible computer, 
complete with sensors and the ability to communicate with the outside 
world, for less than $0.01 apiece. A prototype version, shown off at 
Arm's headquarters in Cambridge, looks like a stiffer-than-usual piece 
of tape festooned with circuit traces."


"The chip uses a simple form of machine learning called a Bayesian 
classifier. Flexibility of use was sacrificed: to keep things as cheap 
and simple as possible the algorithm is etched directly into the 
plastic, meaning the chips are not reprogrammable."


"Since chip design is expensive, and chip designers scarce, he and his 
team have been working on software tools to simplify that task. The idea 
is to describe a new algorithm in Python, a widely used programming 
language, and then have software turn it into a circuit diagram that can 
be fed into Pragmatic's chipmaking machines. That approach has attracted 
interest from DARPA ..."


Hope this is of interest.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: phyton

2019-09-10 Thread Frank Millman

On 2019-09-10 2:29 PM, tim.gast--- via Python-list wrote:

Op dinsdag 10 september 2019 13:03:46 UTC+2 schreef tim...@quicknet.nl:

Hi everybody,

For school i need to write the right code to get the following outcome.
Can someone help me with this
I can't find a solution to link the word high to 1.21.

11 print(add_vat(101, 'high'))
12 print(add_vat(101, 'low'))

Outcome:

122.21
110.09

Thanks!


my_dict('high':21,'low':5)

def add_vat(amount, vat_rate):
   berekening = amount * (1+vat_rate)
   return round(berekening,2)

print(add_vat(101, 'high'))

outcome:
  File "<stdin>", line 3
 def add_vat(amount, vat_rate({'high':21,'low':5})):
 ^
SyntaxError: invalid syntax



First point - 122.21 is 101 + 21%, so 'high' could be 21, but 110.09 is 
101 + 9%, so I think 'low' should be 9.


Second point, I sympathise, but you do need to understand the basics of 
dictionaries before you can start using them. Check the tutorial, and 
experiment at the ipython prompt. I am using the normal python 
interpreter here, but the principle is the same -


>>> my_dict = dict()
>>> my_dict
{}
>>> my_dict = {}  # this does the same, but is shorter
>>> my_dict
{}
>>> my_dict['high'] = 21
>>> my_dict
{'high': 21}
>>>

Try that, and report back with any questions

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: ``if var'' and ``if var is not None''

2019-09-01 Thread Frank Millman

On 2019-09-01 8:12 AM, Hongyi Zhao wrote:

Hi,

The following two forms are always equivalent:

``if var'' and ``if var is not None''

Regards



Not so. Here is an example -

>>> var = []
>>> bool(var)
False
>>> bool(var is not None)
True
>>>
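
A few more values on which the two tests disagree (all falsy, but none of them None):

```python
for var in ([], {}, set(), '', 0, 0.0, False):
    assert not var             # ``if var`` would skip these
    assert var is not None     # ``if var is not None`` would not
print('all falsy, none of them None')
```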

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: Compare zip lists where order is important

2019-08-29 Thread Frank Millman

On 2019-08-29 5:24 AM, Sayth Renshaw wrote:

Hi

Trying to find whats changed in this example. Based around work and team 
reschuffles.

So first I created my current teams and then my shuffled teams.

people = ["Tim","Bill","Sally","Ally","Fred","Fredricka"]
team_number = [1,1,2,2,3,3]

shuffle_people = ["Fredricka","Bill","Sally","Tim","Ally","Fred"]
shuffle_team_number = [1,1,2,2,3,3]

Then combine.

teams = list(zip(people,team_number))
shuffle_teams = list(zip(shuffle_people, shuffle_team_number))

Then I am attempting to compare for change.

[i for i, j in zip(teams, shuffle_teams) if i != j]

#Result
[('Tim', 1), ('Ally', 2), ('Fred', 3), ('Fredricka', 3)]

#Expecting to see

[('Fredricka', 1),('Tim', 2)]

What's a working way to go about this?



This would have worked if you sorted your lists first -

>>> [i for i, j in zip(sorted(teams), sorted(shuffle_teams)) if i != j]
[('Ally', 2), ('Fredricka', 3), ('Tim', 1)]

Except you wanted to see the results from shuffle_teams, so instead of 
'i for i, j ...', use 'j for i, j ...' -


>>> [j for i, j in zip(sorted(teams), sorted(shuffle_teams)) if i != j]
[('Ally', 3), ('Fredricka', 1), ('Tim', 2)]

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list


Re: How should we use global variables correctly?

2019-08-23 Thread Frank Millman

On 2019-08-23 8:43 AM, Windson Yang wrote:

I also want to know what is the difference between "using 'global
variables' in a py module" and "using a variable in class". For example:

In global.py:

foo = 1

def bar():
    global foo
    return foo + 1

In class.py

class Example:
    def __init__(self):
        self.foo = 1
    def bar(self):
        return self.foo + 1

Expect the syntax, why using class variable self.foo would be better (or
more common)? I think the 'global' here is relative, foo is global in
global.py and self.foo is global in Example class. If the global.py is
short and clean enough (didn't have a lot of other class), they are pretty
much the same. Or I missed something?



One difference is that you could have many instances of Example, each 
with its own value of 'foo', whereas with a global 'foo' there can only 
be one value of 'foo' for the module.


It would make sense to use the 'global' keyword if you have a module 
with various functions, several of which refer to 'foo', but only one of 
which changes the value of 'foo'.
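
A small illustration of both points (the names here are hypothetical): one 
function rebinds a single module-level value via 'global', while each class 
instance carries its own independent value.

```python
counter = 0  # one value for the whole module

def bump():
    global counter  # needed because bump() rebinds the module-level name
    counter += 1
    return counter

class Example:
    def __init__(self):
        self.foo = 1  # each instance gets its own foo

    def bar(self):
        return self.foo + 1

a, b = Example(), Example()
a.foo = 10  # changes a only; b.foo is untouched
print(bump(), a.bar(), b.bar())  # 1 11 2
```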


Frank Millman



Re: Enumerate - int object not subscriptable

2019-08-20 Thread Frank Millman

On 2019-08-20 2:00 PM, Sayth Renshaw wrote:

Hi

I want to do basic math with a list.

a = [1, 2, 3, 4, 5, 6, 7, 8]

for idx, num in enumerate(a):
 print(idx, num)

This works, but say I want to print the item value at the next index as well as 
the current.

for idx, num in enumerate(a):
 print(num[idx + 1], num)

I am expecting 2, 1.

But am receiving

TypeError: 'int' object is not subscriptable

Why?



I think you want a[idx+1], not num[idx+1].

Bear in mind that you will get IndexError for the last item in the list.
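
One way to sidestep the IndexError entirely is to pair each element with its 
successor using zip - a sketch:

```python
a = [1, 2, 3, 4, 5, 6, 7, 8]

# zip(a, a[1:]) stops one short of the end, so there is no IndexError.
for cur, nxt in zip(a, a[1:]):
    print(nxt, cur)  # first iteration prints: 2 1
```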

Frank Millman



Re: Accumulate , Range and Zeros

2019-07-13 Thread Frank Millman

On 2019-07-13 11:54 AM, Abdur-Rahmaan Janhangeer wrote:

Greetings,

Given this snippet

from itertools import *
import operator


x = [1, 2, 3] # [0, 1, 2, 3, ..., 10]

y = accumulate(x, operator.mul)

print(list(y))

why does x = list(range(5)) produces only zeros?



That is an easy one.

By default, range() starts from 0, and anything multiplied by 0 equals 0. So 
you can multiply as many numbers as you like - if the first one is 0, the 
rest will all be 0.


QED
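
Seen side by side - a quick sketch contrasting range(5), which starts at 0, 
with range(1, 6):

```python
from itertools import accumulate
import operator

# The leading 0 zeroes out every running product.
print(list(accumulate(range(5), operator.mul)))     # [0, 0, 0, 0, 0]

# Starting from 1 gives the running factorials instead.
print(list(accumulate(range(1, 6), operator.mul)))  # [1, 2, 6, 24, 120]
```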

Frank Millman


Re: How Do You Replace Variables With Their Values?

2019-07-10 Thread Frank Millman

On 2019-07-11 12:43 AM, CrazyVideoGamez wrote:

How do you replace a variable with its value in python 3.7.2? For example, say 
I have:

dinner = {'Starters':['Fried Calamari', 'Potted crab'],'Main Course':['Fish', 
'Meat'], 'Desert':['Cake', 'Banana Split']}

# Don't ask where I got the dinner from

for meal in dinner.keys():
 meal = list(dinner[meal])

But I only get one list called "meal" and I'm just changing it with the code 
above (you can find that by printing it out). How can I make separate lists called 
'Starters', 'Main Course', and 'Desert'?



1. Iterating over a dictionary returns each key. So instead of 'for meal 
in dinner.keys()' you can just say 'for meal in dinner'.


2. It is not a good idea to use the variable name 'meal' for two 
purposes. You use it to get each key, and then it gets overwritten with 
the result of 'list(dinner[meal])'. Then on the next iteration it gets 
overwritten again with the next key.


3. The result of 'dinner[meal]' is already a list, so there is no need 
to say 'list(dinner[meal])'. Technically there is a difference - your 
approach creates a new list, instead of just creating a reference to the 
original one, but I doubt that was your intention.


4. There is potentially more than one list, but on each iteration you 
overwrite the previous one, so at the end only the last one remains. 
The solution is to create a 'list of lists'.


Putting all this together -

courses = []
for course in dinner:
courses.append(dinner[course])

This gives you a list, called 'courses', containing three lists, one for 
each 'course' containing the options for that course.


However, in the process, you have lost the names of the courses, namely 
'Starters', 'Main Course', and 'Desert'.


So to answer your original question "How can I make separate lists 
called 'Starters', 'Main Course', and 'Desert'?", the code that you 
started with is exactly what you asked for.


I think you were asking how to create a variable called 'Starters' 
containing the list of starters. It can be done, using the built-in 
function 'setattr()', but I don't think that would be useful. If you 
knew in advance that one of the options was called 'Starters', you could 
just say Starters = ['Fried Calamari', 'Potted crab']. But if you did 
not know that in advance, how would you know what your variable was called?
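
If the goal is to keep each course's name attached to its options, the 
dictionary already does that - a sketch:

```python
dinner = {'Starters': ['Fried Calamari', 'Potted crab'],
          'Main Course': ['Fish', 'Meat'],
          'Desert': ['Cake', 'Banana Split']}

# One list per course, with the names preserved as the dict keys.
for name, options in dinner.items():
    print(name, options)

# Or, if only the lists are wanted, a plain list of the three lists:
courses = list(dinner.values())
```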


Frank Millman



Re: pyodbc -> MS-SQL Server Named Instance ?

2019-07-02 Thread Frank Millman

On 2019-07-02 3:41 PM, Adam Tauno Williams wrote:

On Tue, 2019-07-02 at 07:36 +0200, Frank Millman wrote:

On 2019-07-01 10:13 PM, Adam Tauno Williams wrote:

I am trying to connect to a Named Instance on an MS-SQL server
using pyODBC.

This is what I use -

  conn = pyodbc.connect(
  driver='sql server',
  server=r'localhost\sqlexpress',
  database=self.database,
  user=self.user,
  password=self.pwd,
  trusted_connection=True)

SQL Server is running on the same host as my python program, so it
may  be a simpler setup than yours.


What ODBC driver are you using?


[...]
I did reply to this yesterday but for some reason it did not appear. For 
the record, here it is again -


I am not using an ODBC driver at all. My connection string 'just works'.

This is the version of Sql Server I am using -

C:\Users\User>sqlcmd
1> select @@version
2> go


Microsoft SQL Server 2017 (RTM-GDR) (KB4494351) - 14.0.2014.14 (X64)
Apr  5 2019 09:18:51
Copyright (C) 2017 Microsoft Corporation
Express Edition (64-bit) on Windows 10 Pro 10.0 (Build 17134: )


Frank


Re: pyodbc -> MS-SQL Server Named Instance ?

2019-07-01 Thread Frank Millman

On 2019-07-01 10:13 PM, Adam Tauno Williams wrote:

I am trying to connect to a Named Instance on an MS-SQL server using
pyODBC.

The ODBC driver works, as I can connection without issue to a non-
named-instance SQL-Server used by another application.

What is the DSN (connection) string magick to connect to a named
instance?

I can connect from my JDBC client (DbVisualizer) by specifying the
instance name and port 1434.




This is what I use -

conn = pyodbc.connect(
driver='sql server',
server=r'localhost\sqlexpress',
database=self.database,
user=self.user,
password=self.pwd,
trusted_connection=True)

SQL Server is running on the same host as my python program, so it may 
be a simpler setup than yours.


Frank Millman


Re: How to concatenate strings with iteration in a loop?

2019-05-21 Thread Frank Millman

On 2019-05-21 9:42 AM, Madhavan Bomidi wrote:

Hi,

I need to create an array as below:

tempStr = year+','+mon+','+day+','+str("{:6.4f}".format(UTCHrs[k]))+','+ \
str("{:9.7f}".format(AExt[k,0]))+','+str("{:9.7f}".format(AExt[k,1]))+','+ \
str("{:9.7f}".format(AExt[k,2]))+','+str("{:9.7f}".format(AExt[k,3]))+','+ \
str("{:9.7f}".format(AExt[k,4]))+','+str("{:9.7f}".format(AExt[k,5]))+','+ \
str("{:9.7f}".format(AExt[k,6]))+','+str("{:9.7f}".format(AExt[k,7]))+','+ \
str("{:9.7f}".format(AExt[k,8]))+','+str("{:9.7f}".format(AExt[k,9]))


k is a row index

Can some one suggest me how I can iterate the column index along with row index 
to concatenate the string as per the above format?

Thanks in advance



The following (untested) assumes that you are using a reasonably 
up-to-date Python that has the 'f' format operator.


tempStr = f'{year},{mon},{day},{UTCHrs[k]:6.4f}'
for col in range(10):
tempStr += f',{AExt[k, col]:9.7f}'
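
As a runnable sketch with made-up sample data - note that the original 
AExt[k, col] subscript suggests a NumPy array; plain nested lists are used 
here, so the indexing becomes AExt[k][col]:

```python
year, mon, day = '2019', '05', '21'   # hypothetical values
UTCHrs = [12.3456]
AExt = [[0.1234567] * 10]             # one row, ten columns
k = 0

tempStr = f'{year},{mon},{day},{UTCHrs[k]:6.4f}'
for col in range(10):
    tempStr += f',{AExt[k][col]:9.7f}'

print(tempStr)
```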

HTH

Frank Millman





Stack Overflow Developer Survey

2019-04-14 Thread Frank Millman

Hi all

Stack Overflow have just published the results of their 2019 Developer 
Survey. Here is the link -


https://insights.stackoverflow.com/survey/2019?utm_source=Iterable&utm_medium=email&utm_campaign=dev-survey-2019

They summarise some of the top 'takeaways'. This is first on the list -

"""
Python, the fastest-growing major programming language, has risen in the 
ranks of programming languages in our survey yet again, edging out Java 
this year and standing as the second most loved language (behind Rust).

"""

I thought it was worth a mention.

Frank Millman



Re: Implement C's Switch in Python 3

2019-02-03 Thread Frank Millman
"Sayth Renshaw"  wrote in message 
news:73a1c64c-7fb1-4fc8-98a2-b6939e82a...@googlegroups.com...



chooseFrom = { day : nthSuffix(day) for day in range(1,32)}
chooseFrom

{1: '1st', 2: '2nd', 3: '3rd', 4: '4th', 5: '5th', 6: '6th', 7: '7th', 8:
'8th', 9: '9th', 10: '10th', 11: '11th', 12: '12th', 13: '13th', 14: '14th',
15: '15th', 16: '16th', 17: '17th', 18: '18th', 19: '19th', 20: '20th', 21:
'21st', 22: '22nd', 23: '23rd', 24: '24th', 25: '25th', 26: '26th', 27:
'27th', 28: '28th', 29: '29th', 30: '30th', 31: '31st'}

chooseFrom[1]

'1st'

chooseFrom[11]

'11th'

chooseFrom[21]

'21st'

Not having a default case as in switch forced you to write out all 
possible combinations.



I think the intent and readability of switch statements is a bit nicer.


I have not been following this thread in detail, but how about this -

>>> choose = {1: 'st', 2: 'nd', 3: 'rd', 21: 'st', 22: 'nd', 23: 'rd', 31: 'st'}
>>> for x in range(1, 32):
...     print('{}{}'.format(x, choose.get(x, 'th')), end=' ')
...
1st 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th 17th 
18th 19th 20th 21st 22nd 23rd 24th 25th 26th 27th 28th 29th 30th 31st
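
Wrapped as a function, the dict plays the role of the switch and .get() 
supplies the default case (nth_suffix is a hypothetical name, not from the 
original post):

```python
def nth_suffix(n):
    # The few special cases act as switch branches; 'th' is the default.
    choose = {1: 'st', 2: 'nd', 3: 'rd', 21: 'st', 22: 'nd', 23: 'rd', 31: 'st'}
    return f"{n}{choose.get(n, 'th')}"

print(' '.join(nth_suffix(n) for n in range(1, 32)))
```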


Frank Millman




Re: Exercize to understand from three numbers which is more high

2019-01-28 Thread Frank Millman

"^Bart"  wrote in message news:q2mghh$ah6$1...@gioia.aioe.org...


> 1. The last two lines appear to be indented under the 'if number3 < ' 
> line. I think you want them to be unindented so that they run every 
> time.


I'm sorry, but I didn't completely understand what you wrote about the last 
two lines. Please could you show how my code could be fixed?




The last four lines of your program look like this -

if number3 < number2 and number3 < number1:
numbermin = number3

print("Number min is: ",numbermin)

numbermiddle = (number1+number2+number3)-(numbermax+numbermin)

print("Number middle is: ",numbermiddle)

The last three lines all fall under the 'if number3 < number2' line, so they 
will only be executed if that line evaluates to True.


I think that you want the last two lines to be executed every time. If so, 
they should be lined up underneath the 'if', not the 'print'.
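
For instance, a minimal sketch of the intended tail of the program (the 
sample values are hypothetical, and min()/max() stand in for the hand-written 
comparison chains):

```python
number1, number2, number3 = 5, 9, 2  # hypothetical sample inputs

# Python's built-ins replace the chains of 'if ... <' comparisons entirely.
numbermax = max(number1, number2, number3)
numbermin = min(number1, number2, number3)
print("Number min is: ", numbermin)

# Not indented under any 'if', so these two lines run every time.
numbermiddle = (number1 + number2 + number3) - (numbermax + numbermin)
print("Number middle is: ", numbermiddle)
```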




> if a == 1:
> do something
> elif a == 2:
> do something else

Finally I understood the differences between if and elif! Lol! :D



It is actually short for 'else if', but I guess it is not obvious if you 
have not seen it before!


Frank



