Re: test-ignore

2024-02-15 Thread Tony Oliver via Python-list
On Thursday 15 February 2024 at 21:16:22 UTC, E.D.G. wrote:
> Test - ignore February 15, 2024 
> 
> Test post to see if my Newsgroup post program is working.

Aim your test messages at alt.test, please.
-- 
https://mail.python.org/mailman/listinfo/python-list


Is there a way to implement the ** operator on a custom object

2024-02-08 Thread Tony Flury via Python-list
I know that mappings by default support the ** operator, to unpack the 
mapping into keyword arguments.


Has any consideration been given to implementing a dunder method for the 
** operator, so you could unpack an arbitrary object into keyword 
arguments, with the developer choosing which keywords are generated (or 
even generating 'virtual' attributes)?
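For what it's worth, ** unpacking already works on any object that implements the mapping protocol (keys() plus __getitem__), which gets most of the way there. A minimal sketch - the Point class, show() function, and the 'magnitude' key are made up for illustration:

```python
class Point:
    """An object that can be unpacked with ** by supplying the
    mapping-protocol methods that ** unpacking actually uses."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def keys(self):
        # Choose which keywords are generated - this can include
        # 'virtual' attributes not stored on the instance.
        return ['x', 'y', 'magnitude']

    def __getitem__(self, key):
        if key == 'magnitude':        # a computed, 'virtual' attribute
            return (self.x ** 2 + self.y ** 2) ** 0.5
        return getattr(self, key)

def show(**kwargs):
    return kwargs

print(show(**Point(3, 4)))
# {'x': 3, 'y': 4, 'magnitude': 5.0}
```

So the hook effectively exists today, just spelled as keys()/__getitem__ rather than as a dedicated dunder.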


--
Anthony Flury
email : anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Python-pickle error

2023-05-09 Thread Tony Flury via Python-list

Charles,

by your own admission, you deleted your .pkl file, and your code never 
recreates it: pickle.dumps(...) doesn't write a file, it returns a new 
bytes object, and at no point does your code write that to the file.


What you need is this:

import pickle

number = 2
my_pickled_object = pickle.dumps(number)   # a bytes object, not a file
with open('file.pkl', 'wb') as file:       # note: binary mode
    file.write(my_pickled_object)
print("this is my pickled object", my_pickled_object)

del number  # you can do this if you really want to test pickle

with open('file.pkl', 'rb') as file:       # binary mode again
    number = pickle.load(file)

my_unpickled_object = pickle.loads(my_pickled_object)
print("this is my unpickled object", my_unpickled_object)

Note: pickle data is bytes, so the file must be opened in binary mode 
('wb' to write, 'rb' to read) - a text-mode open() will fail.
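For completeness, pickle.dump() and pickle.load() accept an open binary file directly, which avoids handling the intermediate bytes object at all. A minimal sketch (the filename is just an example):

```python
import pickle

number = 2
with open('file.pkl', 'wb') as f:   # binary mode: pickle data is bytes
    pickle.dump(number, f)          # serialise straight to the file

with open('file.pkl', 'rb') as f:
    restored = pickle.load(f)       # deserialise straight from the file

print(restored)
# 2
```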



On 19/04/2023 17:14, charles wiewiora wrote:

Hello,
I am experiencing problems with the pickle module.
The following code was working before:

import pickle
number=2
my_pickeld_object=pickle.dumps(number)
print("this is my pickled object",{my_pickeld_object},)
with open('file.pkl', 'rb') as file:
 number=pickle.load(file)
my_unpickeled_object=pickle.loads(my_pickeld_object)
print("this is my unpickeled object",{my_unpickeled_object},)

but now i get error

Traceback (most recent call last):
   File "C:\Users\lukwi\Desktop\python\tester2.py", line 5, in 
 with open('file.pkl', 'rb') as file:
FileNotFoundError: [Errno 2] No such file or directory: 'file.pkl'

I'm getting this problem after this:
a .pkl file appeared among my Python script files.
I thought this could be a spare file made by Python, because I was doing this 
first,

import pickle
number=2
my_pickeld_object=pickle.dumps(number)
print("this is my pickled object",{my_pickeld_object},)
with open('file.pkl', 'rb') as file:
 number=pickle.load(file)

so I stupidly deleted the file.

do you know how to fix this?
I reinstalled it but it didn't work.
This is on Windows, with Python version 3.11.3.

thank you


--
Anthony Flury
email : anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Single line if statement with a continue

2022-12-18 Thread Tony Oliver
On Saturday, 17 December 2022 at 23:58:11 UTC, avi.e...@gmail.com wrote:
> Is something sort of taboo when using something like a computer language to 
> write a program?

With what else would you write a program?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Starting using Python

2022-01-27 Thread Tony Flury via Python-list



On 03/01/2022 12:45, Joao Marques wrote:

Good morning: I have a very simple question: I want to start writing
programs in Python, so I went to the Microsoft Store and installed
Python 3.9. No problem so far. I would prefer to have a GUI interface, an
interface where I can use File-->Open and File-->Save As, as I see it on
different videos. How can I get it? Because my problem is to run the
programs I have already written and saved in a *.py file in my own working
directory, not in Python's CWD directory.
Can you please help?
I am running Windows 10 Pro version 20H2

Regards,
Joao



The simplest Python GUI editor is IDLE, which should come installed with 
Python. On Windows, just open the Start menu and type 'IDLE'.


You can save your Python files anywhere - you shouldn't need to save them 
in Python's working directory. Python doesn't impose any particular file 
layout until you start implementing features such as packages, which are not 
something a beginner should ever worry about.


--
Anthony Flury
email : anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Puzzling behaviour of Py_IncRef

2022-01-26 Thread Tony Flury via Python-list


On 26/01/2022 22:41, Barry wrote:



Run Python and your code under a debugger and check the ref count of 
the object as you step through the code.


Don't just step through your code but also step through the CPython code.
That will allow you to see how this works at a low level.
Setting a watchpoint on the ref count will allow you to run the code and 
just break as the ref count changes.


That is what I do when I see odd C API behaviour.

Barry



Thanks - I have tried a few times on a few projects to run a debugger in 
mixed language mode and never had any success.


I will have to try again.


As posted in the original message - immediately before the call to 
the C function/method sys.getrefcount reports the count to be 2 
(meaning it is actually a 1).


Inside the C function the ref count is incremented and the Py_REFCNT 
macro reports the count as 3 inside the C function as expected (1 for 
the name in the Python code, 1 for the argument as passed to the C 
function, and 1 for the increment), so outside the function one would 
expect the ref count to now be 2 (since the reference caused by 
calling the function is then reversed).


However - Immediately outside the C function and back in the Python 
code sys.getrefcount reports the count to be 2 again - meaning it is 
now really 1. So that means that the refcount has been decremented 
twice in-between the return of the C function and the execution of 
the immediate next python statement. I understand one of those 
decrements - the parameter's ref count is incremented on the way in 
so the same object is decremented on the way out (so that calls don't 
leak references) but I don't understand where the second decrement is 
coming from.


Again there is nothing in the Python code that would cause that 
decrement - the decrement behavior is in the Python runtime.



--
Anthony Flury
email :anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list




Re: Puzzling behaviour of Py_IncRef

2022-01-26 Thread Tony Flury via Python-list



On 26/01/2022 08:20, Chris Angelico wrote:

On Wed, 26 Jan 2022 at 19:04, Tony Flury via Python-list
 wrote:

So according to that I should increment twice if and only if the calling
code is using the result - which you can't tell in the C code - which is
very odd behaviour.

No, the return value from your C function will *always* have a
reference taken. Whether the return value is "used" or just dropped,
there's always going to be one ref used by the returning itself.

The standard way to return a value is always to incref it, then return
the pointer. That is exactly equivalent to Python saying "return
".

Incrementing twice is ONLY because you want to leak a reference.

ChrisA


Chris,

You keep saying I am leaking a reference - my original code (not the POC 
in the email) wasn't intending to leak a reference; it was incrementing 
the reference count in order to accurately count references from other 
objects, and I needed to double-increment there so that the reference 
count remained correct outside of the C code.


I did try to be clear - my intention was never to leak a reference (I 
have been writing s/w long enough to know leaks are bad) - my POC code 
in the original message was the only code which deliberately leaked a 
reference in order to simply illustrate the problem.


I do appreciate the help you have tried to give - so thank you.

--
Anthony Flury
email : anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Puzzling behaviour of Py_IncRef

2022-01-26 Thread Tony Flury via Python-list


On 26/01/2022 01:29, MRAB wrote:

On 2022-01-25 23:50, Tony Flury via Python-list wrote:


On 25/01/2022 22:28, Barry wrote:


On 25 Jan 2022, at 14:50, Tony Flury via 
Python-list  wrote:




On 20/01/2022 23:12, Chris Angelico wrote:
On Fri, 21 Jan 2022 at 10:10, Greg 
Ewing  wrote:

On 20/01/22 12:09 am, Chris Angelico wrote:

At this point, the refcount has indeed been increased.


   return self;
  }

And then you say "my return value is this object".

So you're incrementing the refcount, then returning it without
incrementing the refcount. Your code is actually equivalent to "return
self".

Chris, you're not making any sense. This is C code, so there's no
way that "return x" can change the reference count of x.

Yeah, I wasn't clear there. It was equivalent to *the Python code*
"return self". My apologies.


  > The normal thing to do is to add a reference to whatever you're
  > returning. For instance, Py_RETURN_NONE will incref None and 
then

  > return it.
  >

The OP understands that this is not a normal thing to do. He's
trying to deliberately leak a reference for the purpose of 
diagnosing

a problem.

It would be interesting to see what the actual refcount is after
calling this function.
After calling this without a double increment in the function the 
ref count is still only 1 - which means that the 'return self' 
effectively does a double decrement. My original message includes 
the Python code which calls this 'leaky' function and you can see 
that despite the 'leaky POC' doing an increment ref count drops 
back to one after the return.


You are right this is not a normal thing to do, I am trying to 
understand the behaviour so my library does the correct thing in 
all cases - for example - imagine you have two nodes in a tree :


A --- > B

And your Python code has a named reference to A, and B also 
maintains a reference to A as its parent.


In this case I would expect A to have a reference count of 2 
(counted as 3 through sys.getrefcount() - one for the named 
reference in the Python code, and one for the link from B back to 
A); I would also expect B to have a reference count here of 1 (just 
the reference from A - assuming nothing else referenced B).


My original code was incrementing the ref counts of A and B and 
then returning A. Within the Python test code A had a refcount of 1 
(and not the expected 2), but the refcount from B was correct as 
far as I could tell.




Yes, and that's why I was saying it would need a *second* incref.

ChrisA
Thank you to all of you for trying to help - I accept that the only 
way to make the code work is to do a 2nd increment.


I don't understand why doing a 'return self' would result in a 
double decrement - that seems utterly bizarre behaviour - it 
obviously works, but why?

The 'return self' in C will not change the ref count.

I would suggest setting a break point in your code and stepping out 
of the function and seeing what Python's code does to the ref count.


Barry


Barry,

something odd is going on because the Python code isn't doing anything
that would cause the reference count to go from 3 inside the C function
to 1 once the method call is complete.

As far as I know the only things that impact the reference counts are :

   * Increments due to assigning a new name or adding it to a container.
   * Increment due to passing the object to a function (since that binds
 a new name)
   * Decrements due to deletion of a name
   * Decrement due to going out of scope
   * Decrement due to being removed from a container.

None of those things are happening in the python code.

As posted in the original message - immediately before the call to the C
function/method sys.getrefcount reports the count to be 2 (meaning it is
actually a 1).

Inside the C function the ref count is incremented and the Py_REFCNT
macro reports the count as 3 inside the C function as expected (1 for
the name in the Python code, 1 for the argument as passed to the C
function, and 1 for the increment), so outside the function one would
expect the ref count to now be 2 (since the reference caused by calling
the function is then reversed).

However - Immediately outside the C function and back in the Python code
sys.getrefcount reports the count to be 2 again - meaning it is now
really 1. So that means that the refcount has been decremented twice
in-between the return of the C function and the execution of the
immediate next python statement. I understand one of those decrements -
the parameter's ref count is incremented on the way in so the same
object is decremented on the way out (so that calls don't leak
references) but I don't understand where the second decrement is coming
from.

Again there is nothing in the Python code that would cause that
decrement - the decrement behavior is in the Python runtime.


The function returns a result, an object.

The calling code is discarding the res

Re: Puzzling behaviour of Py_IncRef

2022-01-25 Thread Tony Flury via Python-list


On 25/01/2022 22:28, Barry wrote:



On 25 Jan 2022, at 14:50, Tony Flury via Python-list  
wrote:



On 20/01/2022 23:12, Chris Angelico wrote:

On Fri, 21 Jan 2022 at 10:10, Greg Ewing  wrote:
On 20/01/22 12:09 am, Chris Angelico wrote:

At this point, the refcount has indeed been increased.


   return self;
  }

And then you say "my return value is this object".

So you're incrementing the refcount, then returning it without
incrementing the refcount. Your code is actually equivalent to "return
self".

Chris, you're not making any sense. This is C code, so there's no
way that "return x" can change the reference count of x.

Yeah, I wasn't clear there. It was equivalent to *the Python code*
"return self". My apologies.


  > The normal thing to do is to add a reference to whatever you're
  > returning. For instance, Py_RETURN_NONE will incref None and then
  > return it.
  >

The OP understands that this is not a normal thing to do. He's
trying to deliberately leak a reference for the purpose of diagnosing
a problem.

It would be interesting to see what the actual refcount is after
calling this function.

After calling this without a double increment in the function the ref count is 
still only 1 - which means that the 'return self' effectively does a double 
decrement. My original message includes the Python code which calls this 
'leaky' function and you can see that despite the 'leaky POC' doing an 
increment ref count drops back to one after the return.

You are right this is not a normal thing to do, I am trying to understand the 
behaviour so my library does the correct thing in all cases - for example - 
imagine you have two nodes in a tree :

A --- > B

And your Python code has a named reference to A, and B also maintains a 
reference to A as its parent.

In this case I would expect A to have a reference count of 2 (counted as 3 
through sys.getrefcount() - one for the named reference in the Python code, 
and one for the link from B back to A); I would also expect B to have a 
reference count here of 1 (just the reference from A - assuming nothing else 
referenced B).

My original code was incrementing the ref counts of A and B and then returning 
A. Within the Python test code A had a refcount of 1 (and not the expected 2), 
but the refcount from B was correct as far as I could tell.



Yes, and that's why I was saying it would need a *second* incref.

ChrisA

Thank you to all of you for trying to help - I accept that the only way to make 
the code work is to do a 2nd increment.

I don't understand why doing a 'return self' would result in a double decrement 
- that seems utterly bizarre behaviour - it obviously works, but why?

The 'return self' in C will not change the ref count.

I would suggest setting a break point in your code and stepping out of the 
function and seeing what Python's code does to the ref count.

Barry


Barry,

something odd is going on because the Python code isn't doing anything 
that would cause the reference count to go from 3 inside the C function 
to 1 once the method call is complete.


As far as I know the only things that impact the reference counts are :

 * Increments due to assigning a new name or adding it to a container.
 * Increment due to passing the object to a function (since that binds
   a new name)
 * Decrements due to deletion of a name
 * Decrement due to going out of scope
 * Decrement due to being removed from a container.

None of those things are happening in the python code.

As posted in the original message - immediately before the call to the C 
function/method sys.getrefcount reports the count to be 2 (meaning it is 
actually a 1).


Inside the C function the ref count is incremented and the Py_REFCNT 
macro reports the count as 3 inside the C function as expected (1 for 
the name in the Python code, 1 for the argument as passed to the C 
function, and 1 for the increment), so outside the function one would 
expect the ref count to now be 2 (since the reference caused by calling 
the function is then reversed).


However - Immediately outside the C function and back in the Python code 
sys.getrefcount reports the count to be 2 again - meaning it is now 
really 1. So that means that the refcount has been decremented twice 
in-between the return of the C function and the execution of the 
immediate next python statement. I understand one of those decrements - 
the parameter's ref count is incremented on the way in so the same 
object is decremented on the way out (so that calls don't leak 
references) but I don't understand where the second decrement is coming 
from.


Again there is nothing in the Python code that would cause that 
decrement - the decrement behavior is in the Python runtime.





--
Anthony Flury
email :anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list




Re: Puzzling behaviour of Py_IncRef

2022-01-25 Thread Tony Flury via Python-list



On 20/01/2022 23:12, Chris Angelico wrote:

On Fri, 21 Jan 2022 at 10:10, Greg Ewing  wrote:

On 20/01/22 12:09 am, Chris Angelico wrote:

At this point, the refcount has indeed been increased.


   return self;
  }

And then you say "my return value is this object".

So you're incrementing the refcount, then returning it without
incrementing the refcount. Your code is actually equivalent to "return
self".

Chris, you're not making any sense. This is C code, so there's no
way that "return x" can change the reference count of x.

Yeah, I wasn't clear there. It was equivalent to *the Python code*
"return self". My apologies.


  > The normal thing to do is to add a reference to whatever you're
  > returning. For instance, Py_RETURN_NONE will incref None and then
  > return it.
  >

The OP understands that this is not a normal thing to do. He's
trying to deliberately leak a reference for the purpose of diagnosing
a problem.

It would be interesting to see what the actual refcount is after
calling this function.


After calling this without a double increment in the function the ref 
count is still only 1 - which means that the 'return self' effectively 
does a double decrement. My original message includes the Python code 
which calls this 'leaky' function and you can see that despite the 
'leaky POC' doing an increment ref count drops back to one after the return.


You are right this is not a normal thing to do, I am trying to 
understand the behaviour so my library does the correct thing in all 
cases - for example - imagine you have two nodes in a tree :


A --- > B

And your Python code has a named reference to A, and B also maintains a 
reference to A as its parent.


In this case I would expect A to have a reference count of 2 (counted as 
3 through sys.getrefcount() - one for the named reference in the Python 
code, and one for the link from B back to A); I would also expect B to 
have a reference count here of 1 (just the reference from A - assuming 
nothing else referenced B).


My original code was incrementing the ref counts of A and B and then 
returning A. Within the Python test code A had a refcount of 1 (and not 
the expected 2), but the refcount from B was correct as far as I could tell.




Yes, and that's why I was saying it would need a *second* incref.

ChrisA


Thank you to all of you for trying to help - I accept that the only way 
to make the code work is to do a 2nd increment.


I don't understand why doing a 'return self' would result in a double 
decrement - that seems utterly bizarre behaviour - it obviously works, 
but why?




--
Anthony Flury
email : anthony.fl...@btinternet.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Puzzling behaviour of Py_IncRef

2022-01-19 Thread Tony Flury via Python-list


On 19/01/2022 11:09, Chris Angelico wrote:

On Wed, Jan 19, 2022 at 10:00 PM Tony Flury via Python-list
 wrote:

Extension function :

 static PyObject *_Node_test_ref_count(PyObject *self)
 {
  printf("\nIncrementing ref count for self - just for the hell
 of it\n");
  printf("\n before self has a ref count of %ld\n", Py_REFCNT(self));
  Py_INCREF(self);
  printf("\n after self has a ref count of %ld\n", Py_REFCNT(self));
  fflush(stdout);

At this point, the refcount has indeed been increased.


  return self;
 }

And then you say "my return value is this object".

The normal thing to do is to add a reference to whatever you're
returning. For instance, Py_RETURN_NONE will incref None and then
return it.

So you're incrementing the refcount, then returning it without
incrementing the refcount. Your code is actually equivalent to "return
self".

In order to actually leak a reference, you'd need to incref it twice.

ChrisA



Chris - I am still puzzled - does doing 'return self' automatically 
decrement the ref count of the object? And why is that the desired 
behaviour? Effectively it results in a decrement of two, since at the 
exit of the function the ref count is only 1 (as witnessed by the 
subsequent call to assertEqual).


(I am not suggesting that it should be changed - I understand that would 
be a breaking change !).


You say I am returning it without incrementing, but I am explicitly 
incrementing it before the return.




--
https://mail.python.org/mailman/listinfo/python-list


Puzzling behaviour of Py_IncRef

2022-01-19 Thread Tony Flury via Python-list
I am writing a C extension module for an AVL tree, and I am trying to 
ensure reference counting is done correctly. I was having a problem with 
the reference counting so I worked up this little POC of the problem, 
and I hope someone can explain this.


Extension function :

   static PyObject *_Node_test_ref_count(PyObject *self)
   {
    printf("\nIncrementing ref count for self - just for the hell
   of it\n");
    printf("\n before self has a ref count of %ld\n", Py_REFCNT(self));
    Py_INCREF(self);
    printf("\n after self has a ref count of %ld\n", Py_REFCNT(self));
    fflush(stdout);
    return self;
   }

As you can see this function purely increments the reference count of 
the instance.


Note: I understand normally this would be the wrong thing to do, but 
this is a POC of the issue, not live code. In the live code I am 
attaching two nodes to each other, and the live code therefore 
increments the ref-count for both objects - so even if the Python code 
deletes its reference, the reference count for the instance should still 
be 1 in order to ensure it doesn't get garbage collected.


This function is exposed as the test_ref method.

This is the test case :

    def test_000_009_test_ref_count(self):
    node = _Node("Hello")
    self.assertEqual(sys.getrefcount(node), 2)
    node.test_ref()
    self.assertEqual(sys.getrefcount(node), 3)

The output of this test case is :

test_000_009_test_ref_count (__main__.TestNode) ...
Incrementing ref count for self - just for the hell of it

 before self has a ref count of 2

 after self has a ref count of 3
FAIL

==
FAIL: test_000_009_test_ref_count (__main__.TestNode)
--
Traceback (most recent call last):
  File 
"/home/tony/Development/python/orderedtree/tests/test_orderedtree.py", 
line 62, in test_000_009_test_ref_count

    self.assertEqual(sys.getrefcount(node), 3)
AssertionError: 2 != 3

So I understand why the first assert will be true - when the 
sys.getrefcount() function is called the ref count is incremented 
temporarily (as a borrowed reference), so there are now two references - 
the 'node' variable, and the borrowed reference in the function call.


We then call the 'test_ref' method, and again that call causes a 
borrowed reference (hence the ref count being 2 initially within the 
method). The 'test_ref' method increments the reference of the instance 
- as you can see from the output we now have a ref count of 3 (that 
count is the 'node' variable in the test case, the borrowed reference 
due to the method call, and the artificial increment from the 'test_ref' 
method).


When the 'test_ref' method exits I would expect the ref count of the 
instance to now be 2 (one for the 'node' variable, and one as a result 
of the artificial increment).


I would therefore expect the 2nd assertEqual in the test case to 
succeed - in this case the borrowed reference within sys.getrefcount() 
should cause the count to be 3.


As you see though that 2nd assertEqual fails - suggesting that the 
refcount of 'node' is actually only 1 when the 'test_ref' method exits.


Can someone explain why the 'test_ref' method fails to change the 
refcount of the 'node' instance?


--
https://mail.python.org/mailman/listinfo/python-list


[issue12756] datetime.datetime.utcnow should return a UTC timestamp

2022-01-10 Thread Tony Rice


Tony Rice  added the comment:

I would argue that PEP 20 should win over backward compatibility, in addition 
to the points I hinted at above:

practicality beats purity

--

___
Python tracker 
<https://bugs.python.org/issue12756>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12756] datetime.datetime.utcnow should return a UTC timestamp

2022-01-10 Thread Tony Rice


Tony Rice  added the comment:

This enhancement request should be reconsidered.  

Yes, it is the documented behavior, but that doesn't mean it's the right 
behavior. Functions should work as expected, not just in the context of the 
module they are implemented in, but in the context of the problem they are solving.

The suggested workaround of essentially nesting the specified UTC time via 
datetime.now(timezone.utc) is ugly rather than beautiful, complex rather than 
simple, and nested instead of flat.

The suggestion that now() is preferred over utcnow() loses sight of the fact 
that UTC is not like other timezones.

A lot has changed since Python 2.7 was released in 2010. UTC is now the default 
timezone of cloud infrastructure.

--
nosy: +rtphokie

___
Python tracker 
<https://bugs.python.org/issue12756>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46319] datetime.utcnow() should return a timezone aware datetime

2022-01-09 Thread Tony Rice


New submission from Tony Rice :

datetime.datetime.utcnow()

returns a timezone-naive datetime. This is counter-intuitive, since you are 
logically dealing with a known timezone. I suspect it was implemented this 
way for fidelity with the rest of datetime.datetime (which returns timezone-naive 
datetime objects).

The workaround (see below) is to replace the missing tzinfo.

Recommendation:
By default datetime.datetime.utcnow() should return a timezone-aware datetime 
(with a tzinfo of UTC, of course), or at least offer this behaviour as an option, 

e.g.:

datetime.datetime.utcnow(timezone_aware=True)

Workaround:
dt = datetime.utcnow().replace(tzinfo=timezone.utc)
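For comparison, a short sketch of the two behaviours being described (note that in later Python versions, utcnow() is deprecated in favour of datetime.now(timezone.utc)):

```python
from datetime import datetime, timezone

naive = datetime.utcnow()            # timezone-naive: tzinfo is None
aware = datetime.now(timezone.utc)   # timezone-aware: tzinfo is timezone.utc

print(naive.tzinfo)   # None
print(aware.tzinfo)   # UTC

# The workaround: attach the missing tzinfo by hand
fixed = datetime.utcnow().replace(tzinfo=timezone.utc)
print(fixed.tzinfo)   # UTC
```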

--
components: Library (Lib)
messages: 410160
nosy: rtphokie
priority: normal
severity: normal
status: open
title: datetime.utcnow() should return a timezone aware datetime
type: behavior
versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46319>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45916] documentation link error

2021-11-30 Thread Tony Zhou


Tony Zhou  added the comment:

OK, I see - I found the PDF. Thank you for that anyway.

--

___
Python tracker 
<https://bugs.python.org/issue45916>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: pyinstaller wrong classified as Windows virus

2021-11-28 Thread Tony Flury via Python-list
Have you tried using Nuitka rather than PyInstaller? It means you 
distribute a single executable and the Python runtime library (which 
they probably have already), and it has the advantage that it is a bit 
quicker than standard Python.


Rather than bundling the source code and interpreter in a single executable, 
Nuitka actually compiles the Python source code to native machine code 
(via a set of C files), and this native executable uses the Python 
runtime library to implement the Python features. It does rely on you 
having a Windows C compiler available.



On 25/11/2021 17:10, Ulli Horlacher wrote:

Chris Angelico  wrote:


Unfortunately, if you're not going to go to the effort of getting your
executables signed

I cannot sign my executables (how can I do it anyway?), because Windows
deletes my executable as soon as I have compiled them! They exist only
for a few seconds and then they are gone.



another reason to just distribute .py files.

I cannot do that because my users do not have Python installed and they
are not allowed to do it.


--
https://mail.python.org/mailman/listinfo/python-list


[issue45916] documentation link error

2021-11-28 Thread Tony Zhou

New submission from Tony Zhou :

3.10.0 Documentation » The Python Tutorial » 15. Floating Point Arithmetic: 
Issues and Limitations
The link "The Perils of Floating Point" brings the user to https://www.hmbags.tw/
I don't think this is right. Please check.

--
messages: 407200
nosy: cookiez6
priority: normal
severity: normal
status: open
title: documentation link error
type: security

___
Python tracker 
<https://bugs.python.org/issue45916>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Selenium py3.8+ DeprecationWarnings - where to find doc to update code?

2021-10-13 Thread Tony Oliver


On Wednesday, 13 October 2021 at 16:16:46 UTC+1, jkk wrote:
> Selenium 3.141+ 
> python 3.8+ 
> ubuntu 20.04 or windows 10 
> 
> I'm trying to upgrade code from py3.6+ to py3.8+ and I'm getting several 
> DeprecationWarnings. 
> 
> Can someone point me to where I can find the documentation that explains how 
> to to remedy these warnings. What are the new preferred coding practices? 
> 
> For example, here is a "DeprecationWarning" that I figured out: 
> 
> py3.6+ 
> from selenium import webdriver 
> browser = browser = webdriver.Firefox() 
> browser.get(url) 
> tables = browser.find_elements_by_tag_name("table") 
> 
> 
> py3.8+ 
> from selenium import webdriver 
> from selenium.webdriver.common.by import By 
> browser = browser = webdriver.Firefox() 
> browser.get(url) 
> tables = browser.find_elements(By.TAG_NAME, "table") 
> or 
> tables = browser.find_elements(By.XPATH, "//table")

I cannot help you with your immediate problem but am intrigued to discover what 
your “browser = browser = …” idiom does that differs from “browser = 
…”.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Understanding the working mechanis of python unary arithmetic operators.

2021-10-02 Thread Tony Oliver
On Saturday, 2 October 2021 at 13:48:39 UTC+1, hongy...@gmail.com wrote:
> On Saturday, October 2, 2021 at 4:59:54 PM UTC+8, ju...@diegidio.name wrote: 
> > On Saturday, 2 October 2021 at 10:34:27 UTC+2, hongy...@gmail.com wrote: 
> > > See the following testings: 
> > > 
> > > In [24]: a=3.1415926535897932384626433832795028841971 
> > > In [27]: -a 
> > > Out[27]: -3.141592653589793 
> > You've never heard of floating-point? Double precision has 53 significant 
> > bits of mantissa, corresponding approximately to 16 decimal digits. 
> > 
> >  
> > > In [17]: ~-+1 
> > > Out[17]: 0 
> > << The unary ~ (invert) operator yields the bitwise inversion of its 
> > integer argument. The bitwise inversion of x is defined as -(x+1). It only 
> > applies to integral numbers or to custom objects that override the 
> > __invert__() special method. >> 
> > 
> >  
> > > I'm very puzzled by these operators. Any hints will be highly 
> > > appreciated. 
> > Try and read the proverbial manual: that's truly a fundamental skill...
> Thank you for your explanation. Then what about the following questions?: 
> 
> 1. Should `+' and `-' be classified as binary operators or unary operators?

Both.  See sections 6.6 and 6.7 of the documentation at
https://docs.python.org/3/reference/expressions.html

> As we all know, `a + b', and `a - b' are the normal ways we do basic 
> arithmetic operations. 

Really?  Don't you ever write something like "x = -y"?
Or do you habitually write "x = 0 - y" or "x = 0.0 - y"?

> 2. See the following testings: 
> 
> In [20]: bool(int(True)) 
int(True) -> 1
bool(1) -> True
> Out[20]: True 
> 
> In [21]: bool(~int(True)) 
int(True) -> 1
~1 -> -2
bool(-2) -> True
> Out[21]: True 
> 
> In [22]: bool(~~int(True)) 
int(True) -> 1
~1 -> -2  # these two operations
~(-2) -> 1# cancel each other out
bool(1) -> True
> Out[22]: True 
> 
> In [23]: bool(~~~int(True)) 
Because two consecutive bit-inversions cancel each other out;
this is just a complicated re-statement of operation [21], above
> Out[23]: True 
> 
> In [24]: bool(int(False)) 
int(False) -> 0
bool(0) -> False
> Out[24]: False 
> 
> In [25]: bool(~int(False)) 
int(False) -> 0
~0 -> -1
bool(-1) -> True
> Out[25]: True 
> 
> In [26]: bool(~~int(False)) 
Again, two consecutive inversions cancel each other out
so this is just an over-complicated re-statement of [24]
> Out[26]: False 
> 
> In [27]: bool(~~~int(False)) 
Likewise, this is the equivalent of re-stating [25]
> Out[27]: True 
> 
> Why can’t/shouldn't we get something similar results for both `True' and 
> `False' in the above testings? 

Sorry, I can't parse that.
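The two facts driving every line of the annotated evaluations are that ~x is defined as -(x + 1) and that bool(n) is False only for n == 0; both can be checked directly:

```python
# ~x is documented as -(x + 1) for any integer x
for x in (0, 1, -2, 7):
    assert ~x == -(x + 1)

# bool() of an integer is False only for zero, so both ~0 -> -1 and
# ~1 -> -2 are truthy, and double inversion cancels out
assert bool(-1) and bool(-2) and not bool(0)
assert ~~1 == 1 and ~~~1 == -2
```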
-- 
https://mail.python.org/mailman/listinfo/python-list


tkinter

2021-08-21 Thread Tony Genter
   Tkinter stopped working overnight from 8/20/2021 to 8/21/2021.  Last night
   I was working on tutorials to work on a GUI and this morning every file
   that uses tkinter is broken stating that no module `tkinter' exists.



   Please let me know if there is some sort of problem.  I am removing visual
   studios, all versions of python, sublime text, atom, etc and reinstalling
   all of it to try and resolve the issue.





   Thanks,



   Talat



   Sent from [1]Mail for Windows





References

   Visible links
   1. https://go.microsoft.com/fwlink/?LinkId=550986
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue43291] elementary multiplication by 0.01 error

2021-02-21 Thread Tony


New submission from Tony :

on the >>> prompt type:
>>>717161 * 0.01
7171.6101

the same goes for
>>>717161.0 * 0.01
7171.6101

You can easily find more numbers with similar problem:
for i in range(100):
    if len(str(i * 0.01)) > 12:
        print(i, i * 0.01)

I am sure, that this problem was found before and circumvented by:
>>>717161 / 100
7171.61

but this is hardly the way, one wants to rely on the code.

This is the python version I use:
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit 
(AMD64)] on win32
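For comparison (a standard remedy, not something the report proposes): the decimal module does this arithmetic in base 10 and avoids the binary-rounding artefact entirely:

```python
from decimal import Decimal

# 0.01 has no exact binary (double) representation, so the float product
# is the nearest representable double and its repr can show a long tail.
print(717161 * 0.01)

# The same product computed in base 10 is exact:
print(Decimal(717161) * Decimal("0.01"))  # 7171.61
```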

--
messages: 387485
nosy: tonys_0
priority: normal
severity: normal
status: open
title: elementary multiplication by 0.01 error
type: behavior
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue43291>
___



RE: PSYCOPG2

2021-02-13 Thread Tony Ogilvie
Thank you

I have tried Sublime 3 and the same thing happens. I do not think I have
another version of Python on my PC. I am trying to look through my files to
find out.

Regards

Tony


-Original Message-
From: Mladen Gogala  
Sent: 13 February 2021 05:35
To: python-list@python.org
Subject: Re: PSYCOPG2

On Fri, 12 Feb 2021 18:29:48 +, Tony Ogilvie wrote:

> I am trying to write a program to open a PostgesSQL 13 database using 
> psycopg2. All seems to work if I write direct to Python but if I write 
> the script into IDLE it does not work with the IDLE Shell 3.9.1 
> reporting an error of no attribute 'connect'.
> 
> 
>  
> I have tried many options to try and get this to work.
> 
> 
>  
> Regards
> 
> 
>  
> Tony

It looks like your idle is not using the same interpreter that you are using
when writing direct code. Anyway, my advice would be to ditch Idle and use
VSCode. It's fabulous.



--
Mladen Gogala
Database Consultant
https://dbwhisperer.wordpress.com


-- 
https://mail.python.org/mailman/listinfo/python-list


PSYCOPG2

2021-02-12 Thread Tony Ogilvie
I am trying to write a program to open a PostgesSQL 13 database using
psycopg2. All seems to work if I write direct to Python but if I write the
script into IDLE it does not work with the IDLE Shell 3.9.1 reporting an
error of no attribute 'connect'.

 

I have tried many options to try and get this to work.
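One frequent cause of an AttributeError like this — an assumption, not confirmed in this thread — is a local file or folder named psycopg2 next to the script shadowing the installed package, or IDLE running a different interpreter. Both can be checked from the failing environment (the module name "json" below is just a stand-in for "psycopg2"):

```python
import importlib.util
import sys

def module_origin(name):
    """Return the file a module name resolves to, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# A path under your script's directory rather than site-packages means a
# local file is shadowing the real package.
print(sys.executable)          # which interpreter IDLE is actually using
print(module_origin("json"))   # substitute "psycopg2" in the real check
```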

 

Regards

 

Tony

 

 



 

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue43001] python3.9.1 test_embed test_tabnanny failed

2021-02-11 Thread Tony Martin Berbel

Tony Martin Berbel  added the comment:

My system crashed completely. I reinstalled Ubuntu. Sorry I couldn't help
more ... :(
___

MARTIN BERBEL, Tony
GSM: +32 (0) 477 / 33.12.48

--

Le mer. 10 févr. 2021 à 04:06, Tony Martin Berbel 
a écrit :

>
> Tony Martin Berbel  added the comment:
>
> I had the same error
> I ran the make test command with >
> But I don't know where to look for the log file
>
> --
> nosy: +wingarmac
>
> ___
> Python tracker 
> <https://bugs.python.org/issue43001>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue43001>
___



[issue43001] python3.9.1 test_embed test_tabnanny failed

2021-02-09 Thread Tony Martin Berbel


Tony Martin Berbel  added the comment:

I found lastlog and attached it !

--
Added file: https://bugs.python.org/file49800/lastlog

___
Python tracker 
<https://bugs.python.org/issue43001>
___



[issue43001] python3.9.1 test_embed test_tabnanny failed

2021-02-09 Thread Tony Martin Berbel


Tony Martin Berbel  added the comment:

I had the same error
I ran the make test command with > 
But I don't know where to look for the log file

--
nosy: +wingarmac

___
Python tracker 
<https://bugs.python.org/issue43001>
___



[issue43160] argparse: add extend_const action

2021-02-08 Thread Tony Lykke


Tony Lykke  added the comment:

Sorry, there's a typo in my last comment.

--store --foo a
Namespace(foo=['a', 'b', 'c'])

from the first set of examples should have been

--store --foo c
Namespace(foo=['a', 'b', 'c'])

--

___
Python tracker 
<https://bugs.python.org/issue43160>
___



[issue43160] argparse: add extend_const action

2021-02-08 Thread Tony Lykke

Tony Lykke  added the comment:

Perhaps the example I added to the docs isn't clear enough and should be 
changed because you're right, that specific one can be served by store_const. 
Turns out coming up with examples that are minimal but not too contrived is 
hard! Let me try again with a longer example that hopefully shows more clearly 
how the existing action's behaviours differ from my patch.

parser = argparse.ArgumentParser()
parser.add_argument("--foo", action="append", default=[])
    parser.add_argument("--append", action="append_const", dest="foo", 
const=["a", "b"])
    parser.add_argument("--store", action="store_const", dest="foo", 
const=["a", "b"])

When run on master the following behaviour is observed:

--foo a --foo b --foo c
Namespace(foo=['a', 'b', 'c'])
--foo c --append
Namespace(foo=['c', ['a', 'b']])
--foo c --store
Namespace(foo=['a', 'b'])
--store --foo a
Namespace(foo=['a', 'b', 'c'])

If we then add the following:

parser.add_argument("--extend", action="extend_const", dest="foo", 
const=["a", "b"])

and then run it with my patch the following can be observed:

--foo c --extend
Namespace(foo=['c', 'a', 'b'])
--extend --foo c
Namespace(foo=['a', 'b', 'c'])

store_const is actually a pretty close fit, but the way it makes order 
significant (specifically in that it will silently drop prev values) seems like 
it'd be rather surprising to users and makes it a big enough footgun for this 
use case that I don't think it's a satisfactory alternative.

> I suspect users of your addition will get a surprise if they aren't careful 
> to provide a list or tuple 'const'

I did consider that, but I don't think they'd get any more of a surprise than 
for doing the same with list.extend vs list.append.
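For anyone wanting this behaviour before the patch lands, the "straight forward" custom action mentioned above might look like this (a sketch, not the actual patch; ExtendConstAction is a name invented here):

```python
import argparse

class ExtendConstAction(argparse.Action):
    """Like append_const, but extends the destination list with const."""
    def __init__(self, option_strings, dest, const=None, default=None, **kwargs):
        super().__init__(option_strings, dest, nargs=0, const=const,
                         default=default, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        items = list(getattr(namespace, self.dest, None) or [])
        items.extend(self.const)   # extend, not append
        setattr(namespace, self.dest, items)

parser = argparse.ArgumentParser()
parser.add_argument("--filter", dest="filters", action="append", default=[])
parser.add_argument("--filter-x-y", dest="filters",
                    action=ExtendConstAction, const=["x", "y"])
args = parser.parse_args(["--filter", "z", "--filter-x-y"])
print(args)   # Namespace(filters=['z', 'x', 'y'])
```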

--

___
Python tracker 
<https://bugs.python.org/issue43160>
___



[issue43160] argparse: add extend_const action

2021-02-07 Thread Tony Lykke


Change by Tony Lykke :


--
keywords: +patch
pull_requests: +23269
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24478

___
Python tracker 
<https://bugs.python.org/issue43160>
___



[issue43160] argparse: add extend_const action

2021-02-07 Thread Tony Lykke


New submission from Tony Lykke :

I submitted this to the python-ideas mailing list early last year: 
https://mail.python.org/archives/list/python-id...@python.org/thread/7ZHY7HFFQHIX3YWWCIJTNB4DRG2NQDOV/.
 Recently I had some time to implement it (it actually turned out to be pretty 
trivial), so thought I'd put forward a PR.

Here's the summary from the mailing list submission:

I have found myself a few times in a position where I have a repeated argument 
that uses the append action, along with some convenience arguments that append 
a specific const to that same dest (eg:  --filter-x being made equivalent to 
--filter x via append_const). This is particularly useful in cli apps that 
expose some kind of powerful-but-verbose filtering capability, while also 
providing shorter aliases for common invocations. I'm sure there are other use 
cases, but this is the one I'm most familiar with.

The natural extension to this filtering idea are convenience args that set two 
const values (eg: --filter x --filter y being equivalent to --filter-x-y), but 
there is no extend_const action to enable this.

While this is possible (and rather straight forward) to add via a custom 
action, I feel like this should be a built-in action instead. append has 
append_const, it seems intuitive and reasonable to expect extend to have 
extend_const too (my anecdotal experience the first time I came across this 
need was that I simply tried using extend_const without checking the docs, 
assuming it already existed).

Here's an excerpt from the docs I drafted for this addition that hopefully 
convey the intent and use case clearly.

+* ``'extend_const'`` - This stores a list, and extends each argument value to 
the list.
+  The ``'extend_const'`` action is typically useful when you want to provide 
an alias
+  that is the combination of multiple other arguments. For example::
+
+>>> parser = argparse.ArgumentParser()
+>>> parser.add_argument('--str', dest='types', action='append_const', 
const=str)
+>>> parser.add_argument('--int', dest='types', action='append_const', 
const=int)
+>>> parser.add_argument('--both', dest='types', action='extend_const', 
const=(str, int))
+>>> parser.parse_args('--str --int'.split())
+Namespace(types=[<class 'str'>, <class 'int'>])
+>>> parser.parse_args('--both'.split())
+Namespace(types=[<class 'str'>, <class 'int'>])

--
components: Library (Lib)
messages: 386614
nosy: rhettinger, roganartu
priority: normal
severity: normal
status: open
title: argparse: add extend_const action
type: enhancement
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue43160>
___



[issue43130] Should this construct throw an exception?

2021-02-04 Thread Tony Ladd


Tony Ladd  added the comment:

Dennis

Thanks for the explanation. Sorry to post a fake report. Python is relentlessly 
logical but sometimes confusing.

--

___
Python tracker 
<https://bugs.python.org/issue43130>
___



[issue43130] Should this construct throw an exception?

2021-02-04 Thread Tony Ladd


New submission from Tony Ladd :

The expression "1 and 2" evaluates to 2. Actually for most combinations of data 
type it returns the second object. Of course its a senseless construction (a 
beginning student made it) but why no exception?
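For context, this is documented behaviour rather than an oversight: `and` and `or` short-circuit and return one of their operands unchanged instead of coercing to a bool, which is also why no exception is raised:

```python
# `x and y` evaluates to x if x is falsy, otherwise to y;
# `x or y`  evaluates to x if x is truthy, otherwise to y.
assert (1 and 2) == 2
assert (0 and 2) == 0
assert (1 or 2) == 1
assert ([] or "fallback") == "fallback"
```

This is what makes idioms like `name = value or default` work.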

--
components: Interpreter Core
messages: 386496
nosy: tladd
priority: normal
severity: normal
status: open
title: Should this construct throw an exception?
type: behavior

___
Python tracker 
<https://bugs.python.org/issue43130>
___



[issue42173] Drop Solaris support

2020-10-30 Thread Tony Albers


Tony Albers  added the comment:

No no no, please don't.

Apart from FreeBSD, illumos distros are the only really hard-core UNIX OS'es 
still freely available, the features taken into account. 
SMF, dtrace and several hypervisor types makes illumos really stand out.

I understand that there are resources that need to be assigned to maintaining 
Python on illumos/SunOS, but please reach out to their communities, maybe 
someone can help.

--
nosy: +tbalbers

___
Python tracker 
<https://bugs.python.org/issue42173>
___



Re: Embedding version in command-line program

2020-10-17 Thread Tony Flury via Python-list



On 07/10/2020 12:06, Loris Bennett wrote:

Hi,

I have written a program, which I can run on the command-line thus

   mypyprog --version

and get the version, which is currently derived from a variable in
the main module file.

However, I also have the version in an __init__.py file and in a
pyproject.toml (as I'm using poetry, otherwise this would be in
setup.py).

What's the best way of reducing these three places where the version is
defined to a single one?

Cheers,

Loris
My top level package always has a version.py file which defines 
__version__, __author__ etc. I am not sure if that helps in your .toml 
file though - is it executed or does it have the ability to read files 
when it creates the distributable ?
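One way to reduce the three copies to one — a sketch assuming the project is installed as a distribution whose name matches the one in pyproject.toml — is to keep the version only in the packaging metadata and read it back at runtime with importlib.metadata (Python 3.8+):

```python
from importlib import metadata

def get_version(dist_name):
    """Read the version from the installed distribution's metadata."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        # e.g. running from an uninstalled source checkout
        return "0.0.0+unknown"

print(get_version("mypyprog"))  # "mypyprog" is the poster's example name
```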

--
https://mail.python.org/mailman/listinfo/python-list


Simple question - end a raw string with a single backslash ?

2020-10-13 Thread Tony Flury via Python-list
I am trying to write a simple expression to build a raw string that ends 
in a single backslash. My understanding is that a raw string should 
ignore attempts at escaping characters but I get this :


>>> a = r'end\'
  File "<stdin>", line 1
    a = r'end\'
  ^
   SyntaxError: EOL while scanning string literal

I interpret this as meaning that the \' is actually being interpreted as 
a literal quote - is that a bug ?


If I try to escaped the backslash I get a different problem:

>>> a = r'end\\'
>>> a
   'end'
>>> print(a)
   end\\
>>> len(a)
   5
>>> list(a)
   ['e', 'n', 'd', '\\', '\\']

So you can see that our string (with the escaped backslash)  is now 5 
characters with two literal backslash characters


The only solution I have found is to do this :

>>> a = r'end' + chr(92)
>>> a
   'end\\'
>>> list(a)
   ['e', 'n', 'd', '\\']

or


>>> a = r'end\\'[:-1]
>>> list(a)
   ['e', 'n', 'd', '\\']

Neither of which are nice.
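For the record, this is documented tokenizer behaviour rather than a bug: even in a raw string, a trailing backslash still escapes the closing quote as far as the lexer is concerned, so a raw literal may not end in an odd number of backslashes. Two spellings that avoid the chr(92) and slicing tricks:

```python
# Implicit concatenation of adjacent literals: r'end' followed by a
# one-backslash plain literal.
a = r'end' '\\'
# Or simply an ordinary literal with one escaped backslash.
b = 'end\\'
assert a == b
assert list(a) == ['e', 'n', 'd', '\\']
```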



--
https://mail.python.org/mailman/listinfo/python-list


[issue41829] sqlite3.Row always read as tuple when supplied to executemany

2020-09-21 Thread Tony Wu


Change by Tony Wu :


--
nosy: +ghaering

___
Python tracker 
<https://bugs.python.org/issue41829>
___



[issue41829] sqlite3.Row always read as tuple when supplied to executemany

2020-09-21 Thread Tony Wu


New submission from Tony Wu :

Supplying a sequence of sqlite3.Row objects to sqlite3.Connection.executemany 
will cause the Row objects to be interpreted as Sequences instead of Mappings, 
even if the statement to be executed uses named parameter substitution.

That is, values in the Rows are accessed using their numerical indices instead 
of column names.

This script demonstrates how this is unexpected behavior.

Issue found in Python 3.8.5 and 3.7.6.
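Until the behaviour is addressed, one workaround (an assumption, not taken from the attached script) is to convert each Row to a plain dict, which makes named-parameter substitution bind by column name:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.row_factory = sqlite3.Row
src.execute("CREATE TABLE t (a INTEGER, b TEXT)")
src.execute("INSERT INTO t VALUES (1, 'x')")
rows = src.execute("SELECT a, b FROM t").fetchall()

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE t (a INTEGER, b TEXT)")
# dict(row) exposes the Row as a Mapping, so :a and :b bind by name.
dst.executemany("INSERT INTO t VALUES (:a, :b)", [dict(r) for r in rows])
print(dst.execute("SELECT a, b FROM t").fetchall())  # [(1, 'x')]
```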

--
components: Extension Modules
files: row_substitution.py
messages: 377295
nosy: tony.wu
priority: normal
severity: normal
status: open
title: sqlite3.Row always read as tuple when supplied to executemany
type: behavior
versions: Python 3.8
Added file: https://bugs.python.org/file49471/row_substitution.py

___
Python tracker 
<https://bugs.python.org/issue41829>
___



[issue41754] Webbrowser Module Cannot Find xdg-settings on OSX

2020-09-10 Thread Tony DiLoreto


New submission from Tony DiLoreto :

The following code does not work on many OSX installations of Python via 
homebrew:

>>> import webbrowser
>>> webbrowser.open("http://www.google.com")

And throws the following error stack trace:

  File 
"/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/webbrowser.py",
 line 26, in register
register_standard_browsers()
  File 
"/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/webbrowser.py",
 line 551, in register_standard_browsers
raw_result = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
  File 
"/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py",
 line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File 
"/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py",
 line 489, in run
with Popen(*popenargs, **kwargs) as process:
  File 
"/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py",
 line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
  File 
"/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/subprocess.py",
 line 1702, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
NotADirectoryError: [Errno 20] Not a directory: 'xdg-settings'



The only workaround right now is to modify webbrowser.py via the instructions 
here: https://github.com/jupyter/notebook/issues/3746#issuecomment-489259515.

Thank you for resolving.

--
components: Library (Lib), macOS
messages: 376672
nosy: ned.deily, ronaldoussoren, tony.diloreto
priority: normal
severity: normal
status: open
title: Webbrowser Module Cannot Find xdg-settings on OSX
type: crash
versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue41754>
___



Re: Any ideas for a new language inspired to Python?

2020-09-05 Thread Tony Flury via Python-list

On 08/08/2020 18:18, Marco Sulla wrote:




Thank you, some features are interesting, even if I prefer the Python syntax.

What about the compiler? Is it better to "compile" to C or to
bytecode? How can I generate a bytecode that can be compiled by gcc?
Can I skip the AST generation for now, or it will be a great problem
later?


Most modern compilers use an AST - it is simply an internal 
representation of the syntax, and for most compilers it is an 
intermediate step before code generation.


I think you mean skipping the bytecode generation and generating 
straight to C/machine code.
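As a concrete illustration, Python exposes its own parser's AST, which is exactly such an intermediate representation between syntax and code generation:

```python
import ast

tree = ast.parse("x = 1 + 2")
# The CPython compiler walks a tree like this to emit bytecode; a
# transpiler would walk the same tree to emit C instead.
print(ast.dump(tree.body[0]))
```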


--

Tony Flury

--
https://mail.python.org/mailman/listinfo/python-list


[issue41246] IOCP Proactor same socket overlapped callbacks

2020-08-29 Thread Tony


Tony  added the comment:

bump

--

___
Python tracker 
<https://bugs.python.org/issue41246>
___



[issue41533] Bugfix: va_build_stack leaks the stack if do_mkstack fails

2020-08-29 Thread Tony


Tony  added the comment:

bump

--

___
Python tracker 
<https://bugs.python.org/issue41533>
___



[issue41279] Add a StreamReaderBufferedProtocol

2020-08-29 Thread Tony


Tony  added the comment:

bump

--
title: Convert StreamReaderProtocol to a BufferedProtocol -> Add a 
StreamReaderBufferedProtocol

___
Python tracker 
<https://bugs.python.org/issue41279>
___



[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX

2020-08-17 Thread Tony Reix


Tony Reix  added the comment:

Hi Stefan,
In your message https://bugs.python.org/issue41540#msg375462 , you said:
 "However, instead of freezing the machine, the process gets a proper SIGKILL 
almost instantly."
That's probably due to a very small size of the Paging Space of the AIX machine 
you used for testing. With very small PS, the OS quickly reaches the step where 
PS and memory are full and it tries to kill possible culprits (but often 
killing innocent processes, like my bash shell). However, with a large PS (size 
of the Memory, or half), it takes some time for the OS to consume the PS, and, 
during this time (many seconds if not minutes), the OS looks like frozen and it 
takes many seconds or minutes for a "kill -9 PID" to take effect.

About -bmaxdata, I always used it for extending default memory of a 32bit 
process, but I never used it for reducing the possible memory of a 64bit 
process since some users may want to use python with hundreds of GigaBytes of 
memory. And the python executable used for tests is the same one that is 
delivered to users.

About PSALLOC=early , I confirm that it perfectly fixes the issue. So, we'll 
use it when testing Python.
Our customers should use it or use ulimit -d .
But using -bmaxdata for building python process in 64bit would reduce the 
possibilities of the python process.
In the future, we'll probably improve the compatibility with Linux so that this 
(rare) case no more appear.

BTW, on AIX, we have only 12 test cases failing out of about 32,471 test cases 
run in 64bit, with probably only 5 remaining serious failures. Both with GCC 
and XLC. Not bad. Less in 32bit. Now studying these few remaining issues and 
the still skipped tests.

--

___
Python tracker 
<https://bugs.python.org/issue41540>
___



[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX

2020-08-13 Thread Tony Reix


Tony Reix  added the comment:

I forgot to say that this behavior was not present in stable version 3.8.5 . 
Sorry.

On 2 machines AIX 7.2, testing Python 3.8.5 with:
+ cd /opt/freeware/src/packages/BUILD/Python-3.8.5
+ ulimit -d unlimited
+ ulimit -m unlimited
+ ulimit -s unlimited
+ export 
LIBPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/64bit:/usr/lib64:/usr/lib:/opt/lib
+ export PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/64bit/Modules
+ ./python Lib/test/regrtest.py -v test_decimal
...
gave:

507 tests in 227 items.
507 passed and 0 failed.
Test passed.

So, this issue with v3.10 (master) appeared to me as a regression. However, 
after hours debugging the issue, I forgot to say it in this defect, sorry.

(Previously, I was using limits for -d -m and -s : max 4GB. However, that 
appeared to be an issue when running tests with Python test option -M12Gb, 
which requires up to, and maybe more than, 12GB of my 16GB memory machine in order 
to be able to run a large part of the Python Big Memory tests. And thus I 
unlimited these 3 resources, with no problem at all with version 3.8.5 .)

--

___
Python tracker 
<https://bugs.python.org/issue41540>
___



[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX

2020-08-13 Thread Tony Reix


Tony Reix  added the comment:

Is it a 64bit AIX? Yes, AIX has been 64bit by default (and 64bit only) for ages, but it 
manages 32bit applications as well as 64bit applications.

The experiments were done with 64bit Python executables on both AIX and Linux.

The AIX machine has 16GB Memory and 16GB Paging Space.

The Linux Fedora 32/x86_64 machine has 16GB Memory and 8269820 Paging Space 
(swapon -s).

Yes, I agree that the behavior of AIX malloc() under "ulimit -d unlimited" 
is... surprising. And the manual of malloc() does not talk about this.

Anyway, was the test:   if (size > (size_t)PY_SSIZE_T_MAX)  aimed at 
preventing calls to malloc() with such a huge size? If yes, it does not work.

--

___
Python tracker 
<https://bugs.python.org/issue41540>
___



[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX

2020-08-13 Thread Tony Reix


Tony Reix  added the comment:

Hi Pablo,
I'm only surprised that the maximum size generated in the test is always lower 
than the PY_SSIZE_T_MAX. And this appears both on AIX and on Linux, which both 
compute the same values.

On AIX, it appears (I've just discovered this now) that malloc() does not 
ALWAYS check that there is enough memory to allocate before starting to claim 
memory (and thus paging space). This appears when Data Segment size is 
unlimited.

On Linux/Fedora, I had no limit too. But it behaves differently and malloc() 
always checks that the size is correct.

--

___
Python tracker 
<https://bugs.python.org/issue41540>
___



[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX

2020-08-13 Thread Tony Reix


Tony Reix  added the comment:

Some more explanations.

On AIX, the memory is controlled by the ulimit command.
"Global memory" comprises the physical memory and the paging space, associated 
with the Data Segment.

By default, both Memory and Data Segment are limited:
# ulimit -a
data seg size   (kbytes, -d) 131072
max memory size (kbytes, -m) 32768
...

However, it is possible to remove the limit, like:
# ulimit -d unlimited

Now, when the "data seg size" is limited, the malloc() routine checks if enough 
memory/paging-space are available, and it immediately returns a NULL pointer.

But, when the "data seg size" is unlimited, the malloc() routine first tries to 
allocate and quickly consumes the paging space, which is much slower than 
acquiring memory since it consumes disk space. And it nearly hangs the OS. 
Thus, in that case, it does NOT check whether enough memory or data segment 
space is available. Bad.

So, this issue appears on AIX only if we have:
# ulimit -d unlimited

Anyway, the test:
if (size > (size_t)PY_SSIZE_T_MAX)
in:
Objects/obmalloc.c: PyMem_RawMalloc()
seems weird to me since the max of size is always lower than PY_SSIZE_T_MAX .

--
nosy:  -facundobatista, mark.dickinson, pablogsal, rhettinger, skrah

___
Python tracker 
<https://bugs.python.org/issue41540>
___



[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX

2020-08-13 Thread Tony Reix


New submission from Tony Reix :

Python master of 2020/08/11

Test test_maxcontext_exact_arith (test.test_decimal.CWhitebox) checks that 
Python correctly handles a case where an object of size 421052631578947376 is 
created.

maxcontext = Context(prec=C.MAX_PREC, Emin=C.MIN_EMIN, Emax=C.MAX_EMAX)

Both on Linux and AIX, we have:
Context(prec=99,
rounding=ROUND_HALF_EVEN,
Emin=-99,
 Emax=99, capitals=1, clamp=0, flags=[], 
traps=[InvalidOperation, DivisionByZero, Overflow])

The test appears in:
  Lib/test/test_decimal.py
5665 def test_maxcontext_exact_arith(self):
and the issue (on AIX) exactly appears at:
self.assertEqual(Decimal(4) / 2, 2)

The issue is due to code in: Objects/obmalloc.c :
void *
PyMem_RawMalloc(size_t size)
{
/*
 * Limit ourselves to PY_SSIZE_T_MAX bytes to prevent security holes.
 * Most python internals blindly use a signed Py_ssize_t to track
 * things without checking for overflows or negatives.
 * As size_t is unsigned, checking for size < 0 is not required.
 */
if (size > (size_t)PY_SSIZE_T_MAX)
return NULL;
return _PyMem_Raw.malloc(_PyMem_Raw.ctx, size);

Both on Fedora/x86_64 and AIX, we have:
 size:421052631578947376
 PY_SSIZE_T_MAX: 9223372036854775807
thus: size < PY_SSIZE_T_MAX and _PyMem_Raw.malloc() is called.

However, on Linux, the malloc() returns a NULL pointer in that case, and then 
Python handles this and correctly runs the test.
However, on AIX, the malloc() tries to allocate the requested memory, and the 
OS gets stucked till the Python process is killed by the OS.

Either size is too small, or PY_SSIZE_T_MAX is not correctly computed:
./Include/pyport.h :
  /* Largest positive value of type Py_ssize_t. */
  #define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1))

Anyway, the following code added in PyMem_RawMalloc() before the call to 
_PyMem_Raw.malloc() , which in turns calls malloc() :
if (size == 421052631578947376)
{
    printf("TONY: 421052631578947376: --> PY_SSIZE_T_MAX: %ld \n", 
PY_SSIZE_T_MAX);
return NULL;
}
does fix the issue on AIX.
However, it is simply a way to show where the issue can be fixed.
Another solution (fix size < PY_SSIZE_T_MAX) is needed.
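
Viewed from Python, the guard's arithmetic is easy to check (a minimal sketch,
not the actual test; on CPython, `sys.maxsize` equals PY_SSIZE_T_MAX):

```python
import sys

# The request from the Decimal test is far below PY_SSIZE_T_MAX, so the
# PyMem_RawMalloc() guard passes, and the platform malloc() alone decides
# whether the roughly 374 PiB request fails cleanly or hangs the machine.
size = 421052631578947376          # bytes requested by the test
assert size <= sys.maxsize         # guard passes on a 64-bit build
print(size // 2**50, "PiB")
```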

--
components: C API
messages: 375302
nosy: T.Rex
priority: normal
severity: normal
status: open
title: Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX
type: crash
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41540>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41533] Bugfix: va_build_stack leaks the stack if do_mkstack fails

2020-08-12 Thread Tony


Change by Tony :


--
keywords: +patch
pull_requests: +20974
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21847

___
Python tracker 
<https://bugs.python.org/issue41533>
___



[issue41533] Bugfix: va_build_stack leaks the stack if do_mkstack fails

2020-08-12 Thread Tony


New submission from Tony :

When calling a function, a stack is allocated via va_build_stack.

There is a leak if do_mkstack fails there.

--
messages: 375267
nosy: tontinton
priority: normal
severity: normal
status: open
title: Bugfix: va_build_stack leaks the stack if do_mkstack fails

___
Python tracker 
<https://bugs.python.org/issue41533>
___



[issue38628] Issue with ctypes in AIX

2020-08-04 Thread Tony Reix


Tony Reix  added the comment:

I do agree that the example with memchr is not correct.

About your suggestion, I've done it. With 32. And that works fine.
All 3 values are passed by value.


# cat Pb-3.8.5.py
#!/usr/bin/env python3

from ctypes import *

mine = CDLL('./MemchrArgsHack2.so')

class MemchrArgsHack2(Structure):
_fields_ = [("s",   c_char_p),
("c_n", c_ulong * 2)]

memchr_args_hack2 = MemchrArgsHack2()
memchr_args_hack2.s = b"abcdef"
memchr_args_hack2.c_n[0] = ord('d')
memchr_args_hack2.c_n[1] = 7

print( "sizeof(MemchrArgsHack2): ", sizeof(MemchrArgsHack2) )

print( CFUNCTYPE(c_char_p, MemchrArgsHack2, c_void_p)   (('my_memchr', 
mine)) (memchr_args_hack2, None) )


# cat MemchrArgsHack2.c
#include 
#include 

struct MemchrArgsHack2
{
char *s;
unsigned long c_n[2];
};

extern char *my_memchr(struct MemchrArgsHack2 args)
{
printf("s   element  : char pointer: %p %s\n", args.s, args.s);
printf("c_n element 0: a Long:   %ld\n",   args.c_n[0]);
printf("c_n element 1: a Long:   %ld\n",   args.c_n[1]);

return(args.s +3);
}
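
The size that drives libffi's choice of calling convention can be inspected
from pure Python, without building any shared library (a sketch; the 24-byte
figure assumes an LP64 platform such as 64-bit AIX or Linux):

```python
from ctypes import Structure, c_char_p, c_ulong, sizeof

class MemchrArgsHack2(Structure):
    _fields_ = [("s",   c_char_p),
                ("c_n", c_ulong * 2)]

# LP64: 8-byte pointer + 2 * 8-byte unsigned long = 24 bytes,
# i.e. above the 16-byte small-struct threshold discussed in this
# thread, but below the raised MAX_STRUCT_SIZE of 32.
print(sizeof(MemchrArgsHack2))
```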



TONY Modules/_ctypes/stgdict.c: MAX_STRUCT_SIZE=32
sizeof(MemchrArgsHack2):  24
TONY: libffi: src/powerpc/ffi_darwin.c : ffi_prep_cif_machdep()
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
s->size:24 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
FFI_TYPE_STRUCT Before s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
s->size:16 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
FFI_TYPE_STRUCT Before s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:11 : FFI_TYPE_UINT64
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:11 : FFI_TYPE_UINT64
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After 
ALIGN s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
p->size:16 s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After 
ALIGN s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c: ffi_call: FFI_AIX
TONY: libffi: cif->abi:  1  -(long)cif->bytes : -144  cif->flags :  8  
ecif.rvalue : fffd210  fn: 9001000a0227760  FFI_FN(ffi_prep_args) : 
9001000a050a108

s   element  : char pointer: a154d40 abcdef
c_n element 0: a Long:   100
c_n element 1: a Long:   7<<<<  Correct value appears.
b'def'

With the regular version (16), I have:

sizeof(MemchrArgsHack2):  24
TONY: libffi: src/powerpc/ffi_darwin.c : ffi_prep_cif_machdep()
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
s->size:24 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
FFI_TYPE_STRUCT Before s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After 
ALIGN s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c: ffi_call: FFI_AIX
TONY: libffi: cif->abi:  1  -(long)cif->bytes : -144  cif->flags :  8  
ecif.rvalue : fffd210  fn: 9001000a0227760  FFI_FN(ffi_prep_args) : 
9001000a050a108

s   element  : char pointer: a154d40 abcdef
c_n element 0: a Long:   100
c_n element 1: a Long:   0<<< Python pushed nothing for this.

--

___
Python tracker

[issue38628] Issue with ctypes in AIX

2020-08-03 Thread Tony Reix


Tony Reix  added the comment:

After further investigation, we (Damien and I) think that there are several 
issues in Python 3.8.5:

1) Documentation.
  a) AFAIK, the only place in the Python ctypes documentation where it talks 
about how arrays in a structure are managed appears at: 
https://docs.python.org/3/library/ctypes.html#arrays
  b) the size of the structure in the example given here is much greater than 
in our case.
  c) The documentation does NOT mention that a structure <= 16 bytes and a 
structure greater than 16 bytes are managed differently. That's a bug in the 
documentation vs the code.

2) Tests
  Looking at the tests, there are NO tests covering our case.

3) There is a bug in Python
  About the issue here, we see with gdb that Python provides libffi with a 
description saying that our case is passed as pointers. However, Python does 
NOT provide libffi with pointers for the array c_n, but with values.

4) libffi obeys Python directives given in description, thinking that it deals 
with 2 pointers, and thus it pushes only 2 values in registers R3 and R4.

=
Bug in Python:
-
1) gdb
(gdb) b ffi_call

Breakpoint 1 at 0x900016fab80: file ../src/powerpc/ffi_darwin.c, line 919.

(gdb) run

Starting program: /home2/freeware/bin/python3 /tmp/Pb_damien2.py

Thread 2 hit Breakpoint 1, ffi_call (cif=0xfffd108,

fn=@0x9001000a0082640: 0x91b0d60 ,

rvalue=0xfffd1d0, avalue=0xfffd1c0)

(gdb) p *(ffi_cif *)$r3

$9 = {abi = FFI_AIX, nargs = 2, arg_types = 0xfffd1b0, rtype = 
0xa435cb8, bytes = 144, flags = 8}

(gdb) x/2xg 0xfffd1b0

0xfffd1b0:  0x0a43ca48  0x08001000a0002a10

(gdb) p *(ffi_type *)0x0a43ca48

$11 = {size = 16, alignment = 8, type = 13, elements = 0xa12eed0}   <= 
13==FFI_TYPE_STRUCT size == 16 on AIX!!! == 24 on Linux

(gdb) p *(ffi_type *)0x08001000a0002a10

$12 = {size = 8, alignment = 8, type = 14, elements = 0x0} <= FFI_TYPE_POINTER


(gdb) x/3xg *(long *)$r6

0xa436050:  0x0a152200  0x0064

0xa436060:  0x0007  <= 7 is present in avalue[2]

(gdb) x/s 0x0a152200

0xa152200:  "abcdef"

-
2) prints in libffi: AIX : aix_adjust_aggregate_sizes()

TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
s->size:24 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() 
FFI_TYPE_STRUCT Before s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 
8 s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After 
ALIGN s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 
8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c: ffi_call: FFI_AIX
TONY: libffi: cif->abi:  1  -(long)cif->bytes : -144  cif->flags :  8  
ecif.rvalue : fffd200  fn: 9001000a0227760  FFI_FN(ffi_prep_args) : 
9001000a050a108
s   element  : char pointer: a153d40 abcdef
c_n element 0: a Long:   100  0X64 = 100  instead of a pointer
c_n element 1: a Long:   0  libffi obeys description given by Python 
and pushes to R4 only what it thinks is a pointer (100 instead), and nothing in 
R5



Summary:
- Python documentation is incomplete vs the code
- Python code gives libffi a description about pointers
  but Python code provides libffi with values.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-27 Thread Tony Reix


Tony Reix  added the comment:

Fedora32/x86_64 : Python v3.8.5 : optimized : uint type.

If the Pb.py program uses the uint type instead of ulong, the issue is 
different: see below.
This means that the issue depends on the size of the data.

BUILD=optimized
TYPE=int
export 
LD_LIBRARY_PATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized:/usr/lib64:/usr/lib
export 
PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/Modules
./Pb-3.8.5-int-optimized.py
b'def'
None
None

# cat ./Pb-3.8.5-int-optimized.py
#!/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/python

# #!/opt/freeware/src/packages/BUILD/Python-3.8.5/python
#   #!/usr/bin/env python3

from ctypes import *

libc = CDLL('/usr/lib64/libc-2.31.so')

class MemchrArgsHack(Structure):
_fields_ = [("s", c_char_p), ("c", c_uint), ("n", c_uint)]

memchr_args_hack = MemchrArgsHack()
memchr_args_hack.s = b"abcdef"
memchr_args_hack.c = ord('d')
memchr_args_hack.n = 7

class MemchrArgsHack2(Structure):
_fields_ = [("s", c_char_p), ("c_n", c_uint * 2)]

memchr_args_hack2 = MemchrArgsHack2()
memchr_args_hack2.s = b"abcdef"
memchr_args_hack2.c_n[0] = ord('d')
memchr_args_hack2.c_n[1] = 7

print( CFUNCTYPE(c_char_p, c_char_p, c_uint, c_uint, c_void_p)(('memchr', 
libc))(b"abcdef", c_uint(ord('d')), c_uint(7), None))
print( CFUNCTYPE(c_char_p, MemchrArgsHack, c_void_p)(('memchr', 
libc))(memchr_args_hack, None))
print( CFUNCTYPE(c_char_p, MemchrArgsHack2, c_void_p)(('memchr', 
libc))(memchr_args_hack2, None))

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-27 Thread Tony Reix


Change by Tony Reix :


--
versions: +Python 3.8 -Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-27 Thread Tony Reix


Tony Reix  added the comment:

Fedora32/x86_64 : Python v3.8.5 has been built.
The issue is still there, but differs between debug and optimized modes.
Thus, change done in https://bugs.python.org/issue22273 did not fix this issue.

./Pb-3.8.5-debug.py :
#!/opt/freeware/src/packages/BUILD/Python-3.8.5/build/debug/python
...

./Pb-3.8.5-optimized.py :
#!/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/python


BUILD=debug
export 
LD_LIBRARY_PATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/debug:/usr/lib64:/usr/lib
export 
PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/debug/Modules
./Pb-3.8.5-debug.py
b'def'
None
None

BUILD=optimized
export 
LD_LIBRARY_PATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized:/usr/lib64:/usr/lib
export 
PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/Modules
+ ./Pb-3.8.5-optimized.py
b'def'
Pb-3.8.5.sh: line 6: 103569 Segmentation fault  (core dumped) 
./Pb-3.8.5-$BUILD.py

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-27 Thread Tony Reix


Tony Reix  added the comment:

After adding traces and rebuilding Python and libffi with -O0 -g -gdwarf, it 
appears that, still in 64bit, the bug is still there, but that ffi_call_AIX 
is now called instead of ffi_call_DARWIN from the ffi_call() routine of 
../src/powerpc/ffi_darwin.c (lines 915...).
???

# ./Pb.py
TONY: libffi: src/powerpc/ffi_darwin.c : FFI_AIX
TONY: libffi: cif->abi: 1  -(long)cif->bytes : -144  cif->flags : 8  
ecif.rvalue : fffd1f0  fn: 9001000a0082640  FFI_FN(ffi_prep_args) : 
9001000a0483be8
b'def'
TONY: libffi: src/powerpc/ffi_darwin.c : FFI_AIX
TONY: libffi: cif->abi: 1  -(long)cif->bytes : -144  cif->flags : 8  
ecif.rvalue : fffd220  fn: 9001000a0082640  FFI_FN(ffi_prep_args) : 
9001000a0483be8
b'def'
TONY: libffi: src/powerpc/ffi_darwin.c : FFI_AIX
TONY: libffi: cif->abi: 1  -(long)cif->bytes : -144  cif->flags : 8  
ecif.rvalue : fffd220  fn: 9001000a0082640  FFI_FN(ffi_prep_args) : 
9001000a0483be8
None

In 32bit, with the same build environment, different code runs, since the 
traces are not printed.

Thus, 32bit and 64bit are managed very differently.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-27 Thread Tony Reix


Tony Reix  added the comment:

On AIX 7.2, with libffi compiled with -O0 -g, I have:

1) Call to memchr thru memchr_args_hack
#0  0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1  0x0900058487a0 in ffi_call_DARWIN () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
#2  0x090005847eec in ffi_call (cif=0xfff, fn=0xca90, 
rvalue=0xfff, avalue=0xca80) at ../src/powerpc/ffi_darwin.c:31
#3  0x0900058f9900 in ?? ()
#4  0x0900058ebb6c in ?? ()
#5  0x09000109fc18 in _PyObject_MakeTpCall () from 
/opt/freeware/lib64/libpython3.8.so

r3 0xa3659e0720575940382841312
r4 0x64 100
r5 0x7  7
(gdb) x/s $r3
0xa3659e0:  "abcdef"

2) Call to memchr thru memchr_args_hack2
#0  0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1  0x0900058487a0 in ffi_call_DARWIN () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
#2  0x090005847eec in ffi_call (cif=0xfff, fn=0xca90, 
rvalue=0xfff, avalue=0xca80) at ../src/powerpc/ffi_darwin.c:31
#3  0x0900058f9900 in ?? ()
#4  0x0900058ebb6c in ?? ()
#5  0x09000109fc18 in _PyObject_MakeTpCall () from 
/opt/freeware/lib64/libpython3.8.so

r3 0xa3659e0720575940382841312
r4 0x64 100
r5 0x0  0

So, it looks like, when libffi is compiled not with -O but with -O0 -g, 
ffi_call_DARWIN() is called in both cases in 64bit (memchr_args_hack and 
memchr_args_hack2).
However, as seen previously, that was not the case with libffi built with -O.

Moreover, we have in source code:
  switch (cif->abi)
{
case FFI_AIX:
  ffi_call_AIX(&ecif, -(long)cif->bytes, cif->flags, ecif.rvalue, fn,
   FFI_FN(ffi_prep_args));
  break;
case FFI_DARWIN:
  ffi_call_DARWIN(&ecif, -(long)cif->bytes, cif->flags, ecif.rvalue, fn,
  FFI_FN(ffi_prep_args), cif->rtype);

Why calling ffi_call_DARWIN instead of ffi_call_AIX ?

Hummm Will rebuild libffi and python both with gcc -O0 -g -gdwarf and look at 
details.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

# pwd
/opt/freeware/src/packages/BUILD/libffi-3.2.1

# grep -R ffi_closure_ASM *
powerpc-ibm-aix7.2.0.0/.libs/libffi.exp: ffi_closure_ASM
powerpc-ibm-aix7.2.0.0/include/ffitarget.h:void * code_pointer;   /* 
Pointer to ffi_closure_ASM */
src/powerpc/aix_closure.S:.globl ffi_closure_ASM
src/powerpc/darwin_closure.S:.globl _ffi_closure_ASM
src/powerpc/ffi_darwin.c: extern void ffi_closure_ASM (void);
  *((unsigned long *)[2]) = 
(unsigned long) ffi_closure_ASM; /* function  */
src/powerpc/ffitarget.h:  void * code_pointer;  /* Pointer to 
ffi_closure_ASM */

# grep -R ffi_call_AIX *
powerpc-ibm-aix7.2.0.0/.libs/libffi.exp:  ffi_call_AIX
src/powerpc/aix.S:.globl ffi_call_AIX
src/powerpc/ffi_darwin.c: extern void ffi_call_AIX(extended_cif 
*, long, unsigned, unsigned *,

In 64bit, I see that: ffi_darwin.c  is compiled and used for building 
libffi.so.6 .
Same in 32bit.

The code in file src/powerpc/ffi_darwin.c seems able to handle both FFI_AIX 
and FFI_DARWIN, dynamically based on cif->abi.

The code looks VERY complex!

The hypothesis is that the 64bit code has a bug vs the 32bit version.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

AIX: difference between 32bit and 64bit.

After the second print, the stack is:

32bit:
#0  0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1  0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2  0xd438effc in ffi_call () from /opt/freeware/lib/libffi.a(libffi.so.6)
#3  0xd14979bc in ?? ()
#4  0xd148995c in ?? ()
#5  0xd20fd5d8 in _PyObject_MakeTpCall () from /opt/freeware/lib/libpython3.8.so

64bit:
#0  0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1  0x090001217f00 in ffi_closure_ASM () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
#2  0x090001217aac in ffi_prep_closure_loc () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
#3  0x09d30900 in ?? ()
#4  0x09d22b6c in ?? ()
#5  0x09ebbc18 in _PyObject_MakeTpCall () from 
/opt/freeware/lib64/libpython3.8.so

So, the execution does not go through the same ffi routines in 32bit and in 
64bit. Bug?

It would be interesting to do the same with Python3 and libffi built with 
-O0 -g.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

On AIX in 32bit, we have:

Thread 2 hit Breakpoint 2, 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
(gdb) where
#0  0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1  0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2  0xd438effc in ffi_call () from /opt/freeware/lib/libffi.a(libffi.so.6)

(gdb) i register
r0 0xd01407e0   3490973664
r1 0x2ff20f80   804392832
r2 0xf07a3cc0   4034542784
r3 0xb024c558   2955199832
r4 0x64 100
r5 0x7  7
r6 0x0  0
...

(gdb) x/s 0xb024c558
0xb024c558: "abcdef"

r5 is OK.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

On Fedora/PPC64LE, where it is OK, the same debug with gdb gives:

(gdb) where
#0  0x77df03b0 in __memchr_power8 () from /lib64/libc.so.6
#1  0x7fffea167680 in ?? () from /lib64/libffi.so.6
#2  0x7fffea166284 in ffi_call () from /lib64/libffi.so.6
#3  0x7fffea1a7fdc in _ctypes_callproc () from 
/usr/lib64/python3.8/lib-dynload/_ctypes.cpython-38-ppc64le-linux-gnu.so
..
(gdb) i register
r0 0x7fffea167614  140737120728596
r1 0x7fffc490  140737488340112
r2 0x7fffea187f00  140737120861952
r3 0x7fffea33a140  140737122640192
r4 0x6464  25700
r5 0x7 7
r6 0x0 0
r7 0x7fffea33a147  140737122640199
r8 0x7fffea33a140  140737122640192

(gdb) x/s 0x7fffea33a140
0x7fffea33a140: "abcdef"

r3: string
r4 : 0x6464 : "d" ??
r5: 7 : length of the string !!!

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

On Fedora/x86_64, in order to get the core, one must do:
  coredumpctl -o /tmp/core dump /usr/bin/python3.8

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

On AIX:

root@castor4## gdb /opt/freeware/bin/python3
...
(gdb) run -m pdb Pb.py
...
(Pdb) n
b'def'
> /home2/freeware/src/packages/BUILD/Python-3.8.5/32bit/Pb.py(35)<module>()
-> print(
(Pdb) n
> /home2/freeware/src/packages/BUILD/Python-3.8.5/32bit/Pb.py(36)<module>()
-> CFUNCTYPE(c_char_p, MemchrArgsHack2,
(Pdb)
Thread 2 received signal SIGINT, Interrupt.
[Switching to Thread 1]
0x0916426c in __fd_select () from /usr/lib/libc.a(shr_64.o)
(gdb) b ffi_call
Breakpoint 1 at 0x1217918
(gdb) c
...
(Pdb) n

Thread 2 hit Breakpoint 1, 0x090001217918 in ffi_call () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
(gdb) where
#0  0x090001217918 in ffi_call () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
#1  0x090001217780 in ffi_prep_cif_machdep () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
#2  0x090001216fb8 in ffi_prep_cif_var () from 
/opt/freeware/lib/libffi.a(libffi.so.6)
..

(gdb) b memchr
Breakpoint 2 at 0x91b0d60
(gdb) c
Continuing.

Thread 2 hit Breakpoint 2, 0x091b0d60 in memchr () from 
/usr/lib/libc.a(shr_64.o)
(gdb) i register
r0 0x91b0d60648518346343124320
r1 0xfffc8d01152921504606832848
r2 0x9001000a008e8b8648535941212334264
r3 0xa3669e0720575940382845408
r4 0x64 100
r5 0x0  0
r6 0x9001000a04ee730648535941216921392
r7 0x0  0
...
(gdb) x/s $r3
0xa3669e0:  "abcdef"

So:
 - the string is passed as r3.
 - r4 contains "d" = 0x64=100
 - but the size 7 is missing

Anyway, it seems that ffi does not pass the pointer, but values. However, the 
length 7 is missing. Not in r5, and nowhere in the other registers.

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

Fedora32/x86_64

[root@destiny10 tmp]# gdb /usr/bin/python3.8 core
...
Core was generated by `python3 ./Pb.py'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7f898a02a1d8 in __memchr_sse2 () from /lib64/libc.so.6
Missing separate debuginfos, use: dnf debuginfo-install 
python3-3.8.3-2.fc32.x86_64
(gdb) where
#0  0x7f898a02a1d8 in __memchr_sse2 () from /lib64/libc.so.6
#1  0x7f898982caf0 in ffi_call_unix64 () from /lib64/libffi.so.6
#2  0x7f898982c2ab in ffi_call () from /lib64/libffi.so.6
#3  0x7f8989851ef1 in _ctypes_callproc.cold () from 
/usr/lib64/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
#4  0x7f898985ba2f in PyCFuncPtr_call () from 
/usr/lib64/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
#5  0x7f8989d6c7a1 in _PyObject_MakeTpCall () from 
/lib64/libpython3.8.so.1.0
#6  0x7f8989d69111 in _PyEval_EvalFrameDefault () from 
/lib64/libpython3.8.so.1.0
#7  0x7f8989d62ec4 in _PyEval_EvalCodeWithName () from 
/lib64/libpython3.8.so.1.0
#8  0x7f8989dde109 in PyEval_EvalCodeEx () from /lib64/libpython3.8.so.1.0
#9  0x7f8989dde0cb in PyEval_EvalCode () from /lib64/libpython3.8.so.1.0
#10 0x7f8989dff028 in run_eval_code_obj () from /lib64/libpython3.8.so.1.0
#11 0x7f8989dfe763 in run_mod () from /lib64/libpython3.8.so.1.0
#12 0x7f8989cea81b in PyRun_FileExFlags () from /lib64/libpython3.8.so.1.0
#13 0x7f8989cea19d in PyRun_SimpleFileExFlags () from 
/lib64/libpython3.8.so.1.0
#14 0x7f8989ce153c in Py_RunMain.cold () from /lib64/libpython3.8.so.1.0
#15 0x7f8989dd1bf9 in Py_BytesMain () from /lib64/libpython3.8.so.1.0
#16 0x7f8989fb7042 in __libc_start_main () from /lib64/libc.so.6
#17 0x557a1f3c407e in _start ()

--

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue38628] Issue with ctypes in AIX

2020-07-24 Thread Tony Reix


Tony Reix  added the comment:

On Fedora32/PPC64LE (5.7.9-200.fc32.ppc64le), with little change:
  libc = CDLL('/usr/lib64/libc.so.6')
I get the correct answer:
b'def'
b'def'
b'def'
# python3 --version
Python 3.8.3
libffi : 3.1-24


On Fedora32/x86_64 (5.7.9-200.fc32.x86_64), with a little change:
  libc = CDLL('/usr/lib64/libc-2.31.so')
that crashes:
b'def'
Segmentation fault (core dumped)
# python3 --version
Python 3.8.3
libffi : 3.1-24


AIX : libffi-3.2.1
On AIX 7.2, with Python 3.8.5 compiled with XLC v13, in 64bit:
b'def'
b'def'
None
On AIX 7.2, with Python 3.8.5 compiled with GCC 8.4, in 64bit:
b'def'
b'def'
None

On AIX 7.2, with Python 3.8.5 compiled with XLC v13, in 32bit:
  ( libc = CDLL('libc.a(shr.o)') )
b'def'
b'def'
b'def'
On AIX 7.2, with Python 3.8.5 compiled with GCC 8.4, in 32bit:
b'def'
b'def'
b'def'

Preliminary conclusions:
 - this is a 64bit issue on AIX and it is independent of the compiler
 - it is worse on Fedora/x86_64
 - it works perfectly on Fedora/PPC64LE
what a mess.

--
nosy: +T.Rex

___
Python tracker 
<https://bugs.python.org/issue38628>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-20 Thread Tony


Tony  added the comment:

If the error is not resolved yet, I would prefer that we revert this change.

The new PR is kinda big; I don't know when it will be merged.

--

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-19 Thread Tony


Tony  added the comment:

Ok, so I checked, and the PR I currently have a CR on fixes this issue: 
https://github.com/python/cpython/pull/21446

Do you want me to make a different PR tomorrow that fixes this specific issue 
to get it faster to master or is it ok to wait a bit?

--

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-19 Thread Tony


Tony  added the comment:

I see, I'll start working on a fix soon

--

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41305] Add StreamReader.readinto()

2020-07-16 Thread Tony


Tony  added the comment:

By the way, if we eventually combine StreamReader and StreamWriter, won't 
this function (readinto) be useful then?

Maybe we should consider adding it right now.

Tell me your thoughts on this.

--

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41305] Add StreamReader.readinto()

2020-07-16 Thread Tony


Tony  added the comment:

> Which brings me to the most important point: what we need it not coding it 
> (yet), but rather drafting the actual proposal and posting it to 
> https://discuss.python.org/c/async-sig/20.  Once a formal proposal is there 
> we can proceed with the implementation.

Posted: https://discuss.python.org/t/discussion-on-a-new-api-for-asyncio/4725

By the way, I know it's unrelated, but I want a CR on 
https://github.com/python/cpython/pull/21446; I think it's also very important.

--

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41305] Add StreamReader.readinto()

2020-07-16 Thread Tony


Tony  added the comment:

Ok actually that sounds really important, I am interested.

But to begin doing something like this, I need to know the general design.

Is it simply combining StreamReader and StreamWriter into a single object and 
changing the write() function to always await the write (thus deprecating 
drain), and that's it?

If there is nothing more to it, I can probably do this pretty quickly; it 
seems easy on the surface.

If there is more to it then I would like a more thorough explanation. Maybe we 
should chat about this.

--

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41305] Add StreamReader.readinto()

2020-07-15 Thread Tony


Tony  added the comment:

Ah it's trio...

--

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41305] Add StreamReader.readinto()

2020-07-15 Thread Tony


Tony  added the comment:

ok.

I'm interested in learning about the new API.
Is it documented somewhere?

--

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41279] Convert StreamReaderProtocol to a BufferedProtocol

2020-07-15 Thread Tony


Change by Tony :


--
pull_requests: +20633
pull_request: https://github.com/python/cpython/pull/21491

___
Python tracker 
<https://bugs.python.org/issue41279>
___



[issue41305] Add StreamReader.readinto()

2020-07-15 Thread Tony


Change by Tony :


--
keywords: +patch
pull_requests: +20634
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21491

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41305] Add StreamReader.readinto()

2020-07-15 Thread Tony


New submission from Tony :

Add a StreamReader.readinto(buf) function.

Exactly like StreamReader.read() with *n* being equal to the length of buf.

Instead of allocating a new buffer, copy the read buffer into buf.
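A rough pure-Python equivalent of the proposed method, written here as a free function for illustration (the real patch would add it as a method and copy straight out of the internal read buffer):

```python
import asyncio

async def readinto(reader: asyncio.StreamReader, buf: bytearray) -> int:
    """Sketch of StreamReader.readinto(): fill *buf* and return the
    number of bytes copied (0 on EOF).  Illustrative only."""
    data = await reader.read(len(buf))
    buf[:len(data)] = data      # copy into the caller-supplied buffer
    return len(data)

async def demo() -> tuple:
    # Drive the helper with a hand-fed reader, no real transport needed.
    reader = asyncio.StreamReader()
    reader.feed_data(b"hello")
    reader.feed_eof()
    buf = bytearray(5)
    n = await readinto(reader, buf)
    return n, bytes(buf)
```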

--
messages: 373702
nosy: tontinton
priority: normal
severity: normal
status: open
title: Add StreamReader.readinto()

___
Python tracker 
<https://bugs.python.org/issue41305>
___



[issue41246] IOCP Proactor same socket overlapped callbacks

2020-07-11 Thread Tony


Tony  added the comment:

I feel like the metadata is not really a concern here. I like when there is no 
code duplication :)

--

___
Python tracker 
<https://bugs.python.org/issue41246>
___



[issue41279] Convert StreamReaderProtocol to a BufferedProtocol

2020-07-11 Thread Tony


Change by Tony :


--
keywords: +patch
pull_requests: +20594
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21446

___
Python tracker 
<https://bugs.python.org/issue41279>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-11 Thread Tony


Change by Tony :


--
pull_requests: +20593
pull_request: https://github.com/python/cpython/pull/21446

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41279] Convert StreamReaderProtocol to a BufferedProtocol

2020-07-11 Thread Tony


New submission from Tony :

This will greatly increase performance: from my internal tests it was about a 
150% improvement on Linux.

Using read_into instead of read means we no longer allocate a new buffer 
each time data is received.
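The core of the change can be sketched as a minimal BufferedProtocol; the class and attribute names below are illustrative, not the actual patch:

```python
import asyncio

class ReceiveSketch(asyncio.BufferedProtocol):
    """Minimal BufferedProtocol showing why it is faster: the transport
    reads straight into the memory returned by get_buffer(), so no new
    bytes object is allocated per received chunk."""

    def __init__(self) -> None:
        self._buffer = bytearray(65536)   # one reusable receive buffer
        self.received = bytearray()

    def get_buffer(self, sizehint: int) -> bytearray:
        # Hand the transport our preallocated buffer.
        return self._buffer

    def buffer_updated(self, nbytes: int) -> None:
        # The first *nbytes* of the buffer now hold fresh data.
        self.received += self._buffer[:nbytes]
```

Normally a transport calls get_buffer()/buffer_updated(); the pair can also be driven by hand to see the effect.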

--
messages: 373526
nosy: tontinton
priority: normal
severity: normal
status: open
title: Convert StreamReaderProtocol to a BufferedProtocol

___
Python tracker 
<https://bugs.python.org/issue41279>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-11 Thread Tony


Change by Tony :


--
pull_requests: +20589
pull_request: https://github.com/python/cpython/pull/21442

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41270] NamedTemporaryFile is not its own iterator.

2020-07-11 Thread Tony


Change by Tony :


--
pull_requests: +20590
pull_request: https://github.com/python/cpython/pull/21442

___
Python tracker 
<https://bugs.python.org/issue41270>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-10 Thread Tony


Change by Tony :


--
keywords: +patch
pull_requests: +20588
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21439

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41270] NamedTemporaryFile is not its own iterator.

2020-07-10 Thread Tony


Change by Tony :


--
nosy: +tontinton
nosy_count: 3.0 -> 4.0
pull_requests: +20585
pull_request: https://github.com/python/cpython/pull/21439

___
Python tracker 
<https://bugs.python.org/issue41270>
___



[issue41273] asyncio: proactor read transport: use recv_into instead of recv

2020-07-10 Thread Tony


New submission from Tony :

Using recv_into instead of recv in the transport _loop_reading will speed up 
the process.

From what I checked, it's about a 120% performance increase.

This is mostly because recv allocates a new buffer on every call, which is 
really wasteful; recv_into avoids that.
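The buffer-reuse pattern behind the change, sketched with a plain socket (the helper name is illustrative; the real patch touches the proactor transport's read loop):

```python
import socket

def read_all(conn: socket.socket, bufsize: int = 65536) -> bytes:
    """Sketch of the recv_into pattern: one preallocated buffer is
    refilled on every iteration, instead of recv() allocating a fresh
    bytes object per call.  Illustrative helper, not asyncio code."""
    buf = bytearray(bufsize)        # allocated once, reused each loop
    view = memoryview(buf)
    chunks = []
    while True:
        nbytes = conn.recv_into(view)
        if nbytes == 0:             # peer closed the connection
            break
        chunks.append(bytes(view[:nbytes]))
    return b"".join(chunks)
```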

--
messages: 373483
nosy: tontinton
priority: normal
severity: normal
status: open
title: asyncio: proactor read transport: use recv_into instead of recv

___
Python tracker 
<https://bugs.python.org/issue41273>
___



[issue41247] asyncio.set_running_loop() cache running loop holder

2020-07-08 Thread Tony


Change by Tony :


--
pull_requests: +20555
pull_request: https://github.com/python/cpython/pull/21406

___
Python tracker 
<https://bugs.python.org/issue41247>
___



[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()

2020-07-08 Thread Tony


Tony  added the comment:

bump

--

___
Python tracker 
<https://bugs.python.org/issue41093>
___



[issue41247] asyncio.set_running_loop() cache running loop holder

2020-07-08 Thread Tony


Change by Tony :


--
title: asyncio module better caching for set and get_running_loop -> 
asyncio.set_running_loop() cache running loop holder

___
Python tracker 
<https://bugs.python.org/issue41247>
___



[issue41247] asyncio.set_running_loop() cache running loop holder

2020-07-08 Thread Tony


Change by Tony :


--
keywords: +patch
pull_requests: +20550
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21401

___
Python tracker 
<https://bugs.python.org/issue41247>
___



[issue41247] asyncio module better caching for set and get_running_loop

2020-07-08 Thread Tony


New submission from Tony :

There is a cache variable for the running loop holder, but once 
set_running_loop is called the variable is set to NULL, so the next call to 
get_running_loop has to query a dictionary to fetch the running loop 
holder.

I thought why not always cache the latest set_running_loop?

The only issue I thought of here is in the implementation details: I have too 
little experience in Python to know whether there could be a context switch 
to get_running_loop while set_running_loop is running.

If a context switch is possible there then this issue would be way harder to 
solve, but it is still solvable.
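In pure Python the idea looks roughly like this (the real code is C inside the asyncio module; all names below are illustrative):

```python
_holders = {}            # slow path: key -> running loop holder
_cached_key = None
_cached_holder = None

def set_running_loop_holder(key, holder):
    """Instead of invalidating the cache, cache what was just set."""
    global _cached_key, _cached_holder
    _holders[key] = holder
    _cached_key = key            # next lookup skips the dict entirely
    _cached_holder = holder

def get_running_loop_holder(key):
    if key == _cached_key:
        return _cached_holder    # fast path: no dict access
    return _holders[key]         # slow path: dictionary lookup
```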

--
messages: 37
nosy: tontinton
priority: normal
severity: normal
status: open
title: asyncio module better caching for set and get_running_loop

___
Python tracker 
<https://bugs.python.org/issue41247>
___



[issue41246] IOCP Proactor same socket overlapped callbacks

2020-07-08 Thread Tony


Change by Tony :


--
keywords: +patch
pull_requests: +20547
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21399

___
Python tracker 
<https://bugs.python.org/issue41246>
___



[issue41246] IOCP Proactor same socket overlapped callbacks

2020-07-08 Thread Tony


New submission from Tony :

In IocpProactor I saw that recv, recv_into, recvfrom, sendto, send and 
sendfile all register the same kind of callback for when the overlapped 
operation is done.

I just wanted cleaner code, so I made a static method on the class that I 
hand to each of these functions as the overlapped callback.
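The shape of the cleanup, as a toy model — none of these names are the real asyncio internals:

```python
class ProactorSketch:
    """Toy model of the IocpProactor cleanup: every socket operation
    registers the same static completion callback instead of defining
    an identical nested function per method.  Illustrative only."""

    @staticmethod
    def _finish_socket_func(trans, key, ov):
        # Shared callback for recv, recv_into, recvfrom, sendto,
        # send and sendfile: fetch the overlapped result.
        return ov.getresult()

    def recv(self, ov):
        return self._register(ov, self._finish_socket_func)

    def send(self, ov):
        return self._register(ov, self._finish_socket_func)

    def _register(self, ov, callback):
        # The real proactor queues the overlapped operation; here we
        # pretend it completed immediately and invoke the callback.
        return callback(None, None, ov)


class FakeOverlapped:
    """Stand-in for an overlapped operation, for the sketch above."""
    def __init__(self, result):
        self._result = result

    def getresult(self):
        return self._result
```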

--
messages: 373324
nosy: tontinton
priority: normal
severity: normal
status: open
title: IOCP Proactor same socket overlapped callbacks

___
Python tracker 
<https://bugs.python.org/issue41246>
___



Re: Solved: Re: Missing python curses functions?

2020-06-29 Thread Tony Flury via Python-list
Maybe you should raise a bug (bugs.python.org) and flag that this 
function is missing.


It may then be introduced by whoever is maintaining the 
existing code.


On 20/05/2020 08:31, Alan Gauld via Python-list wrote:

On 19/05/2020 20:53, Alan Gauld via Python-list wrote:


One of the functions discussed that does not appear to have
a Python equivalent is attr_get() which gets the current
attributes.

OK, Using inch() I've written the following function:


def attr_get(win):
 """ return current window attributes.
 If a character exists at the bottom right position it will be lost!
 """
 y,x = win.getmaxyx() # how many lines in the window?
 win.insch(y-1,0,' ') # insert a space at bottom left
 ch = win.inch(y-1,0) # now read the char (including attributes)
 win.delch(y-1,0) # remove the char from window
 return ch

And it can be used like:

import curses
scr = curses.initscr()
# uncomment next line to test
# scr.attrset(curses.A_UNDERLINE)

atts = attr_get(scr)
if atts & curses.A_UNDERLINE:
 scr.addstr("Underline is on")
else:
 scr.addstr("Underline is off")

scr.refresh()
scr.getch()
curses.endwin()

Just in case its useful to anyone else...

--
https://mail.python.org/mailman/listinfo/python-list


[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()

2020-06-26 Thread Tony


Tony  added the comment:

poke

--

___
Python tracker 
<https://bugs.python.org/issue41093>
___



Re: Pycharm Won't Do Long Underscore

2020-06-24 Thread Tony Kaloki
Thanks for all your explanations, everyone. Hopefully, I'll know better next 
time I come across a similar case. Now, to try and understand the rest of 
Python...

Get Outlook for Android<https://aka.ms/ghei36>


From: Python-list  on behalf 
of MRAB 
Sent: Wednesday, June 24, 2020 7:28:52 PM
To: python-list@python.org 
Subject: Re: Pycharm Won't Do Long Underscore

On 2020-06-24 18:59, Chris Angelico wrote:
> On Thu, Jun 25, 2020 at 3:51 AM Dennis Lee Bieber  
> wrote:
>>
>> On Tue, 23 Jun 2020 20:49:36 +, Tony Kaloki 
>> declaimed the following:
>>
>> >Alexander,
>> >   Thank you so much! It worked! Thank you. One question: 
>> > in your reply, are you saying that Python would have treated the two 
>> > separate underscores the same way as a long  underscore i.e. it's a 
>> > stylistic choice rather than a functional necessity?
>>
>> There is no "long underscore" in the character set. If there were,
>> Python would not know what to do with it as it was created back when ASCII
>> and ISO-Latin-1 were the common character sets. (Interesting: Windows
>> Character Map utility calls the underscore character "low line").
>
> That's what Unicode calls it - charmap is probably using that name.
>
>> Many word processors are configured to change sequences of hyphens:
>> - -- --- into - – — (hyphen, en-dash, em-dash)... But in this case, those
>> are each single characters in the character map (using Windows-Western,
>> similar to ISO-Latin-1): hyphen is x2D, en-dash is x96, em-dash is x97
>> (note that en-/em-dash are >127, hence would not be in pure ASCII)
>
> Hyphen is U+002D, en dash is U+2013, em dash is 2014. :)
>
Not quite. :-)

Hyphen is U+2010.

U+002D is hyphen-minus; it does double duty, for historical reasons.
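The distinction is easy to check from Python itself with the unicodedata module:

```python
import unicodedata

# The four dash-like characters discussed above, by codepoint and name.
for ch in "\u002d\u2010\u2013\u2014":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+002D  HYPHEN-MINUS
# U+2010  HYPHEN
# U+2013  EN DASH
# U+2014  EM DASH
```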
--
https://mail.python.org/mailman/listinfo/python-list


[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()

2020-06-24 Thread Tony


Tony  added the comment:

This still leaves the open issue of UDPServer not shutting down immediately 
though

--

___
Python tracker 
<https://bugs.python.org/issue41093>
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()

2020-06-24 Thread Tony


Tony  added the comment:

Just want to note that this fixes an issue in all TCPServers and not only 
http.server

--
title: BaseServer's server_forever() shutdown immediately when calling 
shutdown() -> TCPServer's server_forever() shutdown immediately when calling 
shutdown()

___
Python tracker 
<https://bugs.python.org/issue41093>
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Pycharm Won't Do Long Underscore

2020-06-23 Thread Tony Kaloki
Alexander,
   Thank you so much! It worked! Thank you. One question: in 
your reply, are you saying that Python would have treated the two separate 
underscores the same way as a long  underscore i.e. it's a stylistic choice 
rather than a functional necessity?
   In any case, thanks again for your quick and easy to follow - even for me - 
reply.
Tony

Get Outlook for Android<https://aka.ms/ghei36>


From: Alexander Neilson 
Sent: Tuesday, June 23, 2020 9:28:37 PM
To: Tony Kaloki 
Cc: python-list@python.org 
Subject: Re: Pycharm Won't Do Long Underscore

Hi Tony

The “long underscore” (often called Dunder as “double underscore”) is actually 
two underscores as you are seeing shown in PyCharm.

However the display of it as one long underscore is a ligature (special font 
display to communicate clearer) and to enable these in PyCharm go to the 
settings dialog (depending on windows or Mac this could be in different 
locations) and select Editor > Font

In that screen select “enable font ligatures” and if your font supports it 
(like the default JetBrains Mono does) that will start to display the double 
underscores as a single long underscore.

Regards
Alexander

Alexander Neilson
Neilson Productions Limited
021 329 681
alexan...@neilson.net.nz

> On 24/06/2020, at 07:57, Tony Kaloki  wrote:
>
> 
>
> Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10
>
> From: Tony Kaloki<mailto:tkal...@live.co.uk>
> Sent: 23 June 2020 19:45
> To: python-list@python.org<mailto:python-list@python.org>
> Subject: Pycharm Won't Do Long Underscore
>
>
> Hi Guys,
>   I’ve just begun to learn basic computer programming by 
> downloading Python and Pycharm and following Youtube tutorials. But I’ve come 
> across a problem that’s stopped me in my tracks.
> When I try to do a long underscore __  for classes in Pycharm, it only 
> gives me two separate single underscores _ _. This is only in Pycharm, no 
> problems anywhere else. Could you tell me how to fix this, because I can’t 
> find any answers on the web and I’m not sure if I can go any further in my 
> learning without being able to get long underscores.
>Sorry if I’m just being really dense, but like I said I’m an absolute 
> beginner. Thanks for your time,
> Tony
> Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list


[issue41093] BaseServer's server_forever() shutdown immediately when calling shutdown()

2020-06-23 Thread Tony


Change by Tony :


--
pull_requests: +20260
pull_request: https://github.com/python/cpython/pull/21094

___
Python tracker 
<https://bugs.python.org/issue41093>
___


