Is CONTINUE_LOOP still a thing?

2020-06-09 Thread Adam Preble
I got to the point of trying to implement continue in my own interpreter 
project and was surprised to find that my for-loop just used some jumps to 
manage its control flow. Honestly, I had hoped for something else; I don't have 
logic in my code generation to track jump positions. I kind of hoped there was 
some CONTINUE opcode with some extra logic I could add at run time to just kind 
of do it.

(that is my own problem and I know there is no such thing as a free lunch, but 
it's 2AM and I want to hope!)

Well, I found CONTINUE_LOOP, which applies to for-loops, but 3.6.8 sure 
doesn't emit it for pretty basic stuff:

>>> def for_continue():
...   a = 0
...   for i in range(0, 3, 1):
...     if i == 2:
...       continue
...     a += i
...   else:
...     a += 10
...   return a
...
>>> for_continue()
11
>>> dis(for_continue)
  2           0 LOAD_CONST               1 (0)
              2 STORE_FAST               0 (a)

  3           4 SETUP_LOOP              46 (to 52)
              6 LOAD_GLOBAL              0 (range)
              8 LOAD_CONST               1 (0)
             10 LOAD_CONST               2 (3)
             12 LOAD_CONST               3 (1)
             14 CALL_FUNCTION            3
             16 GET_ITER
        >>   18 FOR_ITER                22 (to 42)
             20 STORE_FAST               1 (i)

  4          22 LOAD_FAST                1 (i)
             24 LOAD_CONST               4 (2)
             26 COMPARE_OP               2 (==)
             28 POP_JUMP_IF_FALSE       32

  5          30 JUMP_ABSOLUTE           18

  6     >>   32 LOAD_FAST                0 (a)
             34 LOAD_FAST                1 (i)
             36 INPLACE_ADD
             38 STORE_FAST               0 (a)
             40 JUMP_ABSOLUTE           18
        >>   42 POP_BLOCK

  8          44 LOAD_FAST                0 (a)
             46 LOAD_CONST               5 (10)
             48 INPLACE_ADD
             50 STORE_FAST               0 (a)

  9     >>   52 LOAD_FAST                0 (a)
             54 RETURN_VALUE

The place where a CONTINUE_LOOP could have made sense would be at address 30 
for that JUMP_ABSOLUTE. That'll go back to a FOR_ITER, as CONTINUE_LOOP implies 
it *must* do. I'm just guessing that at some point, somebody concluded there 
wasn't anything special about having that opcode over absolute jumps and it got 
abandoned. I wanted to check if my notions were correct or if there's some 
gotcha where that opcode makes sense over a plain jump.
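For what it's worth, the usual answer to not tracking jump positions is backpatching: emit the continue as a jump with a dummy target, remember where it is, and patch it once the loop header's address is known. A minimal sketch (the Emitter class and opcode names are invented for illustration, not CPython's compiler):

```python
# Hypothetical sketch (class and opcode names invented): emit `continue`
# as a jump with a placeholder target, record its index, and backpatch
# once the loop header's address is known.
class Emitter:
    def __init__(self):
        self.code = []      # emitted [opcode, arg] pairs
        self.pending = []   # indices of continue-jumps awaiting a target

    def emit(self, op, arg=None):
        self.code.append([op, arg])
        return len(self.code) - 1

    def emit_continue(self):
        # target unknown while still inside the loop body
        self.pending.append(self.emit("JUMP_ABSOLUTE", None))

    def patch_continues(self, loop_start):
        for idx in self.pending:
            self.code[idx][1] = loop_start
        self.pending.clear()

e = Emitter()
loop_start = e.emit("FOR_ITER", 99)   # address of the loop header
e.emit_continue()                     # placeholder jump
e.patch_continues(loop_start)         # fix it up once the body is done
assert e.code[1] == ["JUMP_ABSOLUTE", 0]
```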
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there some reason that recent Windows 3.6 releases don't include executable or msi installers?

2020-05-29 Thread Adam Preble
On Friday, May 29, 2020 at 7:30:32 AM UTC-5, Eryk Sun wrote:
> On 5/28/20, Adam Preble  wrote:
> Sometimes a user will open a script via "open with" and browse to
> python.exe or py.exe. This associates .py files with a new progid that
> doesn't pass the %* command-line arguments.
> 
> The installed Python.File progid should be listed in the open-with
> list, and, if the launcher is installed, the icon should have the
> Python logo with a rocket on it. Select that, lock it in by selecting
> to always use it, and open the script. This will only be wrong if a
> user or misbehaving program modified the Python.File progid and broke
> its "open" action.

Thank you for responding! The computers showing the problem are remote to me 
and I won't be able to access one for a few days, but I will make it a point to 
check their file associations before trying anything else.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there some reason that recent Windows 3.6 releases don't include executable or msi installers?

2020-05-28 Thread Adam Preble
On Thursday, May 28, 2020 at 7:57:04 PM UTC-5, Terry Reedy wrote:
> The OP is so far choosing to not use an installer with those fixes.  By 
> not doing so, he is missing out on the maybe 2000 non-security fixes and 
> some enhancements that likely would benefit him more than maybe 50 
> mostly obscure fixes added between 3.6.8 and 3.6.10*.  If a rare user 
> such as Adam also chooses to not compile the latter, that is his choice.

I was going to just stay mute about why I was even looking at 3.6.10, but I
felt I should weigh in after some of the other responses. I think somebody
would find the issues interesting.

We had found what looked like a bug in the Python Launcher where it would
eat command line arguments meant for the script. I would find some stuff 
missing from sys.argv in a script that just imports sys and prints out sys.argv 
if I ran it directly in cmd.exe as "script.py". If I ran it as "python 
script.py" then everything was good as usual. So I figured that, while sorting 
out what was wrong, I should try the latest 3.6 interpreter since it would be a 
safe bet.

Our organization finally lifted Sisyphus' rock over the 2.7 hump earlier in the 
year by moving to 3.6. So imagine my surprise when I found the latest 3.6 
releases were just source tarballs. This left me with a dilemma and I'm still 
working through it. I haven't filed an issue about this because I haven't 
completed my own due diligence on the problem by trying it on a "latest."

For the sake of this particular problem, I think I can just use 3.8.3 for
exploration, but I'm worried about my wider organization. I can't count on 3.8 
because of some module dependencies in our organization's software. 3.7 has a 
similar issue. So I figured I'd just build the thing and see what I could do.

I did manage to build it, but there were a few surprising quirks, some of which
I caused. For example, I didn't care about most of the externals before,
but I made sure to include them since I was creating a release for others. A few
thousand people would be using this, and I'm the one who would be
accountable if it went bust. So I made sure all the major externals were
incorporated, and a lot of those were messing up. Generally, the externals
would download, but some would not get moved/renamed to their final names,
and then the build would fail when trying to find them. So I wound up with
an installation that seemed to run my own code just fine in trials, but I
would be terrified to put our organization's software stack onto it.

I'm now concerned about how long we have with 3.6, because people clearly
want us to move on even beyond that. The official support window for it ends at
the end of next year, but it looks like the real support window on Windows has
already ended. So our organization may have miscalculated this. What does it
mean if we manage to make it to 3.8 in a few months? We can't do it right now
due to a few missing modules, but now we have to question whether we'll only
get a year out of 3.8 before we're doing this all over again.
-- 
https://mail.python.org/mailman/listinfo/python-list


Is there some reason that recent Windows 3.6 releases don't include executable or msi installers?

2020-05-23 Thread Adam Preble
I wanted to update from 3.6.8 on Windows without necessarily moving on to 3.7+ 
(yet), so I thought I'd try 3.6.9 or 3.6.10. 

All I see for both are source archives:

https://www.python.org/downloads/release/python-369/
https://www.python.org/downloads/release/python-3610/

So, uh, I theoretically did build a 3.6.10 .exe installer, but I don't really 
trust that I did everything right. Is there an officially sourced installer?
-- 
https://mail.python.org/mailman/listinfo/python-list


Import machinery for extracting non-modules from modules (not using import-from)

2020-05-04 Thread Adam Preble
The (rightful) obsession with modules in PEP-451 and the import machinery hit 
me with a gotcha when I was trying to implement importing .NET stuff that 
mimicked IronPython and Python.NET in my interpreter project.

The meat of the question:
Is it important that the spec loader actually return a module? Can it just 
return... stuff? I know a from X import Y is the normal means for this, but if 
the loader knows better, can it just do it?

A normal process is something like:
import X

A bunch of finders line up to see if they know anything about X. If they don't, 
they return None. Assume it's found. That finder will return a module spec for 
how to load it.

A little later, that spec is instructed to load the module.

If X wasn't a module, you can expect to see something like:
ModuleNotFoundError: No module named 'X'; 'X' is not a package

...you were supposed to do 'from something import X'. I'm actually trying to 
figure out whether, with normal Python modules, I can even get into a situation 
where I just blindly import X without a package in front of it.

With IronPython--and I'm pretty sure Python.NET--there are situations where you 
CAN do this. The paths for .NET 'packages' are the .NET namespaces (a slightly 
different usage of the term). Say I want the machine name. It would be typical 
to get that with System.Environment.MachineName. MachineName is a static field 
in Environment. System.Environment is a namespace in mscorlib (in classic .NET 
framework).

The .NET namespace can be null. In that case it's just in the root namespace or 
something. Let's say I have a .dll I've made known to IronPython or Python.NET 
using its clr.AddReference, and I want to toy with some class defined without a 
namespace called "Crazy." This is totally fine:
import Crazy

I really can't follow what either one is doing here, and I don't know how well 
they even latch onto PEP 451. So there's the main question: is it important 
that the spec loader actually return a module? Can it just return... stuff? I 
know a from X import Y is the normal means for this, but if the loader knows 
better, can it just do it?
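The closest escape hatch I know of in stock CPython is that a loader's exec_module may replace its own entry in sys.modules, and the import statement hands back whatever object ends up there. A sketch (the module name "crazy_demo" and all classes here are invented):

```python
import importlib.abc
import importlib.util
import sys

class Crazy:
    """Stand-in for a namespace-less .NET class (invented for this sketch)."""
    machine_name = "demo"

class CrazyLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # let the machinery create a plain module first

    def exec_module(self, module):
        # The loader is handed a module, but nothing stops it from
        # swapping its own sys.modules entry for an arbitrary object;
        # the import system re-reads sys.modules after exec_module.
        sys.modules[module.__name__] = Crazy()

class CrazyFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        if fullname == "crazy_demo":
            return importlib.util.spec_from_loader(fullname, CrazyLoader())
        return None

sys.meta_path.insert(0, CrazyFinder())
import crazy_demo  # noqa: E402

assert isinstance(crazy_demo, Crazy)  # not a module at all
```

So the spec/loader plumbing itself still traffics in modules, but what the importer binds to the name is whatever sys.modules holds at the end.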
-- 
https://mail.python.org/mailman/listinfo/python-list


How does the import machinery handle relative imports?

2020-04-23 Thread Adam Preble
I'm fussing over some details of relative imports while trying to mimic Python 
module loading in my personal project. This is getting more into corner cases, 
but I can spare time to talk about it while working on more normal stuff.

I first found this place:
https://manikos.github.io/how-pythons-import-machinery-works

And eventually just started looking at PEP 451. Neither really explains 
relative imports. I decided to try this garbage:

from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
spec = spec_from_loader("..import_star", 
SourceFileLoader("package_test.import_star", 
r"C:\coding\random_python_projects\package_test\import_star.py"))
print(spec)
mod = module_from_spec(spec)
print(mod)
spec.loader.exec_module(mod)

...exec_module ultimately fails to do the job. Note the syntax so that I can 
actually perform a relative import hahaha:

C:\Python36\python.exe -m package_test.second_level.import_upwards
ModuleSpec(name='..import_star', 
loader=<_frozen_importlib_external.SourceFileLoader object at 
0x0226E914B080>, origin='')
<module '..import_star' from 'C:\coding\random_python_projects\package_test\import_star.py'>
Traceback (most recent call last):
  File "C:\Python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
  File "C:\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
  File 
"C:\coding\random_python_projects\package_test\second_level\import_upwards.py", 
line 15, in <module>
spec.loader.exec_module(mod)
  File "<frozen importlib._bootstrap_external>", line 674, in exec_module
  File "<frozen importlib._bootstrap_external>", line 750, in get_code
  File "<frozen importlib._bootstrap>", line 398, in 
_check_name_wrapper
ImportError: loader for package_test.import_star cannot handle ..import_star


Yeah, I don't think I'm doing this right! At this point I'm just trying to 
figure out where I feed in the relative path. Is that all deduced in advance of 
finding the spec? Can I even give the finders a relative path like that?
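As far as I can tell, yes: the relative part is resolved to an absolute name before any finder or loader sees it, and the package context supplies the anchor. importlib exposes that step directly (package names here match the example above):

```python
import importlib.util

# Leading dots are resolved against the package context *before* the
# finders are consulted; specs always carry absolute names.
name = importlib.util.resolve_name("..import_star",
                                   package="package_test.second_level")
assert name == "package_test.import_star"
```

So feeding "..import_star" straight into spec_from_loader skips the resolution step, which is why the name check in the loader blows up.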
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why generate POP_TOP after an "import from?"

2020-04-18 Thread Adam Preble
On Saturday, April 18, 2020 at 1:15:35 PM UTC-5, Alexandre Brault wrote:
>  >>> def f():
> ... â  â  from sys import path, argv ...

So I figured it out and all but I wanted to ask about the special characters in 
that output. I've seen that a few times and never figured out what's going on 
and if I need to change how I'm reading these. Or say:

 â â â â â â â â â â â â  12 STORE_FASTâ â â â â â â â â â â â â â  1 (argv

I don't know if you're seeing all these letter a's. I'm guessing something 
goofy with Unicode spaces or something?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why generate POP_TOP after an "import from?"

2020-04-18 Thread Adam Preble
On Friday, April 17, 2020 at 1:37:18 PM UTC-5, Chris Angelico wrote:
> The level is used for package-relative imports, and will basically be
> the number of leading dots (eg "from ...spam import x" will have a
> level of 3). You're absolutely right with your analysis, with one
> small clarification:

Thanks for taking that on too. I haven't set up module hierarchy yet so I'm not 
in a position to handle levels, but I have started parsing them and generating 
the opcodes. Is it sufficient to just use the number of dots as an indication 
of level?
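The number-of-dots claim is easy to confirm from the bytecode: compile a relative import and check that the first constant pushed (the level) equals the dot count.

```python
import dis

# Compile a module-level relative import; the first LOAD_CONST pushed is
# the level, which is exactly the number of leading dots.
code = compile("from ...spam import x", "<demo>", "exec")
levels = [ins.argval for ins in dis.get_instructions(code)
          if ins.opname == "LOAD_CONST" and isinstance(ins.argval, int)]
assert levels[0] == 3
```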

As a side note, I suppose it's sufficient to just *peek* at the stack rather 
than pop the module and push it again. I'm guessing that's what the Python 
interpreter is doing.

> In theory, I suppose, you could replace the POP_TOP with a STORE_FAST
> into "sys", and thus get a two-way import that both grabs the module
> and also grabs something out of it. Not very often wanted, but could
> be done if you fiddle with the bytecode.

I'm trying to follow along for academic purposes. I'm guessing you mean that 
would basically optimize:

from sys import path
import sys

It would definitely be a fringe thing to do...
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why generate POP_TOP after an "import from?"

2020-04-17 Thread Adam Preble
On Friday, April 17, 2020 at 1:22:18 PM UTC-5, Adam Preble wrote:

> At this point, my conceptual stack is empty. If I POP_TOP then I have nothing 
> to pop and the world would end. Yet, it doesn't. What am I missing?

Check out this guy replying to himself 10 minutes later.

I guess IMPORT_FROM pushes the module back on to the stack afterwards so that 
multiple import-from's can be executed off of it. This is then terminated with 
a POP_TOP:

>>> def import_from_multi():
... from sys import path, bar
...
>>> dis(import_from_multi)
  2   0 LOAD_CONST   1 (0)
  2 LOAD_CONST   2 (('path', 'bar'))
  4 IMPORT_NAME  0 (sys)
  6 IMPORT_FROM  1 (path)
  8 STORE_FAST   0 (path)
 10 IMPORT_FROM  2 (bar)
 12 STORE_FAST   1 (bar)
 14 POP_TOP
 16 LOAD_CONST   0 (None)
 18 RETURN_VALUE
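The peek-then-pop behavior can be written down as a toy value-stack trace (a model of the disassembly above, not CPython code):

```python
# Toy value-stack model of the disassembly above: IMPORT_FROM *peeks* at
# the module so later IMPORT_FROMs can reuse it, and the final POP_TOP is
# what discards the module.
stack = []

stack.append(("module", "sys"))        # IMPORT_NAME pushes the module
assert stack[-1] == ("module", "sys")  # IMPORT_FROM peeks (no pop)...
stack.append(("attr", "path"))         # ...and pushes the attribute
path = stack.pop()                     # STORE_FAST path

assert stack[-1] == ("module", "sys")  # second IMPORT_FROM peeks again
stack.append(("attr", "bar"))
bar = stack.pop()                      # STORE_FAST bar

stack.pop()                            # POP_TOP finally drops the module
assert stack == []
```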
-- 
https://mail.python.org/mailman/listinfo/python-list


Why generate POP_TOP after an "import from?"

2020-04-17 Thread Adam Preble
Given this in Python 3.6.8:

from dis import dis

def import_from_test():
   from sys import path

>>> dis(import_from_test)
  2   0 LOAD_CONST   1 (0)
  2 LOAD_CONST   2 (('path',))
  4 IMPORT_NAME  0 (sys)
  6 IMPORT_FROM  1 (path)
  8 STORE_FAST   0 (path)
 10 POP_TOP
 12 LOAD_CONST   0 (None)
 14 RETURN_VALUE

I don't understand why there's a POP_TOP there that I don't get for an 
import_name grammatical statement.

IMPORT_NAME needs to eat the top two entries of the stack: the level and the 
from-list. BTW, I don't know what level is for either, since my science 
projects have always had it be zero, but that's another question.

IMPORT_NAME will then push the module onto the stack.

IMPORT_FROM will import path from the module on the stack, and push that result 
on the stack.

STORE_FAST will store path for use, finally "modifying the namespace."

At this point, my conceptual stack is empty. If I POP_TOP then I have nothing 
to pop and the world would end. Yet, it doesn't. What am I missing?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How does the super type present itself and do lookups?

2020-03-23 Thread Adam Preble
On Thursday, March 19, 2020 at 5:02:46 PM UTC-5, Greg Ewing wrote:
> On 11/03/20 7:02 am, Adam Preble wrote:
> > Is this foo attribute being looked up in an override of __getattr__, 
> > __getattribute__, or is it a reserved slot that's internally doing this? 
> > That's what I'm trying to figure out.
> 
> Looking at the source in Objects/typeobject.c, it uses the
> tp_getattro type slot, which corresponds to __getattribute__.

Thanks for taking the time to look this up for me. I saw the message soon after 
you originally posted it, but it took me this long to sit down and poke at 
everything some more.

I don't doubt what you got from the source, but I am trying to figure out how I 
could have inferred that from the code I was trying. It looks like 
child_instance.__getattribute__ == child_instance.super().__getattribute__. 
They print out with the same address and pass an equality comparison. That 
implies that they are the same, and that the super type is NOT doing something 
special with that slot.

Given that super().__getattribute__ internally ultimately should be something 
else, I am guessing there is something else at play causing an indirection.

I have two reasons to be interested in this:
1. There may be obscure behavior I should worry about in general if I'm trying 
to default to mimicking Python and the data model for my own stuff.
2. I need to improve my kung fu when I'm inspecting these objects so I don't 
get hung up on stuff like this in the future.

The bright side is that having a custom get-attribute implementation is pretty 
much correct, although mine would have c.__getattribute__ != 
c.super().__getattribute__.
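The indirection can be demonstrated directly: the attribute you retrieve is the same bound wrapper in both cases, but the type slot that performed the retrieval differs. A small sketch using the Parent/Child naming from earlier in the thread:

```python
class Parent:
    pass

class Child(Parent):
    pass

c = Child()
s = super(Child, c)
# The attribute you *retrieve* is object.__getattribute__ bound to c in
# both cases (neither class overrides it), so they compare equal:
assert c.__getattribute__ == s.__getattribute__
# The machinery that performed the retrieval differs: type(s) is super,
# and super's own tp_getattro slot is where the redirection lives.
assert type(s).__getattribute__ is not type(c).__getattribute__
```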
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How does the super type present itself and do lookups?

2020-03-10 Thread Adam Preble
On Tuesday, March 10, 2020 at 9:28:11 AM UTC-5, Peter Otten wrote:
> self.foo looks up the attribute in the instance, falls back to the class and 
> then works its way up to the parent class, whereas
> 
> super().foo bypasses both instance and class, and starts its lookup in the 
> parent class.

Is this foo attribute being looked up in an override of __getattr__, 
__getattribute__, or is it a reserved slot that's internally doing this? That's 
what I'm trying to figure out.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How does the super type present itself and do lookups?

2020-03-09 Thread Adam Preble
On Monday, March 9, 2020 at 9:31:45 PM UTC-5, Souvik Dutta wrote:
> This should be what you are looking for.
> https://python-reference.readthedocs.io/en/latest/docs/functions/super.html

I'm not trying to figure out how the super() function works, but rather the 
anatomy of the object it returns.

What I think is happening in my investigation is that some of the missing 
attributes in __dict__ are getting filled in from reserved slots, but it's just 
a theory. I'm trying to mimic the object in my own interpreter project.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How does the super type present itself and do lookups?

2020-03-09 Thread Adam Preble
On Wednesday, March 4, 2020 at 11:13:20 AM UTC-6, Adam Preble wrote:
> Stuff

I'm speculating that the stuff I don't see when poking are reserved slots. I 
figured out how much of a thing that is when I was digging around for how 
classes know how to construct themselves. I managed to figure out __call__ is 
like that too. So I guess it's something that doesn't readily reveal itself 
when asked but is there if you try to use it.

(or something)
-- 
https://mail.python.org/mailman/listinfo/python-list


How does the super type present itself and do lookups?

2020-03-04 Thread Adam Preble
Months ago, I asked a bunch of stuff about super() and managed to fake it well 
enough to move on to other things for awhile. The day of reckoning came this 
week and I was forced to implement it better for my personal Python project. I 
have a hack in place that makes it work well enough, but I found myself 
frustrated with how shifty the super type is. It's both the self and the parent 
class, but not quite either.

If you don't know, you can trap what super() returns some time and poke it with 
a stick. If you print it you'll be able to tell it's definitely unique:
<super: <class 'Child'>, <Child object>>

If you try to invoke methods on it, it'll invoke the superclass' methods. 
That's what is supposed to happen and basically what already happens when you 
do super().invoke_this_thing() anyways.

Okay, so how is it doing the lookup for that? The child instance and the super 
types' __dict__ are the same. The contents pass an equality comparison and are 
the same if you print them.

They have the same __getattribute__ method wrapper. However, if you dir() them 
you definitely get different stuff. For one, the super type has its special 
variables __self__, __self_class__, and __thisclass__. It's missing __dict__ 
from the dir output. But wait, I just looked at that!

So I'm thinking that __getattr__ is involved, but it's not listed in anything. 
If I use getattr on the super, I'll get the parent methods. If I use 
__getattribute__, I get the child's methods. I get errors every way I've 
conceived of trying to pull out a __getattr__ dunder. No love.

I guess the fundamental question is: what different stuff happens when 
LOAD_ATTR is performed on a super object versus a regular object?
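My best current understanding, as a toy model rather than CPython's actual code: LOAD_ATTR on a super object runs super's own tp_getattro slot, which starts the MRO walk just past __thisclass__ and binds whatever it finds to __self__. A sketch (MiniSuper is invented; the real lookup also handles __self_class__ and a few special names):

```python
class MiniSuper:
    """Toy model of super's attribute lookup (not CPython's code)."""

    def __init__(self, thisclass, obj):
        self.__thisclass__ = thisclass
        self.__self__ = obj

    def lookup(self, name):
        mro = type(self.__self__).__mro__
        # Resume the walk just *after* __thisclass__ in the instance's MRO.
        rest = mro[mro.index(self.__thisclass__) + 1:]
        for klass in rest:
            if name in vars(klass):
                attr = vars(klass)[name]
                # Bind to the original instance, not to the super object.
                return attr.__get__(self.__self__, type(self.__self__))
        raise AttributeError(name)

class Parent:
    def stuff(self):
        return "Parent stuff!"

class Child(Parent):
    def stuff(self):
        return "Child stuff!"

c = Child()
assert MiniSuper(Child, c).lookup("stuff")() == "Parent stuff!"
assert c.stuff() == "Child stuff!"
```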

If you are curious about what I'm doing right now: I overrode __getattribute__, 
since that's primarily what I use for attribute lookups right now. It defers to 
the superclass' __getattribute__. If a method pops out, it replaces the self 
with the super's __self__ before kicking it out. I feel kind of dirty doing it:

https://github.com/rockobonaparte/cloaca/blob/312758b2abb80320fb3bf344ba540a034875bc4b/LanguageImplementation/DataTypes/PySuperType.cs#L36

If you want to see how I was experimenting with super, here's the code and 
output:

class Parent:
def __init__(self):
self.a = 1

def stuff(self):
print("Parent stuff!")


class Child(Parent):
def __init__(self):
super().__init__()
self.b = 2
self.super_instance = super()

def stuff(self):
print("Child stuff!")

def only_in_child(self):
print("Only in child!")


c = Child()
c.super_instance.__init__()
c.stuff()
c.super_instance.stuff()
print(c)
print(c.super_instance)
print(c.__init__)
print(c.super_instance.__init__)
print(c.stuff)
print(c.super_instance.stuff)
print(c.__getattribute__)
print(c.super_instance.__getattribute__)
print(dir(c))
print(dir(c.super_instance))
print(c.__dict__ == c.super_instance.__dict__)
print(getattr(c, "__init__"))
print(getattr(c.super_instance, "__init__"))
print(c.__getattribute__("__init__"))
print(c.super_instance.__getattribute__("__init__"))



Child stuff!
Parent stuff!
<__main__.Child object at 0x026854D99828>
<super: <class 'Child'>, <Child object>>
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Parent.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Child.stuff of <__main__.Child object at 0x026854D99828>>
<bound method Parent.stuff of <__main__.Child object at 0x026854D99828>>
<method-wrapper '__getattribute__' of Child object at 0x026854D99828>
<method-wrapper '__getattribute__' of Child object at 0x026854D99828>
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', 
'__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', 
'__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', 
'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', 
'__str__', '__subclasshook__', '__weakref__', 'a', 'b', 'only_in_child', 
'stuff', 'super_instance']
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', 
'__ge__', '__get__', '__getattribute__', '__gt__', '__hash__', '__init__', 
'__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', 
'__reduce_ex__', '__repr__', '__self__', '__self_class__', '__setattr__', 
'__sizeof__', '__str__', '__subclasshook__', '__thisclass__', 'a', 'b', 
'super_instance']
True
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Parent.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Data model and attribute resolution in subclasses

2020-03-02 Thread Adam Preble
On Monday, March 2, 2020 at 3:12:33 PM UTC-6, Marco Sulla wrote:
> Is your project published somewhere? What changes have you done to the
> interpreter?

I'm writing my own mess:
https://github.com/rockobonaparte/cloaca

It's a .NET Pythonish interpreter with the distinction of using a whole lot of 
async-await so I can do expressive game scripting with it in one thread. If 
IronPython had a handle on async-await then I'd probably not be doing this at 
all. Well, it was also a personal education project to learn me some Python 
internals for an internal company job change, but they aren't interested in me 
at all. :(

I still hack with it because I got far enough to have a REPL I could dump into 
Unity and it immediately looked very useful.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Data model and attribute resolution in subclasses

2020-03-02 Thread Adam Preble
On Monday, March 2, 2020 at 7:09:24 AM UTC-6, Lele Gaifax wrote:
> Yes, you just used it, although you may have confused its meaning:
> 

Yeah I absolutely got it backwards. That's a fun one I have to fix in my 
project now!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Data model and attribute resolution in subclasses

2020-03-01 Thread Adam Preble
On Sunday, March 1, 2020 at 3:08:29 PM UTC-6, Terry Reedy wrote:

> Because BaseClass is the superclass of SubClass.

So there's a mechanism for parent classes to know all their children?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Data model and attribute resolution in subclasses

2020-03-01 Thread Adam Preble
Based on what I was seeing here, I did some experiments to try to understand 
better what is going on:

class BaseClass:
def __init__(self):
self.a = 1

def base_method(self):
return self.a

def another_base_method(self):
return self.a + 1


class SubClass(BaseClass):
def __init__(self):
super().__init__()
self.b = 2


c = SubClass()
print(c.__dict__)
print(c.__class__.__dict__)
print(c.__class__.__subclasses__())
print(c.__class__.mro())
print(c.__class__.mro()[1].__dict__)
print(getattr(c, "base_method"))
print(c.b)
print(c.a)

With some notes:
print(c.__dict__)
{'a': 1, 'b': 2}
So the instance directly has a. I am guessing that the object's own dictionary 
is directly getting these as both __init__s are run.

print(c.__class__.__dict__)
{'__module__': '__main__', '__init__': <function SubClass.__init__ at 0x...>, '__doc__': None}
I am guessing this is what is found and stuffed into the class' namespace when 
the class is built; that's specifically the BUILD_CLASS opcode doing its thing.

print(c.__class__.__subclasses__())
[]
What?! Why isn't this [<class '__main__.BaseClass'>]?

print(c.__class__.mro())
[<class '__main__.SubClass'>, <class '__main__.BaseClass'>, <class 'object'>]
This is more like what I expected to find with subclasses. Okay, no, method 
resolution order is showing the entire order.

print(c.__class__.mro()[1].__dict__)
{'__module__': '__main__', '__init__': <function BaseClass.__init__ at 0x...>, 'base_method': <function BaseClass.base_method at 0x...>, 'another_base_method': <function BaseClass.another_base_method at 0x...>, '__dict__': <attribute '__dict__' of 'BaseClass' objects>, '__weakref__': <attribute '__weakref__' of 'BaseClass' objects>, '__doc__': None}
No instance-level stuff. Looks like it's the base class namespace when the 
BUILD_CLASS opcode saw it. Okay, looking good.

print(getattr(c, "base_method"))
<bound method BaseClass.base_method of <__main__.SubClass object at 0x...>>
I'm guessing here it didn't find it in the object's __dict__ nor the class' 
__dict__ so it went in mro and found it in BaseClass.

So I need a __dict__ for the class based on the code defined for it when the 
class is defined. That's associated with the class. I need another dictionary 
for each instance. That will get stuffed with whatever started getting dumped 
into it in __init__ (and possibly elsewhere afterwards).

What __dict__ actually is can vary. The mappingproxy helps make sure that 
strings are given as keys (among other things?).
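The __subclasses__ surprise above boils down to direction: the registry lives on the parent, and mro() walks the other way. A quick check (class names match the example above):

```python
class BaseClass:
    pass

class SubClass(BaseClass):
    pass

c = SubClass()
# __subclasses__ lists classes that inherit *from* the class you ask,
# so the child's list is empty and the parent's list names the child:
assert SubClass.__subclasses__() == []
assert BaseClass.__subclasses__() == [SubClass]
# mro() walks the other direction, from the class up to object:
assert c.__class__.mro() == [SubClass, BaseClass, object]
```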
-- 
https://mail.python.org/mailman/listinfo/python-list


Data model and attribute resolution in subclasses

2020-02-27 Thread Adam Preble
I have been making some progress on my custom interpreter project but I found I 
have totally blown implementing proper subclassing in the data model. What I 
have right now is PyClass defining what a PyObject is. When I make a PyObject 
from a PyClass, the PyObject sets up a __dict__ that is used for attribute 
lookup. When I realized I needed to worry about looking up parent namespace 
stuff, this fell apart because my PyClass had no real notion of a namespace.

I'm looking at the Python data model for inspiration. While I don't have to 
implement the full specifications, it helps me where I don't have an 
alternative. However, the data model is definitely a programmer document; it's 
one of those things where the prose is being very precise in what it's saying 
and that can foil a casual reading.

Here's what I think is supposed to exist:
1. PyObject is the base.
2. It has an "internal dictionary." This isn't exposed as __dict__
3. PyClass subclasses PyObject.
4. PyClass has a __dict__

Is there a term for PyObject's internal dictionary? It wasn't called __dict__, 
and I think that's for good reasons. I guess the idea is that a PyObject 
doesn't have a namespace, but a PyClass does (?).

Now to look something up. I assume that __getattribute__ is supposed to do 
something like:
1. The PyClass __dict__ for the given PyObject is consulted.
2. The implementation for __getattribute__ for the PyObject will default to 
looking into the "internal dictionary."
3. Assuming the attribute is not found, the subclasses are then consulted using 
the subclass' __getattribute__ calls. We might recurse on this. There's 
probably some trivia here regarding multiple inheritance; I'm not entirely 
concerned (yet).
4. Assuming it's never found, then the user sees an AttributeError

Would each of these failed lookups result in an AttributeError? I don't know 
how much it matters to me right now that I implement exactly to that, but I was 
curious if that's really how that goes under the hood.
-- 
https://mail.python.org/mailman/listinfo/python-list


Understanding bytecode arguments: 1 byte versus 2 bytes

2020-01-06 Thread adam . preble
I'm trying to understand the difference in disassemblies with 3.6+ versus older 
versions of CPython. It looks like the basic opcodes like LOAD_FAST are 3 bytes 
in pre-3.6 versions, but 2 bytes in 3.6+. I read online somewhere that there 
was a change to the argument sizes in 3.6: it became 2 bytes when it used to be 
just one. I wanted to verify that. For 3.6, if an opcode takes an argument, can 
I always assume that argument is just one byte?

I can think of some situations where that doesn't sound right. For example, 
JUMP_ABSOLUTE would be a problem, although I have yet to see that opcode in the 
wild. Actually, I'd be worried about the more involved jumps, because it sounds 
like with just a single-byte offset I'd sometimes have to make trampolines to 
jump to where I ultimately need to be. Again, I haven't really hit that, but 
I'm also using 2-byte opcodes.

What I have works, but it looks fairly simple for me to reduce the opcode size, 
so I wanted to understand some of the decisions that were made to go to a 
single-byte argument size in 3.6.
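The two-byte "wordcode" layout is easy to confirm from any function's raw co_code; wide arguments (like far jumps) are handled by EXTENDED_ARG prefix instructions rather than trampolines:

```python
import dis

def f(a):
    return a + 1

code = f.__code__.co_code
# Since 3.6, CPython bytecode is "wordcode": every instruction is exactly
# two bytes -- one opcode byte, one argument byte. Arguments wider than
# 8 bits are built up with EXTENDED_ARG prefix instructions.
assert len(code) % 2 == 0
pairs = [(dis.opname[code[i]], code[i + 1]) for i in range(0, len(code), 2)]
assert all(0 <= arg <= 255 for _, arg in pairs)
```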
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Plumbing behind super()

2019-06-29 Thread adam . preble
Thanks for the replies from everybody. It looks like I should double check 
super_init and see what truck is coming from that which will hit me with a 
gotcha later. I'm very naively right now plucking the class from my locals and 
I was able to proceed in the very, very short term.

I think I would have run into something like this earlier but I was doing 
something else incorrectly with self references in general. I was having my 
byte code push the object reference on the stack for method calls instead of 
using a naive one.

For example:
m.change_a(2)

Disregarding unrelated code, it disassembles to this in a 3.6 intepreter:
  3           6 LOAD_FAST                0 (m)
              8 LOAD_ATTR                1 (change_a)
             10 LOAD_CONST               1 (2)
             12 CALL_FUNCTION            1

I have been doing an oopsies of trying to push the self reference on the stack 
for the method. So I'm doing something like:
  3           6 LOAD_FAST                0 (m)
              8 LOAD_ATTR                1 (change_a)
              X LOAD_FAST                0 (m)
             10 LOAD_CONST               1 (2)
             12 CALL_FUNCTION            2

Whoops. Now I need to figure out how the interpreter knows that change_a is a 
method and knows what self to feed it. I'm assuming that's in the cell 
variables similar to what super()'s doing as explained here. I haven't 
implemented cell variables so this is where I'm stuck in a sand pit.
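If it helps anyone later: the extra push isn't needed because LOAD_ATTR already produces a bound method via the descriptor protocol (plain functions are descriptors whose __get__ bakes self in), so it isn't cell variables at work here. A small demonstration (class M is invented):

```python
class M:
    def change_a(self, value):
        self.a = value

m = M()
# LOAD_ATTR on m.change_a invokes the function's __get__ (plain functions
# are descriptors), which returns a bound method carrying self. That is
# why CALL_FUNCTION 1 only needs the argument on the stack.
bound = type(m).__dict__["change_a"].__get__(m, type(m))
bound(2)
assert m.a == 2
assert bound.__self__ is m
```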
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Plumbing behind super()

2019-06-27 Thread adam . preble
I was wrong in the last email because I accidentally looked in super_getattro 
instead of super_init.

Just for some helper context:

>>> class Foo:
...   pass
...
>>> class Bar(Foo):
...   def __init__(self):
... super().__init__()
... self.a = 2
...
>>> dis(Bar)
Disassembly of __init__:
  3   0 LOAD_GLOBAL      0 (super)
      2 CALL_FUNCTION    0
      4 LOAD_ATTR        1 (__init__)
      6 CALL_FUNCTION    0
      8 POP_TOP

  4  10 LOAD_CONST       1 (2)
     12 LOAD_FAST        0 (self)
     14 STORE_ATTR       2 (a)
     16 LOAD_CONST       0 (None)
     18 RETURN_VALUE

I originally set a breakpoint at super_getattro, so I was seeing it getting the 
self pointer from TOS, but I think I needed super_init--especially since that 
is getting called from a CALL_FUNCTION opcode, whereas super_getattro comes 
from the LOAD_ATTR. I think that's from the self.a assignment.

The super_init code melted my brain! It looks like it's grabbing the current 
frame and interrogating it to find __class__. I think I have the same amount of 
visibility and similar structure in what I'm writing but... woah.
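A small experiment that seems to confirm the frame interrogation, assuming 
stock CPython 3.x: the compiler gives any method body that mentions super() an 
implicit __class__ closure cell, and that cell is what super_init digs out of 
the current frame:

```python
class Foo:
    pass

class Bar(Foo):
    def __init__(self):
        super().__init__()
        self.a = 2

# Mentioning super() (or __class__) in a method body makes the compiler
# add a __class__ free variable to the method's code object...
assert '__class__' in Bar.__init__.__code__.co_freevars

# ...and the matching closure cell holds the class being defined.
idx = Bar.__init__.__code__.co_freevars.index('__class__')
assert Bar.__init__.__closure__[idx].cell_contents is Bar
```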
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Plumbing behind super()

2019-06-27 Thread adam . preble
On Thursday, June 27, 2019 at 8:30:21 PM UTC-5, DL Neil wrote:
> I'm mystified by "literally given nothing".

I'm focusing there particularly on the syntax of writing "super()" without any 
arguments to it. However, internally it's getting fed stuff.

> If a class has not defined an attribute, eg self.my_attribute, but the 
> base class has defined an attribute of that name, then self.my_attribute 
> will be as initialised by the base class. Welcome to the intricacies of 
> managing scope!

I'm thinking my problem was more fundamental here. I finally got the debugger 
to bypass all the calls to the internal super() implementation that are done on 
startup and run instead just a bit of user-specified code. It's looking like 
super() actually gobbles the self pointer that goes onto the stack when 
__init__ is first created and replaces it when it's done by returning it again. 
I think. Maybe.
-- 
https://mail.python.org/mailman/listinfo/python-list


Plumbing behind super()

2019-06-27 Thread adam . preble
I'm trying to mimic Python 3.6 as a .NET science project and have started to 
get into subclassing. The super() not-a-keyword-honestly-guys has tripped me 
up. I have to admit that I've professionally been doing a ton of Python 2.7, so 
I'm not good on my Python 3.6 trivia yet. I think I have the general gist of 
this, but want some affirmation.

If you use super() in a method, all it does is load super as a global onto the 
interpreter stack and call it without any arguments. So I'm left to wonder how 
it's able to figure anything out when it's being literally given nothing...

except that it's not being given literally nothing:

static PyObject *
super_getattro(PyObject *self, PyObject *name)

I was thinking maybe self has become more special in Python 3.6, but I don't 
think that's true since I've ported code to Python3 before that had inner 
classes where I'd use "inner_self" to disambiguate with the outer self. And 
though I thought it was so at first, it just turned out I screwed up my little 
code snippet to expose it. If self was special then I presume I could find it 
in my lookups and inject it.

So how do I go from CALL_FUNCTION on a super() global without anything else on 
the stack to somehow having all the information I need? Is there something 
tracking that I'm in an object scope when calling stuff?
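A rough Python-level sketch of what I gather the C implementation (super_init 
in typeobject.c) is doing. sys._getframe here stands in for the C code's direct 
access to the thread state, and the real version also pulls the __class__ 
closure cell, which this sketch skips:

```python
import sys

def first_argument_of_caller():
    # Look at the calling frame and read out its first positional
    # argument; for a method, co_varnames[0] is 'self'.
    frame = sys._getframe(1)
    return frame.f_locals[frame.f_code.co_varnames[0]]

class Demo:
    def method(self):
        # Even though we pass nothing, the helper can recover self
        # from our frame, just like zero-argument super() does.
        return first_argument_of_caller()

d = Demo()
assert d.method() is d
```

So nothing extra goes on the stack; the "something tracking that I'm in an 
object scope" is the calling frame itself.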

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: From parsing a class to code object to class to mappingproxy to object (oh my!)

2019-04-05 Thread adam . preble
On Friday, April 5, 2019 at 5:54:42 PM UTC-5, Gregory Ewing wrote:
> But when compiling a class body, it uses a dict to hold the
> locals, and generates LOAD_NAME and STORE_NAME opcodes to
> access it.
> 
> These opcodes actually date from very early versions of
> Python, when locals were always kept in a dict. When
> optimised locals were introduced, the old mechanism was kept
> around for building class dicts. (There used to be one or
> two other uses, but I think classes are the only one left
> now.)

Thanks for the explanation. I've figured from this that I could do most 
variable access by just generating LOAD/STORE_NAME and the FAST is an 
(important) optimization. That is disregarding sneaking in the global or 
nonlocal keywords...

So let's say I'm generating byte codes. If I am generating code for a class 
body, then I have a dictionary and I'm generating LOAD/STORE NAMEs. Any other 
body is usually going to wind up being FAST/GLOBAL/the other ones. An important 
exception there would be for built-ins and presumably imported stuff, right?

I'm asking because right now I've literally hard-coded some logic that tells me 
if I'm generating a class body so I know to use names. I just feel kind of 
silly doing that.
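As far as I can tell, the "magic switch" really is just the block type: 
function blocks get optimized locals, while module and class blocks use the 
name-based opcodes. A quick check with stock dis (nothing here is specific to 
my interpreter) makes that visible:

```python
import dis
import types

src = """
class C:
    x = 1
    y = x

def f():
    x = 1
    return x
"""
mod = compile(src, '<demo>', 'exec')

# Fish the class-body and function code objects out of the module's consts.
codes = {c.co_name: c for c in mod.co_consts if isinstance(c, types.CodeType)}

class_ops = {i.opname for i in dis.get_instructions(codes['C'])}
func_ops = {i.opname for i in dis.get_instructions(codes['f'])}

# The class body runs against a plain dict namespace...
assert 'STORE_NAME' in class_ops and 'LOAD_NAME' in class_ops
# ...while an ordinary function body gets optimized (fast) locals.
assert 'STORE_NAME' not in func_ops
assert any(op.startswith('STORE_FAST') for op in func_ops)
```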
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: From parsing a class to code object to class to mappingproxy to object (oh my!)

2019-04-04 Thread adam . preble
On Thursday, April 4, 2019 at 1:17:02 PM UTC-5, adam@gmail.com wrote:

> Thanks for the response. I was meaning to write back earlier, but I've been 
> spending my free Python time in the evenings reimplementing what I'm doing to 
> work more correctly. I'm guessing before the code object representing the 
> class body gets run, __build_class__ is establishing various dunders such as 
> __name__, __qualname__, and __module__. I haven't fully penetrated that, but 
> I also took an embarrassingly long amount of time to fully comprehend 
> LOAD_NAME versus LOAD_FAST versus LOAD_GLOBAL...

I was blabbing on this while I was away from my examples, so I did botch this 
up a bit. I guess when building the class, __name__ will get set beforehand, 
and the generated code is setting __qualname__ and __module__ from there. 
Something I don't really understand from a code generation perspective is the 
switch over to STORE_NAME for class methods. When I disassemble regular 
function definitions, I expect to see the result of MAKE_FUNCTION stored using 
STORE_FAST. However, in the code generated from a class body, it's a 
STORE_NAME. I know this is signifying to the interpreter not to rely on 
sourcing __init__ locally, but I don't understand why it suddenly has decided 
that now that it's in a class body.

I'm in a situation where I have to generate those opcodes, so I'm trying to 
figure out what the magic switch is--if it isn't just that I'm inside a class 
definition now.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: From parsing a class to code object to class to mappingproxy to object (oh my!)

2019-04-04 Thread adam . preble
On Monday, April 1, 2019 at 1:23:42 AM UTC-5, Gregory Ewing wrote:
> adam.pre...@gmail.com wrote:
> https://eli.thegreenplace.net/2012/06/15/under-the-hood-of-python-class-definitions
> 
> Briefly, it creates a dict to serve as the class's namespace dict,
> then executes the class body function passed to it, with that dict
> as the local namespace. So method defs and other assignments go
> straight into what will become the class namespace when the class
> object is created.

Thanks for the response. I was meaning to write back earlier, but I've been 
spending my free Python time in the evenings reimplementing what I'm doing to 
work more correctly. I'm guessing before the code object representing the class 
body gets run, __build_class__ is establishing various dunders such as 
__name__, __qualname__, and __module__. I haven't fully penetrated that, but I 
also took an embarrassingly long amount of time to fully comprehend LOAD_NAME 
versus LOAD_FAST versus LOAD_GLOBAL...
-- 
https://mail.python.org/mailman/listinfo/python-list


From parsing a class to code object to class to mappingproxy to object (oh my!)

2019-03-31 Thread adam . preble
I have been mimicking basic Python object constructs successfully until I 
started trying to handle methods as well in my hand-written interpreter. At 
that point, I wasn't sure where to stage all the methods before they get 
shuffled over to an actual instance of an object. I'm having to slap this out 
here partially to rubber duck it and partially because I really don't know 
what's going on.

Here's something tangible we can use:
>>> def construct():
...   class Meow:
... def __init__(self):
...   self.a = 1
... def change_a(self, new_a):
...   self.a = new_a
...   return Meow()
...
>>> dis(construct)
  2   0 LOAD_BUILD_CLASS
      2 LOAD_CONST       1 (<code object Meow at 0x021BD59170C0, file "<stdin>", line 2>)
      4 LOAD_CONST       2 ('Meow')
      6 MAKE_FUNCTION    0
      8 LOAD_CONST       2 ('Meow')
     10 CALL_FUNCTION    2
     12 STORE_FAST       0 (Meow)

  7  14 LOAD_FAST        0 (Meow)
     16 CALL_FUNCTION    0
     18 RETURN_VALUE

I've wrapped my class in a function so I could more readily poke it with a 
stick. I understand LOAD_BUILD_CLASS will invoke builtins.__build_class__(). By 
the way, why the special opcode for that? Anyways, it takes that code object, 
which looks to be particularly special.

Note that I'm inserting some newlines and extra junk for sanity:

>>> import ctypes
>>> c = ctypes.cast(0x021BD59170C0, ctypes.py_object).value
>>> c.co_consts
('construct.<locals>.Meow',
 <code object __init__ at 0x..., file "<stdin>", line 3>,
 'construct.<locals>.Meow.__init__',
 <code object change_a at 0x..., file "<stdin>", line 5>,
 'construct.<locals>.Meow.change_a',
 None)

>>> c.co_names
('__name__', '__module__', '__qualname__', '__init__', 'change_a')

>>> dis(c.co_code)
  0 LOAD_NAME        0 (0) -> __name__ ...
  2 STORE_NAME       1 (1) -> ... goes into __module__
  4 LOAD_CONST       0 (0) -> Name of the class = 'construct.<locals>.Meow' ...
  6 STORE_NAME       2 (2) -> ... goes into __qualname__
  8 LOAD_CONST       1 (1) -> The __init__ code object ...
 10 LOAD_CONST       2 (2) -> ... and its qualified name ...
 12 MAKE_FUNCTION    0     -> ... made into a function
 14 STORE_NAME       3 (3) -> Stash it as __init__
 16 LOAD_CONST       3 (3) -> The change_a code object ...
 18 LOAD_CONST       4 (4) -> ... and its qualified name ...
 20 MAKE_FUNCTION    0     -> ... made into a function
 22 STORE_NAME       4 (4) -> Stash it as change_a
 24 LOAD_CONST       5 (5) -> None
 26 RETURN_VALUE

I'm not too surprised to see stuff like this since this kind of thing is what I 
expect to find in flexible languages that do object-oriented programming by 
basically blessing a variable and heaping stuff on it. It's just that I'm 
trying to figure out where this goes in the process of language parsing to 
class declaration to object construction.

What I'm assuming is that when builtins.__build_class__() is invoked, it does 
all the metaclass/subclass chasing first, and then ultimately invokes this code 
object to populate the class. If I look at the __dict__ for the class 
afterwards, I see:

mappingproxy(
{'__module__': '__main__',
'__init__': <function construct.<locals>.Meow.__init__ at 0x021BD592AAE8>,
'change_a': <function construct.<locals>.Meow.change_a at 0x021BD5915F28>,
'__dict__': <attribute '__dict__' of 'Meow' objects>,
'__weakref__': <attribute '__weakref__' of 'Meow' objects>,
'__doc__': None})

What is the plumbing taking the result of that code object over to this proxy? 
I'm assuming __build_class__ runs that code object and then looks over the new 
names to create this. Is this mapping proxy the important thing for carrying 
the method declarations?

Is this also where prepare (and __prepare__) comes into play? Running into that 
was where I felt the need to start asking questions because I got six layers 
deep and my brain melted.

I'm then assuming that in object construction, this proxy is carried by some 
reference to the final object in such a way that if the class's fields are 
modified, all instances would see the modification unless the local object 
itself was overridden.
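To rubber duck those last paragraphs in code: a sketch of my understanding of 
__build_class__ for a plain class, ignoring metaclasses and __prepare__ 
entirely. The class_body function here is a hypothetical stand-in for exec'ing 
the real class-body code object with a dict as its local namespace:

```python
# Stand-in for the compiled class-body code object: __build_class__
# would exec the real one with ns as its f_locals dict.
def class_body(ns):
    def __init__(self):
        self.a = 1
    def change_a(self, new_a):
        self.a = new_a
    ns['__init__'] = __init__
    ns['change_a'] = change_a

namespace = {}
class_body(namespace)

# The metaclass (plain type here) turns that namespace into the class;
# Meow.__dict__ is then a read-only mappingproxy view over it.
Meow = type('Meow', (object,), namespace)
assert 'change_a' in Meow.__dict__

# Instances don't copy the methods. Attribute lookup falls through the
# instance dict to type(m).__dict__, which is why mutating the class
# afterwards is visible from every instance that hasn't shadowed the name.
m = Meow()
m.change_a(5)
assert m.a == 5
```

So the proxy isn't the carrier so much as a read-only window onto the class 
dict that the class-body code populated.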
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: log file

2019-03-21 Thread adam . preble
On Thursday, March 21, 2019 at 10:26:14 PM UTC-5, Sharan Basappa wrote:
> I am running a program and even though the program runs all fine, the log 
> file is missing. I have pasted first few lines of the code.
> 
I am thinking--hoping, rather--that you just kind of double pasted there. 
Anyways, you needed to specify the logging level in basicConfig:

import os
import csv
import logging
import random

#Create and configure logger
# Check out level=logging.INFO
logging.basicConfig(filename="test_1.log", filemode='w',
                    format='%(asctime)s %(message)s', level=logging.INFO)
log = logging.getLogger()
log.info("Yay!")
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: REPL, global, and local scoping

2019-03-21 Thread adam . preble
On Tuesday, March 19, 2019 at 9:49:48 PM UTC-5, Chris Angelico wrote:
> I would recommend parsing in two broad steps, as CPython does:
> 
> 1) Take the source code and turn it into an abstract syntax tree
> (AST). This conceptualizes the behaviour of the code more-or-less the
> way the programmer wrote it.
> 
> 2) Implement the AST in byte-code.

You are totally right. That's not what's getting me here, but I am running into 
a problem using ANTLR's visitors to decide what kind of storage opcode to use 
when I run into an assignment. I only roughly pondered it in my head but never 
really looked at the AST Python was generating for clues.

But, you see, my cat is hungry and I have to, you know, ... take care of him. A 
lot. For awhile. Yeah. ;)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: REPL, global, and local scoping

2019-03-19 Thread adam . preble
On Tuesday, March 19, 2019 at 3:48:27 PM UTC-5, Chris Angelico wrote:
> > I can see in vartest() that it's using a LOAD_GLOBAL for that, yet first() 
> > and second() don't go searching upstairs for a meow variable. What is the 
> > basis behind this?
> >
> 
> Both first() and second() assign to the name "meow", so the name is
> considered local to each of them. In vartest(), the name isn't
> assigned, so it looks for an outer scope.

Thanks for the responses. I wanted to poke on this part just a little bit more. 
I want to mimic the proper behavior without having to go back to it repeatedly.

Let's say I'm parsing this code and generating the byte code for it. I run into 
meow on the right side of an expression but never the left. At this point, do I 
always generate a LOAD_GLOBAL opcode? Is that done whether or not I know if the 
variable is defined in a higher scope? That's what it looks like from renaming 
it to something I didn't define. I just want to double check.

On the interpreter side seeing the opcode, does that generally mean I walk up 
the variables I have in higher frames until I either run out of them or find it?

Does that mean defining "global meow" basically states "always use 
LOAD/STORE_GLOBAL opcodes for this one, even if it shows up on the left side of 
an assignment first?"
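To double check the bytecode side, here's the experiment I'd run (stock dis, 
nothing interpreter-specific). One correction to my own guess about the 
runtime: as I understand CPython, LOAD_GLOBAL doesn't walk up through calling 
frames at all; it checks the frame's globals dict and then builtins. Closures 
over enclosing *functions* use the separate cell/LOAD_DEREF machinery instead.

```python
import dis

meow = 10

def reads_only():
    return meow        # never assigned -> compiler emits LOAD_GLOBAL

def assigns():
    meow = 11          # assigned somewhere in the body -> local name
    return meow

def declared_global():
    global meow        # forces LOAD/STORE_GLOBAL even for assignment
    meow = 12
    return meow

def ops(func):
    return {i.opname for i in dis.get_instructions(func)}

assert 'LOAD_GLOBAL' in ops(reads_only)
assert any(op.startswith('STORE_FAST') for op in ops(assigns))
assert 'STORE_GLOBAL' in ops(declared_global)
```

So yes: "global meow" means the compiler uses the GLOBAL opcodes for that name 
everywhere in the function, even when it first appears on the left side.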
-- 
https://mail.python.org/mailman/listinfo/python-list


REPL, global, and local scoping

2019-03-19 Thread adam . preble
I got hit on the head and decided to try to write something of a Python 
interpreter for embedding. I'm starting to get into the ugly stuff like 
scoping. This has been a good way to learn the real deep details of the 
language. Here's a situation:

>>> meow = 10
>>> def vartest():
... x = 1
... y = 2
... def first():
...x = 3
...meow = 11
...return x
... def second():
...y = 4
...meow = 12
...return y
... return first() + second() + x + y + meow
...
>>> vartest()
20

first() and second() are messing with their own versions of x and y. Side note: 
the peephole optimizer doesn't just slap a 3 and 4 on the stack, respectively, 
and just return that. It'll actually do a STORE_NAME to 0 for each. The meow 
variable is more peculiar. The first() and second() inner functions are working 
on their own copy. However, vartest() is using the one from the calling scope, 
which is the REPL.

I can see in vartest() that it's using a LOAD_GLOBAL for that, yet first() and 
second() don't go searching upstairs for a meow variable. What is the basis 
behind this?

I tripped on this in my unit tests when I was trying to note that a class' 
constructor had run successfully without getting into poking the class' 
variables. I was trying to have it set a variable in the parent scope that I'd 
just query after the test.

It looks like in practice, that should totally not work at all:
>>> meow = 10
>>> class Foo:
...def __init__(self):
...   meow = 11
...
>>> f = Foo()
>>> meow
10

...and it's on me to carve out a throwaway inner meow variable. But hey, let's 
kick meow up to the class level:

>>> meow = 10
>>> class Foo:
...meow = 11
...def __init__(self):
...   pass
...
>>> f = Foo()
>>> meow
10

So I guess it's a different ballgame for classes entirely. What are the rules 
in play here for:
1. A first-level function knowing to use a variable globally
2. A Second-level function using the name locally and uniquely
3. Classes having no visibility to upper-level variables at all by default
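If it helps anyone grappling with the same thing, here's my understanding of 
rules 1-3 condensed into something executable (plain CPython semantics; the 
names mirror my snippets above):

```python
meow = 10

def vartest():
    x = 1
    def first():
        x = 3        # assignment anywhere in first() makes x local
        meow = 11    # likewise a brand-new local; the global is untouched
        return x + meow
    return first() + x + meow   # meow never assigned here -> global lookup

assert vartest() == 25   # (3 + 11) + 1 + 10
assert meow == 10        # the inner meow = 11 never escaped

class Foo:
    meow = 11            # goes into Foo's namespace dict, not the module
    def __init__(self):
        meow = 12        # plain local of __init__; the class body is NOT
                         # an enclosing scope for its methods

Foo()
assert meow == 10
assert Foo.meow == 11
```

The short version: a name assigned anywhere in a function body is local for 
the whole body; a name only read comes from the global (then builtin) scope; 
and class-body assignments land in the class dict without creating a scope 
that methods can see.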
-- 
https://mail.python.org/mailman/listinfo/python-list


pip not resolving newer package than required dependency

2018-08-27 Thread adam . preble
I have a module with a dependency specifically on pillow>=4.2.1. We are using 
an internal PyPI that has removed the pillow 4.x series, but it does have 
5.2.0. If we try to install pillow>=4.2.1 it doesn't find anything. If we just 
instruct pip to install pillow, then it will end up installing pillow 5.2.0 
just fine. Is there some peculiar versioning rule I'm running into here?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Distributing a Python module as .rpm and .deb packages across major distributions

2018-06-13 Thread adam . preble
On Sunday, June 10, 2018 at 3:05:45 PM UTC-5, Barry wrote:

> The way I learn about the details of RPM packaging is to look at examples 
> like what I wish to achieve.
> 
> I would go get the source RPM for a python2 package from each distro you want 
> to supoort and read its .spec file.
> 
> I see on fedora that the way they install packages that are from pypi makes 
> it possible to use pip list to see them.

The impression I'm getting is that they're using tools to help in this process, 
rather than manually crafting the package descriptors and shell scripts. I'd 
like to do similar, of course, since it sounds like it's a lot easier.

I have looked at a few already, which is where I got where I was already. In 
Debian packages I see a lot of references to dh_python, and the regularity of 
it implies the lines in question are being generated--but by what? The source 
packages will have these generated outputs, which means it's already too late 
in the process when it comes to reverse engineering what generated those 
outputs.
-- 
https://mail.python.org/mailman/listinfo/python-list


Distributing a Python module as .rpm and .deb packages across major distributions

2018-06-08 Thread adam . preble
I have a situation where internally I need to distribute some Python code using 
Linux packages rather than simply relying on wheel files. This seems to be a 
solved problem because a lot of Python modules clearly get distributed as .rpm 
and .deb. It's not completely unreasonable because soon I will have some other 
modules that are depending on binary applications that are also coming in from 
packages, and having the system package manager resolve and install all this is 
convenient. I'm not really in a political position to change that policy, for 
what it's worth.

I'm still stuck in Python 2.7 here for at least a few more months. Also, it 
probably helps to know this is a pure Python module that doesn't have to 
compile any native code.

Creating a package itself isn't a problem. In my case, I bandied with the 
bdist_rpm rule in setup.py, and used stdeb to add a bdist_deb rule. I get rpm 
and deb files from these, but they seem to be plagued with a problem of making 
assumptions about paths based on my build environment. I'm building on an 
Ubuntu rig where Python modules are installed into dist-packages. The rpm 
package will try to install my module into dist-packages instead of 
site-packages on a Red Hat rig. I haven't yet tried the Debian package on 
different rigs, but the stdeb documentation did call out that this likely won't 
work.

I'm wondering first if many of the modules we see in packages right now are 
actually literally being built using Jenkins or some other CD tool on every 
major OS distribution. If that's the case then at least I know and I can try to 
do that. I was surprised that I couldn't easily provide some additional flags. 
I believe I can specify a setup.cfg that can override the module installation 
path, and I think I can do a little shell script to just rotate different 
setup.cfg files through, but I can't help but wonder if I'm even on the right 
path.
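For instance, I believe distutils reads per-command sections out of setup.cfg, 
so something along these lines should override the install location (the path 
here is illustrative of a Red Hat style layout, not verified on every distro):

```ini
[install]
install-lib=/usr/lib/python2.7/site-packages
```

Rotating a file like that per target distro, or passing --install-lib to the 
install command from a shell script, would be two versions of the same idea.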
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Resolving import errors reported by PyLint in modules using Python.NET

2013-08-11 Thread adam . preble
I thought I responded to this.  Oh well *shrugs*

On Friday, August 9, 2013 12:47:43 AM UTC-5, Benjamin Kaplan wrote:
> Are you using Python.NET or IronPython? IronPython is reasonably well
> supported, and it looks like there's a patch you can use to get PyLint
> working on it (see
> http://mail.python.org/pipermail/ironpython-users/2012-June/016099.html
> ). Not sure what's going on with Python.NET

Definitely Python.NET in this case.  It looks like that issue is different than 
what I get.  PyLint just gets import errors when trying to import the modules, 
which it just reports with everything else I've done wrong.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from x to testx?

2013-08-11 Thread adam . preble
On Friday, August 9, 2013 1:31:43 AM UTC-5, Peter Otten wrote:
> I see I have to fix it myself then...

Sorry man, I think in my excitement of seeing the first of your examples to 
work, that I missed the second example, only seeing your comments about it at 
the end of the post.  I didn't expect such a good response.
-- 
http://mail.python.org/mailman/listinfo/python-list


Is it possible to make a unittest decorator to rename a method from x to testx?

2013-08-08 Thread adam . preble
We were coming into Python's unittest module from backgrounds in nunit, where 
they use a decorator to identify tests.  So I was hoping to avoid the 
convention of prepending "test" to the TestClass methods that are to be 
actually run.  I'm sure this comes up all the time, but I mean not to have to 
do:
sure this comes up all the time, but I mean not to have to do:

class Test(unittest.TestCase):
def testBlablabla(self):
self.assertEqual(True, True)

But instead:
class Test(unittest.TestCase):
@test
def Blablabla(self):
self.assertEqual(True, True)

This is admittedly a petty thing.  I have just about given up trying to 
actually deploy a decorator, but I haven't necessarily given up on trying to do 
it for the sake of knowing if it's possible.

Superficially, you'd think changing a function's __name__ should do the trick, 
but it looks like test discovery happens without looking at the transformed 
function.  I tried a decorator like this:

def prepend_test(func):
    print "running prepend_test"
    func.__name__ = "test" + func.__name__

    def decorator(*args, **kwargs):
        return func(*args, **kwargs)

    return decorator

When running unit tests, I'll see "running prepend_test" show up, but a dir on 
the class being tested doesn't show a renamed function.  I assume it only works 
with instances.  Are there any other tricks I could consider?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from x to testx?

2013-08-08 Thread adam . preble
On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
> Peter Otten wrote:
> Oops, that's an odd class name. Fixing the name clash in Types.__new__() is
> left as an exercise...

I will do some experiments with a custom test loader since I wasn't aware of 
that as a viable alternative.  I am grateful for the responses.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from x to testx?

2013-08-08 Thread adam . preble
On Thursday, August 8, 2013 3:04:30 AM UTC-5, Terry Reedy wrote:
> I cannot help but note that this is *more* typing. But anyhow, something 

It wasn't so much about the typing as about having "test" in front of 
everything.  It's a problem particular to me since I'm writing code that, well, 
runs experiments.  So the word "test" is already all over the place.  I would 
even prefer if I could do away with assuming everything starting with "test" is 
a unittest, but I didn't think I could; it looks like Peter Otten got me in the 
right direction.

> like this might work.
>
> def test(f):
>     f.__class__.__dict__['test_'+f.__name__]
>
> might work. Or maybe for the body just
>
> setattr(f.__class__, 'test_'+f.__name__)

Just for giggles I can mess around with those exact lines, but I did get 
spanked trying to do something similar.  I couldn't reference __class__ for 
some reason (Python 2.7 problem?).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from x to testx?

2013-08-08 Thread adam . preble
On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
> Peter Otten wrote:
> Oops, that's an odd class name. Fixing the name clash in Types.__new__() is
> left as an exercise...

Interesting, I got __main__.T, even though I pretty much just tried your code 
wholesale.  For what it's worth, I'm using Python 2.7.  I'm glad to see that 
code since I learned a lot of tricks from it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Resolving import errors reported by PyLint in modules using Python.NET

2013-08-08 Thread adam . preble
PyLint can't figure out imports of .NET code being referenced in my Python 
scripts that use Python.NET.  I can kind of see why; you have to evaluate some 
clr.AddReference calls for the imports to even succeed.  I wonder if I have any 
recourse.  Generally, to import a DLL you have to do a few things.  I guess for 
an example I'll import a .NET string:


import clr   # Python.NET common-language runtime module, the important part of it all

clr.AddReference("System")
from System import String   # .NET System.String

can = String("Spam")


PyLint is not amused:
F:  4, 0: Unable to import 'System' (import-error)

I wondered if there were any tricks to make it work.  I don't want to just 
ignore import-error, either by explicitly telling pylint to ignore it or by 
getting complacent about seeing it all the time.  I am also kind of curious if 
PyLint will expose new problems if it's able to figure out more things after 
successfully passing the imports.  I wouldn't really know.
-- 
http://mail.python.org/mailman/listinfo/python-list