On 01/25/2012 07:59 AM, Vitja Makarov wrote:
2012/1/25 mark florisson<[email protected]>:
On 24 January 2012 19:18, Dag Sverre Seljebotn
<[email protected]>  wrote:
On 01/24/2012 08:05 PM, Vitja Makarov wrote:

2012/1/24 mark florisson<[email protected]>:

On 24 January 2012 18:30, Vitja Makarov<[email protected]>    wrote:

2012/1/24 Robert Bradshaw<[email protected]>:

On Tue, Jan 24, 2012 at 6:09 AM, Vitja Makarov<[email protected]>
  wrote:

2012/1/24 mark florisson<[email protected]>:

On 24 January 2012 11:37, Konrad Hinsen<[email protected]>
  wrote:

Compiling the attached Cython file produced the attached C file
which
has errors in lines 532-534:

  __pyx_v_self->xx = None;
  __pyx_v_self->yy = None;
  __pyx_v_self->zz = None;

There is no C symbol "None", so this doesn't compile.

I first noticed the bug in Cython 0.15, but it's still in the latest
revision from Github.

Konrad.

_______________________________________________
cython-devel mailing list
[email protected]
http://mail.python.org/mailman/listinfo/cython-devel


Hm, it seems the problem is that the call to the builtin float
results in SimpleCallNode being replaced with PythonCApiNode, which
then generates the result code. But the coerced nodes are CloneNodes
of the original rhs, and CloneNode does not generate the result code
of the original rhs (i.e. allocate a temp and assign to it), which
results in a None result.

Maybe CascadedAssignmentNode should replace CloneNode.arg with the
latest self.rhs in generate_assignment_code? I'm not entirely sure.


Seems like a hack to me.



Maybe it's better to run OptimizeBuiltinCalls before
AnalyseExpressionsTransform?


Doesn't OptimizeBuiltinCalls take advantage of type information?


Yes, it does :(

So as Mark said, the problem is that CascadedAssignmentNode.coerced_rhs_list
is created before rhs is updated.


I think deferring the CloneNode creation to code generation time works
(are there any known problems with doing type coercions at code
generation time?).


Coercion errors at code generation time?


Apologies up front for raising my voice, as my knowledge of the internals
is getting so rusty... take this with a grain of salt.

I'm +1 on working towards having the code generation phase be pure code
generation. I did some refactorings to take mini-steps towards that once
upon a time, moving some error conditions to before code generation.

My preferred approach would be to do away with CascadedAssignmentNode at the
parse tree stage:

a = b = c = expr

goes to

tmp = expr
c = tmp
b = tmp
a = tmp

and so on. Of course it gets messier:

(expr1)[expr2] = (expr3).attr = expr4

But apart from getting the time of evaluating each expression right, the
transform should be straightforward. One of the tempnodes/"let"-nodes (I
forget which one, or whether they've been consolidated) should be able to
fix this.
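As a hedged sketch (not Cython's actual transform; the temp name `__tmp`
and the helper name `desugar_cascade` are invented for illustration), the
simple case of this desugaring can be expressed with Python's ast module.
Note that CPython itself assigns cascaded targets left to right:

```python
import ast

def desugar_cascade(assign, tmp_name="__tmp"):
    """Rewrite `a = b = c = expr` as a temp plus single assignments.

    Illustration only; real Cython would use its own temp/"let" nodes.
    """
    if len(assign.targets) <= 1:
        return [assign]
    # tmp = expr  (the rhs is evaluated exactly once)
    stmts = [ast.Assign(targets=[ast.Name(id=tmp_name, ctx=ast.Store())],
                        value=assign.value)]
    # one single assignment per original target, left to right
    for target in assign.targets:
        stmts.append(ast.Assign(targets=[target],
                                value=ast.Name(id=tmp_name, ctx=ast.Load())))
    return stmts

tree = ast.parse("a = b = c = expr")
mod = ast.Module(body=desugar_cascade(tree.body[0]), type_ignores=[])
print(ast.unparse(ast.fix_missing_locations(mod)))
# → __tmp = expr
#   a = __tmp
#   b = __tmp
#   c = __tmp
```

The messier targets mentioned above, like (expr1)[expr2], would
additionally need the subscript base and index hoisted into temps so each
sub-expression is still evaluated exactly once and at the right time.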

Takes some more work than a quick hack though...

Dag


In principle it was doing the same thing, apart from the actual
rewrite. I suppose the replacement problem can also be circumvented by
manually wrapping self.rhs in a CoerceToTempNode. The problem with
coerce_to_temp is that it does not create this node if the result is
already in a temp. Creating it manually does mean an extra useless
assignment, but it is an easy fix which happens at analyse_types time.
Instead, we could also use another node that just proxies a few things
like generate_result_code and the result method.

I like the idea though, it would be nice to only handle things in
SingleAssignmentNode. I recently added broadcasting (inserting leading
dimensions) and scalar assignment to memoryviews, and you can only
catch that at the assignment point. Currently it only supports single
assignments as the functionality is only in SingleAssignmentNode.

I must say though, the following would look a bit weird:

    a = b[:] = c[:, :] = d

as you always expect a kind of "cascade", e.g. you expect c[:, :] to
be assignable to b[:], or 'a', but none of that may be true at all. So
I'm fine with disallowing that, I think people should only use
cascaded assignment for variables.

I don't think that is a problem myself; but that's perhaps just because I'm so used to it (and to "a = b.x = y" not invoking b.__getattr__, and so on). After all, that is what you get with Python and NumPy! This is, in a sense, Python being a bit strange and us just following Python.

So I'm +1 for supporting this if we can do it "by accident".
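For what it's worth, the semantics under discussion can be checked in
plain Python: the rhs is evaluated once, every target is a pure store,
and intermediate targets are never read back. A small probe (the Probe
class and the log list are illustration-only names):

```python
log = []

class Probe:
    def __setitem__(self, key, value):
        log.append(("setitem", key, value))

    def __getitem__(self, key):  # would fire only if the target were *read*
        log.append(("getitem", key))

def rhs():
    log.append("rhs")
    return 42

p = Probe()
a = p["x"] = rhs()

print(a)    # 42
print(log)  # ['rhs', ('setitem', 'x', 42)] -- no getitem, rhs ran once
```

So in `a = b[:] = c[:, :] = d`, the slices of b and c are assignment
targets, never sources; nothing "cascades" through them.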



E.g. save 'env' during analyse_types and in
generate_assignment_code do

    rhs = CloneNode(self.rhs).coerce_to(lhs.type, self.env)
    rhs.generate_evaluation_code(code)
    lhs.generate_assignment_code(rhs, code)

Seems to work.


Yeah, that's better.




I don't like the idea of transforming a cascaded assignment into N single
assignments, since we might break some optimizations and lose CF info.

But what if the user decides to write

tmp = expr
a = tmp
b = tmp

manually? Shouldn't the same optimizations apply then?

Consider that if we can get down to a single assignment node, we can then split SingleAssignmentNode into "AssignLocalNode", "AssignPythonModuleVarNode", "AssignAttributeNode", "AssignTypedVarNode" and so on, if we want to -- that should clean up some code...


Dag Sverre