Re: way for a function to understand whether it's being run through an OnCreate callback or not

2009-07-10 Thread Martin Vilcans
On Fri, Jul 10, 2009 at 8:18 AM, slamdunks.de...@gmail.com wrote:
 is there a way for a function to understand whether it's being run
 through an OnCreate callback or not?
 I have working functions that I want to recycle through the OnCreate
 callback, but I need to catch the nuke.thisNode() bit inside them so they
 can still function when called manually from other scripts' functions.

I suppose you're programming for The Foundry's Nuke. Python is used in
lots of different contexts, and most people on this list use Python
for something totally unrelated to video.

As an attempt at answering your question anyway: you can add a parameter
to your function that tells it where it's being called from, e.g. pass
True when it is called from OnCreate and False otherwise. But that isn't
a very good design; a function shouldn't need to care where it is called
from. You probably want to pass the node as a parameter to the function
instead of calling nuke.thisNode() inside it.

I can't find documentation for the OnCreate function online. Do you
mean that you can only call nuke.thisNode() from inside OnCreate?

Here's my guess of what I think you want to do:

import nuke

def OnCreate():
    # Call your function with the current node as the argument.
    your_function(nuke.thisNode())

def your_function(node):
    # The function takes a node as an argument.
    # Do whatever you want with the node here.
    pass
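
That way other scripts can call your_function() directly with whatever node
they want to operate on (for example your_function(nuke.toNode("Blur1")),
with the node name made up here), while the OnCreate callback simply passes
in nuke.thisNode().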

For more Nuke-specific questions, you'd probably get better results by
asking on the Nuke-python mailing list:

http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-python


-- 
mar...@librador.com
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clarity vs. code reuse/generality

2009-07-06 Thread Martin Vilcans
On Fri, Jul 3, 2009 at 4:05 PM, kjno.em...@please.post wrote:
 I will be teaching a programming class to novices, and I've run
 into a clear conflict between two of the principles I'd like to
 teach: code clarity vs. code reuse.  I'd love your opinion about
 it.

In general, code clarity is more important than reusability.
Unfortunately, many novice programmers have the opposite impression. I
have seen too much convoluted code written by beginners who try to
make the code generic. Writing simple, clear, to-the-point code is
hard enough as it is, even when not aiming at making it reusable.
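
A made-up example of the kind of thing I mean:

# What a beginner aiming for "reusability" often writes:
def process_numbers(numbers, transform=None, predicate=None, aggregate=sum):
    if predicate is not None:
        numbers = [n for n in numbers if predicate(n)]
    if transform is not None:
        numbers = [transform(n) for n in numbers]
    return aggregate(numbers)

# What the program actually needed:
def sum_of_squares(numbers):
    return sum(n * n for n in numbers)

print(process_numbers(range(5), transform=lambda n: n * n))  # 30
print(sum_of_squares(range(5)))                              # 30, and obvious

The first version handles cases nobody asked for; the second one says
exactly what it does.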

If in the future you see an opportunity to reuse the code, then and
only then is the time to make it generic.

YAGNI is a wonderful principle.

-- 
mar...@librador.com
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nimrod programming language

2009-05-13 Thread Martin Vilcans
On Tue, May 12, 2009 at 3:10 PM,  rump...@web.de wrote:
 You can certainly have a string type that uses byte arrays in UTF-8
 encoding internally, but your string functions should be aware of that
 and treat it as a unicode string. The len function and index operators
 should count characters, not bytes. Add a byte array data type for
 byte arrays instead.

 It's not easy. I think Python 3's byte arrays have an upper() method
 (and a string literal syntax b"abc"), which is quite alarming to me;
 it suggests they chose the wrong default.

I suppose that is to make it possible to use the 'bytes' data type for
text strings if you really want to (and for backwards-compatibility).
Default text strings should use Unicode (as in Python 3), and that
should be supported by the language.

 Eventually the rope data structure (that the compiler uses heavily)
 will become a proper part of the library: By rope I mean an
 immutable string implemented as a tree, so concatenation is O(1). For
 immutable strings there is no ``[]=`` operation, so using UTF-8 and
 converting it to a 32bit char works better.

Consider a string class that keeps track of its own encoding and can
change it on the fly as needed.
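
A minimal sketch in Python of what I mean (the class name and API are made
up, and a real implementation would cache the decoded form rather than
decoding on every operation):

class EncodedString(object):
    """A string that knows its own encoding and re-encodes on demand."""

    def __init__(self, data, encoding="utf-8"):
        self._data = data          # the raw bytes
        self._encoding = encoding  # how those bytes are encoded

    def _text(self):
        # Decode on demand (a real implementation would cache this).
        return self._data.decode(self._encoding)

    def __len__(self):
        # Count characters, never bytes.
        return len(self._text())

    def to_encoding(self, encoding):
        # Change the internal representation without changing the value.
        return EncodedString(self._text().encode(encoding), encoding)

s = EncodedString(u"räksmörgås".encode("utf-8"))
print(len(s))          # 10 characters, even though the UTF-8 data is longer
print(len(s.to_encoding("latin-1")._data))  # 10 bytes once stored as latin-1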

-- 
mar...@librador.com
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nimrod programming language

2009-05-12 Thread Martin Vilcans
On Fri, May 8, 2009 at 5:48 PM, Andreas Rumpf rump...@web.de wrote:
 Dear Python-users,

 I invented a new programming language called Nimrod that combines Python's 
 readability with C's performance. Please check it out: 
 http://force7.de/nimrod/
 Any feedback is appreciated.

It's nice to see a new language designed for high
performance. It seems like a direct competitor to D, i.e. a
high-level language with low-level abilities. The Python-like syntax
is a good idea.

There are two showstoppers for me though:

1. Introducing a new programming language where the char type is a
byte is anachronistic. You're saying that programmers don't have to
care about the string encoding and can just treat strings as arrays of
bytes. That is exactly what causes all the problems with applications
that haven't been designed to handle anything but ASCII. If you're
doing any logic with strings except dumb reading and writing, you
typically *have to* know the encoding. Forcing programmers to be aware
of this problem is a good idea IMO.

You can certainly have a string type that uses byte arrays in UTF-8
encoding internally, but your string functions should be aware of that
and treat it as a unicode string. The len function and index operators
should count characters, not bytes. Add a byte array data type for
byte arrays instead.
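
For example, in Python (just to illustrate the character/byte distinction):

s = u"naïve"              # 5 characters
data = s.encode("utf-8")  # 6 bytes, because the 'ï' takes two bytes in UTF-8
print(len(s))             # 5, what a string type should report
print(len(data))          # 6, what a byte-oriented "string" reports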

2. The dynamic dispatch is messy. I agree that procedural programming
is often simpler and more efficient than object-oriented programming, but
object-oriented programming is useful just as often and should be made
as simple as possible. Since Nimrod seems flexible, perhaps it would be
possible to implement an object-orientation layer in Nimrod that hides
the dynamic dispatch complexity?

-- 
mar...@librador.com
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Type feedback tool?

2008-10-27 Thread Martin Vilcans
Thanks everyone for the suggestions. I've implemented a simple
solution using sys.settrace. It's quite nice because it doesn't
require any instrumentation of the code (it works like a debugger that
traps all function calls).
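
The core of it looks roughly like this (a simplified sketch rather than the
exact code; it ignores *args, **kwargs and exceptions, and the little demo
at the bottom is only there to make the sketch self-contained):

import sys

# (function name, file, line) -> argument name -> type -> number of calls
calls = {}

def trace_calls(frame, event, arg):
    if event != "call":
        return None
    code = frame.f_code
    key = (code.co_name, code.co_filename, code.co_firstlineno)
    argnames = code.co_varnames[:code.co_argcount]
    record = calls.setdefault(key, dict((name, {}) for name in argnames))
    for name in argnames:
        argtype = type(frame.f_locals.get(name))
        record[name][argtype] = record[name].get(argtype, 0) + 1
    return None  # no local tracing; only the 'call' events are interesting

def report():
    for (name, filename, lineno), record in calls.items():
        print("%s (%s:%d)" % (name, filename, lineno))
        print("%d arguments" % len(record))
        for argname, counts in record.items():
            summary = " ".join(
                "%r (%d)" % (t, n) for t, n in counts.items())
            print("  %s: %s" % (argname, summary))
        print("")

def demo(n):  # stand-in for the code you actually want to inspect
    return n * 2

sys.settrace(trace_calls)
try:
    demo(3)
    demo(3)
    demo(4.5)
finally:
    sys.settrace(None)

report()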

Here's the output I get right now when profiling Skip's example code
(but without using decorators):

fib2 (testee.py:8)
1 arguments
  n: <type 'float'> (1) <type 'int'> (23)

main (testee.py:17)
0 arguments

fib (testee.py:1)
1 arguments
  n: <type 'float'> (9) <type 'int'> (15)

This means that fib2 has been called once with a float in the n
parameter, and 23 times with an int, etc.

There's more work to be done to make this a robust tool (which is why
I was hoping there already existed a tool for this). It should handle
varargs and keyword arguments properly, and probably needs to handle
exceptions better. I'll see if I can run it on real code tomorrow and
see if the results are useful.

Martin

On Mon, Oct 27, 2008 at 11:50 AM, M.-A. Lemburg [EMAIL PROTECTED] wrote:
 On 2008-10-26 13:54, Martin Vilcans wrote:
 Hi list,

 I'm wondering if there's a tool that can analyze a Python program
 while it runs, and generate a database with the types of arguments and
 return values for each function. In a way it is like a profiler that,
 instead of measuring how often functions are called and how much time
 they take, records the type information. So afterwards, when I'm
 reading the code, I can go to the database to see what data type
 parameter "foo" of function "bar" typically has. It would help a lot
 with deciphering old code.

 When I googled this, I learned that this is called "type feedback",
 and is used (?) to give type information to a compiler to help it
 generate fast code. My needs are much more humble. I just want a
 faster way to understand undocumented code with bad naming.

 You could try the trace module:

http://www.python.org/doc/2.5.2/lib/module-trace.html

 but I'm not sure whether that includes parameter listings.

 Or write your own tracing function and then plug it into your
 application using sys.settrace():

http://www.python.org/doc/2.5.2/lib/debugger-hooks.html#debugger-hooks

 The frame object will have the information you need:

http://www.python.org/doc/2.5.2/ref/types.html#l2h-143

 in f_locals.

 --
 Marc-Andre Lemburg
 eGenix.com





-- 
[EMAIL PROTECTED]
http://www.librador.com
--
http://mail.python.org/mailman/listinfo/python-list


Type feedback tool?

2008-10-26 Thread Martin Vilcans
Hi list,

I'm wondering if there's a tool that can analyze a Python program
while it runs, and generate a database with the types of arguments and
return values for each function. In a way it is like a profiler that,
instead of measuring how often functions are called and how much time
they take, records the type information. So afterwards, when I'm
reading the code, I can go to the database to see what data type
parameter "foo" of function "bar" typically has. It would help a lot
with deciphering old code.

When I googled this, I learned that this is called "type feedback",
and is used (?) to give type information to a compiler to help it
generate fast code. My needs are much more humble. I just want a
faster way to understand undocumented code with bad naming.

-- 
[EMAIL PROTECTED]
http://www.librador.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: Looping through the gmail dot trick

2008-01-20 Thread Martin Vilcans
On Jan 20, 2008 8:58 PM, Martin Marcher [EMAIL PROTECTED] wrote:
 are you saying that when I have 2 gmail addresses

 [EMAIL PROTECTED] and
 [EMAIL PROTECTED]

 they are actually treated the same? That is plain wrong and would break a
 lot of mail addresses as I have 2 that follow just this pattern and they
 are delivered correctly!

 Do you have any reference on that where one could read up why gmail would
 have such a behaviour?

Try the SMTP spec. IIRC there's a passage there that says that the
server should try to make sense of addresses that don't map directly
to a user name. Specifically, it says that firstname.lastname should
be mapped to the user with those first and last names.

-- 
[EMAIL PROTECTED]
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I'm searching for Python style guidelines

2008-01-08 Thread Martin Vilcans
On 1/7/08, Guilherme Polo [EMAIL PROTECTED] wrote:
 2008/1/7, [EMAIL PROTECTED] [EMAIL PROTECTED]:
  Anything written somewhere that's thorough? Any code body that should
  serve as a reference?

 PEP 8
 http://www.python.org/dev/peps/pep-0008/

The problem with PEP 8 is that even code in the standard libraries
doesn't follow the recommendations regarding the naming of functions,
for example. The recommendation is to_separate_words_with_underscores,
but some code uses lowerCamelCase instead.
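
For example, the same helper in the two styles:

def readFileContents(path):    # lowerCamelCase, as in some stdlib code
    pass

def read_file_contents(path):  # what PEP 8 actually recommends
    pass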

I used to dislike the underscore convention, but after forcing
myself to use it for a while I'm starting to appreciate its beauty.

-- 
[EMAIL PROTECTED]
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using python as primary language

2007-11-12 Thread Martin Vilcans
On Nov 10, 2007 12:48 AM, Rhamphoryncus [EMAIL PROTECTED] wrote:
 On Nov 9, 1:45 pm, Terry Reedy [EMAIL PROTECTED] wrote:
  2. If micro-locked Python ran, say, half as fast, then you can have a lot
  of IPC (interprocess communication) overhead and still be faster with
  multiple processes rather than multiple threads.

 Of course you'd be faster still if you rewrote key portions in C.
 That's usually not necessary though, so long as Python gives a roughly
 constant overhead compared to C, which in this case would be true so
 long as Python scaled up near 100% with the number of cores/threads.

 The bigger question is one of usability.  We could make a usability/
 performance tradeoff if we had more options, and there's a lot that
 can give good performance, but at this point they all offer poor to
 moderate usability, none having good usability.  The crux of the
 multicore crisis is that lack of good usability.

Certainly. I guess it would be possible to implement GIL-less
threading in Python quite easily if we required the programmer to
synchronize all data access (like the synchronized keyword in Java for
example), but that gets harder to use. Am I right that this is the
problem?

Actually, I would prefer to do parallel programming at a higher
level. If Python can't do efficient threading at a low level (as in
Java or C), then so be it. Perhaps multiple processes with message
passing is the way to go. It's just that it seems so... primitive.
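
To make that concrete, here is a toy sketch of the kind of structure I have
in mind, written against a Process/Queue style of API (purely illustrative;
the point is that workers own their data and only exchange messages):

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Each worker owns its own state; sharing happens only via messages.
    for item in iter(inbox.get, None):   # None is the shutdown sentinel
        outbox.put(item * item)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    workers = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for p in workers:
        p.start()
    for n in range(20):
        inbox.put(n)
    results = [outbox.get() for _ in range(20)]
    for p in workers:
        inbox.put(None)                  # tell each worker to stop
    for p in workers:
        p.join()
    print(sorted(results))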

-- 
[EMAIL PROTECTED]
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using python as primary language

2007-11-09 Thread Martin Vilcans
 If by 'this' you mean the global interpreter lock, yes, there are good
 technical reasons.  All attempts so far to remove it have resulted in an
 interpreter that is substantially slower on a single processor.

Is there any good technical reason that CPython doesn't use the GIL on
single CPU systems and other locking mechanisms on multi-CPU systems?
It could be selected at startup with a switch, couldn't it?

-- 
[EMAIL PROTECTED]
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using python as primary language

2007-11-09 Thread Martin Vilcans
On Nov 9, 2007 10:37 AM, Hrvoje Niksic [EMAIL PROTECTED] wrote:
 Martin Vilcans [EMAIL PROTECTED] writes:

  If by 'this' you mean the global interpreter lock, yes, there are good
  technical reasons.  All attempts so far to remove it have resulted in an
  interpreter that is substantially slower on a single processor.
 
  Is there any good technical reason that CPython doesn't use the GIL
  on single CPU systems and other locking mechanisms on multi-CPU
  systems?

 It's not the locking mechanism itself that is slow, what's slow is the
 Python you get when you remove it.  By removing the GIL you grant
 different threads concurrent access to a number of shared resources.
 Removing the global lock requires protecting those shared resources
 with a large number of smaller locks.  Suddenly each incref and decref
 (at the C level) must acquire a lock, every dict operation must be
 locked (dicts are used to implement namespaces and in many other
 places), every access to a global (module) variable must be locked, a
 number of optimizations that involve using global objects must be
 removed, and so on.  Each of those changes would slow down Python;
 combined, they grind it to a halt.

But if Python gets slow when you add fine-grained locks, then surely
it wouldn't get as slow if the locks themselves were very fast, right?

But that's not what my question was about. It was about whether it
would make sense, on the same Python installation, to select between
the GIL and fine-grained locks at startup. Because even if the locks
slow down the interpreter, if they let you utilize a 32-core CPU, it
may not be so bad anymore. Or am I missing something?

-- 
[EMAIL PROTECTED]
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Vector math library

2005-12-31 Thread Martin Vilcans
Hi, I'm new to this mailing list and fairly new to Python as well. I'm 
working on a prototype for a 3D game using OpenGL, and I'm taking this
opportunity to learn Python better.

I'm looking for a good library for vector math. I need to do vector 
addition, cross products, dot products etc. and probably in the future 
I'll need matrix math as well.
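
To be concrete, here's a minimal sketch of the operations I need (something
I could write myself, but I'd much rather use a complete, tested library
that will also grow into matrix support):

class Vector(object):
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

    def dot(self, other):
        return self.x * other.x + self.y * other.y + self.z * other.z

    def cross(self, other):
        return Vector(self.y * other.z - self.z * other.y,
                      self.z * other.x - self.x * other.z,
                      self.x * other.y - self.y * other.x)

a = Vector(1.0, 0.0, 0.0)
b = Vector(0.0, 1.0, 0.0)
n = a.cross(b)                    # (0.0, 0.0, 1.0), normal of the a/b plane
print(n.x, n.y, n.z, a.dot(b))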

So far I've used the Scientific library, which is very nice, but 
unfortunately it crashes when I use the Rotation class under OSX (which 
is my current development environment). I've seen mailing list posts
that suggest that this crash is because of some problem with 64-bit CPUs.

I guess I can find a workaround for this problem, but first I want to 
check if there's a better library for vector math. When I googled for 
vector libraries, I found people claiming that the Numeric library can 
be used for vector math. But skimming the Numeric documentation, I 
didn't find a cross product function, for instance, but it may just be
that I don't understand how to use it.

I also found SciPy, but it doesn't seem to have any vector math in it. 
In fact, I'm a bit confused about the libraries SciPy, Scientific, 
Numeric and NumericArray and the relations between them.

Any suggestions on what library I should use?

Best regards,

Martin Vilcans
http://www.librador.com
-- 
http://mail.python.org/mailman/listinfo/python-list