[issue3982] support .format for bytes

2013-05-06 Thread Ecir Hana

Changes by Ecir Hana ecir.h...@gmail.com:


--
nosy: +ecir.hana




PyCFunction_New(): to Py_DECREF() or not to Py_DECREF()

2009-12-26 Thread Ecir Hana
Hello,

if creating a new CFunction

PyObject *function = PyCFunction_New(&function_name, NULL);

and then this is the only thing which uses it (dictionary stays
alive...)

PyDict_SetItemString(dictionary, "function", function);

do I have to

Py_DECREF(function)

or not?


Re: PyCFunction_New(): to Py_DECREF() or not to Py_DECREF()

2009-12-26 Thread Ecir Hana
On Dec 27, 3:15 am, Benjamin Peterson benja...@python.org wrote:

 Yes, you still own the reference to the function.

Thanks!
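
For completeness, here is a minimal sketch of the ownership rule Benjamin states: PyCFunction_New() returns a new reference that the caller owns, and PyDict_SetItemString() does not steal it, so the caller should Py_DECREF() once the dictionary holds the function. The names (my_func, function_def, register_function) are illustrative, not from the original post.

#include <Python.h>

static PyObject *my_func(PyObject *self, PyObject *args)
{
    Py_RETURN_NONE;
}

static PyMethodDef function_def = {"function", my_func, METH_VARARGS, NULL};

void register_function(PyObject *dictionary)
{
    /* New reference, owned by us. */
    PyObject *function = PyCFunction_New(&function_def, NULL);
    if (function == NULL)
        return;

    /* The dictionary takes its own reference; ours is still alive. */
    PyDict_SetItemString(dictionary, "function", function);

    /* Release our reference so the dictionary is the only intended holder. */
    Py_DECREF(function);
}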


Re: Redirect stdout to a buffer [Errno 9]

2009-12-10 Thread Ecir Hana
I tried to replace the official Python DLL with one built with MinGW
and it works. The problem is that the MinGW port is very old, and so far
it seems that official support for building Python under MinGW is
nowhere near.

I really don't want to use MSVC, so if there's any other way around
this, please, let me know.


Re: Redirect stdout to a buffer [Errno 9]

2009-11-18 Thread Ecir Hana
On Nov 17, 6:51 am, Gabriel Genellina gagsl-...@yahoo.com.ar
wrote:

 The code below now reads from the pipe everything that has been written --  
 except from Python :(

Thanks a lot for the fine code! So far I don't know why it fails to
print from Python - I'll post here any news I get...



[issue7346] Redirected stdout fires [Errno 9]

2009-11-18 Thread Ecir Hana

New submission from Ecir Hana ecir.h...@gmail.com:

I try to log all the output of a program written in Python and C to a
buffer. I create a pipe, redirect stdout to its write-end and then read
its content afterward. However, printing from Python fires IOError:
[Errno 9] Bad file descriptor. Please see the attached test-case.

It is happening on Windows XP, Python 2.6 and MinGW GCC and I used this
to compile:
gcc -o std.exe std.c -Ic:/dev/include/python2.6 -l python26

PS: It might be that the problem is that Python was compiled with
MSVC2008 and I'm using MinGW but I'm not sure...

--
components: IO, Windows
files: std.c
messages: 95433
nosy: ecir.hana
severity: normal
status: open
title: Redirected stdout fires [Errno 9]
versions: Python 2.6
Added file: http://bugs.python.org/file15358/std.c
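
One possible workaround (not part of the original report, and only a sketch): capture output on the Python side instead of at the C file-descriptor level, which sidesteps any MSVC-vs-MinGW CRT mismatch. The embedding app would run something like the following via PyRun_SimpleString() after Py_Initialize(), then read the collected text back out through the C API:

import sys

class _Capture(object):
    # A file-like object that buffers writes instead of touching fd 1.
    def __init__(self):
        self.chunks = []
    def write(self, text):
        self.chunks.append(text)
    def flush(self):
        pass

sys.stdout = sys.stderr = _Capture()

print 'from Python'              # ends up in sys.stdout.chunks
captured = ''.join(sys.stdout.chunks)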




Re: Redirect stdout to a buffer [Errno 9]

2009-11-16 Thread Ecir Hana
On Nov 15, 5:28 pm, Ecir Hana ecir.h...@gmail.com wrote:
 Hello,

 I'm trying to write a simple Win32 app, which may run some Python
 scripts. Since it is a Windows GUI app, I would like to redirect all
 output (Python print, C printf, fprintf to stderr, ...) to a text area
 inside the app. In other words, I'm trying to log all the output from
 the app (C, Python) to a window. So far, this works for C printf():

 int fds[2];
 _pipe(fds, 1024, O_TEXT);
 _dup2(fds[1], 1);
 ...
 and then I read from pipe's read-end and append the text to the text
 area.

 But when I try to run:
 Py_Initialize();
  PyRun_SimpleString("print 'abc'");
 Py_Finalize();

 I get an error:
 IOError: [Errno 9] Bad file descriptor

 What am I doing wrong? How to redirect standard IO, both for C and for
 Python?
  PS: Maybe I'm doing something wrong, but SetStdHandle() does not work
 at all

Also, maybe this matters: it's on WinXP, Python 2.6 and MinGW GCC.


Re: Redirect stdout to a buffer [Errno 9]

2009-11-16 Thread Ecir Hana
On Nov 16, 7:21 pm, Gabriel Genellina gagsl-...@yahoo.com.ar
wrote:
 On Mon, 16 Nov 2009 14:17:52 -0300, Ecir Hana ecir.h...@gmail.com
 wrote:

  I'm trying to write a simple Win32 app, which may run some Python
  scripts. Since it is a Windows GUI app, I would like to redirect all
   output (Python print, C printf, fprintf to stderr, ...) to a text area
  inside the app. In other words, I'm trying to log all the output from
  the app (C, Python) to a window. So far, this works for C printf():
  [...]
   PS: Maybe I'm doing something wrong, but SetStdHandle() does not work
  at all

 This worked for me:

 #include <windows.h>
 #include <Python.h>

 int main()
 {
    HANDLE hReadPipe, hWritePipe;
    DWORD nr, nw;
    char buffer[100];

    CreatePipe(
      &hReadPipe,
      &hWritePipe,
      NULL,
      1024);
    SetStdHandle(STD_OUTPUT_HANDLE, hWritePipe);

    Py_Initialize();
    PyRun_SimpleString("print 'from Python'");
    Py_Finalize();

    puts("from C\n");

    CloseHandle(hWritePipe);
    ReadFile(hReadPipe, buffer, 19, &nr, NULL);
    CloseHandle(hReadPipe);
    WriteFile(GetStdHandle(STD_ERROR_HANDLE), buffer, nr, &nw, NULL);

 }
  Also, maybe this matters: it's on WinXP, Python 2.6 and MinGW GCC.

 I'm using Visual Studio 2008 Express Edition.

 --
 Gabriel Genellina

Hi,

thanks for the reply!

However, please, could you tell me how many bytes it read here:

ReadFile(hReadPipe, buffer, 19, &nr, NULL);

because for me, it has read 0. When I run your code, it prints from
both C and Python, but it prints straight to the console, not to the
buffer. Could you also please try to add:

WriteFile(GetStdHandle(STD_ERROR_HANDLE), \n, 5, nw, NULL);

before:

WriteFile(GetStdHandle(STD_ERROR_HANDLE), buffer, nr, &nw, NULL);

Does it print  before from Python and from C for you?
Because for me it comes afterwards (as nr is 0)...


Redirect stdout to a buffer

2009-11-15 Thread Ecir Hana
Hello,

I'm trying to write a simple Win32 app, which may run some Python
scripts. Since it is a Windows GUI app, I would like to redirect all
output (Python print, C printf, fprintf to stderr, ...) to a text area
inside the app. In other words, I'm trying to log all the output from
the app (C, Python) to a window. So far, this works for C printf():

int fds[2];
_pipe(fds, 1024, O_TEXT);
_dup2(fds[1], 1);
...
and then I read from pipe's read-end and append the text to the text
area.

But when I try to run:
Py_Initialize();
PyRun_SimpleString("print 'abc'");
Py_Finalize();

I get an error:
IOError: [Errno 9] Bad file descriptor

What am I doing wrong? How to redirect standard IO, both for C and for
Python?
PS: Maybe I'm doing something wrong, but SetStdHandle() does not work
at all


Re: Distributing Python environment

2009-09-14 Thread Ecir Hana
I see, thanks a lot!


Distributing Python environment

2009-09-13 Thread Ecir Hana
Hello,
I have an app which I would like to extend with Python. I saw how to
embed the interpreter in C. If I bundle my app with the Python lib
(say, python26.dll) I can PyRun_SimpleString() some code. My question
is, how do I bundle the rest of the libraries (site, os, elementtree,
random, ...)? Is it possible to make one huge (ok, not so huge) .zip
blob containing all of the libraries? And what happens if some user
has Python already installed? Which libraries get loaded first? Is it
possible to alter this order? I mean, first check for a local Python
install and, if the user doesn't have a Python installation, use the
bundled one?
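
A rough sketch of one way this is commonly done (all paths and names below are made up for illustration; this is an assumption about a workable layout, not a recipe): ship the standard library as a zip next to the app, point the embedded interpreter at your own directory so an existing system-wide install is not picked up, and put only the bundled zip on sys.path. zipimport then loads the modules straight from the archive.

#include <Python.h>

int main(void)
{
    /* Hypothetical layout: C:\MyApp\app.exe, C:\MyApp\python26.dll,
       C:\MyApp\python26.zip (the zipped standard library). */
    Py_SetPythonHome("C:\\MyApp");        /* don't look for a system install */
    Py_Initialize();

    /* Replace sys.path with just our bundled archive. */
    PySys_SetPath("C:\\MyApp\\python26.zip");

    PyRun_SimpleString("import os, random\n"
                       "print 'os was loaded from', os.__file__\n");
    Py_Finalize();
    return 0;
}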


Executing python script stored as a string

2009-09-01 Thread Ecir Hana
Hello,

please, how do I execute a Python script stored as a string? But let me
impose several limitations, so a simple exec won't work:

- if I understood it correctly, defining a function in the string and
exec-ing it creates the function in the current scope. This is something I
really don't want

- a simple exec also blocks the rest of the program

- I would also like the string to be able to use and return some parts
of the caller

So to give an example what I try to achieve:

result = []
def up(s):
  result.append(s.upper())

code = '''
up("abc")
print 'hello'
i = i + 3
def x(s):
  up(s)
x('def')
print i
'''

somehow_execute(code)

Couple of points:

- the script in the string should behave just like any other ordinary
Python script executed in a separate process, except it should also know
about one function from the caller, up. Nothing else. (I read that something
similar is possible when embedding Python in a C project - that
you could invoke the VM and provide some default imports)

- if the other script runs in a separate process, how should it call the
remote function? And how do I pass its arguments? I really hope I don't
have to serialize every communication; maybe I should use threading
instead of a process? All I want is that running it won't block the
caller and that it cannot modify the caller's code/variables/scope (apart
from calling the predefined caller's functions). Or maybe even better,
let it block the caller but provide a way to stop its execution?

- how do I know that the script has finished? I was thinking about atexit()
- could it work here?

Think of it as a text editor with a special ability to execute its
content, while providing access to some of its functionality to the
script.

The reason I *think* I cannot just simply import the editor module
into the script is that the editor is a GUI application and the script
should have access to just this instance of the editor.

Anyway, I hope I was not too confusing. Thanks for any help!


Re: Executing python script stored as a string

2009-09-01 Thread Ecir Hana
On Sep 1, 5:31 am, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:

 You can pass in a global and local namespaces to exec as arguments:

 >>> x = 4
 >>> ns = {'x': 4}
 >>> exec "x += 1" in ns
 >>> x
 4
 >>> ns['x']
 5

 See the docs for details.

Thanks! This is very useful!

 You can copy the parts of the current scope into the namespace you pass
 to exec, then later copy the revised values out again.

 But are you sure you really want to take this approach? exec is up to ten
 times slower than just executing the code directly. And if the string is
 coming from an untrusted source, it is a *huge* security risk.

I don't know if I should use exec. I don't really mind that it's slow
(btw., why is it so?). But I don't quite understand why it is a security
risk. How is it different to run:
exec 'format(your_hdd)'
rather than:
/bin/python format.py
?

 As far as I know, you can't kill threads, you can only ask them to kill
 themselves.

Also, I'm not sure if I follow. What does this mean? If a thread runs:

while True:
  pass

it is not possible to kill it from another thread? (Because it doesn't
check whether some other thread has asked it to stop..?)

 Something like this?

Well, something more like:

data = [1, 2, 3]
map(lambda x: x * 2, data)
display_data_in_editor_viewport(data) #this renders into part of main
editor window (may take some time)

 If so, I think you are making this much too complicated for such a simple
 use-case. Just publish an API which the script can use, and have the main
 text editor application specify a script namespace containing only that
 API. That could be a module:

  >>> import math  # pretend this is your API shared module
  >>> exec "myvalue = 42" in math.__dict__
  >>> math.myvalue
  42

 Then execute the text using exec, but don't bother about putting it into
 a thread or subprocess. That just makes it harder to implement, and you
 have to worry about concurrency issues.

Ok, I could try exec, thanks for the explanation. But what about those
security concerns you mentioned above?

Thanks a lot, very informative!
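
Putting the suggestions above together, this is roughly what the editor-side call could look like. It is only a sketch: up and result are taken from the earlier example, while the namespace dict and the sample code string are illustrative. As warned above, this does nothing to make untrusted input safe - the exec'd code can still import os and do whatever the process can do.

result = []

def up(s):
    result.append(s.upper())

# Expose only the API the script is allowed to see.
script_namespace = {'up': up}

code = '''
up("abc")
for word in ("def", "ghi"):
    up(word)
'''

# exec'ing into an explicit dict keeps the script's own names out of the
# editor's scope; the script only sees what script_namespace contains.
exec code in script_namespace

print result            # ['ABC', 'DEF', 'GHI']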


Re: Executing python script stored as a string

2009-09-01 Thread Ecir Hana
On Sep 1, 11:32 am, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:
  But I don't quite understand why is it security
  risk. How is it different to run:
  exec 'format(your_hdd)'
  than:
  /bin/python format.py
  ?

 It's not different. But read what I said -- if the string is coming from
 an UNTRUSTED source -- presumably you trust yourself. If you run 'exec
 format(your_hdd)' it is because *you* want to format your hard disk.

 Now imagine you have a web-app which gets a string from the user and
 calls exec on it. Then you might have this:

 exec "search('%d')" % user_input

 and the user, who is halfway across the world, enters the following
 search string:

 places to eat'); import os; os.system('#rm -rf /

 Your web app will go right ahead and erase itself. That's why you need to
 keep untrusted strings away from exec, execfile, and eval.

Ah, I see! Ok.

 No, I believe that the only way to halt that is to halt the entire
 process.

 Possibly there is a way to have a thread halt itself after a certain
 amount of time? I'm not an expert on threads, I've hardly ever used them.

Thank you once again!


Re: Inter-process communication, how? Part 2

2007-12-23 Thread ecir . hana
On Dec 23, 4:54 am, Dennis Lee Bieber [EMAIL PROTECTED] wrote:
 On Sat, 22 Dec 2007 17:56:05 -0800 (PST), [EMAIL PROTECTED] declaimed
 the following in comp.lang.python:

  just to recap: last time I asked how to do interprocess
  communication between one Manager process (graphical backend) and
  some Worker processes.

 Never considered a graphical control process as a backend
 before... Backend processing, to me, implies some sort of server without
 a user interface.

  However, I would like to ask another thing: I would like to collect
  everything the Workers print and display it in the Manager. Or, redirect
  all Workers' stdout to the stdout of the Manager. If there was only one Worker
  I could use a pipe, right? But if there is more than one Worker, what
  to do? I found something called a named pipe which seems rather
  complicated. Then I thought I could somehow (how?) create a fake
  (virtual) file object, redirect the stdout of a Worker into it and from
  there send the data to the Manager via sockets. Please, what do you think?

 I'd forget about stdout as a data communication means... The parent
 should probably set up a socket that accepts messages from any worker...
 or create a reply socket for each worker, and pass the worker the port
 on which the master expects to retrieve its output.

Ok, but how do I redirect the print statement into a socket?


 Named pipes are, I think, a M$ Windows creation (though I think
 the Amiga supported disjoint pipes by using "run program >pipe:name" and
 "program <pipe:name" instead of "program | program"
 http://stason.org/TULARC/pc/amiga/faq/2-5-1-Using-PIPE-in-a-standard-...
 -- "run program" being ~ "program &" in most UNIX-based shells)



Re: Inter-process communication, how? Part 2

2007-12-23 Thread ecir . hana
On Dec 23, 10:30 am, Guilherme Polo [EMAIL PROTECTED] wrote:
 2007/12/22, [EMAIL PROTECTED] [EMAIL PROTECTED]:

  Hello,

   just to recap: last time I asked how to do interprocess
   communication between one Manager process (graphical backend) and
   some Worker processes.

  I decided to go with sockets, thanks for replies, once more.

   However, I would like to ask another thing: I would like to collect
   everything the Workers print and display it in the Manager. Or, redirect
   all Workers' stdout to the stdout of the Manager. If there was only one Worker
   I could use a pipe, right? But if there is more than one Worker, what
   to do? I found something called a named pipe which seems rather
   complicated.

 Named pipe is called FIFO, but they are not that complicated.

  Then I thought I could somehow (how?) create a fake
  (virtual) file object,

  That is why it is called a named pipe, because you will be using a
  (special) file in your filesystem to use as a pipe.



   redirect the stdout of a Worker into it and from
  there send the data to Manager via sockets. Please, what do you think?

  Preferably, it should look like this:

  --- Worker 1 ---
  ...some code...
  print '123'

  --- Manager ---
  Worker 1: 123

  --- Worker 2 ---
  ...some code...
  print '456'

  --- Manager ---
  Worker 1: 123
  Worker 2: 456

  Thanks in advance!

 Manager would create and open the FIFO, Workers would open this same
 FIFO. So workers write to FIFO and the manager reads from FIFO.

 --
 -- Guilherme H. Polo Goncalves

What I don't like about FIFOs is that on Unix they are persistent
files. So whatever happens to the Manager, they would stay there...
I was just wondering if there's another way of doing the above; if
not, I would probably go with a FIFO. Thanks!
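
For the record, here is a sketch of the socket alternative discussed above. It assumes the Manager listens on a known local port (9000 here is made up) and simply rebinds the Worker's sys.stdout to a file object made from the socket, so ordinary print statements end up at the Manager:

# Worker side (illustrative)
import socket
import sys

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('127.0.0.1', 9000))       # the port the Manager listens on
sys.stdout = sock.makefile('w', 0)      # unbuffered file-like wrapper

print '123'                             # goes to the Manager, not the console

# Manager side (illustrative): bind a listening socket on port 9000,
# accept() one connection per Worker, read lines from each connection and
# prefix them with the Worker's name before displaying them.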


Inter-process communication, how? Part 2

2007-12-22 Thread ecir . hana
Hello,

just to recap: last time I asked how to do interprocess
communication between one Manager process (graphical backend) and
some Worker processes.

I decided to go with sockets, thanks for replies, once more.

However, I would like to ask another thing: I would like to collect
everything the Workers print and display it in the Manager. Or, redirect
all Workers' stdout to the stdout of the Manager. If there was only one Worker
I could use a pipe, right? But if there is more than one Worker, what
to do? I found something called a named pipe which seems rather
complicated. Then I thought I could somehow (how?) create a fake
(virtual) file object, redirect the stdout of a Worker into it and from
there send the data to the Manager via sockets. Please, what do you think?

Preferably, it should look like this:

--- Worker 1 ---
...some code...
print '123'

--- Manager ---
Worker 1: 123

--- Worker 2 ---
...some code...
print '456'

--- Manager ---
Worker 1: 123
Worker 2: 456

Thanks in advance!


Re: Inter-process communication, how?

2007-12-16 Thread ecir . hana
On Dec 16, 5:24 am, John Machin [EMAIL PROTECTED] wrote:

 Yes. Consider this: If you were to run your calculation script from
 the shell prompt [strongly recommended during testing], how would you
 tell it the name of the file? Now look at the docs again.


File arguments! Of course, totally forgot about them! Thanks a lot!



  PS: both with mmpam and temp file you probably meant that I should
   hard code some as-weirdest-filename-as-possible into both programs but
  what if I run the computation several times?

 That's mmap, not mmpam. No, Dennis didn't mean that you should hard
 code a filename. Have a look at the tempfile module.

Now I recall I read somewhere that network communication is so much
faster than disk access (especially on the same machine, as steve
howell suggested), so instead of file names I should probably find out
which port is open to use.



Re: Inter-process communication, how?

2007-12-16 Thread ecir . hana
On Dec 16, 6:38 am, Dennis Lee Bieber [EMAIL PROTECTED] wrote:


 Read the details for subprocess.Popen() again...

 
 /args/ should be a string, or a sequence of program arguments. The
 program to execute is normally the first item in the args sequence or
 string, but can be explicitly set by using the executable argument.
 

 IOWs, passing it what you would enter on a command line

  "subscript.py tempfilename"
  ["subscript.py", "tempfilename"]

 should be sufficient.



 There is a module that can generate temporary file names, though for
 this usage you could even do something to obtain the parent program
 process ID along with a timestamp and create a file name from all that.
 What are the odds that your several times would have the same clock
 time?


Quite small, I guess. However, perhaps I should consider using
sockets instead.

Thanks!

ps: I really like how you format the paragraphs! :)
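
For completeness, a sketch of the temp-file variant described above (Python 2; the file name computation.py and the child-side line are illustrative only):

import os
import subprocess
import sys
import tempfile

# Parent: create a unique result file and pass its name as an argument.
fd, result_name = tempfile.mkstemp(suffix='.txt')
os.close(fd)                        # the child will reopen it by name

proc = subprocess.Popen([sys.executable, 'computation.py', result_name])
status = proc.wait()                # one-shot: wait for the child to finish

if status == 0:
    result = open(result_name).read()
os.remove(result_name)

# Child (computation.py) would end with something like:
#     open(sys.argv[1], 'w').write(repr(answer))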


Re: Inter-process communication, how?

2007-12-16 Thread ecir . hana
Just for the record:

http://www.amk.ca/python/howto/sockets/

Of the various forms of IPC (Inter Process Communication), sockets
are by far the most popular. On any given platform, there are likely
to be other forms of IPC that are faster, but for cross-platform
communication, sockets are about the only game in town.


Inter-process communication, how?

2007-12-15 Thread ecir . hana
Hi,
let's say I have two scripts: one does some computations and the other
one is a graphical front end for launching the first one. And both run
in separate processes (the front end runs and it spawns a subprocess
with the computation). Now, if the computation has a result I would
like to display it in the front end. In other words, I would like to
pass some data from one process to another. How do I do that? I'm
afraid I can't use a pipe since the computation could print out some
logging (if I understand pipes correctly).
Thanks!


Re: Inter-process communication, how?

2007-12-15 Thread ecir . hana
On Dec 16, 2:42 am, Dennis Lee Bieber [EMAIL PROTECTED] wrote:

 Have you perused and been perplexed by the "Is Python a Scripting
 Language" thread? <G>

Oh, no!! "Script" as in "shell script".



 Question:   Are you creating both scripts from scratch?
 Yes?

Yes.


Then you can define whatever protocol is needed for your usage and
 is available on your OS.

 If it is a one-shot (spawn sub, wait, retrieve results) you could
 generate a temporary file name in the parent, pass that name to the sub
 when invoking it, wait for the sub to complete (giving you a status
 code) and then read the result the sub has written to the file.

Yes, it's a one-shot. But how do I pass that name?
From all the arguments of class Popen
(http://docs.python.org/lib/node529.html) perhaps I could use env only.
Or am I wrong?
TCP/IP sounds good but isn't it a bit too heavy?
And as far as the other options go, I would prefer a cross-platform
solution, if there is one.

PS: both with mmpam and temp file you probably meant that I should
hard code some as-weirdest-filename-as-possible into both programs but
what if I run the computation several times?

And thanks for reply!



 Or, you could have parent and sub both mmap the same file,
 essentially passing data in memory unless it becomes too large and pages
 out to swap disk. You might even be able to do bidirectional and dynamic
 updates (rather than waiting for the sub to exit)... Define, say, an
 epoch count for each process -- these would be the first two words of
 the mmap file

 |p-epoch|s-epoch|n-words of p-data (fixed constant known to both)|n-words of 
 s-data|

 periodically the parent would examine the value of s-epoch, and if it
 has changed, read the s-data.. (both sides write the data first, then
 update the epoch count). When the epoch stays the same and two
 consecutive reads of the data match, you have stable data (so the reads
 should occur more often than updates) and can process the data
 transferred.

 OR, you could have the parent open a TCP/IP socket as a server, and
 pass the socket port number to the sub. The sub would then connect to
 that port and write the data. For bidirectional you could pass the
 parent port, and the sub's first action is to connect and pass a port
 that it will be monitoring.

 On a VMS system, the processes would connect to named mailboxes
 and use QIO operations to pass data between them.

 On an Amiga you'd use message ports (which operated somewhat
 similar to VMS mailboxes except that mailboxes had an independent
 existence, multiple processes can read or write to them -- message ports
 were readable by the creating process, but could have messages sent from
 anywhere; typically passing the message port [address of a linked list
 of messages] for replies). Or a higher level message port: an ARexx
 port.

 On a Windows NT class system, the win32 extensions allow access to
 Windows Named Pipes... Or maybe the Windows clipboard could be used...



Re: yield, curry, mix-in, new.function, global, closure, .... what will work?

2007-04-16 Thread ecir . hana
On Apr 16, 3:05 am, Paul Rubin http://[EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] writes:

  Please, can you elaborate further, I'm not sure if I understood.
  Should I lock global variables i, j during the execution of run()? In
  that case I have to apologize, I showed rather simplified version of
  the actual problem I have - in fact changer() and run() will be a bit
  more complex thus executing a bit longer and perhaps causing a dead-lock.

 Put both variables into one shared object with a lock (see the docs for
 threading.RLock()).  Acquire the lock before modifying or reading the
 variables, and release it afterwards.  That is the traditional way.

Thanks for the reply! And at the same time, please bear with me.

If I understand correctly: when one thread acquires the lock, every
other thread has to wait. If so, this is not exactly what I would like
to have since the thread might take a bit longer to finish.

The reason why I try so hard to use local variables is that they are
inherently thread-safe. So I don't even mind copying changer() every
time run() is called - run() has its own local variables i, j, and no one
has to touch them except its (local) function changer(). But the
problem is, I don't know how to propagate run()'s variables into
changer() without declaring them as changer()'s arguments (it would
be ok to append the declaration at run-time, though, if I only
knew how).
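
A sketch of the traditional pattern Paul describes - one shared object, one lock, held only long enough to read or write the values (the class and method names here are illustrative):

import threading

class SharedState(object):
    """i and j live in one object, guarded by one lock."""
    def __init__(self, i='', j=''):
        self.lock = threading.RLock()
        self.i = i
        self.j = j

    def update(self, i=None, j=None):
        self.lock.acquire()
        try:
            if i is not None:
                self.i = i
            if j is not None:
                self.j = j
        finally:
            self.lock.release()

    def snapshot(self):
        # Hold the lock only long enough to copy the values out, so other
        # threads are never blocked for long.
        self.lock.acquire()
        try:
            return self.i, self.j
        finally:
            self.lock.release()

state = SharedState()
state.update(i='changed_i')
state.update(j='changed_j')
print state.snapshot() == ('changed_i', 'changed_j')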



Re: yield, curry, mix-in, new.function, global, closure, .... what will work?

2007-04-16 Thread ecir . hana
On Apr 16, 5:36 pm, Jason [EMAIL PROTECTED] wrote:
 On Apr 16, 7:28 am, [EMAIL PROTECTED] wrote:



  On Apr 16, 3:05 am, Paul Rubin http://[EMAIL PROTECTED] wrote:

   [EMAIL PROTECTED] writes:

Please, can you elaborate further, I'm not sure if I understood.
Should I lock global variables i, j during the execution of run()? In
that case I have to apologize, I showed rather simplified version of
the actual problem I have - in fact changer() and run() will be a bit
more complex thus executing a bit longer and perhaps causing a 
dead-lock.

   Put both variables into one shared object with a lock (see the docs for
   threading.RLock()).  Acquire the lock before modifying or reading the
   variables, and release it afterwards.  That is the traditional way.

  Thanks for the reply! And at the same time, please bear with me.

   If I understand correctly: when one thread acquires the lock, every
   other thread has to wait. If so, this is not exactly what I would like
   to have since the thread might take a bit longer to finish.

   The reason why I try so hard to use local variables is that they are
   inherently thread-safe. So I don't even mind copying changer() every
   time run() is called - run() has its own local variables i, j, and no one
   has to touch them except its (local) function changer(). But the
   problem is, I don't know how to propagate run()'s variables into
   changer() without declaring them as changer()'s arguments (it would
   be ok to append the declaration at run-time, though, if I only
   knew how).

 In Python, names are bound to objects.  The parameter names passed to
 a function *are not inherently thread safe*!  Python parameters are
 not passed-by-value.  To show you what I mean:

  >>> spam = ["delicious"]
  >>> def test(meal):
  ...     global spam
  ...     if spam is meal:
  ...         print "Spam is the same object as meal"
  ...
  >>> test(spam)
  Spam is the same object as meal

 (While the global spam statement is optional in this case, I wanted
 to make it painfully obvious where the spam name in function test is
 coming from.)

 It is thread-safe to rebind the name meal in the function test (ie,
 meal = "Green eggs").   It is not thread-safe to mutate or modify the
 object that meal is bound to.  In the example given above, appending
 data to the list, removing data, changing elements, and other
 operations will cause potential race conditions across multiple
 threads.

 Follow Paul's advice and get acquainted with the issues of concurrent
 and threaded programming.  Judicious locking will help avoid most race
 conditions.  If you don't want to keep other threads waiting, make a
 copy of your data then release the data lock.

 Depending on the data, you can usually have multiple threads reading
 the data, as long as no other threads write to the data while there
 are any readers.  A writer can be allowed to change the data, but only
 if there are no readers and no other writers.  (This is commonly known
 as a read/write lock.)  I didn't see a read/write lock in the Python
 documentation with some casual browsing, but one can be implemented
 from the existing thread locking mechanisms.

 Your description of what you want to do is rather vague, so I can't
 get too specific.  You've described how you want to do things, but I
 don't know what you're trying to accomplish.  Where possible, simplify
 your design.

 --Jason

All I was trying to do, was to get rid of those 'k's in changer():



def change_i(k, arg):
k[0] = arg

def change_j(k, arg):
k[1] = arg

def changer(k):
change_i(k, 'changed_i')
change_j(k, 'changed_j')

def run(i='', j=''):
k = [i, j]
changer(k)
[i, j] = k
return i, j

print run() == ('changed_i', 'changed_j')



Maybe I made a mistake, I should have asked this first, sorry. If the
only way to accomplish this is through locks, then I guess I better
use those 'k's, what do you think?

Thanks Jason, thanks Paul!



Re: yield, curry, mix-in, new.function, global, closure, .... what will work?

2007-04-16 Thread ecir . hana
I'm reading the docs now and I stumbled upon something:
section 15.3 threading -- Higher-level threading interface mentions
a class local, in which ... Thread-local data are data whose values
are thread specific. ...

Does it mean I can create global variables whose modification is thread-
safe?
More specific:

import threading

def change_i(arg):
global k
k.i = arg

def change_j(arg):
global k
k.j = arg

def changer():
change_i('changed_i')
change_j('changed_j')

def run(i='', j=''):
global k
k = threading.local()
k.i = i
k.j = j
changer()
i = k.i
j = k.j
return i, j

print run() == ('changed_i', 'changed_j')

Is this ok?



Re: yield, curry, mix-in, new.function, global, closure, .... what will work?

2007-04-16 Thread ecir . hana
On Apr 17, 3:51 am, [EMAIL PROTECTED] wrote:
 I'm reading the docs now and I stumbled upon something:
 section 15.3 threading -- Higher-level threading interface mentions
 a class local, in which ... Thread-local data are data whose values
 are thread specific. ...

 Does it mean I can create global variables whose modification is thread-
 safe?
 More specific:

 import threading

 def change_i(arg):
 global k
 k.i = arg

 def change_j(arg):
 global k
 k.j = arg

 def changer():
 change_i('changed_i')
 change_j('changed_j')

 def run(i='', j=''):
 global k
 k = threading.local()
 k.i = i
 k.j = j
 changer()
 i = k.i
 j = k.j
 return i, j

 print run() == ('changed_i', 'changed_j')

 Is this ok?

I was too quick, k = threading.local() has to be outside of run(),
right?



yield, curry, mix-in, new.function, global, closure, .... what will work?

2007-04-15 Thread ecir . hana
Dear list,

maybe I'm overlooking something obvious or this is not possible at all
or I don't know. Please, consider the following code:



## insert here anything you like

def changer():
change_i('changed_i')
change_j('changed_j')

def run(i='', j=''):

## insert here anything you like

return i, j

run() == 'changed_i', 'changed_j'



Let me explain: First, changer() is a kind of templating language, so it
should be written down in this form - however, it can change during
run-time as you like. Basically, it is just ordinary Python code which
changes (should change) the local variables of another function,
run(). Oh, and it has to be *thread-safe*.

Here's what I tried and didn't work (maybe I just haven't tried hard
enough):
- pass i, j around: changer(i, j) and change_i(i, arg) - easiest, most
obvious solution - can't do it, certainly not during writing of the
code - have to preserve changer()'s form
- global variable - not thread-safe
- in run(), create a dummy closure for changer() and call it -
new.function(blah, blah, blah, dummy.func_closure) - not thread-safe,
I guess, and besides change_i() and change_j() would also need such
a trick - I am not sure how to alter the code object of changer()
- new.function(new.code(... - oh my, this won't work
- in run(), modify change_i() and change_j() so it actually yield-s
the changes, collect them here - something like: change_i.__call__  =
return yield ('changed_i', 'comes_from_changer_i()') - doesn't work,
of course
- update locals() - yes!! - no!! - doesn't work, it's just a copy
- curry, mix-in, ... - hmm
- eval(blah, blah, locals()) - not safe, ugly - maybe? what do you
think?
- find out which file changer() is written in, which line it starts
and ends, read that file's section, parse, edit, append the needed
local variable's declarations, compile to changer_new() and changer =
changer_new - there has to be something better! :)

So, to wrap up, the question is: how can I pass a variable from one
function to another, without writing down the function's arguments during
coding, and still be thread-safe?

I can only hope you are still with me and not very confused

Thanks for any feedback!!



Re: yield, curry, mix-in, new.function, global, closure, .... what will work?

2007-04-15 Thread ecir . hana
On Apr 15, 8:07 pm, Paul Rubin http://[EMAIL PROTECTED] wrote:
 That is total madness.  Just use a normal object or dictionary with a lock.

Please, can you elaborate further? I'm not sure if I understood.
Should I lock the global variables i, j during the execution of run()? In
that case I have to apologize, I showed a rather simplified version of
the actual problem I have - in fact changer() and run() will be a bit
more complex, thus executing a bit longer and perhaps causing a
deadlock.
