Am I correct in assuming that you only have one version of the
cygwin1.dll on your system?  If two disparate versions are installed,
that alone could conceivably cause problems like this.
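
If you aren't sure, a quick way to check (a rough sketch; the drive
letters below are just examples, adjust them to your setup) is to
search for stray copies and then see which DLL ls.exe actually picks up:

  $ find /cygdrive/c /cygdrive/d -iname 'cygwin1*.dll' 2>/dev/null
  $ cygcheck /usr/bin/ls.exe    # lists the DLLs it will load, with full paths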

cgf

On Sat, Mar 10, 2001 at 09:13:28PM +0100, Dr. Volker Zell wrote:
>Hi
>
>I'm using
>
>o cygwin1-20010304.dll 
>
>and see a lot of error messages of the following type when using subprocesses inside
>cygwin-Xemacs.
>
>
>This is from a gcc run inside a shell window in Xemacs:
>
>180134 [main] gcc 440 close_handle: closing protected handle void events_init ():1072(title_mutex<0x10C>)
> 181617 [main] gcc 440 close_handle:  by int fhandler_tty_common::close ():958(input_mutex<0x10C>)
>
>Can't remember when this showed up:
>
>151069 [main] xemacs 257 close_handle: closing protected handle void sigproc_init ():536(wait_sig_inited<0xFC>)
>159657 [main] xemacs 257 close_handle:  by int fhandler_tty_common::close ():960(input_available_event<0xFC>)
>
>
>Here I tried to create an xemacs dired buffer:
>
>  /usr/local/lib/xemacs/site-packages/lisp/jde-2.2.7beta2:
>   186896 [main] xemacs 392 close_handle: closing protected handle void sigproc_init ():541(signal_arrived<0x148>)
>   194816 [main] xemacs 392 close_handle:  by int fhandler_tty_common::close ():960(input_available_event<0x148>)
>   221448 [main] xemacs 392! spawn_guts: wait failed: nwait 3, pid 392, winpid 392, Win32 error 6
>   221563 [main] xemacs 392! spawn_guts: waitbuf[0] 0x50 258
>   221639 [main] xemacs 392! spawn_guts: waitbuf[1] 0x148 = -1
>   221691 [main] xemacs 392! spawn_guts: waitbuf[w] 0x1DC = 258
>  Can't exec program /usr/bin/ls.exe
>  D:\bin\ls.exe: *** Couldn't read parent's cygwin heap 43800 bytes != 0, Win32 error 5
>
>
>Any hints ??
>
>1.1.8-2 worked perfectly.
>
>Ciao
>  Volker
>
>

-- 
[EMAIL PROTECTED]                        Red Hat, Inc.
http://sources.redhat.com/            http://www.redhat.com/

--
Want to unsubscribe from this list?
Check out: http://cygwin.com/ml/#unsubscribe-simple

