Re: keystroke ctrl+s freezes terminal/console

2003-07-15 Thread Erdmut Pfeifer
On Tue, Jul 15, 2003 at 04:30:37PM +0200, Joerg Johannes wrote:
  On Tue, Jul 15, 2003 at 07:33:47AM -0400, Shawn Lamson wrote:
 
  Note that control-S doesn't suspend input. It suspends OUTPUT!
  control-Q lets the output go again.
 
 As Roger already asked: Is there a way to disable this Ctrl-S shortcut?

You could do an "stty stop undef" for the terminal in question. This
disables the terminal's stop feature, which is by default bound to ^S.

"stty -a" shows the current settings...
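
For example, to make this permanent for interactive shells, something along
these lines could go into a shell startup file (a minimal sketch -- the exact
file, e.g. ~/.bashrc, depends on your shell):

if [ -t 0 ]; then        # only when stdin is actually a terminal
    stty stop undef      # free ^S
    stty start undef     # optionally free ^Q as well
fi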

-- 
Erdmut Pfeifer





Re: Need help building php4

2001-11-30 Thread Erdmut Pfeifer
On Thu, Nov 29, 2001 at 07:30:55PM -0800, jennyw wrote:
 I've run into some problems with php4 and it's been suggested I recompile
 php4.  The problem is ... I've never done this before, and reading the
 directions is leaving me with a few questions. First, am I getting the right
 files? I typed apt-get source php4 and then found the files in /etc/apt.
 
 The instructions said to run configure in the php4 source directory which I
 did (to compile the dynamic module). When this is done, the script says that
 it's the CGI version ... I don't think this is what I want ...  What I want
 is to be able to use mod_php4, like what I have currently (except that the
 current install has a few issues, hence the recompile: e.g. it says that
 squirrel mail is redefining functions when it's not).

as with any autoconf-generated configure script, you can always run

./configure --help

to get a list of all available configure options. Among these, you'll
find --with-apxs and --prefix, etc. and a whole bunch of --with-* or
--enable-* options, some of which might apply here.

To build the dynamically loadable apache module (DSO), you probably
need at least something like

./configure --with-apxs

(or ./configure --with-apxs=/usr/bin/apxs, if apxs should not be
found for some reason)

plus some other --with-* options (e.g. --with-mysql). Also, the
--prefix[=DIR] option might come in handy to get make install to put
the stuff in the right directories. For the purpose at hand, though, it
might suffice to manually copy the libphp4.so file to /usr/lib/apache/1.3/;
the rest should already have been set up correctly when the original php4
deb package was installed.
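
A rough sketch of the whole sequence (paths are the Debian defaults
mentioned above; the extra --with-* options are just placeholders for
whatever you actually need):

apt-get source php4                  # fetches and unpacks the source here
cd php4-*/
./configure --with-apxs=/usr/bin/apxs --with-mysql    # plus other options
make
# then either "make install", or copy the module by hand:
cp .libs/libphp4.so /usr/lib/apache/1.3/
/etc/init.d/apache restart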

The INSTALL document that comes with php includes some more details, in
particular the section VERBOSE INSTALL, subsections 2a and 4a.

You probably also need to get the apache-dev package, as this includes
the apxs tool. Along similar lines, always check whether there is some
developer package, in case you find something is missing when trying
to build things from sources (this might happen for some of the other
--with-* options...).

As usual, http://packages.debian.org/ is your friend if you need to
find out which package a specific file is contained in.  Also see

http://packages.debian.org/testing/web/php4.html

for the list of php add-on modules that are built into the regular
deb-package. If you need the associated functionalities, you might (or
might not) need to specify the appropriate --with-* or --enable-*
option to configure, depending on the defaults in effect (see the
output of --help).
I'm not a PHP expert (personally prefer mod_perl), so I can't tell you
any details on which options you most probably want, and how to get
those working properly -- I assume there is some PHP list to post
specific questions to...

Good luck

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: Need help building php4

2001-11-30 Thread Erdmut Pfeifer
On Fri, Nov 30, 2001 at 03:00:12PM -0800, jennyw wrote:
 Thanks! Actually, the thing I'm most unclear about is what all the output
 files are and where they go.  I saw on another Web page that php4 is the
 only file that matters, so I'll try that this afternoon. But it seems that a
 lot of other files are generated, too.

typically, a lot of intermediate files are created (e.g. for every
source file (.c) there'll be an object file (.o)). The configure
process itself also creates a couple of temporary files. You don't need
to worry about all those...

Very generally speaking -- if we leave out any config and documentation
files for the moment -- the target of the build process will usually be
one or more of the following: (1) a binary executable (typically named
after the package), (2) a static library file (extension .a) or (3) a
dynamic library, also called shared object file (thus the extension
.so). Library filenames by convention start with lib.

If I understood you correctly, you're trying to build the dynamic php4
module to be loaded into apache. In this case you probably only need
the libphp4.so shared-object file. After a successful build you can
fish that out of the .libs subdirectory (IIRC), which should've been
created in the source directory during the build. Then simply replace the
other file of the same name (the one which refuses to work) with this
newly created one -- it should reside in /usr/lib/apache/1.3/.

The php4 file you mentioned above is the stand-alone PHP binary for use
in conventional CGIs. I guess this is not the one you're interested in ;)

If you'd rather try the automatic install, after having set up the
appropriate destination directories, but are still feeling a little
unsure about where stuff will be installed, you can always try the
generic dry-run facility of make (option -n), i.e. "make -n install"
(instead of the usual "make install").  This is supposed to cause make
to just print out what it *would* do without actually modifying
anything...  It doesn't always work as desired under every circumstance,
though (e.g. if intermediate steps need to put some files in their
proper places before subsequent installation steps can continue...),
so YMMV.  But even then, you'll get a rough idea of where stuff will be
put by looking at the directory names in the commands printed during
the dry run...
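
For instance (just an illustration; the grep pattern is arbitrary):

make -n install | less                               # page through the planned steps
make -n install | grep -E '^ *(cp|install|mkdir)'    # or just look at the copy steps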

Cheers

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: an automated web browser

2001-11-22 Thread Erdmut Pfeifer
On Thu, Nov 22, 2001 at 06:38:43PM +0100, martin f krafft wrote:
 hi,
 i use this internet service that *requires* you to log in once a day,
 or they close your account after three days of inactivity. it's
 absolutely bloody ridiculous, but i have to live with it, for i do
 need the service (and there is no other like that one) about once a
 month. since i can't possibly access the web everyday (if i am
 traveling for instance), i would like to set up a cron script that
 basically surfs the site for me (including login). however, there are
 at least 10 Redirects happening before the login is counted, and
 that's just too much for my nerves as a shell scripter.
 
 so i am wondering, there *has* to be a tool like lynx or the like,
 which will accept commands over stdin or the command line of the form:
 
 - enter madduck into field username
 - enter abc123 into field password
 - click onto login submit button
 (after all the redirects have settled, i need to take one other
   step).
 - click onto the third link on the page  (or the link labeled
   something).
 
 do you know of something like that?

I've often found curl (http://curl.haxx.se/) quite useful for this kind
of automation task. It can POST forms, handle cookies, etc., but it
isn't fully automated in the sense of recording and replaying a web
session... You'd still first have to investigate what needs to be done
and then tell curl to simulate the session in a series of individual
requests (which links to follow, values of form variables, cookies,
special HTTP headers, and whatever else may be required...).
So, it may not be exactly what you're looking for, but it's a nice and
versatile tool, anyway :)
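
Just to give an idea of what such a scripted session might look like, here's
a minimal sketch -- the URL, form field names and cookie file are made up,
you'd have to lift the real ones from the login form's HTML (and -c/--cookie-jar
needs a reasonably recent curl):

#!/bin/sh
# run from cron once a day; -L follows the redirects, -b/-c handle the cookies
curl -s -L -c /tmp/cookies.txt -b /tmp/cookies.txt \
     -d 'username=madduck' -d 'password=abc123' \
     'http://www.example.com/login' > /dev/null
# then request whatever page counts as "activity"
curl -s -L -b /tmp/cookies.txt 'http://www.example.com/members/' > /dev/null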

Cheers

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: X-Authentification with XFree 4?

2001-11-22 Thread Erdmut Pfeifer
On Thu, Nov 22, 2001 at 09:01:11PM +0100, Debian User wrote:
 Hi folks!
 
 Last week I tried to get a debian-derived CD-booting linux online.
 (It's called knoppix. It seems to derive from woody, but has a pretty
 good hw-detection (sound, gfx, mouse, ... in less than 30s on a PIII 400).
 Bringing the system online was the easy part (wvdial-conf worked great).
 I saved the config-files and a little init-script to the local hd, so that
 it boots only from CD and installes nothing to a hd (I just need a floppy).
 But here comes the prob:
 I did this that i can play spellcast against my brother. I intended to use
 spellcast.vulpyne.net for this. But spellcast.vulpyne.net always complaints
 about noX.
 xhost +spellcast.vulpyne.net or even xhost + did not solve this problem.
 Since I have XFree 3.3.6 my knowledge in XFree 4 is _very_ poor.

I don't know much about spellcast, but my guess would be that you have
-nolisten tcp in your /etc/X11/xinit/xserverrc (or wherever X is
being started in your derived distro).  Remove that, and see if it
works then... (don't forget to restart X).
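
A quick way to locate it (the path is just the usual place -- it may well
differ on that CD-based distro):

grep -rn nolisten /etc/X11/
# if it turns up in xserverrc or a startx wrapper, remove the flag and restart X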

Also, if this box is more than a simple playstation, I guess (or hope)
you are aware of the security implications of "xhost +", aren't you?

HTH

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: quick ghoscript font question

2001-11-22 Thread Erdmut Pfeifer
On Thu, Nov 22, 2001 at 10:07:04AM -0500, David Z Maze wrote:
 
 snip
 
 Not to be picky or anything, but whether or not a font is available to
 X has *absolutely no bearing* as to whether Ghostscript likes it or
 not.  Ghostscript won't magically recognize X fonts; the various
 Postscript previewers based on Ghostscript will correctly render
 Ghostscript fonts in complete ignorance of X's font scheme.

I'd definitely second that...

 By way of useful advice, though, I'd read through
 /usr/share/doc/gs/Fonts.htm.  The Adding your own fonts section
 mentions how to convert a BDF bitmap font to a Type 1 font; I'm not
 clear if a PCF font can readily be converted to BDF, though (the other
 direction appears to be possible).

There is an old little program called getbdf that can read fonts out
via the X server itself and save them in BDF format.

Google located it here, for example:

  http://crl.nmsu.edu/~mleisher/getbdf.c

If you have the X development stuff installed, you can easily build
it yourself

  gcc getbdf.c -o getbdf -lX11 -L/usr/X11/lib/

(if you find this too cumbersome, feel free to drop me a note off-list
and I'll send you the binary...[23k])

Then capture the font in question, e.g.

  getbdf -font 9x15 > 9x15.bdf

(the full name -Misc-Fixed-Medium-R-Normal--15-140-75-75-C-90-ISO8859-1
would work too, of course)

and use bdftops to create a Type1 font from that, as described in the
above mentioned gs docs.
But don't expect the quality of a typical PostScript font -- a bitmap
font will always look like one, even after this conversion ;)

Cheers,
Erdmut

PS: if you feel like fiddling around yourself with (bitmapped) X fonts,
there is a reasonably usable font editor xmbdfed. Unfortunately, it's
Motif based, so it's probably easiest to directly get the statically
linked binary, which is also available from

http://crl.nmsu.edu/~mleisher/xmbdfed-4.5-LINUX.tar.gz


-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: Realtek module don't want to be loaded! Is too busy... ;)

2001-11-21 Thread Erdmut Pfeifer
On Wed, Nov 21, 2001 at 01:42:09PM +, Nuno Emanuel F. Carvalho wrote:
 Hi there,
 
   I'm having problems on getting my pcmcia card to work on Debian.
   Already compiled RTL8139 as module from kernel and installed pcmcia-cs
 tarball.
   In my opinion, problem is from rtl8139 module:
 
   $ insmod rtl8139
 /lib/modules/2.2.14/net/rtl8139.o: init_module: Device or resource busy

check whether your BIOS has a Windows 9x support option or some
b*llsh*t like that -- and if so, try to disable it.

On my ASUS notebook, I initially got the same error, which was solved by
disabling said option.
This notebook has an *onboard* NIC (same RealTek chip), however, so YMMV...

For details on the issue, see Donald Becker's pages, in particular:

  http://www.scyld.com/expert/irq-conflict.html


Cheers

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: Realtek module don't want to be loaded! Is too busy... ;)

2001-11-21 Thread Erdmut Pfeifer
On Wed, Nov 21, 2001 at 03:13:44PM +, Nuno Emanuel F. Carvalho wrote:
 On Wed, 21 Nov 2001, Erdmut Pfeifer wrote:
 
  check whether your BIOS has a Windows 9x support option or some
  b*llsh*t like that -- and if so, try to disable it.
 
  On my ASUS notebook, I initially got the same error, which was solved by
  disabling said option.
  This notebook has an *onboard* NIC (same RealTek chip), however, so YMMV...
 
My problem isn't any IRQ Conflit. I already had RedHat 6.1 installed
 working with pcmcia. Didn't changed bios setup. Unfortunally i didn't
 wrote any documentation... ;(

which version of the rtl8139.c driver are you using?
When comparing the init_module()-fragment of the rtl8139.c that came
with the 2.2.14 kernel (v1.07) with the current version (v1.16a):

v1.07:

int init_module(void)
{
    return rtl8139_probe(0);
}

v1.16a:

int init_module(void)
{
    if (debug)  /* Emit version even if no cards detected. */
        printk(KERN_INFO "%s" KERN_INFO "%s", versionA, versionB);
#ifdef CARDBUS
    register_driver(&realtek_ops);
    return 0;
#else
    return pci_drv_register(&rtl8139_drv_id, NULL);
#endif
}

...I somehow get the impression that CARDBUS support might not have
been present in the earlier version of the driver.  So, it might be
worth trying a newer version.  Not sure, though, whether the new
version will work with the old kernel -- but why not simply try?
There isn't much to lose ;)   If the most recent version doesn't
compile/work, maybe some other version in between v1.07-1.16 does.

Also, as you said you already had it working in RH 6.1, you might want
to figure out which mix of versions was in use there, and try those...
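
If you want to give a newer standalone driver a try, the procedure is roughly
the following (a sketch only -- the URL and compile flags are assumptions from
memory; the authoritative compile command is usually quoted in a comment at the
top of rtl8139.c itself, and Becker's pages have the details):

wget http://www.scyld.com/network/rtl8139.c
gcc -DMODULE -D__KERNEL__ -O6 -c rtl8139.c -I/usr/src/linux/include
insmod ./rtl8139.o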

HTH

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: Netscape in different desks with fvwm2

2001-11-09 Thread Erdmut Pfeifer
On Thu, Nov 08, 2001 at 11:18:35AM -0700, Mike Fontenot wrote:
 
 I know how to automatically start Netscape in a
 different desktop (using fvwm2),
 but I would also like to be
 able to automatically also start the Messenger
 window in a third desktop.
 
 I.e., I'd like the initial Netscape window (Navigator,
 in my case) to be started in Desk 1, and the Messenger
 window of that SAME Netscape process to be started
 in desk 2.
 
 Anyone know if that's possible?  I suppose it would
 require some kind of specification to netscape itself.
 
 I know that I can start
 two instances of the Netscape process, one starting
 with the Navigator window, and one starting with
 the Messenger window.  But I don't think that's
 what I want...

just an idea: there's a not too well advertised -remote option
to Netscape that can be used to issue commands to be executed in an
*already running* instance of Netscape. So, I assume something like

  netscape -remote 'openInbox()'

should have the desired effect, presuming you know how to have fvwm2
open the resulting new window on a different desktop (which you seem to
have figured out already :)

BTW, similarly, commands like

  netscape -remote 'openURL(http://www.debian.org)'

considerably speed up startup times when trying to open HTML documents
from the commandline, as this avoids launching another instance of
Netscape.
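
A tiny wrapper script along those lines is a common trick (hypothetical
script -- the fallback simply starts a fresh Netscape when no instance is
listening):

#!/bin/sh
url=${1:?usage: $0 URL}
# reuse a running Netscape if there is one; -remote fails if none is found
if ! netscape -remote "openURL($url)" 2>/dev/null; then
    netscape "$url" &
fi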

For details see

http://home.netscape.com/newsref/std/x-remote.html

and have a look at Netscape's X-resources file Netscape.ad for further
ideas on what you might want to control remotely -- just in case you
start liking that feature :) 

Cheers

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: a challenge

2001-10-18 Thread Erdmut Pfeifer
On Thu, Oct 18, 2001 at 10:59:25AM -0500, Nathan E Norman wrote:
 On Thu, Oct 18, 2001 at 01:58:10PM +0200, martin f krafft wrote:
  goal: a 4-16 byte 7-bit character value that somehow encodes the time
of creation such that it can be extracted if the encoding scheme/seed
is known. the encoded value should be such that it is mostly
impossible to change it so as to yield a later time of creation to be
encoded. in general, changing the encoded value may well render the
data invalid.
  
this is supposed to be a token that's valid for a limited amount of
time, after which, a new token has to be fetched. this token should
not be obvious (e.g. the timestamp) to prevent people from changing
it to be valid longer rather than fetching a new one.
  
  can you do it? or is there a tool out there?
 
 use perl, Digest::HMAC_MD5 to encode the token, and MIME::Base64 to
 make the result HTTP palatable.
 
 I used this to write a cookie-based web authentication scheme which
 timed out after some period of inactivity.  I'll look around for the
 code as it sounds like you're doing something similar.
 
 libdigest-hmac-perl contains Digest::HMAC_MD5
 libmime-base64-perl contains MIME::Base64

also, if *tamper*-protecting the timestamp is your primary intention,
you might find the related section of the book "Writing Apache Modules
with Perl and C" from O'Reilly (often referred to as "the Eagle book")
a useful read.  Luckily, the relevant chapter of this very fine book is
available online:

http://www.modperl.com/book/chapters/ch6.html#Cookie_Based_Access_Control

It explains the basic principles of using hash functions (MD5) to
protect snippets of data against modification, like your expiration
date, etc...

The essential idea is to incorporate a secret key when computing the
checksum -- in its most simple form something like the following
pseudo code:

  $hash = md5sum( "$secret$data" )

  $ticket = "$data$hash"

To verify the validity of the ticket, just separate the data and hash
parts, and check whether the hash matches the real one which *only you*
can compute using your secret key (as shown above).  As hashes like
MD5 are *one-way* functions, it's infeasible (within the general
limits of cryptography) to reverse the operation to obtain your secret
key.  Of course, many variations on the theme exist...
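
Just to make the pseudo code concrete outside Perl, here's a minimal shell
sketch of the same scheme (md5sum instead of Digest::HMAC_MD5, one hour of
validity, and a made-up secret -- purely illustrative):

#!/bin/sh
SECRET="some-private-passphrase"       # known only to the server

issue_ticket() {
    data="expires=$(( $(date +%s) + 3600 ))"    # valid for one hour
    hash=$(printf '%s%s' "$SECRET" "$data" | md5sum | cut -d' ' -f1)
    printf '%s:%s\n' "$data" "$hash"
}

check_ticket() {       # returns 0 only if untampered and not yet expired
    data=${1%:*}; hash=${1##*:}
    want=$(printf '%s%s' "$SECRET" "$data" | md5sum | cut -d' ' -f1)
    [ "$hash" = "$want" ] && [ "${data#expires=}" -gt "$(date +%s)" ]
}

t=$(issue_ticket)
check_ticket "$t" && echo "ticket ok" || echo "ticket rejected"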

Actually, this is a quite common technique in the context of web
authentication/authorization, so, of course, there are various
utilities to make your life easier.  I'd suggest that you flip through
the pages of the mentioned docs -- it might save you from reinventing
the wheel ;)

Cheers

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: forking so apache won't wait

2001-09-27 Thread Erdmut Pfeifer
On Thu, Sep 27, 2001 at 12:44:35AM -0500, will trillich wrote:

 debianistas tend to know where to look, so i'm hoping someone
 will point the way--

sure :)

 i know i've run across it before but when i WANT to stub my toe
 on it, it's nowhere to be found: HOW can i have apache/web page
 initiate a process and return quickly to generate a 'processing,
 hold on' page while the process does its processing?

 snip
 
 pointers? flames? any gdM you think i should RTF out of?

Hi Will,

...as I've seen you being active on the mod_perl list, I would've
assumed that you've already heard of The Guide ;)

Basically, there's nothing wrong with your concept of forking off the
long-running process, and as a normal CGI program it would probably
work fine. The few remaining flaws in it (specific to the mod_perl
environment) are all detailed in the mentioned docs.

So, just in case you'd like to pursue your original idea (instead of
using the multipart/x-mixed-replace MIME-type method), you probably
want to start reading here:

http://perl.apache.org/guide/performance.html#Forking_and_Executing_Subprocess

Also, if you haven't already thought of this, you might consider
sending back a 'waiting' document that automatically keeps polling the
server at certain intervals. Just include the following in the HEAD
section of that HTML page

<HTML>
<HEAD>
<META HTTP-EQUIV="refresh" CONTENT="5; URL=...(your URI here)">
</HEAD>
...

to have it check back/reload after 5 secs. In the simplest case, the
remote process you call via this URI would check whether some result
file exists (indicating that the long-running process has completed)
and, if so, send back the final result page; if not, it would resend
the wait page.
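
As a sketch of that server side -- a plain CGI in shell, with a made-up
result-file path and URI (in a mod_perl setup you'd do the equivalent in
your handler):

#!/bin/sh
# check-done CGI: send the result once the background job has finished,
# otherwise resend the wait page carrying the refresh header from above
RESULT=/tmp/longjob-result.html      # written by the forked-off process
echo "Content-type: text/html"
echo
if [ -f "$RESULT" ]; then
    cat "$RESULT"
else
    cat <<'EOF'
<HTML><HEAD>
<META HTTP-EQUIV="refresh" CONTENT="5; URL=/cgi-bin/check-done">
</HEAD><BODY>Processing... Please wait...</BODY></HTML>
EOF
fi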

What also often comes in handy is the not so well known HTTP return
code "Status: 204 No Content", which causes the browser to *not*
update/delete its currently displayed page -- it's a kind of 'do nothing'
instruction. For example, returning this 204 while the process is still
running avoids having to resend/redraw the wait page, and the
associated flickering...

The above refresh technique does not, however, work in combination with
the 204 trick. It would reload exactly once, as the no-content page
obviously cannot resend the same HTML snippet (and sending a real
"Refresh: ..." header after the "Status: 204" does not seem to work --
at least not with Netscape). So, for this to work, you would need a bit
of JavaScript, something like:

<HTML>
<HEAD>
<SCRIPT LANGUAGE="JavaScript">
function check_done() {
  this.location.replace("http://your.server/path/check-done.url");
}
</SCRIPT>
</HEAD>
<BODY onLoad="setInterval('check_done()', 5000)">
Processing... Please wait...
</BODY>
</HTML>

(the setInterval() period is in msecs)

Cheers

-- 
Erdmut Pfeifer
science+computing ag
www.science-computing.de

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: HPLJ4, PCL, specifically tray selection

2001-08-30 Thread Erdmut Pfeifer
On Wed, Aug 29, 2001 at 04:32:30PM -0700, Karsten M. Self wrote:
 Another PITA:
 
 I've got a Hewlett Packard LaserJet 4 Printer, it's got a primary and
 manual feed tray.  Question is:  how do I indicate a job's supposed to
 go to the manual-feed tray?
 
 I suspect it's a PCL issue.

if it's a non-postscript printer, yes.

 I've found a page with a list of codes that looks promising:
 
 http://www.hp.com/cposupport/printers/support_doc/bpl02705.html

(thanks for the pointer, BTW.  Good to know there exists such a page.
At the moment I don't really need the info -- but who knows when the
time comes ;)


 ...but hell if I know how to apply them.
 
 I'm using lpr.  I remember some stuff from way back when I ran on HPUX
 that we'd throw a slew of options at the printer with jobs for various
 output options (usually portrait/landscape orientation).  I'm hoping
 that I can do something here either with a '-o' option or with a
 printcap configuration.

If I were in that situation, I'd start my first experiments by cat-ing
the stuff directly to the printer device, to avoid having to deal with
the lpr system at this early stage.

I'd write myself a little script to ease transferring the information
from the above webpage to my printer. It could look something like:

#!/usr/bin/perl

my $manual_feed = pack "H*", join "", qw(1B 26 6C 32 48);

   # $manual_feed now is a 5-byte binary string representing the
   # PCL escape/control sequence for manual feed

print $manual_feed, <>;

This does nothing more than assembling the escape sequence specified in
hex notation (as found in the last column of the table on the webpage)
into the (binary) string that needs to be sent to the printer. This
string is then simply prepended to the data to be printed. The script
can be used as a pipe or be called with argument(s) representing the
files to be sent to the printer. It's meant as an example only -- with
a hardcoded PCL sequence for selecting the manual-feed paper option.
(The idea is just to make it easy to cut & paste the sequences from the
webpage without too much editing...)
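
Usage of such a sketch would then simply be (script and file names are
placeholders, as is the printer device):

# as a filter/pipe, straight to the device for a first test:
cat report.txt | ./manualfeed.pl > /dev/lp0
# or with the file(s) given as arguments:
./manualfeed.pl report.txt > /dev/lp0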

Once you have the PCL sequences causing the desired effects in the
printer, you could proceed to the next step, which would be the
question how to integrate that into the lpr system.

Basically, I'd try to use printcap's facility to specify filters. In
this particular case it wouldn't matter much whether you use the 'if'
or the 'of' filter (though, if you already have a filter configured,
you'd somehow have to chain/merge their effects, of course, e.g. by
making one filter a wrapper for the other...). Also, IIRC, the 'of'
filter will only get called when there's no 'if' filter -- see the
manpage for details.

The filter would be a script resembling the above example.
Theoretically, the filter could be passed some options that would
control its detailed behaviour (manual-feed, tray-1, tray-2, etc.), yet
there remains the problem of how to get the option to the script.  I'm
not aware of a general option passing facility in the lpr system -- the
'-o' is specific to the lp system, AFAIK.  Yet, I may simply not have
been reading the manpage carefully enough ;)

So, I assume it would get quite cumbersome to pass custom options to
the filter.  The easiest workaround would probably be to set up several
differently named printers in printcap (which would all be printing to
the same physical device), specifying a different filter for each one,
which would take care of prepending the required PCL control sequences.
Selecting a paper tray would then be a matter of specifying the
dedicated printer (alias) via the usual -P option.
Someone else might have a better idea...

Also, more generally, the whole approach of simply *prepending* some
PCL escape sequences would not be able to deal with data already
containing conflicting PCL code, as might be generated by some
applications. For example, if an application insists on generating
PCL code to select tray-1, this would simply override any attempts to
select manual-feed, etc., as the 'tray-1' code appears later in the
stream sent to the printer...

Anyhow, these are just a few thoughts -- I haven't actually tried
anything of this :)

Cheers,

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: HPLJ4, PCL, specifically tray selection

2001-08-30 Thread Erdmut Pfeifer
On Thu, Aug 30, 2001 at 10:36:50AM -0700, Karsten M. Self wrote:
 
 The one way I've found to use the manual tray to date has been through
 WP8, which apparently uses printer-specific controls rather than just
 treating the device as an arbitrary postscript printer.

just in case you tend to favor a generic and straightforward
PostScript solution, give the following code snippet a try:

%!PS
<< /ManualFeed true >> setpagedevice

% example output -- real document would go here
/Helvetica findfont 20 scalefont setfont
100 600 moveto
(This tests whether ManualFeed is supported) show
showpage


When you send this to the printer, and the printer asks you to insert
a sheet, then you know that you can immediately start to forget about
all the printer-specific, proprietary PCL b***sh*t :)
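
For existing PostScript jobs, the same request can simply be prepended
before sending them off -- a rough sketch for simple documents only
(anything that sets up its own page device may override it again):

{ printf '%%!PS\n<< /ManualFeed true >> setpagedevice\n'
  cat document.ps
} | lpr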

The 'setpagedevice' operator is the key element here. It takes a
dictionary of various key-value pairs, which request device-specific
behaviour, if available. ManualFeed is just one of them.

If you want to learn more about what other options your printer might
be supporting, I'd warmheartedly recommend getting the PostScript
Language Reference Manual (if you haven't got it already). Actually,
there are two versions of it, the 2nd and the 3rd edition.  Both of
these reference books are also available online as PDF, and can be
downloaded for free (beer, not speech) from here:

http://partners.adobe.com/asn/developer/pdfs/tn/psrefman.pdf
  (2nd edition, 3.3MB)

http://partners.adobe.com/asn/developer/pdfs/tn/PLRM.pdf
  (3rd edition, 7.4MB)

Although it may sound strange, my personal preference is the old 2nd
ed.  Sure, it's a bit outdated, but it's got much less irrelevant
detail, which makes it easier to find what you're looking for. Also,
it's got two useful appendices on EPS and DSC, that have been removed
from the new version -- due to space constraints. OTOH, if you'd rather
read about every new and spiffy level 3 feature in excruciating
detail, then get yourself the 3rd ed. ;)

The relevant sections for the topic at hand are: section 4.11. (2nd
ed.), or chapter 6 (3rd ed.). Those should cover most of what you need
to know.

Cheers,

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: PHP4 compilation problem

2001-08-28 Thread Erdmut Pfeifer
On Tue, Aug 28, 2001 at 06:52:52PM +0200, Francois Thomas wrote:
 
 
  -Message d'origine-
  De : Russell Speed [mailto:[EMAIL PROTECTED]
  Envoyé : mardi 28 août 2001 17:47
  À : Francois Thomas
  Cc : 'debian-user@lists.debian.org'
  Objet : Re: PHP4 compilation problem
  
  
  You need to install the imap support library package.
 
 I already have installed the libc-client4.7 libraries.. Is there something
 else that I have missed ?

I think you also need the package libc-client4.7-dev, which contains the
header files (*.h) and the static library (.a).

This package installs the header files into:   /usr/include/c-client/
and the library into:  /usr/lib/

(just in case you need to specify that information for the PHP
./configure step)

Once you have it installed, reconfigure/rebuild PHP. Preferably do a

  make distclean

before you rerun configure (occasionally I've seen configure getting
confused by its own config.cache, which will also be removed in the
above step).  Then try again what you had probably done before:

  ./configure --with-imap  [other options...]

It *should* complain in case it doesn't find the imap stuff... ;)

What's a bit strange is the presence of the '-limap' in the linking
step (as shown in your log below). That instructs the compiler to look
for a library file named libimap.a, which is definitely not provided by
the above (UW-)imap package. It's called c-client.a and libc-client.a
(the latter is a symlink) instead.
I guess that must be the result of an error during the configure step...
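
So a plausible rebuild sequence would be something like this (the DIR
argument to --with-imap is a guess based on the Debian paths above;
configure should complain if it can't find the headers/library there):

make distclean
./configure --with-imap=/usr [your other options...]
make
make install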

Cheers,
Erdmut


   make[1]: Entre dans le répertoire `/usr/local/src/php-4.0.6'
   /bin/sh /usr/local/src/php-4.0.6/libtool --silent 
  --mode=link gcc  -I.
   -I/usr/local/src/php-4.0.6/ -I/usr/local/src/php-4.0.6/main
   -I/usr/local/src/php-4.0.6 -I/usr/local/apache/include
   -I/usr/local/src/php-4.0.6/Zend -I/usr/include/c-client 
   -I/usr/local/src/php-4.0.6/ext/xml/expat/xmltok
   -I/usr/local/src/php-4.0.6/ext/xml/expat/xmlparse
   -I/usr/local/src/php-4.0.6/TSRM  -DLINUX=2 -DUSE_HSREGEX -DUSE_EXPAT
   -DSUPPORT_UTF8 -DXML_BYTE_ORDER=12 -g -O2   -o libphp4.la -rpath
   /usr/local/src/php-4.0.6/libs -avoid-version 
  -L/usr/local/pgsql/lib  -R
   /usr/local/pgsql/lib stub.lo  Zend/libZend.la sapi/apache/libsapi.la
   main/libmain.la regex/libregex.la ext/calendar/libcalendar.la
   ext/imap/libimap.la ext/pcre/libpcre.la ext/pgsql/libpgsql.la
   ext/posix/libposix.la ext/session/libsession.la 
  ext/standard/libstandard.la
   ext/xml/libxml.la TSRM/libtsrm.la -limap -ldl -lpq -lcrypt 
  -lresolv -lm -ldl
   -lnsl -lresolv 
   /usr/bin/ld: cannot find -limap 


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: python-fcgi?

2001-08-27 Thread Erdmut Pfeifer
On Mon, Aug 27, 2001 at 09:53:12AM -0600, Robert L. Harris wrote:
 
 I'm trying to install web2ldap for apache.  It appears to require the
 python-fcgi which I can't find.  I've got pythong and the other modules
 hanging around, but can't find this one.  

hmm, are you sure you're looking for python-fcgi, and not python-ldap
(debian package) and the script fcgi.py?  You can get that from:

http://alldunn.com/python/fcgi.py

Some more useful info may be found here:

http://www.suxers.de/python/fcgi.htm
http://www.fastcgi.com/

I don't think there are any ready-to-install deb packages for those
components yet.

HTH

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: python-fcgi?

2001-08-27 Thread Erdmut Pfeifer
On Mon, Aug 27, 2001 at 11:02:19AM -0600, Robert L. Harris wrote:
 
 This is what I get in my apache error.log:
 
 [Mon Aug 27 08:57:39 2001] [warn] FastCGI: server 
 /var/www/web2ldap/fcgi/web2ldap.py restarted (pid 6110)
 Traceback (innermost last):
   File /var/www/web2ldap/fcgi/web2ldap.py, line 14, in ?
 import sys,os,time,fcgi,threading
 ImportError: No module named fcgi
 [Mon Aug 27 08:57:39 2001] [warn] FastCGI: server 
 /var/www/web2ldap/fcgi/web2ldap.py (pid 6110) terminated by calling exit 
 with status '1'

I think it's just missing the fcgi.py, which is the python-side module
to establish the persistent CGI process together with the FastCGI
apache module mod_fastcgi. Just download that file (fcgi.py) from the
address below and put it in the directory where the web2ldap.py script
is located. Python should normally find it there -- at least for testing
purposes that should do. If not, you might try fiddling with the env
var PYTHONPATH (module search path) or with sys.path from within the
web2ldap.py script. (Check the documentation to see if there's a more
appropriate place to install fcgi.py... If you can't find anything, and
things work, I wouldn't bother too much, though :)

  
  http://alldunn.com/python/fcgi.py
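
I.e. something like (the target directory is taken from your error log
above):

cd /var/www/web2ldap/fcgi
wget http://alldunn.com/python/fcgi.py
# alternatively, put it somewhere else and point PYTHONPATH at that directory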

Cheers,

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: python-fcgi?

2001-08-27 Thread Erdmut Pfeifer
On Mon, Aug 27, 2001 at 12:07:49PM -0600, Robert L. Harris wrote:
 
 Ok, that fixed that.  now I'm getting this:
 
 [Mon Aug 27 11:05:04 2001] [warn] FastCGI: server 
 /var/www/web2ldap/fcgi/web2ldap.py restarted (pid 7869)
 Traceback (innermost last):
   File /var/www/web2ldap/fcgi/web2ldap.py, line 18, in ?
 sys.path.insert(0,os.sep.join([exec_startdir,'etc','web2ldap']))
 AttributeError: 'string' object has no attribute 'join'
 [Mon Aug 27 11:05:04 2001] [warn] FastCGI: server 
 /var/www/web2ldap/fcgi/web2ldap.py (pid 7869) terminated by calling exit 
 with status '1'
 
 Here's lines 17-19:
 
 exec_startdir = os.path.dirname(os.path.dirname(os.path.abspath(sys.argv[0])))
 sys.path.insert(0,os.sep.join([exec_startdir,'etc','web2ldap']))
 sys.path.insert(0,os.sep.join([exec_startdir,'pylib']))

my guess would be that you're running a python version < 2.0 (probably
1.5.x). There were some changes in string handling, which is what seems
to break the script (the join method it's complaining about)...

If you're interested in the details, check this page:

http://www.python.org/2.0/new-python.html#SECTION00080

I just had a look at http://www.web2ldap.de/install.html, where it says:

For running web2ldap 0.9.4+ you need at least: 

Python 2.0 or later (currently Python 2.1.1 is recommended).
...

so, I assume you won't get around getting a new python package ;)

(Or, if you've already installed version v2.x, maybe you're just
invoking the old 1.5 interpreter in the #!-line at the start of the
script -- in which case v1.5 obviously still seems to be present, too)

And, once you've completed the upgrade, don't forget to change the
#!-lines of the *.py scripts involved to reflect the change
(i.e. #!/usr/local/bin/python1.5 or similar will no longer work then)

Good luck,

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: mod-ssl vs apache-ssl

2001-08-26 Thread Erdmut Pfeifer
On Sun, Aug 26, 2001 at 01:18:03PM -0700, Bill Wohler wrote:
   A little while ago, Hans (obviously from Northern Germany--moin
   moin!) asked whether it was better to use mod-ssl or apache-ssl.
   That question wasn't really answered.
 
   Since libapache-mod-jk only attaches itself to apache, I'm now
   considering apache (with mod-ssl) over apache-ssl.
 
   I use both ports: 80 and 443.
 
   Are there any advantages of apache-ssl over apache + mod-ssl?

In terms of practical usability, I'd say no -- at least I wouldn't
know of any. If you search the web for differences between the two SSL
implementations, about the only thing you'll find is a difference in
philosophy: apache-ssl is primarily focusing on stability, while
mod_ssl (or Ralf Engelschall, to be more precise) dares to add new
features as required...

As to the stability of mod_ssl, I can say that we've been using apache
+ mod_ssl in various projects for quite some time now, and we've never
had any problems with stability so far.

Just my 2 cents.

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: X11: double-click word delimiters

2001-08-24 Thread Erdmut Pfeifer
On Fri, Aug 24, 2001 at 12:44:55PM +0200, Lukas Ruf wrote:
 Dear all,
 
 where and how can I configure the delimiters of what gets selected under
 X11v4 (fvwm) when I double-click on words?
 
 An example:
 http://www.nodeos.org -- double click on nodeos selects just nodeos but I
 would like that all enclosed by whitespace is selected.

although X provides the general mechanisms for making text selections,
the detailed specific behaviour is determined by the application, e.g.
the terminal emulators like xterm, rxvt, etc.

I'm not sure which application you are referring to, but assuming you
mean xterm, have a look at the manpage section "CHARACTER CLASSES" --
it's detailed enough :)
As another example, for rxvt (my preferred terminal emulator), you can
configure it at compile time, or - if you choose dynamic configuration
at compile time - also via Xresources or command line option. Do a

rxvt --help

which will show the available resources. If you see cutchars there,
you are lucky -- that's where you can specify the delimiting charset.

rxvt -h

should show something like (.Xdefaults), if you have dynamic config
via Xresources compiled in. In the source the relevant section is in
feature.h:

/*
 * Default separating chars for multiple-click selection
 * Space and tab are separate separating characters and are not settable
 */
#define CUTCHARS        "\"&'()*,;<=>?@[\\]^`{|}~"

/*
 * Add run-time support for changing the cutchars for double click selection
 */
#define CUTCHAR_RESOURCE



Just these two as examples. Other terminals or applications certainly
have their own different mechanisms. Someone else on the list might
know the details. Searching in the respective manpages or other docs
for char, delimit, cut or select might also point you to the
appropriate section... YMMV.

HTH

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Stable and XFree86 4.x

2001-08-24 Thread Erdmut Pfeifer
On Fri, Aug 24, 2001 at 08:11:22AM -0400, Jonathan D. Proulx wrote:
 Hi,
 
 I know this has come up before, but my searching skills aren't up to
 the task of finding it in the archives apparently...
 
 I have about 20 workstations on the way for incoming students and They
 all have GForce2 cards (AFAIK this requires XFree86 4.x), what's the
 best way tho keep these machines running stable but also grabbing
 XFree86 4.x?  I seem to recall someone building stable .deb's for this
 a long time ago, but can't seem to find where they are.

see here:

http://people.debian.org/~cpbotha/

or, if you're feeling adventurous:

http://cpbotha.net/building_xfree86_4.1.0_debs_on_potato.HOWTO


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Ethernet Card Setup

2001-08-22 Thread Erdmut Pfeifer
On Wed, Aug 22, 2001 at 09:34:43AM -0400, Hall Stevenson wrote:
  I am new to Debian, and am having difficulty
  setting up my Network Card.
 
  I currently have a 3Com  3C905-TX   PCI
  10/100 network card.
 
  It is apparently supported in the kernel distributed
  with the current Potato release. I have read that it
  should be auto-detected.
 
  It doesn't seem to be detected and set up in the
  Debian Install.
 
  How do I go about configuring it?
 
 Try disabling the Plug-n-Play Operating System setting in
 your BIOS, if you have one. It may be worded differently, but
 you should get the idea...
 
 I *think* (and I'd like to know the answer for sure in case
 anyone knows) that the kernel drivers for various pieces of
 hardware only looks at 'x' number of settings, i.e. IRQ and IO
 address. They're considered 'standard' settings for that
 particular device. If you have 'PnP O/S' setting enabled in
 your BIOS, it doesn't assign the settings, but instead, lets
 the operating system do it.

I recently fought with a similar problem while trying to get the NIC
of my new notebook working (an ASUS L8400K with onboard RTL8139 chip).
I'm not entirely sure whether this has much to do with the original
problem -- but anyway, just for the record:

The card was being detected and I could load the appropriate driver
module. But when trying an ifup or ifconfig, I always got a
"SIOCSIFFLAGS: Resource temporarily unavailable".

A google search provided converging evidence that this has to do with
an IRQ conflict, which made me take a closer look at the messages
being output while the card was being detected. There I saw that the
PCI BIOS had assigned IRQ 0 (!) to the NIC -- no surprise there were
conflicts...

Further googling revealed that this is a M$-made PnP issue, so I looked
for a way to disable PnP in the BIOS.  Unfortunately, it took me a while
to realise that this ridiculous "Win98/W2K" vs. "Other OS" option in
the Phoenix BIOS (which I had not taken seriously before) really meant:
PnP enabled vs. PnP disabled...
Setting it to "Other" immediately solved the problem.

To quote one of the NIC driver gurus, Donald Becker:

  The PnP OS problem occurs because Microsoft has convinced BIOS
  makers to modify their PCI device configuration from the previous
  rational standard, to one that works well only with Microsoft
  operating systems. Where previously the BIOS allocated resources for
  and enabled the PCI device by default, it now does so only for boot
  devices and audio devices. (Why are audio devices specifically an
  exception? Because MS-Windows can't handle the resource allocation
  for them!)
  
  The solution is to either update to the latest driver, (the drivers
  are being re-worked to enable the devices) or to disable the PnP OS
  setting in the machine's BIOS setup.
  
  The reason Microsoft had to have this change implemented for them was
  that MS-Windows still handles some devices with real-mode drivers,
  and this change makes it easier to mix real-mode and protected-mode
  device drivers. This is an excellent example of Microsoft using its
  dominant position in the software industry to force a technical change
  that is detrimental to other operating systems.

  (Donald Becker, http://scyld.com/expert/modules.html )

Sheesh!

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: default shell

2001-08-07 Thread Erdmut Pfeifer
On Tue, Aug 07, 2001 at 06:49:08AM -0500, will trillich wrote:
  [EMAIL PROTECTED] (pkm) writes:
   
   hey... how can I set my default shell (when I don't have root
   access)... I'm being forced to use csh but I want to use bash
 
 if you don't wanna have to also log out of your default shell,
 you can do something like this:
 
    % bash && exit
 
 then when you log out of your bash invocation, your csh instance
 will exit as well.

or simply do a

$ exec bash

This will *replace* the current shell (csh) instead of creating a new
process for bash (fork + exec). You'll save a bit of memory that way,
especially if you do it many times in your 20+ terminals...

Cheers

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Very odd programming trouble

2001-08-07 Thread Erdmut Pfeifer
On Tue, Aug 07, 2001 at 03:20:52PM +0100, J.A.Serralheiro wrote:
 /*Hi folks. Heres a very odd trouble ( for me at least)
   I was reading a book where it was stated that
   char *ptr = "text"; is an allowed declaration and that
   the compiler automatically allocates space for the string "text" and for the
   '\0' terminating character (true).
   I decided to try it with strcpy(char *dest, const char *src) and 
   there seems to be a problem. Whenever dest is a pointer declared as above
   theres a segmentation fault. I tried the code bellow which suffers from 
   exactly the same. The problem seems to relly In the expression
while ( (*dest++ = *src++)!= '\0' ) .
 I tried all the combinations with strcpy, but only this one gave odd results.
   It seems that the compiler as some dificulties assigning *dest++ = *src++
   when dest is a char *ptr = "kljdflg". But when src is this kind of
 pointer and dest is an array ( as so declared) ,
   it works fine. Its not very usual to declare strings this way
   but its stated as ansi compliant, and the compiler silently accepts it
 without any warnings.
   The code is set to the particular combination where a SIGSEGV is generated
   dest=s1;    /* char *s1 = "ldksj"; */
   src=string; /* char string[] = "dflkjg"; */
 
 Can someone solve this mistery ?

char *s = "whatever";

declares a string constant. As the name 'constant' implies, this is not
modifiable, so the compiler may decide to place it in the code segment
of the program, which is typically flagged read-only.  If you try to
modify that memory location you'll get the segment access violation.

Your options are to use a local variable as the string buffer (which
will reside on the stack), a global variable (data segment) or allocate
memory dynamically from the heap with malloc().

Cheers

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: disabling DELETE in potato's apache

2001-07-24 Thread Erdmut Pfeifer
On Tue, Jul 24, 2001 at 06:01:03PM +0200, Martin F. Krafft wrote:
 hi all,
 how can i disable the DELETE command for all my virtual sites in
 apache?

typically, you wouldn't have to do anything, as DELETE is not
implemented in the core apache feature set. So, unless you have
compiled in or dynamically loaded and activated some module that
implements this HTTP method, there's nothing to worry about ;)

One module that implements it is mod_put -- for more info see

http://hpwww.ec-lyon.fr/~vincent/apache/mod_put.html

Another possibility would be to bind the DELETE method to some CGI
script via the Script directive of the module mod_actions (which is a
base module) -- but I guess you haven't done that either...

Also, mod_dav for WEBDAV (RFC 2518) support implements the DELETE
method:

http://www.webdav.org/mod_dav/

(I've heard rumors that there's a similar module mod_webdav (reported
to be by www.cyberteams.com), but I've never gotten hold of that one)

Cheers,

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Draft printing.

2001-07-13 Thread Erdmut Pfeifer
On Fri, Jul 13, 2001 at 04:30:45PM +0400, Alexey wrote:

 I print ps files with gs (-sDEVICE=stcolor).
 Is there any way to print in economy mode?

you might want to check whether the device driver ('stcolor', maybe
also 'uniprint') has a specific option for activating draft/economy
mode. A detailed listing of options is in the docs:

http://www.cs.wisc.edu/~ghost/doc/AFPL/6.50/Devices.htm#STC_epson_stylus
(substitute your gs version for the "6.50" in the URL)
   
If there is no such option, you might consider tweaking the printer
init code sequence to include the ESC sequence that enables economy
mode (you'd have to look that up in the printer's manual).
The init sequence can be overridden (not appended to) with the option:

-sescp_Init="<ESC-string>"

Printing at a lower resolution (e.g. -r180x180) might have a similar
effect as 'economy'. YMMV.
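
E.g., as a first experiment, just the lower-resolution variant (generic gs
flags only, nothing stcolor-specific; adjust the output device and file name
to your setup):

gs -q -dNOPAUSE -dBATCH -sDEVICE=stcolor -r180x180 \
   -sOutputFile=/dev/lp0 file.ps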

HTH,

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: cannot open display

2001-06-21 Thread Erdmut Pfeifer
On Thu, Jun 21, 2001 at 03:45:38PM -0400, Hall Stevenson wrote:
  On Thu, 21 Jun 2001, G.LeeJohnson wrote:
 
  You can narrow it down by specifying the allowed hosts:
  xhost + allows every host
  xhost +foo allows computer 'foo' to connect
  xhost - allows only the owner of the display
 
  I do not know how to specify a user. You could try
  'xhost localhost' for allowing all users from localhost.
 
 There's a secure way to accomplish this, but I don't
 remember it. I do use it at home though and can post it later.
 I'm surprised no one's already done so though...

<g>  I think what you're looking for is:

after su'ing to root, do

$ xauth merge ~USER/.Xauthority

where USER is the regular user's username.

Depending on context the right name might still be in the variable
$USER, so the following should work too

$ xauth merge ~$USER/.Xauthority

$USER may already be set to 'root' in some other contexts, though. 
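
I.e. the whole sequence would look something like this (the username is
just an example):

su -
xauth merge ~jdoe/.Xauthority     # jdoe = the user owning the X session
export DISPLAY=:0                 # if not already inherited from the session
xterm                             # quick test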


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Shutting down as a user

2001-04-27 Thread Erdmut Pfeifer
On Fri, Apr 27, 2001 at 12:27:06PM +, Victor wrote:
 Of course it works but it reboots the system, doesn't shut it down. That's 
 what I want.

take a look at /etc/inittab and search for 'shutdown' (preceded by some
comment about ctrl-alt-del). There you can change the arguments to shutdown,
i.e. replace '-r' (=reboot) by '-h' (=halt).
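
The line in question typically looks like the following (quoted from memory,
so check your own /etc/inittab); after editing it, make init reread the file:

# in /etc/inittab, change
#   ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
# into
#   ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -h now
# then, as root:
telinit q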

 On Friday 27 April 2001 08:26, Sebastiaan wrote:
  Hi,
 
  does an ordinary ctrl+alt+delete not work?
 
  Greetz,
  Sebastiaan
 
  On 27 Apr 2001 [EMAIL PROTECTED] wrote:
   Using debian on a stand-alone laptop I usually work as an ordinary user
   and find somewhat awkward the fact that I have to su in order to shutdown
   the PC. Is there a way to power my PC off as a user?

-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: compile problem--where 2 look 4 cause? (short story, long info)

2001-04-27 Thread Erdmut Pfeifer
On Thu, Apr 26, 2001 at 11:54:31PM -0700, Kenward Vaughan wrote:

 I hope someone can give me clues about what's screwing up the compilation of
 a molecular docking app I'm trying to assemble for my classes... It is
 supposed to compile on a number of *nix platforms including Linux, but my
 attempts die at the linking stage.
 
 (...)
 
 Can someone suggest places to look for the cause of this?  Is there other
 information I could find to help?  I have tried the nm command with the
 three functions referenced below, and the only consistent difference I can
 see is the lack of a bunch of alphanumerics after the name shown on the last
 line of that listing.

this small difference in the symbol names _could_ be an indication that
there's some kind of prototype mismatch. You might want to use nm's -C
option to demangle those cryptic alphanumerics. This will give you a more
human-readable output, though I'm not sure whether it'll help you beyond
that... (see 'man c++filt' for a short intro on what mangling is about).
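
E.g. to compare what the caller expects with what the library provides:

nm -C readPDBQ.o | grep get_atom_type     # what the caller expects
nm -C libad.a    | grep get_atom_type     # what the library provides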

I would try to locate the implementation of the 'get_atom_type(char *, char *)'
function in the source code (most probably in get_atom_type.cc) and see
whether it has the required prototype (the one that the linker's undefined
reference message complains about).
Somehow, however, I doubt that a bug like this would have made it
into the AutoDock distribution. Or is this a brand-new release that no
one else has tried to compile yet?

Too bad this isn't open source -- otherwise I could have given it a
quick try myself... Anyway, good luck!

Erdmut



 (...)

 readPDBQ.o(.text+0x465): undefined reference to `get_atom_type(char *, char
 *)'
 collect2: ld returned 1 exit status
 make: *** [autodock3] Error 1
 
 ---end of run--next is a listing by nm of libad.a for get_atom_type
 
 daddy:/home/local/src/autodock/dist305/src/autodock# nm libad.a|grep -i
 get_atom_type
  U get_atom_type__FPcT0
  U get_atom_type__FPcT0
  U get_atom_type__FPcT0
 get_atom_type.o:
  T get_atom_type


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: login hassles........

2001-04-27 Thread Erdmut Pfeifer
On Fri, Apr 27, 2001 at 06:38:51AM -0700, Pad Bambury wrote:
 Hey all,
 problem with logging in to a pc.  Can shell in
 remotely, and log in using the consoles, but the gnome
 login merely disappears as if it's going to work and
 then brings you back to the login screen.

typically, the graphical display/login managers like gdm exhibit this
behaviour if your login shell isn't listed in /etc/shells -- so you
might want to check that (if unsure about your login shell: it's the
last field of your line in /etc/passwd).
I'm not sure, however, whether in that case it would be possible to
log in at all (remotely or via console)...
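
A quick check (substitute the actual username, of course):

grep '^username:' /etc/passwd | cut -d: -f7     # your login shell
cat /etc/shells                                 # is it listed here?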

 Did the command startx > x.log in one of the consoles
 and got this output, can anyone shed any light on it
 for me please:
 
 Fatal server error:
 Server is already active for display 0
   If this server is no longer running, remove
 /tmp/.X0-lock
   and start again.

if you have X already running, then this is the message one would
expect. If not, then simply do what the message suggests ;)

Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: MC

2001-04-24 Thread Erdmut Pfeifer
On Tue, Apr 24, 2001 at 08:25:55AM +0200, Ales Jerman wrote:
 Why is mc in xterm blackwhite, why there are no colors? Because ls
 --color does what I asked for, why not mc? (mc-Midnight Commander)
 Thanks!

whether mc automatically starts up in color mode depends on some
termcap/terminfo setting of your terminal.
There's a commandline option to force it to start in color mode:

mc -c

Alternatively, you can set the environment variable COLORTERM before
starting mc.
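
I.e. either of the following (any non-empty value for COLORTERM should do,
IIRC):

mc -c
# or, for bash:
COLORTERM=1; export COLORTERM; mc
# (csh users: setenv COLORTERM 1)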

HTH,
Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: [perl] glob() and filenames w/ spaces

2001-04-24 Thread Erdmut Pfeifer
On Wed, Apr 25, 2001 at 01:37:39AM +0200, Sven Burgener wrote:
 Hi all
 
 Sorry if this is too off-topic, but on debian-user there is usually
 excellent help, so I cannot resist. =)
 
 How do I deal with the situation where glob(*) is used and where there
 are files that contain spaces in their file names?
 
 I know spaces in file names suck. I have no choice. It's the way it is.
 
 What I like about glob() is that it returns the whole path as opposed to
 readdir(DIR) which only returns the top of the path.
 That is very useful for my situation, so I need this property.
 
 So, is there any way to make glob(*) smart about files with spaces in
 their names?
 
 Has anyone dealt with something similar before?

afaik, it depends on the perl version whether spaces in filenames will cause
problems. Perl-5.004 was buggy in that respect, while perl-5.005_03 and
perl-5.6.x work correctly:

$ ls -1
test 1.dat
test 2.dat

with version 5.004 you get:
$ perl -e 'print join "\n", glob("*")'
test
1.dat
test
2.dat

with both 5.005 and 5.6 you get (at least I do here...):
$ perl -e 'print join "\n", glob("*")'
test 1.dat
test 2.dat


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: EMAIL PROCESSORS WANTED IMMEDIATLY

2001-04-24 Thread Erdmut Pfeifer
On Tue, Apr 24, 2001 at 07:42:07PM -0400, Rob Mahurin wrote:
 On Tue, Apr 24, 2001 at 09:02:53AM -0700, paul wrote:
  EMAIL PROCESSING COMPANY LOOKING FOR EMAIL PROCESSORS IMMEDIATELY, 
  TO SUSTAIN EXPLOSIVE GROWTH. EARN $5,000- $10,000.00 AND 
  MORE MONTHLY. NO EXPERIENCE NECESSARY. SEND AN EMAIL TO 
  [EMAIL PROTECTED] WITH EMAIL PROCESSOR IN THE 
  SUBJECT LINE. WE WANT SERIOUS INQUIRIES FROM PEOPLE WHO 
  WANT TO MAKE SERIOUS MONEY!  
  
  To remove from this list just insert remove in the 
  subject line.
  Thank you for your time
 
 Paul,
 
 You should look into procmail.  It's an excellent email processor and
 available for much less than $5,000.

;)

He's probably just trying to collect email addresses with a status of
verified-by-human, which he himself can then sell to earn the $5000.



Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: why does ps-eps conversion reduce line thickness?

2001-04-23 Thread Erdmut Pfeifer
On Mon, Apr 23, 2001 at 01:08:30PM +1000, Mark Mackenzie wrote:
 I do ps-eps conversions using:
 
 echo -n '\004' | gs -q -dNOPAUSE -sDEVICE=epswrite \
 -sOutputFile=box.eps box.ps > /dev/null
 
 using box.ps below. When box.ps is printed, it comes out as a 1mm thick
 1" box (quite dark). When box.eps is printed, the lines are very thin
 (perhaps .2mm) and faint. 
 
 This is when printing to a postscript printer hp2100tn. I think if
 you are using gs via magicfilter for a non-ps printer, the printout is
 ok.
 
 Does anyone know why this is the case? The problem is that when I
 include my ps files in a latex document most lines are too faint, and the
 thickness is varying as if I had an aliasing error.

Hi,

in the postscript drawing model there is a clean separation between
constructing the paths that define the shapes to draw, and the actual
rendering. The paths can be imagined as invisible descriptions of the
shapes, while the rendering process then puts some color onto the
virtual page. The rendering itself can basically be stroking (like
taking a pencil and moving/putting color along the path), or filling
the area circumscribed by a closed path.
Actually, there are three steps: (1) path construction, (2) rendering
and (3) transferring the internally built-up page image onto the
physical paper or some other media.

Now, how does that relate to your problem?  If you want to stroke the
path (the square in your case) you need to tell the postscript
interpreter which line width, line style, etc. you wish to use. In the
example below this didn't happen anywhere, so the interpreter uses
some built-in defaults. These are device-dependent, i.e. they can vary
from printer to printer. Typically the default line width is set to
the device resolution. So if you have a 600dpi printer, the lines will
be 1/600 inch wide, whereas, if you let ghostscript output the page
to the screen, the same line will usually be one pixel wide.
But these are just the defaults. You can of course set any desired line
width, and the postscript interpreter will try to approximate that
thickness as well as possible at the given device resolution.
The operator for setting the line width is, guess what, setlinewidth ;)
The units in which you specify the width is by default 1/72 inch, but
you can change that as well.
Thus to get a line width of, let's say, 5/72 inch, you simply put

  5 setlinewidth

somewhere before executing the stroke operator, as I've done in your
code below. The exact place is not so important as long as it's before
the stroke, so putting it before the newpath would be equally fine,
for example. You can specify _any_ width, not just integer multiples of
the unit currently in effect. So, something like 1.35 setlinewidth is
ok, too.

In case you'd rather use mm units, you could put

  72 25.4 div dup scale

at the beginning of the postscript document, etc. etc. ...

So, if you write your (simple) documents from scratch, things are
basically quite easy and flexible. On the other hand, if you'd have to
manipulate some third-party postscript document, it can of course be
considerably harder to find the correct place to modify...

If you feel like doing more advanced things than drawing a simple square,
I'd recommend that you get the PostScript Language Reference Manual from
Adobe -- it's available for free:

http://partners.adobe.com/asn/developer/technotes/postscript.html  (overview)
http://partners.adobe.com/asn/developer/pdfs/tn/PLRM.pdf  (direct link)

(also available as a printed book)

HTH,
Erdmut


 box.ps - from postscript.tar.gz on the net somewhere.
 %!
 %% Draws a one square inch box an inch in from the bottom left
 
 /inch {72 mul} def

 newpath  |
 1 inch 1 inch moveto |
 2 inch 1 inch lineto |  this is the path construction
 2 inch 2 inch lineto |
 1 inch 2 inch lineto |
 closepath|

  5 setlinewidth
 stroke  this renders the square

 showpage    this transfers it to the paper
 


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Corruption many (unknown) files; how best to restore?

2001-04-22 Thread Erdmut Pfeifer
On Sun, Apr 22, 2001 at 09:37:55AM -0500, Kent West wrote:
 Hi all.
 
 HISTORY (the actual question is below):
 
 For some reason my Sid box at home has been locking up in X lately. I don't
 know if it's an X problem, or a hardware problem, or what. I'm running 
 2.2.18; have been for months without any problem, so I doubt it's any bug
 within the kernel.
 
 I've just run memtest86, and it seems that my RAM's fine.
 
 I had a hard drive fail a couple of months or so ago; I pulled it out and
 replaced it (it had /usr and /home on it) and rebuilt as best I could; I
 think I got it working.
 
 Then last week I put that failed drive back in to see if I could recover
 any data off of it before consigning it to it's eternal resting place. After
 finding a bay to rest it in and plugging in an IDE data cable, I realized I
 didn't have any extra power plugs for it. So I left it that way until I
 could get a power splitter. I figured that not having any power to it, the
 system would ignore it.

I'm afraid that not having the power cable attached to the drive _may_
have been the initial cause for the lock-ups you're experiencing...

I once created myself a similar problem on a SCSI-based box simply by
attaching a fairly long external SCSI cable to connect my scanner.
Obviously, the cable was too long, as it apparently deformed the
electrical impulses on the SCSI bus in such a way that sector addressing
of the drive happened to continue to work but in a more or less random
fashion! Now, I don't have to elaborate any further on what that means
in terms of file system corruption... Actually, hundreds of files got
corrupted :(
The strange thing was that I didn't get _any_ error messages from the
SCSI subsystem -- after attaching that harmful cable, I was able to
work happily for another quarter of an hour in GIMP before the system
finally completely froze (and interestingly, the scanner did work).
Post-mortem analysis of the corrupted files revealed that the data I
had written to disk during that quarter of an hour seemed to have
been randomly scattered all over the file system, doing its destructive
job as thoroughly as possible...
I'm sharing that story because my educated guess would be that the freely
floating drive (as to electrical charge) in your case might well have
affected the signals on the IDE bus in a similar way. Hopefully not as
thoroughly as in my case ;)
( Also, while we're on the topic, my advice to anyone playing with the
idea of exceeding the maximum recommended cable lengths for whatever
reason: just don't do it! Or have a recent backup :)

 
 I think maybe my suspicion was incorrect, and that the system saw this drive
 and got confused and started doing nasty things. I shut down and unplugged
 the drive, and restarted the system. Everything looked fine, except that
 KDM no longer started an X session; it acted the same way that it would if
 there was something wrong, like a wrong mouse section, in the XF86Config
 file. But I could start X with the startx command, so I just figured it was
 some glitch I downloaded with my most recent upgrade of Sid.
 
 Nevertheless, since then I've started having lockups in X. It may be
 related to a Windows-based Backgammon game I'm running via Wine (this game
 typically bombs now, whereas it used to work fine).
 
 To make this (very) long story short, the repeated crashing and subsequent
 resets (no way to ssh/telnet in, and loss of keyboard control) has tended
 to do nasty things to my file system.
 
 
 ACTUAL QUESTION:
 
 I don't know which files/packages are corrupt; is there any automated way
 to have the system check to see what's installed, what's broken, and what
 needs to be reinstalled to fix what's broken?

I would check my most recent MD5-checksum filelists against the existing
files to produce a list of what's damaged -- hopefully you did create
MD5 lists while the system was still working properly? ;) That doesn't
solve the automatic reinstallation part, though.
Maybe someone else has a better suggestion.

Anyway, it's a good moment to reconsider installing a tool like tripwire
(www.tripwire.org) -- or if you think that's overkill, run something like
"find / -type f | xargs md5sum > files.md5" periodically...
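
As a rough sketch of that approach (paths are just examples):

  # while the system is known to be good:
  find / -xdev -type f | xargs md5sum > /var/local/files.md5

  # later, list the files whose checksums no longer match:
  md5sum -c /var/local/files.md5 | grep -v ': OK$'

(-xdev keeps find on one filesystem, so /proc and the like don't get
hashed -- adjust to taste.)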

Good luck,
Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: elm-me+:Failed: :No such file or directory

2001-04-17 Thread Erdmut Pfeifer
On Mon, Apr 16, 2001 at 02:28:48PM -0400, Sebastian Canagaratna wrote:
 Hi:
 
   I recently changed to Debian Testing and elm-me+2.4 that comes with
   it. When I try to send myself an email using elm I get the
   error message:
 
   Failed: :no such file or directory.
 
   WHen I try try to debug with elm -d11, I get the message:
 
Warning: system created without debugging, request ignored.
 
However, I can send the mail to myself using 
 
sendmail -bm 
 
 Elm is able to pick up the mail and display it. So clearly the
problem must be with elm configuration rather than exim. 
 
What file is elm not able to find? How do I proceed from here?


strace is your friend in these situations... Try to run something like

  strace -e trace=file,write -f -o /tmp/elm-strace.out <your elm-command here>

and search /tmp/elm-strace.out (near the end) for a line resembling

  open("<the file in question>", ...) = -1 ENOENT (No such file or directory)

immediately followed by a couple of write() calls printing out the
"Failed: ..." error message you see.
This should give you an idea of what's going wrong.

Good luck,
Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: make xconfig button.ref eroor

2001-04-17 Thread Erdmut Pfeifer
On Tue, Apr 17, 2001 at 10:37:47AM -0400, Jesse Goerz wrote:
 
 I'm trying to recompile a kernel but keep getting this error trying to use 
 make xconfig:
 
 storm:/usr/src/linux# make xconfig
 rm -f include/asm
 ( cd include ; ln -sf asm-i386 asm)
 make -C scripts kconfig.tk
 make[1]: Entering directory `/usr/src/kernel-source-2.2.18pre21/scripts'
 cat header.tk > ./kconfig.tk
 ./tkparse < ../arch/i386/config.in >> kconfig.tk
 echo "set defaults \"arch/i386/defconfig\"" >> kconfig.tk
 echo "set ARCH \"i386\"" >> kconfig.tk
 cat tail.tk >> kconfig.tk
 chmod 755 kconfig.tk
 make[1]: Leaving directory `/usr/src/kernel-source-2.2.18pre21/scripts'
 wish -f scripts/kconfig.tk

can you run any other X program (try xclock, for example) under these
very circumstances? If you get the same Xlib error, then I suppose you
were logged in as root while running make xconfig, but started the X
server under your regular userid (which is good, btw). In this case do a

  xauth merge ~<your regular username>/.Xauthority

(...hopefully not starting the same old discussion about what else you
can do and why not -- it's all in the archives ;)

Regards,
Erdmut

 Xlib: connection to :0.0 refused by server
 Xlib: Client is not authorized to connect to Server
 Application initialization failed: couldn't connect to display :0
 Error in startup script: invalid command name button
 while executing
 button .ref
 (file scripts/kconfig.tk line 51)
 make: *** [xconfig] Error 1


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: can't print from acroread

2001-03-25 Thread Erdmut Pfeifer
On Fri, Mar 23, 2001 at 01:20:10PM -0800, peanut butter wrote:
 
 (...)
 
 To really simplify things while exposing the basic problem, I saved the
 pdf from acroread as a postscript file.  I then wrote one script that
 prints this postscript file with the lpr -D5 option saving the output
 to one file while recording the returned status from lpr to another.  I
 then ran this exact same command once from the command line and once
 from within acroread.  Now, mind you, the script accepts no input
 arguments . . . so the execution of the script is exactly the same,
 printing the exact same postscript file, while printing correctly from
 one execution (command line) yet not from the other (acroread).

good idea...

 I can offer the full debug report from either or both runs to anyone
 who should care to view it/them but in attempt to keep things as concise
 as possible, just below is the segment from both reports where they begin to
 deviate.

thanks for the clear report. From this debugging information and a
quick look at the lprng source, I would say that the problem occurs
while trying to fork/exec the so-called input filter, though I do not
yet have a clear idea why it fails...

The waitpid(2) call returns a -1, which indicates that the waitpid
failed. Actually, there are three error/return codes involved here: the
exit code of the waited-for child process (the filter), the return code
of the waitpid function itself, and in case the latter returns -1, the
errno being set by waitpid, which _might_ provide further details on
what went wrong (not sure whether it really would in this case, though).
Unfortunately, this error number doesn't seem to make it to the
debugging logs (or else we should see another diff) -- the -1 appears
to be propagated to the logs instead. Although it would generally be
easy to add another printf() to the lprng code, outputting the errno, I
assume that you would rather avoid having to recompile the code and
having to make sure that the modified version gets installed in the
appropriate place. I guess we should do things like these as a last
resort only.
Instead we might try to apply the same "wrap a script" technique here
as well, by substituting a script for the filter program run by lprng.
The script could output a few interesting things like command line
args, environment, the errno in question, etc. before/after running the
filter... There is a faint hope that something will differ here... :)
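
A minimal sketch of such a wrapper (untested -- the filter path is just
an example, use whatever the if= entry in your printcap points to):

  #!/bin/sh
  # log arguments and environment, then run the real filter
  REAL_FILTER=/etc/magicfilter/ljet4-filter
  { echo "filter args: $*"; env; } >>/tmp/filter-debug.$$ 2>&1
  $REAL_FILTER "$@"
  RC=$?
  echo "filter exit code: $RC" >>/tmp/filter-debug.$$
  exit $RC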

First, however, I'd like to take a look at the full logs available
already, so feel free to send them to me privately. In particular, I'm
not sure yet as to how the other diff (the fd-0..5 vs. fd-0..8 thing)
is related to the waitpid error ocurring later on. Maybe that is the
real place where things start to go wrong. The complete logs will make
it easier to dig through the appropriate portions of the source.

Also, I'd like to take a look at your printcap file (especially the
if-specification which is in effect for the printer in question), so
be sure to attach that as well. BTW, which printer are you using -- is
it a native postscript printer, or are you using ghostscript as a
filter?

If you haven't done so already, you might also want to try to run your
test script from the command line but in the background. This might
help to rule out issues with a controlling terminal being required by
the filter. Don't know why it would need one, but who knows ... just
an idea.

Erdmut

PS: I cannot promise to be able to take a closer look at this immediately,
but I _will_ as soon as time permits. After all, getting a deeper
understanding of the lprng mechanics might help me to eventually solve
a somewhat similar problem I'm experiencing myself sporadically ;)


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: Setting time to 24 hour format

2001-03-19 Thread Erdmut Pfeifer
On Mon, Mar 19, 2001 at 10:27:34AM -0600, Bryan Walton wrote:
 Does anybody know how I can set my clock so that when I type uptime the
 time will be shown in 24 hour time rather than as below:
 
 [EMAIL PROTECTED]:~$ uptime
  10:25am  up 4 days,  1:32,  6 users,  load average: 0.06, 0.07, 0.08


I don't think there is a way to achieve this without editing the source.
uptime is neither locale aware, nor does it have a command line option
for this.
A quick look at the source reveals that the output format is hardcoded...
The relevant fragment from whattime.c (called from uptime.c -- in the
procps pkg):

  pos = sprintf(buf, " %2d:%02d%s  ",
                realtime->tm_hour%12 ? realtime->tm_hour%12 : 12,
                realtime->tm_min, realtime->tm_hour > 11 ? "pm" : "am");


I guess you'd have to change this, if the 12 hour mode really bothers
you badly enough ;)

Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: can't print from acroread

2001-03-16 Thread Erdmut Pfeifer
On Fri, Mar 16, 2001 at 11:18:00AM -0800, peanut butter wrote:
 Hi, I'm using lprng with filter /etc/magicfilter/ljet4-filter to an HP
 Laserjet 5M printer.  Though things print fine from the command line,
 if I open a pdf with acroread, nothing prints when clicking the print
 button from within the application.  A pop-up window will appear saying
 that the print job has been submitted and it will sequentially course
 through each page number of the document giving every sign that things
 should be printing yet nothing ever appears in the print
 queue, nothing ever comes out of the printer and an error message
 never shows up in any system log file (that I've ever checked, anyhow).
 
 Supposing this is too specific a problem for anyone to immediately
 have an idea as to the cause, can anyone at least suggest
 some manner at which to attempt to trace what's going on here and
 where the failure is occurring?

can you print the file in question if you have acroread direct its
output to a file, which you then send to the printer manually?
Can you view the resulting PS-file in ghostscript?

Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: can't print from acroread

2001-03-16 Thread Erdmut Pfeifer
On Fri, Mar 16, 2001 at 01:01:31PM -0800, peanut butter wrote:
  can you print the file in question if you have acroread direct its
  output to a file, which you then send to the printer manually?
  Can you view the resulting PS-file in ghostscript?
 
 Yes to both.  Sorry not to mention this right off.  I mentioned this the
 first time I attempted to post this message but apparently wasn't yet
 fully subscribed and neglected to save myself a copy.
 
 If I save the file as a postscript from the acroread print pop-up
 window, I can print the job without a problem from the command line
 and, thus, to no surprise, can also correctly display it with gv.
 Thus, the job seems to somehow never be making it outside acroread.
 
 In trying to diagnose the problem, I tried using another printer that I
 didn't realize hadn't been configured for the system by changing the
 printer command to /usr/bin/lpr -Plex.  I received the same print
 error message I would have received from lprng if I had tried this from
 the command line yet it was displayed within a pop-up window from
 acroread.  So obviously acroread is talking to lprng to some degree.

well, this is really a little strange...

Perhaps you might want to try the following to get some more information:

Write a shell script something like

#!/bin/sh
ls -l $1 >/tmp/acroprint-debug.$$
cp $1 /tmp/acroprint-out.$$
/usr/bin/lpr -P<printer> $1  # substitute your printer here
echo $? >>/tmp/acroprint-debug.$$

  (the .$$ are not required, they just create a separate pair of files
  for each try, with the PID appended)

and run this instead of the /usr/bin/lpr from acroread's print dialog box
(e.g. Printer Command: /home/<name>/test-print  -- no further options)

This should

(a) give you some info about the temp-file that acroread creates (-> ls)
(b) copy the temp-file to a safe place, before it gets deleted (-> cp)
(c) try to run the actual print command -- maybe it works from here
(d) capture the return code of the print command -- should be 0 if OK

Then you can also compare the /tmp/acroprint-out with the file you
created when printing directly to a file from within the dialog box.
I guess both files should be identical.

Also, feel free to add other debugging commands you could think of
to the script...


Don't know whether it helps ;)
Erdmut


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: can't print from acroread

2001-03-16 Thread Erdmut Pfeifer
On Fri, Mar 16, 2001 at 01:34:45PM -0800, Bob Nielsen wrote:
 
 I had a similar problem with only certain .pdf documents.  The error
 light on my Lexmark Optra E312 would flash when lpr was sending data to
 the printer and nothing was printed although there were no error
 messages.  I used pdftops to convert the file to .ps and noticed that
 the first line said %!PS-Adobe-3.0.  My printer only handles PS

just a short note:
contrary to what one might think, the 3.0 in %!PS-Adobe-3.0 does not
refer to the PostScript language level. Instead it states that the
PS document adheres to the Adobe DSC (document structuring conventions)
version 3.0 (with versioning independent from the PS level). The DSC
mainly specify the syntax and semantics of the %%-comments.

Erdmut

 level 2, so I believe this was the problem.  I think it had something
 to do with the way the document was created and neither acroread nor
 pdftops could convert to level 2 postscript, although I had selected
 level 2 in the acroread print dialog box.


-- 
Erdmut Pfeifer
science+computing ag

-- Bugs come in through open windows. Keep Windows shut! --



Re: After Debian install, XMS and HIMEM errors prevent Win95 launch.

2001-03-07 Thread Erdmut Pfeifer
On Tue, Mar 06, 2001 at 10:56:42PM -0500, Noah L. Meyerhans wrote:
 On Tue, Mar 06, 2001 at 02:23:58PM -0800, Robert Cymbala wrote:
  
Error: HIMEM.SYS is missing... or Error: Unable to control A20
line... After Installing Norton SystemWorks
 
 Very odd.  I have that same problem on my laptop (Win98  sid).
 However, on mine it isn't merely after installation, but it happens all
 the time.  Windows refuses to run except on a hard reboot.
 
 I don't pretend to understand it.

<rant>
This A20-line crap rates as one of the most insane ideas ever put forth
in the whole history of PCs. Anyone who doesn't know already may want to
read up a little on what it's about, for example here

http://www.phys.uu.nl/~mjanssen/control.php3?chapter=6
http://www.phys.uu.nl/~mjanssen/control.php3?chapter=9
(a google search for the exact phrase "A20 line" will turn up a couple
of more links)

In short, it was born in a desperate attempt at gaining another 64k
(k!) of memory addressable beyond the 1M limit of the so-called real
mode of 80x86 processors. It was realised by introducing some weird
mechanics into the PC hardware that allows that special
addressing scheme to be switched on or off.
It's one of the prototypical examples of how an ad-hoc solution of no
real benefit can cause headaches for thousands of people. Even years
after its invention it seems to haunt innocent users. Unbelievable!
</rant>


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Simple c program won't compile

2001-03-05 Thread Erdmut Pfeifer
On Tue, Mar 06, 2001 at 12:26:30AM +1030, Mark Phillips wrote:
 Hi,
 
 The following program:
 
 
 #include <stdio.h>
 #include <math.h>
 
 int main(int argv, char **argc){
   double x;
 
   x=sqrt(5.0);
 }
 
 
 does not compile.  Instead I get the errors:
 
 $ gcc thick.c
 /tmp/ccU9fgSr.o: In function `main':
 /tmp/ccU9fgSr.o(.text+0x16): undefined reference to `sqrt'
 collect2: ld returned 1 exit status

you have to link it with the math lib:

gcc thick.c -lm
            ^^^
Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Anacron schedule

2001-03-05 Thread Erdmut Pfeifer
On Mon, Mar 05, 2001 at 12:19:07PM -0600, Judith Elaine Bush wrote:
 
 
 I've edited /etc/crontab to change the time (ana)cron runs its daily,
 weekly, and monthly scripts. They still seem to run after 7 am and not
 at 5:25. The system has been rebooted since the crontab change (not my
 fault!), so any and all daemons have been restrted since the change.
 
 I am a little confused about the structure of the crontab command that
 refers to the daily, weekly, and monthly cron schedules. Since anacron
 is installed on my system, 'test -e /usr/sbin/anacron' returns
 0. Thus, since the second command only runs IFF the first command
 returns a non-zero status, it seems that these crontab entries don't
 trigger anacron or the run-parts. (And then the
 /etc/cron[daily|monthly|weekly] all have 0anacron scripts that seem to
 run run-parts on the self directory.
 
 Between my change making no difference and closely examining the
 crontab entries, I am now left puzzled where anacron is scheduled to
 run when the system is up for over 24 hours.

as there is /etc/crontab for cron, there is a /etc/anacrontab for anacron.
See man anacron[tab] for the details...

The suggested test -x ... lines in crontab are just to disable those
entries temporarily while anacron is installed. This way, if you decide
to uninstall anacron, the crontab entries will automatically be
reactivated, without you having to edit any config files.

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Anacron schedule

2001-03-05 Thread Erdmut Pfeifer
On Mon, Mar 05, 2001 at 03:58:10PM -0600, Judith Elaine Bush wrote:
 On Mon, Mar 05, 2001 at 09:12:33PM +0100, Erdmut Pfeifer wrote:
  
  as there is /etc/crontab for cron, there is a /etc/anacrontab for anacron.
 
 Indeed. Except the /etc/anacron specifies frequency (how many days
 apart should something be done) and delay (how long after anacron is
 invoked shoud something be done). The /etc/anachron file does *not*
 run anacron just after 7 am every morning.
 
  The suggested test -x ... lines in crontab are just to disable those
  entries temporarily while anacron is installed. This way, if you decide
 
 So, /etc/crontab is NOT where anacron is invoked. Somehow anacron is
 invoked around 7 am each day. If it's not in /etc/crontab, where is
 it?

sorry, I guess I didn't read your mail carefully enough...

I think what you are looking for is /etc/cron.d and the files therein.
Typically, when anacron is installed, there is a file /etc/cron.d/anacron
containing the line

30 7    * * *   root    test -x /usr/sbin/anacron && /usr/sbin/anacron -s


which explains the activity around 7 am you see.

The feature that cron is treating the files in /etc/cron.d as extensions
to /etc/crontab is a debian-specific modification of the regular cron
behaviour -- see the section DEBIAN SPECIFIC in man cron.

hope that helps now ;)
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: lag?

2001-03-05 Thread Erdmut Pfeifer
On Mon, Mar 05, 2001 at 06:52:43PM -0500, MaD dUCK wrote:
 is it just me and my mail server, or is the debian-users list like 10
 minutes behind?
 
 fishbowl:~/web/limerence.org date
 Mon Mar  5 18:52:33 EST 2001

for me the delay is around 40 minutes.


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: Re: C editor

2001-03-04 Thread Erdmut Pfeifer
On Sun, Mar 04, 2001 at 10:08:41PM +, Colin Watson wrote:
 Vishal Soni [EMAIL PROTECTED] wrote:
 Don't laugh. 
 Say i want to compile a solaris binary on my linux box. Can i get
 asolaris c-compiler and compile it on my linux box?? it that possible?
 
 Depends what you mean by compile a Solaris binary. If you've got the
 source code, then source is source is source; just compile it as normal.
 Don't bother compiling a Solaris compiler, it's a lot of work and won't
 help. If the source code isn't portable enough, you might have to fix it
 up a bit, but that depends.


not sure, but the initial question could also be read meaning he wants
to do cross-compiling, i.e. doing the build on Linux but running the
executable on Solaris.

If that's the intention, I would advise you not to try it, unless you
consider yourself an expert in these kinds of things. Although
theoretically possible, it's definitely nothing for the faint-hearted. ;)

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: anyone have a epson stylus photo 870 working?

2001-03-04 Thread Erdmut Pfeifer
On Sun, Mar 04, 2001 at 06:46:45AM -0900, Ethan Benson wrote:
 
 (...)
 
 i noticed, gs is simply broken, when run by lprng (or by anyone for
 that matter, it only works in X) all it does is spit out `svgalib:
 Cannot get I/O permissions.'  regardless of driver or anything i do
 this is all it does.  i have tried compiling it without svga support
 but that just makes it crash.
 
 (...)
 
 unfortunatly not, nobody has been able to tell me why gs does not
 function.  i think i may just return this thing and get a used Apple
 laserwriter off ebay, these are true postscript printers and won't
 need this filter crap.  


maybe we can get that gs-thing solved somehow, before you throw away your
printer... ;)

Actually this gs problem made me curious; however, I was unable to
reproduce it here, probably due to a different setup.
Which exact gs command do you use when getting the error? Which version
of ghostscript? Have you tried it with a non-dummy installation of libsvga,
if so, did it fail too? How did gs crash when compiled without svga
support? Maybe you could also post an strace of the failing gs command
(or send it to me privately if you feel that it is too lengthy for the
list)? 

Don't know whether I'll be able to help, but perhaps we could start
narrowing things down a little...

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: anyone have a epson stylus photo 870 working?

2001-03-04 Thread Erdmut Pfeifer
On Sun, Mar 04, 2001 at 06:11:10PM -0900, Ethan Benson wrote:
 
 stat("/proc/bus/pci", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
 open("/etc/vga/libvga.config", O_RDONLY) = 4
 fstat(4, {st_mode=S_IFREG|0644, st_size=16082, ...}) = 0
 read(4, "# Configuration file for svgalib"..., 16082) = 16082
 close(4)= 0
 open("/plato/eb/.svgalibrc", O_RDONLY)  = -1 ENOENT (No such file or 
 directory)
 ioperm(0x3b4, 0x2c, 0x1)= -1 EPERM (Operation not permitted)
 write(1, "svgalib: Cannot get I/O permissi"..., 37svgalib: Cannot get I/O 
 permissions.
 
 it shouldn't even be messing with svgalib, its not needed for acting
 as a filter.  

exactly -- at least that's what one would expect.
Normally gs should just load the svga shared lib, but not start reading
related config files, etc. However, the fact that in your case it does
proceed as if it wanted to init the svga driver, makes me guess that
there might be some problem with the device specification in the gs
command (that's why I asked for that exact command). In that case it
would be possible that gs falls back to the built-in default device
(x11 when in X, and that stupid svga thing when in console mode).

To clarify this further: what happens if you take the following
trivial PostScript fragment

%!PS
/Helvetica findfont 36 scalefont setfont
10 10 moveto (testpage) show
showpage

and for example run the following gs command

gs -sDEVICE=jpeg -sOutputFile=test.jpg -g150x50 -dBATCH test.ps

(assuming you saved the PS under test.ps, of course)

This should create a small jpeg file without messing around with
svgalib. When I run something like this under strace I don't see
anything like what you get above before it fails.
Does at least that work for you or do you get the same error?

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: anyone have a epson stylus photo 870 working?

2001-03-04 Thread Erdmut Pfeifer
On Sun, Mar 04, 2001 at 07:41:28PM -0900, Ethan Benson wrote:
 On Mon, Mar 05, 2001 at 05:22:50AM +0100, Erdmut Pfeifer wrote:
  
  exactly -- at least that's what one would expect.
  Normally gs should just load the svga shared lib, but not start reading
  related config files, etc. However, the fact that in your case it does
  proceed as if it wanted to init the svga driver, makes me guess that
  there might be some problem with the device specification in the gs
  command (that's why I asked for that exact command). In that case it
  would be possible that gs falls back to the built-in default device
  (x11 when in X, and that stupid svga thing when in console mode).
 
 the lpdomatic filters use the uniprint driver.  another version uses
 stp which is not available in gs.  (in either potato or woody/sid)

it seems as if there's a special gs package with stp included:

ftp://ftp.debian.org/pub/sourceforge/gimp-print/gs_5.10stp-10_i386.deb

also, in there you'll find a README.stp.gz with further info on how
to build it from source, etc. -- just in case you need to.
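
If you decide to give it a try, installing the .deb should just be a
matter of (as root):

  dpkg -i gs_5.10stp-10_i386.deb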

Can't tell you whether it works, though, because I don't own that
printer -- from the docs it sounds promising :)

Good luck,
Erdmut

 
  To clarify this further: what happens if you take the following
  trivial PostScript fragment
  
  %!PS
  /Helvetica findfont 36 scalefont setfont
  10 10 moveto (testpage) show
  showpage
  
  and for example run the following gs command
  
  gs -sDEVICE=jpeg -sOutputFile=test.jpg -g150x50 -dBATCH test.ps
  
  (assuming you saved the PS under test.ps, of course)
 
 this worked.  
 
 so it would seem debian's gs does not have a suitable driver for this
 printer?  
 
 -- 
 Ethan Benson
 http://www.alaska.net/~erbenson/


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: making thumbnails

2001-03-03 Thread Erdmut Pfeifer
On Sat, Mar 03, 2001 at 05:58:48PM -0500, Michael P. Soulier wrote:
 On Sat, Mar 03, 2001 at 04:32:57PM -0500, Michael P. Soulier wrote:
  Hey people. Is there an easy way to make thumbnails for large numbers of
  images? I'm thinking of Image Magick, but the mogrify -geometry argument
  doesn't maintain the aspect ratio. 
 
 Oh wait, it _does_ maintain the aspect ratio if you don't use the !
 argument. I'm ok now. :)


you might want to have a look at webmagick:

  http://packages.debian.org/stable/web/webmagick.html

The package is mainly a perl script for automatically creating
browsable thumbnail indices for large image collections. It is based on
perlmagick which is a perl wrapper for the imagemagick lib (as you
might have guessed).

It's highly configurable, and if you know a little perl and HTML you can
easily customize it even further...

Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: #! syntax

2001-02-26 Thread Erdmut Pfeifer
On Mon, Feb 26, 2001 at 04:23:52PM -0300, Christoph Simon wrote:
 
 This one should work (didn't try it):
 
   #!/bin/sh
 
   PERL=`which perl`
   tail +7 $0 | $PERL
   exit 0
 
   # start perl code
   ...
 
 ..assuming that perl code starts on line 7.


it wouldn't be Perl if there wasn't yet another way to do it:

#!/bin/sh
exec perl -x $0 "$@"
#!perl

# your perl code here ...


See the manpage 'perlrun' for why this works. Additional options to perl
can be put where the -x is.

cheers,
Erdmut


 
   The problem I found with env was that the shell incorrectly passes args to
   env:
  
  The shell DOES pass args correctly. RTFM (info bash):
  
  The arguments to the interpreter consist of a single optional
  argument following the interpreter name on the first line of the script
  file, followed by the name of the script file, followed by the rest of
  the arguments.
  
   
   % head -1 t348.sh 
   #! /usr/bin/env perl -w
   % ./t348.sh 
   env: perl -w: No such file or directory
   
   How does one get around this? Please don't say, Don't use perl.
  
  Just write some wrapper and use
  #!/your/wrapper
  
  wrapper will be runned as 
  
  wrapper 'name_of_your_script'


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: after 'su -', 'Can't open display'

2001-02-20 Thread Erdmut Pfeifer
On Wed, Feb 21, 2001 at 09:46:35AM +1000, James Sinnamon wrote:
 
 
 
 Martin,
 
 Thank you for your suggestion ...
 
 
  Hi,
 
  By default the Xserver doesn't listen to the tcp port (for security
  reasons) and I'll guess thats the reason for your problem. I've added the
  following few lines to the /root/.bashrc file:
 
 if [ ! $LOGNAME = root ] ; then
  export XAUTHORITY=/home/$LOGNAME/.Xauthority
 fi
 
  This works for me.
 
 
 ... but it didn't work for me.
 
 Please forgive my ignorance, but I don't really understand what this will
 achieve.
 If I execute 'su -', then $LOGNAME in the child bash shell will be 'root', the
 expression '[ ! $LOGNAME = root ]' will evaluate to false, and so nothing 
 will
 
 happen, as far as I can see.   So I don't see what difference it makes in my
 situation.

I guess, something like the following should work:

su - root -c "export XAUTHORITY=/home/$LOGNAME/.Xauthority; exec /bin/bash -i"

The difference is that LOGNAME gets expanded before you are root.
Put this in a script or create an appropriate alias...

e.g.:

#!/bin/bash
exec /bin/su - root -c "export XAUTHORITY=/home/$LOGNAME/.Xauthority; exec /bin/bash -i"

(the syntax for setting XAUTHORITY depends on the login shell flavour,
of course, i.e. whether to use export... or setenv..., etc.)

Cheers,
Erdmut


 
  On Wed, 21 Feb 2001, James Sinnamon wrote:
 
   Dear Debian user's,
  
   My apologies for a question that should have been answered over and over
   again on this
   list (I have searced but not been able to find an answer),  or if I am
   on the wrong  list.
  
   When I start X Windows, using the KDE window manager, I change to root
   (with su - )
   for administrative tasks.  However I seem unable to run X window
   applications.
  
   Whatever X application  I try to run, I inevitably get a message similar
   to ... unable to open display.
  
   Previously on other distributions of Linux, I have used, as root :
  
   export DISPLAY=localhost:0.0
  
   and, prior to that,  as the normal user that started the X window
   session:
  
   xhost localhost
  
   This somehow doesn;t work on Debian Linux (unstable distribution).
  
   Would anybody be able to tell me why, and how to go about diagnosing the
   problem,
   or better still, what to do about it.
  
   TIA,
  
   James
  
   --
   James Sinnamon  [EMAIL PROTECTED]
  
   ph +61 7 46311490, +61 412 319669
   PO Box 517 Darling Heights QLD 4350
  
  
  
  
  
 
  --
  To UNSUBSCRIBE, email to [EMAIL PROTECTED]
  with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
 
 --
 James Sinnamon  [EMAIL PROTECTED]
 
 ph +61 7 46311490, +61 412 319669
 PO Box 517 Darling Heights QLD 4350
 
 
 
 
 -- 
 To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
 with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
 

-- 
Erdmut Pfeifer
science+computing gmbh
Hagellocher Weg 73  phone: +49 (0)7071-9457-255
D-72070 Tuebingen   email: [EMAIL PROTECTED]

-- Bugs come in through open windows. Keep Windows shut! --



Re: can't exec some CGI scripts

2001-02-20 Thread Erdmut Pfeifer
On Tue, Feb 20, 2001 at 11:20:11AM -0500, John May wrote:
 I am using Apache 1.3.17 (compiled from source) on Debian Woody.  When I
 try to exec some CGI scripts, by typeing in the URL, ex.
 http://www.cybergeek.org/cgi-bin/newspro/newspro.cgi, I get an an
 Internal Server Error with the following error in Apache's error log:
 
 [Tue Feb 20 08:55:23 2001] [error] (2)No such file or directory: exec of
 /home/www/cgi-bin/newspro/newspro.cgi failed
 
 [Tue Feb 20 08:55:23 2001] [error] [client (ip address)] Premature end
 of script headers: /home/www/cgi-bin/newspro/newspro.cgi
 
 I have made sure that the path to Perl is correct in the scripts and
 that the correct permissions are set.  I also made sure that my
 ScripAlias directory was correct.  I can run other scripts, like the
 test-cgi script and the printenv script, but not any others.  Also if I
 add a (-w) to the Perl statement at the begginning of the script, i.e.
 #!/usr/bin/perl -w  then the script will run, but a whole lot of
 debugging information fills up the error logs.  I have scoured the
 maillist archives for any solution, but came up empty handed.  Any help
 would be appreciated.

Can you run the script normally, outside of apache? It may not do
anything useful that way, but does it at least start properly?
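
E.g. (ideally as the user the web server runs as):

  cd /home/www/cgi-bin/newspro
  ./newspro.cgi          # does it start at all?
  head -1 newspro.cgi    # double-check the interpreter path in the #! line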

It might also be useful if you would post the info you get in the error
log when using -w.

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: who/finger output - billions of pts

2001-02-20 Thread Erdmut Pfeifer
On Tue, Feb 20, 2001 at 12:57:52PM -0500, MaD dUCK wrote:
 hey all,
 want to help me figure something out? i am a former redhat/suse person
 finally having ascended to debian. there is something peculiar that i
 noticed which i cannot explain with my (pretty good) linux knowledge.
 
 so on either suse or debian, i use xdm to start windowmaker after login
 and i have some 20 or so rxvt's created for my convenience at startup.
 
 on the suse machine, the finger output with a local windowmaker
 session and a remote ssh login looks as follows:
 
 Login Name   Tty  Idle  Login Time   Office Office Phone
 madduck   MaD dUCK  *:0 Feb 18 10:42 Robot Lab 1-610-328x8618
 madduck   MaD dUCK   pts/2  Feb 20 12:48 (d136.sproul.swarthmore.edu)
 
 
 on my debian system, finger looks as follows:
 
 LoginName   Tty  Idle  Login Time   Office Office Phone
 madduck  MaD dUCK   :0 Feb 19 16:00 (console)
 madduck  MaD dUCK   pts/0   19:16  Feb 19 16:01 (:0)
 madduck  MaD dUCK   pts/2   19:15  Feb 19 16:02 (:0)
 ...
 
 and a line for every terminal i opened on :0
 
 why is this? what's different about wtmp/utmp (i presume) on
 suse/redhat than on debian? i don't want finger to show 20+ logins of
 my account when all i did was login once and opened xterms
 otherwise...
 
 any pointers?

rxvt has a compile-time option for wtmp/utmp support. Maybe that's
where the distros differ...
For testing purposes you might want to roll your own rxvt
(reasonably simple) without wtmp/utmp support and see if the
problem/feature goes away.
(Or, if you are lucky, the one from SuSe runs on Debian too...)
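
A rough sketch of how to start rolling your own (the exact name of the
configure option may differ):

  apt-get source rxvt
  cd rxvt-*/
  ./configure --help | grep -i utmp   # see what the utmp/wtmp switches are called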

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: cannot forward X11

2001-02-18 Thread Erdmut Pfeifer
On Sun, Feb 18, 2001 at 07:01:43PM -0500, Anthony Fox wrote:
 Hello,
 
 I have recently upgraded to XF4.0.2.  I cannot forward some remote
 computers' DISPLAY to my local XServer.  For example, I have a FreeBSD
 firewall for which I cannot forward the DISPLAY variable to the Debian
 box.  I can forward local accounts, such as root, and run X11 apps.  I
 can forward a remote linux ssh connection and run X11 apps.  It is
 only the FreeBSD box that refuses to forward it's DISPLAY.  I have
 been able to forward X11 displays from the BSD box to a RH6.2 linux
 box at work.
 
 From the local box:
 [EMAIL PROTECTED] ~ $ xhost +
 access control disabled, clients can connect from any host
 
 From the BSD box:
 [EMAIL PROTECTED] ~ $ export DISPLAY=thedebianbox:0
 [EMAIL PROTECTED] ~ $ xload 
 Error: Can't open display:  thedebianbox:0
 
 Does anyone know what the problem is and what I can do?  

not sure, but it may have to do with your firewall settings...
Direct (non-tunneled) X connections use the port range 6000+N (where N
is the display number, i.e. :0 = 6000, :1 = 6001, ...), so packets
destined for these ports need to be routed correctly.
Can you establish an X connection through an ssh tunnel?
(you probably know that you don't need to / should not set the DISPLAY
variable yourself when using ssh to forward X, as it's doing it for
you)
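
A quick way to check whether such packets get through at all (hostname
taken from your example):

  telnet thedebianbox 6000

If the connection is refused or times out, either the firewall filters
it or the X server isn't listening on TCP at all.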

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: cannot forward X11

2001-02-18 Thread Erdmut Pfeifer
On Sun, Feb 18, 2001 at 11:00:09PM -0500, Anthony Fox wrote:
 Just to add:
 
 When I try to use the -X option with ssh, I get the following error:
 
 ant@debianbox ~ $ ssh -X firewall
 ant@firewall's password:
 Warning: Remote host denied X11 forwarding.
 ant@firewall ~ $

maybe you have X forwarding disabled in your sshd configuration?
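
To check (the path is the usual one, it may differ on your firewall):

  grep -i X11Forwarding /etc/ssh/sshd_config

If it says "no", set it to "yes" and restart sshd on the firewall.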

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Hair-puller

2001-02-18 Thread Erdmut Pfeifer
On Sun, Feb 18, 2001 at 12:56:14PM -0500, Don Berkich wrote:
 Hi Folks,
 
 Anybody had trouble with debconf falling over dead on a woody upgrade?
 
 Specifically:
 
 
 Preparing to replace debconf 0.2.80.17 (using debconf_0.5.61_i386.deb)
 ...
 Unpacking replacement debconf ...
 Setting up debconf (0.5.61) ...
 Data::Dumper object version 2.101 does not match $Data::Dumper::VERSION
 2.09 at /usr/lib/perl/5.6.0/DynaLoader.pm line 219.
 Compilation failed in require at /user/lib/perl5/Debconf/ConfigDb.pm
 line 82.
 BEGIN failed--compilation aborted at /usr/lib/perl5/Debconf/ConfigDb.pm
 line 82.
 Compilation failed in require at /usr/share/debconf/frontend line 23.
 Begin failed--compilation aborted at /usr/share/debconf/frontend line
 23.
 -
 
 whereupon apt chokes.

there was a thread in December centered around a similar problem:

http://lists.debian.org/debian-user-0012/msg00973.html

the upshot of the discussion, which continued privately, was that there
was a file /usr/lib/perl5/Data/Dumper.pm (probably from an older
version of perl) that should not have been there...
If you also do have that file, try moving it away temporarily and see
if it works then.
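
Roughly (untested -- adjust the paths to what you actually find):

  # which Dumper.pm does perl really load, and which version is it?
  perl -MData::Dumper -le 'print $Data::Dumper::VERSION; print $INC{"Data/Dumper.pm"}'

  # if that points at the stale copy, move it out of the way:
  mv /usr/lib/perl5/Data/Dumper.pm /usr/lib/perl5/Data/Dumper.pm.old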

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: HD size problem (IBM 30 GB)

2001-02-16 Thread Erdmut Pfeifer
On Fri, Feb 16, 2001 at 06:04:47PM +0600, Bram Dumolin wrote:
 re,
 
 CaT([EMAIL PROTECTED])@Tue, Feb 13, 2001 at 10:41:12PM +1100:
  On Tue, Feb 13, 2001 at 12:38:14PM +0600, Bram Dumolin wrote:
This'll get you a testing fdisk compiled for your system. Worked 
brilliantly
for me.
   
   uhm the fdisk source isn't there...
   not in stable, not in testing, ...
   Can you give me your sources.list entry?
  
  Ooops. linux-util or util-linux is what you want. my line is:
  
  deb-src ftp://debian-ftp.pacific.net.au/debian testing main contrib non-free
  
  change the hostname to what's appropriate for you.
 
 tnx :)
 
 But still doesn't work...
 Might be a kernel driver prob... 2.0.33 is pretty old...

there were various size-limits in older kernels/BIOSes...
For example there was one at 33.8 GB. Are you sure your drive is
exactly 30 GB, and not a few percent larger?
You'll find more details on this in the following howto:

http://www.win.tue.nl/~aeb/linux/Large-Disk.html


 Too bad of the patches, I should rewrite them I guess.

as a temporary solution you might try to find a patch for your kernel
version. If I remember correctly, there is one for 2.0.38, maybe you
can somehow get that working for 2.0.33... 
In the long run, it would probably be a better idea to rewrite your
patches ;)

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: OffTopic - What's the proper way to...

2001-02-16 Thread Erdmut Pfeifer
On Fri, Feb 16, 2001 at 02:12:19PM -0600, William Jensen wrote:
 I've got a web site that has some protected data.  On some of the pages
 I have javascript that does some calculations.  Right now I have it so
 if they click on the link to that page it auto asks for username/password,
 however, what I would like to do is let them see the page and only
 ask for username/password once they click the calculate button.  Is this
 type of thing done thru the use of cgi scripts or what.  There is probably
 more than one way to do it but I'd be interested in some opinions in 
 a good way to accomplish this.

don't know what kind of calculations you offer and in what way the
javascript is involved, but if your requirements fit into the scheme
of having some kind of form into which the user fills in several
parameters before clicking calculate, then the classical CGI script
would probably be the best solution. In that case, everything you might
want to protect - data, algorithms, whatever - is on the server side,
and you can easily control at which point you require authentication/
authorization.
When doing it in javascript (client-side, I assume), keep in mind that
the code is delivered to the browser as is, so you have no real control
over what the user then does with it. If this code implements the
calculations you would like to protect, then this probably isn't the
best way of doing it (it only requires very little expert knowledge to
get that code executing outside of the context of your website, except
if you devise some clever challenge-response mechanism). However, these
concerns only apply if you are _not_ having some data on the server
side without which the javascript would be useless...

Maybe you could elaborate a little more on the details of your
intentions... (e.g. what you mean by "let them see the page", etc.)

Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Here you have

2001-02-13 Thread Erdmut Pfeifer
On Tue, Feb 13, 2001 at 05:33:26PM -0500, DSC Lithuania wrote:
 Actually, that was a highly irresponsible thing to do.  I do use Microsoft
 Outlook
 and Netscape both, and I am subscribed to the Linux User list because I am
 trying
 to set up a Linux system at the place I work.
 
 It would be nice to know if I have a new virus, considering that you may
 have sent
 one out.
 
 And as far as it goes, yes I do use protection, but there are limits to how
 good
 protection can be.  I think I may have to give up linux rather than further
 risk my company's computer.

the reasoning-logic behind the last sentence seems a little weird and
far-fetched to me...
If you are worried about getting infected from this list, why not use
a decent MUA that doesn't expose you to these kind of risks. Especially
under linux, there are several choices.
Generally, I guess that you'll suffer from a much higher vulnerability
when carrying on using stuff like Outlook.

just my 5 cents,
Erdmut


 Which is too bad, since that means that the school
 where I have an informatics lab will not have linux.  They will instead have
 to use
 Microsoft, which will in turn encourage students to approach the dual
 problems of
 copyrights and piracy (both of which I consider to be evils in their own
 right.)
 
 Oh, well.
 



Re: Shell-script question

2001-02-11 Thread Erdmut Pfeifer
On Sun, Feb 11, 2001 at 02:26:51PM +0100, Andre Berger wrote:
 I wrote a shell script /usr/local/bin/mailcheck (/usr/local/bin/ is in
 $PATH of my potato bash) that gives a list of pon targets (diff.
 ISPs), and is owned by root., perms 755. I've virtually no experience
 with shell scripting, so it may be poor quality. Anyway: If the script
 is invoked from a command line within or outside X, it works.  If it's
 invoked via rxvt -e mailcheck or rxvt -ls -e mailcheck, everything
 looks normal, but nothing's dialed. 

just guessing...: maybe you have different sets of environment
variables for the different invocations. From where do you execute the
rxvt ...? You might want to put a "set > /tmp/somefile" in your script
and see if the relevant env settings differ from where you can invoke
it successfully...
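
For example (the file names are arbitrary): put

  set > /tmp/env.from-menu

into the script, run

  set > /tmp/env.from-shell

once from a shell where it works, and then compare the two with

  diff /tmp/env.from-shell /tmp/env.from-menu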

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Shell-script question

2001-02-11 Thread Erdmut Pfeifer
On Sun, Feb 11, 2001 at 03:42:15PM +0100, Andre Berger wrote:
 On 2001-02-11 15:16 +0100, Erdmut Pfeifer [EMAIL PROTECTED] wrote:
  On Sun, Feb 11, 2001 at 02:26:51PM +0100, Andre Berger wrote:
   I wrote a shell script /usr/local/bin/mailcheck (/usr/local/bin/ is in
   $PATH of my potato bash) that gives a list of pon targets (diff.
   ISPs), and is owned by root., perms 755. I've virtually no experience
   with shell scripting, so it may be poor quality. Anyway: If the script
   is invoked from a command line within or outside X, it works.  If it's
   invoked via rxvt -e mailcheck or rxvt -ls -e mailcheck, everything
   looks normal, but nothing's dialed. 
  
  just guessing...: maybe you have different sets of environment
  variables for the different invocations. From where do you execute the
  rxvt ...? 
 
 icewm's menu.
 
  You might want to put a set /tmp/somefile in your script
  and see if the relevant env settings differ from where you can invoke
  it successfully...
  
  Erdmut
 
  '<' is from the command line, '>' from the script via icewm's menu/rxvt.
 
  BASH=/bin/bash
  BASH=/bin/sh
  LC_MESSAGES=POSIX
  LC_MESSAGES=en_US
  PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/lib/wp/wpbin
  PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
  
 SHELLOPTS=braceexpand:hashall:histexpand:monitor:history:interactive-comments:emacs
  SHELLOPTS=braceexpand:hashall:interactive-comments
  _=LC_MESSAGES
  i=LC_IDENTIFICATION=de_DE
  _=sh
 
 Can you see from this what's wrong?

no, not really ;) -- don't think the different locale settings do have
an effect...

do you get any error messages if you remove the exec in the
exec pon ...-line in your script and put an additional sleep 10
after that -- so you have a chance to read its output?
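
I.e. something like this, with "myisp" standing in for your actual pon
target:

  # instead of:  exec pon myisp
  pon myisp
  sleep 10    # keep the rxvt open long enough to read any error messages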

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: mke2fs: invalid arg to ext2 library ?

2001-02-11 Thread Erdmut Pfeifer
On Sun, Feb 11, 2001 at 03:21:24PM -0600, will trillich wrote:
 Invalid argument passed to ext2 library while setting up superblock
 
 --i didn't get any response before, so i'm trying a different
 --subject line. if this is the wrong place to ask, pliz direct
 --me to the right one...
 
 i tried the potato mke2fs on /dev/hda9 hda10 hda11, but only one
 of the three worked -- the other two bombed out with 'Invalid
 argument passed to ext2 library while setting up superblock' ??

I recently had a similar problem. In my case it was /dev/hda4 (on a
10GB IBM), so I'm not sure whether it really has something to do with
your 2-digit partition numbers...
I fiddled around for a while, then finally gave up -- it was on one of
my machines which I don't use regularly, and I could live without the
partition at that time...

Then, after having seen your initial post yesterday, I thought I might
look into that issue again, and do some low-level debugging.
So I booted the machine, tried the mke2fs again and, to my surprise, it
magically worked right away...

Don't really know why it worked now -- maybe it had to do with the
partition table having been reread (though I did also reboot the machine
the first time), or maybe some hidden bug in mke2fs, like uninitialized
variables or such.
Currently I can't reproduce the problem, so there's no way for me to
find out...

I guess you've probably already tried rebooting, have you?

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Memory/cpu quoatas for users

2001-02-08 Thread Erdmut Pfeifer
On Thu, Feb 08, 2001 at 09:35:21AM +, Mike Moran wrote:
 
 Hi. I am looking for a way to limit the amount of memory and cpu used by
 the apache
 daemon, since the web machine is very resource-limited. I know I can
 reduce the
 maximum number of servers, but that doesn't limit usage by each server.

if you are using mod_perl, then you might want to use one of the
modules Apache::SizeLimit or Apache::GTopLimit, which are dedicated to
automatically killing excessively growing apache child processes.

You'll find them on CPAN:

http://search.cpan.org
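
If there's no deb for them, installing straight from CPAN is usually
just a matter of (run as root; it will ask a few questions on first use):

  perl -MCPAN -e 'install Apache::SizeLimit'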

Also, you might want to browse the archives of the mod_perl mailing
list. Excessive memory usage and how to deal with it is one of the
topics that keep popping up there regularly ;)

http://perl.apache.org/#maillists

Good luck,
Erdmut


 Perhaps a
 default nice level would also help. How do you set the default nice
 for a user?
 
 The thing is, would either of these stop something like a while(true)
 loop, which allocated memory? I've been making some changes to the
 apache daemon mod_perl setup recently and I end up being locked out
 because something swamps out the system. I get `half-connects' where,
 for instance, ssh/telnet connects but then just sits there until it
 times out. This sounds exactly like a thrashing problem. I had a top
 running and kswapd
 seemed to be the last thing I saw before it hung.
 
 Maybe this is a kernel level thing? Is there something like a
 high-watermark process killer that I could use, ie something which
 started rampaging around when memory usage crept above a certain level?
 
 Note that I think the problem will probably settle out once I've stopped
 making so many
 changes to the apache setup but, for now, it is a pain to have to ask
 for my machine to be
 powercycled. Regardless, it would be nice to know I have a safety net,
 just in case.
 
 Thanks,
 
 --
 Mike
 
 
 -- 
 To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
 with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
 

-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: autoresponder at unix,inc / info@listbank.ne.jp

2001-02-08 Thread Erdmut Pfeifer
On Thu, Feb 08, 2001 at 02:52:30PM -0800, kmself@ix.netcom.com wrote:
 OT:  Is anyone else getting automatic null-body responses from the source
 listed in the subject of this post.  Typical headers follow:
 
 Received: from st63.arena.ne.jp ([203.138.208.2])
   by mail00.dfw.mindspring.net (Mindspring/Netcom Mail Service)
 with ESMTP
 +id t84hj9.qrn.33qs884
   for kmself@ix.netcom.com; Thu, 8 Feb 2001 02:10:32 -0500 (EST)
 Received: (qmail 27095 invoked from network); 8 Feb 2001 16:10:31 +0900
 Received: from unknown (HELO t5) (210.153.132.87)
   by listbank.ne.jp with SMTP; 8 Feb 2001 16:10:31 +0900
 Message-ID: [EMAIL PROTECTED]
 From: unix,inc. [EMAIL PROTECTED]
 
 I've reported same to [EMAIL PROTECTED] and am filtering it as
 spam, but it's annoying all the same.

yes, I also got two of these (about 18 hours ago).


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: debian poster

2001-02-07 Thread Erdmut Pfeifer
On Wed, Feb 07, 2001 at 10:09:02PM +0100, Allan Andersen wrote:
 From: Michael Janssen (CS/MATH stud.) [mailto:[EMAIL PROTECTED]
 
 I made a big debian poster using the open use logo GIMP file available
 from http://dusknet.dhs.org/~deek/debian/ -- it works quite nicely. 
 
 Michael Janssen 
 
 Wouldn't you run into some problems with the logo when you enlarge
 it?


why not use the .eps version that you can download from here?

http://www.debian.org/logos/

as this is PostScript, you can scale it to an arbitrary size without
loss in quality. If you need to render it into a pixel-based format,
you can use Ghostscript, which offers a wide range of output raster
formats like png, jpg, etc.
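
For instance, rendering the EPS to a PNG might look like this (filename,
resolution and output device are only examples -- "gs -h" lists the
devices your gs was built with):

    gs -dBATCH -dNOPAUSE -sDEVICE=png16m -r300 -sOutputFile=logo.png openlogo.eps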

Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: [OT] perl regex problem

2001-02-06 Thread Erdmut Pfeifer
On Mon, Feb 05, 2001 at 06:31:03PM -0800, Hunter Marshall wrote:
 I am a long time debian and perl user. But obviously long enough!
 Forgive the slight misuse of the list, but can anyone shed light
 on what I'm doing wrong in this attempt to find \n\n in a text file 
 with perl? I'm sure I've done this before.

Hi,

depending on what exactly you want to do, it might help to set the
input record separator ($/) to something other than the default "\n".
If you set it to "\n\n" for example, the input lines you get will
be broken up at double-newlines...

#!/usr/bin/perl

$/ = "\n\n";  # set input record separator

while (<>) {
    # every line ($_) will include everything up to and including
    # the "\n\n", so you can match against it (if still needed...)
    if (/\n\n/) {
        print "yup\n";
    }
}

The input record separator can be any string, but it's _not_ interpreted
as a regular expression. If you need that, you can read the whole file
into a string in one go (if it fits into memory), and then do any kind
of processing on it, e.g. split() or repeated regex matching

#!/usr/bin/perl

$/ = undef;   # 'undef' causes the whole file to be read in

$s = <>;      # whole file now in $s

while ($s =~ /\n\n/g) {   # do some repeated matching
    print "yup\n";
}

The advantage of the latter approach is that you can craft your
regex to do any kind of sophisticated matching, independently of some
concept of line termination...

BTW, if you don't want the modification of the input record separator
to apply globally for the script, you can use "local $/ = ..." instead,
in subroutines, for example ("my" doesn't work here).

Cheers,
Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Please Help: cut-n-paste problem while using hpterm on Debian 2.2 (kernel 2.4.1)

2001-02-06 Thread Erdmut Pfeifer
On Tue, Feb 06, 2001 at 06:40:30PM +, Sumit Sarkar wrote:
 This is 2nd time I am posting this message.
 
 sumit
 
 Hi There!,
 
 This is my first time in this mailing list. I am having cut-n-paste
 problem while using 'hpterm' on Debian 2.2 potato and kernel is:
  2.4.1 #2 SMP Thu Feb 1 16:22:58 PST 2001 i686

by "using 'hpterm' on Debian" you mean running the hpterm on the
HP-UX box and directing the display to your Debian box -- do you?

 
 The problem is like this:
 
 I am displaying 'hpterm' from a HP-UX 11.00 box in my Linux box.
 I am using the following parameters while using 'hpterm':
 
 +mb -sb -sl 5000 -ls -display 
 
 First few minutes cut-n-paste will work, after that mouse will NOT
 be able to highlight (cut) and paste. I have tried 'mb', didn't help.

I just tried to reproduce your problem, but everything seems to be working
fine over here... That doesn't mean too much, however (we might be
using different versions of whatever is involved).

I'm afraid that this is not going to be easy to solve, but at least you
may be able to narrow down things a little. Basically, the problem may
be caused on either side of the X connection:

(a) your debian-side X server may no longer send the appropriate X
events (at the time cut-n-paste stops working), or

(b) the X client (the hpterm on the HP-UX box) may not process the
events properly

To further clarify which side is responsible, you'd have to intercept
the connection from the hpterm to your X server, and take a closer look
at the X protocol messages to see whether the events in question are
still being sent.

I could give you a simple Perl script that more or less does just this.
It dumps the names of all X protocol requests and events being
exchanged into a file (in human-readable form). From this you can get
a rough understanding of what's going on. There may be other tools out
there for debugging X sessions, but for the purpose at hand the script
will probably do fine...

Email me privately if you are interested -- this is slightly off-topic
anyway (unless it turns out to be a Debian-specific X server
problem), so I think we shouldn't bother the list with all the details.

 
 It is very difficult to live without cut-n-paste. I tried using
 'xterm', but that doesn't have 'smooth scrolling'. in 'xterm'
 and 'dtterm' cut-n-paste always work. I don't like the appearance
 of 'dtterm'.
 
 My libc version is : libc-2.1.3.so and ncurses version is 5.0-6.0.

the debian-side ncurses certainly is not involved, the libc might be
involved indirectly (via the X server using some functions of it), but
what would probably be more interesting is the version of the X server
you are running :)

HTH,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: xterm

2001-01-30 Thread Erdmut Pfeifer
On Tue, Jan 30, 2001 at 11:36:44AM +0100, Ionel Mugurel Ciob?c? wrote:

 Do you know the reason why for characters 170 and 186 the default table
 uses 170 and 186? They are each in a separate set. To solve my problem I put
 in my .Xdefault file:
 
 *VT100.charClass: 170:48,186:48

I have no idea what's so special about these two chars to justify a
separate set for each -- but what you have defined sounds like a
reasonable solution to me...


 
 xterm -version says that I have: XFree86 3.3.6(88c). I can't find
 this one on http://dickey.his.com/xterm/xterm.log.html

the more recent sections of this log mainly seem to be tracking the
changes to xterm along the pre-4.0 development branch of XFree -- I
think that's why your 3.3.6 does not appear there. You probably need
to guess a good keyword to search for instead...

BTW, xterm itself apparently has a somewhat unconventional versioning
scheme: no real version number, but patch 150 -- (perhaps it's
supposed to be tied directly to XFree, rather than having versions
of its own?)
(maybe we should adopt this and say: Debian - patch 20389, or so :))


 
 Maybe I should install a new version. Anyway I figure it out that
 I have a version with a older documentation.

well, I would say, if your current xterm accepts the X-Resource
settings you need (and does behave as expected), you might just as well
leave things as they are. But, of course, that's up to you...

Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: xterm

2001-01-29 Thread Erdmut Pfeifer
On Mon, Jan 29, 2001 at 02:38:11PM +0100, Ionel Mugurel Ciob?c? wrote:
 Hi all,
 
 Someone knows where I can get the default behaviour of characters
 from 128 to 255, if they are separators or not.
 
 In man xterm it says something about class CharClass, but it refers only to
 0-127:
 
The default table is
 
static int charClass[128] = {
/* NUL  SOH  STX  ETX  EOT  ENQ  ACK  BEL */
32,   1,   1,   1,   1,   1,   1,   1,
 .
 .
 .
/*   xyz{|}~  DEL */
48,  48,  48, 123, 124, 125, 126,   1};
 
 
 Info xterm give the same answer as man xterm.

this seems to be a highly version-specific issue. The newer versions
of xterm seem to support the full 8-bit charset and even unicode
in CharClass definitions.

To quote from the manpage that comes with the most recent source
distribution of xterm (e.g. http://dickey.his.com/xterm/xterm.tar.gz):

(...)
CHARACTER CLASSES
    Clicking the middle mouse button twice in rapid succession
    will cause all characters of the same class (e.g. letters,
    white space, punctuation) to be selected.  Since different
    people have different preferences for what should be
    selected (for example, should filenames be selected as a
    whole or only the separate subnames), the default mapping
    can be overridden through the use of the charClass (class
    CharClass) resource.

    This resource is a series of comma-separated range:value
    pairs.  The range is either a single number or low-high in
    the range of 0 to 65535, corresponding to the code for the
    character or characters to be set.  The value is arbitrary,
    although the default table uses the character number of the
    first character occurring in the set.  When not in UTF-8
    mode, only the first 256 bytes of this table will be used.

    The default table starts as follows -

    static int charClass[256] = {

(...)
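
Just to make the range:value syntax concrete, such a resource line in
~/.Xdefaults could look like this (the values are only an illustration --
here the characters ! % - . / @ and ~ are added to class 48, the class
of ordinary word characters, which is a popular tweak for selecting
URLs with a double click):

    *VT100.charClass: 33:48,37:48,45-47:48,64:48,126:48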


I can't tell you the exact version in which support for this was added.
If you are interested in all the details, you might want to browse the
changelog at

http://dickey.his.com/xterm/xterm.log.html

(though that's probably more than you want to know :)

Also, if you are not afraid of reading the source, have a look at
the file charclass.c from the source distribution...

HTH,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: jpeg bad bitmap format file (was: xsetroot -bitmap)

2001-01-26 Thread Erdmut Pfeifer
On Thu, Jan 25, 2001 at 12:04:09PM -0800, Xucaen wrote:
 
 --- Hall Stevenson [EMAIL PROTECTED]
 wrote:
   I just tried it and I got an error message
  saying
   bad bitmap format file
  
  
  Heh, don't know... sorry ;-(
  
  I've seen the program xv suggested before,
  but many get all worked up
  'cause it's not a *free* program in the Debian
  sense of the word free.
  
  What does xsetroot say if you use a jpg file
  you downloaded from
 
 I didn't try this, but I think it won't work.
 Someone else on the user list told me that a
 bitmap is 2(?) colors and that xsetroot expects a
 2(?) color bitmap. Anything else won't work.
 I just discovered a program called xpmroot which
 uses an .xpm. but so far I haven't found anything
 that will use a .jpg. how are you doing it?

xli -- as has already been suggested.

$ xli -onroot -quiet your.jpg

http://packages.debian.org/stable/graphics/xli.html

Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: GD::Image does not support PNG

2001-01-23 Thread Erdmut Pfeifer
On Tue, Jan 23, 2001 at 04:56:58PM -0200, Fabio Berbert de Paula wrote:
 Hi,
 
 I'm trying to make the GD Graphics Library works
 with PNG format support in my perl scripts.
 
 The following packages were installed in my Debian
 Potato:
 
 libgd-perl 1.18-2.1
 libgd1g  1.7.3-0.1
 libgd1g-dev 1.7.3-0.1
 libpng2  1.0.5-1
 libpng2-dev 1.0.5-1
 
 But when I run my script I get this error message:
 
 Can't locate object method png via package 
 GD::Image at ./grafico.pl line ***
 
 where *** = print FILE $im->png;
 
 
 The syntax of the script is correct. I tested
 it on another machine, where I installed the
 packages via tarball.
 
 Some idea?!? ;o)

maybe you have the wrong mix of versions...

I think that versions up to something around 1.19 of libgd-perl
did not support the PNG format (instead they supported the GIF
format, which was taken out in later versions because of patent
issues; these older versions are probably still around for people
who need/want GIF support).

Try to find a newer version of the GD Perl module (the current version
is 1.32)...
Version 1.7.3 of the C library the Perl module is based upon should
be OK, I guess -- though the most recent version is 1.8.3.
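
A quick way to see which version of the Perl module actually gets picked
up (assuming it loads at all, which it apparently does in your case):

    perl -MGD -e 'print "$GD::VERSION\n"'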

Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: install/partition weirdness; 60Gig ATA100 HD

2001-01-22 Thread Erdmut Pfeifer
On Mon, Jan 22, 2001 at 03:49:16AM -0800, Chris Palmer wrote:
 Hi, all...
 
 I'm trying to install a new system from a Debian 2.1 CD I 
 have (VA Linux Debian 2.1 Install from the store shelf that 
 has the 2.2.12 kernel on it) and I'm just running into some
 really weird stuff.
 
 Well, after partitioning (about 59Gig for / and 2 x 128MB
 for swap partitions) and then waited 10 hours for the 
 block tests to complete, it all looked good.
 
 Then the moment of truth... the reboot.  The system came
 up with all kinds of disk issues for fsck.  Lots of duplicate
 block warnings (can't remember exact words, but something
 like duplicate blocks and/or inodes, maybe?)
 
 (...)
 

I don't know whether you've already come across the large-disk-howto:

http://www.win.tue.nl/~aeb/linux/Large-Disk.html

Maybe you'll find something useful there (especially the 34 GB limit
that used to exist with somewhat older kernels...).


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: OT: check book balancer - xacc packages

2001-01-22 Thread Erdmut Pfeifer
On Mon, Jan 22, 2001 at 11:43:01AM -0800, Sean 'Shaleh' Perry wrote:
 
 On 22-Jan-2001 Xucaen wrote:
  hi all.
  has anyone ever installed either of the following
  debian packages?
  
  stable  100%  xacc-smotif 1.0.17-1   (1166.5k)  
 A personal finance tracking program. 
  stable  75%  xacc 1.0.18-4   (344.4k)  
 A personal finance tracking program. 
  
  I was wondering what the difference between them
  is (besides their version numbers ;-). 
  The xacc-smotif description says it is linked
  statically with Motif. It may be more stable than
  the Lesstif based version
  what does it mean to be statically linked to
  Motif? What is Motif? What is Lesstif? 
  
 
 Motif is an old UNIX standard GUI library.  It is closed source and you
 have to spend money to own it.  Lesstif is a free recreation of this
 library based on the publicly available API.  (less is more).

just a small note:
Motif has recently been open-sourced for platforms that are open source
themselves, such as Linux and FreeBSD.
Anyone who's interested and doesn't know already, may read about the
details at

http://www.opengroup.org/openmotif/

or 

http://unix.oreilly.com/news/motifopen_0500.html



 To be 'statically' compiled means the libraries the program needs are compiled
 into the program.  This way you do not need a copy of Motif.
 
 
 -- 
 To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
 with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
 

-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Apache, SSL, proxy, etc.

2001-01-18 Thread Erdmut Pfeifer
On Wed, Jan 17, 2001 at 01:50:30PM +0100, Ola Muan wrote:
 
 We run IIS on Win2KServer as application server against Oracle inside
 our firewall.  Now we want to provide to our customers the ability to
 access reports and such things on that IIS via the Internet. 
 
 (..., ...)
 

... didn't see too many responses yet, so I thought you might be
interested in some more comments:

I'm not sure whether I understood every detail of what you are trying
to achieve, in particular why you would want to bring the Apache into
play.
solution (imho) would be to leave the Apache out of the whole business
and do a transparent TCP proxying (some prefer to call it port-
forwarding) in your DMZ, so that the SSL packets get directly
routed to your IIS server, which will then do everything else --
authentication, authorization, content serving, etc., just as you have
it right now.

Although I feel tempted to sell you an Apache-based all-in-Linux
solution :-) , under the given circumstances, it's probably preferable
to not touch a running system (at least if it is... :)

With this solution your main task would be to talk to your firewall
expert, and have him set things up appropriately. The new 2.4-kernel
netfilter/iptables architecture offers very flexible configuration
options, though mere port-forwarding should not be a problem for
older kernels either.
(I'd rather not give you any recipes here, because I'm definitely not
a firewall expert :)
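
That said, just to give a feel for the flavour of such a rule (untested
here, the addresses are placeholders, and your firewall person will know
what actually fits your setup), a DNAT forward for HTTPS under the 2.4
netfilter code might look roughly like this:

    iptables -t nat -A PREROUTING -p tcp -d <public-address> --dport 443 \
        -j DNAT --to-destination <IIS-address>:443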

I think the main issue boils down to the question, at which point
to do the de-/encryption of the SSL protocol. With the above solution
you'd leave that to the IIS. The TCP proxying would occur at a
relatively low level, without any knowledge of the application-level
protocols (HTTP(S),SSL), which means that the proxy won't be able to
inspect URLs and HTTP headers, do caching of the content, or whatever
else you might think of. All information you have (and can control) at
this point is basically: IP address and port number -- the rest is
simply tunneled through encrypted.

If you really need to do proxying at the application level, then (and
only then) the Apache+some-SSL reverse proxying solution would make
sense, imho. In this case the Apache would handle the de-/encryption
for the outbound side. This however seems to be the hardest way...
especially if the IIS were still to speak HTTPS, for then, everything
would have to be re-encrypted before sending the request to the real
content server. Afaik, the usual Apache/mod_ssl/mod_proxy bundle would
not be able to handle this reencryption, anyway, without doing some
real programming (if anyone knows better, let me know...)

On the inbound side (proxy--IIS) you might also consider using HTTP
instead of HTTPS, which would considerably simplify things. That would,
of course, depend on your LAN-internal security policies...
Depending on where authentication/authorization is supposed to occur
(I guess your database is involved here), you would, however, have to
devise some means of passing on the relevant information you extracted
from the client certificates, for example by using the basic
authentication method from HTTP.

I think I won't elaborate on this any further, as I anticipate that
this is not really what you want...

So, to sum up, I'd simply let the firewall machine forward the packets
which are destined for port 443 and are originating from the allowed
hosts/customers.


 
 (Another assumption: Opening the firewall for https-traffic on port 443
 is just as dangerous as opening for http-traffic on port 80. Again:
 correct me if this isn't true.)

this is mainly a question of the protocol spoken at the port and the
application behind it, not so much the port number... :)

 
 I've had a look at Apache-SSL. But some recommend to use Apache and
 mod_ssl instead. 
 
 Which one is the best, and which proxy server works best in cooperation
 with SSL?

at the time I had to make that decision between Apache-SSL and
Apache+mod_ssl, I tried both, and both worked well. I then decided to
stick with the mod_ssl solution (for some minor reasons) and haven't
had any problems since.  (But that might not be relevant, anyway.)


Hope that clarified things a little...
Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: static vs modules

2001-01-17 Thread Erdmut Pfeifer
On Wed, Jan 17, 2001 at 08:36:18AM -0800, Jon Pennington wrote:
 On Wed, Jan 17, 2001 at 08:17:25AM +, Cliff Sarginson wrote:
   On Tue, Jan 16, 2001 at 12:59:34PM +0100, Sebastiaan wrote:
Hello,

I was wondering if there is a speed/operating difference when compiling
kernel daemons like knfs static in the kernel or in modules. 
Anyone know something about this?
   
   I think there is no measurable (is this spelled right?)
   difference. (That's what I think, I haven't tested it)
   
   Modules are more flexible. For example if you get a new soundcard you
   only have to insert the new module, you don't need to recompile the
   whole kernel.
  
  I expect there is a few picaseconds latency when the module is first
  loaded :) Other than that I should think not.
 
 It depends on what you're talking about.  Take, for instance, the
 Intel EtherExpressPro100 (eepro100) network card.  Loading it as a
 module on a HEAVILY laden web server exposed a major weakness in the
 overall robustness of the card.  The card started dropping packets and
 causing collisions under only 50% of what the interface would have been
 capable of if it were built-in to the kernel.

why is that (just curious) -- has anybody got an idea?

Afaik, the only performance related difference between static and
shared object code (once the module is loaded) comes from the
requirement of .so-code to be 'relocatable', which means that a few
extra machine code instructions need to be generated by the compiler.
The performance decrease is normally negligible, according to my
experiences far below 10%.

So, either the code of that module is somewhat weird, or there is some
highly nonlinear interaction between CPU load and network throughput...
In that case, I would expect the same drop in performance with the static
module version when the load rises just another few percent above the
point where the dynamic version gets into problems...

any other explanations?
I'd appreciate comments -- though I know it's somewhat off-topic... :)

Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: [OT] sound card recommendations

2001-01-17 Thread Erdmut Pfeifer
On Wed, Jan 17, 2001 at 05:14:18PM +0100, Sebastiaan wrote:
 Hi,
 
 I belive that creative's soundblasters are well supported under Linux. The
 bigger versions (they vary between $200-$800 I belive) have loads of input
 and output types and they deliver very good sound. 


before buying one of these, though, I'd check that the special features
that cause the difference in price are _really_ supported by the
corresponding driver -- I mean, there are varying degrees of
'supportedness'...

just my 5 cents :)

Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: LAMP Question -- Perl AND PHP

2001-01-17 Thread Erdmut Pfeifer
On Wed, Jan 17, 2001 at 04:17:28PM -0600, Robert A. Jacobs wrote:
 Running a Debian 2.2r2 system with a little bit of Woodage (nothing 
 extreme) and still using the 2.2.17pre-* kernel packaged with Potato.
 
 Am venturing into the world of LAMP (Linux Apache MySQL PHP) and
 would like to do the installations from source (for the experience --
 so please do not recommend that I use prepackaged .deb files).  
 I've already downloaded the sources and have been reading through the 
 various READMEs and INSTALL docs and now I have a few questions.
 
 I have gotten the (perhaps mistaken) impression that I cannot statically
 link Perl and PHP to Apache together.  If this is not correct, how do 
 you build Apache so that both mod_perl and PHP 4.0 are statically linked?  
 The Apache 'README.configure' file was not specific on how to do this or
 even whether it could be done at all (though it provided adequate examples for
 them individually).

you might want to search the mod_perl mailinglist archives. As far
as I remember, occasionally, similar PHP+Perl issues popped up there...
(I'm personally not using PHP, so I can't tell you any details)

You'll find a list of links to the archives at

http://perl.apache.org/#maillists

(very good mailinglist, btw)

At the same site (perl.apache.org) and at take23.org you'll also
find lots of other interesting mod_perl/Apache related stuff...
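
For what it's worth, the recipe I've seen floating around the mod_perl
guide for building both statically goes roughly like this (paths and
version numbers are only placeholders, and I haven't verified the PHP
side of it myself, so treat it as a sketch):

    # mod_perl: prepare the apache source tree (don't build httpd yet)
    cd mod_perl-1.xx
    perl Makefile.PL APACHE_SRC=../apache_1.3.xx/src DO_HTTPD=1 \
        USE_APACI=1 PREP_HTTPD=1 EVERYTHING=1
    make

    # PHP: configure/build against the same apache source tree
    cd ../php-4.0.x
    ./configure --with-apache=../apache_1.3.xx
    make && make install

    # apache: activate both prepared modules and build
    cd ../apache_1.3.xx
    ./configure --prefix=/usr/local/apache \
        --activate-module=src/modules/perl/libperl.a \
        --activate-module=src/modules/php4/libphp4.a
    make && make install

    # finally install the mod_perl Perl-side libraries
    cd ../mod_perl-1.xx && make install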

 
 If I must dynamically load mod_perl or PHP, which offers the best 
 performance improvement when statically linked?  Does dynamic linking of
 mod_perl and PHP reduce performance of both/either dramatically?
 
I don't think there's much difference in performance either way.


Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: manually installing perl lib's

2001-01-16 Thread Erdmut Pfeifer
On Tue, Jan 16, 2001 at 08:33:36AM -0600, Erik Reuter wrote:
 I need to install the perl lib liblockfile-simple-perl, but there is no
 Debian package that I can find.
 
 Can anyone give me instructions (or pointers to instructions) on how
 to properly install a perl lib on Debian so that other perl scripts
 automatically recognize the new lib?

Hi,

the regular (non-debian) way is as follows:

(1) get the module/package from CPAN (if you don't have it already)

http://www.cpan.org/authors/id/RAM/LockFile-Simple-0.2.5.tar.gz

[in case you'd like to read some more info on the module, you
might want to do a

http://search.cpan.org/search?dist=LockFile-Simple

first...]

(2) unpack the tarball in some temporary directory

tar xzf LockFile-Simple-0.2.5.tar.gz

(3) then do a

perl Makefile.PL

this will create a Makefile with the default options -- you could
for example also specify a non-default installation path here...

(This is comparable to the ./configure step needed to build
many GNU-software source packages using autoconf)

(4) make

(hope you get no errors here :)

(5) make test

(to run the test-suite, if there is one)

(6) if there were no errors:

make install

if you have your perl-libs in the usual places, you need to
become root for the last step, so the libs can be written to
their destination directories.

(If you don't like/want this, you should consider installing
the module-libs in some non-default place where you have
write access (e.g. your home-dir). Note, however, that this might
not work _by default_ under all circumstances... -- it usually
requires setting lib-paths, environment variables, etc.; see the
sketch right below this list.)
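
A minimal sketch of that non-root variant (the paths are just examples --
the exact directory layout MakeMaker creates below PREFIX depends on
your Perl version, so check what actually ends up there):

    perl Makefile.PL PREFIX=$HOME/perl
    make && make test && make install

    # then either point Perl at it via the environment, e.g.
    export PERL5LIB=$HOME/perl/lib/site_perl

    # or pull it in per script:
    use lib "$ENV{HOME}/perl/lib/site_perl";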

Then, you could immediately do a quick check to see whether the
loading of the new libs really works:

perl -MLockFile::Simple -e ''

(or equivalently:  perl -e 'use LockFile::Simple' )

This should return you to the command line without any error msgs,
in any case you should _not_ see something like:

  Can't locate bla-bla in @INC (@INC contains: bla-bla ...)


One word of caution: except if you really know what you're doing,
try to make sure that there's _only one_ Perl installation/version on
your system, or at least that you do get the one you expect when
you type a simple

perl -V

(at the end of the output you get from this, you should find the
installation paths for the perl libs and binaries...)

If you happen to have two or more installations of Perl around
(which is basically possible), your new library stuff might
accidentally end up in the wrong places.

If you know for sure, though, that you only have _one_ Perl, forget
about the last note ;)


More information is included in the files INSTALL / README that
should come with most perlmodule source packages (very small ones
occasionally don't have an INSTALL file...)

HTH,
Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: [OT] Bash Script won't work in fvwm menu.

2001-01-13 Thread Erdmut Pfeifer
On Sat, Jan 13, 2001 at 08:44:59AM -0500, Ayman Haidar wrote:
 I think the problem is with the $HOME , since the script is run as
 root, $HOME is for /root not the user home directory.

hmmm, why is it run as root?
I don't know too much about fvwm2 -- it's quite a long time since I
last used it -- but my current window manager (wmx) runs programs
under my regular UID.
Anyhow, because of Netscape's legendary 'stability', I'd rather not
have it run as root... :)

Cheers,
Erdmut


 try to give it the absolute directory and see. or try to make the
 script owned by the user.
 
 good luck
 
 Ayman
 
 Once upon a time ktb ([EMAIL PROTECTED]) wrote:
 
 
 I've written the following script which works just fine at the
 command line (xterm) but doesn't work when I use it in .fvwm2rc
 in a menu -
 + myNetscape   exec mynet
 
 Netscape starts just fine but the lock file and pid aren't killed.
 The permissions of /usr/local/bin/mynet -
 -rwxr-xr-x1 root root
 
 What do I need to do to make this work from my menu?
 Thanks,
 kent
 
 
 ###
 #!/bin/sh
 
 net_pid=$(pidof -s netscape)
 lock_file=$HOME/.netscape/lock
 
 if [ $net_pid ]
 then
  kill -9 $net_pid
  echo -e "\nKilled pid #$net_pid."
 fi
 
 if [ -L $lock_file ]
 then
  rm $lock_file
  echo -e "\nLockfile was removed.\n"
 fi
 
 netscape -no-about-splash &
 
 ###
 
 -- 
 
 
 -- 
 To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
 with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
 

-- 
Erdmut Pfeifer
science+computing gmbh
Hagellocher Weg 71  phone: +49 (0)7071-9457-255
D-72070 Tuebingen   email: [EMAIL PROTECTED]

-- Bugs come in through open windows. Keep Windows shut! --



Re: MUAs and timestamps, was Re: how to grep without changing timestamps?

2001-01-12 Thread Erdmut Pfeifer
On Fri, Jan 12, 2001 at 02:11:58PM +, David Wright wrote:
 
 (...)
 
 The idea that mutt should have to scan all my inboxes to determine
 whether I have new mail is bad enough; the idea that the inboxes
 should be rewritten (not even just appended to) would be crazy.

right, I'd definitely agree!

 The status quo is automatic (that's how timestamps work), lightweight
 and works. If you must grep your active inboxes, it seems a small price
 to pay to have to reapply the access timestamps.


a small wrapper script (in Perl -- I know one could do that in at least
25 other languages as well ;-) to restore timestamps after running some
program over a set of files might look something like:


#!/usr/bin/perl

while ($ARGV[$c] =~ /^-/) {$c++};  # find first 'non-option'
@files = @ARGV[++$c..$#ARGV];  # filelist begins after search-regex
# (extraction of filelist from commandline might need to be improved...)

# get atime/mtime for all files
foreach $f (@files) {
push @times, [(stat $f)[8,9], $f];  # 8: atime, 9: mtime
}

# run your favourite grep or whatever here:
system "grep", @ARGV;

# restore atime/mtime for all files
foreach $f (@times) { utime @$f; }



You would call it more or less like grep. Assuming you name it mygrep:

  mygrep [options] search-regex files...

Cheers,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: [WOT] sh script to relace chars in #1 w/ chars in #2?

2001-01-12 Thread Erdmut Pfeifer
On Fri, Jan 12, 2001 at 10:46:36AM -0800, Kenward Vaughan wrote:
 Hi, 
 
 I have written about 5 successful low-level scripts to do various things,
 and want to learn more about the ways, but I'm in a time crunch (classes
 are starting next week) and I can't begin to grok what's needed for this.
 I was hoping some kind, (bored?) soul could throw clues this way.. ??  :)

well, whatever I am (kind, bored, ... :), here is another one in Perl
for you to compare with the Python solution already posted:


#!/usr/bin/perl -i.bak
# if you don't want backup files, use just '-i' instead 

while (<>) {  # for each line in the output file...

    if ($outfile ne $ARGV) {
        # make this 'if' match only once (at the beginning of each file)
        $infile = $outfile = $ARGV;
        # '.pdb' -> '.m3d' for input filename
        $infile =~ s/pdb$/m3d/;
        # open associated input file
        open IN, $infile or die "Cannot open file '$infile'!\n";
        $line = 1;
    }
    $in = <IN>;
    if (/^ATOM/) {
        # extract symbols
        $isym = substr($in, 6, 2);
        $osym = substr($_, 13, 2);
        if ($osym eq 'Du') {
            # replace out-symbol with symbol from infile
            substr($_, 13, 2) = $isym;
        } elsif ($isym ne $osym) {
            # verify the assumption that symbols match if out-symbol is not 'Du'
            print STDERR "Warning: symbols don't match! ($ARGV:$line: '$osym'/'$isym')\n";
        }
    }
    print $_;  # write out
    $line++;   # (line counter for warning msg)
}


The script assumes that you have the same number of leading lines in both
files, as you described (input: '3rd line', output: 'line 3').
In your sketch of the file contents, however, it looks as if there is one
more leading line in the output file. If that's not just a typo, you'd
have to make a small change to the script to make corresponding rows align.
Let me know if you can't figure it out yourself...


Call it like this:

  script <list of .pdb-files to change>

e.g.

  script *.pdb

(where 'script' is the name you choose, of course)
Corresponding input files are assumed to reside in the same directory.
Original output files will be renamed to '*.pdb.bak'.


Enjoy,
Erdmut


-- Bugs come in through open windows. Keep Windows shut! --



Re: save msgs to default mailbox in Mutt

2001-01-12 Thread Erdmut Pfeifer
On Fri, Jan 12, 2001 at 05:59:39PM -0500, mike wrote:
   Since at least half-this list uses Mutt i thought i would ask 
 my Mutt question here.
I set up a save-hook to save any message to a default mailbox.
 So when i press 's' to save a message i get a prompt to save to the default
 mailbox. I have not yet found a way to eliminate the prompt.
  I have tried "unset confirmappend", but that doesn't work even though
 the manual says "When set, Mutt will prompt for confirmation when
 appending messages to an existing mailbox."

what about the setting

set confirmappend=no

I don't know about possible interactions with a default mailbox.
At least for non-default mailboxes it works for me as expected :)

I guess, if you unset it, the default applies, which is 'yes'...

Cheers,
Erdmut


-- Bugs come in through open windows. Keep Windows shut! --



[Sorry] Re: Command line search and replace

2001-01-09 Thread Erdmut Pfeifer
sorry guys,
I actually didn't intend to bounce that message -- I had the focus on
the wrong X window while typing something unrelated, which obviously
turned out to be a valid mutt key sequence...

Anyway, thanks for your effort :-)  (ok, ok, I'll shut up now...)



On Tue, Jan 09, 2001 at 02:33:34PM +0100, [EMAIL PROTECTED] wrote:
 csj schrieb:
  On Mon, Jan 08, 2001 at 12:56:45AM +0800, I wrote:
   Is there a tool to do a search-and-replace from the command line? 
   Something along the lines of:
   
   replace string one string foo files-to-process
  
  sed is it! Thanks to John [EMAIL PROTECTED], Michal 
  [EMAIL PROTECTED], und eechi von akusyumi 
  [EMAIL PROTECTED] for the replies. Now my problem is how to 
  make sed recurse through directories. I managed to chain commands 
  together as:
  
  for i in *.txt ; do mv $i $i.tmp ; sed s/foo/boo/g $i.tmp > $i ; done
  
  Can anybody comment on this little script? This appears to work, but 
  may be inefficient. And it's one step removed from what I want, 
  recursive processing. That is, to have sed process files in 
  subdirectories of the current directory. I prefer something that can 
  receive its input from find:
  
  find . -name *.txt
 
 try this:
 
 find . -name *.txt | xargs perl -pi -e 's/foo/boo/g;'
 
 joachim
 
 
 -- 
 To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
 with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
 

-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: invalidfontAladdin

2001-01-09 Thread Erdmut Pfeifer
On Tue, Jan 09, 2001 at 06:07:39PM +0100, Nikolaus Neumaier wrote:
 Hi
 I do run a testing system (woody)
 whenever I try to display a postscript file generated by a2ps, grace or
 scigraphica or another programm, I get following error message from
 ghostscript:
 
 Error: /invalidfontAladdin Ghostscript: Unrecoverable error, exit code 1
 
  in findfont
 Operand stack:
Times-Roman   Font   Times-Roman   32882   Times-Roman
 --nostringval--   Courier   NimbusMonL-Regu
 Execution stack:
%interp_exit   .runexec2   --nostringval--   --nostringval--
 --nostringval--   2   %stopped_push   --nostringval--
 --nostringval--   --nostringval--   false   1   %stopped_push   1   3
 %oparray_pop   .runexec2   --nostringval--   --nostringval--
 --nostringval--   2   %stopped_push   --nostringval--   1   3
 %oparray_pop   2   3   %oparray_pop   --no
 Error: PostScript interpreter failed in main window.
 
 The same happens with the helvetica font.
 The printing works fine. It seems that this only a problem of finding
 the proper fonts for X.  When I grep the files listed in the font path
 in my XF86Config I do find some files that seem to list those fonts:
 /usr/X11R6/lib/X11/fonts
 Type1/fonts.alias:-adobe-times-medium-r-normal--0-0-0-0-p-0-iso8859-1
 -urw-nimbus roman no9 l-regular-r-normal--0-0-0-0-p-0-iso8859-1
 Type1/fonts.alias:-adobe-times-medium-i-normal--0-0-0-0-p-0-iso8859-1
 -urw-nimbus roman no9 l-regular-i-normal--0-0-0-0-p-0-iso8859-1
 Type1/fonts.alias:-adobe-times-bold-r-normal--0-0-0-0-p-0-iso8859-1
 -urw-nimbus roman no9 l-medium-r-normal-medium-0-0-0-0-p-0-iso8859-1
 Type1/fonts.alias:-adobe-times-bold-i-normal--0-0-0-0-p-0-iso8859-1
 -urw-nimbus roman no9 l-medium-i-normal-medium-0-0-0-0-p-0-iso8859-1

Hi,

(I currently don't have access to a woody box, so I can only give
you rough suggestions where to look...)

What does "gs -h" show under "Search path:"?
In one of those paths, your ghostscript type1-fonts should reside.
These need not necessarily be the ones used by X, because (afaik)
ghostscript does its own font rendering (that's also why you could
already have anti-aliased font rendering (by specifying the appropriate
gs-driver 'x11alpha') before X was able to do so...)

Also, ghostscript has its own config file for mapping fontnames (eg.
Helvetica) to filenames (*.pfb or *.pfa) and defining aliases which
map one fontname to another, eg. /Helvetica to /NimbusSanL-Regu.
The file is called Fontmap and can be found in the directory
where ghostscript is installed (.../ghostscript/version/Fontmap,
I assume)
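
For illustration, entries in that Fontmap look roughly like this (the
.pfb filename is only an example taken from the URW set that usually
ships with gs -- check your own Fontmap for the real names):

    /NimbusSanL-Regu    (n019003l.pfb) ;
    /Helvetica          /NimbusSanL-Regu ;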

The default configuration should, however, be set up reasonably,
so at least the usual fonts like Times-Roman, Helvetica, etc.
should work...
So, I guess, you either haven't yet installed the gs font package,
or for some reason it ended up where it isn't found -- unlikely.

HTH,
Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: X Window Manager

2001-01-09 Thread Erdmut Pfeifer
On Tue, Jan 09, 2001 at 12:51:32PM -0500, cdryburgh wrote:
 Have got an old 386 with Debian Linux on it. Have loaded X and am
 looking for a window manager for it. There are a lot of them. I was
 hoping that if I give some specs that someone could narrow down the
 list.
 
 1. Have limited memory resources so must not use much.
 2. Have a small monitor (14"). Must allow for maximum screen viewing.
 3. I am a programmer so will probably be doing GUI's and CORBA related
 stuff at some point.

if you are looking for a really lightweight one, you should try wmx:

http://www.all-day-breakfast.com/wmx/

and if you're a programmer, you probably even won't be scared by its
type of configuration (editing a C header file and recompiling :)

Also, there are so few lines of code, that you can easily
comprehend what's going on, and modify it to suit whatever
special needs you have...

I personally like it very much -- even on machines which could
run anything. It has no gimmicks, but everything I really need,
like arbitrary number of virtual desktops, etc.

Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: eml attached file problem

2001-01-09 Thread Erdmut Pfeifer
On Tue, Jan 09, 2001 at 09:28:00PM +0100, Nicolas Bertolissio wrote:
 
 On Sun, 07 Jan 2001 00:11:44 Eric G . Miller wrote:
  On Sat, Jan 06, 2001 at 10:10:51PM +0100, Nicolas Bertolissio wrote:
   Hello,
   
   I've a potato upgraded in woody, and I'm using balsa_0.9.5-1.0.pre5-1.
   I've received a mail with an attachment that is an .eml file format
   (see below). I can read html parts even if there are html tags but I'd
   like to see the image and I don't know how I can do. I tried to make a
   file with the data but it didn't work.  Could someone help me please ?
   
   Nicolas.
  
  Save each attached image to a file, then use uudecode to convert it from
  ascii encoding.
  
  Eric G. Miller egm2@jps.net
 
 I think I had already tried this and forgot to mention it (but I tried again) 
 with and without the following 3 lines :
 Content-Type: image/gif
 Content-Transfer-Encoding: base64
 Content-ID: [EMAIL PROTECTED]
 
 but I get :
 ~$uudecode image.uu
 uudecode: image.uu: No 'begin' line
 
 any idea ?

well, as it says in the transfer-encoding header, it's base64-encoded,
which is similar to, but not the same as uuencoded...

I don't really understand why you can't convince your MUA to do the
conversion for you ;-) but if you want to, you can also do it by hand
with the following short perl script:


#!/usr/bin/perl -n

($s) = m#^([A-Za-z0-9+/]+)\s*$# or next;
$s =~ tr#A-Za-z0-9+/# -_#;
$len = pack("c", 32 + 0.75*length($s));
print unpack("u", $len.$s);


Copy these lines into a file, e.g. base64dec.pl, make it executable
and call it like this:

base64dec.pl in.base64 > out.gif

in.base64 is your chunk of cryptic-looking data you're having
problems with. It should consist of the part *between* the separator
lines (there should be two identical lines, one above the Content-*
lines and one below the data block) -- use an editor to cut out the
block in between.
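
Alternatively, if you happen to have the MIME::Base64 module from CPAN
installed, a one-liner does the same job on that data block:

    perl -MMIME::Base64 -ne 'print decode_base64($_)' in.base64 > out.gif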

If everything works as expected you should be left with your image
file. If not, let me know.
Good Luck!

Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: ugh... erectile dysfunction

2000-12-06 Thread Erdmut Pfeifer
On Wed, Dec 06, 2000 at 10:56:48AM -0500, Aaron Solochek wrote:
 Both of those (perl-5.6 and perl-5.6-base are installed.  By removing a large 
 portion of my X system I reduced the
 errors to just those relavent to debconf:
 
 Setting up debconf (0.5.32) ...
 Data::Dumper object version 2.101 does not match $Data::Dumper::VERSION 2.09 
 at

you seem to be having a version problem in your Perl installation,
which probably has nothing to do with debconf. I suspect you will get
the same error when you simply try to run

perl -e 'use Data::Dumper;'

which just loads the module in question (if everything goes fine,
this command should not output anything...)

If you do get a version mismatch message similar to the one above, I
would guess that you have an ancient version of the module Data::Dumper
somewhere on your system, which for some reason gets pulled in before
the one that comes with the recent version of Perl (Data::Dumper
version 2.101 has been out since April 1999)

A little background info: Every architecture-dependent Perl module like
Data::Dumper consists of two parts:
the Perl-side module (.../Data/Dumper.pm) and the architecture-specific
shared object file (.../arch/auto/Data/Dumper.so) that gets pulled in via
the Dynaloader module. Both parts have to be exactly the same version, or
else you get the above message...
In your case, the 2.101 belongs to the .so-file, while the 2.09 comes
from the .pm-file, which indicates that the Perl-side .pm-file is the
wrong (older) version.

You might want to try running the following statement, which should tell
you, in which directory the file is being found:

perl -e 'for (@INC) { print "$_\n" if -f "$_/Data/Dumper.pm"; }'

(If you have a look into the source of this Dumper.pm you should find
something like $VERSION = '2.09' at the very beginning...)

The search path where Perl looks for (.pm-)modules is determined by
the @INC array, which in turn may be manipulated by setting the
environment variable PERLLIB or PERL5LIB (among other things).

Maybe the place where Dumper.pm is found will give a clue as to what
is going wrong... (old version lying around somewhere, etc.?)

Other useful commands that might help to clarify things include

perl -V   (for search paths etc.)

or setting the environment variable PERL_DL_DEBUG to 1, which should
trigger the Dynaloader to output a little on what it's doing...
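
e.g. (same module-load test as above, just with the debug switch set
for this one run):

    PERL_DL_DEBUG=1 perl -e 'use Data::Dumper;'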

(You might also try whether you get different results from running
"perl -V" versus "/usr/bin/perl -V", which would indicate that
you have several Perl versions (binaries) on your system/search path)

HTH,
Erdmut


 /usr/lib/perl5/5.6/i386-linux/DynaLoader.pm line 219.
 Compilation failed in require at /usr/lib/perl5/Debconf/ConfigDb.pm line 82.
 BEGIN failed--compilation aborted at /usr/lib/perl5/Debconf/ConfigDb.pm line 
 82.
 Compilation failed in require at /usr/share/debconf/frontend line 23.
 BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 23.
 dpkg: error processing debconf (--configure):
  subprocess post-installation script returned error exit status 2
 dpkg: dependency problems prevent configuration of bsdmainutils:
  bsdmainutils depends on debconf; however:
   Package debconf is not configured yet.
   Package debconf-tiny which provides debconf is not installed.
 dpkg: error processing bsdmainutils (--configure):
  dependency problems - leaving unconfigured



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Mutt: save without prompting --- how? Multi-message save?

2000-10-27 Thread Erdmut Pfeifer
On Fri, Oct 27, 2000 at 11:30:05AM +0930, Mark Phillips wrote:
 I use mutt to read my mail and I have a folder with debian-user in it.
 When I come to messages of interest to me, I want to be able to easily
 save them to another folder.  I've got it set up so it saves to the
 right place, but currently in order to save I need to press s and
 then hit return to accept the save location.

if you don't want to see the Append...? prompt you can

set confirmappend=no

in your .muttrc file. This setting applies globally, however.
I'm not sure whether you can somehow achieve a mixed configuration,
where you are being prompted for one keystroke but not for another...

 It would be nice to have
 a key which saves straight to the default location without prompting.


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Help: How replace comma with a tab in a text file?

2000-10-24 Thread Erdmut Pfeifer
On Tue, Oct 24, 2000 at 05:50:23PM +0200, Vee-Eye wrote:
  
  I have a comma delimited ex-database file and I want to replace the commas
  with tabs. I tried using:
  
  sed s/,/\tab/ filename but no go. It was a guess anyway. I tried replacing
  the tab with a * and it worked, but only for first line of items, mleaving
  the rest of the fields with commas.
  
  Any suggestions?
  
 You could use tr for this job:
 
 tr ',' '\t' < file > newfile


just a word of caution: if your ex-database file is in CSV format
(as for example used by some M$ programs) then you might get problems
if there are strings in your data containing commas, as in

  1,"Smith, Joe",3, ...

The tr method replaces *every* comma, so it's a little too simple
for the case mentioned above.
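
If that case does apply, a quote-aware conversion is still a one-liner,
e.g. in Perl with the standard Text::ParseWords module (just a sketch --
it assumes the usual double-quote style of CSV quoting):

  perl -MText::ParseWords -lne 'print join("\t", quotewords(",", 0, $_))' file > newfile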

If you are sure that this cannot happen, then just forget about this
mail...


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: pdf2ps problem

2000-10-19 Thread Erdmut Pfeifer
On Thu, Oct 19, 2000 at 01:20:41AM +0200, Andre Berger wrote:
 I had to convert a 114 MB pdf-file to ps. This took several hours on
 my P133, the result is 908 MB... The command I used was 
 'pdf2ps -dPSLevel1 name.pdf name.ps'
 
 But I can't use the .ps file! Can I fix the following gv error? I have
 gs-aladdin, potato.
 
 -- Andre
 
 Error: /undefinedAladdin Ghostscript: Unrecoverable error, exit code 1
  in PS
 Operand stack:
595   842   a4
 Execution stack:
%interp_exit   .runexec2   --nostringval--   --nostringval--   
 --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   
 --nostringval--   false   1   %stopped_push   1   3   %oparray_pop   
 .runexec2   --nostringval--   --nostringval--   --nostringval--   2   
 %stopped_push   --nostringval--   --nostringval--   --nostringval--
 Dictionary stack:
--dict:874/941(G)--   --dict:0/20(G)--   --dict:51/200(L)--   
 --dict:44/62(L)--
 Current allocation mode is local
 Current file position is 1376
 Error: /undefined in PS
 Operand stack:
595   842   a4
 Execution stack:
%interp_exit   .runexec2   --nostringval--   --nostringval--   
 --nostringval--   2   %stopped_push   --nostringval--   --nostringval--   
 --nostringval--   false   1   %stopped_push   1   3   %oparray_pop   
 .runexec2   --nostringval--   --nostringval--   --nostringval--   2   
 %stopped_push   --nostringval--   --nostringval--   --nostringval--
 Dictionary stack:
--dict:874/941(G)--   --dict:0/20(G)--   --dict:51/200(L)--   
 --dict:44/62(L)--
 Current allocation mode is local
 Current file position is 1376
 Aladdin Ghostscript: Unrecoverable error, exit code 1
 
 Error: PostScript interpreter failed in main window.

can you load the pdf-file into some viewer, e.g. Acrobat Reader?
From there you might be able to print just the pages you want (the
Unix version of Acrobat Reader automatically converts to PostScript
when printing, so you could also save the output to a file, if you
need to...)

If that doesn't work, you could send me the first 1500 bytes of the
PS-file that doesn't work (please not the whole file :-)) and I'll
have a closer look at it.  Then I might find out what the problem
is, and whether we can fix the corrupted file manually.
(preferably send it to me privately, not to the list)
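
In case it helps, one way to cut out just that much (assuming GNU head;
the filenames are only examples):

    head -c 1500 name.ps > ps-head.txt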

-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: [OT] gcc-warning: more info

2000-10-18 Thread Erdmut Pfeifer
On Wed, Oct 18, 2000 at 11:08:23AM +0200, Daniel Reuter wrote:
 Hello there, 
 
 Thanks to all, who responded up to now. I think I'll give some more
 information, as I still don't understand, why the warning 
 main.c:158: assignment makes pointer from integer without a cast
 is generated in my case:
 
 I have the following (among some other function and structure 
 declarations) in my program-header-file 'bet.h':
 
   #include <stdlib.h>
   #include <stdio.h>
 
   struct provided_data{
   double sample_weight;
   struct datapoint *ppovolads;
   int value_count;
   };
 
   struct provided_data *read_data(char *);
 
 
 This function is in file 'scanner.c' and does the following:
 
   #include "bet.h"
 
   struct provided_data *read_data(char *input_file_name)
   {
   struct provided_data *prov_data_buffer;
   
   Read in some data and put them into structure provided_data.
   Then return pointer to structure provided data using the
   following statement:
   
   return(prov_data_buffer);
   }
 
 
 In file main.c I have the following:
 
   #include "bet.h"
 
   int main(int argc, char **argv)
   {
   some code that reads commandline opts and so on.
   
   Here I define input_data:
   struct provided_data *input_data;
   
   Now call read_data:
 --->  input_data=read_data(input_file_name);
   }
 
 line marked with ---> is the line the compiler complains about.
 I don't quite understand this, because I never declared function read_data
 to return an int. Is something wrong with my function declaration?

sorry, I can't tell you what the problem is -- doesn't seem to be
in the syntax. My gcc (egcs-2.91.66) doesn't have any problems with
this code fragment.


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: [OT] gcc-warnings

2000-10-17 Thread Erdmut Pfeifer
On Tue, Oct 17, 2000 at 05:03:35PM +0200, Daniel Reuter wrote:
 Hello there,
 
 I never quite understood the following warning message from gcc:
 
 sourcefile.c: linenumber: warning: assignment makes pointer from integer
 without a cast

Hi,

this basically means exactly what it says:

at that specific point in your code you have an assignment where an
integer (a function return value, expression or whatever) is being
assigned to a pointer, without anyone telling the compiler what type
of pointer your integer is supposed to represent.

It's a warning and not an error, because under various circumstances
integers and pointers are assignment-compatible (at the machine level,
both are just integer numbers).

Although there are usually better ways of doing it, if you really need
such an assignment, you have to tell the compiler precisely what you
mean -- by using a type cast, e.g.

  double * p;
  p = (double *) func_that_returns_int();
  ^^
Keep in mind, however, that in this case you are fully responsible
for what you do. You can no longer rely on type-checking assistance
from the compiler...

HTH,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: mySQL errors

2000-09-21 Thread Erdmut Pfeifer
On Wed, Sep 20, 2000 at 12:15:55PM -0700, Account for Debian group mail wrote:
 
 New mySQL install (3.22.32-3) on Debian 2.2 (kernel 2.2.17). I can easily
 issue a /etc/init.d/mysql start|reload|stop okay, but once it's running I
 try a simple mysqladmin version and the tty hangs. 
 
 This is what I see in mysql.err:
 
 mysqld started on  Tue Sep 19 14:02:07 PDT 2000
 /usr/sbin/mysqld: Can't create/write to file '/var/log/mysql.log'
 (Errcode: 13)
 /usr/sbin/mysqld: ready for connections
 
 Number of processes running now: 0
 mysqld restarted on  Tue Sep 19 14:02:14 PDT 2000
 000919 14:02:14  Can't start server: Bind on TCP/IP port: Address already
 in use000919 14:02:14  Do you already have another mysqld server running
 on port: 3306 ?
 000919 14:02:14  Aborting
 
 mysqld ended on  Tue Sep 19 14:02:14 PDT 2000
 
 I've already tried giving the mysql.log file 777 perms and changed the
 ownership from root to mysql and back, but no luck on getting rid of the
 initial Can't create/write error. When doing a ps, there are no other
 mysqld processes listed.

occasionally it helps to remove the socket file mysql.sock (if you
have one), which is by default created in /tmp. I don't know why
sometimes mySQL cannot do this by itself (permissions?), and also I
don't really know what this is created for, but I assume it is
supposed to allow local clients to connect via a unix domain socket
instead of the usual inet socket on port 3306.

Maybe that helps,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh
Hagellocher Weg 71  phone: +49 (0)7071-9457-255
D-72070 Tuebingen   email: [EMAIL PROTECTED]

-- Bugs come in through open windows. Keep Windows shut! --



Re: INI like package

2000-09-20 Thread Erdmut Pfeifer
On Wed, Sep 20, 2000 at 09:39:23AM +0200, François Chenais wrote:
 Hello
 
   Is there any perl package for using windows ini files ?
 

have a look at these modules:

  http://www.perl.com/CPAN-local/modules/by-module/Win32/

especially the module Win32::Tie::Ini. I've never used it, but
from the description it sounds like it is what you are looking for...

It seems to be targeted at the windows platform, but if it's a
Perl-only module it shouldn't be too hard to get it running under
a decent OS ;-)

Erdmut



-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: Perl @INC - include NFS mounted repository

2000-09-18 Thread Erdmut Pfeifer
On Mon, Sep 18, 2000 at 11:26:02AM +0200, Thomas Gebhardt wrote:
 Hi,
 
 I'd like to include a NFS mounted repository (actually a very
 comprehensive Debian installation) into the Module Search Path
 of Perl.
 
 Boundary conditions:
 
  * transparent for the users
 
  * Modules that are also locally available should not be loaded
by NFS.

Hi,

there are several ways of achieving this, although none of them
is without drawbacks:

(1) you can set up the PERL5LIB environment variable (or PERLLIB
if you want it to apply for both Perl4 and Perl5) to contain
the appropriate search path for the modules (a colon-separated
list of directories).
If you want local modules to be found first, just put the NFS
directory at the end of the path list.

This method has the disadvantage that you somehow have to make sure
that PERL5LIB gets set in the environment of the individual users.
This is basically a system administration task. You might consider
one of the following strategies:

* put PERL5LIB in the system-wide login-environment, so users do
  not have to set it themselves
* have users set it themselves (error-prone)
* write a wrapper script around the call of the perl binary that
  sets the environment (see the sketch after this list)
(or whatever else you could think of...)
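
For the wrapper-script variant, a minimal sketch could look like this
(the NFS directory name is just a placeholder):

  #!/bin/sh
  # hypothetical wrapper, e.g. installed as /usr/local/bin/perl so it
  # is found before /usr/bin/perl (assuming /usr/local/bin comes first
  # in PATH); note that PERL5LIB is searched before perl's built-in
  # library path
  PERL5LIB=/nfs/perl/lib${PERL5LIB:+:$PERL5LIB}
  export PERL5LIB
  exec /usr/bin/perl "$@"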


(2) you can modify the @INC array at the beginning of your perl
scripts. To have the NFS directory be found last you would
insert the following code snippet:

BEGIN { push @INC, "/your-NFS-libpaths-here"; }

(this gets executed before any use or require statements,
so the modified include path applies)

You probably want to include both the normal path (for Perl-only
modules) and the path for architecture-specific modules
(containing e.g. something like i686-linux), although
the latter is not always required (just try and see...)
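
If you're not sure what the architecture-specific directory is called
on your system, just let perl print its search path:

  perl -e 'print "$_\n" for @INC'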

This obviously has the disadvantage that all of your scripts have to
be modified, so depending on how many scripts you have and whether you
could edit them automatically, this approach might not be feasible.


There may be other ways that I'm not aware of, e.g. fiddling around
with symlinks in the ordinary perl lib-dir or changing the compiled-in
lib-path...(?); but as far as I know, that way it's not possible to
achieve the kind of fall-through behaviour you need.
(if anyone knows how to do this, I'd like to know)

Good luck,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: How to use libhtml-embperl-perl as cgi

2000-09-08 Thread Erdmut Pfeifer
Peter S Galbraith wrote:
 
 [also sent to prior Debian maintainer in case he can answer in 30 seconds!]
 
 I have installed the libhtml-embperl-perl package, and I'm trying
 to get some perl commands within [+ brackets +] in html files to
 be processed (first as cgi, then perhaps using mod-perl).
 
 After simply installing the package, the embperl brackets get
 passed on as text as output in the HTML, so the preprocessor
 isn't invoked by default.
 
 The HTML::Embperl man page says to copy embpcgi.pl to the
 cgi-bin directory.  I've done this.  Then it says:
 
  If you are running the Apache httpd, you can also define
  embpcgi.pl as a handler for a specific file extention or
  directory.
 
Example of Apache srm.conf:
 
 <Directory /path/to/your/html/docs>
 Action text/html /cgi-bin/embperl/embpcgi.pl
 </Directory>
 
 So I tried variants of this but when I restart apache I get the
 error:
 
  Invalid command 'Action', perhaps mis-spelled or defined by a
  module not included in the server configuration
 

you need to have the apache module mod_actions compiled in and enabled
for the Action directive to be available. mod_actions belongs to the
base modules and is compiled in by default -- but this doesn't mean
that it cannot be disabled ;-)
(sorry, I don't know about the debian package).

Do a 

  httpd -l

for a list of compiled-in modules (among these you should see
mod_actions.c)

or

  httpd -L

for a list of available directives (here you should see Action
(mod_actions.c)).
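
Depending on how apache was installed, the binary may go by a different
name (e.g. apache instead of httpd); piping the output through grep
narrows it down to the relevant lines:

  httpd -l | grep -i actions    # mod_actions.c compiled in?
  httpd -L | grep -iw Action    # Action directive available?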


See the apache documentation for details on how to enable modules...


BTW: as far as I know, the debian embperl-package isn't quite the most
recent version, so you might consider building a newer one yourself
(1.3b5) -- if you can live with a non-debian package on your system.
Also, just in case you didn't know, there is an embperl mailing list
(embperl@perl.apache.org), for the kind of problems where you need real
experts ;-)

Good luck,
Erdmut


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: rewrite and change document root on apache

2000-09-06 Thread Erdmut Pfeifer
[EMAIL PROTECTED] wrote:

 thank you Craig,
 I've setup Document Root for each virtual host on
  /var/www/www.virtualhost1.com

 I made stats for each virtual host on
 /var/reports/www.virtualhost1.com
 
 On Apache I want to forward http://virtualhost1.com/stats to
 /var/reports/virtualhost1.com
 
 I've setup:
 
 Alias        /reports/   /var/reports/
 RewriteRule  ^/stats(.*)  /reports/%{SERVER_NAME}$1
 
 but Alias has no effect on rewrite and is looking from default Document
 Root:
 /var/www/www.virtualhost1.com/reports/www.virtualhost1.com
 
 and produces a 404  :(
 
 
 bests,
 jaume.

Hi,

( I just saw this on the archives ... as I'm not subscribed to the list,
I don't know whether someone already answered this, anyway: )

try putting [PT] (pass-through) at the end of the line with the rewrite
rule. This has the effect of giving mod_alias also a chance to process
the URL...

RewriteRule  ^/stats(.*)  /reports/%{SERVER_NAME}$1  [PT]

In this particular case: why not directly rewrite it to
/var/reports/...?


Hope that helps,

-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --



Re: rewrite and change document root on apache

2000-09-06 Thread Erdmut Pfeifer
Jaume Teixi wrote:
 
 problem is that
 
 Document Root for each virtual host is on
   /var/www/www.virtualhost1.com
 and I'm trying to forward http://www.virtualhost1.com/stats  to
   /var/reports/www.virtualhost1.com
 
 so rewrite rule
 
 RewriteRule  ^/stats(.*)  /var/reports/%{SERVER_NAME}$1  [PT]
 
 really looks for
   /var/www/www.virtualhost1.com/var/reports/www.virtualhost1.com
 
 not to
 /var/reports/www.virtualhost1.com
 
 how to handle this:(  

I'm not sure whether I understood in every detail what you are trying to
achieve -- maybe you could briefly state which document is supposed to
be served if a user requests e.g.
http://www.virtualhost1.com/stats/index.html ( is it the file
/var/reports/www.virtualhost1.com/index.html ?)

If so, the combination of rewrite and Alias (as you originally had it)
should do the job when you specify [PT] -- and it is only in that
combination that [PT] is needed at all. In this case the Alias statement
may map to any path outside of the DocumentRoot. So when using

Alias  /reports  /var/reports

the rewritten string (not the document root) would have to supply the
host part.
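
In other words, per virtual host something along these lines should
work (an untested sketch, directives taken from this thread):

  RewriteEngine On
  # /stats/... -> /reports/www.virtualhost1.com/..., then mod_alias maps it
  RewriteRule   ^/stats(.*)  /reports/%{SERVER_NAME}$1  [PT]
  Alias         /reports/    /var/reports/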


When using the RewriteRule alone, you would have to make sure that the
string that comes out of the rewrite rule gives the final document path
when concatenated to the DocumentRoot. For this you would either have
to use a different DocumentRoot or a different rewrite rule, to avoid
the duplicated path components.


-- 
Erdmut Pfeifer
science+computing gmbh

-- Bugs come in through open windows. Keep Windows shut! --


