Re: GnuPG::Interface module on OS X

2006-09-24 Thread Wiggins d'Anconia
Dennis Putnam wrote:
 Although I don't think this is an OS X specific issue I can't find any
 place to seek help (there seems to be a GnuPG list but it is defunct or
 inactive). If someone knows of a better resource please let me know.
 
 I have installed GnuPG on a Tiger (10.4.7) server and it seems to be
 working fine. I then installed GnuPG::Interface in perl and wrote a
 script that tries to decrypt a file. Everything seems to be working
 fine and the file gets decrypted. My problem occurs when I try to run
 the script in background (cron or nohup). I get an error pointing to
 the line that calls the 'decrypt' method. It says "fh is not defined".
 I don't have a variable by that name so I don't have a clue what it is
 referring to other than it must be in the decrypt method somewhere. I
 tried setting $gnupg->options->batch(1); but that did not help. Can
 someone help me figure out what is wrong? Thanks.
 

Can you show some code?  Note that cron generally runs in a different
environment and may not be detecting the proper home directory which
would likely cause gpg to have issues.
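Something along these lines is a minimal (untested) sketch of what I mean;
the homedir and file paths are made up, and it assumes a key that doesn't
need a passphrase. The point is to hand GnuPG::Interface an explicit homedir
and batch mode so nothing depends on cron's environment or on having a tty:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;
use GnuPG::Interface;
use GnuPG::Handles;

my $gnupg = GnuPG::Interface->new();
$gnupg->options->hash_init(
    homedir => '/Users/dennis/.gnupg',   # explicit, so cron's HOME doesn't matter
    batch   => 1,                        # never wait on a tty prompt
);

my ( $in, $out ) = ( IO::Handle->new, IO::Handle->new );
my $handles = GnuPG::Handles->new( stdin => $in, stdout => $out );

my $pid = $gnupg->decrypt( handles => $handles );

open my $cipher, '<', '/path/to/file.gpg' or die "open: $!";
print {$in} $_ while <$cipher>;          # feed the encrypted data to gpg
close $in;
close $cipher;

print while <$out>;                      # decrypted plaintext on stdout
close $out;
waitpid $pid, 0;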

http://danconia.org


Re: question on Perl's ssh

2005-06-24 Thread Wiggins d'Anconia
Ted Zeng wrote:
 Hi,
 
 Why does the system behave differently when I ssh to a machine
 from Terminal than when I ssh to the same machine via Perl's SSH module?

 Here is the problem:
 I added a tool to /usr/local/bin. I updated the profile file.
 Now if I ssh to the machine, I can use 'which tool' to find it.
 
 But when I try to do the same in Perl, the reply is
 no tool in /usr/bin /bin /usr/sbin /sbin
 
 ted zeng
 Adobe Systems
 
 

The difficult part is that the answer is really just "because" :-). When
you use 'ssh' from the terminal you are using the local ssh client.
That client establishes a connection to a remote ssh server and tells
that server it wants to run a command; that command is to start a
shell and then interact.  So when you send your 'which' command you are
interacting with the remote shell over the ssh tunnel. But when you
use Net::SSH::Perl (which I am assuming is the module you are using) you
are establishing a connection to a remote SSH session, but the
command(s) you send are exec'd directly (presumably by /bin/sh), which
may or may not have loaded the same environment as your normal login
shell (for instance, skipping any .bashrc/.bash_profile, etc.). I
believe (though haven't tested) that the same would occur if you
provided a command to the local ssh client instead of requesting an
interactive shell.

Net::SSH provides a wrapper around the local 'ssh' command, but I have
not really used it. I tested it once quite a while ago and preferred
Net::SSH::Perl *for my purposes*.
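To illustrate, a rough sketch with Net::SSH::Perl (host, user, password and
tool names are all made up): either source the profile as part of the
command you send, or call the tool by its full path.

#!/usr/bin/perl
use strict;
use warnings;
use Net::SSH::Perl;

my $ssh = Net::SSH::Perl->new('remote.example.com');
$ssh->login('ted', 'secret');

# cmd() runs under a bare non-interactive shell, so PATH is the default one:
my ($out1) = $ssh->cmd('which tool');          # likely "no tool in /usr/bin /bin ..."

# Work around it by loading the profile first, or by using the full path:
my ($out2) = $ssh->cmd('. ~/.profile; which tool');
my ($out3) = $ssh->cmd('/usr/local/bin/tool --version');

print for grep { defined } $out1, $out2, $out3;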

HTH,

http://danconia.org


Re: CamelBones on Intel? Maybe not.

2005-06-07 Thread Wiggins d'Anconia
Ian Ragsdale wrote:
 On Jun 7, 2005, at 11:51 AM, Joseph Alotta wrote:
 
 I used to be a NeXT developer.  This announcement is very reminiscent
 of the NeXT announcement to stop making those little black boxes and
 bring the NeXT OS to Intel chips.  We had just bought a ton of hardware
 and they demoed this clunky 386 PC.  First of all, it looked nasty.  We
 were used to that elegant design. Secondly, it kept crashing.  It
 destroyed the culture.  It was like putting Haydn into the juke box
 at a disco.  Everyone went home. The vice president of our division,
 who bet his career on NeXT, resigned and NeXT languished for years.

 It is the same scenario playing out again.  Will Steve Jobs never learn?
 
 
 Did NeXT produce their own boxes, or did they allow installs on any PC
 with supported hardware?  I believe that is a key difference.  Apple
 boxes will be exactly the same as they would have been, except they
 will have a different CPU.  You still won't be able to install OS X on
 a commodity PC without jumping through a lot of hoops.
 

Why wouldn't you?  Memory, drives, video, etc. are all the same right
now. The motherboard has pretty standard features, other than being set up
for a PowerPC processor. Apple has been going cheap for a while; SCSI ->
IDE ring any bells? It would be a real shame if they didn't allow you to
install OS X on any commodity PC, once again back to that whole volume
issue. Without a different chip, Macs really are just a pretty looking
box with a nice software package preinstalled. Darwin runs on Intel
already (mostly), which is the real key. If Apple goes through with this
and won't let you install on a commodity PC then they really missed the
boat; in fact I would say they couldn't even find the dock.

 I think the only way that you look at it is that if IBM couldn't or 
 wouldn't deliver the processors Apple needed at a reasonable price, 
 what else could Apple do?
 

Will definitely agree with you there. Though you have to love the media
spin making it seem like this is Apple's choice to drop IBM, uh huh.

 Ian
 

I like Macs as much as the next person, but if they are going to go the
Intel route, they might as well go the whole way. In fact, being able to
install on a normal Dell would be one way for them to win back some
huge user spaces. Lots of companies would love to get out from under the M$
licensing structure, but just aren't willing to fork out that much cash
for all new hardware when they shouldn't need to, i.e. just to run
another Intel based OS, and admittedly Linux is much harder to learn (or
at least seems it). Not to mention that theoretically (ask your lawyer,
anyone know for sure?) they should be able to transfer over their
Adobe/Office licenses, which run natively.

http://danconia.org


Re: CamelBones on Intel? Maybe not.

2005-06-07 Thread Wiggins d'Anconia
Brian McKee wrote:
 
 On 7-Jun-05, at 1:57 PM, Wiggins d'Anconia wrote:
 

 Why wouldn't you?  Memory, drives, video, etc. are all the same right
 now. The motherboard has pretty standard features, other than being set up
 for a PowerPC processor. Apple has been going cheap for a while; SCSI ->
 IDE ring any bells? It would be a real shame if they didn't allow you to
 install OS X on any commodity PC, once again back to that whole volume
 issue. Without a different chip, Macs really are just a pretty looking
 box with a nice software package preinstalled. Darwin runs on Intel
 already (mostly), which is the real key. If Apple goes through with this
 and won't let you install on a commodity PC then they really missed the
 boat; in fact I would say they couldn't even find the dock.
 
 
 Quoting cnet
 http://news.com.com/Apple+throws+the+switch%2C+aligns+with+Intel+-
 +page+2/2100-7341_3-5733756-2.html?tag=st.next
 
 After Jobs' presentation, Apple Senior Vice President Phil Schiller
 addressed the issue of running Windows on Macs,
 saying there are no plans to sell or support Windows on an
 Intel-based Mac.
 "That doesn't preclude someone from running it on a Mac. They
 probably will," he said. "We won't do anything to preclude that."
 However, Schiller said the company does not plan to let people run
 Mac OS X on other computer makers' hardware.
 "We will not allow running Mac OS X on anything other than an Apple
 Mac," he said.
 
 
 Shades of Sony...
 
 

Bon Voyage! ;-) (Thanks for the quote though.) We will see...
iTunes/iPod for windows anyone? How long ago was it that they said they
weren't moving to Intel? The market has a funny way of dictating what a
company will and won't do, no matter how pouty the President.

Make me a believer...

http://danconia.org



Re: CamelBones on Intel? Maybe not.

2005-06-06 Thread Wiggins d'Anconia
Ian Ragsdale wrote:
 On Jun 6, 2005, at 5:18 PM, Joel Rees wrote:
 
 Jobs is insane.

 
 I'm not so sure about that.  IBM seems unwilling or unable to produce
 mobile G5s, which is a market that Apple considers very important.
 They are also 2 years behind schedule on 3.0GHz G5s, and appear to be
 focusing on video game processors instead of desktop and mobile
 processors.
 
 Apple might be OK in a speed comparison right now (on desktops, they
 are clearly losing in laptop comparisons), but how about in two years?
 Perhaps IBM has told Apple that they won't attempt a laptop chip, since
 the volume is way higher for video game consoles?  What should Apple do?


They should have released Mac OS X for Intel as soon as they had it
ready. Why wait? It seems Apple is too caught up in their own keynotes
to understand volume sales. One thing M$ was definitely *always* better
at. IBM will probably laugh this one to the bank, not exactly going to
put a dent in that $99 billion in revenue...

 Personally, it looks like it will be a bit painful for a few years,  but
 a far better move in the long run.
 

Unless they become just another cheap clone maker with a pretty software
interface. (Did I hear someone say Sun?)

 Ian
 

http://danconia.org


Re: Installing WebService::GoogleHack

2005-05-17 Thread Wiggins d'Anconia
Lola Lee wrote:
 Morbus Iff wrote:
 
 
  /Library/WebServer/Documents/GoogleSearch.wdsl

 
 When I ran this again, it died with this message:
 
 Illegal WSDL File Location - /Library/WebServer/Documents/GoogleSearch.wdsl


The test is trying to open the file just to check for existence,
readability, etc. (not sure why Perl's file test operators couldn't be
used rather than opening and then closing, yikes), but you might want to
hack the file t/1.t and add $! to the error message to see why it is
failing. It could be any number of reasons; I assume the file is readable
by your user, etc., but $! will tell us.
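Something like this (hypothetical; the filehandle and variable name in
t/1.t will be whatever the test actually uses, the point is just the
trailing $!):

open( WSDL, $wsdl_path )
    or die "Illegal WSDL File Location - $wsdl_path: $!\n";
close WSDL;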

http://danconia.org

 
 
 Next time, I left off GoogleSearch.wdsl and it died again, I got this:
 
 Can't locate object method "new" via package "WebService::GoogleHack" at
 t/1.t line 85, <STDIN> line 5.
 # Looks like you planned 2 tests but only ran 1.
 # Looks like your test died just after 1.
 t/1dubious
 Test returned status 255 (wstat 65280, 0xff00)
 DIED. FAILED tests 1-2
 Failed 2/2 tests, 0.00% okay
 Failed Test Stat Wstat Total Fail  Failed  List of Failed
 ---
 
 t/1.t         255 65280     2    3 150.00%  1-2
 Failed 1/1 test scripts, 0.00% okay. 2/2 subtests failed, 0.00% okay.
 make[1]: *** [test_dynamic] Error 2
 make: *** [test] Error 2
   /usr/bin/make test -- NOT OK
 Running make install
   make test had returned bad status, won't install without force
 
 
 I do have the file in the Documents folder.
 


Re: sendmail question

2005-03-14 Thread Wiggins d'Anconia
Matt Doughty wrote:
On Wed, Mar 09, 2005 at 09:42:00AM -0800, Ted Zeng wrote:
Hi,
When I used perl on Windows, I used the Mail::Sender module to send emails,
with attachments.
Now I realized that Mac OS X doesn't have this installed (I installed 
it on Windows myself)
and it has sendmail as a UNIX tool, which can be an option.

My question is:
Do you use sendmail to send emails in your perl tools?
Or use a Perl email module to send emails? Which way you prefer?

So you have heard one position on this subject. I'll give you the other.  
Using the command line sendmail client gives you queueing if the SMTP 
server you are talking to is down, or temporarily unreachable. I'm not 
certain if there is a module out there that will use the sendmail command 
line client directly, but this is definitely the way to go if you don't 
want to lose mail, and you don't want to worry about queueing yourself.

--Matt

Ok, so backing up a step. The key here is that there are two steps to 
the process.

1. Build the message (probably in memory)
2. Send the message
For #1 you absolutely want to use a module. Period. There are far, far,
far too many intricacies of the message format to try to handle
by hand. And you *don't* want to rely on the log messages of any
SMTP client/server to find problems in your message building. And the
minute you get into including attachments, you have really screwed
yourself. Take a look at the documentation and structure of Mail::Box if
you think e-mail done right is easy. That module may be overkill for a lot
of applications, but it is about as thorough as you can get. There are
many modules that will build a correct message.

For #2 it matters less. Because #1 and #2 are usually intertwined, most
modules that provide #1 will provide #2 too. And because talking to
sendmail at the command line can get very hairy very quickly, you are
still better off letting a module do it. The interface has been designed
for ease of use, the module has (hopefully) been tested, possibly on
many platforms, and most provide an option to set which local mail
client/server you wish to talk to. So most can handle #2 using postfix,
sendmail, etc. Net::SMTP is probably one of the ones that can't, but
then you wouldn't want to build a message with it anyway.
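For example, a rough sketch with Mail::Mailer (from the MailTools
distribution; the addresses are made up): the same code can hand the
message to the local sendmail binary or to an SMTP server just by changing
the constructor arguments.

#!/usr/bin/perl
use strict;
use warnings;
use Mail::Mailer;

# 'sendmail' pipes to the local sendmail binary; swap the constructor
# for ->new('smtp', Server => 'mail.example.com') to talk SMTP instead.
my $mailer = Mail::Mailer->new('sendmail');
$mailer->open({
    From    => 'me@example.com',
    To      => 'you@example.com',
    Subject => 'module-built message',
});
print $mailer "The module takes care of the headers and the transport.\n";
$mailer->close or warn "couldn't send message";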

I spent 2 years working on an application that was dealing with inbound or
outbound mail 90% of the time; you need to be an absolute expert in
mail handling (which I am not by a long stretch!) to do it directly.

http://danconia.org


Re: First CGI Setup

2005-03-11 Thread Wiggins d'Anconia
Chris Devers wrote:
On Sat, 12 Mar 2005, Joel Rees wrote:

(One of these days I'm going to get version control running to my 
liking, and I'll keep everything under /etc in version control. For 
now, I just make a copy to work on and rename the old one *_nnn.bak or 
something, keeping track of the editing sequence in the _nnn portion.)

Try this:
$ cat ~/bin/stamp
#!/bin/bash
#
# stamp is a utility which makes a backup of a conf file
[ $# -ne 1 ] && echo "usage: `basename $0` filename" && exit 100
old=$1
new=$1.`date +%Y%m%d`.$$
[ ! -f $old ] && echo "$old does not exist" && exit 100
cp $old $new
status=$?
[ -x $new ] && chmod -x $new
exit $status
$
It's crude, but it works well enough.
$ cd /etc/httpd
$ sudo stamp httpd.conf
  # I get a file like httpd.conf.20050311.15629.
$ vim httpd.conf && apachectl configtest
  # I make a royal mess of things. Damn.
$ cp httpd.conf.20050311.15629 httpd.conf
$ apachectl configtest
  # All is right with the world again.
Something like CVS / SVN / BitKeeper would be better, but not easier.
 


Nice. You could also just opt for RCS until you can get a more managed
solution up; it works well enough for this type of thing and only
requires (or doesn't even require) that an RCS directory be created, and
that you follow the standard procedure:

 co -l filename
... make edits
 ci -u -M filename
Pretty simple stuff. It appears rcs comes with Mac OS X, or at least it
does when the dev tools are installed.

 man rcs
For more info.  The others are excellent, but they can be overkill for
simple, non-distributed configuration files.

http://danconia.org


Re: sendmail question

2005-03-09 Thread Wiggins d'Anconia
Ted Zeng wrote:
Hi,
When I used perl on Windows, I used the Mail::Sender module to send emails,
with attachments.
Now I realized that Mac OS X doesn't have this installed (I installed it 
on Windows myself)
and it has sendmail as a UNIX tool, which can be an option.

My question is:
Do you use sendmail to send emails in your perl tools?
Or use a Perl email module to send emails? Which way you prefer?
ted zeng
Adobe Systems

Use a module. Several of them can send via sendmail, but regardless, let
them do the message construction; it will save you many headaches in the
end. There are a number of them on CPAN, each with its own features and
peculiarities.

http://danconia.org


Re: What Perl editor do you recommend?

2005-03-02 Thread Wiggins d'Anconia
John Delacour wrote:
At 9:45 pm + 2/3/05, Phil Dobbin wrote:
I'm thinking that if he's not comfortable with pico maybe emacs is not 
the best idea...

I'd love to hear a convincing explanation from someone why anyone would 
use such tools in preference to TextWrangler, BBEdit or Affrus. I can 
imagine they'd make it a chore to write code in us-ascii and either a 
nightmare or an impossibility to deal with non-ascii, but maybe that's 
because I'm just an unreformed Mac user :-)

JD

They aren't free (well, BBEdit and Affrus aren't), they aren't
cross-platform (why learn a different editor for each platform?), and they
require lots of clicky.

I have never logged into a system where I couldn't use vi. (Well, maybe a
Windows box, but it didn't take long to install gvim or Cygwin.)

http://danconia.org


Re: TextWrangler

2005-01-22 Thread Wiggins d'Anconia
Joel Rees wrote:
While we're playing around with Editor Wars...
there's no need for that sort of language...
Boy, there's nothing like a good old-fashioned editor war!
But this one doesn't seem to have much punch to it. More like a dust 
devil than a cyclone.

Vim.
http://danconia.org


Re: Simple perl script send email

2004-04-25 Thread Wiggins d'Anconia
Mark Wheeler wrote:

Hi,

I just installed 10.3 and am trying to get a cron job to fire off a perl 
script which will send an email saying the cron job was completed.

crontab listing

* * * * * /Users/blah/Library/Scripts/test.pl

Here is the script:

test.pl

#!/usr/bin/perl -w
use strict;

my $from = '[EMAIL PROTECTED]';
my $to = '[EMAIL PROTECTED]';
my $body = "Success.";
open (SENDMAIL, "| mail -t");
Check that open succeeded, and use a full path to 'mail', especially 
since under cron your PATH may be different/restricted.

open (SENDMAIL, "| /usr/bin/mail -t") or die "Can't pipe to sendmail: $!";

Having said that, I would suggest not using mail directly at all, 
instead install a mail handling Perl mod from CPAN, there are lots of them.

print SENDMAIL "Subject: Backup Email Test\n";
print SENDMAIL "From: $from\n";
print SENDMAIL "To: $to\n\n";
print SENDMAIL $body;
close (SENDMAIL);
exit;
--
I have enabled Postfix to be running and have sent and received an email 
from the command line. I've also executed the script run from the 
command line. But the script doesn't seem to be sending an email. Do I 
need to get perl set to run in a settings file? I thought that I only 
needed to mess with settings files if I was going to use the web server. 
A little help would be appreciated.

HTH,

http://danconia.org


Re: Simple perl script send email

2004-04-25 Thread Wiggins d'Anconia
Mark Wheeler wrote:

Thanks. I'll give it a try. That makes sense. When you are talking about a
mail handling Perl mod, you are talking about Net::SMTP or something
like that, right? Also, why would you not want to use mail directly?

Mail is an incredibly complex thing; combine that with trying to handle
IPC issues when shelling out and you are reinventing a wheel that
should definitely not be re-invented.  Net::SMTP is an example, though
probably a more difficult one to use; there are lots:

http://search.cpan.org/modlist/Mail_and_Usenet_News/Mail

Mail::Mailer
Mail::Sender
MIME::Lite
are some good choices. I use Mail::Box, which is generally way overkill,
but since I know I will have it installed I usually default to it.

Generally using a module will be less error prone, easier to maintain, 
and more portable.
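And since the original question was about attachments, a quick MIME::Lite
sketch (the addresses and the attached file are made up); it builds the
multipart message for you and can hand it to sendmail or an SMTP server:

#!/usr/bin/perl
use strict;
use warnings;
use MIME::Lite;

my $msg = MIME::Lite->new(
    From    => 'me@example.com',
    To      => 'you@example.com',
    Subject => 'backup report',
    Type    => 'multipart/mixed',
);
$msg->attach( Type => 'TEXT', Data => "Backup finished OK.\n" );
$msg->attach(
    Type        => 'application/x-gzip',
    Path        => '/Users/blah/backup-report.tar.gz',
    Filename    => 'backup-report.tar.gz',
    Disposition => 'attachment',
);
$msg->send('sendmail');    # or: $msg->send('smtp', 'mail.example.com')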

http://danconia.org

Thanks,

Mark

On Apr 25, 2004, at 2:53 PM, Wiggins d'Anconia wrote:

Mark Wheeler wrote:

Hi,
I just installed 10.3 and am trying to get a cron job to fire off a 
perl script which will send an email saying the cron job was completed.
crontab listing
* * * * * /Users/blah/Library/Scripts/test.pl
Here is the script:
test.pl

#!/usr/bin/perl -w
use strict;
my $from = '[EMAIL PROTECTED]';
my $to = '[EMAIL PROTECTED]';
my $body = "Success.";
open (SENDMAIL, "| mail -t");


Check that open succeeded, and use a full path to 'mail', especially 
since under cron your PATH may be different/restricted.

open (SENDMAIL, "| /usr/bin/mail -t") or die "Can't pipe to sendmail: $!";

Having said that, I would suggest not using mail directly at all, 
instead install a mail handling Perl mod from CPAN, there are lots of 
them.

print SENDMAIL "Subject: Backup Email Test\n";
print SENDMAIL "From: $from\n";
print SENDMAIL "To: $to\n\n";
print SENDMAIL $body;
close (SENDMAIL);
exit;
--
I have enabled Postfix to be running and have sent and received an 
email from the command line. I've also executed the script run from 
the command line. But the script doesn't seem to be sending an email. 
Do I need to get perl set to run in a settings file? I thought that I 
only needed to mess with settings files if I was going to use the web 
server. A little help would be appreciated.



Re: Web servers with cable DSL

2004-03-17 Thread Wiggins d'Anconia
Bill Stephenson wrote:
Well,

I think that Kevin (morbus) really did a good job of pointing out why I 
can't entirely do this yet. Some of the sites I host are critical to the 
businesses that use them and Verio has always provided a great service. 
Because they host on FreeBSD, developing on the Mac and porting to Verio 
is almost seamless even though Verio has never done anything special to 
accommodate this.

However, the fact that so many on this list are hosting sites with cable 
DSL indicates that I can possibly move some of the sites I host to a 
home office based server and still save a little money. I'll spend some 
time reviewing the sites and costs and see how the numbers crunch.

What about using http://directv.direcway.com/ to host servers? Anyone 
doing that?

Just my $.02: I host development at home over DSL and it is sufficient
for development purposes. But I have found commercial hosting cheap enough,
for the features I must have, to warrant it, and then I don't have to worry
about power failures (long ones), backups (as many), support, etc.

Direcway is rumored to have terrible upstream latency; I would think
hosting would be the last thing (next to hosting games) that you would
want to do over their service. There was a fair amount of discussion of
it in a /. story sometime in the last couple of months.

Using dyndns, hosting on Linux over Ameritech DSL.

http://danconia.org


Re: advanced stdout/stderr capturing?

2003-02-14 Thread Wiggins d'Anconia
Wiggins d'Anconia wrote:



I can't remember completely whether you can use it outside of the rest
of the POE environment or not.

Nope, you can't: "Unlike Components, Wheels do not stand alone. Each wheel
must be created by a session, and each belongs to their parent session
until it's destroyed."

But still have a look...

http://danconia.org



Re: konfabulator -- something to ponder

2003-02-12 Thread Wiggins d'Anconia


Puneet Kishor wrote:

 Some time in early 2000 Arlo Rose came up with an idea for a cool
little application. It would use XML to structure images, and a
scriptable language, like Perl, in such a way that someone who knew
the basics of Perl could put together cool little mini-applications.
The goal was that these mini-applications would just sit around on
your desktop looking pretty, while providing useful feedback.

All he ever really wanted was to have a cool looking battery monitor
and something that told him the weather, but he knew the
possibilities for something like this could potentially be limitless.

Fast forward a couple of years when Arlo began working with Perry
Clarke at Sun Microsystems. Over lunch one afternoon Arlo gave Perry
the basics of this dream app. Perry suggested that JavaScript would
be far easier for people to digest. He was right. It's the basis for
Flash's ActionScript, and Adobe's scripting engine for Photoshop. Of
all the choices, JavaScript made the most sense. Shortly after that
lunch, the two began to spend their nights and weekends making this
thing a reality.

A half year later Konfabulator is ready for download, and now it's
up to you to see if they were right about how cool this thing can be!




http://gkrellm.net anyone ;-) (granted it isn't javascript) ...sorry 
that must be my linux background showing through again...back in the 
cage you nasty little penguin

http://danconia.org



Re: unix or mac-style text files?

2002-11-19 Thread Wiggins d'Anconia
There is some discussion of this issue in the docs, check out:

perldoc perlport

and page through to the section called "Newlines"...

I guess the real question is whether Perl on OS X qualifies as MacPerl
or Unix perl. I defer to the Mac OS X experts, but would guess Unix perl.
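Either way, a quick sketch of coping with whatever an editor wrote (just an
illustration; it slurps the whole file, which is fine for small ones):
normalize CR, CRLF and LF on the way in so the perl code never cares.

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', $ARGV[0] or die "open: $!";
my $text = do { local $/; <$fh> };   # slurp the whole file
close $fh;

$text =~ s/\015\012?/\012/g;         # CRLF or bare CR -> LF
print $text;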

http://danconia.org


Heather Madrone wrote:
I've already encountered a few text file anomalies on OS X. Most GUI
applications seem to default to Mac-style text files (carriage returns
only), but shell programs such as vi do not handle Mac-style text files
gracefully.

Is perl on the Mac going to care whether source files are Mac-style or
Unix-style? Is it going to have difficulty reading and operating on
either kind of file? What kind of text files will it write?

Thanks in advance for any illumination.

-hmm
[EMAIL PROTECTED]






Re: hard links on HFS+ (now even further off topic...)

2002-11-18 Thread Wiggins d'Anconia


Ken Williams wrote:


On Monday, November 18, 2002, at 06:13  AM, Wiggins d'Anconia wrote:


Heather Madrone wrote:


Most of my career was spent as a C/C++ systems programmer.
The damage I can do with a command line as root is nothing
compared to the damage I can do with a C compiler.



This makes no sense? Compiling as a non-root user can cause more 
damage than a root enabled user?


She's saying that she's writing systems programs, which (when run) can 
cause a great deal of damage if they contain errors or malicious code.


But then are we to assume that the programs are getting written in the
production environment, and put into place for execution without testing
or code audits? Again, the discussion was about running as a privileged
user for everyday activities (granted, we are way off the original topic,
which didn't start out as this, but that is where it had been taken).
Naturally a program can cause damage when run in a privileged manner, but
that damage should be prevented several phases before the program is put
into a place where damage can be caused.

http://danconia.org



Re: hard links on HFS+ (now even further off topic...)

2002-11-17 Thread Wiggins d'Anconia


Heather Madrone wrote:

At 03:29 PM 11/17/2002 -0500, William H. Magill wrote:


We're saying much of the same thing; however, this problem which you
describe is not an OS or vendor level problem and not even an ACL problem.
It's a programmer/admin attitude problem, exemplified by the constant stream
of questions asking how to login as root under OS X, or why they can't su to
root anymore. It's a basic mentality; a way of thinking about the
problem/issue.


I'm dyed with that mentality from head to toe; I like having a
root password in my pocket.  On personal systems, I always use
an account with full administrator privileges.  It seems silly
to have one account for me as a human being and another for me
as God.



You've never typed a wrong command?


In a corporate environment, I can certainly understand wanting
layers of protection, but, in many cases, the layers of
protection seem much more complicated than they need to be.
You can waste a lot of time if you have to wait for someone
with the right password to move a file or install a printer.
People, being people, almost invariably configure systems
like clearcase so that they are more trouble than they are
worth.



I think this is where the real distinction comes in. If we are talking
about a corporate environment that is a 30 person single office company
that uses a 386 Linux system to run their laser printer, that is one
thing; how many people could actually have access anyway? But it is
entirely different in a large company that is more focused on security
and has to be, or in a college environment where you may have ten
thousand or more students on a system, 10% of whom are trying to see if
they can crack root.



The concepts of distributed authority are simply foreign to the Unix (and
Linux) community. And the problems are exacerbated by the fact that the
traditional Unix System Administrator still expects to do everything as
root. The vendors are just responding to customer demand -- or more
accurately, the lack thereof -- for security features. Tru64 Unix (aka OSF/1
aka Digital Unix) has supported a C2 environment out-of-the-box since its
first release back in about 1990. But is it used? No. The few who wanted
enhanced security only wanted a shadow password file, because that's all
that BSD and Sun offered. They were not interested in taking the time to
learn the ins and outs of C2 because "we don't need that level of security."


Well, do they?  Are the reduced risks worth the increased
administrative costs?


Depends on what is at stake. Again, if it is a printer that won't be used
for a couple of hours, who cares; if it is several billion dollars of
transfers that won't happen for a day or two, then it is a real problem,
and it is worth the extra admin costs to know that the only people dorking
with your systems *should* know what they are doing, and even they may
make mistakes occasionally.


I worked in hard and soft crash recovery systems for years.  My job
was to be able to get database systems back online fast if someone
ran a forklift through the machine room.  I spent my time devising
systems that wouldn't crash, and, when they did crash, would come
back up quickly without losing a scrap of data.

Aside from enterprise-critical database operations, most installations
didn't care.  If their disks crashed, they could hire a bank of
secretaries to type their data back in.  


This is the wrong logic to use. Why would anyone use a computer at all?
I mean, why talk on the phone when someone could just meet in person?
Why use a database when you could just hire a million secretaries to
remember 10 phone numbers?

I can't imagine many Mac installations that justify the sorts of
protections you're suggesting.  Protect the servers, sure, but
don't wall the users off from their own systems so they have to
call ops in every time they insert a CD.



Not now, but in the future...Apple is trying to enter this space, and 
like I mentioned earlier the higher education space.

Personally I keep my account with sudo shell access so that when I need 
to do something as root it is a conscious effort. And I will admit it 
has still come back to bite me on occasion.

Which brings me to a new point in the discussion: I am surprised no one
has mentioned sudo. I like it as a method of control, that is, controlling
what a user can do rather than what files they can and can't read/write.
Obviously this requires knowledge about the relationships between the
files and the applications, and it allows for a different kind of access.

But then again I am biased; I come from the Linux side of things rather
than windoze or classic Mac.

http://danconia.org



Re: hard links on HFS+ (now even further off topic...)

2002-11-17 Thread Wiggins d'Anconia
Heather Madrone wrote:

At 10:38 AM 11/17/2002 -0500, Wiggins d'Anconia wrote:


Heather Madrone wrote:


At 03:29 PM 11/17/2002 -0500, William H. Magill wrote:


We're saying much of the same thing; however, this problem which you
describe is not an OS or vendor level problem and not even an ACL problem.
It's a programmer/admin attitude problem, exemplified by the constant
stream of questions asking how to login as root under OS X, or why they
can't su to root anymore. It's a basic mentality; a way of thinking about
the problem/issue.


I'm dyed with that mentality from head to toe; I like having a
root password in my pocket.  On personal systems, I always use
an account with full administrator privileges.  It seems silly
to have one account for me as a human being and another for me
as God.


You've never typed a wrong command?



Most of my career was spent as a C/C++ systems programmer.
The damage I can do with a command line as root is nothing
compared to the damage I can do with a C compiler.



This makes no sense? Compiling as a non-root user can cause more damage 
than a root enabled user?

I'm careful, and I install safety nets when I need them.  I'm fanatical
about backups.  There isn't anything I could do to my Mac on the command
line that would cause any permanent harm. 


Safety nets? Sounds like running in a non-privileged environment.  How
often do you run backups, every ten seconds? How about ten minutes? 1
hour? 8 hours?  My place of business easily crawls through $300,000
worth of transactions in an 8 hour period, and there are networks doing
a whole lot more than we are. Backups are great for yesterday's data,
but demanding clients seem to want all of their data; I know, they are
rather pesky that way ;-).

Permissions are not much of a hedge against sloppiness.  If you're
careless, then how much difference is it going to make if you have
to log into an administrative account before you start typing commands?


This is true. But having to enter a password every 5 minutes or so, or
even each time you log in to do those specific tasks, re-enforces the fact
that you are working in such a way that you can do damage.



I worked in hard and soft crash recovery systems for years.  My job
was to be able to get database systems back online fast if someone
ran a forklift through the machine room.  I spent my time devising
systems that wouldn't crash, and, when they did crash, would come
back up quickly without losing a scrap of data.
Aside from enterprise-critical database operations, most installations
didn't care.  If their disks crashed, they could hire a bank of
secretaries to type their data back in.  

This is the wrong logic to use. Why would anyone use a computer at all? I
mean, why talk on the phone when someone could just meet in person? Why use a
database when you could just hire a million secretaries to remember 10 phone
numbers?


It's not my logic.  I had a platter on my wall with most of the
oxide scraped off from a head crash with a little sign that said
"Your data was here."  It was the logic of the people making the
budget decisions for large telecommunications firms.

I spent a lot of time evangelically promoting backups and mirrored
safe disks and whatnot.  A lot of installations only did backups
sporadically and played head crash roulette.  Almost all of them
won.



I can't imagine many Mac installations that justify the sorts of
protections you're suggesting.  Protect the servers, sure, but
don't wall the users off from their own systems so they have to
call ops in every time they insert a CD.


Not now, but in the future...Apple is trying to enter this space, and like I mentioned earlier the higher education space.

Personally I keep my account with sudo shell access so that when I need to 
do something as root it is a conscious effort. And I will admit it has still 
come back to bite me on occasion.


I haven't done any appreciable damage to a system since 1983,
when I accidentally formatted a hard drive (criminally easy on
DOS).

Come to think of it, it was criminally easy to do serious damage to
every computer I worked on before 1995 or so.  And programmers have
fought every change to make systems more secure as long as I can
remember.  The true hacker (in the old sense) doesn't want anything
between himself and the hardware.



A true hacker wants his job to be as hassle-free as possible while getting
the most done, else we would all still be writing assembler; C, Java, and
most certainly Perl would never have come to be.  But getting the job done
means the proper methodologies must be put in place, i.e. security, backups
as you stated, and other such processes.  I am most certainly glad the
programmers you speak of don't have access to my machines.  Security to a
programmer should rank right up there with efficiency; it is the only way
to predict a sane environment for our concoctions.


The challenge, I think, is to design security systems that are
as simple

Re: hard links on HFS+

2002-11-16 Thread Wiggins d'Anconia
I suppose that is better than one word. RTFM ;-)

http://danconia.org

Lou Moran wrote:

Please understand this is no flame... but I got two words for you:

Goo Gle

Look it up.


On Saturday, Nov 16, 2002, at 23:17 America/New_York, Joseph Kruskal wrote:


On 11/1/02 3:47 AM, William H. Magill at [EMAIL PROTECTED] wrote:


... journaled file system ...


What is a journaled file system?


... the user level -- REAL ACLs being one of particular interest ...


What are ACLs?
What are REAL ACLs?


... especially for C2 type enterprise applications...


What are C2 type enterprise applications?

Thanks, Joe
--
Joseph B Kruskal  [EMAIL PROTECTED]







--
Lou Moran
[EMAIL PROTECTED]
http://ellem.dyn.dhs.org:5281/resume/