Re: [OpenAFS] Integrating previous volumes into new AFS installation

2006-05-11 Thread Marcus Watts
[EMAIL PROTECTED] writes:
> From: [EMAIL PROTECTED]
> To: openafs-info@openafs.org
> Subject: [OpenAFS] Integrating previous volumes into new AFS installation
> Date: Thu, 11 May 2006 19:48:46 -0700 (PDT)
> 
> I recently had a server hacked so I had to do a fresh reinstall of the OS.
> My afs volumes are still intact. I have my old KeyFile and old db files
> from my previous setup but am having problems getting my AFS server back
> up using my old files. Is there a way to do a fresh installation of AFS
> and bring back my old volumes using newly generated keys and db files?
> 
> Right now I have my .vol's mounted on vicep* and have installed
> openafs-1.2.10, along with the kernel module and server, and am using my
> old KeyFile and db files. When I run /etc/init.d/ I get the following error:
> libafs-2.4.18-14-i386.mp.o
> Failed to load AFS client, not starting AFS services.
>
> I'm kinda new to all this, so if none of this makes sense let me know and
> I'll try to be a little more clear.
...

You want to scrap and remake your keyfile, as was just recommended for
"Gabe ListAccount".  If your kdc (krb5kdc or kaserver) was on the same
machine, that's just the start of what you need to consider compromised
- *every* other principal in your kdb is also potentially compromised.

prdb does not contain secrets, nor does vldb, so you can reuse that no
problem.  Note that ptserver does not automatically make all things
visible, so be aware your vandal now knows even your hidden
associations.  It is harmless to rebuild vldb from scratch.  You almost
certainly don't want to rebuild prdb from scratch; if you do, figure
out how you're going to preserve viceids.
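If you do decide to rebuild the VLDB, you don't have to type entries back in by hand; here's a sketch of resyncing it from what the fileservers actually have on disk (the server name is hypothetical, and you need admin tokens or -localauth on the machine):

```sh
# Create or fix VLDB entries for every volume found on the fileserver's
# /vicep* partitions, then drop VLDB references to volumes that are gone.
vos syncvldb fs1.example.com -verbose
vos syncserv fs1.example.com -verbose
```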

The volumes themselves do not contain any secrets or signatures.  You
should be able to continue to use them with no problems.  You may want
to consider doing backups and worrying about DRP; if you had been doing
this already you would have additional options now.

You probably want to run openafs-1.2.13, not 1.2.10.  There should
not be any problems just moving the data to the new version (note:
in theory this is not totally guaranteed between any two versions,
although in practice it nearly always works).

The kernel extension you apparently built (libafs-2.4.18) isn't compatible
with your kernel.  Some part of your build or install process failed
to do the right thing.  Check timestamps on the kernel module and
on your kernel, and make sure one isn't old.  To build the kernel
extension, in general, you need to get kernel source & .config
for *exactly* the kernel that you plan to run, do at least "make config",
then point the AFS build process at the result.  In slightly more detail:
To just configure virgin kernel source:
< put what you already used in .config >
make oldconfig
#(use this if I'm wrong in thinking you don't need this:)
# make dep
(If you're using something like RH, beware; get their source
not that from kernel.org.)
To build just the openafs cache manager:
configure --with-linux-kernel-headers= ...
make SYS_NAME=i386_linux24 dest_only_libafs
It is also a valid choice to build the kernel from scratch,
install that, then point openafs at that, build it all from scratch,
and run the result.  If you don't give the openafs configure
a pointer to the right linux kernel headers, it will probably pick the
kernel headers that were used to build libc, which probably does not
match the kernel you are running.  Some distributions have a separate
package you can install which contains the linux kernel headers
specifically for the corresponding bootable kernel that you
might run.
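Put together, the sequence might look like this; the source directories and kernel version here are examples only, and must match the kernel you actually boot:

```sh
# 1. Configure virgin kernel source to match the running kernel.
cd /usr/src/linux-2.4.18-14
cp /boot/config-2.4.18-14 .config   # the config your kernel was built with
make oldconfig
make dep                            # 2.4-era step; harmless if unneeded

# 2. Point the OpenAFS build at that tree and build only the cache manager.
cd /usr/src/openafs-1.2.13
./configure --with-linux-kernel-headers=/usr/src/linux-2.4.18-14
make SYS_NAME=i386_linux24 dest_only_libafs
```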

Linux kernel version 2.4.18 sounds really, really old.
I don't need or want to know what you're running, but you may want
to investigate your distribution to see if what you've got is really
up to date on security patches or whether you should be running something
quite different.

-Marcus
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] OpenAFS and Kerberos Workshop 2006

2006-05-11 Thread Esther Filderman

Sadly, the page used [from acis.as.cmu.edu] is out of our control.  I
can complain to CMU about it, but who knows what will happen.

[The page info does claim that it is AES-256 (256-bit) encrypted,
if that means anything.]

On 5/11/06, Robert Petkus <[EMAIL PROTECTED]> wrote:

Well, the frame is https (https://acis.as.cmu.edu/cc/gather_info.cgi) --
still, probably a good idea to fix this.

Robert

Robert Petkus wrote:

> I'd like to register for the conference, but the credit card payment
> page is not SSL encrypted...
>



[OpenAFS] Integrating previous volumes into new AFS installation

2006-05-11 Thread daniel
I recently had a server hacked so I had to do a fresh reinstall of the OS.
My afs volumes are still intact. I have my old KeyFile and old db files
from my previous setup but am having problems getting my AFS server back
up using my old files. Is there a way to do a fresh installation of AFS
and bring back my old volumes using newly generated keys and db files?

Right now I have my .vol's mounted on vicep* and have installed
openafs-1.2.10, along with the kernel module and server, and am using my
old KeyFile and db files. When I run /etc/init.d/ I get the following error:
libafs-2.4.18-14-i386.mp.o
Failed to load AFS client, not starting AFS services.

I'm kinda new to all this, so if none of this makes sense let me know and
I'll try to be a little more clear.

-Daniel




Re: [OpenAFS] /usr/afs/etc/KeyFile from krb4?

2006-05-11 Thread Gabe ListAccount
There is currently only 1 server.  Will generating a new KeyFile corrupt
data or the database, or somehow lose user accounts?  Can someone give me
a quick rundown on how to do this; it's been quite a while.  Also, if
there is a way to convert the old KeyFile and db to something usable, any
pointers would be much appreciated.

Thanks,
    Gabe

"Christopher D. Clausen" <[EMAIL PROTECTED]> wrote:

> Gabe ListAccount wrote:
> > Hello,
> > I have a server that was hacked, and thus a new OS (CentOS4) was
> > installed. I setup OpenAFS 1.4; openafs-krb5-1.4.1 was installed. I
> > dropped the old db files as well as the KeyFile into their respective
> > directories. I don't think this was appropriate. How do I convert the
> > old KeyFile and db (from OpenAFS 1.2.10) to be compatible with krb5?
>
> Uhh, well, if your server was hacked you likely do not want to use the
> old KeyFile and should instead generate a new one.  You would need to
> add the updated key to all AFS servers in your cell and you should
> remove the old key as quickly as possible.
>
> In the past people have used something called the Kerberos 5 Migration
> Kit to go from AFS kaserver to Kerberos 5.  I'm not sure if that is
> still the recommended thing to do or not though.  I thought that at
> least MIT Kerberos 5 could read the older Kerberos db file from
> kaserver.

Re: [OpenAFS] File corruption, 1.4.1 on linux

2006-05-11 Thread Derrick J Brashear

On Thu, 11 May 2006, Miles Davis wrote:


Sorry -- clone on same site. I haven't tried making a clone on another site yet
(I didn't used to have it replicated, but decided to test that out).


So if the copy in the RO had not also gone bad I would have been scared.
Is it changing size, or content?

Use md5 and ls -l on the file. You can find it with help from 
/afs/andrew.cmu.edu/usr/shadow/volid.pl (give it the RW's volume id number 
as input)
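For instance, comparing the RW and RO copies directly (hypothetical cell and path; by convention /afs/.cellname reaches the RW tree and /afs/cellname the RO):

```sh
md5sum /afs/.example.edu/pub/fedora/hdparm-5.9-1.i386.rpm   # RW copy
md5sum /afs/example.edu/pub/fedora/hdparm-5.9-1.i386.rpm    # RO copy
ls -l /afs/.example.edu/pub/fedora/hdparm-5.9-1.i386.rpm \
      /afs/example.edu/pub/fedora/hdparm-5.9-1.i386.rpm
```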


Derrick



Re: [OpenAFS] File corruption, 1.4.1 on linux

2006-05-11 Thread Miles Davis
On Thu, May 11, 2006 at 04:36:42PM -0400, Derrick J Brashear wrote:
> On Thu, 11 May 2006, Miles Davis wrote:
> 
> >release the volume, RO copy looks good too. Everything right with the world.
> >Now, wait some amount of time, do another install, and whamo -- bad rpm 
> >again,
> >*even on the ro volume*.
> 
> Which is a clone on the same site? Or elsewhere?

Sorry -- clone on same site. I haven't tried making a clone on another site yet 
(I didn't used to have it replicated, but decided to test that out).

-- 
// Miles Davis - [EMAIL PROTECTED] - http://www.cs.stanford.edu/~miles
// Computer Science Department - Computer Facilities
// Stanford University


Re: [OpenAFS] File corruption, 1.4.1 on linux

2006-05-11 Thread Derrick J Brashear

On Thu, 11 May 2006, Miles Davis wrote:


release the volume, RO copy looks good too. Everything right with the world.
Now, wait some amount of time, do another install, and whamo -- bad rpm again,
*even on the ro volume*.


Which is a clone on the same site? Or elsewhere?


Re: [OpenAFS] OpenAFS and Kerberos Workshop 2006

2006-05-11 Thread Robert Petkus
Well, the frame is https (https://acis.as.cmu.edu/cc/gather_info.cgi) -- 
still, probably a good idea to fix this.


Robert

Robert Petkus wrote:

I'd like to register for the conference, but the credit card payment 
page is not SSL encrypted...






[OpenAFS] OpenAFS and Kerberos Workshop 2006

2006-05-11 Thread Robert Petkus
I'd like to register for the conference, but the credit card payment 
page is not SSL encrypted...


--
Robert Petkus 


Brookhaven National Laboratory
Physics Dept. - Bldg. 510A
Upton, New York 11973
Tel.   : +1 (631) 344 3258
Fax.   : +1 (631) 344 7616

http://www.bnl.gov/RHIC
http://www.acf.bnl.gov



[OpenAFS] File corruption, 1.4.1 on linux

2006-05-11 Thread Miles Davis
OK, I'm having deja vu, but I can't remember when I saw this 
last...1.3.80something probably, and I think I blamed hardware at the time or 
something.

I've got a mirror of fedora, among other things. After an upgrade on the server 
side to openafs-1.4.1, I had some problems doing installs from my mirror 
because of bad RPMS. Sure enough, a handful were bad, e.g.

$ rpm -qp hdparm-5.9-1.i386.rpm
error: hdparm-5.9-1.i386.rpm: headerRead failed: tag[7]: BAD, tag 1006 type 4 
offset 1009741886 count 218762506

OK, don't know why that happened, but I need to do installs, so nuke it and 
resync to parent mirror. Check the RPM again, everything is good.

$ rpm  -K hdparm-5.9-1.i386.rpm
hdparm-5.9-1.i386.rpm: sha1 md5 OK

release the volume, RO copy looks good too. Everything right with the world.
Now, wait some amount of time, do another install, and whamo -- bad rpm again, 
*even on the ro volume*.

$ rpm -qp hdparm-5.9-1.i386.rpm
error: hdparm-5.9-1.i386.rpm: headerRead failed: tag[7]: BAD, tag 1006 type 4 
offset 636236390 count 1482459716

It only seems to happen if I do a lot of reads (like, a kickstart install out
of that dir, or checking all the rpms). Once the file is corrupt, if I replace
it with a good version, the RO is still corrupt until release. The corruption 
appears on all clients.

Things I've tried to no avail:

Moved volume to new partition on server
Salvage volume
Tried from different clients (i386, x86_64, 1.4.1 and 1.3.86)
Tried disk cache vs memcache on 1.4.1

I don't see any problems on the file server end with either the hardware (no 
errors reported) or underlying filesystem (XFS).

-- 
// Miles Davis - [EMAIL PROTECTED] - http://www.cs.stanford.edu/~miles
// Computer Science Department - Computer Facilities
// Stanford University


Re: [OpenAFS] /usr/afs/etc/KeyFile from krb4?

2006-05-11 Thread Ken Hornstein
>In the past people have used something called the Kerberos 5 Migration 
>Kit to go from AFS kaserver to Kerberos 5.  I'm not sure if that is 
>still the recommended thing to do or not though.  I thought that at least 
>MIT Kerberos 5 could read the older Kerberos db file from kaserver.

The migration kit is mostly going away; aklog and asetkey now have
permanent homes in OpenAFS, and fakeka is now part of MIT Kerberos.  The
database conversion tool unfortunately does not have a good home right
now; one of these days I'm going to get around to putting it into OpenAFS,
and hopefully remove all of the internal Kerberos dependencies.

--Ken


Re: [OpenAFS] /usr/afs/etc/KeyFile from krb4?

2006-05-11 Thread Christopher D. Clausen
Gabe ListAccount <[EMAIL PROTECTED]> wrote:
> Hello,
>I have a server that was hacked, and thus a new OS (CentOS4) was
> installed. I setup OpenAFS 1.4 , openafs-krb5-1.4.1 was installed. I
> dropped the old db files as well as the KeyFile into their respective
> directories. I don't think this was appropriate. How do I convert the
> old KeyFile and db (from OpenAFS 1.2.10) to be compatible with krb5?

Uhh, well, if your server was hacked you likely do not want to use the
old KeyFile and should instead generate a new one.  You would need to add
the updated key to all AFS servers in your cell and you should remove
the old key as quickly as possible.

In the past people have used something called the Kerberos 5 Migration
Kit to go from AFS kaserver to Kerberos 5.  I'm not sure if that is
still the recommended thing to do or not though.  I thought that at least
MIT Kerberos 5 could read the older Kerberos db file from kaserver.
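For the krb5 case, a sketch of rolling the key (cell name, server name, and the kvno numbers are all examples; asetkey ships with the openafs-krb5 tools):

```sh
# Randomize the afs service key, export it to a keytab, install it in
# the KeyFile on every server, then retire the compromised kvno.
kadmin -q 'change_password -randkey afs/example.com'
kadmin -q 'ktadd -k /tmp/afs.keytab -e des-cbc-crc:v4 afs/example.com'
asetkey add 4 /tmp/afs.keytab afs/example.com     # 4 = kvno that ktadd reported
bos listkeys fs1.example.com -localauth           # verify the new key is present
bos removekey fs1.example.com -kvno 3 -localauth  # remove the old key
```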



[OpenAFS] kas:setpasswd: ticket contained unknown key version number

2006-05-11 Thread Erwin Broschinski
Hi,

we are running 5 DB-servers for our AFS cell:

4 are Solaris 8 OpenAFS 1.4.1,
1 is Solaris 2.6 OpenAFS 1.2.11

still with AFS-Kerberos (klog) :^((

Since yesterday it is not possible to create new users or change passwords for
AFS accounts:

>kas setpass testaccount something -admin someadmin

kas:setpasswd: ticket contained unknown key version number so can't set
password for testaccount


>kas exa afs -admin someadmin

Administrator's (afssdadmin) Password: 

User data for afs
  key (2) cksum is 3294702545, last cpw: no date
  password will never expire.

>bos listkeys someDBserver
key 0 has cksum 1169391585
key 1 has cksum 1182590319
key 2 has cksum 3294702545

Which looks OK?

I then bos stop vl, ka and pt on the Solaris 2.6 DB server, bos restart
-bosserver the sync site, and start the instances on the Solaris 2.6 server
again.  Then everything runs fine - for somewhere between 20 minutes and an
hour.

Any idea (apart from replacing the old DB server)?


Thanks for any help.


Erwin
 ''`'
~O-O~~~
ETH Tel:+41 44 632 4281
Erwin Broschinski   Fax:+41 44 632 1022 
Informatikdienste   E-Mail: [EMAIL PROTECTED]
Clausiusstrasse 59  PGP-key:  
8092 Zurich, Switzerlandwww.tik.ee.ethz.ch/~pgp/Search.html
~~~

"Ceterum censeo, 'Parvam Mollim' esse delendam."  (nach Cicero)


Re: [OpenAFS] multiple afs client, or have two caches

2006-05-11 Thread Rainer Toebbicke

Todd M. Lewis wrote:

If you've got a list of files that you really want to keep cached, just 
put the file names into a list and do something like this:


while true ; do
    for f in `cat ~/.afs.cache.list` ; do
        head $f > /dev/null
        sleep 10
    done
    sleep 600
done



As AFS caches chunks rather than whole files, the above solution would
just keep the first chunk of each file in the cache, not the complete file.


Also, Ted did not say whether the files are only read.  If they are, the
need for a RAM cache is relative: on modern systems, frequently read files
will reside in the system's buffer cache anyway.


I agree though that being able to pin chunks in the cache could have 
its merits, for example on a notebook if you know that the next time 
you connect it'll be over a slow line. Can't be that big a change...
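A sketch of a whole-file variant of Todd's loop (same hypothetical ~/.afs.cache.list file; reading with cat instead of head pulls every chunk through the cache):

```shell
# warm_cache reads each file named in the list in full, so all of its
# chunks -- not just the first, as with head(1) -- land in the AFS cache.
warm_cache() {
    while read -r f; do
        [ -f "$f" ] && cat "$f" > /dev/null
    done < "$1"
}

# e.g. from cron or a loop:
#   warm_cache ~/.afs.cache.list
```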


--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Rainer Toebbicke
European Laboratory for Particle Physics(CERN) - Geneva, Switzerland
Phone: +41 22 767 8985   Fax: +41 22 767 7155


[OpenAFS] /usr/afs/etc/KeyFile from krb4?

2006-05-11 Thread Gabe ListAccount
Hello,
    I have a server that was hacked, and thus a new OS (CentOS4) was
installed. I setup OpenAFS 1.4; openafs-krb5-1.4.1 was installed. I
dropped the old db files as well as the KeyFile into their respective
directories. I don't think this was appropriate. How do I convert the
old KeyFile and db (from OpenAFS 1.2.10) to be compatible with krb5?

Thanks in Advance,
    Gabe

Re: [OpenAFS] multiple afs client, or have two caches

2006-05-11 Thread Todd M. Lewis
You might be surprised at the differences between what you think should 
be in the cache and what actually should be there. But in any case, the 
client doesn't have any sort of multi-policy cache capabilities that I'm 
aware of.


If you've got a list of files that you really want to keep cached, just 
put the file names into a list and do something like this:


while true ; do
    for f in `cat ~/.afs.cache.list` ; do
        head $f > /dev/null
        sleep 10
    done
    sleep 600
done

That's just off the top of my head and may be busted or dead wrong here 
and there (I have no intention of trying to run it), but you get the 
idea, which is to keep using the files you want to keep in the cache.


The cache does a pretty good job. It's not usually worth trying to 
outsmart it, but then I haven't really had the need to try. YMMV.

(Oh BTW, your shift key seems to be malfunctioning a lot, too. :^)
--
   +--+
  / [EMAIL PROTECTED]  919-445-9302  http://www.unc.edu/~utoddl /
 /   I just got lost in thought. It was unfamiliar territory.   /
+--+


ted leslie wrote:

i'd like to take advantage of the afs caching for added performance,
is it possible to have two separate caches?
that is, i know there will be a common set of files
used over and over again, that really should remain in RAM cache
all the time, but other large files may also be cached and
wipe out the other ones that i'd like to always have cached.
One way I can fix this is if i could have two afs clients on the same
workstation,
one that has access to the afs volumes that have the very popular files,
and another that has access to the infrequent but large files that
otherwise would
spoil the cache. Can one have two separate afs clients running on a workstation?

Another sol'n might be to be able to control what is and isn't cachable?

any ideas/thoughts/sol'ns would be appreciated.

thanks,

-tl


Re: [OpenAFS] multiple afs client, or have two caches

2006-05-11 Thread Horst Birthelmer

On May 11, 2006, at 12:19 PM, ted leslie wrote:

i'd like to take advantage of the afs caching for added performance,
is it possible to have two separate caches?
that is, i know there will be a common set of files
used over and over again, that really should remain in RAM cache
all the time, but other large files may also be cached and
wipe out the other ones that i'd like to always have cached.
One way I can fix this , is if i could have two afs clients on the  
same workstation,

one that has access to the afs that have the very popular files,
and other that has access to the infrequent, but large files that  
otherwise would
spoil the cache. Can one have two separate afs clients running on a
workstation?


Another sol'n might be to be able to control what is and isn't
cachable?


any ideas/thought/sol'n would be appreciated.


Just some stupid questions/comments ... to your 'solutions'.

If you have files which are always in the cache and never change, why  
don't you copy them over to that particular machine? (even on a  
ramdisk, if you need it) You can't get faster than your actual  
hardware anyway.


Who's going to decide what is going to be in what cache?
Whoever is doing that, sooner or later it's gonna be wrong ;-)

I'm sure it helps a lot more to rethink the location of your data.
AFS is a network file system, which means most of the data lives on some
server.
The cache was never meant to hold your complete cell data (that
idea got more popular with the decreasing cost of storage).

If you don't need it there, don't put it there ... :-)

Of course, I don't know what you're doing with that particular
client, but a properly configured cell with a good client configuration
should really be sufficient for the daily usage of a workstation.


Horst


[OpenAFS] multiple afs client, or have two caches

2006-05-11 Thread ted leslie

i'd like to take advantage of the afs caching for added performance,
is it possible to have two separate caches?
that is, i know there will be a common set of files
used over and over again, that really should remain in RAM cache
all the time, but other large files may also be cached and
wipe out the other ones that i'd like to always have cached.
One way I can fix this is if i could have two afs clients on the same
workstation,
one that has access to the afs volumes that have the very popular files,
and another that has access to the infrequent but large files that
otherwise would
spoil the cache. Can one have two separate afs clients running on a workstation?

Another sol'n might be to be able to control what is and isn't cachable?

any ideas/thoughts/sol'ns would be appreciated.

thanks,

-tl