ted wrote:
Ok, after copy of libafs-2.6-xxx to /lib/modules/2.6-xx/../afs/libafs.koGo with the config.sh I sent, it has namei on do a make ;make installlook at the contents of /usr/vice and usr/afs before and after the make install cd /usr/vice;mkdir cache;cd etc;ls insmod libafs-2.6.18.2-34-default-ted.ko #if it doesn't complain, the kernel module compiled OK If it does complain, probably the kernel you built the .ko for is not the one your running... reboot into the correct kernel or fix the compile - cp libafs-2.6.18.2-34-default-ted.ko /lib/modules/2.6.18.2-34-default-ted/kernel/fs/afs/libafs.ko #put the .ko in the library tree +;cd /lib/modules/2.6.18.2-34-default-ted/kernel;depmod;- modprobe libafs #should automatically insmod the libafs.ko and running "depmod" I can use "modprobe" to install "libafs" without trouble. The "sunrpc" modules gets pulled in automatically. So far so good! I copied the "afs-client" script to /etc/init.d/ and created an /etc/sysconfig/afs-client from the listing you provided. I also made the aliases for starting and stopping the client and server.#########I think that the client can be tested against any cell in the standard CellServDB off the openafs website - it will obviously show only unauthenticated files - play around dyno:/usr/vice/etc # ls /afs .:mount .grand.central.org .home.ted-doris.fam grand.central.org home.ted-doris.fam I have tried setting the variables for a couple of the cells from the standard CellServDB and it seems to work fine, only a bit slow when doing a 'ls'. I can copy files from the mounted afs dirs and only the first time the copy takes time. After that it seems to be cached and then it's a fast copy. I can't reach (or even nslookup) your nome.home.ted-doris.fam, but , since the other cells seems to work with my client I guess I got the client working correctly. So thanks a lot so far :) I suppose I am ready for the Kerberos and server setup! Will try to read a little bit about Kerberos until I hear from you again! 
-Martin Lütken

My /usr/vice/etc looks like this:

    CellServDB  ThisCell  cacheinfo  libafs-2.6.18.2-34-default-ted.ko

cacheinfo is set up initially by /etc/sysconfig/afs-client but it can be set manually:

    /afs:/usr/vice/cache:800000

CellServDB:

    >home.ted-doris.fam    #Cell name, generated from /etc/sysconfig/afs-client
    10.1.1.193             #nome.home.ted-doris.fam
    >grand.central.org     # Grand Central Communications
    18.7.14.88             #grand-opening.mit.edu
    128.2.191.224          #penn.central.org

#nome.home.ted-doris.fam must be resolvable either in /etc/hosts or via DNS

ThisCell:

    home.ted-doris.fam

#note my domain is ted-doris.fam - this is covered in the krb5.conf file

Put the following in your .bashrc and restart your xterm:

    alias starts='/etc/init.d/afs-server start'
    alias startc='/etc/init.d/afs-client start'
    alias stopc='/etc/init.d/afs-client stop'
    alias stops='/etc/init.d/afs-server stop'
    alias startkdc='/etc/init.d/krb5kdc start;/etc/init.d/krb524d start;/etc/init.d/kadmind start'
    alias stopkdc='/etc/init.d/krb5kdc stop;/etc/init.d/krb524d stop;/etc/init.d/kadmind stop'

Paste the following into /etc/sysconfig/afs-client:

    #####################################################################
    ## Path: Network/File systems/AFS client
    ## Description: AFS client configuration
    ## Type: yesno
    ## Default: no
    #
    # Set to "yes" if you want to generate CellServDB and ThisCell files
    # from THIS_CELL and THIS_CELL_SERVER variables.
    # If you want a more complicated setting, set REGENERATE_CELL_INFO to "no"
    # and edit the files manually.
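When editing CellServDB by hand it is easy to break the format, so a small parser can double as a sanity check. This is a hypothetical sketch (the function names are invented), assuming the standard format shown above: ">cell #comment" header lines, followed by one "IP #hostname" line per database server:

```shell
# Hypothetical sketch: inspect a CellServDB file in the standard format
# (">cell  #comment" headers, one "IP  #hostname" line per dbserver).

list_cells() {
    # print every cell name (the ">" headers, with ">" stripped)
    awk '$1 ~ /^>/ { print substr($1, 2) }' "$1"
}

servers_for_cell() {
    # print the dbserver IPs listed between ">$2" and the next ">" header
    awk -v cell=">$2" '
        $1 == cell    { in_cell = 1; next }
        /^>/          { in_cell = 0 }
        in_cell && NF { print $1 }
    ' "$1"
}
```

With the file above, `servers_for_cell /usr/vice/etc/CellServDB grand.central.org` would print the two grand.central.org IPs, which is a quick way to confirm the entries the client will actually use.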
    #
    REGENERATE_CELL_INFO="no"    ###initially yes
    ## Type: string
    ## Default: ""
    #
    # This cell name
    #
    THIS_CELL="home.ted-doris.fam"
    ## Type: string
    ## Default: ""
    #
    # IP address of afs server for this cell
    #
    THIS_CELL_SERVER="10.1.1.193"
    ## Type: string
    ## Default: ""
    #
    # DNS name of afs server for this cell
    #
    THIS_CELL_SERVER_NAME="nome.home.ted-doris.fam"
    ## Type: yesno
    ## Default: yes
    #
    # Set to "yes" if you want to use data encryption (secure, slower)
    #
    DATA_ENCRYPTION="no"
    ## Type: yesno
    ## Default: yes
    #
    # Set to "yes" if you want to generate the cacheinfo file
    #
    REGENERATE_CACHE_INFO="no"    ###initially yes
    ## Type: string
    ## Default: ""
    #
    # AFS client configuration options
    #
    XXLARGE="-stat 4000 -dcache 4000 -daemons 6 -volumes 256 -files 50000"
    XLARGE="-stat 3600 -dcache 3600 -daemons 5 -volumes 196 -files 50000"
    LARGE="-stat 2800 -dcache 2400 -daemons 5 -volumes 128"
    MEDIUM="-stat 2000 -dcache 800 -daemons 3 -volumes 70"
    SMALL="-stat 300 -dcache 100 -daemons 2 -volumes 50"
    ## Type: yesno
    ## Default: yes
    #
    # Instead of mounting the home cell's root.afs volume at the AFS mount
    # point (typically /afs) a fake root is constructed from information
    # available in the client's CellServDB.
    # With this option enabled openafs can start up even on a network outage.
    #
    DYNROOT="yes"    ###initially no
    ## Type: yesno
    ## Default: yes
    #
    # use memory-only cache
    #
    MEMCACHE="no"
    ## Type: string(AUTOMATIC)
    ## Default: AUTOMATIC
    #
    # If you set CACHESIZE to "AUTOMATIC", it will automatically be chosen,
    # deduced from partition sizes (does not work if your cache is on / or
    # /usr or /var), or from machine memory size for a memory-only cache;
    # otherwise the values specified here will be used.
    #
    #CACHESIZE="AUTOMATIC"
    ## Type: string(AUTOMATIC,$XXLARGE,$XLARGE,$LARGE,$MEDIUM,$SMALL)
    ## Default: AUTOMATIC
    #
    # If you set OPTIONS to "AUTOMATIC", the init script will choose a set
    # of options based on the cache size, otherwise the values specified here
    # will be used.
    #
    OPTIONS="AUTOMATIC"
    ## Type: string(/var/cache/openafs)
    ## Default: /var/cache/openafs
    #
    # Path to cache directory; it is recommended to use a separate partition.
    # It does not work on reiserfs. A valid directory must be specified
    # even if a memory-only cache is used.
    # Recommended cache directory is "/var/cache/openafs"
    #
    CACHEDIR="/usr/vice/cache"
    ## Type: string(/afs)
    ## Default: /afs
    #
    # AFS directory. You should never need to change this
    #
    AFSDIR="/afs"
    ## Type: yesno
    ## Default: no
    #
    # Set to "yes" for a lot of debugging information from afsd. Only
    # useful for debugging as it prints _a lot_ of information.
    #
    VERBOSE="no"    ###initially yes
    ######################################################################

######### I think that the client can be tested against any cell in the standard CellServDB off the openafs website - it will obviously show only unauthenticated files - play around

    # start the client - no kerberos required - yet
    startc
    ls /afs
    .:mount  .grand.central.org  .home.ted-doris.fam  grand.central.org  home.ted-doris.fam

Next the server and kerberos.............

tedc

Martin Lütken wrote:

ted creedon wrote:
If you compile with the inode option use ext3 since it is a journaling filesystem and doesn't need an fsck on reboot.

The only place I can find 'inode' in the configure options for OpenAFS is this line:

    --enable-namei-fileserver    force compilation of namei fileserver in preference to inode fileserver

Should I leave this option out then when compiling OpenAFS, now that I chose to go with ext3?

-Martin

I used to use reiserfs but have changed to ext3.

The reason for two partitions /usr/vice and /vicepxx is that when I move a raid set of drives all the afs stuff goes with, except for the binaries, etc. I.e. on /dev/sda, sda1 could be /usr/vice and sda2 could be vicepa.
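Both the CACHEDIR comment in the sysconfig file above and ted's switch away from reiserfs point the same way: keep the disk cache off reiserfs. A hypothetical pre-flight check (the function name is invented; it relies on GNU coreutils' stat):

```shell
# Hypothetical check: refuse a cache directory that sits on reiserfs,
# per the CACHEDIR warning above. Uses GNU coreutils' "stat -f -c %T"
# to report the filesystem type of the given directory.

cache_fs_ok() {
    fstype=$(stat -f -c %T "$1") || return 1
    if [ "$fstype" = "reiserfs" ]; then
        echo "cache dir $1 is on reiserfs - pick another partition" >&2
        return 1
    fi
    return 0
}
```

Running `cache_fs_ok /usr/vice/cache` before starting afsd would catch the misconfiguration early instead of producing confusing cache errors later.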
Vicepa can also be loopback mounted too.

-----Original Message-----
From: Martin Lütken [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 21, 2007 8:27 AM
To: [EMAIL PROTECTED]
Subject: SV: [OpenAFS] Initial server setup

Ok! Just one (two) questions: should both the client cache and /vicepa be 'ext3' and not 'ext2'? Seems the other information I have come across so far says to use ext2.

-Martin

-----Original Message-----
From: ted creedon [mailto:[EMAIL PROTECTED]]
Sent: Tue 20-03-2007 23:48
To: Martin Lütken
Subject: RE: [OpenAFS] Initial server setup

PS if you make a new opensuse system use ext3 filesystems and make partitions:

    /usr/afs    1 gig                                  # client cache
    /vicepa     however many gig you want, I use 250   # server volumes and data

This way if you blow the OS away, you'll probably be able to save the client and server data.

_____
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Martin Lütken
Sent: Tuesday, March 20, 2007 8:40 AM
To: openafs-info@openafs.org
Subject: Re: [OpenAFS] Initial server setup

Christopher D. Clausen wrote:

Martin Lütken <[EMAIL PROTECTED]> wrote:
I have tried for a couple of weeks now to set up an OpenAFS server. I read through the IBM documentation and surfed the net. It seems the IBM documentation is somewhat outdated, or? Should I still use 'kaserver'? Sometimes I find statements saying not to, but the IBM documentation uses it. I know I should perhaps ask more specifically about my current problem, but I really have tried a lot and sometimes I get a little step forward.... IS THERE SOMEWHERE A STEP BY STEP GUIDE?

Not this detailed.

If you connect to the #openafs IRC channel on freenode, there are many wonderful people who can help you get started.
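Ted mentions that vicepa can also be loopback mounted. A sketch of that setup under stated assumptions: the backing-file path and size are hypothetical, and since the real commands need root, the script defaults to DRY_RUN=1 and only prints what it would run:

```shell
# Sketch: file-backed ext3 partition for /vicepa, as mentioned above.
# IMG and SIZE_MB are hypothetical defaults. DRY_RUN=1 prints the commands;
# DRY_RUN=0 executes them (requires root).

IMG=${IMG:-/var/lib/vicepa.img}
SIZE_MB=${SIZE_MB:-1024}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

make_vicepa() {
    run dd if=/dev/zero of="$IMG" bs=1M count="$SIZE_MB"   # allocate backing file
    run mkfs -t ext3 -q -F "$IMG"                          # ext3, per the advice above
    run mkdir -p /vicepa
    run mount -o loop "$IMG" /vicepa                       # loopback mount
}

make_vicepa   # with DRY_RUN=1 this only prints the four commands
```

For a permanent setup you would also add a matching `loop` entry to /etc/fstab so /vicepa comes back after a reboot.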
<<CDC Thanks I'll try that :-)

_______________________________________________
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info