Hello,
I have 2 servers (let's call them A and B). Server A holds all of the read/write
volumes; all of these volumes are replicated to B (read-only). The OpenAFS
services run on both, in pretty much identical set-ups. Both servers also have
the OpenAFS clients.
Hello Dirk,
Create r/o replicas on A (and on the same vice partition), too.
Will do.
The problem with your setup is that OpenAFS clients use the r/o path to
access data in replicated volumes, but you have put all your r/o replicas on
one single machine, so you have created a single point of failure.
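A sketch of that suggestion, with example server/partition/volume names (none of them from the original posts): add a read-only site on the machine that already holds the read/write copy, then release each volume. RUN defaults to echo, so the script only prints the commands.

```shell
#!/bin/sh
# Dry-run sketch: RUN=echo prints the commands; set RUN= to execute.
RUN="${RUN:-echo}"

# Add an r/o site for each volume on the RW server itself, then release.
add_local_ro() {
    server="$1" part="$2"; shift 2
    for vol in "$@"; do
        $RUN vos addsite "$server" "$part" "$vol" -localauth
        $RUN vos release "$vol" -localauth
    done
}

add_local_ro serverA vicepa home.alice home.bob
```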
Hello Andrew,
Cheers, I forgot to say _by hand_.
You can do this with 'vos convertROtoRW', but it's intended to be more
of a tool for disaster recovery (when you've permanently lost the RW,
and all you have are ROs), not generally for keeping up availability
while a server is temporarily down.
Thank you for explanations and pointers. I understand this better now...
Kind regards,
Vladimir
--
Do not put a reply at the top of the message, please...
Why not?
Because it reverses the logical flow of conversation and it is hard to follow.
Hello,
One of my servers is running out of inodes on /vicep?? partition...
Is it safe to:
1. shut down the OpenAFS file-server in question,
2. copy the content of the /vicep?? partition somewhere safe,
3. re-create the ext3 file-system on the affected partition
with better settings (for inodes),
4. copy the content back and start the file-server again?
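A dry-run sketch of the re-create step (the device name, mount point, and bytes-per-inode value are examples; mkfs.ext3's -i option sets bytes-per-inode, so a smaller value gives more inodes). It only prints the commands unless RUN is cleared:

```shell
#!/bin/sh
# Dry-run sketch: prints the commands; set RUN= to execute for real.
RUN="${RUN:-echo}"

recreate_vicep() {
    dev="$1" mnt="$2"
    # -i 4096 = one inode per 4 KiB of space (denser than the default)
    $RUN mkfs.ext3 -i 4096 "$dev"
    $RUN mount "$dev" "$mnt"
}

recreate_vicep /dev/sdb1 /vicepb
```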
Hello,
Have you already done the optimization of not doing a vos dump unless the
volume modify timestamp has changed since the last time you did one?
Nope, thanks for the pointer.
Also, note that vos dump can do incrementals. I know that doesn't work
very well with Tivoli's normal
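A sketch of the skip-unchanged optimization mentioned above, assuming 'vos examine -format' output carries an updateDate field in epoch seconds (the stamp directory and the VOS/STAMPDIR overrides are illustrative, not from the thread). For the incremental case, 'vos dump' also accepts a -time argument giving the date to dump from.

```shell
#!/bin/sh
# Sketch: dump a volume only when its last-update time is newer than the
# recorded stamp. VOS/STAMPDIR are overridable for testing; adjust paths.
VOS="${VOS:-vos}"
STAMPDIR="${STAMPDIR:-/var/lib/dump-stamps}"

dump_if_changed() {
    vol="$1" out="$2"
    stamp="$STAMPDIR/$vol"
    # machine-readable volume header; updateDate is epoch seconds
    upd=$($VOS examine "$vol" -format | awk '$1 == "updateDate" { print $2 }')
    last=$(cat "$stamp" 2>/dev/null || echo 0)
    if [ "${upd:-0}" -gt "$last" ]; then
        $VOS dump -id "$vol" -file "$out" && echo "$upd" > "$stamp"
    fi
}
```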
Hello,
Do you know a way to persuade the IBM Tivoli client to back up an OpenAFS
file-system?
Currently, I do vos dump on each volume, compress that, and the result goes to
tapes. But most of the data does not change, and it takes a very large chunk of
space in the backup vault.
Any pointers on
Hello,
I am running an OpenAFS version 1.4.2 server on Linux Debian Etch AMD64.
Recently, I was extending the RAID5 volume, which had one partition
holding a physical volume of an LVM2 volume group, which holds the LVM2
logical volumes backing the vicep? partitions:
[sda1] - [pvolume] - [volume-group] -
Hello,
I (+ my users) would like to run long-running jobs under the screen command,
but currently the job loses access to AFS after the user logs out.
I tried running kinit + aklog within the screen session, but this makes no
difference.
Is there a way to open a screen command and get tokens
Hello Thomas,
My standard formula for running a screen session with long
running credentials is:
* Put yourself into a new pag by running
pagsh
* Make sure you have an independent kerberos credentials
cache:
export KRB5CCNAME=FILE:`mktemp /tmp/krb5cc.screen.XX`
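The remaining steps (kinit, aklog, then screen) can be sketched like this; the principal is an example, and note that GNU mktemp wants at least three X's in its template:

```shell
#!/bin/sh
# Sketch: run 'pagsh' first so the token lives in its own PAG, then call
# this function inside it. The principal below is an example.
start_screen_session() {
    principal="$1"
    # private Kerberos cache (GNU mktemp needs >= 3 X's in the template)
    KRB5CCNAME="FILE:$(mktemp /tmp/krb5cc.screen.XXXXXX)"
    export KRB5CCNAME
    kinit "$principal" &&
    aklog &&
    screen
}
# usage, by hand:
#   pagsh
#   start_screen_session alice@EXAMPLE.COM
```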
Thanks to all who replied, I tested the -rw + replicas and it works :-)...
Vlad
Please access the attached hyperlink for an important electronic communications
disclaimer: http://www.lse.ac.uk/collections/secretariat/legal/disclaimer.htm
Hello,
When I was creating mount points in our cell, I did not ask specifically for a
-rw (read/write) mount point.
Not understanding (at the beginning) how the AFS client works (its preference
for read-only volumes), after adding replicas and releasing the volumes, the
clients could not write
Anyone have seen this?
--- [cut here ] - [please bite here ] -
Kernel BUG at ...fs/src/libafs/MODLOAD-2.6.18-6-amd64-MP/afs_lock.c:133
invalid opcode: [1] SMP
CPU 2
Modules linked in: vmnet parport_pc parport vmmon openafs isofs nls_iso8859_1
cifs nfsd exportfs
On Fri, 25 Apr 2008 08:03:42 -0400
Derrick Brashear [EMAIL PROTECTED] wrote:
On Fri, Apr 25, 2008 at 4:50 AM, Vladimir Konrad [EMAIL PROTECTED]
wrote:
Anyone have seen this?
Yup. Fixed it, too, if you upgrade to something newer.
Is this fixed in 1.4.7 client?
Vlad
Hello again,
Is it OK to have multiple 'vos backupsys' commands (-cmd) in the same bos cron
job?
I would like to run dump on multiple volume sets at the same time, do I
have to do this from system cron?
Vlad
Will there be a window when 2nd download job (while authenticating)
renders 1st download job (already in progress) without access?
Your choice...
keytab -> credential cache -> token
Depending on the value of KRB5CCNAME you can choose different cache
files and depending on
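A sketch of that separation (keytab path, principal, and job names are examples, not from the thread): each job runs in its own PAG with its own cache file, so a second job's kinit/aklog cannot clobber the first job's credentials mid-transfer.

```shell
#!/bin/sh
# Sketch: per-job PAG + per-job credential cache. Keytab and principal
# below are examples, not from the thread.
run_job() {
    name="$1"; shift
    pagsh -c "
        export KRB5CCNAME=FILE:\$(mktemp /tmp/krb5cc.$name.XXXXXX)
        kinit -k -t /etc/afs-jobs.keytab batch/afs@EXAMPLE.COM &&
        aklog &&
        $*
    "
}
# e.g. two simultaneous cron entries:
#   run_job dl1 ./download1.sh
#   run_job dl2 ./download2.sh
```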
On Apr 16, 2008, at 9:34 , Vladimir Konrad wrote:
I am thinking of running several data downloads to AFS (scheduled
from cron) and using the same keytab for all jobs. The jobs can be
scheduled at the same time (on the same server).
Will there be a window when 2nd download job (while
Hello,
I am thinking of running several data downloads to AFS (scheduled from
cron) and using the same keytab for all jobs. The jobs can be
scheduled at the same time (on the same server).
Will there be a window when 2nd download job (while authenticating)
renders 1st download job (already in
Hello,
Is it possible to dump volumes on debian woody and restore them on
debian etch?
Vlad
Hello,
I had a test AFS server set up (which does not exist any more).
But when I run:
vos listaddrs
the test server still shows up (even though it is not running any more).
Is this harmless?
I could not find a method to remove the stale entry.
Vlad
vos listaddrs
The test server still shows up (even though it is not running any more).
Is this harmless?
I could not find a method to remove the stale entry.
There is no direct harm in the address being there but it
will slow down administration tools that query each server
listed in
Hello!
I have a server with two kerberos realms in /etc/krb5.conf running
OpenAFS 1.4.4 (built from source) on Debian Sarge. AFS authenticates
against d1.x.x.x and SAMBA against x.x.x. The default realm is
d1.x.x.x.
When I do kinit and aklog by hand (without any parameters), it works
without a
I am sorry, the problem was my PAM configuration file. It works now with
pam_afs_session.so .
Vlad
Please access the attached hyperlink for an important electronic communications
disclaimer: http://www.lse.ac.uk/collections/secretariat/legal/disclaimer.htm
I'm having trouble accessing my AFS folders after logging in with
pam_openafs_session.so (using aklog).
I'm running Debian Etch with custom kernel 2.6.18
and openafs 1.4.2-4.
When I use kinit, I get the correct kerberos and afs tickets and
tokens :
What happens when you run aklog?
I created an account, let's say user37.
did you create the user with pts adduser?
this would be an afs user - the unix/linux system does not recognise it on
its own (not sure if there is a name service switch component for this).
our set up:
ldap - user details (user name, group membership)
i am in the process of upgrading clients from debian sarge to debian
etch. the servers run debian woody:
what is running where:
servers: openafs 1.2.11, kerberos 5 with krb524 daemon running
sarge client: aklog from openafs-krb5 1.3
etch client: aklog from openafs-krb5 1.4.2-2
under
thank you for your help,
Right. It means that you're running krb524d to return K4 tickets to
applications that needed them, like AFS. As of OpenAFS 1.2.8, the
server supports native K5 tickets, so you shouldn't have to do this
any longer. The aklog that ships with OpenAFS 1.4 is the new
How are Debian, Ubuntu, Slackware, Mandriva in this regard?
i am using debian sarge with openafs 1.4.2 (built from source) and
kernel 2.6.18.2. works well... an update (to the kernel or openafs client) +
rebuild is quite simple.
also, i have created a simple custom script to load the correct
i am trying (and failing) to set a new encryption key for the server.
i did:
kadmin.local: ktadd
asetkey
bos addkey
the documentation says that the password has to be set also in the afs
authentication database.
the bos addkey asks to supply a password (which i did), the question is how do
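For what it's worth, with a Kerberos 5 KDC the usual route is to load the key from a keytab with asetkey rather than typing a password to bos addkey; a dry-run sketch follows (the kvno, keytab path, cell, and server name are all examples):

```shell
#!/bin/sh
# Dry-run sketch: prints the commands; set RUN= to execute for real.
# Beforehand, in kadmin.local:  ktadd -k /tmp/afs.keytab afs/cell.example.com
# (ktadd prints the kvno to pass to asetkey).
RUN="${RUN:-echo}"

load_afs_key() {
    kvno="$1" keytab="$2" princ="$3"
    $RUN asetkey add "$kvno" "$keytab" "$princ"
    $RUN bos restart fileserver.example.com -all -localauth
}

load_afs_key 3 /tmp/afs.keytab afs/cell.example.com@EXAMPLE.COM
```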
can openafs fileserver 1.3.81 use another part of the disk than /vicepa?
i would need to specify the path where the volumes are to be stored...
vlad
thank you for the prompt reply...
To answer your question... No, the fileserver cannot be configured to
look elsewhere (unless you change some code).
What's the problem with mounting your storage for AFS to /vicepXX?
one of our main afs servers went down (i still have the volume dumps on
just a crazy thought, would not a symlink work (/vicepa ->
/some/directory/elsewhere)?
i just realised that i could do a loop device and mount that on /vicepa . that
should work (maybe a bit slower, but this does not matter at the moment)...
i have 2 openafs servers, each of them holding a set of volumes. one of the
servers (the old server) has developed hard-disk problems, but i can still dump
all afs volumes without triggering an error.
i have installed openafs-fileserver and openafs-dbserver on a different machine
(new box), have
forgot to write that both original servers are part of the same afs cell:
current situation:
server1, old server (both afs cell x)
wanted:
server1, new server (both afs cell x)
where new server would hold all that the old server did...
vlad
Are there any plans for completing FreeBSD's openafs client
functionality in the near future?
i found arla working fine (did not do much testing though); i built it from
source.
http://www.stacken.kth.se/project/arla/
vlad
is there any reason that a dump to a pipe would be less reliable than a
dump to a file?
apologies for the wasted bandwidth and time; there were other, unrelated
problems with the box that i was not aware of. it works now.
also, does a volume read-only snapshot have to be created on the
is there any reason that a dump to a pipe would be less reliable than a
dump to a file?
we have openafs 1.2.11 running on debian woody and i am investigating a
strange crash which happened during a vos dump command. i made a change to the
dump command to go to a pipe instead of a file (i.e.
it's not that much work to recreate from scratch, but does anyone have
a simple set of vos dump based backup scripts they can share?
we have a trivial solution, this script (run from cron):
#!/bin/bash
USERS="bob john"
for i in $USERS; do
    vos dump "$i.backup" -file "/backup/$i.dump"   # example destination path
done
London School of Economics does...
it is on a small scale, though (~ 30 users, 2 servers).
vlad
hello,
we have an openafs server (configured before i turned up) with two
ethernet network interfaces (one for normal network activity, one for
backup access). this is a production server.
the operating system is Debian Woody, openafs 1.2.11...
the fileserver currently tries to use both network
Now this is rather unusual but possible ;-)
Since we don't know what the error is, it's kinda difficult to guess.
# vos changeaddr -remove the-ip
Could not remove server the-ip from the VLDB
VLDB: volume Id exists in the vldb
If you don't know where to put VosRestrict, call the fileserver
Create a file NetInfo and put all IPs you want to use inside (one per line).
I had to put the file in /etc/openafs/server-local but I think
the woody-version of openafs expects it in /var/lib/openafs .
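NetInfo is just a plain list of addresses, one per line; a minimal sketch (the directory and address are examples; check which directory your package's server processes actually read):

```shell
#!/bin/sh
# Sketch: write a NetInfo listing the interfaces the server may use.
write_netinfo() {
    dir="$1"; shift
    mkdir -p "$dir"
    printf '%s\n' "$@" > "$dir/NetInfo"
}
# e.g.: write_netinfo /etc/openafs/server-local 192.0.2.10
#       (then restart the server processes)
```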
tried both locations, restarted the server, no change :-(...
vlad
the NetInfo trick worked, i just did not wait long enough for it to
propagate...
thank you all for the help...
vlad
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
What is the current status on OpenAFS and Linux kernel syscall table?
Does the table need to be exported?
Also where can I find what Linux kernels work with what OpenAFS kernel
modules?
I found that OpenAFS 1.2.13 does not work with the 2.4.24 kernel (it builds
and loads, but afsd fails). It works
Thank you,
1.2.13 needs to be able to find the syscall table. Except for certain
redhat kernels where the hacks that are in place can find the syscall table
even when it is not exported, it needs to be exported.
How do I export the syscall table in a 2.4.x vanilla (kernel.org) kernel? I
am not
oops:
now the afsd loads. In the process I upgraded _to_ the kernel 2.4.30.
Vlad
Hello!
After upgrade from OpenAFS 1.2.? to 1.3.81, the afsd refuses to start
(the other components of OpenAFS start ok). If I try running afsd from
the command line, I get:
afsd: Error -1 in basic initialization.
Adding cell 'linux.lse.ac.uk': error -1
afsd: No check server daemon in client.
this strongly implies that the kernel module is not loaded. does lsmod
actually say that a module named 'openafs' is loaded?
Yes, the module loads and is loaded (checked with lsmod) before starting afsd
from command line...
is openafs instead of openafs.mp
the 'openafs.mp.o' concept was
Can you start afsd with -verbose -debug?
It'll tell you what syscall returned what error code.
The log file is attached... It is edited as the full log is 500K...
Vlad
afsd: My home cell is 'linux.lse.ac.uk'
ParseCacheInfoFile: Opening cache info file '/etc/openafs/cacheinfo'...
you upgraded afsd and the module, yes?
Yes, dpkg -l | grep -i openafs:
atalab1:~# dpkg -l | grep -i openafs
ii openafs-client 1.3.81-3 The AFS distributed
ii openafs-dbserv 1.3.81-3 The AFS distributed
ii openafs-filese 1.3.81-3 The AFS distributed
ii openafs-krb5 1.3-10