If you only need to export publicly-readable stuff, and possibly load is not a
consideration...you might consider making a webserver an AFS client, so that
people can access stuff via the webserver. You would need to do some tuning and
scaling for serious use, but it might work for you. It's been
First thing to check: is the AFS cache in its own, dedicated filesystem
partition?
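one way to sketch that check: parse the client's cacheinfo file and see which
device backs the cache directory. the /usr/vice/etc/cacheinfo path and the
sample line below are typical linux defaults, not taken from this thread.

```shell
# Parse a cacheinfo line (format mountpoint:cachedir:size-in-KB) and report
# the cache location. The sample line is illustrative; on a real client you
# would read it with: line=$(cat /usr/vice/etc/cacheinfo)
line="/afs:/usr/vice/cache:2000000"
cachedir=$(echo "$line" | cut -d: -f2)
cachekb=$(echo "$line" | cut -d: -f3)
echo "cache dir: $cachedir ($cachekb KB)"
# On the client itself, 'df -P "$cachedir"' then shows the backing device;
# if it is the same device as /, the cache is not on a dedicated partition.
```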
Second thing to check: When you say you are downloading the same file multiple
times, is it exactly the same file, or is there anything happening to the file
so that it changes (ie, some process rewriting the fi
one simple way would be:
- add a new fileserver to the cell
- 'vos move' all the volumes off the old fileserver to the new fileserver
then you can either leave the old one running but not use it, or remove
the old fileserver service from the cell.
basically, you don't move the files, you mo
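a dry-run sketch of those steps: server names, the partition letter, and the
volume names below are placeholders, and in practice you would generate the
volume list with 'vos listvol' against the old server.

```shell
# Print the 'vos move' commands that would drain the old fileserver.
# Remove the leading 'echo' to execute them for real.
OLD=oldfs.example.com
NEW=newfs.example.com
for vol in home.alice home.bob proj.www; do
  echo vos move -id "$vol" -fromserver "$OLD" -frompartition a \
                -toserver "$NEW" -topartition a
done
```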
Nice script, Dan! I was going to suggest running tcpdump to see if one client
is accounting for most of the traffic. Some misconfiguration or a hardware
problem out at the client end can definitely cause a headache for a server. (I
dimly recall finding some client system that appeared to have t
what about the cache? how big is it, and is it on its own disk partition?
anne
From: Timothy Balcer
To: Andrew Deason
Cc: openafs-info@openafs.org
Sent: Monday, June 24, 2013 6:58 PM
Subject: Re: [OpenAFS] Re: Client Cache Question
Thanks in advance for
To re-phrase your steps...in the most general way:
1) You make your /vicepa partition on the server you are setting up, just like
the way you would make any unix partition on the system.
2) You use the 'vos create' command on that same server in order to create
the AFS volume with the name "
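those two steps might look like the commands below. the server name, volume
name, quota, and mount path are all placeholders, and the leading 'echo'
keeps it a dry run.

```shell
# Step 2 from above: create the volume on the new server's /vicepa,
# then mount it somewhere in the /afs tree.
SERVER=afs01.example.com
echo vos create -server "$SERVER" -partition /vicepa -name proj.demo -maxquota 500000
echo fs mkmount /afs/example.com/proj/demo proj.demo
```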
if you're aiming for 100% "guaranteed" availability for those RW
volumes, a few other considerations:
- make the volumes as small as practical, to keep the 'vos'
operations short
- make your afs database servers equally robust
- make sure your network people provide the same level of ro
errr...sorry about the inevitable typo in my message i meant to say
"...to keep the user from getting confused during writes..."
not "...to keep the user confused during writes..."
i think that was the only one...8-)
anne
anne salemme wrote:
general advice:
- make sure the network connectivity between your three AFS "database
servers" is always up...they depend on the network to communicate with
each other, and if they are always up and always reliable, it will
enhance the perceived performance of afs
- if most of the users do m
1) got backups? verify that your backups are correct and complete
before you start your updates, and that you know how to do a restore
(i'm not kidding!)
2) how much outage is acceptable? how many servers do you have? if
only one, you will have an outage, but if more than one, there are
what's wrong on the server(s), you might find that helpful.
just a suggestion, based on actual experience...
anne
Andreas Hirczy wrote:
anne salemme <[EMAIL PROTECTED]> writes:
one thing that i haven't seen in all this mail...are you running the
client commands on one of the afs servers? in other words, do your
servers have afs clients installed on them?
when you do certain things that could cause the client to hang, you
don't want the client that's hanging be the on
if the goal is to make a copy of a quiescent RW volume, you could do a
'vos dump' of the .backup volume, piped to a 'vos restore', as in
'vos dump volume.whatever.backup | vos restore volume.newname',
using appropriate arguments.
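spelled out with full flags (server, partition, and volume names are
placeholders): 'vos dump' writes to stdout and 'vos restore' reads stdin
when no -file is given, so the two pipe together. the 'echo' keeps this a
dry run.

```shell
# Clone a quiescent RW volume by dumping its .backup and restoring it
# under a new name. Drop the 'echo' to run for real.
echo "vos dump -id volume.whatever.backup |
      vos restore -server afs01.example.com -partition /vicepa -name volume.newname"
```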
if the goal is to make a volume unavailable for a short time,
this is great practical advice. another useful thing to look at is cron
jobs running on the afs servers, or cron jobs that affect the afs servers.
you can find things like really inefficient creation of backup volumes
with respect to the actual backups you run, really inefficient volume
replicati
and to add to what derrick says, 'vos release' when there are very large
volumes involved and RO sites listed at geographically remote sites (ie,
many thousands of miles away), or when there are many users involved
(ie, all users of the cell) must be done in a purposeful, coordinated
way. and j
Steven Jenkins wrote:
I hadn't really thought about people intentionally keeping their RWs &
ROs out of sync w/each other. I'm not clear why someone would want to
do that -- could you elaborate?
Steven
some examples:
- putting in a new version of some software, and trying it
In fact, you should be using 'vos delentry' and 'vos zap'
VERY rarely as well.
Used properly, vos addsite, vos release, and vos remove do
everything you will want 95% of the time... and they do the
right thing with the VLDB. Instead, you are using "emergency"
commands on a regular basis which
i have been following this thread, and i haven't seen any description
of what you are actually trying to accomplish. are you trying to make
it so that only the users at "headquarters" can write to certain
files, and that the users at the "district sites" will only be able to
read the files,
where are the districts physically located relative to the headquarters,
and where do you plan to locate the afs servers for each "domain"?
typically, for cells that span big geographical distances, you want to
be sure that the cell database servers always stay up and in contact
with each othe
Ron Croonenberg wrote:
oh. with -localauth you don't need root in UserList. you have some
other issue. without -localauth you need tokens for user root.
Yes I know, that is why I wanted to use -localauth. That way I can run
the script as a crontab without having to mess with tokens
why don
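for the cron use described above, a sketch of what root's nightly job on a
server machine might run: the vos path, the schedule (e.g. '0 2 * * *' in
root's crontab), and the volume-name prefix are assumptions. -localauth uses
the server key on disk, so no tokens are involved; the 'echo' makes this a
dry run.

```shell
# Refresh the .backup volumes for everything matching a prefix, with no
# tokens needed. Drop the 'echo' to run for real (as root on a server).
echo /usr/sbin/vos backupsys -prefix home. -localauth
```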
Adam Megacz wrote:
So, is there any way to make a backup volume less accessible than its
rw? If not, then it means that reducing access to any backed-up file
always has to wait until the next backup...
if you're in a big hurry, you can do a 'vos backup' manually, no need to
wait for the n
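the manual step mentioned above, as a dry run (the volume name is a
placeholder):

```shell
# Refresh the .backup snapshot of one volume immediately, rather than
# waiting for the nightly job. Drop the 'echo' to run for real.
echo vos backup -id home.someuser
```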
Jonathan Dobbie wrote:
Should I just automate vos release every minute and then do a vos
convertROtoRW?
In most cells, a "vos release every minute" would be a very bad idea. If
you bog down your AFS database servers, you effectively bog down your
whole cell. Not generally desirable.
a couple of things to think about before plowing ahead...first, don't
forget about the afs cache on your webservers, and think about how you
will update files and coordinate the vos releasing of the volumes.
if it is critical for the webservers to all see exactly the same data at
exactly the s
in the case of the mit cells, we wanted to have multiple cells under one
kerberos realm...it was easier to manage a single realm, rather than a
realm for each cell.
cheers,
anne
Derrick J Brashear wrote:
You mean like say a krb5 realm named ATHENA.MIT.EDU supporting a cell
named sipb.mit.edu
we got a note from symantec as well...we use afs and netbackup for
general backups here (the northstar afs cell is just a small part of
the stuff that gets backed up) and we have our homegrown scripts to
get the data ready for netbackup. if we hear anything interesting from
symantec, i'll
check with your local systems or afs administrator to see if/how you
can do that. the way afs "login" is incorporated into the user account
management is done on a site-specific basis. as far as i know, that is.
anne
Quoting Sanjay Dharmavaram <[EMAIL PROTECTED]>:
Hi,
What's the command
have you got logs showing that the scheduled release finishes before
the next one starts? i recall working somewhere where huge volumes were
being released automatically, but they weren't checking for successful
completion before starting the next vos release. you might check that,
after you get
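one way to implement that check, as a sketch: refuse to start a new release
while the previous one is still running, using a lock directory. the volume
name and lock path are placeholders, and the 'echo' keeps the release itself
a dry run.

```shell
# Guard a scheduled 'vos release' so overlapping runs are skipped
# instead of piling up.
VOL=web.content
LOCK="${TMPDIR:-/tmp}/vos-release-$VOL.lock"
if ! mkdir "$LOCK" 2>/dev/null; then
  echo "previous release of $VOL still running; skipping"
  exit 0
fi
trap 'rmdir "$LOCK"' EXIT
echo vos release -id "$VOL"   # drop 'echo' to release for real
```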
good luck with your presentation, and keep us posted! i like the
"here's why it makes sense financially" argument, followed by a
demonstration of nfs vulnerability. just be prepared for, "oh, that's
so obscure, it'd never happen here and, besides, we have
firewalls!"...8-)
anne
Quoting
an even more compelling argument for AFS over NFS or other large
filesystems is the way you can essentially add unlimited space by
simply adding servers. the steep learning curve is bringing in afs and
getting it into production initially. after that, it's easy to add
space, move volumes ar
what version of gnome? sorry if it was in your mail and i missed it.
for purposes of debugging, you can set the acl on the .Trash directory
to system:anyuser read and see if that makes a difference. also, you
can turn debugging up in /etc/syslog.conf and see if anything gets
logged when yo
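the acl experiment above, as dry-run commands (the home-directory path is a
placeholder; the group is system:anyuser):

```shell
# Open .Trash to anyuser read for debugging, then inspect the ACL.
# Drop the 'echo's to run for real.
echo fs setacl -dir "$HOME/.Trash" -acl system:anyuser read
echo fs listacl -path "$HOME/.Trash"
```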
Quoting Mike V <[EMAIL PROTECTED]>:
You're right, I get 'permission denied'
I have a token and permissions are okay as far as I know. These
same files can be accessed from a Solaris system.
cd to the directory on a system where it works and on the one where
it's "weird".
look at the
the most likely thing would be, ummm, operator error...
- remove link and do vos release
- go ooops, webserver needs that link, put back the link
- forget to do vos release
it can happen...
anne
Quoting Andrew Bacchi <[EMAIL PROTECTED]>:
We were getting a time out from our web servers when
It sounds like you are setting up AFS for the first time. So, one
suggestion is: install the server (and not the client) on one system,
and install only the client on another system.
Then, you'll get a better sense of what the server does, and what the
client does.
There's a pretty steep
look at the access permissions on the directory where you can write
("fs la") and you'll probably be able to figure it out.
anne
Quoting Edward Quick <[EMAIL PROTECTED]>:
Hi,
I have 2 boxes with AFS clients installed. On one box, I can log on as
user jpyxcom1 and write to a directory on A
depending on the nature of the batch job, it may be simpler to make it
not use afs at all, working with the local resources only. depends on
the site, the user needs and awareness of how things work, and how
much effort the admins want to put into making long jobs run with afs
authenticatio
how many servers in the cell? which services are still working? what
services were running on the one that's down?
anne
Quoting ph rhole oper <[EMAIL PROTECTED]>:
The hd holding the root directory of an afs server has crashed, thus the
/var/openafs/ files are lost with it,
But the /vicepa pa
by what method did you determine that the volume and the fileserver are
"having problems"? did you look directly on the fileserver, or from
another client system? if you're looking from the same client that's
running the 'rm' or whatever command, you're getting a biased look.
anne
Quoting