Mon 2008-01-21 at 08:51 +0100, Harald Barth wrote:
> > The reason I'm asking is that we may want to make use of the ability
> > for people to create and maintain their own groups, and that would
> > require something more user-friendly than the command-line stuff :-)
>
> I wonder wh
>> There are a lot of stupid gui thingies out there that want to stat
>> every dir they see. And color-ls and such things. Shrug.
> But it still was not a problem when we tried it. If people like to do
> ls in /home, well, I consider that their problem, as long as references
> to files below
Anders Magnusson wrote:
Hm, ok.
Is kaserver used if the servers are running on Windows? And would it be
difficult to fix the account manager to work against any server?
The Account Manager is essentially a graphical 'kadmin' for kaserver.
If you want a graphical 'kadmin' for Heimdal or MI
Jeffrey Altman wrote:
I tested by adding 64k mountpoints (volumes) in one directory, and
accessing directories below did not become noticeably slower. The only
peculiarity I noted was that Windows explorer only showed the first
1000 entries, but I don't know if it is a bug in explorer or
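A test like the one described above can be set up by creating one mountpoint per volume in a single directory. The sketch below only generates the `fs mkmount` command lines; the cell path and volume names are illustrative assumptions, not taken from the thread.

```python
# Hypothetical sketch: emit one `fs mkmount` command per volume to
# populate a 64k-mountpoint test directory. Paths/volume names are
# illustrative, not from the original mail.
def mkmount_commands(parent="/afs/example.org/test", count=65536):
    return [f"fs mkmount {parent}/mp{i:05d} vol.{i:05d}"
            for i in range(count)]

cmds = mkmount_commands(count=3)
print(cmds[0])  # fs mkmount /afs/example.org/test/mp00000 vol.00000
```

In practice each generated line would be run against a live cell with appropriate permissions; generating them first makes the test reproducible.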
Harald Barth wrote:
Ok, thanks. A related question to this is how database replication
works; the databases became quite large: 80MB prdb and 40MB vldb. Will
it only copy changed entries, or may the whole database be copied?
Oh, I'm a bit astonished...
-rw--- 1 root bin 4557888
On Fri, Jan 18, 2008 at 05:14:29PM +0100, Anders Magnusson wrote:
> Ok, thanks. A related question to this is how database replication
> works; the databases became quite large: 80MB prdb and 40MB vldb. Will
> it only copy changed entries, or may the whole database be copied?
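The worst case behind the question above is easy to bound with arithmetic: if replication ships the whole database rather than changed entries, the cost is simply total size over link speed. The link speed below is an assumed figure, not from the thread.

```python
# Back-of-the-envelope: time to copy the full databases mentioned above
# (80 MB prdb + 40 MB vldb) if replication transfers whole files.
# The 10 MB/s link speed is an assumption for illustration.
def full_copy_seconds(sizes_mb, link_mb_per_s=10.0):
    return sum(sizes_mb) / link_mb_per_s

print(full_copy_seconds([80, 40]))  # 12.0 seconds at 10 MB/s
```

Even a full copy of 120 MB is cheap on a LAN; the concern would mainly be slow WAN links between database servers.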
Here at UMic
Anders Magnusson wrote:
- What are the actual limitations on users/groups/volumes/sizes? I'm
interested in both practical and theoretical. I have tested with 200k
volumes/partition, which seems to work fine with the exception that
zapping volumes takes 12 seconds each.
The max partitio
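The 12-seconds-per-zap figure above adds up quickly at the tested scale; a quick calculation (assuming volumes are zapped serially, one at a time) shows why it matters:

```python
# Rough total cost of zapping at 12 s/volume for the tested scale of
# 200k volumes per partition, assuming serial zapping.
def total_zap_days(volumes=200_000, seconds_each=12):
    return volumes * seconds_each / 86400  # 86400 seconds per day

print(f"{total_zap_days():.1f}")  # ~27.8 days
```

So while 200k volumes per partition may work for normal access, bulk volume removal at that rate would take on the order of a month unless parallelized.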
> I used plain UFS and the namei version of the server. Use of this
> machine was only for testing purposes; we'll probably only use Redhat
> machines if we start using it in a production environment.
We use CentOS with xfs for the /vicepX, everything else ext2/3.
> Ok, thanks. A related q
Harald Barth wrote:
my name is Anders Magnusson and I work at Luleå University in Sweden.
Hi Ragge!
Hello Harald!
Test environment has been Heimdal 1.0 + OpenAFS 1.4.5; servers Redhat 4
(Dell 1950) and Solaris 10 (Sun V880). Clients were Redhat, and for
Windows (using 1.5.28).
> my name is Anders Magnusson and I work at Luleå University in Sweden.
Hi Ragge!
> Test environment has been Heimdal 1.0 + OpenAFS 1.4.5; servers Redhat 4
> (Dell 1950) and Solaris 10 (Sun V880). Clients were Redhat, and for
> Windows (using 1.5.28).
On Sol10, which file system do you have fo
Hi,
my name is Anders Magnusson and I work at Luleå University in Sweden.
We are currently evaluating different ways of handling data storage
here, and one of the products we have looked at is OpenAFS. And the
results are very promising.
I have some questions though. I assume most of them c
Thanks,
Although, when I try to append the initial /sunday dump with 'backup dump
/sunday/monday 0 -append', the butc process is looking for a tape
labeled 'volset.monday.1'. The tape in the drive has the label
volset.sunday.1. Any idea why the -append flag isn't working correctly?
Thanks
Mike
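The mismatch reported above follows a visible naming pattern: butc derives the tape label from the volume set and the dump-level name, so appending at the /sunday/monday level makes it ask for a differently named tape than the one the initial /sunday dump wrote. A small sketch of that pattern (the label format is inferred from the error text above, not from documentation):

```python
# Sketch of the tape-label pattern visible in the error above:
# <volumeset>.<dump-level-name>.<sequence>. This format is inferred
# from the reported labels, not confirmed against butc's source.
def expected_label(volset, dump_name, seq=1):
    return f"{volset}.{dump_name}.{seq}"

print(expected_label("volset", "monday"))  # label butc requested
print(expected_label("volset", "sunday"))  # label actually on the tape
```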
Mike Aldrich <[EMAIL PROTECTED]> writes:
> This is where I got an error. I cannot dump a volume if it hasn't been cloned
> since the last dump?
No; since you are (or should be) dumping the backup volumes, it does not
make sense to dump them a second time if they have not changed.
> When are volumes
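The rule stated above — skip a .backup volume whose data has not changed since the previous dump — can be expressed as a simple filter on last-update times. Volume names and timestamps here are illustrative, not from the thread.

```python
# Hypothetical sketch of the skip-unchanged rule described above:
# only dump a .backup volume whose last update is newer than the
# previous dump. Names and timestamps are made up for illustration.
def volumes_to_dump(volumes, last_dump_time):
    # volumes: iterable of (name, last_update_time) pairs
    return [name for name, updated in volumes if updated > last_dump_time]

vols = [("home.a.backup", 100), ("home.b.backup", 300)]
print(volumes_to_dump(vols, last_dump_time=200))  # ['home.b.backup']
```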
Hi,
I created my volset full of .backup volumes on my primary AFS server (RH 7.1
OpenAFS 1.2.3). My initial dump is labeled /sunday, with increments to
/saturday. I have about 8G of data housed in AFS volumes.
My questions are:
I ran my first dump (/sunday) today. It ran fine (except for the can