Re: [OpenAFS] Status on 2.6 Kernel?

2004-10-13 Thread David S.
 
 I applied the sys_call_table.patch and the syscalls.h.patch from
 'http://www.linux.ncsu.edu/projects/openafs-rpms/' to 'openafs-1.3.71'.
 It builds and installs on a 2.6.8.1 kernel, but when I try the
 "fs mkmount /afs/<cell name> root.cell" command from the Unix quick-start
 guide, I get a "No space left on device" error message from 'fs'.
 The same sources seem to work fine on a 2.6.7 kernel.

Err, forget this.  I've figured out what's up, or at least I think I
have.  The '/vicepa' partition I was using was larger than 2 TB.  With one
smaller than that, things seem to work.
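For anyone hitting the same error, a quick check along these lines (the path is
only an example) is worth doing before creating volumes, since per the
observation above a vice partition of 2 TB or more triggers it:

    df -h /vicepa    # if this reports 2.0T or larger, consider splitting it (e.g. /vicepa and /vicepb)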

David S.


 


Re: [OpenAFS] fileserver crashes

2004-10-13 Thread Marcus Watts
I don't know if our fileserver crashes are related to what Matthew Cocker
and others are seeing, but we are indeed seeing problems here at umich.

Background information:
The machines in question are dual pentium 4 machines with
hyperthreading enabled running linux 2.4.26 (SMP) and glibc 2.3.2.  The
actual file storage is on cheap raid devices that use multiple IDE
drives but talk SCSI to the rest of the world.  These raids have their
own set of problems, so I would not count them as super-reliable file
storage.  We're running the pthreads version of the fileserver.

I think we're seeing at least 3 distinct problems with openafs 1.2.11.

The first may actually be networking.  We get these with varying
frequency in VolserLog:
Sun Oct 10 22:05:09 2004 1 Volser: DumpVolume: Rx call failed during dump, error -1
Tue Oct 12 11:38:07 2004 1 Volser: DumpVolume: Rx call failed during dump, error -1
Tue Oct 12 13:39:23 2004 1 Volser: DumpVolume: Rx call failed during dump, error -1
Tue Oct 12 15:06:46 2004 1 Volser: DumpVolume: Rx call failed during dump, error -1
Helpful message, eh?  Can't tell what volume was being dumped,
or where it was going.

Most of these probably occur while backing things up to TSM.
If I understand it right, we run buta there and there are
apparently issues with the amount of CPU it eats.  I've
wondered what % of our backups are failing, but I haven't
heard of any negative consequences here, so far.

We have at various times gotten problems with read-only replicas that
are oddly truncated.  This might or might not be the consequence of the
previous problem.

Another probably completely different problem we have concerns volumes
with really small volume IDs.  Modern AFS software creates large 10
digit volume IDs.  But we have volumes that were created long before
AFS 3.1, with small 3 digit volume IDs.  Those volumes are rapidly
disappearing as one by one, during various restarts, the fileserver and
salvager proceed to discard all the data, then the volume header.

The latest problem is of course signal 6.  That's generally from an
assertion error, which probably writes a nice message out to stderr
(probably out to /dev/null), and might leave an equally useful core
dump, except linux defaults to no core dumps.  Oops.  We've now managed
to harvest one core dump, so here's interesting data from it:

[ in the below, I've obscured the following for privacy reasons:
@IPADDR@ is a hexadecimal umich IP address belonging to
some random machine about which we know little (though
a transient portable or lab machine running windows or
MacOS is quite possible.)
@VICEID@ is a vice ID of some random and likely innocent person
who might logically be using @IPADDR@.
]

(gdb) where
#0  0x400a6281 in __kill () at __kill:-1
#1  0x40021811 in pthread_kill (thread=17978, signo=0) at signals.c:65
#2  0x40021b1b in __pthread_raise (sig=1073904656) at signals.c:187
#3  0x400a5ec4 in *__GI_raise (sig=17978)
at ../linuxthreads/sysdeps/unix/sysv/linux/raise.c:34
#4  0x400a75ed in *__GI_abort () at ../sysdeps/generic/abort.c:117
#5  0x08096603 in osi_Panic (msg=0x0, a1=0, a2=0, a3=0) at ../rx/rx_user.c:199
#6  0x08096637 in osi_AssertFailU (expr=0x0, file=0x0, line=0) at ../rx/rx_user.c:208
#7  0x080a47fc in rx_SetSpecific (conn=0x4002a510, key=1, ptr=0x0) at ../rx/rx.c:6632
#8  0x0805defc in h_TossStuff_r ([EMAIL PROTECTED]@) at ../viced/host.c:765
#9  0x0805cb6d in h_Release_r ([EMAIL PROTECTED]@) at ../viced/host.c:280
#10 0x0805ca71 in h_Release (host=0x0) at ../viced/host.c:258
#11 0x0805e14e in h_Enumerate (proc=0x8060fa0 CheckHost, param=0x0)
at ../viced/host.c:913
#12 0x080613cd in h_CheckHosts () at ../viced/host.c:2080
#13 0x0804b6c7 in HostCheckLWP () at ../viced/viced.c:731
#14 0x4001ed03 in pthread_start_thread (arg=0xb31ffbe0) at manager.c:300

In case that's not completely obvious,
0x80a47e8 <rx_SetSpecific+312>: movl   $0x80c8a00,0x4(%esp,1)
0x80a47f0 <rx_SetSpecific+320>: movl   $0x80c7480,(%esp,1)
0x80a47f7 <rx_SetSpecific+327>: call   0x8096610 <osi_AssertFailU>
(gdb) x/8x $ebp
0xb31ffa14: 0xb31ffa44  0x080a47fc  0x080c7480  0x080c8a00
0xb31ffa24: 0x19e5  0x400f0ce6  0x40a00010  0x08f3d84c
(gdb) x/i 0x080a47fc
0x80a47fc <rx_SetSpecific+332>: jmp    0x80a46db <rx_SetSpecific+43>
(gdb) x/s 0x080c7480
0x80c7480 <rcsid+3972>:  "pthread_mutex_lock(&conn->conn_data_lock) == 0"
(gdb) x/s 0x080c8a00
0x80c8a00 <rcsid+80>:   "../rx/rx.c"
(gdb) print/d 0x19e5
$8 = 6629

Basically, it's dying at line 6629 in rx/rx.c, which reads thusly:
MUTEX_ENTER(&conn->conn_data_lock);
The return code from pthread_mutex_lock would be useful, but
the assertion logic in openafs doesn't save that.
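Since Linux defaults to no core dumps, harvesting cores like this one is easier
if they are enabled before the next crash; a minimal sketch (the init script
that starts bosserver is site-specific):

    ulimit -c unlimited                         # in the environment that starts bosserver/fileserver
    echo 1 > /proc/sys/kernel/core_uses_pid     # keep successive cores from overwriting each other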

Here's that data structure:
(gdb) print *conn
$7 = {next = 0x0, peer = 0x0, conn_call_lock = {__m_reserved = -1289749536, 
__m_count = -1291841536, __m_owner = 0x0, __m_kind = 0, __m_lock = {__status = 0, 
  

[OpenAFS] Question about RO/RW Volumes...

2004-10-13 Thread Lars Schimmer
Hi!
Just a question about RO/RW copies.
We have set up 3 volumes for every user (home, work, ftp) and a few others
with CVS, svn, data, ...
For easy backup we've made RO copies of nearly all volumes.
But now, with 2 database servers and all the RO copies, we run into a
problem we hadn't thought about before.
With the 2nd database server in the CellServDB, most machines use the RO
copies of the volumes. With some volumes (archive, cdonline) that's OK
for working (though that data isn't exactly small to hold an RO copy of),
but with CVS, svn or home dirs, an RO copy mount isn't really nice.
How can we be sure to have RW access to these volumes?
It would be nice if OpenAFS would load-balance reads across all the RW &
RO volumes, but write only to the RW volume and then automatically release
this volume...
The only dirty solution I found is to mount the root.cell volume RW as
/afs/.url.to.domain to have guaranteed RW access to the volumes.
Cya
Lars
- --
- -
Technische Universität Braunschweig, Institut für Computergraphik
Tel.: +49 531 391-2109E-Mail: [EMAIL PROTECTED]
PGP-Key-ID: 0xB87A0E03


Re: [OpenAFS] Question about RO/RW Volumes...

2004-10-13 Thread Frank Burkhardt
Hi,

On Wed, Oct 13, 2004 at 10:41:35AM +0200, Lars Schimmer wrote:
 
 Hi!
 
 Just a question about RO/RW copies.
 We have set up 3 volumes for every user (home, work, ftp) and few others
 with CVS, svn, data,...
 For easy backup we've made RO copies of nearly all volumes.
 But now, with 2 database servern and all the RO copies, we run into a
 problem not thought about before.
 With the 2nd database server in the cellservdb, most machine use the RO
 copies of the volumes. With some volumes (archive, cdonline) that's OK
 for working (but hey, these data isn't really small to hold a RO copy),
 but with CVS, svn or home dirs, a RO copy-mount isn't really nice.
 How can we be sure, to have RW Access to these volumes?
 It would be nice, if OpenAFS would loadbalance the read to all the RW &
 RO volumes, but write only to the RW volume and than automaticly release
 this volume...
 The only dirty solution I found is to mount the root.cell volume RW as
 /afs/.url.to.domain to have guranteed RW access to the volumes.

Are you *really* sure the mountpoints to these volumes (home, work, ftp)
are correct?

Try this:

 $ fs lsm /afs/path/to/home
 '/afs/path/to/home' is a mount point for volume '%home'

The '%' is important.

Another possible mistake is to have a difference between
RO and RW of the volume containing '/afs/path/to' (=the
volume containing the mountpoint to 'home'). Maybe you
changed the mountpoint for 'home' but you didn't release
the containing volume.
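A rough sequence for fixing both cases at once (the path and volume names are
only examples):

    fs rmmount /afs/path/to/home
    fs mkmount /afs/path/to/home home -rw          # -rw creates a '%' (read/write) mount point
    vos release <volume containing /afs/path/to>   # so clients using the RO parent see the change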

HTH.

Regards,

Frank


Re: [OpenAFS] Question about RO/RW Volumes...

2004-10-13 Thread Hartmut Reuter
Use the -rw option for fs mkm to force use of the RW volume.
We do the same: all user volumes are mounted with -rw and have 2
RO copies, one in the same partition to make the reclone fast and
another one on another fileserver as a backup in case the 1st
partition gets lost.
We also have another tree where the RO volumes are mounted to
allow users to get back their files from yesterday.
The automatic release of volumes that have changed is done in
a cron job on each fileserver machine during the night.
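A minimal sketch of such a nightly job (this is not Hartmut's actual script;
volume selection and error handling are up to the site):

    # release every volume that has a VLDB entry on this fileserver
    # (a real job would skip volumes that have not changed)
    for vol in $(vos listvldb -server $(hostname) -quiet | awk '/^[A-Za-z0-9]/ {print $1}'); do
        vos release "$vol" -localauth
    done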
Hartmut
Lars Schimmer wrote:
Hi!
Just a question about RO/RW copies.
We have set up 3 volumes for every user (home, work, ftp) and few others
with CVS, svn, data,...
For easy backup we've made RO copies of nearly all volumes.
But now, with 2 database servern and all the RO copies, we run into a
problem not thought about before.
With the 2nd database server in the cellservdb, most machine use the RO
copies of the volumes. With some volumes (archive, cdonline) that's OK
for working (but hey, these data isn't really small to hold a RO copy),
but with CVS, svn or home dirs, a RO copy-mount isn't really nice.
How can we be sure, to have RW Access to these volumes?
It would be nice, if OpenAFS would loadbalance the read to all the RW &
RO volumes, but write only to the RW volume and than automaticly release
this volume...
The only dirty solution I found is to mount the root.cell volume RW as
/afs/.url.to.domain to have guranteed RW access to the volumes.
Cya
Lars
- --
- -
Technische Universität Braunschweig, Institut für Computergraphik
Tel.: +49 531 391-2109E-Mail: [EMAIL PROTECTED]
PGP-Key-ID: 0xB87A0E03

--
-
Hartmut Reuter   e-mail [EMAIL PROTECTED]
   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)   fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-


Re: [OpenAFS] fileserver 1.2.11 problem

2004-10-13 Thread Thomas Mueller
On Tue, 12 Oct 2004, Derrick J Brashear wrote:

 So if the fileserver was still responsive to any Rx traffic, this backtrace
 isn't particularly meaningful; In any case it's unlikely to hang in sendmsg().

Sorry, I think you got me wrong.
The fileserver is not responsive to any Rx traffic (rxdebug hangs) and
the fileserver is not responsive to signals (XCPU, TSTP, TERM, QUIT).
Only the signals ABRT and KILL will stop the fileserver.

I agree that it is unlikely to hang in sendmsg(). I suppose there is a
loop in a function down the stack.

I have a second core dump of a fileserver, which has the same stack backtrace:

(gdb) thread apply all where

Thread 1 (process 18292):
#0  0x420e8412 in sendmsg () from /lib/i686/libc.so.6
#1  0x0808a1a3 in rxi_Sendmsg ()
#2  0x08092042 in osi_NetSend ()
#3  0x08092d8e in rxi_SendPacket ()
#4  0x0808e157 in rxi_SendList ()
#5  0x0808e368 in rxi_SendXmitList ()
#6  0x0808e6ea in rxi_Start ()
#7  0x080964f8 in rxevent_RaiseEvents ()
#8  0x08089cd8 in rxi_ListenerProc ()
#9  0x0808a014 in rx_ServerProc ()
#10 0x080982ac in Create_Process_Part2 ()
#11 0x080988e6 in savecontext ()
(gdb)

Before I killed the fileserver I tcpdump'ed the traffic on port 7000.
There are thousand of such packets:

16:02:57.198479 134.109.132.17.afs > 81.66.109.65.20042:  [udp sum ok] rx data cid 
04b0954c call# 52032 seq 1 ser 110202645 secindex 0 serviceid 1 
client-init,req-ack,last-pckt (32) (DF) (ttl 64, id 0, len 60)
16:02:57.198489 134.109.132.17.afs > 81.66.109.65.20042:  [udp sum ok] rx data cid 
04b0954c call# 52032 seq 1 ser 110202646 secindex 0 serviceid 1 
client-init,req-ack,last-pckt (32) (DF) (ttl 64, id 0, len 60)
16:02:57.198497 134.109.132.17.afs > 81.66.109.65.20042:  [udp sum ok] rx data cid 
04b0954c call# 52032 seq 1 ser 110202647 secindex 0 serviceid 1 
client-init,req-ack,last-pckt (32) (DF) (ttl 64, id 0, len 60)
16:02:57.198506 134.109.132.17.afs > 81.66.109.65.20042:  [udp sum ok] rx data cid 
04b0954c call# 52032 seq 1 ser 110202648 secindex 0 serviceid 1 
client-init,req-ack,last-pckt (32) (DF) (ttl 64, id 0, len 60)
...

Please note: 
iptables is configured to accept packets coming in from external networks
(! 134.109.0.0/16) only if they come from port 7001.
Any requests coming in from 81.66.109.65.20042 would be dropped and we
see no such packets (dropped packets will be logged).
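For reference, the rule set described amounts to roughly the following
(iptables syntax of that era; the interface name is only an example):

    iptables -A INPUT -i eth0 -p udp --dport 7000 -s ! 134.109.0.0/16 --sport ! 7001 -j LOG
    iptables -A INPUT -i eth0 -p udp --dport 7000 -s ! 134.109.0.0/16 --sport ! 7001 -j DROP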
So where does the fileserver get this destination 81.66.109.65.20042 from?
Why does the fileserver continuously try to send packets to this destination?

Thomas.
-- 
--
Thomas Mueller, TU Chemnitz, URZ, D-09107 Chemnitz
--


[OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Lukas Kubin
I'm trying to set up registry keys for OpenAFS so that "Integrated login 
failed" is not displayed when logging in to Windows XP using a local account. 
It is a similar setup to the one shown in the registry.txt file 
attached to release 1.3.7100.
I created two keys under NetworkProvider/Domain. One is "LOCALHOST" and 
the other is "OUR.REALM". "LOCALHOST" contains FailLoginsSilently=1 
and LogonOptions=0 values; "OUR.REALM" only contains LogonOptions=1.

1. The problem is the OpenAFS client ignores this setup and always displays 
"Integrated login failed: ..." when logging into a local account.
2. Another problem is that when I log into the Samba domain I only have AFS 
tokens and no K5 tickets in KfW 2.6.5.

How can I solve these problems?
Thanks.
lukas


Re: [OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Jeffrey Altman
Lukas Kubin wrote:
I'm trying to setup registry keys for OpenAFS so that Integrated login 
failed is not displayed when logging in Windows XP using local account. 
It is similar setup as that one displayed in the registry.txt file 
attached to release 1.3.7100.
I created two keys under NetworkProvider/Domain. One is LOCALHOST and 
the another is OUR.REALM. LOCALHOST contains FailLoginsSilently=1 
and LogonOptions=0 values. OUR.REALM only contains LogonOptions=1.

1. The problem is OpenAFS client ignores this setup and always displays 
Integrated login failed: ... when logging into local account.
File a bug with [EMAIL PROTECTED] and someone will take a look at 
it when I have time.

2. Another problem is when I log into Samba Domain I only have AFS 
tokens and no K5 tickets in KfW 2.6.5
Integrated login cannot pass Kerberos 5 tickets from the Network
Provider to the user's logon session.  Therefore KFW will never have
tickets.  If you want the users to obtain K5 tickets and have them
be used in the logon session, the workstation must obtain them via
Microsoft's Kerberos LSA.

Jeffrey Altman




Re[2]: [OpenAFS] Fwd: your OpenAFS posting

2004-10-13 Thread Ron Croonenberg
Hi Jeff,

where in the client source is the drive mapping done ?

thanks,

Ron

Then I don't think your problem is related because Allen's problem
is the fact that his servers are multi-homed and his clients do
not reliably find the correct IP address.

You have already narrowed your problem down to something related
to the Network Drive mapping configuration.  That is where additional
time will have to be spent.



Ron Croonenberg wrote:
 Uhm..  multi-homed as in having multiple ip addresses ?  nope.
 The machine has 3 ethernet cards, but 2 of them are not activated on startup.

 Ron



I think Allen Moy is trying to explain to you what might be wrong
with your servers as visible from your Windows client.  Is your
new server multihomed?

Jeffrey Altman


Ron Croonenberg wrote:


Apparently there's someone who experienced similar problems with windows
clients and a second afs server/cell.

I attached the msg


any ideas or comments ?

thanks,

Ron
=

1879:
 Thomas Edison gets an idea, and his brother Timmy says,
 Hey, what's that thing over your head?
=

 Ron Croonenberg   | Phone: 1 765 658 4761
 Technology Coordinator| Fax:   1 765 658 4732
   |
 Department of ComputerScience | e-mail : [EMAIL PROTECTED]
 DePauw University |
 Julian Science  Math Center  |
 602 South College Ave.|
 Greencastle, IN  46135|
=

 http://www.depauw.edu/acad/computer/RonCroonenberg.asp
=





Date: Tue, 12 Oct 2004 22:00:15 +0800 (HKT)
From: Allen Moy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: your OpenAFS posting


I found, by google, your posting

[OpenAFS] still problem with windows client

I recently setup, for the first time, an afs
fileserver on a Linux machine running debian.
The afs cell can be accessed from Linux and Windows
clients.  When I repeated the installation process
on a second machine to solidify my new knowledge,
I discovered that the 2nd afs cell was accessible
from Linux clients, but not accessible from Windows
clients.

After much confusion, I realized that both machines
had two IP numbers (for internal and external networks).
In the case of the 1st machine, the external IP number
is assigned to eth0, and the afs fileserver is assigned
to this IP number.  In the 2nd case, the external IP number
is assigned to eth1, and so is the afs fileserver.

When I did a
  vos examine root.cell
on the properly working 1st machine, it showed the root.cell
had an assigned machine name which matched the external
machine name on eth0.

The same
  vos examine root.cell
on the 2nd machine showed the root.cell to be assigned to
the machine name of the internal interface eth0.

On the troubled 2nd machine, I edited files to turn off
the configuration of eth0 at boot, and then reboot.
After reboot,
  vos examine root.cell
showed an assignment of root.cell to the machine name of
the external eth1 interface.  Most importantly, Windows
client access started working.

If your troubled afs fileserver machine has two network
interfaces, the cause of the Windows client problem may
be the same as mine.
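A quick way to see which address actually got registered, from any client
with the AFS tools installed (the cell name is a placeholder):

    vos listaddrs -cell <your.cell>           # addresses each fileserver registered in the VLDB
    vos examine root.cell -cell <your.cell>   # which server/partition the VLDB points clients at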

Regards,
Allen Moy







Re: [OpenAFS] Fwd: your OpenAFS posting

2004-10-13 Thread Jeffrey Altman
/src/WINNT/client_config/drivemap.cpp
Ron Croonenberg wrote:
Hi Jeff,
where in the client source is the drive mapping done ?
thanks,
Ron





Re: [OpenAFS] Status on 2.6 Kernel?

2004-10-13 Thread e r0ck


works fine with several 3rd party modules i use (loading from path outside of /lib/modules).
Even if i put the module in the /lib/modules tree, it still no workie.  it seems the module is being built incorrectly and the module loader does not accept it.

i won't have much impetus to get this working either, since i have arla working (with a module being loaded from /usr/lib/arla/bin  BTW)

i appreciate your help though.

-----Original Message-----
e r0ck wrote:
>
>> of course i depmodded.
>>
>> modprobe /usr/vice/etc/whateverthenameofthemodulewas.ko
>
>No, no.
>
>Put the module somewhere in /lib/modules/`uname -r` and then run
>
>modprobe whateverthenameofthemodulewas.ko
>
>With no path...
>.
>
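For what it's worth, the sequence being suggested is roughly this (the module
file name is only an example; the OpenAFS init script normally insmods it from
/usr/vice/etc/modload instead):

    cp /usr/vice/etc/modload/libafs-2.6.8-1.ko /lib/modules/$(uname -r)/kernel/fs/
    depmod -a
    modprobe libafs-2.6.8-1      # module name only, without the .ko extension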



Re: [OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Jeffrey Altman
The MSI certainly creates registry entries of type DWORD.  If it
didn't, OpenAFS would probably crash when it received configuration data
of the wrong type.  Since you have not specified what entries are
not being set properly it is not possible to check to see if those
entries are configured properly in the MSI.  If they are not configured
properly it would be a bug that should be filed at [EMAIL PROTECTED]
Jeffrey Altman

Lukas Kubin wrote:
Thank you for your answer.
I found a problem with creating the registry values in the .msi installer 
using Orca: I set the registry values correctly; however, the installer 
creates them in the registry as REG_SZ type instead of DWORD. Do you 
know if there's a way to force them to be DWORD?

Thank you.
lukas
Jeffrey Altman napsal(a):
Lukas Kubin wrote:
I'm trying to setup registry keys for OpenAFS so that Integrated 
login failed is not displayed when logging in Windows XP using local 
account. It is similar setup as that one displayed in the 
registry.txt file attached to release 1.3.7100.
I created two keys under NetworkProvider/Domain. One is LOCALHOST 
and the another is OUR.REALM. LOCALHOST contains 
FailLoginsSilently=1 and LogonOptions=0 values. OUR.REALM only 
contains LogonOptions=1.

1. The problem is OpenAFS client ignores this setup and always 
displays Integrated login failed: ... when logging into local account.

File a bug with [EMAIL PROTECTED] and someone will take a look 
at it when I have time.

2. Another problem is when I log into Samba Domain I only have AFS 
tokens and no K5 tickets in KfW 2.6.5

Integrated login cannot pass Kerberos 5 tickets from the Network 
Provider to the users logon session.  Therefore KFW will never have
tickets.  If you want the users to obtain K5 tickets and have them
be used in the logon session, the workstation must obtain them via
Microsoft's Kerberos LSA.

Jeffrey Altman




Re: [OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Rodney M Dyer
At 09:40 AM 10/13/04, Jeffrey Altman wrote:
Integrated login cannot pass Kerberos 5 tickets from the Network Provider 
to the users logon session.  Therefore KFW will never have tickets.  If 
you want the users to obtain K5 tickets and have them be used in the logon 
session, the workstation must obtain them via Microsoft's Kerberos LSA.
Is ms2mit.exe still supported?  And if so, will it be supported indefinitely?
Rodney


Re: [OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Jeffrey Altman
Rodney M Dyer wrote:
At 09:40 AM 10/13/04, Jeffrey Altman wrote:
Integrated login cannot pass Kerberos 5 tickets from the Network 
Provider to the users logon session.  Therefore KFW will never have 
tickets.  If you want the users to obtain K5 tickets and have them be 
used in the logon session, the workstation must obtain them via 
Microsoft's Kerberos LSA.

Is ms2mit.exe still supported?  And if so, will it be supported 
indefinitely?

Rodney
ms2mit.exe is part of KFW and it is still supported.
However, it cannot work unless the user obtains tickets via the
Microsoft LSA.
OpenAFS does not require the use of ms2mit.exe as it can use
the MSLSA: krb5_ccache to access those tickets directly.
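For example, at a Windows command prompt after a domain logon (assuming KFW's
command-line tools are installed and on the PATH):

    rem copy the LSA-held TGT into the default MIT credential cache, then list it
    ms2mit
    klist

With OpenAFS using the MSLSA: cache directly, the ms2mit step is only needed
for other Kerberos 5 applications that read the default MIT cache.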
Jeffrey Altman





[OpenAFS] Re: Symantec AntiVirus 9 problems

2004-10-13 Thread Joe Buehler
The problem we are seeing with Symantec 9 is probably a Cygwin issue
caused by Symantec -- socket() creation is slowed by Symantec 9 for some
reason and Cygwin's select() algorithm calls socket() for every select().
A fix was checked in to Cygwin that supposedly corrects this behavior.
See the Cygwin X list for the details.
--
Joe Buehler


Re: [OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Lukas Kubin
Sorry I didn't reply to the mailing list.
The entries I'm working with are FailLoginsSilently and LogonOptions (I 
wrote that in the first message of the thread). By looking more closely 
(and/or using my brain more than before) at the .msi tables in Orca I 
found that DWORD values are preceded with a "#" string. So instead of 
typing 1 or 0 into the appropriate fields I had to type #1 or #0 
respectively.
It might be a beginner's-only problem; I've never edited an .msi before. 
Sorry for disturbing you with it.
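For anyone else editing the MSI in Orca, a Registry-table row along these
lines ends up as a DWORD (key path as in registry.txt; the row name is made
up, and the Component_ column is omitted here):

    Registry        Root  Key                                                       Name                Value
    FailLoginsSil   2     SOFTWARE\OpenAFS\Client\NetworkProvider\Domain\LOCALHOST  FailLoginsSilently  #1

Root 2 means HKEY_LOCAL_MACHINE; the leading '#' marks an integer (DWORD) value.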
May I ask one more question? Is there any option to tell OpenAFS 
not to show the AFS login window after logging in using a local account?
Thank you.

lukas
Jeffrey Altman napsal(a):
The MSI certainly creates registry entries of type DWORD.  If it
didn't, OpenAFS would probably crash when it received configuration data
of the wrong type.  Since you have not specified what entries are
not being set properly it is not possible to check to see if those
entries are configured properly in the MSI.  If they are not configured
properly it would be a bug that should be filed at 
[EMAIL PROTECTED]

Jeffrey Altman

Lukas Kubin wrote:
Thank you for your answer.
I found a problem with creating the registry values of .msi installer 
using Orca: I set the registry values correctly, however the 
installer creates them in registry as REG_SZ type instead of 
DWORD. Don't you know if there's a way how to force them to be 
DWORD?

Thank you.
lukas
Jeffrey Altman napsal(a):
Lukas Kubin wrote:
I'm trying to setup registry keys for OpenAFS so that Integrated 
login failed is not displayed when logging in Windows XP using 
local account. It is similar setup as that one displayed in the 
registry.txt file attached to release 1.3.7100.
I created two keys under NetworkProvider/Domain. One is LOCALHOST 
and the another is OUR.REALM. LOCALHOST contains 
FailLoginsSilently=1 and LogonOptions=0 values. OUR.REALM only 
contains LogonOptions=1.

1. The problem is OpenAFS client ignores this setup and always 
displays Integrated login failed: ... when logging into local 
account.


File a bug with [EMAIL PROTECTED] and someone will take a 
look at it when I have time.

2. Another problem is when I log into Samba Domain I only have AFS 
tokens and no K5 tickets in KfW 2.6.5


Integrated login cannot pass Kerberos 5 tickets from the Network 
Provider to the users logon session.  Therefore KFW will never have
tickets.  If you want the users to obtain K5 tickets and have them
be used in the logon session, the workstation must obtain them via
Microsoft's Kerberos LSA.

Jeffrey Altman


Re: [OpenAFS] Domain registry keys problem in 1.3.7100 for Windows

2004-10-13 Thread Jeffrey Altman
Lukas Kubin wrote:
May I have one more question? Is there any option how to tell OpenAFS 
not to show the AFS login window after logging in using local account only?
The startup options for the AFS Systray tool (afscreds.exe) are stored
in AfscredsShortcutParams.  Removing the '-A' parameter will prevent
the display of the login window in all cases when you do not have
tokens.  There is no mechanism for only displaying it during a DOMAIN
login.
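For example, assuming the layout documented in registry.txt (the value shown
is only illustrative):

    HKEY_LOCAL_MACHINE\SOFTWARE\OpenAFS\Client
        AfscredsShortcutParams  (REG_SZ)   e.g. change "-A -M -N -Q" to "-M -N -Q"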






Re: [OpenAFS] fileserver crashes

2004-10-13 Thread Jeffrey Hutzelman
> I don't know if our fileserver crashes are related to what Matthew Cocker
> and others are seeing, but we are indeed seeing problems here at umich.
> Background information:
> The machines in question are dual pentium 4 machines with
> hyperthreading enabled running linux 2.4.26 (SMP) and glibc 2.3.2.  The
> actual file storage is on cheap raid devices that use multiple IDE
> drives but talk SCSI to the rest of the world.  These raids have their
> own set of problems, so I would not count them as super-reliable file
> storage.  We're running the pthreads version of the fileserver.
> I think we're seeing at least 3 distinct problems with openafs 1.2.11.
> The first may actually be networking.  We get these with varying
> frequency in VolserLog:
> Sun Oct 10 22:05:09 2004 1 Volser: DumpVolume: Rx call failed during
> dump, error -1
> Tue Oct 12 11:38:07 2004 1 Volser: DumpVolume: Rx call failed during
> dump, error -1
> Tue Oct 12 13:39:23 2004 1 Volser: DumpVolume: Rx call failed during
> dump, error -1
> Tue Oct 12 15:06:46 2004 1 Volser: DumpVolume: Rx call failed during
> dump, error -1
> Helpful message, eh?  Can't tell what volume was being dumped,
> or where it was going.

Well, -1 basically means the rx connection timed out.  There should be a
corresponding error on whatever client was doing the dump, unless the
issue was that that client decided to abort the call.  We see that all the
time, because there are cases where our backup system will parse the start
of a volume dump, decide it doesn't want it after all, and abort.

> We have at various times gotten problems with read-only replicas that
> are oddly truncated.  This might or might not be the consequence of the
> previous problem.

Hm.  That sounds familiar, but I thought that bug was fixed some time ago.
In fact, Derrick confirms that the fix is in 1.2.11.

> Another probably completely different problem we have concerns volumes
> with really small volume IDs.  Modern AFS software creates large 10
> digit volume IDs.  But we have volumes that were created long before
> AFS 3.1, with small 3 digit volume IDs.  Those volumes are rapidly
> disappearing as one by one, during various restarts, the fileserver and
> salvager proceed to discard all the data, then the volume header.

That's... bizarre.  I've never heard of such a thing, but then, we don't
have any Linux fileservers in our cell.  I understand the Andrew cell was
seeing this for a while, but it went away without anyone successfully
debugging it.
The last problem you describe sounds suspiciously like something Derrick
has been trying to track down for the last 2 or 3 weeks.  I'll leave that
to him, since he has a better idea than I of the current status of that.


Re: [OpenAFS] fileserver crashes

2004-10-13 Thread Derrick J Brashear
On Wed, 13 Oct 2004, Jeffrey Hutzelman wrote:
>> We have at various times gotten problems with read-only replicas that
>> are oddly truncated.  This might or might not be the consequence of the
>> previous problem.
>
> Hm.  That sounds familiar, but I thought that bug was fixed some time ago.
> In fact, Derrick confirms that the fix is in 1.2.11

The fix was for the top inode. It's conceivable some bug affects other
inodes.

>> Another probably completely different problem we have concerns volumes
>> with really small volume IDs.  Modern AFS software creates large 10
>> digit volume IDs.  But we have volumes that were created long before
>> AFS 3.1, with small 3 digit volume IDs.  Those volumes are rapidly
>> disappearing as one by one, during various restarts, the fileserver and
>> salvager proceed to discard all the data, then the volume header.
>
> That's... bizarre.  I've never heard of such a thing, but then, we don't
> have any Linux fileservers in our cell.  I understand the Andrew cell was
> seeing this for a while, but it went away without anyone successfully
> debugging it.

It may have recurred once recently, but we can't cause it to happen on
demand, so debugging it has proven almost impossible.

> The last problem you describe sounds suspiciously like something Derrick
> has been trying to track down for the last 2 or 3 weeks.  I'll leave that
> to him, since he has a better idea than I of the current status of that.

We're still seeing a problem, but ours involves callback rxcon peers being
garbage collected while there are still references to those peers in
conns. It looks like you have a problem with a connection being
garbage collected while something has references to it. As it happens, in
the process of trying to fix my problem, we found and fixed several of
those. If you want to try a patch (which may move your crashes elsewhere,
but should not increase the frequency of crashing, and may decrease it)
let me know.



Re: [OpenAFS] fileserver crashes

2004-10-13 Thread Marcus Watts
Jeffrey Hutzelman [EMAIL PROTECTED] writes:
 From: Jeffrey Hutzelman [EMAIL PROTECTED]
 To: Marcus Watts [EMAIL PROTECTED], [EMAIL PROTECTED]
 Subject: Re: [OpenAFS] fileserver crashes
 Message-ID: [EMAIL PROTECTED]
 In-Reply-To: [EMAIL PROTECTED]
 References:  [EMAIL PROTECTED]
 Date: Wed, 13 Oct 2004 11:04:34 -0400
 
  I don't know if our fileserver crashes are related to what Matthew Cocker
  and others are seeing, but we are indeed seeing problems here at umich.
 
  Background information:
  The machines in question are dual pentium 4 machines with
  hyperthreading enabled running linux 2.4.26 (SMP) and glibc 2.3.2.  The
  actual file storage is on cheap raid devices that use multiple IDE
  drives but talk SCSI to the rest of the world.  These raids have their
  own set of problems, so I would not count them as super-reliable file
  storage.  We're running the pthreads version of the fileserver.
 
  I think we're seeing at least 3 distinct problems with openafs 1.2.11.
 
  The first may actually be networking.  We get these with varying
  frequency in VolserLog:
  Sun Oct 10 22:05:09 2004 1 Volser: DumpVolume: Rx call failed during 
 dump, error -1
  Tue Oct 12 11:38:07 2004 1 Volser: DumpVolume: Rx call failed during 
 dump, error -1
  Tue Oct 12 13:39:23 2004 1 Volser: DumpVolume: Rx call failed during 
 dump, error -1
  Tue Oct 12 15:06:46 2004 1 Volser: DumpVolume: Rx call failed during 
 dump, error -1
  Helpful message, eh?  Can't tell what volume was being dumped,
  or where it was going.
 
 Well, -1 basically means the rx connection timed out.  There should be a
 corresponding error on whatever client was doing the dump, unless the
 issue was that that client decided to abort the call.  We see that all the
 time, because there are cases where our backup system will parse the start
 of a volume dump, decide it doesn't want it after all, and abort.

That's nice, but right off the top of my head I can think of 3
possibilities for the client -- hdserver, vos run by an
administrator, or the afs backup software, and each of those poses
problems in terms of collecting error messages.  hdserver keeps a log,
but there aren't any obviously related failures there (there aren't any
messages at all for some of these time periods.)  Even if there was a
failure, I don't know how much sense we could make of it; hdserver is
capable of doing multiple vos releases more or less in parallel at
once.  Our backup system apparently was running out of TSM client
licenses for a while and aborting - so that could have been the cause
of many of these, but as you already observed, there's no way to tell
which of those failures is related to which of these messages.  And,
presumably contrary to the experience at most sites, our administrators
have been notoriously reluctant to undergo brain surgery to install the
necessary hardware so that we can screen scrape their heads to collect
error messages on demand.  Is there any reason that error message
couldn't be a little more helpful?

 
 
  We have at various times gotten problems with read-only replicas that
  are oddly truncated.  This might or might not be the consequence of the
  previous problem.
 
 Hm.  That sounds familiar, but I thought that bug was fixed some time ago.
 In fact, Derrick confirms that the fix is in 1.2.11
 
  Another probably completely different problem we have concerns volumes
  with really small volume IDs.  Modern AFS software creates large 10
  digit volume IDs.  But we have volumes that were created long before
  AFS 3.1, with small 3 digit volume IDs.  Those volumes are rapidly
  disappearing as one by one, during various restarts, the fileserver and
  salvager proceed to discard all the data, then the volume header.
 
 That's... bizarre.  I've never heard of such a thing, but then, we don't
 have any Linux fileservers in our cell.  I understand the Andrew cell was
 seeing this for a while, but it went away without anyone successfully
 debugging it.

Well, it may be going away in our cell too -- clearly, natural selection
is busy removing problem volumes one by one.  I can't say that it's leaving
me with a comfortable feeling though.

 
 
 The last problem you describe sounds suspiciously like something Derrick
 has been trying to track down for the last 2 or 3 weeks.  I'll leave that
 to him, since he has a better idea than I of the current status of that.

I'll respond to that separately.  Thanks!

-Marcus Watts
UM ITCS Umich Systems Group


Re: [OpenAFS] fileserver crashes

2004-10-13 Thread John W. Sopko Jr.
Our Linux/AFS 1.2.11 file server has been hanging for the last few weeks.
We have been upgrading machines to Windows XP SP2 and OpenAFS 1.3.7x
over the last month or so. Here is one issue I found that was causing
the problem:
We have a user who uses a Windows application called Matlab for
generating and processing hundreds of files in AFS space from a Windows
XP machine. He was running OpenAFS 1.2.x client. His machine was upgraded
to Service pack II and OpenAFS 1.3.71. His Matlab application hangs in
windows and our file server eventually melts down.
I am not an expert at debugging AFS, let me know if you want me to try
something. I cranked up the debug on the FileLog to 25. I could see his
machine was constantly logging messages like this, (the user name really
is debug):
Wed Oct 13 12:15:47 2004 FindClient: authenticating connection: authClass=2
Wed Oct 13 12:15:47 2004 FindClient: rxkad conn:
name=debug,inst=,cell=,exp=1097688735,kvno=8
Wed Oct 13 12:15:47 2004 FindClient: authenticating connection: authClass=2
Wed Oct 13 12:15:47 2004 FindClient: rxkad conn:
name=debug,inst=,cell=,exp=1097690546,kvno=8
Wed Oct 13 12:15:47 2004 SAFS_FetchStatus,  Fid = 1769554818.9542.9001, Host
152.2.128.179, Id 5269
Wed Oct 13 12:15:47 2004 SAFS_FetchStatus returns 0
I also ran scout. (I used rxdebug too, but I do not know how to interpret the
results; they did not look suspicious.) Within scout the leftmost
column shows the number of rpc calls to the server. I restarted the file
server and the number of rpc calls went up dramatically; it hit  in
about 2 minutes, then it just shows a *xxx since it is limited to 4 columns,
but you can see it constantly counting upward.
This user says he has been running his experiments for the last several
months and did not have a problem until his system was upgraded to
AFS 1.3.71, about the same time our AFS file server problems started to
happen. He left the Matlab application in the hung state; we killed it
and the number of rpc's in scout went below 100.
By the way, when the file server hangs, our web server, which accesses data
on the file server, hangs too, because it cannot access AFS on this server.
A restart of the file server clears the problem, but it comes back as
the Windows client starts hammering the server again.
For those of you having problems, try running scout and check out your
rpc count:
scout -server hostname -freq 5
You can specify multiple servers and compare stats.
The following was posted when the 1.3.7 client came out. Is the official
word that the Windows OpenAFS client should not be used with Windows
applications due to this issue?
---
OpenAFS installed on the machine.  No gateway mode.
Try opening a 100MB file in Microsoft Word from AFS
and then perform a Save as ... to another filename
within AFS.
You will receive a Delay Writes warning and then if you are
using Word XP, Word will crash.
Jeffrey Altman
John W. Sopko Jr. wrote:
 Can you elaborate on This causes all Microsoft Office
 applications to have failures when writing to AFS.

 Do you mean when using the standard AFS Windows client to AFS, or using the
 AFS Light Gateway to write to AFS on a Windows system? Thanks.

 Jeffrey Altman wrote:

 OpenAFS 1.3.70 has now shipped.
 Examples of some of the hard work which is ahead of us:


 * The architecture of the SMB/CIFS server does not allow for sequential
   processing of SMB/CIFS requests.  This prevents us from implementing
   support for digital signing but more importantly breaks applications
   which use overlapped writes.  This causes all Microsoft Office
   applications to have failures when writing to AFS.  I can't think of
   a more important suite of applications which must simply work if AFS
   is truly to be used in a transparent manner from the end user
   experience.

--
John W. Sopko Jr.   University of North Carolina
email: sopko AT cs.unc.edu  Computer Science Dept., CB 3175
Phone: 919-962-1844 Sitterson Hall; Room 044
Fax:   919-962-1799 Chapel Hill, NC 27599-3175


Re: [OpenAFS] fileserver crashes

2004-10-13 Thread Derrick J Brashear
The Windows client problem you reference below will be fixed in 1.3.72.
However, the underlying filesystem issue is almost certainly the same one 
other people are having, and I can give you a patch, if you're willing to 
try it, which will not fix the issue but may help us track it.

On Wed, 13 Oct 2004, John W. Sopko Jr. wrote:
Our linux/AFS 1.2.11 file server has been hanging the last few weeks.
We have been upgrading machines to Windows XP SPII and OpenAFS 1.7.x
over the last month or so. Here is one issue I found that was causing
the problem:
We have a user who uses a Windows application called Matlab for
generating and processing hundreds of files in AFS space from a Windows
XP machine. He was running OpenAFS 1.2.x client. His machine was upgraded
to Service pack II and OpenAFS 1.3.71. His Matlab application hangs in
windows and our file server eventually melts down.
I am not an expert at debugging AFS, let me know if you want me to try
something. I cranked up the debug on the FileLog to 25. I could see his
machine was constantly logging messages like this, (the user name really
is debug):
Wed Oct 13 12:15:47 2004 FindClient: authenticating connection: authClass=2
Wed Oct 13 12:15:47 2004 FindClient: rxkad conn:
name=debug,inst=,cell=,exp=1097688735,kvno=8


[OpenAFS] Getting debugging info for afsd on a 2.6.8.1 kernel.

2004-10-13 Thread Ken Aaker
I've just started putting 1.3.71 up on some Linux machines and I was 
wondering how to collect debugging information for a client hang? I've 
tried running afsd with -debug, but after the daemons are forked and go 
to background processes, I haven't been able to find any more output.

I've built the 1.3.71 code against a SuSE 9.1 kernel which is a 2.6.5 
with a couple of patches, and it works just fine.

But the code that I built on a Fedora Core 2 system with a 2.6.8.1 
kernel will hang when I do the first write to any volume. (I just do a 
touch foo).  The system is still alive, only the afsd's are stuck. The 
afsd background processes are all in a wait state, either labeled afs_os 
or a hex wait channel. The system is running a 2.6.8.1 FC 2 stock 
uniprocessor kernel, a Xeon 2GHz. I run a RHEL3 system on the same 
system and cache partition and that works fine. I'll try to collect any 
more information, once I know where to get it.

Ken
--
work - [EMAIL PROTECTED]


Re: [OpenAFS] Getting debugging info for afsd on a 2.6.8.1 kernel.

2004-10-13 Thread Derrick J Brashear
magic sysrq output is probably a good place to start
also, can you cmdebug the client?
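Concretely, something along these lines (the client host name is a placeholder):

    echo t > /proc/sysrq-trigger       # dump all task states to the kernel log (needs magic SysRq enabled)
    dmesg | tail -200                  # look for where the afsd / afs_* threads are blocked
    cmdebug <client-host> -long        # from another machine: dump the cache manager's locks and entries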
On Wed, 13 Oct 2004, Ken Aaker wrote:
The system is still alive, only the afsd's are stuck. The afsd background 
processes are all in a wait state, either labeled afs_os or a hex wait 
channel. The system is running a 2.6.8.1 FC 2 stock uniprocessor kernel, a 
Xeon 2GHz. I run a RHEL3 system on the same system and cache partition and 
that works fine. I'll try to collect any more information, once I know where 
to get it.


Re: [OpenAFS] fileserver crashes

2004-10-13 Thread Jeffrey Altman
Jeffrey Altman wrote:
This bug should be fixed in the current daily builds.
  //afs/athena.mit.edu/user/j/a/jaltman/Public/OpenAFS
I would appreciate it if you would ask your user to test it.
The following was posted when the 1.3.7 client came out. Is the official
word that the windows OpenAFS client should not be used with Windows
applications do to this issue?
---
OpenAFS installed on the machine.  No gateway mode.
Try opening a 100MB file in Microsoft Word from AFS
and then perform a Save as ... to another filename
within AFS.
You will receive a Delay Writes warning and then if you are
using Word XP, Word will crash.
Jeffrey Altman

This bug is still present.  I have not been provided the resources
necessary to fix it yet.
Actually, the overlapped i/o bug may have been fixed as a side effect of
fixing the AFSRPC Connection "use once and discard" bug.

Please test this in your environment as well.
Jeffrey Altman





Re: [OpenAFS] Getting debugging info for afsd on a 2.6.8.1 kernel. cmdebug output.

2004-10-13 Thread Ken Aaker
I haven't gotten the magic SysRq information, but I did collect this...
cmdebug seemed to work just fine. The cmdebug -long output was too big,
so I abbreviated it. But I still have it.

Ken
 ps flax  output---
1     0  2334     1  15   0    0    0 -      SW   ?      0:00 [afs_rxlistener]
1     0  2336     1  15   0    0    0 afs_os SW   ?      0:00 [afs_callback]
1     0  2338     1  15   0    0    0 369792 SW   ?      0:00 [afs_rxevent]
1     0  2340     1  15   0    0    0 369792 SW   ?      0:00 [afsd]
1     0  2342     1  15   0    0    0 369792 SW   ?      0:00 [afs_checkserver]
1     0  2344     1  15   0    0    0 afs_os SW   ?      0:00 [afs_background]
1     0  2346     1  15   0    0    0 afs_os SW   ?      0:00 [afs_background]
1     0  2348     1  20   0    0    0 afs_os SW   ?      0:00 [afs_cachetrim]
0  1043  2379  2356  17   0 4744  336 afs_os S    pts/1  0:00  \_ touch foo
--
--- cmdebug -long --- output -
Lock afs_xvcache status: (none_waiting)
Lock afs_xdcache status: (none_waiting)
Lock afs_xserver status: (none_waiting)
Lock afs_xvcb status: (none_waiting)
Lock afs_xbrs status: (none_waiting)
Lock afs_xcell status: (none_waiting)
Lock afs_xconn status: (none_waiting)
Lock afs_xuser status: (none_waiting)
Lock afs_xvolume status: (none_waiting)
Lock puttofile status: (none_waiting)
Lock afs_ftf status: (none_waiting)
Lock afs_xcbhash status: (none_waiting)
Lock afs_xaxs status: (none_waiting)
Lock afs_xinterface status: (none_waiting)
** Cache entry @ 0xfab9a608 for 2.536870933.61420.4663048 [sbsroch.com]
    6394 bytes  DV 1  refcnt 1
    callback d8bb8520  expires 1097693004
    0 opens  0 writers
    normal file
    states (0x0)
** Cache entry @ 0xfab84000 for 1.1.1.1 [dynroot]
    2048 bytes  DV 3  refcnt 1
    callback  expires 0
    0 opens  0 writers
    volume root
    states (0x4), read-only
** Cache entry @ 0xfab8f628 for 2.536870933.53230.4616355 [sbsroch.com]
    430133 bytes  DV 1  refcnt 1
    callback d8bb8520  expires 1097693004
    0 opens  0 writers
    normal file
    states (0x0)
** Cache entry @ 0xfab8d4a8 for 1.1.16777220.1 [dynroot]
    22 bytes  DV 1  refcnt 0
    callback  expires 0
    0 opens  0 writers
    mount point
    states (0x4), read-only
** Cache entry @ 0xfab84430 for 2.536870916.1.1 [sbsroch.com]
    2048 bytes  DV 23  refcnt 1
    callback d8bb8520  expires 1097694284
    0 opens  0 writers
    volume root
    states (0x5), stat'd, read-only
** Cache entry @ 0xfab87670 for 2.536870933.22514.4783362 [sbsroch.com]
    155648 bytes  DV 1  refcnt 1
    callback d8bb8520  expires 1097693004
    0 opens  0 writers
    normal file
    states (0x0)
** Cache entry @ 0xfab9d418 for 2.536870916.4.3 [sbsroch.com]
    6 bytes  DV 1  refcnt 0
    callback d8bb8520  expires 1097694284
    0 opens  0 writers
    mount point
    states (0x5), stat'd, read-only
** Cache entry @ 0xfab95030 for 2.536871020.31650.115163 [sbsroch.com]
    2048 bytes  DV 6  refcnt 1
    callback d8bb8520  expires 1097625676
    0 opens  0 writers
    normal file
    states (0x0)
** Cache entry @ 0xfab8daf0 for 2.536870933.56316.4616994 [sbsroch.com]
    40960 bytes  DV 1  refcnt 1
    callback d8bb8520  expires 1097693004
    0 opens  0 writers
    normal file
    states (0x0)
** Cache entry @ 0xfab9d200 for 2.536870928.1.1 [sbsroch.com]
    2048 bytes  DV 16  refcnt 1
    callback d732cba0  expires 1097694284
    0 opens  0 writers
    volume root
    states (0x5), stat'd, read-only
** Cache entry @ 0xfab8c600 for 2.536870928.4.203 [sbsroch.com]
    15 bytes  DV 1  refcnt 0
    callback d732cba0  expires 1097694284
    0 opens  0 writers
    mount point
    states (0x5), stat'd, read-only
** Cache entry @ 0xfab9ac50 for 2.536870916.16.13817 [sbsroch.com]
    11 bytes  DV 0  refcnt 0
    callback d8bb8520  expires 1097666124
    0 opens  0 writers
    mount point
    states (0x4), read-only
** Cache entry @ 0xfaba7550 for 2.536870933.1.1 [sbsroch.com]
    45056 bytes  DV 48384  refcnt 1
    callback d8bb8520  expires 1097693004
    0 opens  0 writers
    volume root
    states (0x0)
** Cache entry @ 0xfab9cbb8 for 2.536870928.14.10009 [sbsroch.com]
    10 bytes  DV 0  refcnt 0
    callback  expires 1097629388
    0 opens  0 writers
    mount point
    states (0x4), read-only
** Cache entry @ 0xfab84218 for 2.536870928.16.10010 [sbsroch.com]
    14 bytes  DV 0  refcnt 0
    callback  expires 1097618380
    0 opens  0 writers
    mount point
    states (0x4), read-only
** Cache entry @ 0xfab960f0 for 2.536870928.22.10013 [sbsroch.com]
    15 bytes  DV 0  refcnt 0
    callback  expires 1097618380
    0 opens  0 writers
    mount point
    states (0x4), 

Re: [OpenAFS] Getting debugging info for afsd on a 2.6.8.1 kernel. cmdebug output.

2004-10-13 Thread Derrick J Brashear
On Wed, 13 Oct 2004, Ken Aaker wrote:
I haven't gotten the magic SysRq information, but I did collect this...
cmdebug seemed to work just fine. The cmdebug -long output was too big, so, I 
abbreviated it. But I still have it.
does cmdebug without -long show anything?


Re: [OpenAFS] Getting debugging info for afsd on a 2.6.8.1 kernel. cmdebug output.

2004-10-13 Thread Ken Aaker




cmdebug -server storm

Showed no output at all.

-- 
work - [EMAIL PROTECTED]	(507) 289-6910 ext 1
home - [EMAIL PROTECTED]