Re: Prevent winbind idmap corruption
Oops, bug in the patch: duplicate deletion of the mapping on rollback. A corrected version is attached. Sorry!

Michael

Index: nsswitch/winbindd_idmap.c
===================================================================
RCS file: /cvsroot/samba/source/nsswitch/winbindd_idmap.c,v
retrieving revision 1.3.4.13
diff -u -r1.3.4.13 winbindd_idmap.c
--- nsswitch/winbindd_idmap.c	27 Apr 2002 03:04:08 -0000	1.3.4.13
+++ nsswitch/winbindd_idmap.c	19 Dec 2002 12:32:25 -0000
@@ -44,6 +44,8 @@
 	if ((hwm = tdb_fetch_int32(idmap_tdb,
 				   isgroup ? HWM_GROUP : HWM_USER)) == -1) {
+		DEBUG(0, ("Failed to fetch %s : %s\n",
+			  isgroup ? HWM_GROUP : HWM_USER, tdb_errorstr(idmap_tdb)));
 		return False;
 	}
@@ -63,7 +65,45 @@
 
 	/* Store new high water mark */
 
-	tdb_store_int32(idmap_tdb, isgroup ? HWM_GROUP : HWM_USER, hwm);
+	if (tdb_store_int32(idmap_tdb, isgroup ? HWM_GROUP : HWM_USER, hwm)) {
+		DEBUG(0, ("Failed to store %s %d : %s\n",
+			  isgroup ? HWM_GROUP : HWM_USER, hwm,
+			  tdb_errorstr(idmap_tdb)));
+		return False;
+	}
+
+	return True;
+}
+
+/* Deallocate either a user or group id, used for failure rollback */
+
+static BOOL deallocate_id(uid_t id, BOOL isgroup)
+{
+	int hwm;
+
+	/* Get current high water mark */
+
+	if ((hwm = tdb_fetch_int32(idmap_tdb,
+				   isgroup ? HWM_GROUP : HWM_USER)) == -1) {
+		DEBUG(0, ("Failed to fetch %s : %s\n",
+			  isgroup ? HWM_GROUP : HWM_USER, tdb_errorstr(idmap_tdb)));
+		return False;
+	}
+
+	if (hwm != id + 1) {
+		/* Should actually never happen, internal redundancy... */
+		DEBUG(0, ("winbind %s mismatch on deallocation!\n",
+			  isgroup ? HWM_GROUP : HWM_USER));
+		return False;
+	}
+
+	hwm--;
+
+	/* Store new high water mark */
+
+	if (tdb_store_int32(idmap_tdb, isgroup ? HWM_GROUP : HWM_USER, hwm)) {
+		DEBUG(0, ("Failed to store %s %d : %s\n",
+			  isgroup ? HWM_GROUP : HWM_USER, hwm,
+			  tdb_errorstr(idmap_tdb)));
+		return False;
+	}
 
 	return True;
 }
@@ -109,16 +149,36 @@
 		fstring keystr2;
 
 		/* Store new id */
-		
+
 		slprintf(keystr2, sizeof(keystr2), "%s %d", isgroup ? "GID" : "UID", *id);
 
 		data.dptr = keystr2;
 		data.dsize = strlen(keystr2) + 1;
 
-		tdb_store(idmap_tdb, key, data, TDB_REPLACE);
-		tdb_store(idmap_tdb, data, key, TDB_REPLACE);
+		/* If any of the following actions fails, try to
+		   revert modifications successfully made so far. */
 
 		result = True;
+
+		if (result && tdb_store(idmap_tdb, key, data, TDB_REPLACE)) {
+			DEBUG(0, ("Failed to store id mapping %s:%s : %s\n",
+				  key.dptr, data.dptr, tdb_errorstr(idmap_tdb)));
+
+			if (!deallocate_id(*id, isgroup))
+				DEBUG(0, ("Failed to rollback id mapping\n"));
+
+			result = False;
+		}
+
+		if (result && tdb_store(idmap_tdb, data, key, TDB_REPLACE)) {
+			DEBUG(0, ("Failed to store reverse id mapping %s:%s : %s\n",
+				  data.dptr, key.dptr, tdb_errorstr(idmap_tdb)));
+
+			if (!deallocate_id(*id, isgroup) || tdb_delete(idmap_tdb, key))
+				DEBUG(0, ("Failed to rollback id mapping\n"));
+
+			result = False;
+		}
 	}
 }
RE: Samba CPU Usage with large directories ...
Scott Taylor [mailto:[EMAIL PROTECTED]] wrote:

We have a samba server running version 2.2.5 on kernel 2.4.18 with the SGI XFS patch. The shared volume consists of an XFS partition on a 3-ware RAID5 controller. The network connection is via a 4-port bonded pipe to the switch. We notice that the samba CPU usage during write operations increases dramatically once a directory contains more than a certain number of files - thought to be somewhere around the 1500 to 2000 mark. We have tried allowing samba more memory, which did not seem to help, and have had little or no success finding any information on the web, hence this post.

My guess (and that's all it is) is that this is an operating system issue. I presume you are using Linux 2.4.18, although you didn't say. Try writing a small C benchmark program that just does straight fopen/fread/fwrite/fclose operations, and time them, and see how you fare. I'll bet you find that the system calls (especially the open call) take a lot longer on the big directories. Make sure your benchmark program uses the same file naming conventions as your real code, in case the problem has something to do with the efficiency of hashing or searching the specific names.

PG
--
Paul Green, Senior Technical Consultant, Stratus Technologies.
Day: +1 978-461-7557; FAX: +1 978-461-3610
Speaking from Stratus, not for Stratus.
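The benchmark Paul suggests can be sketched roughly as below. This is a minimal, assumption-laden sketch, not a tuned tool: the directory path and the "file%06d.dat" naming pattern are placeholders, and you should substitute the naming convention your real workload uses. It times plain fopen/fwrite/fclose calls as a directory fills up, which is enough to show whether the slowdown lives in the filesystem rather than in Samba.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>

/* Wall-clock time in milliseconds. */
double now_ms(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

/* Create nfiles one-byte files in dir, timing the fopen/fwrite/fclose
   loop.  Returns elapsed milliseconds, or -1.0 on error.  The file name
   pattern is an illustrative assumption - use your real naming scheme. */
double time_creates(const char *dir, int nfiles)
{
	char path[1024];
	double start;
	int i;

	mkdir(dir, 0755);	/* ignore failure if it already exists */
	start = now_ms();
	for (i = 0; i < nfiles; i++) {
		FILE *fp;
		snprintf(path, sizeof(path), "%s/file%06d.dat", dir, i);
		fp = fopen(path, "w");
		if (!fp)
			return -1.0;
		fwrite("x", 1, 1, fp);
		fclose(fp);
	}
	return now_ms() - start;
}
```

Calling time_creates() once with a few hundred files and once with a count past the reported 1500-2000 knee, then comparing per-file cost, should show whether open() slows down with directory size on this kernel/filesystem combination.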
Re: Bug and Fix - Follow Up
On Tue, Dec 17, 2002 at 09:01:16PM -0600, Matt Roberts, GRDA wrote:

In my search for the cause of the behavior seen in my earlier post, I traced the function call path to these two interesting functions, in source/lib/util.c at about line 133:

BOOL set_global_scope(const char *scope)
{
	SAFE_FREE(smb_scope);
	smb_scope = strdup(scope);
	if (!smb_scope)
		return False;
	strupper(smb_scope);
	return True;
}

const char *global_scope(void)
{
	return smb_scope;
}

Since the latter function returns the string 'smb_scope' regardless of what is in it, wouldn't the first function protect against its being set to a NULL value if rewritten similar to this?

BOOL set_global_scope(const char *scope)
{
	SAFE_FREE(smb_scope);
	if (!scope) {
		smb_scope = strdup("");
		return False;
	}
	smb_scope = strdup(scope);
	if (!smb_scope)
		return False;
	strupper(smb_scope);
	return True;
}

Already committed - thanks!

Jeremy.
smbclient and large file support
smbclient (and smbtar) in version 2.2.7a (and prior) has problems with large files (> 4GB). The following patch (against 2.2.7a) fixes all known problems with this. This code has been checked into the CVS tree in all branches as well.

--
======================================================================
Herb Lewis                        Silicon Graphics
Networking Engineer               1600 Amphitheatre Pkwy MS-510
Strategic Software Organization   Mountain View, CA 94043-1351
[EMAIL PROTECTED]                 Tel: 650-933-2177
http://www.sgi.com                Fax: 650-932-2177
PGP Key: 0x8408D65D
======================================================================

--- samba-2.2.7a/source/client/client.c	Wed Dec  4 09:16:34 2002
+++ samba-2.2.7a-fixed/source/client/client.c	Thu Dec 19 15:41:51 2002
@@ -92,9 +92,9 @@
 
 /* timing globals */
 off_t get_total_size = 0;
-int get_total_time_ms = 0;
+unsigned int get_total_time_ms = 0;
 off_t put_total_size = 0;
-int put_total_time_ms = 0;
+unsigned int put_total_time_ms = 0;
 
 /* totals globals */
 static double dir_total;
--- samba-2.2.7a/source/client/clitar.c	Tue Apr 30 06:26:18 2002
+++ samba-2.2.7a-fixed/source/client/clitar.c	Thu Dec 19 15:50:20 2002
@@ -45,10 +45,10 @@
 struct file_info_struct
 {
-  size_t size;
+  SMB_BIG_UINT size;
   uint16 mode;
-  int uid;
-  int gid;
+  uid_t uid;
+  gid_t gid;
   /* These times are normally kept in GMT */
   time_t mtime;
   time_t atime;
@@ -125,11 +125,11 @@
 int blocksize=20;
 int tarhandle;
 
-static void writetarheader(int f, char *aname, int size, time_t mtime,
+static void writetarheader(int f, char *aname, SMB_BIG_UINT size, time_t mtime,
 			   char *amode, unsigned char ftype);
 static void do_atar(char *rname,char *lname,file_info *finfo1);
 static void do_tar(file_info *finfo);
-static void oct_it(long value, int ndgs, char *p);
+static void oct_it(SMB_BIG_UINT value, int ndgs, char *p);
 static void fixtarname(char *tptr, char *fp, int l);
 static int dotarbuf(int f, char *b, int n);
 static void dozerobuf(int f, int n);
@@ -168,7 +168,7 @@
 /****************************************************************************
 Write a tar header to buffer
 ****************************************************************************/
-static void writetarheader(int f, char *aname, int size, time_t mtime,
+static void writetarheader(int f, char *aname, SMB_BIG_UINT size, time_t mtime,
 			   char *amode, unsigned char ftype)
 {
   union hblock hb;
@@ -175,7 +175,7 @@
   int i, chk, l;
   char *jp;
 
-  DEBUG(5, ("WriteTarHdr, Type = %c, Size= %i, Name = %s\n", ftype, size, aname));
+  DEBUG(5, ("WriteTarHdr, Type = %c, Size= %.0f, Name = %s\n", ftype, (double)size, aname));
 
   memset(hb.dummy, 0, sizeof(hb.dummy));
 
@@ -207,10 +207,10 @@
   hb.dbuf.name[NAMSIZ-1]='\0';
   safe_strcpy(hb.dbuf.mode, amode, strlen(amode));
-  oct_it(0L, 8, hb.dbuf.uid);
-  oct_it(0L, 8, hb.dbuf.gid);
-  oct_it((long) size, 13, hb.dbuf.size);
-  oct_it((long) mtime, 13, hb.dbuf.mtime);
+  oct_it((SMB_BIG_UINT)0, 8, hb.dbuf.uid);
+  oct_it((SMB_BIG_UINT)0, 8, hb.dbuf.gid);
+  oct_it((SMB_BIG_UINT) size, 13, hb.dbuf.size);
+  oct_it((SMB_BIG_UINT) mtime, 13, hb.dbuf.mtime);
   memcpy(hb.dbuf.chksum, "        ", sizeof(hb.dbuf.chksum));
   memset(hb.dbuf.linkname, 0, NAMSIZ);
   hb.dbuf.linkflag=ftype;
@@ -217,7 +217,7 @@
   for (chk=0, i=sizeof(hb.dummy), jp=hb.dummy; --i>=0;)
     chk+=(0xFF & *jp++);
 
-  oct_it((long) chk, 8, hb.dbuf.chksum);
+  oct_it((SMB_BIG_UINT) chk, 8, hb.dbuf.chksum);
   hb.dbuf.chksum[6] = '\0';
 
   (void) dotarbuf(f, hb.dummy, sizeof(hb.dummy));
@@ -450,7 +450,7 @@
 /****************************************************************************
 Convert from decimal to octal string
 ****************************************************************************/
-static void oct_it (long value, int ndgs, char *p)
+static void oct_it (SMB_BIG_UINT value, int ndgs, char *p)
 {
   /* Converts long to octal string, pads with leading zeros */
@@ -621,7 +621,7 @@
 static void do_atar(char *rname,char *lname,file_info *finfo1)
 {
   int fnum;
-  uint32 nread=0;
+  SMB_BIG_UINT nread=0;
   char ftype;
   file_info2 finfo;
   BOOL close_done = False;
@@ -643,6 +643,7 @@
     finfo.mtime = finfo1->mtime;
     finfo.atime = finfo1->atime;
     finfo.ctime = finfo1->ctime;
+    finfo.name = finfo1->name;
   } else {
     finfo.size = def_finfo.size;
@@ -652,13 +653,14 @@
     finfo.mtime = def_finfo.mtime;
     finfo.atime = def_finfo.atime;
     finfo.ctime = def_finfo.ctime;
+    finfo.name = def_finfo.name;
   }
 
   if (dry_run) {
-    DEBUG(3,("skipping file %s of size %d bytes\n",
+    DEBUG(3,("skipping file %s of size %12.0f bytes\n",
 	     finfo.name,
-	     (int)finfo.size));
+	     (double)finfo.size));
     shallitime=0;
     ttarf+=finfo.size + TBLOCK
Returning the size of the file to Clients
This was just brought up on the samba-vms list. Samba makes calls on behalf of the client to return a file size. The problem with this on OpenVMS is that some of the text file sizes include the record information. When these files are sent to the client, they are converted to a byte-stream format like UNIX uses. But this results in a file that is a slightly different size than the physical size of the file, usually smaller. Only some applications, such as WordPad, seem to be sensitive to this, as others use the amount of data transferred. It has been reported that WordPad adds garbage bytes to the end of the buffer for the difference.

The 2.2.4 port of Samba to OpenVMS solves this by reading the entire file in order to give the correct size. This, of course, creates a big performance hit when displaying a directory. Is there any way to differentiate between when the client is opening a file for an application and when a directory is being listed? I am also going to look to see if there is a more optimal way to calculate the size of these text files.

Thanks,
-John
[EMAIL PROTECTED]
Personal Opinion Only
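The mismatch being described can be sketched in a few lines. The sketch below is purely illustrative and makes an assumption: that each on-disk record carries a fixed 2-byte length prefix, while the byte stream the CIFS client receives is each record's payload plus a newline. (Real RMS record formats are more varied than this; the point is only that if per-record lengths are available, the client-visible size is a sum over record lengths and does not require transferring the data.)

```c
#include <stddef.h>

/* Assumed per-record bookkeeping overhead on disk, in bytes.
   Illustrative only - real RMS formats differ. */
#define RECORD_OVERHEAD 2

/* Size of the file as stored on disk: payload plus per-record overhead. */
size_t disk_size(const size_t *reclen, size_t nrecs)
{
	size_t i, total = 0;
	for (i = 0; i < nrecs; i++)
		total += RECORD_OVERHEAD + reclen[i];
	return total;
}

/* Size the client sees after stream conversion: each record's payload
   followed by a single '\n'.  With a 2-byte prefix per record this is
   always smaller than disk_size(), matching the behaviour reported. */
size_t stream_size(const size_t *reclen, size_t nrecs)
{
	size_t i, total = 0;
	for (i = 0; i < nrecs; i++)
		total += reclen[i] + 1;
	return total;
}
```

If the record lengths can be obtained from file metadata rather than by reading the data, a directory listing could report stream_size() without the read-the-whole-file penalty the 2.2.4 port currently pays.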