Re: [U2] File corrupt
Hi, I modified the findlarge bash script to check our UV files to ensure they are not 32-bit and > 1.8Gb - it is run via cron each day.

#!/bin/bash
#
# SCRIPT: findlarge
#
# AUTHOR: Randy Michael
# DATE: 11/30/2000
# REV: 1.0.A
#
# PURPOSE: This script is used to search for files that
# are larger than $1 Meg. Bytes. The search starts at
# the current directory that the user is in, `pwd`, and
# includes files in and below the user's current directory.
# The output is both displayed to the user and stored
# in a file for later review.
#
# REVISION LIST:
#
# DATE: 23/09/2009 - amended to run under bash (C.L., YOUI)
#       16/04/2009 - modified so only italerts receives messages
#                    when zero large files are found (C.L.)
#
# set -n # Uncomment to check syntax without ANY execution
# set -x # Uncomment to debug this script

function usage
{
/bin/echo -e "\n***"
/bin/echo -e "\n\nUSAGE: findlarge [Number_Of_Meg_Bytes] [optional email address for alert]"
/bin/echo -e "\nEXAMPLE: findlarge 5"
/bin/echo -e "\n\nWill Find Files Larger Than 5 Mb in, and Below, the Current Directory..."
/bin/echo -e "\n\nEXITING...\n"
/bin/echo -e "\n***"
exit
}

function cleanup
{
/bin/echo -e "\n"
/bin/echo -e "\n\nEXITING ON A TRAPPED SIGNAL..."
/bin/echo -e "\n\n\n"
exit
}

# Set a trap to exit. REMEMBER - CANNOT TRAP ON kill -9
trap 'cleanup' 1 2 3 15

# Check for the correct number of arguments and a number
# greater than zero

if [ $# -lt 1 ]
then
    usage
fi

if [ $1 -lt 1 ]
then
    usage
fi

# Check for supplied mail recipients

ITALERTS='itale...@youi.com.au'
if [ -n "$2" ]
then
    MAILTO="$2"
else
    MAILTO=$ITALERTS
fi

# Define and initialize files and variables here...

THISHOST=`/bin/hostname`              # Hostname of this machine
DATESTAMP=`/bin/date +"%h%d:%Y:%T"`
SEARCH_PATH=`/bin/pwd`                # Top level directory to search
MEG_BYTES=$1                          # Number of Mb for file size trigger
DATAFILE="/tmp/filesize_datafile.out" # Data storage file
>$DATAFILE                            # Initialize to a null file
OUTFILE="/tmp/largefiles.out"         # Output user file
>$OUTFILE                             # Initialize to a null file
HOLDFILE="/tmp/temp_hold_file.out"    # Temporary storage file
>$HOLDFILE                            # Initialize to a null file

# Prepare the output user file

/bin/echo -e "\n\nSearching for Files Larger Than ${MEG_BYTES}Mb starting in:"
/bin/echo -e "\n==> $SEARCH_PATH"
/bin/echo -e "\n\nPlease Standby for the Search Results..."
/bin/echo -e "\n\nLarge Files Search Results:" >> $OUTFILE
/bin/echo -e "\nHostname of Machine: $THISHOST" >> $OUTFILE
/bin/echo -e "\nTop Level Directory of Search:" >> $OUTFILE
/bin/echo -e "\n==> $SEARCH_PATH" >> $OUTFILE
/bin/echo -e "\nDate/Time of Search: `date`" >> $OUTFILE
#/bin/echo -e "\n\nSearch Results Sorted by Time:" >> $OUTFILE

# Search for files > $MEG_BYTES starting at the $SEARCH_PATH
# (find's -size takes bytes with the "c" suffix, so the Mb count
# is multiplied by 1,000,000)

/usr/bin/find $SEARCH_PATH -type f -size +${MEG_BYTES}000000c \
    -print > $HOLDFILE

# How many files were found?

NUMBER_OF_FILES=0

if [ -s $HOLDFILE ]
then
    #NUMBER_OF_FILES=`/bin/cat $HOLDFILE | wc -l`
    #/bin/echo -e "\n\nNumber of Files Found: ==> $NUMBER_OF_FILES\n\n" >> $OUTFILE
    # Append to the end of the Output File...
    ## C.L. YOUI
    /bin/ls -lt `/bin/cat $HOLDFILE` >> $OUTFILE
    for FNAME in `/bin/cat $HOLDFILE`
    do
        FTYPE=`/usr/bin/od -t x4 -An -N4 $FNAME`
        FTYPEBASE=`/usr/bin/expr substr $FTYPE 1 6`
        case $FTYPEBASE in
            acef01) FTYPE="[ UV 32bit headers ]"
                    FDETS=`/bin/ls -la $FNAME`
                    /bin/echo -e "\n$FTYPE$FDETS" >> $OUTFILE
                    NUMBER_OF_FILES=$(($NUMBER_OF_FILES + 1))
                    ;;
            acef02) FTYPE="[ UV 64bit headers ]"
                    ;;
            *)      FTYPE="[ Non-UV file ]"
                    ;;
        esac
    done
    ## ENDS C.L.
fi

/bin/cat $OUTFILE   # Show the header information plus any content

if [ $NUMBER_OF_FILES -gt 0 ]
then
    /bin/echo -e "\n\nNumber of Files Found: ==> $NUMBER_OF_FILES\n\n" >> $OUTFILE
    /bin/echo -e "\n\nNumber of Files Found: ==> $NUMBER_OF_FILES\n\n"
    /bin/echo -e "\n\nThese sea
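For anyone who wants just the UV header test without the rest of findlarge, here is a minimal standalone sketch of the same od-based check. The acef01/acef02 magic values are the ones the script above keys on; everything else (function name, demo file) is made up for illustration, and the demo bytes assume a little-endian host, since od's x4 word order is host-dependent.

```shell
#!/bin/sh
# Sketch: read the first 4 bytes of a file as one hex word and
# classify it by the UV magic number, as the findlarge loop does.
check_uv_header() {
    # -An: no offsets, -N4: first 4 bytes only, -t x4: one 32-bit hex word
    magic=`od -An -N4 -t x4 "$1" | tr -d ' '`
    case $magic in
        acef01*) echo "UV 32bit" ;;
        acef02*) echo "UV 64bit" ;;
        *)       echo "Non-UV"   ;;
    esac
}

# Fabricated demo file: these 4 bytes read back as acef0102 on a
# little-endian host (this is NOT a real UniVerse data file).
printf '\002\001\357\254' > /tmp/demo_header_check
check_uv_header /tmp/demo_header_check
```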
[U2] Unidata - Key Structure - Compound Keys or Sequential
Subject: Keys, Large Transaction Files

I just recently ran into an interesting phenomenon. I was working with a file with compound keys, and selects over a date range were atrocious. I copied the data to a new file using sequential keys, and the selects averaged 200-2000X faster (for the doubters, the actual numbers were something like 197X to 2070X, the second number being a second select run after the data was cached). The average key length on the compound-key file was 32 characters; the average sequential key was about 5 characters. The field selected on was a 'date' field, and it was indexed. The range of the select was 2 days.

It seems there is a key-size threshold in Unidata beyond which indexing performance collapses. Also, sequential keys hash the best. I managed a file with 80M records at another site and had no problems with file sizing or overflow.

Brad

___
U2-Users mailing list
U2-Users@listserver.u2ug.org
http://listserver.u2ug.org/mailman/listinfo/u2-users
Re: [U2] File corrupt
Hi Kevin,

I just wish this fact were more widely advertised - I would certainly have been checking for it if I had known. This issue came up a year or two back and I posted a comment that UV should be able to detect that it is heading into this situation before it corrupts the file. The response from one of the UV developers was that this is not possible. (Which makes it strange that we did it for QM, even though our limit is 16Tb, not 2Gb.)

Martin Phillips
Ladybridge Systems Ltd
17b Coldstream Lane, Hardingstone, Northampton, NN4 6DB
+44-(0)1604-709200
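Even if the database itself cannot detect the problem, an external approximation of the early warning Martin describes is straightforward: periodically flag 32-bit files whose size is approaching the 2Gb ceiling, before they cross it. A hedged sketch follows - the 90% margin and the demo path are arbitrary illustration choices, not UV internals.

```shell
#!/bin/sh
# Sketch: warn when a file's size is within 10% of the 2Gb limit
# that applies to 32-bit files. LIMIT is 2^31 bytes; the 90%
# warning margin is an arbitrary choice for this example.
LIMIT=2147483648
WARN=$((LIMIT / 10 * 9))   # warn at 90% of the limit

size_check() {
    size=`wc -c < "$1"`
    if [ "$size" -ge "$WARN" ]; then
        echo "WARNING: $1 is $size bytes, nearing the 2Gb limit"
    else
        echo "OK: $1 is $size bytes"
    fi
}

# Demo on a tiny file (a real deployment would loop over UV data files)
printf 'tiny' > /tmp/size_check_demo
size_check /tmp/size_check_demo
```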