Re: [freenet-support] Would someone please re-insert these files?

2005-08-12 Thread Conrad J. Sabatier

On 12-Aug-2005 Anonymous via Panta Rhei wrote:
 
 I've had a helluva time trying to d/l these and I'd be grateful for
 a re-insert by anyone who has them.
 
 File:  xpcorp.ISO
 Key:   [EMAIL PROTECTED],t0N0QCiz0-a~LL08UL4XtA
 Bytes: 512360448
 
 File:  WinXPPlus.ISO
 Key:   [EMAIL PROTECTED],gNGGF1n0TvCRjdzgtA-NRA
 Bytes:
 
 Thank you.

You may be interested in a pair of bash scripts I wrote to help with
large splitfile downloads.

The first, get_splitfile, will try indefinitely to fetch the minimum
number of blocks needed to decode a file, looping forever over the list
of data/check blocks that make up each segment until either the
required number has been retrieved or the user interrupts.
It will automatically spawn a number of background fetches equal to the
number of segments in the splitfile times two (one thread for data
blocks, one for check blocks).
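As a rough sketch of that spawning scheme (the function names `spawn_fetchers` and `fetch_segment_blocks` are illustrative, not the script's actual internals), each segment gets two background jobs:

```shell
#!/usr/local/bin/bash
# Illustrative sketch only: one background job per (segment, block type).
# spawn_fetchers and fetch_segment_blocks are hypothetical names.

spawn_fetchers()
{
    local segments=$1
    local jobs=0
    local i type

    for ((i = 0; i < segments; i++))
    do
        for type in data check
        do
            # the real get_splitfile would background a fetch loop here:
            #   fetch_segment_blocks ${i} ${type} &
            ((jobs++))
        done
    done

    echo ${jobs}
}

spawn_fetchers 3    # prints 6: two background jobs per segment
```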

Note that no actual decoding is done.  The idea is to get the raw
blocks into the local datastore, so that a later request via fproxy
will succeed more quickly and easily.

Note also that each successfully retrieved block is stored in a
directory *outside* of your actual freenet datastore, so a little
additional disk space will be required.

The second script, watch_splitfile, allows you to monitor the progress
of the first script's operation.

Read the header comments in each file for usage instructions and other
information.

Enjoy!

P.S. Toad, should we maybe add these to our scripts collection in CVS? 
:-)

-- 
Conrad J. Sabatier [EMAIL PROTECTED] -- In Unix veritas
#!/usr/local/bin/bash
#
# get_splitfile -- download an FEC splitfile
#
# usage: get_splitfile key output_filename
#
# Uses a separate thread for each of the splitfile's segments to improve
# throughput
#
# Requires: fcpget (freenet tools) for fetching metadata,
# fnget (zzed's freenet tools) for fetching the actual data blocks
#
# Note: doesn't actually decode the file, but just downloads the blocks so
# they'll (hopefully) be ready and available in the local datastore when a
# real attempt to download/decode the file is made via fproxy
#

HTL=$(awk -F'=' '/^%?maxHopsToLive=/ {print $2}' ../freenet/freenet.conf)

##
#
# Subroutine definitions precede main program
#
##

#
# usage - display correct command line usage on stderr
#

usage()
{
{
echo
echo "usage: get_splitfile key output_filename"
echo
} >&2
}


#
# number format conversion routines
#


#
# d2h - convert unsigned decimal to hex
#

d2h()
{
local d=$1
local h=
local nybble

while [ ${d} -ne 0 ]
do
nybble=$((d & 15))
case ${nybble} in
10) nybble=a;;
11) nybble=b;;
12) nybble=c;;
13) nybble=d;;
14) nybble=e;;
15) nybble=f;;
 *) nybble=${nybble};;
esac
h=${nybble}${h}
((d >>= 4))
done

echo ${h}
}

#
# h2d - convert hex string (without any leading 0x) to decimal
#

h2d()
{
local h=$1
local d=0
local nybble

while [ -n "${h}" ]
do
nybble=${h:0:1}
case ${nybble} in
a) nybble=10;;
b) nybble=11;;
c) nybble=12;;
d) nybble=13;;
e) nybble=14;;
f) nybble=15;;
*) ;;
esac
d=$(((d << 4) + nybble))
h=${h:1}
done

echo ${d}
}
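For what it's worth, the same conversions can be done with printf alone (the function names below are alternatives for comparison, not used elsewhere in the script): bash's printf understands %x, and accepts 0x-prefixed operands for %d.

```shell
# Alternative hex/decimal conversions using printf built-ins.
# Note: unlike d2h above, printf '%x' prints "0" for an input of 0.
d2h_printf() { printf '%x' "$1"; }
d2h_printf 255    # prints ff

h2d_printf() { printf '%d' "0x$1"; }
h2d_printf ff     # prints 255
```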

#
#
# Begin main program
#
#

# Check command line, exit if incorrect

if [ ${#} -ne 2 ]
then
usage
exit 1
fi

key=${1}

output_filename=${2}

# Import variables from spider
#
# Note: if not using spider, simply define the variables FCPGET (full path to
# freenet tools' fcpget program) and FCP_HOST (the address of the node to
# connect to)
#

. ${HOME}/freenet/spider/spider.vars

# Prepend freenet: to key if necessary

if [[ "${key#freenet:}" == "${key}" ]]
then
key=freenet:${key}
fi

# Replace slashes in key with colons to create filename (without leading
# freenet:) prefix for metadata file and segment files

filename_prefix=$(echo ${key:8} | sed -e 's#/#:#g')
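With a made-up key, the transformation above behaves like this (the key value is purely illustrative):

```shell
# ${key:8} drops the 8-character "freenet:" prefix; sed maps / to :
key='freenet:CHK@abcdef123,xyz/some.file'    # hypothetical key
echo ${key:8} | sed -e 's#/#:#g'             # prints CHK@abcdef123,xyz:some.file
```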

# Create a temporary directory for downloaded data

tempdir=${output_filename}.tmp
mkdir -p ${tempdir}

cd ${tempdir}

# Get the raw metadata for the file

metadata_file=${output_filename}.metadata

while [[ ! -s ${metadata_file} ]]
do
${FCPGET} -n ${FCP_HOST} -R -l ${HTL} -v 3 ${key} ${metadata_file} || exit 1
done

# Get the splitfile segment map info


Re: [freenet-support] Would someone please re-insert these files?

2005-08-12 Thread Conrad J. Sabatier
Oops, forgot to include a third, supporting script
(FECSegmentSplitFile, attached here).

On 12-Aug-2005 Conrad J. Sabatier wrote:
 
 You may be interested in a pair of bash scripts I wrote to help with
 large splitfile downloads.
 
 The first, get_splitfile, will try indefinitely to fetch the minimum
 number of blocks needed to decode a file, looping forever over the
 list of data/check blocks that make up each segment until either the
 required number has been successfully retrieved or a user interrupt. 
 It will automatically spawn a number of background fetches equal to
 the number of segments in the splitfile times two (one thread for
 data blocks, one for check blocks).
 
 Note that no actual decoding is done.  The idea is to get the raw
 blocks into the local datastore, so that a later request via fproxy
 will succeed more quickly and easily.
 
 Note also that each successfully retrieved block is stored in a
 directory *outside* of your actual freenet datastore, so a little
 additional disk space will be required.
 
 The second script, watch_splitfile, allows you to monitor the
 progress of the first script's operation.
 
 Read the header comments in each file for usage instructions and
 other information.
 
 Enjoy!
 
 P.S. Toad, should we maybe add these to our scripts collection in
 CVS? :-)

-- 
Conrad J. Sabatier [EMAIL PROTECTED] -- In Unix veritas
#!/usr/local/bin/bash
#
# FECSegmentSplitFile -- Do an FECSegmentSplitFile for the splitfile metadata
# contained in the file named in argument $1, save the file's segment info to
# individual header and blockmap files, using the output filename prefix named
# in argument $2
#
# usage: FECSegmentSplitFile metadata_filename filename_prefix
#

#
# usage - display correct command line usage on stderr
#

usage()
{
{
echo
echo "usage: FECSegmentSplitFile metadata_filename filename_prefix"
echo
} >&2
}

# Check command line, exit if incorrect

if [ ${#} -ne 2 ]
then
usage
exit 1
fi

metadata_file=$1
filename_prefix=$2

# BSD stat(1): %Xz prints the file's size in hex, as FCP's DataLength expects
metadata_size=$(stat -f %Xz ${metadata_file})

# Import variables from spider
#
# Note: if not using spider, simply define the variable FCP_HOST
# (the address of the node to connect to)
#

. ${HOME}/freenet/spider/spider.vars


# Open a connection to the node on file descriptor 3

exec 3<>/dev/tcp/${FCP_HOST}/8481

{
# Send the FCP message preamble to the node

echo -ne '\x00\x00\x00\x02'

# Send command to node

echo FECSegmentSplitFile
echo DataLength=${metadata_size}
echo Data
} >&3

# Send the data from the metadata file to the node

cat ${metadata_file} >&3

# Finish out the command

echo EndMessage >&3

# Read and save the node's response to segment files prefixed with
# output file name.
#
# Node's reply should be in the form of SegmentHeader/BlockMap pairs
#
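Concretely, the reply stream is a sequence of SegmentHeader/BlockMap message pairs, one pair per segment, along these lines (field names and values here are illustrative; the exact SegmentHeader fields come from the node's FCP implementation):

```text
SegmentHeader
FECAlgorithm=OnionFEC_a_1_2
BlockCount=80
CheckBlockCount=40
SegmentNum=0
EndMessage
BlockMap
Block.0=freenet:CHK@...
Block.1=freenet:CHK@...
Check.0=freenet:CHK@...
EndMessage
```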

i=0
exec 0<&3

while read -r
do
if [[ ${REPLY} != SegmentHeader ]]
then
exit 1
fi

cat /dev/null > ${filename_prefix}.segment.${i}.header

# read and save segment header

while read -r
do
if [[ ${REPLY} == EndMessage ]]
then
break
else
echo ${REPLY} >> ${filename_prefix}.segment.${i}.header
fi
done

read -r

if [[ ${REPLY} != BlockMap ]]
then
exit 1
fi

cat /dev/null > ${filename_prefix}.segment.${i}.blockmap

# read and save segment blockmap

while read -r
do
if [[ ${REPLY} == EndMessage ]]
then
break
else
echo ${REPLY} >> ${filename_prefix}.segment.${i}.blockmap
fi
done

sort -o ${filename_prefix}.segment.${i}.blockmap \
    ${filename_prefix}.segment.${i}.blockmap

((++i))

done

exit 0
Re: [freenet-support] Insertion speed

2005-08-12 Thread Matthew Toseland
110kB/sec is pretty high for freenet. Insertion means insertion INTO THE
NETWORK. This does not mean storing it on your local node, it means
copying it to many nodes across the network. At best it will be as fast
as your uplink.

On Thu, Aug 11, 2005 at 02:40:13AM +0530, Gautham Anil wrote:
 Hi,
 
 When I insert a big file into freenet, I see that file immediately being 
 copied to the store's temp folder. But the speed at which it copies is 
 excruciatingly slow (110KBps). Note that the original file, the store 
 and the browser are all there in a single P4 1.5GHz. Is this a limit 
 imposed by the browser or something funny with freenet?
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.


signature.asc
Description: Digital signature
___
Support mailing list
Support@freenetproject.org
http://news.gmane.org/gmane.network.freenet.support
Unsubscribe at http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/support
Or mailto:[EMAIL PROTECTED]

Re: [freenet-support] Would someone please re-insert these files?

2005-08-12 Thread Matthew Toseland
No, they are unnecessary. It is quite possible to set #retries to 5
in fproxy...

On Fri, Aug 12, 2005 at 06:22:11AM -0500, Conrad J. Sabatier wrote:
 
 On 12-Aug-2005 Anonymous via Panta Rhei wrote:
  
  I've had a helluva time trying to d/l these and i'd be gratefull for
  a re-insert by anyone who has them.
  
  File:  xpcorp.ISO
  Key:   [EMAIL PROTECTED],t0N0QCiz0-a~LL08UL4XtA
  Bytes: 512360448
  
  File:  WinXPPlus.ISO
  Key:   [EMAIL PROTECTED],gNGGF1n0TvCRjdzgtA-NRA
  Bytes:
  
  Thank you.
 
 You may be interested in a pair of bash scripts I wrote to help with
 large splitfile downloads.
 
 The first, get_splitfile, will try indefinitely to fetch the minimum
 number of blocks needed to decode a file, looping forever over the list
 of data/check blocks that make up each segment until either the
 required number has been successfully retrieved or a user interrupt. 
 It will automatically spawn a number of background fetches equal to the
 number of segments in the splitfile times two (one thread for data
 blocks, one for check blocks).
 
 Note that no actual decoding is done.  The idea is to get the raw
 blocks into the local datastore, so that a later request via fproxy
 will succeed more quickly and easily.
 
 Note also that each successfully retrieved block is stored in a
 directory *outside* of your actual freenet datastore, so a little
 additional disk space will be required.
 
 The second script, watch_splitfile, allows you to monitor the progress
 of the first script's operation.
 
 Read the header comments in each file for usage instructions and other
 information.
 
 Enjoy!
 
 P.S. Toad, should we maybe add these to our scripts collection in CVS? 
 :-)
 
 -- 
 Conrad J. Sabatier [EMAIL PROTECTED] -- In Unix veritas
