Re: [Amanda-users] Amanda 2.6.0p2 + S3 results in cURL errors

2008-09-23 Thread Dustin J. Mitchell
On Thu, Sep 11, 2008 at 5:32 PM, Dustin J. Mitchell [EMAIL PROTECTED] wrote:
 Oh, dear -- the sourceforge bug tracker is completely unused these
 days.  If there are others on the list who have submitted things to
 the sourceforge tracker, please post to the list.  I should find a way
 to hide that tracker.

As I commented on that tracker item, the fix is now in trunk at r1293.
I wrote a test script that writes a lot of small blocks, and observed
that it broke on unpatched trunk and works with the patch applied.

I don't know right now whether this will go into a 2.6.0p3 or just 2.6.1.

Dustin

-- 
Storage Software Engineer
http://www.zmanda.com


[Amanda-users] Amanda 2.6.0p2 + S3 results in cURL errors

2008-09-16 Thread moekyle

Your response sounds about right to me. I found that if I manually deleted the 
files, the list-keys issue went away, so it really did seem like a buffer-size 
issue. 

Once you have a patch, I will go back, clean out my extra files, and basically 
make the tapes clean. I did not dig very deep into the code; I found another 
call to the list-keys function, noticed the difference in prefix handling, and 
just tried it. Since that did fix my immediate issue, I stuck with it. 

I wasn't sure whether the sourceforge tracker was being used, which is why I 
posted my response here as well. I imagine anyone using the S3 device would 
hit this issue, but I have seen very few comments about it.

Thanks for the help and please post when the patch is available.

Derrick





[Amanda-users] Amanda 2.6.0p2 + S3 results in cURL errors

2008-09-11 Thread moekyle

I have been having this same issue for some time and finally found a fix.

Here is what I posted to the bug tracker in sourceforge.

While listing S3 keys: CURL error: Failed writing body (CURLcode 23)
taper: Error writing label DailySet1/010 to device

I got this error any time Amanda attempted to write a new backup to an
S3 slot that previously had data on it.

I found that if I change the s3_list_keys call in s3-device.c to match the
other s3_list_keys calls in the file, the issue goes away.

delete_file(S3Device *self,
            int file)
{
    gboolean result;
    GSList *keys;
    char *my_prefix = g_strdup_printf("%sf%08x-", self->prefix, file);

    result = s3_list_keys(self->s3, self->bucket, self->prefix, "-",
                          &keys);
    if (!result) {

I am not sure what the my_prefix is meant to do, but removing it from the
s3_list_keys call fixed my issue, and backups now work correctly.



So edit s3-device.c and change the line in the delete_file section to look 
like this: 
result = s3_list_keys(self->s3, self->bucket, self->prefix, "-",
                      &keys);

Once that is done, do a make all and a make install.

That should fix your issues.

Derrick





Re: [Amanda-users] Amanda 2.6.0p2 + S3 results in cURL errors

2008-09-11 Thread Dustin J. Mitchell
On Thu, Sep 11, 2008 at 2:12 PM, moekyle [EMAIL PROTECTED] wrote:
 Here is what I posted to the bug tracker in sourceforge.

Oh, dear -- the sourceforge bug tracker is completely unused these
days.  If there are others on the list who have submitted things to
the sourceforge tracker, please post to the list.  I should find a way
to hide that tracker.

 delete_file(S3Device *self,
             int file)
 {
     gboolean result;
     GSList *keys;
     char *my_prefix = g_strdup_printf("%sf%08x-", self->prefix, file);

     result = s3_list_keys(self->s3, self->bucket, self->prefix, "-",
                           &keys);
     if (!result) {
[snip]
 I am not sure what the my_prefix is meant to do, but removing it from the
 s3_list_keys call fixed my issue, and backups now work correctly.
 result = s3_list_keys(self->s3, self->bucket, self->prefix, "-",
                       &keys);

If you look at the documentation for s3_list_keys, you'll see that it
lists all strings matching PREFIX*DELIMITER*, but only including the
PREFIX*DELIMITER portion.  The S3 device names objects in a bucket
like
  slot-01f001e-filestart
  ..
  slot-01f001eb03ad.data
  slot-01f001eb03ae.data
  slot-01f001eb03af.data
  ..
The first object is the header, and the remainder are blocks of data,
where the 'b' in the middle is the border between the file number
(${FILENUM}, 0x1e in this case) and the block number within that file
(0x3ad..0x3af shown here).  'slot-01' here is the device prefix
(${DPFX}).  The pattern as present in the released source code looks
for all objects matching ${DPFX}f${FILENUM}*, and requests the full
name of each.  It then proceeds to delete each of those objects -- in
effect, deleting all data with the given file number.
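
In other words, the released delete_file lists the real keys for one file and
then deletes each of them.  Here is a minimal sketch of that behavior (not the
actual 2.6.0p2 source): it assumes the s3_list_keys(s3, bucket, prefix,
delimiter, &keys) call shown earlier fills a GLib GSList of key strings, and it
uses s3_delete_key() as a placeholder name for the per-object delete helper.

/* Sketch of the released delete_file behavior described above.
 * s3_delete_key() is a placeholder name for the per-object delete helper. */
static gboolean
delete_file_sketch(S3Device *self, int file)
{
    GSList *keys = NULL, *k;
    gboolean result;
    /* Matches ${DPFX}f${FILENUM}* -- the header and every data block
     * belonging to this file number. */
    char *my_prefix = g_strdup_printf("%sf%08x-", self->prefix, file);

    /* No delimiter: ask for the full name of every matching object. */
    result = s3_list_keys(self->s3, self->bucket, my_prefix, NULL, &keys);
    g_free(my_prefix);
    if (!result)
        return FALSE;

    for (k = keys; k != NULL; k = k->next) {
        /* Each element is a complete object name, so these deletes hit
         * keys that actually exist. */
        s3_delete_key(self->s3, self->bucket, (char *) k->data);
    }

    g_slist_foreach(keys, (GFunc) g_free, NULL);
    g_slist_free(keys);
    return TRUE;
}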

With your patch, you're asking for all files matching ${DPFX}*-*,
returning only the portion matching ${DPFX}*-.  This will match the
tapestart object for *all* files, but only return the first part of
the object name:
  slot-01f0001-
  slot-01f0002-
  slot-01f0003-
  ..
these keys do not exist, so the deletion should fail -- but perhaps
Amazon does not respond with an error when deleting a nonexistent
object?  Anyway, the end result is that you've effectively disabled
delete_file, so you are probably using more S3 storage than you need,
and will get old data intermingled with new data when you try to make
a recovery.  This avoids the error Lisa reported simply by masking the
problem.

Looking into it more deeply, I think I see the problem: s3_list_keys, or in
particular list_fetch, limits the response to 100k.  I'll need to dig into
this a little further, but I should have a patch out shortly.
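
(For background: CURLcode 23 is CURLE_WRITE_ERROR, which libcurl returns when
the write callback consumes fewer bytes than it was handed -- exactly what
happens once a response body outgrows a fixed-size buffer.  A write callback
that appends to a growable buffer avoids that cap; the sketch below uses
GLib's GString for illustration and is not the actual Amanda fix.)

/* Sketch only (not the actual Amanda fix): append response data to a growable
 * GString so the callback always consumes everything it is handed and never
 * triggers CURLE_WRITE_ERROR (23). */
#include <curl/curl.h>
#include <glib.h>

static size_t
append_body(void *ptr, size_t size, size_t nmemb, void *userdata)
{
    GString *body = (GString *) userdata;

    g_string_append_len(body, (const gchar *) ptr, size * nmemb);
    return size * nmemb;   /* report everything as consumed */
}

/* Typical setup (error checking omitted):
 *
 *   GString *body = g_string_new(NULL);
 *   curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, append_body);
 *   curl_easy_setopt(curl, CURLOPT_WRITEDATA, body);
 */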

-- 
Storage Software Engineer
http://www.zmanda.com