sunchao commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-755439216


   > some of the tests are parameterized to do test runs with/without dynamoDB. 
They shouldn't be run if the -Ddynamo option wasn't set, but what has 
inevitably happened is that regressions into the test runs have crept in and 
we've not noticed.
   
   I didn't specify the `-Ddynamo` option. The command I used is:
   ```
   mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
   ```
   
   I'm testing against my own S3A endpoint `s3a://sunchao/`, which is in 
us-west-1, and I just followed the doc to set up `auth-keys.xml`. I didn't modify 
`core-site.xml`.
   
   > BTW, does this mean your initial PR went in without running the ITests? 
   
   Unfortunately I did not run them; sorry, I was not aware of the test steps 
here (first time contributing to hadoop-aws). I'll try to remedy that in this 
PR. The test failures I got:
   ```
   [ERROR] Tests run: 24, Failures: 1, Errors: 16, Skipped: 0, Time elapsed: 20.537 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost
   [ERROR] testDeleteSingleFileInDir[raw-delete-markers](org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost)  Time elapsed: 2.036 s  <<< FAILURE!
   java.lang.AssertionError: operation returning after fs.delete(simpleFile)
     action_executor_acquired starting=0 current=0 diff=0,
     action_http_get_request starting=0 current=0 diff=0,
     action_http_head_request starting=4 current=5 diff=1,
     committer_bytes_committed starting=0 current=0 diff=0,
     committer_bytes_uploaded starting=0 current=0 diff=0,
     committer_commit_job starting=0 current=0 diff=0,
     committer_commits.failures starting=0 current=0 diff=0,
     committer_commits_aborted starting=0 current=0 diff=0,
     committer_commits_completed starting=0 current=0 diff=0,
     committer_commits_created starting=0 current=0 diff=0,
     committer_commits_reverted starting=0 current=0 diff=0,
     committer_jobs_completed starting=0 current=0 diff=0,
     committer_jobs_failed starting=0 current=0 diff=0,
     committer_magic_files_created starting=0 current=0 diff=0,
     committer_materialize_file starting=0 current=0 diff=0,
     committer_stage_file_upload starting=0 current=0 diff=0,
     committer_tasks_completed starting=0 current=0 diff=0,
     committer_tasks_failed starting=0 current=0 diff=0,
     delegation_token_issued starting=0 current=0 diff=0,
     directories_created starting=2 current=3 diff=1,
     directories_deleted starting=0 current=0 diff=0,
     fake_directories_created starting=0 current=0 diff=0,
     fake_directories_deleted starting=6 current=8 diff=2,
     files_copied starting=0 current=0 diff=0,
     files_copied_bytes starting=0 current=0 diff=0,
     files_created starting=1 current=1 diff=0,
     files_delete_rejected starting=0 current=0 diff=0,
     files_deleted starting=0 current=1 diff=1,
     ignored_errors starting=0 current=0 diff=0,
     multipart_instantiated starting=0 current=0 diff=0,
     multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0,
     multipart_upload_aborted starting=0 current=0 diff=0,
     multipart_upload_completed starting=0 current=0 diff=0,
     multipart_upload_part_put starting=0 current=0 diff=0,
     multipart_upload_part_put_bytes starting=0 current=0 diff=0,
     multipart_upload_started starting=0 current=0 diff=0,
     object_bulk_delete_request starting=3 current=4 diff=1,
     object_continue_list_request starting=0 current=0 diff=0,
     object_copy_requests starting=0 current=0 diff=0,
     object_delete_objects starting=6 current=9 diff=3,
     object_delete_request starting=0 current=1 diff=1,
     object_list_request starting=5 current=6 diff=1,
     object_metadata_request starting=4 current=5 diff=1,
     object_multipart_aborted starting=0 current=0 diff=0,
     object_multipart_initiated starting=0 current=0 diff=0,
     object_put_bytes starting=0 current=0 diff=0,
     object_put_request starting=3 current=4 diff=1,
     object_put_request_completed starting=3 current=4 diff=1,
     object_select_requests starting=0 current=0 diff=0,
     op_copy_from_local_file starting=0 current=0 diff=0,
     op_create starting=1 current=1 diff=0,
     op_create_non_recursive starting=0 current=0 diff=0,
     op_delete starting=0 current=1 diff=1,
     op_exists starting=0 current=0 diff=0,
     op_get_delegation_token starting=0 current=0 diff=0,
     op_get_file_checksum starting=0 current=0 diff=0,
     op_get_file_status starting=2 current=2 diff=0,
     op_glob_status starting=0 current=0 diff=0,
     op_is_directory starting=0 current=0 diff=0,
     op_is_file starting=0 current=0 diff=0,
     op_list_files starting=0 current=0 diff=0,
     op_list_located_status starting=0 current=0 diff=0,
     op_list_status starting=0 current=0 diff=0,
     op_mkdirs starting=2 current=2 diff=0,
     op_open starting=0 current=0 diff=0,
     op_rename starting=0 current=0 diff=0,
     s3guard_metadatastore_authoritative_directories_updated starting=0 current=0 diff=0,
     s3guard_metadatastore_initialization starting=0 current=0 diff=0,
     s3guard_metadatastore_put_path_request starting=0 current=0 diff=0,
     s3guard_metadatastore_record_deletes starting=0 current=0 diff=0,
     s3guard_metadatastore_record_reads starting=0 current=0 diff=0,
     s3guard_metadatastore_record_writes starting=0 current=0 diff=0,
     s3guard_metadatastore_retry starting=0 current=0 diff=0,
     s3guard_metadatastore_throttled starting=0 current=0 diff=0,
     store_io_request starting=0 current=0 diff=0,
     store_io_retry starting=0 current=0 diff=0,
     store_io_throttled starting=0 current=0 diff=0,
     stream_aborted starting=0 current=0 diff=0,
     stream_read_bytes starting=0 current=0 diff=0,
     stream_read_bytes_backwards_on_seek starting=0 current=0 diff=0,
     stream_read_bytes_discarded_in_abort starting=0 current=0 diff=0,
     stream_read_bytes_discarded_in_close starting=0 current=0 diff=0,
     stream_read_close_operations starting=0 current=0 diff=0,
     stream_read_closed starting=0 current=0 diff=0,
     stream_read_exceptions starting=0 current=0 diff=0,
     stream_read_fully_operations starting=0 current=0 diff=0,
     stream_read_opened starting=0 current=0 diff=0,
     stream_read_operations starting=0 current=0 diff=0,
     stream_read_operations_incomplete starting=0 current=0 diff=0,
     stream_read_seek_backward_operations starting=0 current=0 diff=0,
     stream_read_seek_bytes_discarded starting=0 current=0 diff=0,
     stream_read_seek_bytes_skipped starting=0 current=0 diff=0,
     stream_read_seek_forward_operations starting=0 current=0 diff=0,
     stream_read_seek_operations starting=0 current=0 diff=0,
     stream_read_seek_policy_changed starting=0 current=0 diff=0,
     stream_read_total_bytes starting=0 current=0 diff=0,
     stream_read_version_mismatches starting=0 current=0 diff=0,
     stream_write_block_uploads starting=0 current=0 diff=0,
     stream_write_block_uploads_aborted starting=0 current=0 diff=0,
     stream_write_block_uploads_committed starting=0 current=0 diff=0,
     stream_write_bytes starting=0 current=0 diff=0,
     stream_write_exceptions starting=0 current=0 diff=0,
     stream_write_exceptions_completing_upload starting=0 current=0 diff=0,
     stream_write_queue_duration starting=0 current=0 diff=0,
     stream_write_total_data starting=0 current=0 diff=0,
     stream_write_total_time starting=0 current=0 diff=0:
   object_delete_objects expected:<2> but was:<3>
   ```
   
   And it seems most of the failures are due to errors like the following:
   ```
   Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found: Table: sunchao not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: XXX; Proxy: null)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1828)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1412)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1374)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:5413)
       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:5380)
       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:2098)
       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:2063)
       at com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
       at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStoreTableManager.initTable(DynamoDBMetadataStoreTableManager.java:171)
       ... 23 more
   ```
   
   Not sure if I missed some steps in my test setup.
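   
   One guess (unverified): the stack trace shows the DynamoDB metadata store 
being initialized with a table named after the bucket ("sunchao"), so S3Guard 
seems to be getting enabled somewhere in my configuration even without 
`-Ddynamo`. If that's the cause, I assume either of these settings in the test 
configuration would avoid the `ResourceNotFoundException`, though I haven't 
confirmed which one the test setup expects:
   ```xml
   <configuration>
     <!-- option 1 (assumption): let S3Guard create the DynamoDB table on demand -->
     <property>
       <name>fs.s3a.s3guard.ddb.table.create</name>
       <value>true</value>
     </property>
     <!-- option 2 (assumption): run unguarded by disabling the metadata store -->
     <property>
       <name>fs.s3a.metadatastore.impl</name>
       <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
     </property>
   </configuration>
   ```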
   
   
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
