Hello, I'm using fake-s3 to test "s3a://"-backed storage locally. This requires path-style access, which, as far as I can tell, cannot be enabled directly via configuration.
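For context, this is roughly how my setup looks as a core-site.xml fragment (a sketch of the two properties I describe below; "fakes3.localdomain" is my own local alias for the fake-s3 server, resolved via /etc/hosts):

```xml
<configuration>
  <!-- fake-s3 serves plain HTTP, so disable SSL for the s3a connector -->
  <property>
    <name>fs.s3a.connection.ssl.enabled</name>
    <value>false</value>
  </property>
  <!-- point s3a at the local fake-s3 instance instead of AWS -->
  <property>
    <name>fs.s3a.endpoint</name>
    <value>fakes3.localdomain:4567</value>
  </property>
</configuration>
```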
I'm aware of [HDFS-8727], which states that setting a custom endpoint switches to path-style access automatically. However, this is not working for me. I'm using Hadoop 2.7.1 with Spark 1.5.2, and I've set:

  fs.s3a.connection.ssl.enabled = false
  fs.s3a.endpoint = "fakes3.localdomain:4567"  # added to /etc/hosts

In the logs, I find:

[...]
15/11/24 11:54:49 DEBUG S3Signer: Calculated string to sign:
"HEAD
application/x-www-form-urlencoded; charset=utf-8
Tue, 24 Nov 2015 10:54:49 GMT
/bucket-name/"
15/11/24 11:54:49 DEBUG request: Sending Request: HEAD http://bucket-name.fakes3.localdomain:4567 / Headers: (Authorization: AWS XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX, Date: Tue, 24 Nov 2015 10:54:49 GMT, User-Agent: aws-sdk-java/1.7.4 Linux/3.16.0-4-amd64 OpenJDK_64-Bit_Server_VM/24.91-b01/1.7.0_91, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
15/11/24 11:54:49 DEBUG AmazonHttpClient: Retriable error detected, will retry in 20000ms, attempt number: 9
15/11/24 11:55:09 DEBUG PoolingClientConnectionManager: Connection request: [route: {}->http://bucket-name.fakes3.localdomain:4567][total kept alive: 0; route allocated: 0 of 15; total allocated: 0 of 15]
15/11/24 11:55:09 DEBUG PoolingClientConnectionManager: Connection leased: [id: 10][route: {}->http://bucket-name.fakes3.localdomain:4567][total kept alive: 0; route allocated: 1 of 15; total allocated: 1 of 15]
15/11/24 11:55:09 DEBUG DefaultClientConnection: Connection org.apache.http.impl.conn.DefaultClientConnection@4397caf1 closed
15/11/24 11:55:09 DEBUG DefaultClientConnection: Connection org.apache.http.impl.conn.DefaultClientConnection@4397caf1 shut down
15/11/24 11:55:09 DEBUG DefaultClientConnection: Connection org.apache.http.impl.conn.DefaultClientConnection@4397caf1 closed
15/11/24 11:55:09 DEBUG PoolingClientConnectionManager: Connection released: [id: 10][route: {}->http://bucket-name.fakes3.localdomain:4567][total kept alive: 0; route allocated: 0 of 15; total allocated: 0 of 15]
15/11/24 11:55:09 INFO AmazonHttpClient: Unable to execute HTTP request: bucket-name.fakes3.localdomain: Name or service not known

So the client still issues virtual-hosted-style requests to bucket-name.fakes3.localdomain, which does not resolve. Any hint as to what I'm doing wrong in my setup would be appreciated.

Best
Eike

[HDFS-8727] https://issues.apache.org/jira/browse/HDFS-8727

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
For additional commands, e-mail: user-h...@hadoop.apache.org