[ https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HADOOP-16180:
---------------------------------
    Description: 
How to reproduce:
{code:bash}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 -Dhadoop.version=2.8.0

{code}
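The Spark build is not essential to the failure; it boils down to converting a File whose name contains non-ASCII characters into a java.nio.file.Path while the JVM runs under the POSIX locale. A minimal sketch (the path below is illustrative, not the one from the Spark test):
{code:java}
import java.io.File;

// Minimal reproduction sketch (hypothetical path, not the Spark test's).
// Compile normally, then run under the POSIX locale, e.g.:
//   LC_ALL=POSIX java Repro
// The JVM derives sun.jnu.encoding from the locale at startup, so the
// non-ASCII name (\u5C3C is the 尼 seen in the failing partition directory)
// becomes unmappable and File#toPath throws
// java.nio.file.InvalidPathException: Malformed input or input contains
// unmappable characters.
public class Repro {
  public static void main(String[] args) {
    File f = new File("/tmp/warehouse/tab1/\u5C3C=2");
    System.out.println(f.toPath());
  }
}
{code}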
Stack trace:
{noformat}
Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters: /home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/tab1/尼=2
        at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
        at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
        at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
        at java.io.File.toPath(File.java:2234)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
        at org.apache.hadoop.hive.io.HdfsUtils$HadoopFileStatus.<init>(HdfsUtils.java:211)
        at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:3122)
        at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3478)
        at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1650)
        at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1579)
        at sun.reflect.GeneratedMethodAccessor209.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.sql.hive.client.Shim_v2_1.loadPartition(HiveShim.scala:1145)
        at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:788)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:287)
        at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:225)
        at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:224)
        at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:270)
        at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:778)
        at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:885)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99){noformat}
[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveCatalogedDDLSuite/basic_DDL_using_locale_tr___caseSensitive_true/]

[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveDDLSuite/create_Hive_serde_table_and_view_with_unicode_columns_and_comment/]

 

This worked before https://issues.apache.org/jira/browse/HADOOP-12045. Judging from the stack trace, DeprecatedRawLocalFileStatus#getLastAccessTime now converts the File to a java.nio.file.Path, and UnixPath encodes the path name with the JVM's locale-derived charset, which cannot represent 尼 under the POSIX locale.
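For context, the failing frame presumably does something along these lines (an approximation based on the stack trace, not the exact Hadoop source):
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.attribute.BasicFileAttributes;

// Sketch of the kind of call DeprecatedRawLocalFileStatus#getLastAccessTime
// makes after HADOOP-12045 (approximation, not the actual Hadoop code).
// f.toPath() is where InvalidPathException is thrown when the locale-derived
// charset cannot encode the file name.
public class AccessTimeSketch {
  static long lastAccessTime(File f) throws IOException {
    return Files.readAttributes(f.toPath(), BasicFileAttributes.class)
        .lastAccessTime()
        .toMillis();
  }
}
{code}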

We can work around it by switching to a UTF-8 locale:
{code:bash}
export LANG=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_NAME="en_US.UTF-8"
export LC_ADDRESS="en_US.UTF-8"
export LC_TELEPHONE="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_IDENTIFICATION="en_US.UTF-8"
{code}
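To confirm which encodings the JVM actually picked up before re-running the test, the standard JDK system properties can be printed (sun.jnu.encoding appears to be the charset used for path names, matching the UnixPath.encode frame above):
{code:java}
import java.nio.charset.Charset;

// Quick sanity check of the charsets the JVM derived from the locale.
// Under LANG=/LC_*=POSIX these typically report ANSI_X3.4-1968 (ASCII);
// under en_US.UTF-8 they report UTF-8.
public class LocaleCheck {
  public static void main(String[] args) {
    System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
    System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
    System.out.println("defaultCharset() = " + Charset.defaultCharset());
  }
}
{code}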

> LocalFileSystem throws Malformed input or input contains unmappable characters
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-16180
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16180
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.8.0, 3.2.0
>            Reporter: Yuming Wang
>            Priority: Major


