[jira] [Comment Edited] (HDFS-15315) IOException on close() when using Erasure Coding

2020-05-22 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113812#comment-17113812
 ] 

Zhao Yi Ming edited comment on HDFS-15315 at 5/22/20, 7:22 AM:
---

[~graypacket] which Solr version are you using? I tried Solr 8.5.1 and saved 
the indexes on HDFS with the EC policy XOR-2-1-1024k, and it worked well 
(indexed about 5000 docs). Also, I noticed that the trace

 
{code:java}
Caused by: java.io.IOException: Unable to close file because the last block 
does not have enough number of replicas.
at 
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2519)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2480)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2445)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
{code}
goes through DFSOutputStream. If EC were actually in use, the output stream 
should be a DFSStripedOutputStream.
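
A quick way to double-check this on the client side is a sketch like the 
following (the path is only a placeholder); for a file under an EC directory, 
the wrapped stream's class name should be DFSStripedOutputStream:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckStreamClass {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml / hdfs-site.xml from the classpath
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // /ec-dir/probe is a hypothetical path inside the EC-enabled directory
    try (FSDataOutputStream out = fs.create(new Path("/ec-dir/probe"))) {
      // Prints org.apache.hadoop.hdfs.DFSStripedOutputStream for an EC file;
      // a plain DFSOutputStream means the write is not striped
      System.out.println(out.getWrappedStream().getClass().getName());
      out.writeBytes("probe");
    }
  }
}
{code}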

Can you check the Solr Tomcat WEB-INF/lib or the 
/solr-webapp/webapp/WEB-INF/lib folder (I am not sure how your Solr web 
environment is laid out, so I list the two common locations here) and have a 
look at the Hadoop jars? Thanks!
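
Independently of the jars, here is a minimal sketch (assuming a Hadoop 3 
client; /solr/index is only a placeholder path) to confirm the effective EC 
policy on the index directory:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class CheckEcPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The cast is safe when fs.defaultFS points at an HDFS cluster
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("/").getFileSystem(conf);
    // /solr/index is a placeholder for the actual Solr index directory
    ErasureCodingPolicy policy =
        dfs.getErasureCodingPolicy(new Path("/solr/index"));
    System.out.println(
        policy == null ? "no EC policy (replicated)" : policy.getName());
  }
}
{code}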

 


> IOException on close() when using Erasure Coding
> 
>
> Key: HDFS-15315
> URL: https://issues.apache.org/jira/browse/HDFS-15315
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1, ec, hdfs
>Affects Versions: 3.1.1
> Environment: XOR-2-1-1024k policy on hadoop 3.1.1 with 3 datanodes
>Reporter: Anshuman Singh
>Assignee: Zhao Yi Ming
>Priority: Major
>
> When using an Erasure Coding policy on a directory, the replication factor is 
> set to 1. Solr fails in indexing documents with the error _java.io.IOException: 
> Unable to close file because the last block does not have enough number of 
> replicas._ It works fine without EC (with replication factor 3). It seems to 
> be identical to [HDFS-11486|https://issues.apache.org/jira/browse/HDFS-11486].
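
For context, a minimal sketch of the reported setup (paths are placeholders, 
and the XOR-2-1-1024k policy must already be enabled on the cluster); close() 
is the point at which the reported IOException surfaces:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class EcWriteRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("/").getFileSystem(conf);
    Path dir = new Path("/ec-test"); // placeholder directory
    dfs.mkdirs(dir);
    dfs.setErasureCodingPolicy(dir, "XOR-2-1-1024k");
    try (FSDataOutputStream out = dfs.create(new Path(dir, "probe"))) {
      out.writeBytes("hello");
    } // close() happens here and is where the error is reported
  }
}
{code}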






[jira] [Comment Edited] (HDFS-15315) IOException on close() when using Erasure Coding

2020-05-20 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112777#comment-17112777
 ] 

Zhao Yi Ming edited comment on HDFS-15315 at 5/21/20, 3:51 AM:
---

Assigning this to myself to have a try.


was (Author: zhaoyim):
Assign to me.



