[jira] [Created] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-449:
-

 Summary: Add a NULL check to protect DeadNodeHandler#onMessage
 Key: HDDS-449
 URL: https://issues.apache.org/jira/browse/HDDS-449
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: LiXin Ge
Assignee: LiXin Ge


Add a NULL check to protect against the situation below (which may only happen 
in unit tests):
 1. A new datanode registers to SCM.
 2. No container has been allocated on the new datanode yet.
 3. The new datanode dies and an event is fired to {{DeadNodeHandler}}.
 4. In {{DeadNodeHandler#onMessage}}, the lookup in {{node2ContainerMap}} finds 
nothing, so {{containers}} is {{NULL}}.
 5. A NullPointerException is thrown in the subsequent iteration over 
{{containers}}:
{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 s 
<<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
[ERROR] 
testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  Time 
elapsed: 0.33 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
at 
org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{noformat}
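The guard itself is small. A self-contained sketch of the idea (the class, map, and container types below are simplified stand-ins for the real SCM types, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Simplified stand-in for DeadNodeHandler#onMessage: the map lookup may
// legitimately return null for a freshly registered datanode that died
// before any container was allocated to it.
public class DeadNodeGuardSketch {
  // Stand-in for node2ContainerMap: datanode UUID -> container IDs.
  static Map<UUID, Set<Long>> node2ContainerMap = new HashMap<>();

  // Returns how many containers were handled; 0 for an empty/unknown node.
  static int onDeadNode(UUID datanodeUuid) {
    Set<Long> containers = node2ContainerMap.get(datanodeUuid);
    if (containers == null) {
      // The proposed NULL check: nothing to re-replicate, just return.
      return 0;
    }
    int handled = 0;
    for (Long container : containers) {
      handled++;  // the real handler fires replication events here
    }
    return handled;
  }

  public static void main(String[] args) {
    // Without the guard, this call would hit the NullPointerException
    // shown in the stack trace above.
    System.out.println(onDeadNode(UUID.randomUUID()));  // prints 0
  }
}
```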



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-13 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-448:
-

 Summary: Move NodeStat to NodeStatemanager from SCMNodeManager.
 Key: HDDS-448
 URL: https://issues.apache.org/jira/browse/HDDS-448
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: LiXin Ge


This issue tries to make SCMNodeManager clear and clean, as the stat 
information should be kept by NodeStatemanager (NodeStateMap). It was also 
noted by [~nandakumar131] as a {{TODO}}.






[jira] [Created] (HDDS-368) all tests in TestOzoneRestClient failed due to "Unparseable date"

2018-08-22 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-368:
-

 Summary: all tests in TestOzoneRestClient failed due to 
"Unparseable date"
 Key: HDDS-368
 URL: https://issues.apache.org/jira/browse/HDDS-368
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: LiXin Ge


OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)

java version: 1.8.0_111

mvn: Apache Maven 3.3.9

Default locale: zh_CN, platform encoding: UTF-8

Test command: mvn test -Dtest=TestOzoneRestClient -Phdds

 
All the tests in TestOzoneRestClient fail on my local machine with an exception 
like:
{noformat}
[ERROR] 
testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) Time 
elapsed: 0.01 s <<< ERROR!
java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
Unparseable date: "m, 28 1970 19:23:50 GMT"
 at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
 at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
 at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
 at 
org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
date: "m, 28 1970 19:23:50 GMT"
at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
at 
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
... 39 more
{noformat}
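Note the default locale in the environment above is zh_CN. A plausible explanation (an assumption, not confirmed by this report) is a date formatter created without an explicit locale, so month/day names come out localized and the English-expecting parser fails. The class below only demonstrates the locale pitfall and its fix:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Demonstrates why HTTP dates should be formatted with an explicit Locale:
// under a zh_CN default locale, new SimpleDateFormat(PATTERN) emits Chinese
// month/day names that an English-expecting parser cannot read back.
public class HttpDateLocaleSketch {
  static final String PATTERN = "EEE, dd MMM yyyy HH:mm:ss zzz";

  public static void main(String[] args) throws ParseException {
    SimpleDateFormat us = new SimpleDateFormat(PATTERN, Locale.US);
    us.setTimeZone(TimeZone.getTimeZone("GMT"));
    String portable = us.format(new Date(0));
    System.out.println(portable);  // Thu, 01 Jan 1970 00:00:00 GMT
    // Round-trips on any machine, regardless of the default locale.
    System.out.println(us.parse(portable).getTime());  // 0
  }
}
```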






[jira] [Created] (HDDS-347) Ozone Integration Tests : testCloseContainerViaStandaAlone fails sometimes

2018-08-10 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-347:
-

 Summary: Ozone Integration Tests : 
testCloseContainerViaStandaAlone fails sometimes
 Key: HDDS-347
 URL: https://issues.apache.org/jira/browse/HDDS-347
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: LiXin Ge


This issue was found in the automatic Jenkins unit test of HDDS-265.


 The container life cycle is Open -> Closing -> Closed. This test submits the 
container close command and waits for the container state to become *not 
equal to open*. But even when that state condition (not equal to open) is 
satisfied, the container may still be in the process of closing, so the LOG 
which is printed after the container is closed sometimes cannot be found and 
the test fails.
{code:java|title=KeyValueContainer.java|borderStyle=solid}
try {
  writeLock();

  containerData.closeContainer();
  File containerFile = getContainerFile();
  // update the new container data to .container File
  updateContainerFile(containerFile);

} catch (StorageContainerException ex) {
{code}
Looking at the code above, the container state changes from CLOSING to CLOSED 
in the first step, while the remaining *updateContainerFile* may take hundreds 
of milliseconds. So even modifying the test to wait for the *CLOSED* state 
would not guarantee success either.


 There are two ways to fix this:
 1. Remove the half of the double check that depends on the LOG.
 2. If we have to preserve the double check, wait for the *CLOSED* state and 
then sleep for a while to let the LOG appear.


 Patch 000 is based on the second way.
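The second way boils down to polling with a timeout instead of a single check. A generic sketch (the helper below is illustrative; the real test would use the project's existing wait utilities):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

// Poll a condition until it holds or a timeout expires. Re-checking covers
// the window where the state is already CLOSED but updateContainerFile (and
// therefore the LOG line) has not finished yet.
public class WaitForSketch {
  static boolean waitFor(BooleanSupplier condition, long intervalMs,
      long timeoutMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        return false;
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    AtomicBoolean logAppeared = new AtomicBoolean(false);
    // Simulate the LOG line showing up a moment after the state change.
    new Thread(() -> {
      try {
        Thread.sleep(200);
      } catch (InterruptedException ignored) {
      }
      logAppeared.set(true);
    }).start();
    System.out.println(waitFor(logAppeared::get, 50, 5000));  // prints true
  }
}
```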






[jira] [Created] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-03-30 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-13376:
---

 Summary: TLS support error in Native Build of 
hadoop-hdfs-native-client
 Key: HDFS-13376
 URL: https://issues.apache.org/jira/browse/HDFS-13376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, documentation, native
Affects Versions: 3.1.0
Reporter: LiXin Ge


mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
-Pdist,native -DskipTests -Dtar
{noformat}
[exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
 [exec]   FATAL ERROR: The required feature thread_local storage is not 
supported by
 [exec]   your compiler.  Known compilers that support this feature: GCC, 
Visual
 [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
later).
 [exec]
 [exec]
 [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
 [exec] -- Configuring incomplete, errors occurred!
{noformat}

My environment:
Linux: Red Hat 4.4.7-3
cmake: 3.8.2
java: 1.8.0_131
gcc: 4.4.7
maven: 3.5.0

This seems to be caused by the low gcc version; I will report back after 
confirming it. Maybe {{BUILDING.txt}} needs an update to state the lowest 
supported gcc version.






[jira] [Created] (HDFS-13375) TLS support error in Native Build of hadoop-hdfs-native-client

2018-03-30 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-13375:
---

 Summary: TLS support error in Native Build of 
hadoop-hdfs-native-client
 Key: HDFS-13375
 URL: https://issues.apache.org/jira/browse/HDFS-13375
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, documentation, native
Affects Versions: 3.1.0
Reporter: LiXin Ge


mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
-Pdist,native -DskipTests -Dtar
{noformat}
[exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
 [exec]   FATAL ERROR: The required feature thread_local storage is not 
supported by
 [exec]   your compiler.  Known compilers that support this feature: GCC, 
Visual
 [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
later).
 [exec]
 [exec]
 [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
 [exec] -- Configuring incomplete, errors occurred!
{noformat}

My environment:
Linux: Red Hat 4.4.7-3
cmake: 3.8.2
java: 1.8.0_131
gcc: 4.4.7
maven: 3.5.0

This seems to be caused by the low gcc version; I will report back after 
confirming it. Maybe {{BUILDING.txt}} needs an update to state the lowest 
supported gcc version.






[jira] [Created] (HDFS-13192) change the code order to avoid unnecessary call of assignment

2018-02-23 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-13192:
---

 Summary: change the code order to avoid unnecessary call of 
assignment
 Key: HDFS-13192
 URL: https://issues.apache.org/jira/browse/HDFS-13192
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: encryption
Affects Versions: 3.1.0
Reporter: LiXin Ge


The assignment of {{version}}, {{suite}} and {{keyName}} should happen lazily, 
right before they are used, in case {{fileXAttr}} is *null*.
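A self-contained illustration of the reordering (the variable names mirror the issue, but the types and defaults here are generic stand-ins, not the real encryption-zone code):

```java
import java.util.HashMap;
import java.util.Map;

// Before: version/suite/keyName were assigned before the null check, doing
// wasted work on the path where fileXAttr is null. After: assign them lazily,
// right before first use, behind the check.
public class LazyAssignSketch {
  static String describe(Map<String, String> fileXAttr) {
    if (fileXAttr == null) {
      return null;  // early return: the three values are never computed
    }
    // Assignments moved here, just before they are used.
    String version = fileXAttr.getOrDefault("version", "1");
    String suite = fileXAttr.getOrDefault("suite", "AES/CTR/NoPadding");
    String keyName = fileXAttr.getOrDefault("keyName", "key0");
    return version + ":" + suite + ":" + keyName;
  }

  public static void main(String[] args) {
    System.out.println(describe(null));            // null
    System.out.println(describe(new HashMap<>())); // 1:AES/CTR/NoPadding:key0
  }
}
```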






[jira] [Created] (HDFS-13087) Fix: Snapshots On encryption zones get incorrect EZ settings when encryption zone changes

2018-01-30 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-13087:
---

 Summary: Fix: Snapshots On encryption zones get incorrect EZ 
settings when encryption zone changes
 Key: HDFS-13087
 URL: https://issues.apache.org/jira/browse/HDFS-13087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 3.1.0
Reporter: LiXin Ge


Snapshots are supposed to be immutable and read-only, so the EZ settings 
within a snapshot path shouldn't change when the original encryption zone 
changes.






[jira] [Created] (HDFS-13086) Add a field about exception type to BlockOpResponseProto

2018-01-30 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-13086:
---

 Summary: Add a field about exception type to BlockOpResponseProto
 Key: HDFS-13086
 URL: https://issues.apache.org/jira/browse/HDFS-13086
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: LiXin Ge
Assignee: LiXin Ge


When a user re-reads a file via short-circuit reads, it may run into unknown 
errors because the file has been appended after the first read (which changes 
its meta file), or the file has been moved away by the balancer.

Such unknown errors unnecessarily disable short-circuit reads for 10 minutes. 
HDFS-12528 made the {{expireAfterWrite}} of {{DomainSocketFactory$pathMap}} 
configurable, giving users the choice to never disable the domain socket.

We can go a step further and add a field about the exception type to 
BlockOpResponseProto, so that we can ignore the acceptable FNFE and set an 
appropriate disable time for the unacceptable exceptions when different types 
of exceptions happen in the same cluster.
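Once the response carries an exception type, the client-side policy could look like the sketch below (the enum and the durations are hypothetical, chosen only to show the shape of the decision, not the actual protobuf change):

```java
// Hypothetical client-side policy after BlockOpResponseProto gains an
// exception-type field: a FileNotFoundException caused by append/balancer
// movement is acceptable and shouldn't disable short-circuit reads at all.
public class ScrDisablePolicySketch {
  enum ExceptionKind { FILE_NOT_FOUND, OTHER }

  // Minutes to disable short-circuit reads for this path; 0 keeps SCR on.
  static int disableMinutes(ExceptionKind kind) {
    switch (kind) {
      case FILE_NOT_FOUND:
        return 0;   // expected after append or balancer move: ignore
      default:
        return 10;  // unknown error: keep today's 10-minute backoff
    }
  }

  public static void main(String[] args) {
    System.out.println(disableMinutes(ExceptionKind.FILE_NOT_FOUND));  // 0
    System.out.println(disableMinutes(ExceptionKind.OTHER));           // 10
  }
}
```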






[jira] [Created] (HDFS-13021) Incorrect storage policy of snapshot file was returned by getStoragePolicy command

2018-01-15 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-13021:
---

 Summary: Incorrect storage policy of snapshot file was returned 
by getStoragePolicy command
 Key: HDFS-13021
 URL: https://issues.apache.org/jira/browse/HDFS-13021
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, snapshots
Affects Versions: 3.1.0
Reporter: LiXin Ge
Assignee: LiXin Ge


Snapshots are supposed to be immutable and read-only, so the status of a file 
in a snapshot path shouldn't follow the original file's changes.

The current StoragePolicy behavior under snapshots is effectively a bug.

---

Reproduction: operations on the snapshottable dir {{/storagePolicy}}

*before making the snapshot:*

{code:java}

 [bin]# hdfs storagepolicies -setStoragePolicy -path /storagePolicy -policy 
PROVIDED
 Set storage policy PROVIDED on /storagePolicy

 [bin]# hadoop fs -put /home/file /storagePolicy/file_PROVIDED

 [bin]# hdfs storagepolicies -getStoragePolicy -path 
/storagePolicy/file_PROVIDED
 The storage policy of /storagePolicy/file_PROVIDED:
 BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], 
replicationFallbacks=[ARCHIVE]}

{code}

*make the snapshot and check:*

{code:java}

[bin]# hdfs dfs -createSnapshot /storagePolicy s3_PROVIDED
Created snapshot /storagePolicy/.snapshot/s3_PROVIDED

[bin]# hdfs storagepolicies -getStoragePolicy -path 
/storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED
The storage policy of /storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED:
BlockStoragePolicy{PROVIDED:1, storageTypes=[PROVIDED, DISK], 
creationFallbacks=[PROVIDED, DISK], replicationFallbacks=[PROVIDED, DISK]} 

{code}

*change the StoragePolicy and check again:*
{code:java}
[bin]# hdfs storagepolicies -setStoragePolicy -path /storagePolicy -policy HOT
Set storage policy HOT on /storagePolicy

[bin]# hdfs storagepolicies -getStoragePolicy -path 
/storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED
The storage policy of /storagePolicy/.snapshot/s3_PROVIDED/file_PROVIDED:
BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], 
replicationFallbacks=[ARCHIVE]}    <-- it shouldn't be HOT
{code}






[jira] [Created] (HDFS-12094) Log torrent when non-ISA-L EC is used.

2017-07-06 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-12094:
---

 Summary: Log torrent when non-ISA-L EC is used.
 Key: HDFS-12094
 URL: https://issues.apache.org/jira/browse/HDFS-12094
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-beta1
Reporter: LiXin Ge
Assignee: LiXin Ge
Priority: Minor


My Hadoop is built without ISA-L support. After the EC policy is enabled, 
whenever I get/put a directory which contains many files, the warning logs 
below spam the screen!
This is unfriendly and hurts performance. Since we have come to the beta 
version now, these per-file logs should be dropped; a one-time warning log 
instead of the full exception would be much better.
{quote}
2017-07-06 15:42:41,398 WARN erasurecode.CodecUtil: Failed to create raw 
erasure encoder xor_native, fallback to next codec if possible
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawEncoder
at 
org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawErasureCoderFactory.createEncoder(NativeXORRawErasureCoderFactory.java:35)
at 
org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:177)
at 
org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:129)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.(DFSStripedOutputStream.java:302)
at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:309)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1216)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1195)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1133)
...
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)

Caused by: java.lang.RuntimeException: libhadoop was built without ISA-L support
at 
org.apache.hadoop.io.erasurecode.ErasureCodeNative.checkNativeCodeLoaded(ErasureCodeNative.java:69)
at 
org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawDecoder.(NativeXORRawDecoder.java:33)
... 25 more
{quote}
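The one-time warning could be as simple as a compare-and-set flag around the existing log call (a sketch of the intent; the flag and counter here are illustrative, not the actual patch):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Emit the "falling back from native coder" warning exactly once per process
// instead of once per file, however many streams are opened.
public class WarnOnceSketch {
  static final AtomicBoolean WARNED = new AtomicBoolean(false);
  static int warningsEmitted = 0;

  static void createRawEncoder() {
    boolean nativeAvailable = false;  // stands in for the ISA-L check
    if (!nativeAvailable && WARNED.compareAndSet(false, true)) {
      warningsEmitted++;  // the real code would LOG.warn(...) here, once
    }
    // ... fall back to the pure-Java coder for this and later calls ...
  }

  public static void main(String[] args) {
    for (int i = 0; i < 100; i++) {
      createRawEncoder();  // e.g. one call per file in a large directory
    }
    System.out.println(warningsEmitted);  // prints 1, not 100
  }
}
```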






[jira] [Created] (HDFS-11929) Fix: incorrect usage of hdfs oiv_legacy

2017-06-05 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-11929:
---

 Summary: Fix: incorrect usage of hdfs oiv_legacy
 Key: HDFS-11929
 URL: https://issues.apache.org/jira/browse/HDFS-11929
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-alpha3
Reporter: LiXin Ge
Assignee: LiXin Ge
Priority: Minor
 Fix For: 3.0.0-alpha4


The usage text of hdfs oiv_legacy misses one processor named NameDistribution, 
which is actually available and mentioned in the processors section.






[jira] [Created] (HDFS-11928) Segment overflow in FileDistributionCalculator

2017-06-05 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-11928:
---

 Summary: Segment overflow in FileDistributionCalculator
 Key: HDFS-11928
 URL: https://issues.apache.org/jira/browse/HDFS-11928
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-alpha3
Reporter: LiXin Ge
Assignee: LiXin Ge
 Fix For: 3.0.0-alpha4


When running the hdfs oiv command to analyse an fsimage file with the 
FileDistribution processor, the range segments of file size get overflowed:
{quote}
(1.98 GB, 1.98 GB]  2
(1.98 GB, 1.99 GB]  4
(1.99 GB, -2 GB]7
(2 GB, -1.99 GB]7
(2.02 GB, -1.98 GB] 2
(2.02 GB, -1.97 GB] 9
(2.03 GB, -1.96 GB] 5
(2.04 GB, -1.95 GB] 11
(2.05 GB, -1.95 GB] 4
{quote}
This patch fixes the problem.
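Labels like "(1.99 GB, -2 GB]" are the signature of 32-bit overflow: a bucket boundary around 2 GB computed in int arithmetic wraps negative. A self-contained illustration (the 16 MB step is made up for the demo, not the calculator's actual configuration):

```java
// Reproduces the overflow pattern: bucket * STEP in int arithmetic wraps past
// Integer.MAX_VALUE (just under 2 GB); widening to long before multiplying
// keeps the boundary positive.
public class BucketOverflowSketch {
  static final int STEP = 16 * 1024 * 1024;  // 16 MB per bucket (illustrative)

  static long boundaryInt(int bucket) {
    return bucket * STEP;         // int multiply overflows, then widens
  }

  static long boundaryLong(int bucket) {
    return (long) bucket * STEP;  // widen first: correct boundary
  }

  public static void main(String[] args) {
    int bucket = 129;  // 129 * 16 MB = 2,164,260,864 bytes > 2 GB
    System.out.println(boundaryInt(bucket));   // -2130706432 (wrapped)
    System.out.println(boundaryLong(bucket));  // 2164260864
  }
}
```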






[jira] [Created] (HDFS-11927) hdfs oiv_legacy:Normalize the verification of input parameter

2017-06-05 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-11927:
---

 Summary: hdfs oiv_legacy:Normalize the verification of input 
parameter
 Key: HDFS-11927
 URL: https://issues.apache.org/jira/browse/HDFS-11927
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-alpha3
Reporter: LiXin Ge
Assignee: LiXin Ge
 Fix For: 3.0.0-alpha4


At present, the hdfs oiv_legacy tool lacks verification of its input 
parameters. People can type in an irrelevant option like:
bq. ./hdfs oiv_legacy -i fsimage_000 -o out -p XML -step 1024
or type in an option with a wrong format which they think takes effect but 
actually doesn't:
bq. ./hdfs oiv_legacy -i fsimage_000 -o out -p FileDistribution 
maxSize 4096 step 512 format
or some meaningless words which also get through:
bq. ./hdfs oiv_legacy -i fsimage_000 -o out -p XML Hello World

We'd better not let these cases go unchecked. 
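The last two cases boil down to leftover tokens that no option consumed. A simplified sketch of that half of the check (the hand-rolled parsing below is a stand-in for the tool's real option handling; a strict tool would additionally reject unknown dash options like {{-step}}):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Collect positional tokens that no recognized option consumed; a strict
// tool would print usage and exit non-zero when this list is non-empty.
public class LeftoverArgsSketch {
  static List<String> leftovers(String[] args, Set<String> optionsWithValue) {
    List<String> leftover = new ArrayList<>();
    for (int i = 0; i < args.length; i++) {
      if (args[i].startsWith("-")) {
        if (optionsWithValue.contains(args[i])) {
          i++;  // skip this option's value
        }
      } else {
        leftover.add(args[i]);  // bare token: nothing consumed it
      }
    }
    return leftover;
  }

  public static void main(String[] args) {
    Set<String> opts = new HashSet<>(Arrays.asList("-i", "-o", "-p"));
    String[] cmd = {"-i", "fsimage.example", "-o", "out", "-p", "XML",
        "Hello", "World"};
    System.out.println(leftovers(cmd, opts));  // [Hello, World]
  }
}
```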






[jira] [Created] (HDFS-11925) HDFS oiv:Normalize the verification of input parameter

2017-06-04 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-11925:
---

 Summary: HDFS oiv:Normalize the verification of input parameter
 Key: HDFS-11925
 URL: https://issues.apache.org/jira/browse/HDFS-11925
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-alpha3
Reporter: LiXin Ge
Assignee: LiXin Ge
 Fix For: 3.0.0-alpha4


At present, the hdfs oiv tool lacks verification of its input parameters. 
People can type in an irrelevant option like:
bq. ./hdfs oiv -i fsimage_000 -p XML -step 1024
or type in an option with a wrong format which they think takes effect but 
actually doesn't:
bq. ./hdfs oiv -i fsimage_000 -p FileDistribution maxSize 4096 
step 512 format
or some meaningless words which also get through:
bq. ./hdfs oiv -i fsimage_000 -p XML Hello Han Meimei

We'd better not let these cases go unchecked. 






[jira] [Created] (HDFS-11840) Log HDFS Mover exception message of exit to its own log

2017-05-17 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-11840:
---

 Summary: Log HDFS Mover exception message of exit to its own log
 Key: HDFS-11840
 URL: https://issues.apache.org/jira/browse/HDFS-11840
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Affects Versions: 3.0.0-alpha2
Reporter: LiXin Ge
Assignee: LiXin Ge
Priority: Minor
 Fix For: 3.0.0-alpha2


Currently, the exception message explaining why the Mover exited is logged 
only to stderr. It is hard to figure out why the Mover was aborted once we 
lose the console output; it would be much better if we also logged this to the 
Mover log.






[jira] [Created] (HDFS-11765) Fix:Performance regression due to incorrect use of DataChecksum

2017-05-08 Thread LiXin Ge (JIRA)
LiXin Ge created HDFS-11765:
---

 Summary: Fix:Performance regression due to incorrect use of 
DataChecksum
 Key: HDFS-11765
 URL: https://issues.apache.org/jira/browse/HDFS-11765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native, performance
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: LiXin Ge
 Fix For: 3.0.0-alpha2


Recently I upgraded my Hadoop version from 2.6 to 3.0 and found that write 
performance decreased by 13%. After some days of comparative analysis, it 
seems to have been introduced by HADOOP-10865. 
Since James Thomas has done the work that lets the native checksum run against 
byte[] arrays instead of just byte buffers, we may prefer the native method 
because it runs faster than the others. [~szetszwo] and [~iwasakims], could 
you take a look to see if it has a bad effect on your benchmark tests? 
[~tlipcon], could you help check whether I have made mistakes in this patch?
Thanks!


