Re: [jira] [Created] (HDFS-6902) FileWriter should be closed in finally block in BlockReceiver#receiveBlock()

2014-08-20 Thread vlab

Unless you need 'out' later, you could write:
FileWriter out(restartMeta);
Then, when exiting the try block, 'out' will go out of scope.

I assume the FileWriter that is created is delete'd elsewhere
(else there is a memory leak). (But then, this code snippet could be 
Java, which can be messy.)
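
If the snippet is Java (it is, being HDFS code), the closest analogue to that
RAII scoping is try-with-resources (Java 7+). A minimal sketch, reusing the
names from the quoted report below:

try (FileWriter out = new FileWriter(restartMeta)) {
    // close() runs even if write() throws, and close() also flushes
    out.write(Long.toString(Time.now() + restartBudget));
} catch (IOException ioe) {
    // handle the failure to persist the restart deadline
}

No explicit flush()/close() is needed; close() is invoked automatically when
the try block exits.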


On 8/20/2014 8:50 PM, Ted Yu (JIRA) wrote:

Ted Yu created HDFS-6902:


  Summary: FileWriter should be closed in finally block in 
BlockReceiver#receiveBlock()
  Key: HDFS-6902
  URL: https://issues.apache.org/jira/browse/HDFS-6902
  Project: Hadoop HDFS
   Issue Type: Bug
 Reporter: Ted Yu
 Priority: Minor


Here is code starting from line 828:
{code}
 try {
   FileWriter out = new FileWriter(restartMeta);
   // write out the current time.
   out.write(Long.toString(Time.now() + restartBudget));
   out.flush();
   out.close();
 } catch (IOException ioe) {
{code}
If the write() or flush() call throws an IOException, out won't be closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)





[jira] [Created] (HDFS-6902) FileWriter should be closed in finally block in BlockReceiver#receiveBlock()

2014-08-20 Thread Ted Yu (JIRA)
Ted Yu created HDFS-6902:


 Summary: FileWriter should be closed in finally block in 
BlockReceiver#receiveBlock()
 Key: HDFS-6902
 URL: https://issues.apache.org/jira/browse/HDFS-6902
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Here is code starting from line 828:
{code}
try {
  FileWriter out = new FileWriter(restartMeta);
  // write out the current time.
  out.write(Long.toString(Time.now() + restartBudget));
  out.flush();
  out.close();
} catch (IOException ioe) {
{code}
If the write() or flush() call throws an IOException, out won't be closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6901) Remove unnecessary CryptoCodec and KeyProvider.Options definition in FSNamesystem

2014-08-20 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6901:


 Summary: Remove unnecessary CryptoCodec and KeyProvider.Options 
definition in FSNamesystem
 Key: HDFS-6901
 URL: https://issues.apache.org/jira/browse/HDFS-6901
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


CryptoCodec and KeyProvider.Options are not necessary in FSN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6900) Eliminate DU thread per block pool slice

2014-08-20 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6900:
---

 Summary: Eliminate DU thread per block pool slice
 Key: HDFS-6900
 URL: https://issues.apache.org/jira/browse/HDFS-6900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.5.0
Reporter: Arpit Agarwal


We use one DU thread per block pool slice to compute disk usage information. In 
addition to the thread overhead, this results in the disk usage information 
being out of date for up to 10 minutes at a time. We could refresh it more 
frequently, but then we'd be launching a shell command per block pool slice even 
more often.
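
One possible direction, sketched below with hypothetical names: maintain the 
usage counter incrementally as replicas are added and removed, so no periodic 
shell-out is needed at all.
{code}
// Hypothetical sketch: adjust an in-memory counter on replica add/delete
// instead of running 'du' every few minutes per block pool slice.
class BlockPoolSliceUsage {
  private final java.util.concurrent.atomic.AtomicLong used =
      new java.util.concurrent.atomic.AtomicLong();

  void replicaAdded(long bytesOnDisk)   { used.addAndGet(bytesOnDisk); }
  void replicaDeleted(long bytesOnDisk) { used.addAndGet(-bytesOnDisk); }

  long getDfsUsed() { return used.get(); }
}
{code}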



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6899) Allow changing the capacity of a storage volume for testing

2014-08-20 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6899:
---

 Summary: Allow changing the capacity of a storage volume for 
testing
 Key: HDFS-6899
 URL: https://issues.apache.org/jira/browse/HDFS-6899
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, test
Affects Versions: 2.5.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


It would be useful to limit the capacity of individual storage directories for 
testing purposes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-08-20 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6898:
---

 Summary: DN must reserve space for a full block when an RBW block 
is created
 Key: HDFS-6898
 URL: https://issues.apache.org/jira/browse/HDFS-6898
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


DN will successfully create two RBW blocks on the same volume even if the free 
space is sufficient for just one full block.

One or both block writers may subsequently get a DiskOutOfSpace exception. This 
can be avoided by allocating space up front.
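
A minimal sketch of the up-front reservation idea (class and method names 
hypothetical, not the actual DN code):
{code}
// Hypothetical: reserve a full block's worth of space when an RBW replica
// is created, and release the reservation when the block is finalized.
class VolumeSpace {
  private final long capacity;
  private final java.util.concurrent.atomic.AtomicLong reserved =
      new java.util.concurrent.atomic.AtomicLong();

  VolumeSpace(long capacity) { this.capacity = capacity; }

  /** Fail fast at block creation instead of mid-write. */
  boolean tryReserve(long fullBlockSize) {
    for (;;) {
      long cur = reserved.get();
      if (cur + fullBlockSize > capacity) {
        return false; // would overcommit this volume
      }
      if (reserved.compareAndSet(cur, cur + fullBlockSize)) {
        return true;
      }
    }
  }

  void release(long bytes) { reserved.addAndGet(-bytes); }
}
{code}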



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6897) If DatanodeThreshold is not set up, do not show information regarding DatanodeThreshold during NN startup

2014-08-20 Thread Benoy Antony (JIRA)
Benoy Antony created HDFS-6897:
--

 Summary: If DatanodeThreshold is not set up, do not show 
information regarding DatanodeThreshold during NN startup
 Key: HDFS-6897
 URL: https://issues.apache.org/jira/browse/HDFS-6897
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.5.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor


During Namenode startup, we see the following message:
{code}
The number of live datanodes XXX has reached the minimum number 0. Safe mode 
will be turned off automatically once the thresholds have been reached.
{code}
We have not set up a datanode threshold, so this message is unnecessary.
It would be good to suppress this message when no datanode threshold is configured.
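
A sketch of the proposed guard (variable names hypothetical):
{code}
// Hypothetical: only mention the datanode threshold when one is configured
// (dfs.namenode.safemode.min.datanodes > 0).
if (datanodeThreshold > 0) {
  msg.append("The number of live datanodes ").append(numLive)
     .append(" has reached the minimum number ").append(datanodeThreshold)
     .append(". ");
}
msg.append("Safe mode will be turned off automatically once the thresholds "
    + "have been reached.");
{code}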




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-3988) HttpFS can't do GETDELEGATIONTOKEN without a prior authenticated request

2014-08-20 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur resolved HDFS-3988.
--

Resolution: Done

HADOOP-10771 took care of this.

> HttpFS can't do GETDELEGATIONTOKEN without a prior authenticated request
> 
>
> Key: HDFS-3988
> URL: https://issues.apache.org/jira/browse/HDFS-3988
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.6.0
>
>
> A request to obtain a delegation token cannot initiate an authentication 
> sequence; it must be accompanied by an auth cookie obtained in a previous 
> request using a different operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6896) Add XDR packaging method for each Mount request

2014-08-20 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6896:


 Summary: Add XDR packaging method for each Mount request
 Key: HDFS-6896
 URL: https://issues.apache.org/jira/browse/HDFS-6896
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Brandon Li






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6895) Add XDR parser method for each Mount response

2014-08-20 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6895:


 Summary: Add XDR parser method for each Mount response
 Key: HDFS-6895
 URL: https://issues.apache.org/jira/browse/HDFS-6895
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Brandon Li






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6894) Add XDR parser method for each NFS/Mount response

2014-08-20 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6894:


 Summary: Add XDR parser method for each NFS/Mount response
 Key: HDFS-6894
 URL: https://issues.apache.org/jira/browse/HDFS-6894
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
 Environment: This can be an abstract method in NFS3Response to force 
the subclasses to implement it.
Reporter: Brandon Li
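
A sketch of the abstract-method idea from the note above (method name and 
signature hypothetical; a static factory per subclass would be an alternative, 
since parsers are often static):
{code}
// Hypothetical: declare the XDR parser abstract on the base response class
// so that every NFS/Mount response type must supply its own deserializer.
public abstract class NFS3Response {
  protected int status;

  // Each subclass reads its own fields from the XDR stream after the status.
  public abstract void deserialize(XDR xdr);
}
{code}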






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6893) crypto subcommand is not sorted properly in hdfs's hadoop_usage

2014-08-20 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-6893:
--

 Summary: crypto subcommand is not sorted properly in hdfs's 
hadoop_usage
 Key: HDFS-6893
 URL: https://issues.apache.org/jira/browse/HDFS-6893
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Trivial


crypto subcommand should be after classpath, not zkfc, in the hdfs usage output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6892) Add XDR packaging method for each NFS/Mount request

2014-08-20 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6892:


 Summary: Add XDR packaging method for each NFS/Mount request
 Key: HDFS-6892
 URL: https://issues.apache.org/jira/browse/HDFS-6892
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Brandon Li


The method can be used for unit tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6891) Follow-on work for transparent data at rest encryption

2014-08-20 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6891:
-

 Summary: Follow-on work for transparent data at rest encryption
 Key: HDFS-6891
 URL: https://issues.apache.org/jira/browse/HDFS-6891
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Charles Lamb


This is an umbrella JIRA to track remaining subtasks from HDFS-6134.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6890) NFS readdirplus doesn't return dotdot attributes

2014-08-20 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6890:


 Summary: NFS readdirplus doesn't return dotdot attributes
 Key: HDFS-6890
 URL: https://issues.apache.org/jira/browse/HDFS-6890
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li


In RpcProgramNfs3#readdirplus():
{noformat}
entries[1] = new READDIRPLUS3Response.EntryPlus3(dotdotFileId, "..",
  dotdotFileId, postOpDirAttr, new FileHandle(dotdotFileId));
{noformat}
It should return the directory's parent attribute instead of postOpDirAttr.
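
A hedged sketch of the fix, assuming a helper along the lines of 
{{Nfs3Utils.getFileAttr()}} can resolve attributes by file id path (exact 
helper names may differ):
{code}
// Sketch: look up the parent directory's own attributes for the ".." entry
// instead of reusing the current directory's postOpDirAttr.
Nfs3FileAttributes dotdotAttr = Nfs3Utils.getFileAttr(dfsClient,
    Nfs3Utils.getFileIdPath(dotdotFileId), iug);
entries[1] = new READDIRPLUS3Response.EntryPlus3(dotdotFileId, "..",
    dotdotFileId, dotdotAttr, new FileHandle(dotdotFileId));
{code}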



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6889) Provide an iterator-based listing API for FileSystem

2014-08-20 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-6889:


 Summary: Provide an iterator-based listing API for FileSystem
 Key: HDFS-6889
 URL: https://issues.apache.org/jira/browse/HDFS-6889
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee


Iterator-based listing methods already exist in {{FileContext}} for both simple 
listing and listing with locations. However, {{FileSystem}} lacks the former.  
From what I understand, it wasn't added to {{FileSystem}} because {{FileSystem}} 
was believed to be phased out soon. Since {{FileSystem}} is very much alive 
today and new features are being added frequently, I propose adding an 
iterator-based {{listStatus}} method. As for the name of the new method, we can 
use the same name used in {{FileContext}}: {{listStatusIterator()}}.

It will be particularly useful when listing giant directories. Without this, 
the client has to build up a huge data structure and hold it in memory. We've 
seen client JVMs running out of memory because of this.

Once this change is made, we can modify FsShell, etc. in followup jiras.
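
A sketch of what the new method could look like, mirroring 
{{FileContext#listStatusIterator()}} (the default shown below is a hypothetical 
fallback; DistributedFileSystem could override it with a paged, on-demand 
version):
{code}
// Hypothetical default in FileSystem: wraps the array-returning listStatus()
// so that existing FileSystem subclasses keep working unchanged.
public RemoteIterator<FileStatus> listStatusIterator(final Path p)
    throws FileNotFoundException, IOException {
  final FileStatus[] stats = listStatus(p);
  return new RemoteIterator<FileStatus>() {
    private int i = 0;

    @Override
    public boolean hasNext() {
      return i < stats.length;
    }

    @Override
    public FileStatus next() throws IOException {
      if (!hasNext()) {
        throw new java.util.NoSuchElementException("No more entries in " + p);
      }
      return stats[i++];
    }
  };
}
{code}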



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Build failed in Jenkins: Hadoop-Hdfs-trunk #1843

2014-08-20 Thread Apache Jenkins Server
See 

Changes:

[brandonli] HDFS-6868. portmap and nfs3 are documented as hadoop commands 
instead of hdfs. Contributed by Brandon Li

[zjshen] YARN-2249. Avoided AM release requests being lost on work preserving 
RM restart. Contributed by Jian He.

[cmccabe] HADOOP-10968. hadoop native build fails to detect java_libarch on 
ppc64le (Dinar Valeev via Colin Patrick McCabe)

[jianhe] YARN-2409. RM ActiveToStandBy transition missing stoping previous 
rmDispatcher. Contributed by Rohith

--
[...truncated 18692 lines...]
Results :

Failed tests: 
  TestBookKeeperJournalManager.setupBookkeeper:74 Not all bookies started 
expected:<3> but was:<0>
  TestBookKeeperHACheckpoints.startBK:71 Not all bookies started expected:<3> 
but was:<0>
  TestBookKeeperAsHASharedDir.setupBookkeeper:75 Not all bookies started 
expected:<3> but was:<0>
  TestBootstrapStandbyWithBKJM.setupBookkeeper:54 Not all bookies started 
expected:<3> but was:<0>
  TestBookKeeperEditLogStreams.setupBookkeeper:48 Not all bookies started 
expected:<3> but was:<0>

Tests run: 10, Failures: 5, Errors: 0, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in 
http://repo.maven.apache.org/maven2 was cached in the local repository, 
resolution will not be reattempted until the update interval of central has 
elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 15 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-hdfs-nfs ---
[INFO] Surefire report directory: 


---
 T E S T S
---

Running org.apache.hadoop.hdfs.nfs.TestMountd
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.497 sec - in 
org.apache.hadoop.hdfs.nfs.TestMountd
Running org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.242 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Running org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.095 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Running org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.772 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.421 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Running org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.408 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Running org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Tests run: 

Hadoop-Hdfs-trunk - Build # 1843 - Still Failing

2014-08-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1843/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 18885 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [  02:18 h]
[INFO] Apache Hadoop HttpFS .. SUCCESS [03:26 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [ 56.009 s]
[INFO] Apache Hadoop HDFS-NFS  SUCCESS [01:28 min]
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.047 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:24 h
[INFO] Finished at: 2014-08-20T17:52:00+00:00
[INFO] Final Memory: 97M/1378M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #1833
Archived 2 artifacts
Archive block size is 32768
Received 95 blocks and 120249062 bytes
Compression is 2.5%
Took 1 min 11 sec
Recording test results
Updating HADOOP-10968
Updating YARN-2249
Updating HDFS-6868
Updating YARN-2409
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Created] (HDFS-6888) Remove audit logging of getFileInfo()

2014-08-20 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-6888:


 Summary: Remove audit logging of getFileInfo()
 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


The audit logging of getFileInfo() was added in HDFS-3733.  Since this is one 
of the most frequently called methods, users have noticed that the audit log is 
now filled with these entries.  Since we now have HTTP request logging, this 
seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6887) better performance

2014-08-20 Thread ilovehadoop (JIRA)
ilovehadoop created HDFS-6887:
-

 Summary: better performance
 Key: HDFS-6887
 URL: https://issues.apache.org/jira/browse/HDFS-6887
 Project: Hadoop HDFS
  Issue Type: Wish
  Components: qjm
Affects Versions: 0.23.10
Reporter: ilovehadoop
Priority: Critical
 Fix For: 0.23.2






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6886) Use single editlog record for creating file + overwrite.

2014-08-20 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6886:


 Summary: Use single editlog record for creating file + overwrite.
 Key: HDFS-6886
 URL: https://issues.apache.org/jira/browse/HDFS-6886
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu


As discussed in HDFS-6871, following [~jingzhao] and [~cmccabe]'s suggestion, we 
could make a further improvement in this JIRA: use one editlog record for 
creating file + overwrite, by recording the overwrite flag in the editlog entry 
for file creation.
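
A sketch of carrying the flag in the editlog record (the setter is hypothetical; 
the surrounding AddOp builder pattern follows FSEditLog):
{code}
// Hypothetical: record the overwrite flag on the AddOp so that replaying
// the single record can delete the existing file before re-creating it.
AddOp op = AddOp.getInstance(cache.get())
    .setInodeId(newNode.getId())
    .setPath(src)
    .setOverwrite(overwrite);  // new field proposed in this JIRA
logEdit(op);
{code}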



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2014-08-20 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6885:


 Summary: Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
 Key: HDFS-6885
 URL: https://issues.apache.org/jira/browse/HDFS-6885
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


After readFields() using BytesWritable, the data length should be 
{{writable.getLength()}}, instead of {{writable.getBytes().length}}, which is 
the buffer length. 
This causes the returned {{Rename[]}} to be longer than expected, and it may 
include some incorrect values (they are Rename#NONE, and have not caused 
problems yet, but the code is incorrect). 
{code}
BytesWritable writable = new BytesWritable();
writable.readFields(in);

byte[] bytes = writable.getBytes();
Rename[] options = new Rename[bytes.length];

for (int i = 0; i < bytes.length; i++) {
  options[i] = Rename.valueOf(bytes[i]);
}
{code}
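
A corrected sketch following the description above, bounding the loop by 
{{writable.getLength()}} rather than the backing buffer's length:
{code}
BytesWritable writable = new BytesWritable();
writable.readFields(in);

// getBytes() exposes the backing buffer, which may be longer than the data;
// getLength() is the number of valid bytes actually read.
byte[] bytes = writable.getBytes();
int len = writable.getLength();
Rename[] options = new Rename[len];

for (int i = 0; i < len; i++) {
  options[i] = Rename.valueOf(bytes[i]);
}
{code}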



--
This message was sent by Atlassian JIRA
(v6.2#6252)