[jira] [Commented] (PIG-2924) PigStats should not be assuming all Storage classes to be file-based storage

2012-11-07 Thread Bill Graham (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493019#comment-13493019
 ] 

Bill Graham commented on PIG-2924:
--

Sorry for the delay on the review, Cheolsoo. Looking good. A few more comments; 
let me know what you think.

- I think we need to pass the {{POStore}} instead of the location. Storage 
implementations provided by third parties might not all abide by a unique 
namespacing convention in their location syntax. For example, {{VerticaStorer}} 
uses a syntax like "{[db_schema].[table_name]}" (curly brackets included), and 
another implementor could use the same syntax.
- {{JobStats.getOutputSize}} could be simplified by doing this, which is more 
commonly done:
{noformat}
String reporterNames = conf.get(
        PigStatsOutputSizeReader.OUTPUT_SIZE_READER_KEY,
        FileBasedOutputSizeReader.class.getCanonicalName());
{noformat}
- Does {{PigContext.instantiateFuncFromSpec(className)}} (without appending 
"()") not work?
- It seems like it would be reasonable for 
{{PigStatsOutputSizeReader.getOutputSize}} to throw IOException all the way up 
to {{JobStats}}.
- Let's make {{DummyOutputSizeReader}} an inner class of {{TestJobStats}} since 
that package is already totally bloated.
- In {{pig.properties}} reducers' should not have an apostrophe (no possessive 
for inanimate objects).
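As a rough illustration of the lookup-with-default pattern suggested above, here 
is a minimal, self-contained Java sketch. The {{Conf}} class, the property value 
string, and the method signatures are stand-ins, not Pig's or Hadoop's actual 
API; only the constant name comes from the patch discussion.

```java
public class ReaderLookup {
    // Minimal stand-in for a Hadoop Configuration: get(key, default) returns
    // the default when the key is unset, which collapses the usual
    // get-then-null-check pattern into a single call.
    static class Conf {
        private final java.util.Map<String, String> props = new java.util.HashMap<>();

        String get(String key, String defaultValue) {
            return props.getOrDefault(key, defaultValue);
        }

        void set(String key, String value) {
            props.put(key, value);
        }
    }

    // Hypothetical property name; the real constant would live in
    // PigStatsOutputSizeReader.
    static final String OUTPUT_SIZE_READER_KEY = "pig.stats.output.size.reader";

    // One call: falls back to the file-based reader when nothing is configured.
    static String readerName(Conf conf) {
        return conf.get(OUTPUT_SIZE_READER_KEY, "FileBasedOutputSizeReader");
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        System.out.println(readerName(conf));
        conf.set(OUTPUT_SIZE_READER_KEY, "DummyOutputSizeReader");
        System.out.println(readerName(conf));
    }
}
```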

> PigStats should not be assuming all Storage classes to be file-based storage
> 
>
> Key: PIG-2924
> URL: https://issues.apache.org/jira/browse/PIG-2924
> Project: Pig
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.2, 0.10.0
>Reporter: Harsh J
>Assignee: Cheolsoo Park
> Attachments: PIG-2924-2.patch, PIG-2924-3.patch, PIG-2924-4.patch, 
> PIG-2924.patch
>
>
> Using PigStatsUtil (like Oozie does) to collect JobStats for jobs that use 
> HBaseStorage blows up when the stats are asked to be accumulated.
> This is because JobStats (which adds stuff up) assumes all storages are 
> file-based and that it can do listStatus/etc. operations on their 
> filespec-provided filename. For HBaseStorage, this is set to the table name, 
> and there's no such file, leading to an exception (FileNotFound or Invalid 
> URI, depending on using 'tablename' or 'hbase://tablename').

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (PIG-3041) Improve ResourceStatistics

2012-11-07 Thread Prashant Kommireddi (JIRA)
Prashant Kommireddi created PIG-3041:


 Summary: Improve ResourceStatistics
 Key: PIG-3041
 URL: https://issues.apache.org/jira/browse/PIG-3041
 Project: Pig
  Issue Type: Bug
Affects Versions: 0.12
Reporter: Prashant Kommireddi
Assignee: Prashant Kommireddi


This is a follow-up JIRA to PIG-2582. ResourceStatistics should be improved; 
here are a few things we should do for 0.13.

1. Consider removing the method setmBytes(Long mBytes). We deprecated this 
method in 0.12, but the code is not intuitive, since the setter actually works 
on the variable "bytes".

2. All setter methods return the ResourceStatistics object, which is 
unnecessary. For example:
{code}
public ResourceStatistics setNumRecords(Long numRecords) {
    this.numRecords = numRecords;
    return this;
}
{code}

Each one of these variables has an associated getter.

I will take this up once we are in the 0.13 cycle.
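A minimal sketch of the proposed setter change, assuming a trimmed-down class 
with a single field (the class and field names are illustrative, not the actual 
ResourceStatistics code):

```java
// Hypothetical sketch: a plain void setter paired with its getter, instead of
// a fluent setter that returns `this`.
public class StatsSketch {
    private Long numRecords;

    // Proposed form: no return value, symmetric with the getter below.
    public void setNumRecords(Long numRecords) {
        this.numRecords = numRecords;
    }

    public Long getNumRecords() {
        return numRecords;
    }

    public static void main(String[] args) {
        StatsSketch s = new StatsSketch();
        s.setNumRecords(42L);
        System.out.println(s.getNumRecords()); // 42
    }
}
```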




[jira] [Commented] (PIG-3015) Rewrite of AvroStorage

2012-11-07 Thread Cheolsoo Park (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493010#comment-13493010
 ] 

Cheolsoo Park commented on PIG-3015:


Hi Russell,

Thank you very much for offering help. :-)
{quote}
However, I don't think it's acceptable to break backwards-compatibility with the 
existing AvroStorage, and having two implementations at once seems confusing. 
It would be best to extend this implementation with those features required to 
maintain compatibility with the Piggybank AvroStorage before committing it as a 
builtin.
{quote}
Sure, we can wait until the new AvroStorage is complete before committing it, 
and I won't insist on maintaining two versions of AvroStorage if that's 
confusing to others.

But given that the new AvroStorage will have different options from the current 
one, introducing some backward incompatibility seems unavoidable. For example, 
the options Joseph proposes are very different from those of the current 
AvroStorage. Would that be acceptable?

> Rewrite of AvroStorage
> --
>
> Key: PIG-3015
> URL: https://issues.apache.org/jira/browse/PIG-3015
> Project: Pig
>  Issue Type: Improvement
>  Components: piggybank
>Reporter: Joseph Adler
>Assignee: Joseph Adler
>
> The current AvroStorage implementation has a lot of issues: it requires old 
> versions of Avro, it copies data much more than needed, and it's verbose and 
> complicated. (One pet peeve of mine is that old versions of Avro don't 
> support Snappy compression.)
> I rewrote AvroStorage from scratch to fix these issues. In early tests, the 
> new implementation is significantly faster, and the code is a lot simpler. 
> Rewriting AvroStorage also enabled me to implement support for Trevni.
> I'm opening this ticket to facilitate discussion while I figure out the best 
> way to contribute the changes back to Apache.



[jira] [Updated] (PIG-2582) Store size in bytes (not mbytes) in ResourceStatistics

2012-11-07 Thread Bill Graham (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Graham updated PIG-2582:
-

Fix Version/s: 0.12

> Store size in bytes (not mbytes) in ResourceStatistics
> --
>
> Key: PIG-2582
> URL: https://issues.apache.org/jira/browse/PIG-2582
> Project: Pig
>  Issue Type: Bug
>Reporter: Travis Crawford
>Assignee: Prashant Kommireddi
>Priority: Minor
> Fix For: 0.12
>
> Attachments: PIG-2582_1.patch, PIG-2582_2.patch, PIG-2582.patch
>
>
> In 
> [ResourceStatistics.java|http://svn.apache.org/viewvc/pig/trunk/src/org/apache/pig/ResourceStatistics.java?view=markup]
>  we see mBytes is public, and has a public getter/setter.
> {code}
> 47public Long mBytes; // size in megabytes
> 196   public Long getmBytes() {
> 197   return mBytes;
> 198   }
> 199   public ResourceStatistics setmBytes(Long mBytes) {
> 200   this.mBytes = mBytes;
> 201   return this;
> 202   }
> {code}
> Typically sizes are stored as bytes, potentially having convenience functions 
> to return with different units.
> If mBytes can be marked private without causing woes it might be worth 
> storing size as bytes instead.
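The change being suggested can be sketched as follows; this is a hypothetical, 
trimmed-down class, not the actual ResourceStatistics code:

```java
// Hypothetical sketch: store the size in bytes as the canonical unit and
// derive megabytes in a convenience getter, instead of exposing a public
// mBytes field.
public class SizeSketch {
    private Long bytes;

    public void setBytes(Long bytes) {
        this.bytes = bytes;
    }

    public Long getBytes() {
        return bytes;
    }

    // Convenience view in megabytes, computed from the canonical byte count.
    public Long getmBytes() {
        return bytes == null ? null : bytes / (1024L * 1024L);
    }

    public static void main(String[] args) {
        SizeSketch s = new SizeSketch();
        s.setBytes(3L * 1024L * 1024L);
        System.out.println(s.getmBytes()); // 3
    }
}
```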



[jira] [Resolved] (PIG-2582) Store size in bytes (not mbytes) in ResourceStatistics

2012-11-07 Thread Bill Graham (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Graham resolved PIG-2582.
--

  Resolution: Fixed
Release Note: Committed, thanks Prashant!

> Store size in bytes (not mbytes) in ResourceStatistics
> --



[jira] [Commented] (PIG-3016) Modernize more tests

2012-11-07 Thread Jonathan Coveney (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492958#comment-13492958
 ] 

Jonathan Coveney commented on PIG-3016:
---

Sounds great, Cheolsoo. Thanks for following through on this!

> Modernize more tests
> 
>
> Key: PIG-3016
> URL: https://issues.apache.org/jira/browse/PIG-3016
> Project: Pig
>  Issue Type: Improvement
>Reporter: Jonathan Coveney
>Assignee: Jonathan Coveney
> Fix For: 0.12
>
> Attachments: PIG-3016-0.patch, PIG-3016-1-nowhitespace.patch, 
> PIG-3016-1.patch
>
>
> This takes the same idea as PIG-3006 and applies it to the remaining tests. 
> Note that the one thing I did not do was get rid of MiniCluster. That can be 
> for another JIRA. All of this refactoring is effort enough for the time being 
> :)



[jira] [Updated] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread Cheolsoo Park (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheolsoo Park updated PIG-2405:
---

   Resolution: Fixed
Fix Version/s: 0.12
   Status: Resolved  (was: Patch Available)

Committed to trunk/0.11. Thank you for your contribution, Fangfang!

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11, 0.12
>
> Attachments: PIG-2405-trunk.patch
>
>
> [junit] Test org.apache.pig.test.TestDataModel FAILED
> Testcase: testTupleToString took 0.004 sec
> FAILED
> toString expected:<...ad a little 
> lamb)},[[hello#world,goodbye#all]],42,50,3.14...> but was:<...ad a 
> little lamb)},[[goodbye#all,hello#world]],42,50,3.14...>
> junit.framework.ComparisonFailure: toString expected:<...ad a little 
> lamb)},[[hello#world,goodbye#all]],42,50,3.14...> but was:<...ad a 
> little lamb)},[[goodbye#all,hello#world]],42,50,3.14...>
>  at 
> org.apache.pig.test.TestDataModel.testTupleToString(TestDataModel.java:269
> [junit] Test org.apache.pig.test.TestHBaseStorage FAILED
> Tests run: 18, Failures: 0, Errors: 12, Time elapsed: 188.612 sec
> Testcase: testHeterogeneousScans took 0.018 sec
> Caused an ERROR
> java.io.FileNotFoundException: /root/pigtest/conf/hadoop-site.xml (Too many 
> open files)
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> /root/pigtest/conf/hadoop-site.xml (Too many open files)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1162)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1035)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:436)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.(HConnectionManager.java:271)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:155)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:167)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:130)
> at 
> org.apache.pig.test.TestHBaseStorage.prepareTable(TestHBaseStorage.java:809)
> at 
> org.apache.pig.test.TestHBaseStorage.testHeterogeneousScans(TestHBaseStorage.java:741)
> Caused by: java.io.FileNotFoundException: /root/pigtest/conf/hadoop-site.xml 
> (Too many open files)
> at java.io.FileInputStream.(FileInputStream.java:112)
> at java.io.FileInputStream.(FileInputStream.java:72)
> at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
> at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
> at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
> Source)
> at 
> org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
> at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
> at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
> at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
> at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
> at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
> at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1079)
> Caused an ERROR
> Could not resolve the DNS name of hostname:39611
> java.lang.IllegalArgumentException: Could not resolve the DNS name of 
> hostname:39611
> at 
> org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
> at 
> org.apache.hadoop.hbase.HServerAddress.(HServerAddress.java:66)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:755)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:590)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:555)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:171)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:145)
> at 
> org.apache.pig.test.TestHBaseStorage.deleteAllRows(TestHBaseStorage.java

[jira] [Updated] (PIG-3035) With latest version of hadoop23 pig does not return the correct exception stack trace from backend

2012-11-07 Thread Rohini Palaniswamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohini Palaniswamy updated PIG-3035:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks, Daniel. Committed to 0.10.1, 0.11, and trunk.

> With latest version of hadoop23 pig does not return the correct exception 
> stack trace from backend
> --
>
> Key: PIG-3035
> URL: https://issues.apache.org/jira/browse/PIG-3035
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.10.0
>Reporter: Rohini Palaniswamy
>Assignee: Rohini Palaniswamy
> Fix For: 0.11, 0.10.1, 0.12
>
> Attachments: PIG-3035.patch
>
>
> With the latest version of Hadoop, the job failure shown to the user is always 
> "Container killed by the ApplicationMaster". For example: 
> "Backend error : Unable to recreate exception from backed error: 
> AttemptID:attempt_1352163563357_0002_m_00_1 Info:Container killed by the 
> ApplicationMaster"
> Steps to Reproduce:
>   Change hadoop version from 2.0.0-alpha to 0.23.5-SNAPSHOT in 
> ivy/libraries.properties. Tests 
> TestScalarAliases.testScalarErrMultipleRowsInInput and 
> TestEvalPipeline2.testNonStandardData will fail when run with 
> -Dhadoopversion=23.   



[jira] [Commented] (PIG-2980) documentation for DateTime datatype

2012-11-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492934#comment-13492934
 ] 

Thejas M Nair commented on PIG-2980:


Thanks for the patch, Zhijie. It looks good.
It says that datetime constants are supported, but if you pass 
'1970-01-01T00:00:00.000+00:00' to Pig (say, as an argument to a UDF), I 
believe it would get interpreted as a string. That is, we support chararray 
constants that can be cast to datetime, but not a datetime constant per se. Is 
that correct? (I think it makes sense to support datetime constants, using a 
format that does not cause ambiguity with the chararray type, but that would be 
another JIRA.)
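To illustrate the chararray-versus-datetime distinction, here is a small 
standalone Java sketch using java.time (Pig's DateTime type is backed by 
Joda-Time; java.time is used here only to keep the example self-contained): the 
ISO-8601 literal is plain text until it is explicitly parsed, i.e. cast, into a 
datetime value.

```java
import java.time.OffsetDateTime;

// The string literal carries no datetime semantics by itself; parsing it is
// the explicit cast step that produces a typed datetime value.
public class DatetimeConstantSketch {
    public static void main(String[] args) {
        String literal = "1970-01-01T00:00:00.000+00:00";
        OffsetDateTime dt = OffsetDateTime.parse(literal);
        System.out.println(dt.toEpochSecond()); // 0 (the Unix epoch)
    }
}
```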



> documentation for DateTime datatype
> ---
>
> Key: PIG-2980
> URL: https://issues.apache.org/jira/browse/PIG-2980
> Project: Pig
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thejas M Nair
>Assignee: Zhijie Shen
> Fix For: 0.11
>
> Attachments: PIG-2980.patch
>
>
> Documentation for new DateTime type needs to be added.



[jira] [Commented] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread fang fang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492918#comment-13492918
 ] 

fang fang chen commented on PIG-2405:
-

Hi Cheolsoo,

Sorry for missing the fix. I've updated the patch based on your comments.

Thanks

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>

[jira] [Updated] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated PIG-2405:


Attachment: PIG-2405-trunk.patch

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>

[jira] [Updated] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated PIG-2405:


Attachment: (was: PIG-2405-trunk.patch)

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>

Re: Review Request: PIG-2405: some unit test case failed with open JDK

2012-11-07 Thread Fang Fang Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7898/
---

(Updated Nov. 8, 2012, 2:46 a.m.)


Review request for pig.


Description
---

For PIG-2405, fix 3 unit tests: TestDataModel, TestMRCompiler, and TestPruneColumn.


This addresses bug PIG-2405.
https://issues.apache.org/jira/browse/PIG-2405


Diffs (updated)
-

  trunk/test/org/apache/pig/test/TestDataModel.java 1406517 
  trunk/test/org/apache/pig/test/TestMRCompiler.java 1406517 
  trunk/test/org/apache/pig/test/TestPruneColumn.java 1406517 
  trunk/test/org/apache/pig/test/utils/TestHelper.java 1406517 

Diff: https://reviews.apache.org/r/7898/diff/


Testing
---

The 3 UTs passed with both the Sun JDK and OpenJDK.
test-commit passed.
test-patch failed, but the failure isn't introduced by the patch. There are 
already 38 [javadoc] warnings in the original trunk branch (PIG-3033).
[exec] -1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include 12 new or modified tests.
[exec]
[exec] -1 javadoc. The javadoc tool appears to have generated 1 warning 
messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac 
compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
[exec]
[exec] +1 release audit. The applied patch does not increase the total number 
of release audit warnings.


Thanks,

Fang Fang Chen



[jira] [Updated] (PIG-3016) Modernize more tests

2012-11-07 Thread Cheolsoo Park (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheolsoo Park updated PIG-3016:
---

Attachment: PIG-3016-1-nowhitespace.patch
PIG-3016-1.patch

Since PIG-3006 is committed now, I have rebased the patch to trunk. I am uploading 
two patches: with and without whitespace changes.

In addition to re-basing it, I made two changes as follows:
- Reverted the following change since it is not correct:
{code:title=TestDataModel.java}
-assertFalse(f1.equals(f2));
+assertEquals(f1, f2);
{code}
- In several places, changed the following
{code}

if (!iter.hasNext())
fail("No Output received");
{code}
to
{code}
assertTrue("No Output received", iter.hasNext());
{code}
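
As an illustration of the cleanup above, the modernized assertion can sit in an ordinary JUnit 4 test. The class name and iterator data below are hypothetical, not from the patch:

```java
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Iterator;

import org.junit.Test;

// Hypothetical test showing the modernized assertion style discussed above.
public class ModernAssertionStyleTest {

    @Test
    public void testOutputReceived() {
        Iterator<String> iter = Arrays.asList("(1,hello)").iterator();
        // Replaces: if (!iter.hasNext()) fail("No Output received");
        // The failure message is carried by the assertion itself.
        assertTrue("No Output received", iter.hasNext());
    }
}
```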

Other than this, I made no further changes to Jonathan's original patch.

I am giving +1 to Jonathan's patch. After running a full unit test suite with 
both hadoop 20 and 23 tonight, I am going to commit it to trunk.

Please let me know if you have any concerns.

Thanks!

> Modernize more tests
> 
>
> Key: PIG-3016
> URL: https://issues.apache.org/jira/browse/PIG-3016
> Project: Pig
>  Issue Type: Improvement
>Reporter: Jonathan Coveney
>Assignee: Jonathan Coveney
> Fix For: 0.12
>
> Attachments: PIG-3016-0.patch, PIG-3016-1-nowhitespace.patch, 
> PIG-3016-1.patch
>
>
> This takes the same idea as PIG-3006 and applies it to the remaining tests. 
> Note that the one thing I did not do was get rid of MiniCluster. That can be 
> for another JIRA. All of this refactoring is effort enough for the time being 
> :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated PIG-2405:


Attachment: PIG-2405-trunk.patch

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>
>
> [junit] Test org.apache.pig.test.TestDataModel FAILED
> Testcase: testTupleToString took 0.004 sec
> FAILED
> toString expected:<...ad a little 
> lamb)},[[hello#world,goodbye#all]],42,50,3.14...> but was:<...ad a 
> little lamb)},[[goodbye#all,hello#world]],42,50,3.14...>
> junit.framework.ComparisonFailure: toString expected:<...ad a little 
> lamb)},[[hello#world,goodbye#all]],42,50,3.14...> but was:<...ad a 
> little lamb)},[[goodbye#all,hello#world]],42,50,3.14...>
>  at 
> org.apache.pig.test.TestDataModel.testTupleToString(TestDataModel.java:269)
> [junit] Test org.apache.pig.test.TestHBaseStorage FAILED
> Tests run: 18, Failures: 0, Errors: 12, Time elapsed: 188.612 sec
> Testcase: testHeterogeneousScans took 0.018 sec
> Caused an ERROR
> java.io.FileNotFoundException: /root/pigtest/conf/hadoop-site.xml (Too many 
> open files)
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> /root/pigtest/conf/hadoop-site.xml (Too many open files)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1162)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1035)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:436)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:271)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:155)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:167)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:130)
> at 
> org.apache.pig.test.TestHBaseStorage.prepareTable(TestHBaseStorage.java:809)
> at 
> org.apache.pig.test.TestHBaseStorage.testHeterogeneousScans(TestHBaseStorage.java:741)
> Caused by: java.io.FileNotFoundException: /root/pigtest/conf/hadoop-site.xml 
> (Too many open files)
> at java.io.FileInputStream.<init>(FileInputStream.java:112)
> at java.io.FileInputStream.<init>(FileInputStream.java:72)
> at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
> at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
> at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
> Source)
> at 
> org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
> at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
> at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
> at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
> at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
> at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
> at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1079)
> Caused an ERROR
> Could not resolve the DNS name of hostname:39611
> java.lang.IllegalArgumentException: Could not resolve the DNS name of 
> hostname:39611
> at 
> org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
> at 
> org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:66)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:755)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:590)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:555)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:145)
> at 
> org.apache.pig.test.TestHBaseStorage.deleteAllRows(TestHBaseStorage.java:120)
> at 
> org.apache.pig.test.TestHBaseStorage.tearDown(TestHBaseStorage.java:112)
> [junit] Test org.apache.pig.test.TestMRCompiler 

[jira] [Updated] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread fang fang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang fang chen updated PIG-2405:


Attachment: (was: PIG-2405-trunk.patch)

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>

Re: Review Request: PIG-2405: some unit test case failed with open JDK

2012-11-07 Thread Fang Fang Chen


> On Nov. 7, 2012, 9 p.m., Cheolsoo Park wrote:
> > Looks good. Thanks Fangfang!

Thanks, Cheolsoo, for the review.


- Fang Fang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7898/#review13222
---





[jira] Subscription: PIG patch available

2012-11-07 Thread jira
Issue Subscription
Filter: PIG patch available (32 issues)

Subscriber: pigdaily

Key Summary
PIG-3035  With latest version of hadoop23 pig does not return the correct exception stack trace from backend
https://issues.apache.org/jira/browse/PIG-3035
PIG-3034  Remove Penny code from Pig repository
https://issues.apache.org/jira/browse/PIG-3034
PIG-3029  TestTypeCheckingValidatorNewLP has some path reference issues for cross-platform execution
https://issues.apache.org/jira/browse/PIG-3029
PIG-3028  testGrunt dev test needs some command filters to run correctly without cygwin
https://issues.apache.org/jira/browse/PIG-3028
PIG-3027  pigTest unit test needs a newline filter for comparisons of golden multi-line
https://issues.apache.org/jira/browse/PIG-3027
PIG-3026  Pig checked-in baseline comparisons need a pre-filter to address OS-specific newline differences
https://issues.apache.org/jira/browse/PIG-3026
PIG-3025  TestPruneColumn unit test - SimpleEchoStreamingCommand perl inline script needs simplification
https://issues.apache.org/jira/browse/PIG-3025
PIG-3024  TestEmptyInputDir unit test - hadoop version detection logic is brittle
https://issues.apache.org/jira/browse/PIG-3024
PIG-3014  CurrentTime() UDF has undesirable characteristics
https://issues.apache.org/jira/browse/PIG-3014
PIG-3010  Allow UDF's to flatten themselves
https://issues.apache.org/jira/browse/PIG-3010
PIG-2979  Pig.jar doesn't work with hadoop-2.0.x
https://issues.apache.org/jira/browse/PIG-2979
PIG-2978  TestLoadStoreFuncLifeCycle fails with hadoop-2.0.x
https://issues.apache.org/jira/browse/PIG-2978
PIG-2959  Add a pig.cmd for Pig to run under Windows
https://issues.apache.org/jira/browse/PIG-2959
PIG-2957  TetsScriptUDF fail due to volume prefix in jar
https://issues.apache.org/jira/browse/PIG-2957
PIG-2956  Invalid cache specification for some streaming statement
https://issues.apache.org/jira/browse/PIG-2956
PIG-2955  Fix bunch of Pig e2e tests on Windows
https://issues.apache.org/jira/browse/PIG-2955
PIG-2937  generated field in nested foreach does not inherit the variable name as the field name
https://issues.apache.org/jira/browse/PIG-2937
PIG-2924  PigStats should not be assuming all Storage classes to be file-based storage
https://issues.apache.org/jira/browse/PIG-2924
PIG-2873  Converting bin/pig shell script to python
https://issues.apache.org/jira/browse/PIG-2873
PIG-2834  MultiStorage requires unused constructor argument
https://issues.apache.org/jira/browse/PIG-2834
PIG-2824  Pushing checking number of fields into LoadFunc
https://issues.apache.org/jira/browse/PIG-2824
PIG-2661  Pig uses an extra job for loading data in Pigmix L9
https://issues.apache.org/jira/browse/PIG-2661
PIG-2657  Print warning if using wrong jython version
https://issues.apache.org/jira/browse/PIG-2657
PIG-2507  Semicolon in paramenters for UDF results in parsing error
https://issues.apache.org/jira/browse/PIG-2507
PIG-2433  Jython import module not working if module path is in classpath
https://issues.apache.org/jira/browse/PIG-2433
PIG-2417  Streaming UDFs - allow users to easily write UDFs in scripting languages with no JVM implementation.
https://issues.apache.org/jira/browse/PIG-2417
PIG-2405  svn tags/release-0.9.1: some unit test case failed with open JDK
https://issues.apache.org/jira/browse/PIG-2405
PIG-2362  Rework Ant build.xml to use macrodef instead of antcall
https://issues.apache.org/jira/browse/PIG-2362
PIG-2312  NPE when relation and column share the same name and used in Nested Foreach
https://issues.apache.org/jira/browse/PIG-2312
PIG-1942  script UDF (jython) should utilize the intended output schema to more directly convert Py objects to Pig objects
https://issues.apache.org/jira/browse/PIG-1942
PIG-1431  Current DateTime UDFs: ISONOW(), UNIXNOW()
https://issues.apache.org/jira/browse/PIG-1431
PIG-1237  Piggybank MutliStorage - specify field to write in output
https://issues.apache.org/jira/browse/PIG-1237

You may edit this subscription at:
https://issues.apache.org/jira/secure/FilterSubscription!default.jspa?subId=13225&filterId=12322384


[jira] [Commented] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread Cheolsoo Park (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492729#comment-13492729
 ] 

Cheolsoo Park commented on PIG-2405:


Hi Fangfang,

I was running tests with your new patch in the RB to commit it, but I found 
that TestPruneColumn.testStream2 fails. I believe that you omitted the 
following change in your new patch.
{code:title=TestPruneColumn.java}
-assertTrue(checkLogFileMessage(new String[]{"Map key required for 
event_serve: $0->[key4, key3]", 
-"Map key required for cm_data_raw: $0->[key4, key3, key5]"}));
+assertTrue(checkLogFileMessage(new String[]{"Map key required for 
event_serve: $0->[key3, key4]", 
+"Map key required for cm_data_raw: $0->[key3, key4, key5]"}));
{code}
Can you please upload a new patch to this JIRA? I will commit it as soon as you 
do.

I also have a super minor comment. This is just a suggestion, so I won't 
insist. I found that you replaced {{assertEquals}} with {{assertTrue}}:
{code:title=TestPruneColumn.java}
-assertEquals("([2#1,1#1])", t.toString());
+assertTrue(TestHelper.sortString("\\[(.*)\\]", t.toString(), ",")
+.equals("([1#1, 2#1])"));
{code}
Can you please not change assertEquals to assertTrue? We made a good amount of 
effort to modernize the test code in PIG-3006, and this was one of the patterns 
that we fixed.
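
For illustration, an order-insensitive comparison can keep assertEquals. This stand-alone sketch mimics what TestHelper.sortString presumably does; the helper below is hypothetical, not Pig's actual implementation:

```java
import java.util.Arrays;

public class SortedBracketExample {

    // Hypothetical stand-in for TestHelper.sortString: sorts the
    // comma-separated items inside the outermost [...] so that HashMap
    // iteration order (which differs between the Sun JDK and OpenJDK)
    // no longer affects string comparisons.
    static String sortBracketed(String s) {
        int open = s.indexOf('[');
        int close = s.lastIndexOf(']');
        String[] items = s.substring(open + 1, close).split(",");
        Arrays.sort(items);
        StringBuilder sb = new StringBuilder(s.substring(0, open + 1));
        for (int i = 0; i < items.length; i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append(items[i]);
        }
        sb.append(s.substring(close));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Both JDK orderings normalize to the same canonical form, so an
        // assertEquals-style comparison stays deterministic.
        System.out.println(sortBracketed("([2#1,1#1])")); // prints ([1#1,2#1])
        System.out.println(sortBracketed("([1#1,2#1])")); // prints ([1#1,2#1])
    }
}
```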

Thanks for your patience!

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>

[jira] [Created] (PIG-3040) Provide better support of union type with AvroStorage

2012-11-07 Thread Chris McConnell (JIRA)
Chris McConnell created PIG-3040:


 Summary: Provide better support of union type with AvroStorage
 Key: PIG-3040
 URL: https://issues.apache.org/jira/browse/PIG-3040
 Project: Pig
  Issue Type: Improvement
  Components: piggybank
Affects Versions: 0.10.0
Reporter: Chris McConnell
Priority: Minor


It would be nice to revisit AvroStorage, possibly in conjunction with 
https://issues.apache.org/jira/browse/PIG-3015, to support the Avro union 
type.

In discussions with Cheolsoo, a possible fix could be similar to the one for 
recursive records (https://issues.apache.org/jira/browse/PIG-2875): utilizing 
a bytearray could be flexible enough to work, but it does place the burden on 
the developer.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492696#comment-13492696
 ] 

Rohini Palaniswamy commented on PIG-2405:
-

+1 from me. Actually, I did not realize that the TestMRCompiler case was more 
complicated until I saw Cheolsoo's comment that he does not have a better 
suggestion. After taking a deeper look, I couldn't think of one either without 
changing the source code. This is a pretty good solution. Thanks for fixing 
this, Fang. I will let Cheolsoo take a final look and commit since he has been 
working with you so far.

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>

Re: Review Request: PIG-2405: some unit test case failed with open JDK

2012-11-07 Thread Cheolsoo Park

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7898/#review13222
---

Ship it!


Looks good. Thanks Fangfang!

- Cheolsoo Park





[jira] [Commented] (PIG-3015) Rewrite of AvroStorage

2012-11-07 Thread Russell Jurney (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492680#comment-13492680
 ] 

Russell Jurney commented on PIG-3015:
-

I agree that we should replace the old AvroStorage with this one, and that we 
should make AvroStorage a builtin.

However, I don't think it's acceptable to break backwards compatibility with the 
existing AvroStorage, and having two implementations at once seems confusing. 
It would be best to extend this implementation with the features required to 
maintain compatibility with the Piggybank AvroStorage before committing it as a 
builtin.

It sounds like you're on top of this, Joe and Cheolsoo :) I'll be a tester.

> Rewrite of AvroStorage
> --
>
> Key: PIG-3015
> URL: https://issues.apache.org/jira/browse/PIG-3015
> Project: Pig
>  Issue Type: Improvement
>  Components: piggybank
>Reporter: Joseph Adler
>Assignee: Joseph Adler
>
> The current AvroStorage implementation has a lot of issues: it requires old 
> versions of Avro, it copies data much more than needed, and it's verbose and 
> complicated. (One pet peeve of mine is that old versions of Avro don't 
> support Snappy compression.)
> I rewrote AvroStorage from scratch to fix these issues. In early tests, the 
> new implementation is significantly faster, and the code is a lot simpler. 
> Rewriting AvroStorage also enabled me to implement support for Trevni.
> I'm opening this ticket to facilitate discussion while I figure out the best 
> way to contribute the changes back to Apache.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Build failed in Jenkins: Pig-trunk #1355

2012-11-07 Thread Cheolsoo Park
I know that we discussed fixing the Jenkins build. Any updates?

I am looking at our build history. Basically, our build runs on hadoop1,
hadoop2, and hadoop6.

a) When it runs on hadoop1, it passes!
b) When it runs on hadoop2, it fails with a clover license error!
c) When it runs on hadoop6, it fails because a dozen test cases fail!
https://builds.apache.org/job/Pig-trunk/1355/testReport/

Questions:
- Can we disable clover in our build? I don't seem to have permission to
change it.
- Did anyone look into hadoop6 to see what's wrong with this server?

Thanks,
Cheolsoo

On Wed, Nov 7, 2012 at 2:34 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> Changes:
>
> [cheolsoo] PIG-3006: Modernize a chunk of the tests (jcoveney via cheolsoo)
>
> --
> [...truncated 36551 lines...]

[jira] [Created] (PIG-3039) Not possible to use custom version of jackson jars

2012-11-07 Thread Rohini Palaniswamy (JIRA)
Rohini Palaniswamy created PIG-3039:
---

 Summary: Not possible to use custom version of jackson jars
 Key: PIG-3039
 URL: https://issues.apache.org/jira/browse/PIG-3039
 Project: Pig
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Rohini Palaniswamy


The user is trying:

register jackson_core_asl-1.9.4_1.jar;
register jackson_mapper_asl-1.9.4_1.jar;
register jackson_xc-1.9.4_1.jar;

But pig.jar/pig-withouthadoop.jar bundles the jackson jars, and JarManager packages 
the jackson from pig.jar into job.jar (PIG-2457). We could not find any workaround 
within the MapReduce framework to put the user jars first on the classpath, as 
job.jar always takes precedence.

The Pig script works fine with 0.9, so this is a regression in 0.10.
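The precedence problem is easy to reproduce in miniature. Here is a hypothetical Python sketch (sys.path standing in for the task classpath; the module name and version strings are invented, not Pig internals): the copy that appears first on the path shadows the one registered later, just as classes packaged into job.jar shadow the user's jackson jars.

```python
import os
import sys
import tempfile

# Two directories each provide a module named "jackson": one stands in
# for the copy bundled into job.jar, the other for the user's newer jar.
bundled = tempfile.mkdtemp()
user = tempfile.mkdtemp()
with open(os.path.join(bundled, "jackson.py"), "w") as f:
    f.write("VERSION = 'bundled-old'\n")
with open(os.path.join(user, "jackson.py"), "w") as f:
    f.write("VERSION = 'user-new'\n")

# The user's directory is added first, then the "framework" directory is
# prepended in front of it -- mirroring job.jar taking precedence.
sys.path.insert(0, user)
sys.path.insert(0, bundled)
import jackson

print(jackson.VERSION)  # prints 'bundled-old': the bundled copy wins
```

The user's `register` statements are the equivalent of the first `sys.path.insert`: they have no effect as long as the framework's copy sits in front of them.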



[jira] [Updated] (PIG-2405) svn tags/release-0.9.1: some unit test case failed with open JDK

2012-11-07 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated PIG-2405:
--

Fix Version/s: 0.11

> svn tags/release-0.9.1: some unit test case failed with open JDK
> 
>
> Key: PIG-2405
> URL: https://issues.apache.org/jira/browse/PIG-2405
> Project: Pig
>  Issue Type: Bug
>Affects Versions: 0.9.1
> Environment: ant-1.8.2
> open jdk: 1.6
>Reporter: fang fang chen
>Assignee: fang fang chen
> Fix For: 0.11
>
> Attachments: PIG-2405-trunk.patch
>
>
> [junit] Test org.apache.pig.test.TestDataModel FAILED
> Testcase: testTupleToString took 0.004 sec
> FAILED
> toString expected:<...ad a little 
> lamb)},[[hello#world,goodbye#all]],42,50,3.14...> but was:<...ad a 
> little lamb)},[[goodbye#all,hello#world]],42,50,3.14...>
> junit.framework.ComparisonFailure: toString expected:<...ad a little 
> lamb)},[[hello#world,goodbye#all]],42,50,3.14...> but was:<...ad a 
> little lamb)},[[goodbye#all,hello#world]],42,50,3.14...>
>  at 
> org.apache.pig.test.TestDataModel.testTupleToString(TestDataModel.java:269
> [junit] Test org.apache.pig.test.TestHBaseStorage FAILED
> Tests run: 18, Failures: 0, Errors: 12, Time elapsed: 188.612 sec
> Testcase: testHeterogeneousScans took 0.018 sec
> Caused an ERROR
> java.io.FileNotFoundException: /root/pigtest/conf/hadoop-site.xml (Too many 
> open files)
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> /root/pigtest/conf/hadoop-site.xml (Too many open files)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1162)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1035)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:436)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.(HConnectionManager.java:271)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:155)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:167)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:130)
> at 
> org.apache.pig.test.TestHBaseStorage.prepareTable(TestHBaseStorage.java:809)
> at 
> org.apache.pig.test.TestHBaseStorage.testHeterogeneousScans(TestHBaseStorage.java:741)
> Caused by: java.io.FileNotFoundException: /root/pigtest/conf/hadoop-site.xml 
> (Too many open files)
> at java.io.FileInputStream.(FileInputStream.java:112)
> at java.io.FileInputStream.(FileInputStream.java:72)
> at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
> at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
> at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
> Source)
> at 
> org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
> at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
> at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
> at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
> at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
> at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
> at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1079)
> Caused an ERROR
> Could not resolve the DNS name of hostname:39611
> java.lang.IllegalArgumentException: Could not resolve the DNS name of 
> hostname:39611
> at 
> org.apache.hadoop.hbase.HServerAddress.checkBindAddressCanBeResolved(HServerAddress.java:105)
> at 
> org.apache.hadoop.hbase.HServerAddress.(HServerAddress.java:66)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:755)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:590)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:555)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:171)
> at org.apache.hadoop.hbase.client.HTable.(HTable.java:145)
> at 
> org.apache.pig.test.TestHBaseStorage.deleteAllRows(TestHBaseStorage.java:120)
> at 
> org.apache.pig.test.TestHBaseStorage.tearDown(TestHBaseStorage.java:112)
> [junit] Test org.apache.pig.test.TestMRCompiler FAILED
> Testcase

[jira] [Updated] (PIG-3038) Support for Credentials for UDF,Loader and Storer

2012-11-07 Thread Rohini Palaniswamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohini Palaniswamy updated PIG-3038:


Summary: Support for Credentials for UDF,Loader and Storer  (was: Support 
for Credentials for UDF/Loader/Storer)

Currently it is possible to add Credentials in LoadFunc (via the Job passed to 
setLocation) and StoreFunc (via the Job passed to setStoreLocation). In the case 
of StoreFunc, the Job passed is different each time for the 3 calls to 
setStoreLocation made on the front end, and only the Credentials set during the 
call to setStoreLocation from PigOutputFormat.checkOutputSpecs take effect. That 
is very obscure and requires knowing the internals of Pig to get it working.

  UDFs (EvalFunc), on the other hand, have no API through which the Job is passed 
and Credentials can be added. Users have to serialize and store the credentials 
in UDFProperties and retrieve them from there, which is not secure.

Proposal:
  Add a getCredentials() API to UDFContext, similar to JobConf.getCredentials(). 
Users can add their tokens to the returned Credentials object. On the backend, 
the API will return the Credentials the job was launched with. This approach 
would be cleaner and backward compatible. 
  
  The alternative is to add credential-related APIs to EvalFunc, LoadFunc and 
StoreFunc. To deal with backward compatibility in that case, one would have to 
use reflection to determine whether the method is implemented before calling 
it, which is not ideal. 
  
Note: However we do it, adding an API with Credentials will break backward 
compatibility with Hadoop 0.20.2. We will have to decide whether we plan to 
continue supporting it even in Pig 0.12. 

> Support for Credentials for UDF,Loader and Storer
> -
>
> Key: PIG-3038
> URL: https://issues.apache.org/jira/browse/PIG-3038
> Project: Pig
>  Issue Type: New Feature
>Affects Versions: 0.10.0
>Reporter: Rohini Palaniswamy
> Fix For: 0.12
>
>
>   Pig does not have a clean way (APIs) to support adding Credentials (hbase 
> token, hcat/hive metastore token) to Job and retrieving it.



[jira] [Created] (PIG-3038) Support for Credentials for UDF/Loader/Storer

2012-11-07 Thread Rohini Palaniswamy (JIRA)
Rohini Palaniswamy created PIG-3038:
---

 Summary: Support for Credentials for UDF/Loader/Storer
 Key: PIG-3038
 URL: https://issues.apache.org/jira/browse/PIG-3038
 Project: Pig
  Issue Type: New Feature
Affects Versions: 0.10.0
Reporter: Rohini Palaniswamy
 Fix For: 0.12


  Pig does not have a clean way (APIs) to support adding Credentials (hbase 
token, hcat/hive metastore token) to Job and retrieving it.



[jira] [Commented] (PIG-3037) Order by partition by

2012-11-07 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-3037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492513#comment-13492513
 ] 

Ted Malaska commented on PIG-3037:
--

Here is a simple use case

I want to add the running max and min price for the day onto each record:

Ticker|time|price
FB|1|5
FB|2|4
FB|3|6

would output the following

Ticker|time|price|max|min
FB|1|5|5|5
FB|2|4|5|4
FB|3|6|6|4

For every trade of FB in a given day or week.  
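The table above can be restated outside Pig to pin down the intended semantics: each record gets the max/min of all prices seen so far for its ticker. A minimal Python sketch (purely illustrative; the function name is made up):

```python
def add_running_max_min(records):
    """Append the running max and min price seen so far (per ticker,
    in time order) to each (ticker, time, price) record."""
    hi, lo, out = {}, {}, []
    for ticker, time, price in sorted(records, key=lambda r: (r[0], r[1])):
        hi[ticker] = max(hi.get(ticker, price), price)
        lo[ticker] = min(lo.get(ticker, price), price)
        out.append((ticker, time, price, hi[ticker], lo[ticker]))
    return out

trades = [("FB", 1, 5), ("FB", 2, 4), ("FB", 3, 6)]
for row in add_running_max_min(trades):
    print(row)  # ('FB', 1, 5, 5, 5) / ('FB', 2, 4, 5, 4) / ('FB', 3, 6, 6, 4)
```

Computed per ticker in streaming fashion, this needs only O(1) state per group, which is why a partitioner on order by (rather than group plus an in-memory sort of the whole group) is attractive when groups are large.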

> Order by partition by
> -
>
> Key: PIG-3037
> URL: https://issues.apache.org/jira/browse/PIG-3037
> Project: Pig
>  Issue Type: Bug
>Reporter: Ted Malaska
>Priority: Minor
>
> Why doesn't Pig support partition by on order by?
> If it did, Pig could be used for a host of windowing functions.  
> Is there some reason why we can't add a custom partitioner on an order by 
> operation?
> Is there a workaround to do windowing in Pig?  
> I understand that I can group and then order within a group, but what if the 
> items within the group are very big? Will I have memory issues? I need to 
> order the values within a group; e.g. stock tickers are the group 
> and they need to be sorted on time.



[jira] [Updated] (PIG-3037) Order by partition by

2012-11-07 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-3037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated PIG-3037:
-

Description: 
Why doesn't Pig support partition by on order by?

If it did, Pig could be used for a host of windowing functions.  

Is there some reason why we can't add a custom partitioner on an order by 
operation?

Is there a workaround to do windowing in Pig?  

I understand that I can group and then order within a group, but what if the 
items within the group are very big? Will I have memory issues? I need to 
order the values within a group; e.g. stock tickers are the group and 
they need to be sorted on time.


  was:
Why doesn't Pig support partition by on order by?

If it did, Pig could be used for a host of windowing functions.  

Is there some reason why we can't add a custom partitioner on an order by 
operation?

Is there a workaround to do windowing in Pig?  

I understand that I can group and then group within a group, but what if the 
items within the group are very big? Will I have memory issues? I need to 
order the values within a group; e.g. stock tickers are the group and 
they need to be sorted on time.



> Order by partition by
> -
>
> Key: PIG-3037
> URL: https://issues.apache.org/jira/browse/PIG-3037
> Project: Pig
>  Issue Type: Bug
>Reporter: Ted Malaska
>Priority: Minor
>
> Why doesn't Pig support partition by on order by?
> If it did, Pig could be used for a host of windowing functions.  
> Is there some reason why we can't add a custom partitioner on an order by 
> operation?
> Is there a workaround to do windowing in Pig?  
> I understand that I can group and then order within a group, but what if the 
> items within the group are very big? Will I have memory issues? I need to 
> order the values within a group; e.g. stock tickers are the group 
> and they need to be sorted on time.



[jira] [Commented] (PIG-3006) Modernize a chunk of the tests

2012-11-07 Thread Gianmarco De Francisci Morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492477#comment-13492477
 ] 

Gianmarco De Francisci Morales commented on PIG-3006:
-

Great job guys!

> Modernize a chunk of the tests
> --
>
> Key: PIG-3006
> URL: https://issues.apache.org/jira/browse/PIG-3006
> Project: Pig
>  Issue Type: Improvement
>Reporter: Jonathan Coveney
>Assignee: Jonathan Coveney
> Fix For: 0.12
>
> Attachments: PIG-3006-0.patch, PIG-3006-1.patch, PIG-3006-2.patch, 
> PIG-3006-3.patch, PIG-3006-4.patch
>
>
> A lot of the tests use antiquated patterns. My goal was to refactor them in a 
> couple of ways:
> - Get rid of the annotation specifying JUnit 4. All tests should use JUnit 4 
> (question: where is the JUnit 3 dependency even being pulled in?)
> - Nothing should extend TestCase. Everything should be annotation driven.
> - Properly use asserts. There was a lot of assertTrue(null==thing), so I 
> replaced it with assertNull(thing), and so on.
> - Get rid of MiniCluster use in a handful of cases.
> I've run every test and they pass, EXCEPT TestLargeFile which is failing on 
> trunk anyway.



[jira] [Created] (PIG-3037) Order by partition by

2012-11-07 Thread Ted Malaska (JIRA)
Ted Malaska created PIG-3037:


 Summary: Order by partition by
 Key: PIG-3037
 URL: https://issues.apache.org/jira/browse/PIG-3037
 Project: Pig
  Issue Type: Bug
Reporter: Ted Malaska
Priority: Minor


Why doesn't Pig support partition by on order by?

If it did, Pig could be used for a host of windowing functions.  

Is there some reason why we can't add a custom partitioner on an order by 
operation?

Is there a workaround to do windowing in Pig?  

I understand that I can group and then group within a group, but what if the 
items within the group are very big? Will I have memory issues? I need to 
order the values within a group; e.g. stock tickers are the group and 
they need to be sorted on time.




Re: Review Request: PIG-2405: some unit test case failed with open JDK

2012-11-07 Thread Fang Fang Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7898/
---

(Updated Nov. 7, 2012, 10:55 a.m.)


Review request for pig.


Changes
---

Update patch file.


Description
---

For PIG-2405, fix 3 UTs: TestDataModel, TestMRCompiler, and TestPruneColumn


This addresses bug PIG-2405.
https://issues.apache.org/jira/browse/PIG-2405


Diffs (updated)
-

  trunk/test/org/apache/pig/test/TestDataModel.java 1406517 
  trunk/test/org/apache/pig/test/TestMRCompiler.java 1406517 
  trunk/test/org/apache/pig/test/TestPruneColumn.java 1406517 
  trunk/test/org/apache/pig/test/utils/TestHelper.java 1406517 

Diff: https://reviews.apache.org/r/7898/diff/


Testing
---

The 3 UTs passed with both the Sun JDK and OpenJDK.
test-commit passed.
test-patch failed, but the failure isn't introduced by the patch. There are 
38 [javadoc] warnings in the original trunk branch (PIG-3033).
[exec] -1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include 12 new or modified tests.
[exec]
[exec] -1 javadoc. The javadoc tool appears to have generated 1 warning 
messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac 
compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
[exec]
[exec] +1 release audit. The applied patch does not increase the total number 
of release audit warnings.


Thanks,

Fang Fang Chen



Meetup Update

2012-11-07 Thread Russell Jurney
I have the video. I'll post the questions and a follow-up post soon.

Russell Jurney twitter.com/rjurney


Build failed in Jenkins: Pig-trunk #1355

2012-11-07 Thread Apache Jenkins Server
See 

Changes:

[cheolsoo] PIG-3006: Modernize a chunk of the tests (jcoveney via cheolsoo)

--
[...truncated 36551 lines...]
[junit] at 
org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:138)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] Shutting down DataNode 2
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
[junit] at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
[junit] at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
[junit] at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
[junit] at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
[junit] at 
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
[junit] 12/11/07 10:33:49 WARN datanode.FSDatasetAsyncDiskService: 
AsyncDiskService has already shut down.
[junit] 12/11/07 10:33:49 INFO mortbay.log: Stopped 
SelectChannelConnector@localhost:0
[junit] 12/11/07 10:33:49 INFO ipc.Server: Stopping server on 34423
[junit] 12/11/07 10:33:49 INFO ipc.Server: IPC Server handler 0 on 34423: 
exiting
[junit] 12/11/07 10:33:49 INFO ipc.Server: IPC Server handler 2 on 34423: 
exiting
[junit] 12/11/07 10:33:49 INFO ipc.Server: Stopping IPC Server listener on 
34423
[junit] 12/11/07 10:33:49 INFO ipc.Server: IPC Server handler 1 on 34423: 
exiting
[junit] 12/11/07 10:33:49 INFO metrics.RpcInstrumentation: shut down
[junit] 12/11/07 10:33:49 INFO ipc.Server: Stopping IPC Server Responder
[junit] 12/11/07 10:33:49 INFO datanode.DataNode: Waiting for threadgroup 
to exit, active threads is 1
[junit] 12/11/07 10:33:49 WARN datanode.DataNode: 
DatanodeRegistration(127.0.0.1:38013, 
storageID=DS-1579807559-67.195.138.20-38013-1352283953748, infoPort=40168, 
ipcPort=34423):DataXceiveServer:java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] 
[junit] 12/11/07 10:33:49 INFO datanode.DataNode: Exiting DataXceiveServer
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Scheduling block 
blk_-4020104927767701976_1133 file 
build/test/data/dfs/data/data2/current/blk_-4020104927767701976 for deletion
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Scheduling block 
blk_221494451919448074_1134 file 
build/test/data/dfs/data/data2/current/blk_221494451919448074 for deletion
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Scheduling block 
blk_4240634999001519329_1127 file 
build/test/data/dfs/data/data2/current/blk_4240634999001519329 for deletion
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Deleted block 
blk_-4020104927767701976_1133 at file 
build/test/data/dfs/data/data2/current/blk_-4020104927767701976
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Scheduling block 
blk_8351800239428358225_1134 file 
build/test/data/dfs/data/data1/current/blk_8351800239428358225 for deletion
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Deleted block 
blk_8351800239428358225_1134 at file 
build/test/data/dfs/data/data1/current/blk_8351800239428358225
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Deleted block 
blk_221494451919448074_1134 at file 
build/test/data/dfs/data/data2/current/blk_221494451919448074
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Deleted block 
blk_4240634999001519329_1127 at file 
build/test/data/dfs/data/data2/current/blk_4240634999001519329
[junit] 12/11/07 10:33:50 INFO datanode.DataBlockScanner: Exiting 
DataBlockScanner thread.
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: Waiting for threadgroup 
to exit, active threads is 0
[junit] 12/11/07 10:33:50 INFO datanode.DataNode: 
DatanodeRegistration(127.0.0.1:38013, 
stora